
Takky blog

AI

11/29/2024 / Last updated: 12/09/2024 takky AI

Using ollama from the command line

Using Ollama from the command line lets you take advantage of LLMs from within your programs and shells. This environment assumes that Ollama is installed with Docker. Access Ollama via the CLI and check which models are installed. Models that are not installed will be downloaded automatically when you run the command, but you can also […]
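A minimal sketch of what this looks like, assuming Ollama runs in a Docker container named ollama (the container name and the phi3 model are assumptions, not from the post):

    # list the models that are already installed
    docker exec -it ollama ollama list

    # run a model; if it is not installed yet, Ollama downloads it automatically
    docker exec -it ollama ollama run phi3 "Hello, what can you do?"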

11/04/2024 / Last updated: 11/07/2024 takky AI

How to install Ollama and Open WebUI using docker compose

Install Ollama and Open WebUI. Stop and remove all containers that are already running. Run the Ollama Docker container with an NVIDIA GPU (--gpus=all). You might see these errors; if you see the above errors, remove all containers as below and run again. Open WebUI Docker: run the WebUI container, open this URL, add a model such as “phi3”, choose […]
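As a rough sketch of those steps, assuming the official ollama/ollama and Open WebUI images and typical container names, ports, and volume names (all assumptions here):

    # stop and remove containers that are already running
    # (this clears "container name already in use" errors)
    docker rm -f ollama open-webui

    # Ollama with NVIDIA GPU support
    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
      --name ollama ollama/ollama

    # Open WebUI; then open http://localhost:3000 and add a model such as "phi3"
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui \
      ghcr.io/open-webui/open-webui:main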

11/04/2024 / Last updated: 11/07/2024 takky AI

How to install Dify, Ollama, and Open WebUI in the same docker compose

You may encounter these errors when integrating Ollama with Dify. This can happen if you install Dify and Ollama in separate Docker Compose setups. To avoid this error, Ollama and Dify can run in the same Docker Compose project, so Dify can simply access Ollama with, Install Dify. Clone Dify. Confirm that the containers are running. Wait […]
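A minimal sketch of the idea, assuming the standard Dify repository layout and that an ollama service is added to the same Compose project so Dify can reach it by service name (the base URL http://ollama:11434 is an assumption):

    # install Dify: clone the repository and prepare its Compose stack
    git clone https://github.com/langgenius/dify.git
    cd dify/docker
    cp .env.example .env

    # add an ollama service to the Compose file here, then start everything
    docker compose up -d

    # confirm that the containers are running
    docker compose ps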

Recent Posts

  • Linux GPU CLI Monitoring
  • Using ollama from the command line
  • Changing the hostname on WSL2
  • Install Django Development Environment with VS Code and WSL2
  • How to install Ollama and Open WebUI using docker compose

Archive

  • December 2024
  • November 2024
  • October 2024

Categories

  • AI
  • docker
  • Linux
  • Python
  • WSL

Copyright © Takky blog All Rights Reserved.

Powered by WordPress with Lightning Theme & VK All in One Expansion Unit
