How to install Dify, Ollama, and Open WebUI in the same Docker Compose setup

You may encounter the following error when integrating Ollama with Dify. It typically appears when Dify and Ollama are installed in separate Docker Compose setups, because 127.0.0.1 inside a Dify container refers to the container itself, not to the host where Ollama is listening.

HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/chat (Caused by NewConnectionError('...: Failed to establish a new connection: [Errno 111] Connection refused'))

To avoid this error, run Ollama and Dify in the same Docker Compose project, so Dify can reach Ollama by its service name:

http://ollama:11434

Install Dify

Clone Dify

cd ~
git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env
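The defaults in .env work for a local install. If port 80 is already taken on your host, look for the Nginx port variables (names as found in recent Dify releases; check your copy of .env for the exact keys):

grep EXPOSE_NGINX .env
# typical output:
# EXPOSE_NGINX_PORT=80
# EXPOSE_NGINX_SSL_PORT=443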

Start the containers:

docker compose up -d

Wait a moment; you should see output like this:

 ✔ Network docker_default             Created  0.2s
 ✔ Network docker_ssrf_proxy_network  Created  0.2s
 ✔ Container docker-redis-1           Started  6.3s
 ✔ Container docker-sandbox-1         Started  6.3s
 ✔ Container docker-ssrf_proxy-1      Started  6.4s
 ✔ Container docker-weaviate-1        Started  6.3s
 ✔ Container docker-web-1             Started  6.3s
 ✔ Container docker-db-1              Started  6.2s
 ✔ Container docker-worker-1          Started  2.7s
 ✔ Container docker-api-1             Started  2.7s
 ✔ Container docker-nginx-1           Started  3.1s

Access the Dify web interface at http://localhost.
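Before opening a browser, you can confirm that Nginx is serving the app (assuming curl is installed on the host; the status line may be a redirect until the initial setup is complete):

curl -sI http://localhost | head -n 1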

After creating an admin account and logging in, you should see the Dify dashboard.

Shut down Dify:

docker compose down

Install Ollama and Open WebUI in the Dify Docker Compose

Open “~/dify/docker/docker-compose.yaml” and add these lines to the services section.

services:
...
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    environment:
      - 'OLLAMA_HOST=0.0.0.0:11434'
    # GPU support
    deploy:
      resources:
        reservations:
          devices:
            - driver: ${OLLAMA_GPU_DRIVER-nvidia}
              count: ${OLLAMA_GPU_COUNT-1}
              capabilities:
                - gpu
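    # The deploy block above assumes the NVIDIA Container Toolkit is
    # installed on the host. On a CPU-only machine, remove the whole
    # deploy section; Ollama falls back to CPU inference.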
  open-webui:
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - ${OPEN_WEBUI_PORT-3000}:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_AUTH=False'
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

Add these lines to the volumes section, so Compose creates the named volumes that both services mount.

volumes:
...
  ollama: {}
  open-webui: {}
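Before restarting, it is worth validating the edited file; docker compose config parses and merges the YAML, so an indentation mistake shows up here instead of at startup:

docker compose config --quiet && echo "docker-compose.yaml is valid"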

Run again:

docker compose up -d

Wait a moment; you should see output like this:

[+] Running 1/1
 ✔ ollama Pulled                               5.1s
[+] Running 15/15
 ✔ Network docker_ssrf_proxy_network  Created  0.2s
 ✔ Network docker_default             Created  0.2s
 ✔ Volume "docker_open-webui"         Created  0.0s
 ✔ Volume "docker_ollama"             Created  0.0s
 ✔ Container docker-ssrf_proxy-1      Started  2.6s
 ✔ Container docker-sandbox-1         Started  1.5s
 ✔ Container docker-weaviate-1        Started  2.6s
 ✔ Container docker-redis-1           Started  2.6s
 ✔ Container ollama                   Started  1.9s
 ✔ Container docker-db-1              Started  1.7s
 ✔ Container docker-web-1             Started  2.6s
 ✔ Container open-webui               Started  2.8s
 ✔ Container docker-api-1             Started  3.3s
 ✔ Container docker-worker-1          Started  3.6s
 ✔ Container docker-nginx-1           Started  3.7s
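Once everything is up, you can confirm that Dify's containers reach Ollama over the shared network (assuming curl is present in the api image; any other container on the network works too):

docker compose exec api curl -s http://ollama:11434/api/version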

Access Open WebUI at http://localhost:3000 and download a model (e.g. llava).
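If you prefer the terminal to the Open WebUI model manager, the same model can be pulled directly inside the Ollama container:

docker compose exec ollama ollama pull llava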

Integrate Ollama as a model provider in Dify

In Dify, open Settings > Model Providers > Ollama and add a model.
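The exact form fields vary slightly between Dify versions, but the essential values look like this (llava is used here only because it was pulled above):

Model Name: llava
Base URL:   http://ollama:11434
Model Type: Chat

Because Ollama runs in the same Compose project, the service name ollama resolves over the internal network; do not enter 127.0.0.1 or localhost here.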