docker - How to pass a script/multiple commands to a service in GitLab CI/CD - Stack Overflow


I want to run Ollama as a service in a GitLab CI/CD job. For example, this works:

ollama:
  services:
    - name: ollama/ollama
      alias: ollama
      command: ["serve"]
  script: |
    curl -s -S ollama:11434

I get the default Ollama response: Ollama is running.

The problem is that the Ollama service hasn't pulled down any models. If I send a generate request, it fails:

curl -s -S ollama:11434/api/generate -X POST -d '{ "model": "llama3.2", "stream": false, "prompt": "Generate some text" }'
# {"error":"model 'llama3.2' not found"}

The solution to this (when running locally) is to first pull the model with ollama pull llama3.2. The problem is that the service's command value translates to Docker's CMD argument, which only accepts a single executable and its arguments, whereas I need to run two commands: ollama pull and then ollama serve.
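
To illustrate what I'm after: ideally the service itself would run both steps, for example by wrapping them in a shell so the single CMD can chain commands. Something like the following untested sketch, which assumes the service accepts an entrypoint override and that /bin/sh exists in the ollama/ollama image:

ollama:
  services:
    - name: ollama/ollama
      alias: ollama
      # Replace the image's default `ollama` entrypoint with a shell so CMD can chain steps
      entrypoint: ["/bin/sh", "-c"]
      # Start the server in the background, pull the model, then keep the server in the foreground
      command: ["ollama serve & sleep 5 && ollama pull llama3.2 && wait"]
  script: |
    curl -s -S ollama:11434

The wait keeps the shell, and therefore the service container, alive on the background server process.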

I've found a question asking the same thing, and the answer there was to not use a service at all and instead run Ollama as a background process inside the job script (ollama serve &). Apparently this works, but it seems like an imperfect option.
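
For reference, my understanding of that background-process approach is roughly the following (a sketch only; the entrypoint override and the sleep-based wait are my own assumptions):

ollama-inline:
  image:
    name: ollama/ollama
    # Clear the image's default `ollama` entrypoint so the job's shell runs normally
    entrypoint: [""]
  script:
    - ollama serve &          # start the server in the background
    - sleep 5                 # crude wait for it to come up
    - ollama pull llama3.2
    - ollama run llama3.2 "Generate some text"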

Does anyone know how I can get this to work with services?


asked Feb 1 at 18:18 by jeremywat

1 Answer


Okay, I've gotten it working by querying the pull endpoint of the Ollama API in my before_script:

variables:
  MODEL: llama3.2

ollama:
  services:
    - name: ollama/ollama
      alias: ollama
      command: ["serve"]
      pull_policy: if-not-present
  before_script:
    - curl ollama:11434/api/pull -sS -X POST -d "{\"model\":\"$MODEL\",\"stream\":false}"
  script: curl -sS ollama:11434/api/tags

# {"models":[{"name":"llama3.2:latest","model":"llama3.2:latest","modified_at":"2025-02-01T18:20:17.384118917Z","size":2019393189,"digest":"a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"3.2B","quantization_level":"Q4_K_M"}}]}

Ideally I'd want it to be entirely done in the services section, but this gets the job done nicely.
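
With the model pulled in the before_script, the generate request from the original question should then work in the script step, along these lines (same model variable and prompt as above):

script:
  - curl -sS ollama:11434/api/generate -X POST -d "{\"model\":\"$MODEL\",\"stream\":false,\"prompt\":\"Generate some text\"}"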
