# Cerebras Distribution

The `llamastack/distribution-cerebras` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::cerebras`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

## Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `CEREBRAS_API_KEY`: Cerebras API key (default: empty)
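
For example, you might set both in your shell before launching the server (the key value below is a placeholder; use your own):

```bash
# Placeholder values shown; substitute your own Cerebras API key
export LLAMA_STACK_PORT=8321
export CEREBRAS_API_KEY=<your-api-key>
```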

## Models

The following models are available by default:

- `llama3.1-8b` (aliases: `meta-llama/Llama-3.1-8B-Instruct`)
- `llama-3.3-70b` (aliases: `meta-llama/Llama-3.3-70B-Instruct`)
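
Once a server is up (see the sections below), you can verify these registrations. This is a minimal sketch; the model-listing route is assumed from the standard Llama Stack API:

```bash
# List the models registered with the running distribution
# (GET /v1/models is assumed from the standard Llama Stack API)
curl -s http://localhost:8321/v1/models
```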

## Prerequisite: API Keys

Make sure you have access to a Cerebras API key. You can get one by visiting [cloud.cerebras.ai](https://cloud.cerebras.ai).

## Running Llama Stack with Cerebras

You can run the distribution via Conda (build the code yourself) or Docker (which uses a pre-built image).

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-cerebras \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env CEREBRAS_API_KEY=$CEREBRAS_API_KEY
```
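
If the container starts cleanly, a quick smoke test is to query the health route. This assumes the standard Llama Stack `/v1/health` endpoint:

```bash
# Should return a small JSON status payload if the server is healthy
curl -s http://localhost:$LLAMA_STACK_PORT/v1/health
```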

### Via Conda

```bash
llama stack build --template cerebras --image-type conda
llama stack run ./run.yaml \
  --port 8321 \
  --env CEREBRAS_API_KEY=$CEREBRAS_API_KEY
```
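
With the server running under either method, you can send a first inference request. The sketch below is a hypothetical example: the route and payload shape follow the standard Llama Stack inference API and may differ across versions, and `llama3.1-8b` is one of the default models listed above:

```bash
# Hypothetical chat-completion request; adjust the route and payload
# to match your installed Llama Stack version
curl -s http://localhost:8321/v1/inference/chat-completion \
  -H "Content-Type: application/json" \
  -d '{
        "model_id": "llama3.1-8b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```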