Llama Stack Playground
Note
The Llama Stack Playground is currently experimental and subject to change. We welcome feedback and contributions to help improve it.
The Llama Stack Playground is a simple interface that aims to:
Showcase capabilities and concepts of Llama Stack in an interactive environment
Demo end-to-end application code to help users get started building their own applications
Provide a UI to help users inspect and understand Llama Stack API providers and resources
Key Features
Playground
Interactive pages for users to play with and explore Llama Stack API capabilities.
Chatbot
Chat: Chat with Llama models.
This page is a simple chatbot that allows you to chat with Llama models. Under the hood, it uses the /inference/chat-completion streaming API to send messages to the model and receive responses.
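For reference, a roughly equivalent call can be made directly from Python. This is a minimal sketch assuming the llama-stack-client SDK and a server at http://localhost:8321; the model ID is a placeholder and must match a model registered with your distribution.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Stream a chat completion and print the tokens as they arrive.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "Hello, Llama!"}],
    stream=True,
)
for chunk in response:
    delta = chunk.event.delta
    # Depending on the client version, the delta is either a plain string
    # or an object with a .text field.
    print(getattr(delta, "text", delta), end="", flush=True)
```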
RAG: Upload documents to memory_banks and chat with a RAG agent
This page allows you to upload documents as a memory_bank and then chat with a RAG agent to query information about the uploaded documents. Under the hood, it uses Llama Stack’s /agents API to define and create a RAG agent and chat with it in a session.
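The snippet below is a rough outline of that flow using the llama-stack-client Python SDK: create an agent, open a session, and send a turn. The model ID and instructions are placeholders, and the way the RAG/memory tool and its memory_bank (or vector DB) IDs are attached to the agent differs between client versions, so treat this as a sketch rather than a drop-in implementation.

```python
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")

# Placeholder agent definition; attach the RAG/memory tool for your client
# version so the agent can query the documents you uploaded.
agent = Agent(
    client,
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
    instructions="Answer questions using the uploaded documents.",
)

session_id = agent.create_session("playground-rag-session")
turn = agent.create_turn(
    session_id=session_id,
    messages=[{"role": "user", "content": "What do the uploaded documents say about X?"}],
    stream=False,
)
print(turn.output_message.content)
```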
Evaluations
Evaluations (Scoring): Run evaluations on your AI application datasets.
This page demonstrates the flow of the evaluation API to run evaluations on your custom AI application datasets. You may upload your own evaluation datasets and run evaluations using the available scoring functions.
Under the hood, it uses Llama Stack’s /scoring API to run evaluations with the selected scoring functions.
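As a minimal sketch of that flow with the llama-stack-client Python SDK (the scoring function ID and row fields are placeholders and must match what is registered in your stack):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Rows shaped like an application dataset: model input, generated answer, expected answer.
rows = [
    {
        "input_query": "What is the capital of France?",
        "generated_answer": "Paris",
        "expected_answer": "Paris",
    }
]

# Score the rows with a registered scoring function (placeholder ID shown).
response = client.scoring.score(
    input_rows=rows,
    scoring_functions={"basic::equality": None},
)
print(response.results)
```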
Evaluations (Generation + Scoring): Use pre-registered evaluation tasks to evaluate a model or agent candidate
This page demonstrates the flow of the evaluation API for evaluating a model or agent candidate on pre-defined evaluation tasks. An evaluation task is a combination of a dataset and scoring functions.
Under the hood, it uses Llama Stack’s /eval API to run generations and scoring on the specified evaluation configs. In order to run this page, you may need to first register evaluation tasks and datasets as resources with the following commands.
$ llama-stack-client datasets register \
    --dataset-id "mmlu" \
    --provider-id "huggingface" \
    --url "https://huggingface.co/datasets/llamastack/evals" \
    --metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
    --schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'

$ llama-stack-client benchmarks register \
    --eval-task-id meta-reference-mmlu \
    --provider-id meta-reference \
    --dataset-id mmlu \
    --scoring-functions basic::regex_parser_multiple_choice_answer
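Once registered, you can check that the dataset and benchmark are visible as resources; a small sketch assuming the llama-stack-client Python SDK:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The newly registered dataset and benchmark should appear in these lists.
print([d.identifier for d in client.datasets.list()])
print([b.identifier for b in client.benchmarks.list()])
```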
Inspect
API Providers: Inspect Llama Stack API providers
This page allows you to inspect Llama Stack API providers and resources.
Under the hood, it uses Llama Stack’s /providers API to get information about the providers.
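The same information is available programmatically; a minimal sketch assuming the llama-stack-client Python SDK:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# List the providers configured for the running stack.
for provider in client.providers.list():
    print(provider.api, provider.provider_id, provider.provider_type)
```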
API Resources: Inspect Llama Stack API resources
This page allows you to inspect Llama Stack API resources (models, datasets, memory_banks, benchmarks, shields). Under the hood, it uses Llama Stack’s /<resources>/list API to get information about each resource. Please visit Core Concepts for more details about the resources.
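Likewise, each resource type can be listed from code; a short sketch assuming the llama-stack-client Python SDK (only two resource types shown):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Registered resources carry an identifier and the provider that serves them.
for model in client.models.list():
    print("model:", model.identifier, model.provider_id)
for shield in client.shields.list():
    print("shield:", shield.identifier, shield.provider_id)
```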
Starting the Llama Stack Playground
Llama CLI
To start the Llama Stack Playground, run the following commands:
Start up the Llama Stack API server
llama stack build --template together --image-type conda
llama stack run together
Start Streamlit UI
cd llama_stack/distribution/ui
pip install -r requirements.txt
streamlit run app.py
Docker
The Playground can also be started from a Docker image:
export LLAMA_STACK_URL=http://localhost:11434
docker run \
--pull always \
-p 8501:8501 \
-e LLAMA_STACK_ENDPOINT=$LLAMA_STACK_URL \
quay.io/jland/llama-stack-playground
Configurable Environment Variables
Environment Variables
| Environment Variable | Description | Default Value |
|---|---|---|
| LLAMA_STACK_ENDPOINT | The endpoint for the Llama Stack | http://localhost:8321 |
| FIREWORKS_API_KEY | API key for Fireworks provider | (empty string) |
| TOGETHER_API_KEY | API key for Together provider | (empty string) |
| SAMBANOVA_API_KEY | API key for SambaNova provider | (empty string) |
| OPENAI_API_KEY | API key for OpenAI provider | (empty string) |