An open platform for operating large language models (LLMs) in production.
Fine-tune, serve, deploy, and monitor any LLMs with ease.
📖 Introduction
With OpenLLM, you can run inference with any open-source large-language models,
deploy to the cloud or on-premises, and build powerful AI apps.
OpenLLM offers:
- Built-in support for a wide range of open-source LLMs and model runtimes, including StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder and more.
- Serving with a single command, with queries via WebUI, CLI, our Python/JavaScript client, or any HTTP client.
- First-class integrations with LangChain, BentoML, and HuggingFace that allow you to easily create your own AI apps by composing LLMs with other models and services.
- Automatic generation of your LLM server Docker images, or deployment as serverless endpoints via BentoCloud.
- Fine-tuning of any LLM to suit your needs with LLM.tuning(). (Coming soon)
🏃 Getting Started
To use OpenLLM, you need to have Python 3.8 (or newer) and pip installed on your system. We highly recommend using a virtual environment to prevent package conflicts.
You can install OpenLLM using pip as follows:
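pip install openllm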
To verify that it's installed correctly, run:
$ openllm -h
Usage: openllm [OPTIONS] COMMAND [ARGS]...
██████╗ ██████╗ ███████╗███╗ ██╗██╗ ██╗ ███╗ ███╗
██╔═══██╗██╔══██╗██╔════╝████╗ ██║██║ ██║ ████╗ ████║
██║ ██║██████╔╝█████╗ ██╔██╗ ██║██║ ██║ ██╔████╔██║
██║ ██║██╔═══╝ ██╔══╝ ██║╚██╗██║██║ ██║ ██║╚██╔╝██║
╚██████╔╝██║ ███████╗██║ ╚████║███████╗███████╗██║ ╚═╝ ██║
╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝╚══════╝╚══════╝╚═╝ ╚═╝
An open platform for operating large language models in production.
Fine-tune, serve, deploy, and monitor any LLMs with ease.
Starting an LLM Server
To start an LLM server, use openllm start. For example, to start an OPT server, run the following:
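openllm start opt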
Following this, a Web UI will be accessible at http://localhost:3000 where you
can experiment with the endpoints and sample input prompts.
OpenLLM provides a built-in Python client, allowing you to interact with the model. In a different terminal window or a Jupyter notebook, create a client to start interacting with the model. For example, a minimal sketch (the query call below assumes the built-in HTTP client API that also appears later in this README):
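```python
import openllm

# Connect to the server started by `openllm start`
client = openllm.client.HTTPClient("http://localhost:3000")

# query() is assumed to be the client's prompt method; it sends a prompt
# to the server and returns the generated text
client.query('Explain to me the difference between "further" and "farther"')
```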
You can also use the openllm query
command to query the model from the
terminal:
export OPENLLM_ENDPOINT=http://localhost:3000
openllm query 'Explain to me the difference between "further" and "farther"'
Visit http://localhost:3000/docs.json
for OpenLLM’s API specification.
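For instance, a hypothetical request with a generic HTTP client (the /v1/generate path and payload shape below are assumptions; consult docs.json for the exact schema exposed by your server):

```python
import requests

# Assumed endpoint and payload; verify against http://localhost:3000/docs.json
response = requests.post(
    "http://localhost:3000/v1/generate",
    json={"prompt": "What is the weather in SF?"},
)
print(response.json())
```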
Users can also specify different variants of the model to be served by providing the --model-id argument, for example:
openllm start flan-t5 --model-id google/flan-t5-large
Use the openllm models
command to see the list of models and their variants
supported in OpenLLM.
🧩 Supported Models
The following models are currently supported in OpenLLM: StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more. By default, OpenLLM doesn't include the dependencies needed to run every model; the extra model-specific dependencies can be installed with pip.
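For example, to install the extra dependencies for Falcon (assuming the model-specific extras follow OpenLLM's openllm[model_name] pip-extras naming):

pip install "openllm[falcon]"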
Runtime Implementations (Experimental)
Different LLMs may have multiple runtime implementations. For instance, they might use PyTorch (pt), TensorFlow (tf), or Flax (flax).
If you wish to specify a particular runtime for a model, you can do so by setting the OPENLLM_{MODEL_NAME}_FRAMEWORK={runtime} environment variable before running openllm start.
For example, if you want to use the TensorFlow (tf) implementation for the flan-t5 model, you can use the following command:
OPENLLM_FLAN_T5_FRAMEWORK=tf openllm start flan-t5
Note: For GPU support on Flax, refer to Jax's installation guide to make sure that you have Jax support for the corresponding CUDA version.
Integrating a New Model
OpenLLM encourages contributions and welcomes users to incorporate their custom LLMs into the ecosystem. Check out the Adding a New Model Guide to see how you can do it yourself.
⚙️ Integrations
OpenLLM is not just a standalone product; it's a building block designed to integrate easily with other powerful tools. We currently offer integrations with BentoML, LangChain, and HuggingFace Agents.
BentoML
OpenLLM models can be integrated as a Runner in your BentoML service. These runners have a generate method that takes a string prompt and returns the corresponding output string. This allows you to plug and play any OpenLLM model with your existing ML workflow.
```python
import bentoml
import openllm
from bentoml.io import Text

model = "opt"

llm_config = openllm.AutoConfig.for_model(model)
llm_runner = openllm.Runner(model, llm_config=llm_config)

svc = bentoml.Service(name="llm-opt-service", runners=[llm_runner])

@svc.api(input=Text(), output=Text())
async def prompt(input_text: str) -> str:
    answer = await llm_runner.generate(input_text)
    return answer
```
HuggingFace Agents
OpenLLM seamlessly integrates with HuggingFace Agents.
Warning: The HuggingFace Agent integration is still at an experimental stage. It is recommended to install OpenLLM with pip install -r nightly-requirements.generated.txt to get the latest API updates for the HuggingFace agent.
```python
import transformers

# Point the agent at the URL that runs the OpenLLM server
agent = transformers.HfAgent("http://localhost:3000/hf/agent")

agent.run(
    "Is the following `text` positive or negative?",
    text="I don't like how this model generates inputs",
)
```
Note: Only starcoder is currently supported with the Agent integration. The example above was run with four T4s on an EC2 g4dn.12xlarge instance.
If you want to use the OpenLLM client to ask questions to the running agent, you can also do so:
```python
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")
client.ask_agent(
    task="Is the following `text` positive or negative?",
    text="What are you thinking about?",
)
```
LangChain (⏳ Coming Soon!)
In future LangChain releases, you'll be able to effortlessly invoke OpenLLM models, like so:
```python
from langchain.llms import OpenLLM

llm = OpenLLM.for_model(model_name='flan-t5')
llm("What is the difference between a duck and a goose?")
```
If you have an OpenLLM server deployed elsewhere, you can connect to it by specifying its URL:
```python
from langchain.llms import OpenLLM

llm = OpenLLM.for_model(server_url='http://localhost:8000', server_type='http')
llm("What is the difference between a duck and a goose?")
```
🚀 Deploying to Production
To deploy your LLMs into production:
- Building a Bento: With OpenLLM, you can easily build a Bento for a specific model, like dolly-v2, using the build command (see the example after this list). A Bento, in BentoML, is the unit of distribution. It packages your program's source code, models, files, artifacts, and dependencies.
- Containerize your Bento:
BentoML offers a comprehensive set of options for deploying and hosting
online ML services in production. To learn more, check out the
Deploying a Bento
guide.
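As a sketch, these two steps might look like the following (the Bento tag in the containerize step is a placeholder; use the tag that openllm build prints):

```bash
# Build a Bento for dolly-v2; the resulting Bento tag is printed on completion
openllm build dolly-v2

# Containerize the Bento (replace the placeholder with the printed tag)
bentoml containerize <name:version>
```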
🍇 Telemetry
OpenLLM collects usage data to enhance user experience and improve the product.
We only report OpenLLM's internal API calls and ensure maximum privacy by
excluding sensitive information. We will never collect user code, model data, or
stack traces. For usage tracking, check out the
code.
You can opt out of usage tracking by using the --do-not-track CLI option:
openllm [command] --do-not-track
Or by setting the environment variable OPENLLM_DO_NOT_TRACK=True:
export OPENLLM_DO_NOT_TRACK=True
👥 Community
Engage with like-minded individuals passionate about LLMs, AI, and more on our Discord!
OpenLLM is actively maintained by the BentoML team. Feel free to reach out and join us in our pursuit to make LLMs more accessible and easy to use 👉 Join our Slack community!
🎁 Contributing
We welcome contributions! If you're interested in enhancing OpenLLM's capabilities or have any questions, don't hesitate to reach out in our Discord channel.
Check out our Developer Guide if you wish to contribute to OpenLLM's codebase.