Migrate from OpenAI in 3 lines of code
If you've been using GPT-3.5 or GPT-4 with the OpenAI API, switching to OctoAI is easy!
OctoAI LLMs are available through an OpenAI-compatible API, so if you have been building or prototyping with OpenAI’s Python SDK, you can keep your code as-is and simply point it at OctoAI’s models.
In this example, we will show you how to change just three lines of code to make your Python application use OctoAI’s open-source models through OpenAI’s Python SDK.
What you will build
Migrate OpenAI’s Python SDK example script to use OctoAI’s LLM endpoints.
These are the three modifications necessary to achieve our goal:
- Redefine the `OPENAI_API_KEY` environment variable to hold your OctoAI API token.
- Redefine `OPENAI_BASE_URL` to point to `https://text.octoai.run/v1`.
- Change the model name to an open-source model, for example `llama-2-13b-chat`.
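Concretely, the three changes can be sketched as follows (the token placeholder, and the "before" values shown in comments, are illustrative):

```python
import os

# Before (OpenAI):
#   api_key  = os.environ["OPENAI_API_KEY"]
#   base_url = the SDK default ("https://api.openai.com/v1")
#   model    = "gpt-4"

# After (OctoAI): the only three values that change.
api_key = os.environ.get("OCTOAI_TOKEN", "<your-octoai-token>")  # 1. OctoAI token
base_url = "https://text.octoai.run/v1"                          # 2. OctoAI endpoint
model = "llama-2-13b-chat"                                       # 3. open-source model
```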
Requirements
We will be using Python and OpenAI’s Python SDK.
Instructions
- Set up a Python virtual environment. Read Creating Virtual Environments here.
- Install the pip requirements in your local Python virtual environment.
Environment setup
To run this example, there are a few simple steps to take:
- Get an OctoAI API token by following these instructions.
- Expose the token in a new `OCTOAI_TOKEN` environment variable.
- Switch the OpenAI token and base URL environment variables.
If you prefer, you can also directly paste your token into the client initialization.
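If you would rather make the switch from inside Python than in your shell, remapping the SDK's standard environment variables is one line each (a sketch; it assumes `OCTOAI_TOKEN` is already set as described above, and falls back to a placeholder otherwise):

```python
import os

# Remap the OpenAI SDK's standard env vars to OctoAI values, so existing
# code that reads OPENAI_API_KEY / OPENAI_BASE_URL keeps working unchanged.
os.environ["OPENAI_API_KEY"] = os.environ.get("OCTOAI_TOKEN", "<your-octoai-token>")
os.environ["OPENAI_BASE_URL"] = "https://text.octoai.run/v1"
```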
Example code
Once you’ve completed the steps above, the code below will call OctoAI LLMs:
Note that you need to supply one of OctoAI’s supported LLMs as an argument, as in the example above. For a complete list of our supported LLMs, check out our REST API page.
Example output
The code above returns a standard `ChatCompletion` object from the OpenAI Python SDK; the generated text is in `choices[0].message.content`.