Using JSON mode with Text Gen endpoints
Ensure Text Gen outputs fit into your desired JSON schema.
OctoAI's Large Language Models (LLMs) can generate outputs that not only adhere to JSON format but also conform to the schema you specify.
Supported models (Updated September 5, 2024 5PM PT)
- Llama 3.1 8B
- Llama 3.1 70B
- Hermes 2 Pro Llama 3 8B
- Mistral 7B
- Nous Hermes Mixtral 8x7B
- Mixtral 8x7B
- WizardLM 8x22B
Getting started
Set up your credentials:
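A minimal sketch, assuming you export your OctoAI API token as the OCTOAI_TOKEN environment variable (the name the OctoAI SDK reads by default; adjust if your setup differs):

```bash
# Make the token available to the curl and Python examples below.
export OCTOAI_TOKEN=<your-octoai-api-token>
```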
Curl example (Mistral-7B): Let's say that you want to ensure that your LLM responses format user feedback about cars into a usable JSON format. To do so, you provide the LLM with a response schema so that it knows it must return "color" and "maker" in a structured format; see the response_format field below:
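Here is a sketch of such a request. It assumes the OctoAI chat completions endpoint at https://text.octoai.run/v1/chat/completions and a response_format object that takes a json_object type plus a JSON schema; check the current API reference for the exact field names:

```bash
curl -X POST "https://text.octoai.run/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OCTOAI_TOKEN" \
  -d '{
    "model": "mistral-7b-instruct",
    "messages": [
      {"role": "system", "content": "Extract the color and maker of the car from the user feedback as JSON."},
      {"role": "user", "content": "I absolutely love my red Toyota!"}
    ],
    "response_format": {
      "type": "json_object",
      "schema": {
        "type": "object",
        "properties": {
          "color": {"type": "string"},
          "maker": {"type": "string"}
        },
        "required": ["color", "maker"]
      }
    }
  }'
```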
The LLM will respond in the exact schema specified:
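(Illustrative content for the request above; the exact values depend on the input.)

```json
{
  "color": "red",
  "maker": "Toyota"
}
```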
Pydantic and OctoAI’s Python SDK
Pydantic is a popular Python library for data validation and settings management using Python type annotations. By combining Pydantic with the OctoAI SDK, you can easily define the desired JSON schema for your LLM responses and ensure that the generated content adheres to that structure.
First, make sure you have the required packages installed:
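Assuming the SDK is published as octoai-sdk on PyPI:

```bash
pip install octoai-sdk pydantic
```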
Basic example
Let’s start with a basic example to demonstrate how Pydantic and the OctoAI SDK work together. In this example, we’ll define a simple Car model with color and maker attributes, and ask the LLM to generate a response that fits this schema.
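A minimal sketch of the request (the import paths and the TextModel constant are assumptions about the octoai-sdk layout; adjust them to your installed version):

```python
from pydantic import BaseModel

from octoai.client import Client
from octoai.chat import ChatCompletionResponseFormat, TextModel


class Car(BaseModel):
    color: str
    maker: str


client = Client()  # assumes OCTOAI_TOKEN is set in the environment

completion = client.chat.completions.create(
    model=TextModel.MISTRAL_7B_INSTRUCT,
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Tell me about a blue Ford."},
    ],
    # Constrain the output to JSON matching the Car schema.
    response_format=ChatCompletionResponseFormat(
        type="json_object",
        schema=Car.model_json_schema(),
    ),
)

print(completion.choices[0].message.content)
```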
The key points to note here are:
- We import the necessary classes from the OctoAI SDK: Client, TextModel, and ChatCompletionResponseFormat.
- We define a Car class inheriting from BaseModel, specifying the color and maker attributes with their expected types.
- When creating the chat completion, we set the response_format using ChatCompletionResponseFormat and include the JSON schema generated from our Car model using Car.model_json_schema().
The output will be a JSON object adhering to the specified schema:
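(Illustrative values.)

```json
{
  "color": "blue",
  "maker": "Ford"
}
```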
Array example
Next, let’s look at an example involving arrays. Suppose we want the LLM to generate a list of names based on a given prompt. We can define a Meeting model with a names attribute of type List[str].
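A sketch under the same assumptions as the basic example:

```python
from typing import List

from pydantic import BaseModel

from octoai.client import Client
from octoai.chat import ChatCompletionResponseFormat, TextModel


class Meeting(BaseModel):
    names: List[str]


client = Client()  # assumes OCTOAI_TOKEN is set in the environment

completion = client.chat.completions.create(
    model=TextModel.MISTRAL_7B_INSTRUCT,
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Alice, Bob, and Carol met for the standup. Who attended?"},
    ],
    # The generated schema describes an object with a "names" array of strings.
    response_format=ChatCompletionResponseFormat(
        type="json_object",
        schema=Meeting.model_json_schema(),
    ),
)

print(completion.choices[0].message.content)
```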
The LLM will generate a response containing an array of names:
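(Illustrative values.)

```json
{
  "names": ["Alice", "Bob", "Carol"]
}
```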
Nested example
Finally, let’s explore a more complex example involving nested models. In this case, we’ll define a Person model with name and age attributes, and a Result model containing a sorted list of Person objects.
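A sketch under the same assumptions; the Field descriptions are embedded in the generated JSON schema and help steer the model:

```python
from typing import List

from pydantic import BaseModel, Field

from octoai.client import Client
from octoai.chat import ChatCompletionResponseFormat, TextModel


class Person(BaseModel):
    name: str = Field(description="The name of the person")
    age: int = Field(description="The age of the person")


class Result(BaseModel):
    sorted_list: List[Person] = Field(description="The list of people, sorted by age")


client = Client()  # assumes OCTOAI_TOKEN is set in the environment

completion = client.chat.completions.create(
    model=TextModel.MISTRAL_7B_INSTRUCT,
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Sort these people by age: Dana is 31, Erin is 24, Frank is 45."},
    ],
    response_format=ChatCompletionResponseFormat(
        type="json_object",
        schema=Result.model_json_schema(),
    ),
)

print(completion.choices[0].message.content)
```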
In this example:
- We define a Person model with name and age attributes, along with descriptions using the Field function from Pydantic.
- We define a Result model containing a sorted_list attribute of type List[Person].
- When creating the chat completion, we set the response_format using ChatCompletionResponseFormat and include the JSON schema generated from our Result model.
The LLM will generate a response containing a sorted list of Person objects:
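(Illustrative values.)

```json
{
  "sorted_list": [
    {"name": "Erin", "age": 24},
    {"name": "Dana", "age": 31},
    {"name": "Frank", "age": 45}
  ]
}
```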
Instructor
Instructor makes it easy to reliably get structured data like JSON from Large Language Models (LLMs). Read more in the Instructor documentation.
Install
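Assuming the package is published as instructor on PyPI (the openai package supplies the OpenAI-compatible client that Instructor patches):

```bash
pip install instructor openai
```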
Example
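A sketch, assuming OctoAI's OpenAI-compatible base URL https://text.octoai.run/v1 and Instructor's patch API; verify both against the current docs:

```python
import os

import instructor
from openai import OpenAI
from pydantic import BaseModel

# Point an OpenAI-compatible client at OctoAI, then patch it so that
# chat.completions.create accepts a response_model parameter.
client = instructor.patch(
    OpenAI(
        base_url="https://text.octoai.run/v1",
        api_key=os.environ["OCTOAI_TOKEN"],
    )
)


class UserExtract(BaseModel):
    name: str
    age: int


user = client.chat.completions.create(
    model="mistral-7b-instruct",
    response_model=UserExtract,
    messages=[
        {"role": "user", "content": "Extract the user details: Jason is 25 years old."},
    ],
)

# The patched client returns a validated UserExtract instance.
print(user.model_dump_json(indent=2))
```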
Let’s break down the code step by step:
After importing the necessary modules and setting up the clients, we:
- Use the instructor.patch function to patch the ChatCompletion.create method of the OctoAI client. This allows us to use the response_model parameter directly with a Pydantic model.
- Define a Pydantic model called UserExtract that represents the desired structure of the extracted user information. In this case, it has two fields: name (a string) and age (an integer).
- Call the chat.completions.create method of the patched OctoAI client, specifying the model (mistral-7b-instruct), the response_model (our UserExtract model), and the user message that contains the information we want to extract.
- Print the extracted user information using the model_dump_json method, which serializes the Pydantic model to a JSON string with indentation for better readability.
The output will be a JSON object containing the extracted user information, adhering to the specified UserExtract schema:
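(Illustrative values for the example above.)

```json
{
  "name": "Jason",
  "age": 25
}
```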
By leveraging Instructor and the OctoAI SDK, you can easily define the desired output schema and ensure that the LLM generates structured data that fits your application’s requirements. This simplifies the process of integrating LLM-generated content into your software systems.