Text Gen REST API

All OctoAI text generation models are accessible via REST API. Learn how to use them with easy-to-follow code examples.

Our text generation endpoint follows the “Chat Completions” standard popularized by OpenAI. Below you can see a simple cURL example and a JSON response for the endpoint, along with explanations of all parameters.

Input Sample

cURL
curl -X POST "https://text.octoai.run/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $OCTOAI_TOKEN" \
    --data-raw '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant. Keep your responses limited to one short paragraph if possible."
            },
            {
                "role": "user",
                "content": "Hello world"
            }
        ],
        "model": "meta-llama-3.1-8b-instruct",
        "max_tokens": 128,
        "presence_penalty": 0,
        "temperature": 0.1,
        "top_p": 0.9
    }'

Input Parameters

  • model (string): The model to be used for chat completion. The following model arguments are currently supported; for more information about these models, see the model descriptions in our documentation.
    • "meta-llama-3.1-8b-instruct"
    • "meta-llama-3.1-70b-instruct"
    • "mixtral-8x7b-instruct"
  • max_tokens (integer, optional): The maximum number of tokens to generate for the chat completion.
  • messages (list of objects): A list of chat messages, where each message is an object with two properties: role and content. Supported roles are “system”, “assistant”, and “user”.
  • temperature (float, optional): A value between 0.0 and 2.0 that controls the randomness of the model’s output; lower values make the output more deterministic.
  • top_p (float, optional): A value between 0.0 and 1.0 that controls nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability exceeds top_p.
  • stop (list of strings, optional): A list of strings; the model stops generating text as soon as it produces any of them.
  • frequency_penalty (float, optional): A value between 0.0 and 1.0 that penalizes tokens in proportion to how often they have already appeared, discouraging repetitive responses.
  • presence_penalty (float, optional): A value between 0.0 and 1.0 that penalizes tokens that have already appeared at all, encouraging the model to introduce new words and topics.
  • stream (boolean, optional): Indicates whether the response should be streamed back incrementally (see the streaming samples below).
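
For reference, here is a minimal Python sketch of the same request as the cURL example above. It assumes the third-party requests library and reads the token from the OCTOAI_TOKEN environment variable; the library choice is an assumption for illustration, not a requirement of the API.

Python
import os

import requests  # assumed third-party HTTP library, not required by the API

url = "https://text.octoai.run/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['OCTOAI_TOKEN']}",
}
payload = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant. Keep your responses "
            "limited to one short paragraph if possible.",
        },
        {"role": "user", "content": "Hello world"},
    ],
    "model": "meta-llama-3.1-8b-instruct",
    "max_tokens": 128,
    "presence_penalty": 0,
    "temperature": 0.1,
    "top_p": 0.9,
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])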

Non-Streaming Response Sample:

JSON
{
  "id": "cmpl-8ea213aece0747aca6d0608b02b57196",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Founded in 1921, Seattle is the mother city of Pacific Northwest. Seattle is the densely populated second-largest city in the state of Washington along with Portland. A small city at heart, Seattle has transformed itself from a small manufacturing town to the contemporary Pacific Northwest hub to its east. The city's charm and frequent unpredictability draw tourists and residents alike. Here are my favorite things about Seattle.\n* Seattle has a low crime rate and high quality of life.\n* Seattle has rich history which included the building of the first Pacific Northwest harbor and the development of the Puget Sound irrigation system. Seattle is also home to legendary firm Boeing.\n",
        "function_call": null
      },
      "delta": null,
      "finish_reason": "length"
    }
  ],
  "created": 5399,
  "model": "meta-llama-3.1-8b-instruct",
  "object": "chat.completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 150,
    "prompt_tokens": 571,
    "total_tokens": 721
  }
}
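
The generated text lives at choices[0].message.content. As a minimal sketch, here is how you might read the fields above from a parsed response body in Python; the body below is a trimmed version of the sample, with the long content shortened for brevity.

Python
import json

# Trimmed version of the non-streaming sample above; parse response.text
# from your HTTP client the same way.
body = """
{
  "id": "cmpl-8ea213aece0747aca6d0608b02b57196",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!", "function_call": null},
      "delta": null,
      "finish_reason": "length"
    }
  ],
  "model": "meta-llama-3.1-8b-instruct",
  "object": "chat.completion",
  "usage": {"completion_tokens": 150, "prompt_tokens": 571, "total_tokens": 721}
}
"""

data = json.loads(body)
choice = data["choices"][0]
print(choice["message"]["content"])  # the generated text
print(choice["finish_reason"])       # "length" means max_tokens was reached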

Streaming Response Sample:

Once parsed as JSON, the chunks of the streaming response look similar to the following:

JSON
// Starting chunk; note that content is null and finish_reason is also null.
{
  "id": "cmpl-994f6307a891454cb0f57b7027f5f113",
  "created": 1700527881,
  "model": "meta-llama-3.1-8b-instruct",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": null
      },
      "finish_reason": null
    }
  ]
}
// Ending chunk; note the finish_reason "length" instead of null.
// This means we reached the max tokens allowed in this request.
// The "object" field is "chat.completion.chunk" for the body of responses.
{
  "id": "cmpl-994f6307a891454cb0f57b7027f5f113",
  "object": "chat.completion.chunk",
  "created": 1700527881,
  "model": "meta-llama-3.1-8b-instruct",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": "",
        "function_call": null
      },
      "finish_reason": "length"
    }
  ]
}

Without parsing, each chunk in the raw text stream is prefixed with data:. Below is an example. Please note that the final chunk is simply data: [DONE], plain text that will break JSON parsing if not handled.

data: {"id": "cmpl-994f6307a891454cb0f57b7027f5f113", "created": 1700527881, "model": "meta-llama-3.1-8b-instruct", "choices": [{"index": 0, "delta": {"role": "assistant", "content": null}, "finish_reason": null}]}
data: {"id": "cmpl-994f6307a891454cb0f57b7027f5f113", "object": "chat.completion.chunk", "created": 1700527881, "model": "meta-llama-3.1-8b-instruct", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "", "function_call": null}, "finish_reason": null}]}
data: {"id": "cmpl-994f6307a891454cb0f57b7027f5f113", "object": "chat.completion.chunk", "created": 1700527881, "model": "meta-llama-3.1-8b-instruct", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "Hello", "function_call": null}, "finish_reason": null}]}
data: {"id": "cmpl-994f6307a891454cb0f57b7027f5f113", "object": "chat.completion.chunk", "created": 1700527881, "model": "meta-llama-3.1-8b-instruct", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "!", "function_call": null}, "finish_reason": null}]}
data: {"id": "cmpl-994f6307a891454cb0f57b7027f5f113", "object": "chat.completion.chunk", "created": 1700527881, "model": "meta-llama-3.1-8b-instruct", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "", "function_call": null}, "finish_reason": null}]}
data: {"id": "cmpl-994f6307a891454cb0f57b7027f5f113", "object": "chat.completion.chunk", "created": 1700527881, "model": "meta-llama-3.1-8b-instruct", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "", "function_call": null}, "finish_reason": "stop"}]}
data: [DONE]
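
Putting these pieces together, here is a minimal Python sketch of consuming the raw stream. It assumes the third-party requests library (an illustrative choice, not a requirement), sets "stream": true in the request body, strips the data: prefix from each line, and skips the final [DONE] sentinel before JSON parsing:

Python
import json
import os

import requests  # assumed third-party HTTP library

url = "https://text.octoai.run/v1/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['OCTOAI_TOKEN']}"}
payload = {
    "model": "meta-llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello world"}],
    "max_tokens": 128,
    "stream": True,  # request a streamed response
}

with requests.post(url, headers=headers, json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        text = line.decode("utf-8")
        if not text.startswith("data: "):
            continue
        data = text[len("data: "):]
        if data == "[DONE]":
            break  # final sentinel; plain text, not valid JSON
        chunk = json.loads(data)
        content = chunk["choices"][0]["delta"].get("content")
        if content:
            print(content, end="", flush=True)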

Response Parameters

  • id (string): A unique identifier for the chat completion.
  • choices (list of objects): A list of chat completion choices, each represented as an object. Each object in the list contains the following fields:
    • index (integer): The position of the choice in the list of generated completions.
    • message (object): An object representing the content of the chat completion, which includes:
      • role (string): The role associated with the message, typically “assistant” for the generated response.
      • content (string): The actual text content of the chat completion.
      • function_call (object or null): An optional field that may contain information about a function call made within the message. It is usually null in standard responses.
    • delta (object or null): null in non-streaming responses; in streaming chunks, this object carries the incremental message content in place of message.
    • finish_reason (string): The reason why generation stopped, such as reaching the maximum length ("length") or a natural end ("stop").
  • created (integer): The Unix timestamp (in seconds) of when the chat completion was created.
  • model (string): The model used for the chat completion.
  • object (string): The object type: chat.completion for non-streaming responses and chat.completion.chunk for streaming chunks.
  • system_fingerprint (object or null): An optional field that may contain system-specific metadata.
  • usage (object): Usage statistics for the completion request, detailing token counts for the prompt, the completion, and their total.
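
For example, continuing the parsing sketch shown after the non-streaming response sample, the usage block can be read directly:

Python
# `data` is a parsed non-streaming response, as in the earlier sketch.
usage = data["usage"]
print(f"prompt tokens:     {usage['prompt_tokens']}")
print(f"completion tokens: {usage['completion_tokens']}")
print(f"total tokens:      {usage['total_tokens']}")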