Text Gen Solution

Fast, cost-optimized LLM endpoints

Quickly evaluate and scale the latest models through OctoAI's single unified API. Our deep expertise in model compilation, model curation, and ML systems means you get low-latency, affordable endpoints that can handle any production workload.

Sign Up Free
Ask About Enterprise
An LLM summarization and question and answer chatbot powered by OctoAI
PUBLIC BETA

Create AI Agents with function calling


Connect LLMs to external tools

Enable effective tool usage with API calls


Create AI agents

Turn LLMs into AI agents that take action in your app


Add context for improved results

Let users interact with your app in natural language and get better outputs


Easily use real-time data

See how to get started with function calling on real-time data in our quick tutorial

import json

# Process any tool calls made by the model
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    for tool_call in tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)
        # Look up the matching Python function by name (assumes it is defined
        # in the current scope) and call it to get the response
        function_response = locals()[function_name](**function_args)
        # Add the function response to the messages list so the model can use it
        messages.append(
            {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": function_response,
            }
        )
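The loop above handles tool calls returned by a request along the following lines. This is a minimal sketch assuming the OpenAI Python SDK pointed at OctoAI's OpenAI-compatible endpoint; the base URL, model name, and the get_flight_status tool are illustrative assumptions, not taken from the tutorial.

from openai import OpenAI

# Hypothetical client setup; the base URL below is the commonly documented
# OctoAI OpenAI-compatible endpoint and may differ for your account.
client = OpenAI(
    base_url="https://text.octoai.run/v1",
    api_key="<OCTOAI_API_TOKEN>",
)

# Illustrative tool definition; get_flight_status is a placeholder function
# you would implement yourself (for example, by calling a real-time flight API).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_flight_status",
            "description": "Look up the live status of a flight by its flight number",
            "parameters": {
                "type": "object",
                "properties": {
                    "flight_number": {"type": "string", "description": "e.g. UA123"}
                },
                "required": ["flight_number"],
            },
        },
    }
]

messages = [{"role": "user", "content": "Is flight UA123 on time?"}]

# Model name is an example; pick any function-calling-capable model on OctoAI.
response = client.chat.completions.create(
    model="meta-llama-3.1-70b-instruct",
    messages=messages,
    tools=tools,
)

Once the loop has appended each tool result to messages, calling client.chat.completions.create again with the same messages and tools lets the model compose its final, data-grounded answer.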
TESTIMONIALS

Trusted by GenAI Innovators


“Working with the OctoAI team, we were able to quickly evaluate the new model, validate its performance through our proof of concept phase, and move the model to production. Mixtral on OctoAI serves a majority of the inferences and end player experiences on AI Dungeon today.”


Nick Walton

CEO & Co-Founder, Latitude


“The LLM landscape is changing almost every day, and we need the flexibility to quickly select and test the latest options. OctoAI made it easy for us to evaluate a number of fine tuned model variants for our needs, identify the best one, and move it to production for our application.”


Matt Shumer

CEO & Co-Founder, Otherside AI

Run OSS and fine-tuned models

Build on your choice of OSS LLMs, or bring your own model, on our blazing-fast API endpoints. Scale seamlessly and reliably without sacrificing performance.


Migrate with Ease

OpenAI SDK users can move to OctoAI's compatible API with minimal effort
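As a rough sketch of what that migration typically looks like with the OpenAI Python SDK; the endpoint URL and model name here are assumptions, so check the docs for your account:

from openai import OpenAI

# Existing OpenAI SDK code keeps working; only the base URL, API key,
# and model name change. The URL below is the commonly documented
# OctoAI text endpoint and is an assumption here.
client = OpenAI(
    base_url="https://text.octoai.run/v1",
    api_key="<OCTOAI_API_TOKEN>",
)

response = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",  # example model name
    messages=[{"role": "user", "content": "Summarize OctoAI in one sentence."}],
)
print(response.choices[0].message.content)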

Stay up to date with new models and features


Product & Customer Updates

Introducing the Llama 3.1 Herd on OctoAI

Jul 23, 2024
4 minutes

HyperWrite: Elevating User Experience and Business Performance with OctoAI's Cutting-Edge AI Platform

Jun 26, 2024
3 minutes

Streamline Jira ticket creation with OctoAI’s structured outputs

Jun 19, 2024
8 minutes

A Framework for Selecting the Right LLM

Jun 11, 2024
4 minutes
Visit the blog

Latest Models

See all models

Demos & Webinars

View all demos & webinars
Fast & Flexible

JSON mode for reliable structured output

JSON mode is built into the leading models on the OctoAI Systems Stack, so it works reliably without disruptions or quality issues. OctoAI has pushed further and optimized JSON mode for industry-leading latency.

See how
Chart: JSON mode output latency in milliseconds, with OctoAI at 309 ms, Fireworks AI at 310 ms, Anyscale at 1,580 ms, and Together AI at 1,640 ms
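A minimal sketch of enabling JSON mode through the OpenAI-compatible API; the response_format parameter follows the OpenAI convention, and the endpoint and model name are assumptions:

import json
from openai import OpenAI

client = OpenAI(
    base_url="https://text.octoai.run/v1",  # assumed OctoAI endpoint
    api_key="<OCTOAI_API_TOKEN>",
)

response = client.chat.completions.create(
    model="mixtral-8x7b-instruct",  # example model name
    # Ask the model to emit valid JSON only
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract the fields name and city as JSON."},
        {"role": "user", "content": "Hi, I'm Ada from London."},
    ],
)
# The message content is a JSON string that parses cleanly
print(json.loads(response.choices[0].message.content))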

Build using our high-quality, cost-effective Mixtral 8x7B & 8x22B models

Our accelerated Mixtral delivers quality competitive with GPT-3.5, but with open-source flexibility, at a price per token 4x lower than GPT-3.5. Migrating is easy with one unified OpenAI-compatible API. We also support community fine-tunes, including the latest from Nous Research.

See how
Image: Mixtral Instruct on OctoAI, an AI-generated neon world with turntables, a disco ball, and beautiful mountains in the landscape
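Because the API is unified, switching between the base model and a community fine-tune is a one-line change. A short sketch, assuming the OpenAI Python SDK and example model identifiers that may differ on your account:

from openai import OpenAI

client = OpenAI(
    base_url="https://text.octoai.run/v1",  # assumed OctoAI endpoint
    api_key="<OCTOAI_API_TOKEN>",
)

# Both identifiers below are examples: the Mixtral base model and a
# Nous Research community fine-tune.
for model in ["mixtral-8x7b-instruct", "nous-hermes-2-mixtral-8x7b-dpo"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Write a haiku about octopuses."}],
    )
    print(model, "->", response.choices[0].message.content)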

Text embedding for RAG

Use the GTE Large embedding endpoint to power retrieval-augmented generation (RAG) or semantic search in your apps. With a score of 63.13% on the MTEB leaderboard and an OpenAI-compatible API, migrating from OpenAI requires minimal code updates. Learn how.
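A sketch of calling the embedding endpoint through the OpenAI-compatible SDK; the model identifier thenlper/gte-large and the endpoint URL are assumptions:

from openai import OpenAI

client = OpenAI(
    base_url="https://text.octoai.run/v1",  # assumed OctoAI endpoint
    api_key="<OCTOAI_API_TOKEN>",
)

# Embed documents once at indexing time and queries at search time,
# then rank documents by cosine similarity for RAG or semantic search.
result = client.embeddings.create(
    model="thenlper/gte-large",  # assumed identifier for GTE Large
    input=["OctoAI serves fast, cost-optimized LLM endpoints."],
)
vector = result.data[0].embedding
print(len(vector))  # dimensionality of the embedding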

MODEL COCKTAILS

Build using multiple models for your use case

Using OctoAI, you can link several generative models together into a highly performant pipeline. Build new experiences for your industry using language, image, audio, or your own custom models. Learn how our customer Capitol AI worked with us to achieve cost savings across the multiple models they run in production.
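As an illustration of the idea, here is a minimal sketch that chains two text models through the same OpenAI-compatible API: a small model drafts an outline and a larger model expands it. The model names and endpoint are assumptions; an image or audio model could sit at either stage instead.

from openai import OpenAI

client = OpenAI(
    base_url="https://text.octoai.run/v1",  # assumed OctoAI endpoint
    api_key="<OCTOAI_API_TOKEN>",
)

def complete(model, prompt):
    # Small helper so each pipeline stage is a single call
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Stage 1: a small, inexpensive model drafts an outline (model name is an example)
outline = complete("meta-llama-3.1-8b-instruct",
                   "Outline a product page for a travel app in 5 bullet points.")

# Stage 2: a larger model turns the outline into polished copy
copy = complete("meta-llama-3.1-70b-instruct",
                f"Expand this outline into friendly marketing copy:\n{outline}")

print(copy)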

Try the Demo App