LangChain Integration

LangChain developers can leverage OctoAI's LLM and embedding endpoints to easily access efficient compute across a wide selection of models.

Introduction

LangChain provides a framework for easily building LLM-powered apps. Developers using LangChain can now use OctoAI's LLM and embedding endpoints to access efficient, fast, and reliable compute.

Using OctoAI’s LLMs and LangChain

To use OctoAI LLMs with LangChain, first install the following dependencies in your environment:

$ pip install langchain langchain-community openai

Next, obtain an OctoAI API token and make it available to your application through the OCTOAI_API_TOKEN environment variable.
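For example, you can set the variable from within Python before creating the model. The snippet below is a minimal sketch; getpass is used here only to avoid hard-coding the token in your script:

import getpass
import os

# Prompt for the OctoAI API token and expose it as an environment variable
if "OCTOAI_API_TOKEN" not in os.environ:
    os.environ["OCTOAI_API_TOKEN"] = getpass.getpass("OctoAI API token: ")

With the token in place, run the chat example below: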

# Set your API key via the OCTOAI_API_TOKEN environment variable
from langchain_community.chat_models.octoai import ChatOctoAI

llm = ChatOctoAI(
    model_name="meta-llama-3.1-70b-instruct",
    max_tokens=10000,
    temperature=0.4,
    model_kwargs={},
)

messages = [
    (
        "system",
        "You are a helpful assistant. Provide short answers to the user's questions.",
    ),
    ("human", "Who was Leonardo DaVinci?"),
]
ai_msg = llm.invoke(messages)

print(ai_msg.content)

It should produce output similar to the following:

Leonardo da Vinci was a renowned Italian polymath, born in 1452. He was an artist, inventor, engineer, anatomist, and scientist. He is famous for his iconic paintings, such as the Mona Lisa and The Last Supper, as well as his inventions and designs that were centuries ahead of his time.
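Because ChatOctoAI is a standard LangChain chat model, it composes with the rest of the framework. The following sketch (an illustrative example using the same dependencies installed above) chains a prompt template with the model using LangChain's pipe operator:

from langchain_community.chat_models.octoai import ChatOctoAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOctoAI(
    model_name="meta-llama-3.1-70b-instruct",
    max_tokens=512,
    temperature=0.4,
)

# Build a reusable prompt and pipe it into the model
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Provide short answers to the user's questions."),
        ("human", "Who was {person}?"),
    ]
)
chain = prompt | llm

print(chain.invoke({"person": "Ada Lovelace"}).content)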

Using OctoAI’s Embeddings and LangChain

Using OctoAI’s Embeddings endpoint is easy with LangChain.

First, install the following dependencies:

$ pip install langchain langchain-community openai transformers

With your OctoAI API token set as before, you can run the code example below:

from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings

embeddings = OctoAIEmbeddings()
text = "This is a test query."
query_result = embeddings.embed_query(text)
print(query_result)
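The same class also provides embed_documents for embedding batches of texts. The sketch below is illustrative only (the documents, query, and helper function are made up for this example); it ranks two documents against a query using a plain cosine similarity:

import math

from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings

embeddings = OctoAIEmbeddings()

documents = [
    "OctoAI serves a wide selection of LLMs.",
    "Leonardo da Vinci painted the Mona Lisa.",
]
query = "Which service hosts large language models?"

doc_vectors = embeddings.embed_documents(documents)
query_vector = embeddings.embed_query(query)


def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Print the documents ranked by similarity to the query
scored = sorted(
    zip(documents, doc_vectors),
    key=lambda pair: cosine_similarity(query_vector, pair[1]),
    reverse=True,
)
for doc, vec in scored:
    print(f"{cosine_similarity(query_vector, vec):.3f}  {doc}")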

Learn with our demo apps

Get started today by following along with one of our demo apps.