Nexevo.ai
Framework integration

Build retrieval-augmented generation (RAG) with LangChain + Nexevo

Connect Nexevo to LangChain's ChatOpenAI to perform retrieval-augmented chat.

Python
# Use Nexevo as a drop-in replacement for OpenAI in LangChain.
# Both ChatOpenAI and embeddings work via the OpenAI-compat endpoint.

import os

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

llm = ChatOpenAI(
    model="deepseek-chat",
    api_key=os.environ["NEXEVO_API_KEY"],
    base_url="https://api.nexevo.ai/v1",
)

# Note: Nexevo currently routes chat completions only, so embeddings
# must come from another provider (e.g. OpenAI or Cohere).
embeddings = OpenAIEmbeddings(api_key=os.environ["OPENAI_API_KEY"])

vectorstore = FAISS.from_texts(
    texts=["Nexevo.ai routes to mainland Chinese LLMs.", "..."],
    embedding=embeddings,
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)

print(qa.invoke({"query": "What does Nexevo do?"}))
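Because Nexevo exposes an OpenAI-compatible endpoint, the ChatOpenAI client above ultimately sends a standard chat-completions JSON payload to https://api.nexevo.ai/v1. A minimal sketch of that request body (shape only, never sent over the wire; the model name is the one used in the example above):

```python
import json

# Sketch of the raw chat-completions payload an OpenAI-compatible
# endpoint like Nexevo's expects. Built and serialized locally only.
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "user", "content": "What does Nexevo do?"},
    ],
}

# This JSON string is what the LangChain client POSTs (plus an
# Authorization: Bearer <NEXEVO_API_KEY> header) under the hood.
body = json.dumps(payload)
print(body)
```

Seeing the payload makes it clear why a single `base_url` swap is all LangChain needs: the wire format is identical to OpenAI's, so only the host and API key change.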