# LangChain Integration

`EngramVectorStore` implements the LangChain `VectorStore` interface, so you can drop Engram into any LangChain pipeline.
## Install

```bash
pip install langchain-core langchain-openai engram-subnet
```
## Basic usage

```python
from langchain_openai import OpenAIEmbeddings
from engram.sdk.langchain import EngramVectorStore

embeddings = OpenAIEmbeddings()
store = EngramVectorStore(
    miner_url="http://127.0.0.1:8091",
    embeddings=embeddings,  # omit to use miner's built-in embedder
)

# Store documents
store.add_texts(
    ["BERT uses bidirectional transformers.", "GPT generates text autoregressively."],
    metadatas=[{"source": "paper"}, {"source": "paper"}],
)

# Similarity search
docs = store.similarity_search("how does attention work?", k=5)
for doc in docs:
    print(doc.page_content, doc.metadata)

# With scores
docs_and_scores = store.similarity_search_with_score("transformers", k=3)
for doc, score in docs_and_scores:
    print(f"{score:.4f} — {doc.page_content[:60]}")
```
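Similarity search ranks stored documents by vector similarity between the query embedding and each document embedding. A minimal sketch of that ranking using cosine similarity (the exact metric and score convention are an assumption here; check your miner's configuration):

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec: list[float], docs: list[tuple[str, list[float]]], k: int):
    """Return the k (text, score) pairs most similar to the query vector."""
    scored = [(text, cosine(query_vec, vec)) for text, vec in docs]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```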
> **Tip:** If `embeddings` is omitted, the miner's built-in sentence-transformers model is used. Pass an embeddings object to use OpenAI, Cohere, HuggingFace, etc.

## As a retriever
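Any object exposing the LangChain `Embeddings` interface methods (`embed_documents` and `embed_query`) can be passed. A toy deterministic stub, handy for offline testing (the class and hashing scheme below are illustrative, not part of the SDK):

```python
import hashlib

class HashEmbeddings:
    """Toy deterministic embedder: hashes tokens into a fixed-size vector.
    Illustrative only -- real deployments should use a semantic model."""

    def __init__(self, dim: int = 32):
        self.dim = dim

    def embed_query(self, text: str) -> list[float]:
        # Bucket each token by its MD5 hash, then L2-normalize.
        vec = [0.0] * self.dim
        for token in text.lower().split():
            h = int(hashlib.md5(token.encode()).hexdigest(), 16)
            vec[h % self.dim] += 1.0
        norm = sum(v * v for v in vec) ** 0.5 or 1.0
        return [v / norm for v in vec]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [self.embed_query(t) for t in texts]
```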
```python
retriever = store.as_retriever(search_kwargs={"k": 5})

# Use in any chain
docs = retriever.invoke("what is Bittensor?")
```
## RetrievalQA chain

```python
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
retriever = store.as_retriever(search_kwargs={"k": 5})
chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
)
answer = chain.run("How does Bittensor distribute rewards?")
print(answer)
```
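The `chain_type="stuff"` strategy simply concatenates ("stuffs") all retrieved documents into a single prompt before calling the LLM; conceptually (the prompt wording below is illustrative, not LangChain's actual template):

```python
def stuff_prompt(question: str, docs: list[str]) -> str:
    """Combine all retrieved documents into one context block for the LLM."""
    context = "\n\n".join(docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

This works well as long as the combined documents fit in the model's context window; for larger result sets, LangChain also offers map-reduce and refine chain types.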
engram docs · v0.1