fetch_k: Number of documents to fetch before passing them to the MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
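These two parameters mirror LangChain's max_marginal_relevance_search. As a minimal sketch of the trade-off (db stands in for any LangChain vector store; the query string and values are placeholders):

# MMR retrieves fetch_k candidate documents, then selects k of them;
# lambda_mult=0.25 leans toward diverse results over raw similarity.
docs = db.max_marginal_relevance_search(
    "some query",
    k=4,
    fetch_k=20,
    lambda_mult=0.25,
)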
Usage
A vectordb needs an embedding function, and you connect the two components through a stack. You can also use different search_methods and search options for more complicated use cases; see the MMR sketch after the basic example below.
from langchain.docstore.document import Document as LangDocument
from genai_stack.vectordb.chromadb import ChromaDB
from genai_stack.vectordb.weaviate_db import Weaviate  # alternative vectordb backend
from genai_stack.embedding.utils import get_default_embedding
from genai_stack.stack.stack import Stack
embedding = get_default_embedding()
# Will use default persistent settings for a quick start
chromadb = ChromaDB.from_kwargs()
chroma_stack = Stack(model=None, embedding=embedding, vectordb=chromadb)
# Add your documents
chroma_stack.vectordb.add_documents(
    documents=[
        LangDocument(
            page_content="Some page content explaining something",
            metadata={"some_metadata": "some_metadata"},
        )
    ]
)
# Search for content in your vectordb
chroma_stack.vectordb.search("page")
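To try MMR instead of the default similarity search, the search behavior can be configured when the vectordb is created. This is a sketch under an assumption: that ChromaDB.from_kwargs accepts search_method and search_options keyword arguments carrying the parameters documented above; verify the names against your installed version.

# Assumed configuration keys (search_method, search_options);
# check your genai_stack version before relying on them.
mmr_chromadb = ChromaDB.from_kwargs(
    search_method="max_marginal_relevance_search",
    search_options={"k": 4, "fetch_k": 20, "lambda_mult": 0.25},
)
mmr_stack = Stack(model=None, embedding=embedding, vectordb=mmr_chromadb)
mmr_stack.vectordb.search("page")

With lambda_mult closer to 0, the returned documents are pushed toward diversity rather than simply being the k nearest neighbors.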