We recommend this database when you have a larger use case. You have to deploy Weaviate separately and connect it to our stack so that the stack can use the running Weaviate instance.
We recommend running Weaviate without any vectorizer module, so that the embedding component is used to create embeddings from your documents.
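For example, once you have deployed Weaviate separately (for instance with Docker), you can check that the running instance is reachable before connecting it to the stack. This is a minimal sketch that assumes the weaviate-client package is installed and the instance is listening at http://localhost:8080:

# Minimal sketch: verify the separately deployed Weaviate instance is reachable.
# Assumes the `weaviate-client` package and an instance at http://localhost:8080.
import weaviate

client = weaviate.Client("http://localhost:8080")
print(client.is_ready())  # True once the instance is up and accepting requests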
fetch_k: Number of documents to fetch and pass to the MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
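To make these two options concrete, here is an illustrative, self-contained sketch of the MMR selection idea (not the library's implementation): fetch_k candidates are retrieved first, and then k results are picked greedily, with lambda_mult trading off relevance against redundancy.

# Illustrative sketch of MMR selection (not the library's code).
# `fetch_k` controls how many candidates are considered (len(candidate_vecs)),
# `lambda_mult` balances relevance (1.0) against diversity (0.0).
import numpy as np

def mmr_select(query_vec, candidate_vecs, k=4, lambda_mult=0.5):
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    selected, remaining = [], list(range(len(candidate_vecs)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            relevance = cosine(query_vec, candidate_vecs[i])
            redundancy = max(
                (cosine(candidate_vecs[i], candidate_vecs[j]) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy

        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected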
Usage
A vectordb always needs an embedding function, and you connect these two components through a stack.
from langchain.docstore.document import Document as LangDocument
from genai_stack.vectordb.weaviate_db import Weaviate
from genai_stack.embedding.utils import get_default_embedding
from genai_stack.stack.stack import Stack

# Use the default embedding component to create embeddings for your documents
embedding = get_default_embedding()

# Connect to your running Weaviate instance
weaviatedb = Weaviate.from_kwargs(url="http://localhost:8080/", index_name="Testing", text_key="test")

# Connect the embedding and vectordb components through a stack
weaviate_stack = Stack(model=None, embedding=embedding, vectordb=weaviatedb)

# Add your documents
weaviate_stack.vectordb.add_documents(
    documents=[
        LangDocument(
            page_content="Some page content explaining something", metadata={"some_metadata": "some_metadata"}
        )
    ]
)

# Search for your documents
result = weaviate_stack.vectordb.search("page")
print(result)
You can also use different search methods and search options for more complicated use cases, as sketched below.
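For example, the MMR options described above could be supplied when configuring the component. The search_method and search_options names below follow the terminology used on this page, but the exact configuration keys are an assumption, so treat this as a sketch rather than verified API:

# Hedged sketch: configuring an MMR-based search instead of the default
# similarity search. `search_method` and `search_options` are assumed
# configuration keys; check the component's config model for the exact names.
weaviatedb = Weaviate.from_kwargs(
    url="http://localhost:8080/",
    index_name="Testing",
    text_key="test",
    search_method="max_marginal_relevance_search",  # assumed value
    search_options={"k": 4, "fetch_k": 20, "lambda_mult": 0.5},  # options described above
)

weaviate_stack = Stack(model=None, embedding=embedding, vectordb=weaviatedb)
result = weaviate_stack.vectordb.search("page")
print(result)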