OpenAI
How to configure and use it?
Pre-Requisite(s)
openai_api_key (required) - Set an OpenAI API key for running the OpenAI model.
model_name (optional) - Set which OpenAI model you want to use. Defaults to gpt-3.5-turbo-16k.
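Both fields can be passed together when instantiating the model. Below is a minimal sketch of the fields dictionary; the API key is a placeholder, and the instantiation call is shown commented out since it requires genai-stack to be installed:

```python
# The two fields accepted by the model, per the prerequisites above.
fields = {
    "openai_api_key": "sk-xxxx",        # required: replace with your real key
    "model_name": "gpt-3.5-turbo-16k",  # optional: this is the default
}

# With genai-stack installed, the dictionary is passed as:
# from genai_stack.model import OpenAIGpt35Model
# llm = OpenAIGpt35Model.from_kwargs(fields=fields)
```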
Running in a Colab/Kaggle/Python script
from genai_stack.model import OpenAIGpt35Model
llm = OpenAIGpt35Model.from_kwargs(fields={"openai_api_key": "sk-xxxx"}) # Update with your OpenAI Key
model_response = llm.predict("How long has AI been around?")
print(model_response["result"])
Import the model from genai-stack.
Instantiate the class with your openai_api_key.
Call the .predict() method and pass the query you want the model to answer.
Print the response. As the response is a dictionary, extract only the result.
The response returned by predict() includes result and source_documents.
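For illustration, the dictionary returned by predict() has the shape sketched below. The values here are placeholders, not real model output:

```python
# Placeholder showing the shape of a predict() response.
model_response = {
    "result": "AI as a research field dates back to the 1950s.",
    "source_documents": [],  # populated when retrieval sources are attached
}

# Access the answer and any source documents separately.
print(model_response["result"])
for doc in model_response["source_documents"]:
    print(doc)
```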
Running the model in a webserver
If you want to run the model in a webserver and interact with it with HTTP requests, the model provides a way to run it.
As a Python script
We use FastAPI + Uvicorn to run a model in a webserver.
Set the response class. The default response class is fastapi.responses.Response. It can be customized as done in the code snippet below.
from genai_stack.model import OpenAIGpt35Model
from fastapi.responses import JSONResponse
llm = OpenAIGpt35Model.from_kwargs(fields={"openai_api_key": "sk-xxxx"})
llm.run_http_server(response_class=JSONResponse)
A Uvicorn server should start as shown below.
INFO: Started server process [137717]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8082 (Press CTRL+C to quit)
Making HTTP requests to the URL http://localhost:8082/predict/:
import requests
response = requests.post("http://localhost:8082/predict/", data="How long has AI been around?")
print(response.text)
As a CLI
Create a model.json file with the following contents:
{
"model": {
"name": "gpt3.5",
"fields": {
"openai_api_key": "sk-***"
}
}
}
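If you prefer to generate the config programmatically, the same file can be written with Python's json module. This is a sketch; the placeholder key must be replaced with a real one before starting the server:

```python
import json

# Same structure as the model.json shown above.
config = {
    "model": {
        "name": "gpt3.5",
        "fields": {"openai_api_key": "sk-***"},  # placeholder key
    }
}

# Write the config file that `genai-stack start` will read.
with open("model.json", "w") as f:
    json.dump(config, f, indent=4)
```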
Run the CLI command below:
genai-stack start --config_file model.json
██████╗ ███████╗███╗ ██╗ █████╗ ██╗ ███████╗████████╗ █████╗ ██████╗██╗ ██╗
██╔════╝ ██╔════╝████╗ ██║██╔══██╗██║ ██╔════╝╚══██╔══╝██╔══██╗██╔════╝██║ ██╔╝
██║ ███╗█████╗ ██╔██╗ ██║███████║██║ ███████╗ ██║ ███████║██║ █████╔╝
██║ ██║██╔══╝ ██║╚██╗██║██╔══██║██║ ╚════██║ ██║ ██╔══██║██║ ██╔═██╗
╚██████╔╝███████╗██║ ╚████║██║ ██║██║ ███████║ ██║ ██║ ██║╚██████╗██║ ██╗
╚═════╝ ╚══════╝╚═╝ ╚═══╝╚═╝ ╚═╝╚═╝ ╚══════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝╚═╝ ╚═╝
INFO: Started server process [641734]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8082 (Press CTRL+C to quit)