OpenAI

How to configure and use it?

Prerequisite(s)

  • openai_api_key (required) - Set an OpenAI API key for running the OpenAI model.

  • model_name (optional) - Set which OpenAI model you want to use. Defaults to gpt-3.5-turbo-16k.
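
For example, to override the default model, model_name can be passed alongside the API key (an assumption here: model_name travels through the same fields dictionary used for openai_api_key in the snippet further below):

from genai_stack.model import OpenAIGpt35Model

# model_name is optional and overrides the default gpt-3.5-turbo-16k
# (assumption: it is accepted via the same fields dict as openai_api_key)
llm = OpenAIGpt35Model.from_kwargs(
    fields={"openai_api_key": "sk-xxxx", "model_name": "gpt-3.5-turbo"}
)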

Running in a Colab/Kaggle/Python script(s)

from genai_stack.model import OpenAIGpt35Model

llm = OpenAIGpt35Model.from_kwargs(fields={"openai_api_key": "sk-xxxx"})  # Update with your OpenAI key
model_response = llm.predict("How long has AI been around?")
print(model_response["result"])
  1. Import the model from genai-stack.

  2. Instantiate the class with your openai_api_key.

  3. Call the .predict() method and pass the query you want the model to answer.

  4. Print the response. As the response is a dictionary, extract the result key only.

    • The response from .predict() is a dictionary that includes result and source_documents (see the snippet after this list).
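
For instance, to inspect both keys (whether source_documents is populated can depend on how the stack is composed; treat that as an assumption here):

# "result" holds the generated answer text
print(model_response["result"])

# "source_documents" holds any supporting documents the stack returned
# (may be empty when no retriever is attached -- an assumption here)
print(model_response.get("source_documents"))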

Running the model in a webserver

If you want to run the model in a webserver and interact with it via HTTP requests, the model provides a way to do so.

  1. As a Python script

We use FastAPI + Uvicorn to run a model in a webserver.

Set the response class. The default response class is fastapi.responses.Response; it can be customized as done in the code snippet below.
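
The server helper shipped with genai_stack is not reproduced on this page, so the following is a minimal sketch of the same setup written directly against FastAPI and Uvicorn. The /predict/ path and port 8082 match the URL used below; default_response_class swaps in fastapi.responses.JSONResponse as the custom response class:

from fastapi import FastAPI
from fastapi.responses import JSONResponse

from genai_stack.model import OpenAIGpt35Model

llm = OpenAIGpt35Model.from_kwargs(fields={"openai_api_key": "sk-xxxx"})

# Customize the response class used to render endpoint responses
app = FastAPI(default_response_class=JSONResponse)

@app.post("/predict/")
def predict(query: str):
    # Delegate the query to the model and return its dictionary response
    return llm.predict(query)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8082)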

A Uvicorn server should start, as shown below.
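
If everything is wired correctly, Uvicorn prints startup logs similar to the following (process ID and bind address will vary):

INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)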

Make HTTP requests to the URL http://localhost:8082/predict/, as in the example below.
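
For example, with Python's requests library (the query parameter name matches the sketch above and is an assumption, not a documented contract):

import requests

# Send the query as a request parameter to the /predict/ endpoint
response = requests.post(
    "http://localhost:8082/predict/",
    params={"query": "How long has AI been around?"},
)
print(response.json()["result"])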

  2. As a CLI

Create a model.json file with the following contents:
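
The exact schema is not shown on this page; a plausible layout, assuming the file simply mirrors the fields accepted by from_kwargs, would be:

{
    "openai_api_key": "sk-xxxx",
    "model_name": "gpt-3.5-turbo-16k"
}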

Run the genai-stack CLI with this configuration file.
