ChatGPT
This use case shows how to use the OpenAI Chat API to build applications with gpt-3.5-turbo.
Potential use cases for ChatGPT:
- Draft an email or other piece of writing
- Write Python code
- Answer questions about a set of documents
- Create conversational agents
- Give your software a natural language interface
- Tutor in a range of subjects
- Translate languages
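For context, a direct call to the Chat API looks roughly like the sketch below. It assumes the `openai` Python client (the 0.x interface contemporary with gpt-3.5-turbo) and an `OPENAI_API_KEY` environment variable; the prompt text is purely illustrative:

```python
import os

# A chat request is a list of role/content messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Draft a short thank-you email."},
]

# Only attempt the network call when an API key is available
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    import openai
    openai.api_key = api_key
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    print(response["choices"][0]["message"]["content"])
```

The rest of this example wraps the same API behind an AI Squared model configuration rather than calling it directly.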
For this example, the following dependency is required:
- aisquared

This package is available on PyPI via `pip`. The following cell installs this dependency and imports it into the notebook environment.

```python
! pip install aisquared

import aisquared
import json
```
Now that the required packages have been installed and imported, it is time to create the OpenAI model. In the following blocks, we configure the harvesting, preprocessing, analytic, postprocessing, and rendering steps. Once those are created, we add them all to the `ModelConfiguration` object and compile them into the `.air` file.

```python
# For harvesting, we need to harvest a user's input
harvester = aisquared.config.harvesting.InputHarvester()

# No preprocessing steps are needed, so we can set that value to None
preprocesser = None

# The analytic for this configuration is going to be a CustomObject
# class, where we pass the OpenAI URL, the user's input, and the key
# (`key` should hold your OpenAI API key)
analytic = aisquared.base.CustomObject(
    'OpenAIModel',
    baseUrl = 'https://api.openai.com',
    inputType = 'text',
    key = key,
    seedPrompt = ''
)

# No postprocessing steps are needed, so we can set that value to None
postprocesser = None

# To render results, we are going to use the DocumentRendering class
renderer = aisquared.config.rendering.DocumentRendering()
```
Now that the pieces are in place, we can put them together into a .air file that is ready to use in the AI Squared Extension.
```python
# Finally, we will take the previous objects and put them all
# together into a single ModelConfiguration object, which is then
# compiled into the .air file
model_config = aisquared.config.ModelConfiguration(
    'OpenAIChat',
    harvesting_steps = harvester,
    preprocessing_steps = preprocesser,
    analytic = analytic,
    postprocessing_steps = postprocesser,
    rendering_steps = renderer
)

# Compile to create the .air file
model_config.compile()
```
This model reads text in chunks called tokens. Tokens can be as short as one character or as long as one word. For example, the string `"ChatGPT is great!"` is encoded into six tokens: `["Chat", "G", "PT", " is", " great", "!"]`. The total number of tokens in an API call affects:
- How long your API call takes, as writing more tokens takes more time
- Whether your API call works at all, as total tokens must be below the model's maximum limit (4,096 tokens for `gpt-3.5-turbo-0301`)
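To inspect how a model like gpt-3.5-turbo splits a string into tokens, you can use OpenAI's `tiktoken` library. It is a separate dependency, not installed above, so this sketch guards the import to stay runnable without it:

```python
try:
    import tiktoken  # OpenAI's tokenizer library (pip install tiktoken)
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    tokens = enc.encode("ChatGPT is great!")
    # Each integer ID maps back to a chunk of the original string
    print(len(tokens), [enc.decode([t]) for t in tokens])
except ImportError:
    tokens = None  # tiktoken not installed; skip the demonstration
```

Counting tokens this way before sending a request lets you check that your prompt stays under the model's limit.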