How To Deal With OpenAI Token Limit Issue - Part 1
If you use the OpenAI API, you have likely stumbled upon this error:
InvalidRequestError: This model’s maximum context length is 4097 tokens, however you requested 13886 tokens (13630 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
This error occurs when the number of tokens in your prompt plus the number of tokens requested for the completion exceeds the model’s maximum context length.
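The arithmetic behind the error message above makes this concrete. The numbers below are taken directly from that message:

```python
# Numbers taken from the error message above.
MAX_CONTEXT = 4097        # the model's maximum context length, in tokens
prompt_tokens = 13630     # tokens in the prompt
completion_tokens = 256   # tokens requested for the completion

requested = prompt_tokens + completion_tokens
print(requested)          # 13886 tokens, well over the 4097-token limit
```

Because 13886 is more than 4097, the API rejects the request before running the model at all.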
In this article, I’ll show you how to reproduce this error and then how to fix it. Throughout, I’m using LangChain and the OpenAI API.
Scenario
We will take a large text file containing the contents of a book and use the OpenAI API to summarize that book for us.
Installing The Required Packages
pip install openai
pip install langchain
pip install unstructured
pip install tiktoken
Importing The Required Packages
from langchain.document_loaders import UnstructuredFileLoader
from langchain.chains.summarize import load_summarize_chain
from langchain import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
Getting OpenAI API Key
To get an OpenAI API key, go to https://openai.com/, log in, and then create a key from your account’s API keys page:
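Once you have the key, LangChain and the OpenAI client will pick it up from the `OPENAI_API_KEY` environment variable. A minimal way to set it from code (the `sk-...` value is a placeholder for your own key):

```python
import os

# Placeholder key; replace with your real key. Avoid hard-coding keys in
# shared code -- prefer setting the variable in your shell or a .env file.
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"

print(os.environ["OPENAI_API_KEY"].startswith("sk-"))
```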