Azure AI Search embeddings: max content length exceeded?
I'm creating embeddings for an array of 100 JSON objects in order to build an index. I loop through the objects, creating an embedding for each object's body field (each object has six simple fields and one vector field), and get this error: "This model's maximum context length is 8192 tokens, however you requested 16893 tokens (16893 in your prompt; 0 for the completion). Please reduce your prompt; or completion length."
However, counting tokens with the tiktoken library:

- the current (offending) object has 1778 tokens
- the running total across 32 objects is 38355 tokens
- the text being embedded for the offending object is 7487 characters
I have added delays in the code, to no avail. If I request an embedding for the offending object on its own (no loop), it works, which suggests the problem has something to do with the accumulating load, but I'm not sure what it could be.