431 RequestHeaderFieldsTooLarge when calling certain Azure OpenAI models from Power Automate

Grant Crofton 20 Reputation points
2025-02-03T12:23:40.27+00:00

We're calling Azure OpenAI from Power Automate to do chat completions.

Because the model we're using (gpt-35-turbo-16k 0613) will be retired soon, we're trying different models; however, they all fail with a 431 RequestHeaderFieldsTooLarge error.

This only happens when we access OpenAI via a vNet and a Private Endpoint; it works fine when we access the public endpoint directly. However, we can't do that in production due to security restrictions.

I believe this is because the vNet call goes via APIM; the same issue has been reported previously in different scenarios:

https://github.com/microsoft/sample-app-aoai-chatGPT/issues/875

https://learn.microsoft.com/en-us/answers/questions/1685374/azure-ai-studio-chat-in-my-data-gpt-4o-model-reque

https://community.openai.com/t/request-header-fields-too-large/935726

https://learn.microsoft.com/en-us/answers/questions/2114269/suddenly-getting-apistatuserror-error-code-431-whe

We get this issue with the following models we've tried:

  • gpt-35-turbo 0125
  • gpt-4o-mini

I've tried different API versions, including:

  • 2024-10-21
  • 2025-01-01-preview

As I said above, we don't get the issue when using gpt-35-turbo-16k 0613. Nothing changes between the working and non-working calls other than the deployment name.

(I should point out that we're not sending images; I believe this is due to APIM adding additional headers.)
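
For context, here's roughly what the custom connector is doing on our behalf - a minimal sketch of the chat-completions call, with the resource name, key and deployment names as placeholders (not our real values). The point is that only the deployment segment of the URL differs between the working and failing calls:

```python
import requests

# Placeholders - substitute your own resource name, key and deployment names.
RESOURCE = "my-aoai-resource"   # resolves to the private endpoint inside the vNet
API_KEY = "<api-key>"
API_VERSION = "2024-10-21"

def chat_completion(deployment: str) -> requests.Response:
    """Send a minimal chat-completions request to the given deployment."""
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={API_VERSION}"
    )
    headers = {"api-key": API_KEY, "Content-Type": "application/json"}
    body = {"messages": [{"role": "user", "content": "Hello"}]}
    return requests.post(url, headers=headers, json=body, timeout=60)

# Only the deployment name changes between the working and failing calls:
print(chat_completion("gpt-35-turbo-16k").status_code)  # works for us
print(chat_completion("gpt-4o-mini").status_code)       # 431 via the private endpoint
```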


1 answer

  1. Grant Crofton 20 Reputation points
    2025-02-12T16:04:19.9766667+00:00

Hi @santoshkc, thanks for your reply.

To clarify, we don't have an APIM instance; it's something used behind the scenes as part of the Custom connector, as shown here: https://learn.microsoft.com/en-us/connectors/connector-architecture#architecture-components

    So we don't have any control over that. We're not specifying any headers as part of the Custom connector configuration either.

    I've not found a way of logging the HTTP requests - there are vNet Flow Logs, but these don't work with a Private Endpoint. I've got Diagnostic Logs on the vNet and the Network Interface, but these don't seem to log the HTTP requests. If you know of a way to log these, let me know.

    The only thing I can think of is to create an APIM instance and route the requests through that, which would enable logging of the requests and stripping out any unwanted headers.

