Found the answer. The problem was actually in the request params. Normally, OpenAI expects a model name such as
gpt-4-1106-preview
to be passed as model. But in the Azure case, it expects the deployment name instead: whatever name you gave your deployment should be passed in here.
params = {
    "model": model.name,  # the deployment name in Azure, not the base model name
    "messages": chat_messages,
    "max_tokens": model_max_tokens,
    "temperature": temperature,
    "stream": stream,
}
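For reference, a minimal sketch of how this could be wired up. The helper `build_chat_params` and the deployment name "my-gpt4-deployment" are hypothetical; substitute the deployment name from your own Azure OpenAI resource:

```python
def build_chat_params(deployment_name, chat_messages, max_tokens, temperature, stream=False):
    """Build chat-completion request params for Azure OpenAI.

    With Azure, `model` must be your deployment name, not the
    underlying model name like "gpt-4-1106-preview".
    """
    return {
        "model": deployment_name,  # deployment name as shown in the Azure portal
        "messages": chat_messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": stream,
    }

params = build_chat_params(
    "my-gpt4-deployment",  # hypothetical deployment name
    [{"role": "user", "content": "Hello"}],
    max_tokens=256,
    temperature=0.7,
)
# Then pass it to an AzureOpenAI client, e.g.:
# client.chat.completions.create(**params)
```

The actual request (`client.chat.completions.create(**params)`) is left commented out since it needs a configured endpoint, API key, and API version.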