Hi @mathias Herbaux,
Thank you for reaching out on the Microsoft Q&A forum!
Azure OpenAI’s text-embedding-3 models support Matryoshka Representation Learning (MRL), which allows multi-level compression for faster searches and reduced storage costs. In Azure AI Search, MRL can be used alongside binary or scalar quantization, enabling dimensionality reduction through the truncationDimension property.
Since you're setting dimensions=512 in the AzureOpenAIEmbeddingSkill, the model itself truncates the MRL embeddings to 512 dimensions before they ever reach the index, so there is nothing left for truncationDimension to do. Truncation at the index level is only useful when you store full-size embeddings (text-embedding-3-small outputs 1,536 dimensions by default, text-embedding-3-large 3,072) and want Azure AI Search to shorten them at compression time. In your case, binary quantization is still worthwhile for reducing storage and speeding up queries, but truncationDimension isn't required.
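Conceptually, the two compression steps compose like this. This is an illustrative sketch only, not how Azure AI Search implements compression internally, and the function names are hypothetical:

```python
# Illustrative sketch of MRL truncation followed by binary quantization.
# Not the Azure AI Search implementation; function names are hypothetical.

def mrl_truncate(vec, dim):
    """Keep the first `dim` components of an MRL embedding.
    MRL training packs the most information into the leading dimensions,
    so a simple prefix truncation preserves most of the search quality."""
    return vec[:dim]

def binary_quantize(vec):
    """Map each component to one bit: 1 if positive, else 0.
    A 512-dim float32 vector (2,048 bytes) shrinks to 512 bits (64 bytes)."""
    return [1 if x > 0 else 0 for x in vec]

# Toy 1,536-dim "embedding" (real values would come from text-embedding-3).
embedding = [((-1) ** i) * (1.0 / (i + 1)) for i in range(1536)]

# If dimensions=512 is set on the skill, the model already returns 512-dim
# vectors and this truncation step never happens in the index.
truncated = mrl_truncate(embedding, 512)
bits = binary_quantize(truncated)

print(len(truncated))  # 512
print(len(bits))       # 512
```

The point of the sketch: when the skill already emits 512-dim vectors, only the quantization step applies, which is why truncationDimension adds nothing in your setup.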
I hope this helps! Please let us know if you have any further questions. Thank you.