I think the Microsoft team may not have considered the case where an image is slightly rotated (for example, when someone scans a document crookedly). In those cases your API merges lines from the top of the page with lines from the bottom because of the slight rotation of the sheet, so the same piece of text is sometimes extracted earlier and sometimes later.
Do I need to configure something to get the endpoint I use locally to work the same way in production?
I am using Form Recognizer to read some forms.
I've done a lot of testing on my local machine calling my Form Recognizer endpoint, and it works great. However, in production the results that come back are always slightly different from what I get locally. What causes this? Do I need to configure something so that the same endpoint behaves the same way in production?
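In case it helps, my call is essentially the following sketch (using the azure-ai-formrecognizer Python SDK; the environment-variable names, the file name, and the prebuilt-layout model here are placeholders, not my exact setup):

```python
import os

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Endpoint and key come from the environment so that local and production
# runs can point at the same Form Recognizer resource without code changes.
endpoint = os.environ["FORM_RECOGNIZER_ENDPOINT"]
key = os.environ["FORM_RECOGNIZER_KEY"]

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))

with open("scanned_form.jpg", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-layout", document=f)
result = poller.result()

# Print the extracted lines in the order the service returns them,
# which makes it easy to diff local output against production output.
for page in result.pages:
    for line in page.lines:
        print(line.content)
```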
2 answers
Pavankumar Purilla 1,230 Reputation points Microsoft Vendor
2024-10-25T07:24:54+00:00

Hi David Munoz,
To handle slightly rotated images with the Form Recognizer API, consider adding a preprocessing step that corrects the rotation before sending the images for analysis. Image processing libraries such as OpenCV can automatically detect and correct the orientation of scanned documents, as sketched below.
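Here is a minimal deskew sketch using OpenCV: it estimates the skew from near-horizontal Hough line segments and rotates the page back to level. The file names, the Canny/Hough parameters, and the 15-degree cut-off are illustrative assumptions, not anything prescribed by Form Recognizer, so tune them for your documents.

```python
import cv2
import numpy as np


def estimate_skew_angle(gray: np.ndarray) -> float:
    """Estimate the page skew in degrees from near-horizontal line segments."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=gray.shape[1] // 4, maxLineGap=20)
    if lines is None:
        return 0.0

    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        # Keep only segments close to horizontal, i.e. likely text baselines.
        if abs(angle) < 15:
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0


def deskew(image_path: str, output_path: str) -> float:
    """Rotate a scanned page so its text lines become horizontal."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    angle = estimate_skew_angle(gray)

    # Rotate around the image centre; a positive angle in getRotationMatrix2D
    # is a counter-clockwise rotation, which undoes a downward-sloping tilt.
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, matrix, (w, h),
                             flags=cv2.INTER_CUBIC,
                             borderMode=cv2.BORDER_REPLICATE)
    cv2.imwrite(output_path, rotated)
    return angle


if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    corrected = deskew("scanned_form.jpg", "scanned_form_deskewed.jpg")
    print(f"Corrected a skew of {corrected:.2f} degrees")
```

After deskewing, send the corrected image to your Form Recognizer endpoint as usual; with level text lines the service is less likely to merge lines from different parts of the page.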