How to handle touch device mapping in multi-monitor setups using the Windows API?
When the Windows display settings are switched to use only the external monitor, my application (a hardware-diagnostics tool built on the Windows API) still lets users activate and run the touchscreen test. This is problematic because the external monitor has no touch capability, yet the application cannot tell which monitor actually hosts the touch digitizer. As a result, users can run a touchscreen test that should be unavailable, leading to confusion and incorrect test results.
The problem appears to stem from how the Windows API, specifically the `POINTER_DEVICE_INFO` structure, handles device mapping. Its `monitor` field (an `HMONITOR`) reports the display the touch device is mapped to, but when only the external monitor is active, the API reports the touch device as associated with that monitor, even though it has no digitizer. The touchscreen test therefore remains accessible when it should not be.
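For reference, here is a minimal sketch (my own diagnostic snippet, not production code) that enumerates pointer devices with `GetPointerDevices` and resolves each device's `monitor` handle to a display name via `GetMonitorInfoW`. Running it in each display configuration makes the remapping visible. It assumes Windows 8 or later, a Unicode C++ build, and linking against User32.lib:

```cpp
#define _WIN32_WINNT 0x0602  // GetPointerDevices requires Windows 8+
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    // First call with a null array retrieves the device count.
    UINT32 count = 0;
    if (!GetPointerDevices(&count, nullptr) || count == 0) {
        wprintf(L"No pointer devices found.\n");
        return 0;
    }

    std::vector<POINTER_DEVICE_INFO> devices(count);
    if (!GetPointerDevices(&count, devices.data())) {
        wprintf(L"GetPointerDevices failed: %lu\n", GetLastError());
        return 1;
    }

    for (const POINTER_DEVICE_INFO& dev : devices) {
        // Resolve the HMONITOR in the 'monitor' field to a display name
        // such as \\.\DISPLAY1 so the current mapping is visible.
        MONITORINFOEXW mi = {};
        mi.cbSize = sizeof(mi);
        const wchar_t* name = L"<no monitor>";
        if (dev.monitor != nullptr && GetMonitorInfoW(dev.monitor, &mi))
            name = mi.szDevice;

        wprintf(L"device=%ls type=%d monitor=%ls\n",
                dev.productString,
                static_cast<int>(dev.pointerDeviceType),
                name);
    }
    return 0;
}
```

With both displays active, the touch device maps to the internal panel's display name; with only the external monitor active, the same device reports the external display instead.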
Given these constraints, I would appreciate guidance on reliable ways to distinguish touch-capable monitors from non-touch monitors in multi-display configurations, or workarounds for this mapping behavior.
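One workaround I have considered is sketched below: record the display name of the panel that physically carries the digitizer while both displays are active, then refuse to enable the touch test unless the touch device still maps to that display. The constant `kTouchPanelName` is a hypothetical value my tool would capture earlier; I have not verified this approach across all display topologies.

```cpp
#define _WIN32_WINNT 0x0602
#include <windows.h>
#include <cwchar>
#include <vector>

// Hypothetical: the display name captured while the internal touch
// panel was active, e.g. during an initial calibration step.
static const wchar_t kTouchPanelName[] = L"\\\\.\\DISPLAY2";

bool TouchPanelIsActive() {
    UINT32 count = 0;
    if (!GetPointerDevices(&count, nullptr) || count == 0)
        return false;

    std::vector<POINTER_DEVICE_INFO> devices(count);
    if (!GetPointerDevices(&count, devices.data()))
        return false;

    for (const POINTER_DEVICE_INFO& dev : devices) {
        if (dev.pointerDeviceType != POINTER_DEVICE_TYPE_TOUCH ||
            dev.monitor == nullptr)
            continue;

        MONITORINFOEXW mi = {};
        mi.cbSize = sizeof(mi);
        if (GetMonitorInfoW(dev.monitor, &mi) &&
            wcscmp(mi.szDevice, kTouchPanelName) == 0) {
            // The digitizer still maps to the recorded touch panel,
            // so the touchscreen test may be enabled.
            return true;
        }
    }
    // Either no touch device exists, or Windows has remapped the
    // digitizer to a different display (e.g. the external monitor
    // after the internal panel was turned off).
    return false;
}
```

The obvious weakness is that display names can be re-enumerated when the topology changes, so I am unsure whether matching on `szDevice` is stable enough; suggestions for a more robust identifier would be welcome.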
Any help would be appreciated, as this directly affects the usability of applications that rely on touch input in multi-monitor environments.