C++ COM Windows Sensor API - lack of floating point precision


I am developing an application in C++ that interfaces with the Windows Sensor API via Win32/COM.

I'm actually porting a C# application I previously wrote that used the Windows Sensor API via WinRT.

In the C# application, the inclinometer sensor was used to retrieve pitch, roll and yaw as single-precision (32-bit) floating point values.
In C# I would create the inclinometer sensor, set the report interval to the minimum permitted, and then receive the readings through the InclinometerReadingChanged callback.
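For reference, the equivalent of that C# flow, written in C++/WinRT so it matches the rest of this post, looks roughly like this (a sketch, not the original app code):

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Devices.Sensors.h>
    #include <iostream>

    #pragma comment(lib, "windowsapp")

    using namespace winrt;
    using namespace winrt::Windows::Devices::Sensors;

    int main()
    {
        init_apartment();

        // Same flow as the C# app: default inclinometer, fastest allowed
        // reporting, then a ReadingChanged handler that receives
        // pitch/roll/yaw as 32-bit floats.
        Inclinometer inclinometer = Inclinometer::GetDefault();
        if (!inclinometer)
            return 1;

        inclinometer.ReportInterval(inclinometer.MinimumReportInterval());

        inclinometer.ReadingChanged(
            [](Inclinometer const&, InclinometerReadingChangedEventArgs const& args)
            {
                InclinometerReading reading = args.Reading();
                std::cout << "Reading: " << reading.PitchDegrees() << ", "
                          << reading.RollDegrees() << ", "
                          << reading.YawDegrees() << "\n";
            });

        std::cin.get();   // keep the process alive while readings arrive
    }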

Now in C++ I'm doing the same thing, but through COM. The COM interface gives you the option to change both the report interval and the sensitivity, while the C#/.NET/WinRT interface automatically sets the sensitivity for you based on the report interval. I still get the pitch, roll and yaw values as 32-bit floats, but the precision is cut off at the first decimal place!
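Here is roughly what my COM side looks like (a trimmed sketch rather than the exact code, using ATL's CComPtr for brevity and with error handling omitted), followed by the console output I get:

    #include <windows.h>
    #include <initguid.h>
    #include <sensorsapi.h>
    #include <sensors.h>
    #include <portabledevicetypes.h>
    #include <atlbase.h>
    #include <iostream>

    #pragma comment(lib, "Sensorsapi.lib")
    #pragma comment(lib, "PortableDeviceGUIDs.lib")

    int main()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        {
            CComPtr<ISensorManager> manager;
            CoCreateInstance(CLSID_SensorManager, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&manager));

            CComPtr<ISensorCollection> sensors;
            manager->GetSensorsByType(SENSOR_TYPE_INCLINOMETER_3D, &sensors);

            CComPtr<ISensor> sensor;
            sensors->GetAt(0, &sensor);

            // Report interval and change sensitivity are set through SetProperties().
            CComPtr<IPortableDeviceValues> props;
            CoCreateInstance(CLSID_PortableDeviceValues, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&props));
            props->SetUnsignedIntegerValue(SENSOR_PROPERTY_CURRENT_REPORT_INTERVAL, 15); // minimum on my sensor

            // Change sensitivity is itself an IPortableDeviceValues keyed per data field.
            CComPtr<IPortableDeviceValues> sensitivity;
            CoCreateInstance(CLSID_PortableDeviceValues, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&sensitivity));
            sensitivity->SetFloatValue(SENSOR_DATA_TYPE_TILT_X_DEGREES, 0.01f);
            sensitivity->SetFloatValue(SENSOR_DATA_TYPE_TILT_Y_DEGREES, 0.01f);
            sensitivity->SetFloatValue(SENSOR_DATA_TYPE_TILT_Z_DEGREES, 0.01f);
            props->SetIPortableDeviceValuesValue(SENSOR_PROPERTY_CHANGE_SENSITIVITY, sensitivity);

            CComPtr<IPortableDeviceValues> results;
            sensor->SetProperties(props, &results);

            // The real app registers an ISensorEvents sink via SetEventSink(); a one-shot
            // GetData() is enough to show how the values come back: VT_R4, i.e. 32-bit floats.
            CComPtr<ISensorDataReport> report;
            if (SUCCEEDED(sensor->GetData(&report)))
            {
                PROPVARIANT x, y, z;
                PropVariantInit(&x); PropVariantInit(&y); PropVariantInit(&z);
                report->GetSensorValue(SENSOR_DATA_TYPE_TILT_X_DEGREES, &x);
                report->GetSensorValue(SENSOR_DATA_TYPE_TILT_Y_DEGREES, &y);
                report->GetSensorValue(SENSOR_DATA_TYPE_TILT_Z_DEGREES, &z);
                std::cout << "Reading: " << x.fltVal << ", " << y.fltVal << ", " << z.fltVal << "\n";
                PropVariantClear(&x); PropVariantClear(&y); PropVariantClear(&z);
            }
        }
        CoUninitialize();
    }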

    Sensor manager created! Permission: 0
    Sensor collection created, sensor count: 12
    Got sensor (to test with), it supports events!
    Minimum report interval: 15
    Current report interval: 50
    Setting current report interval to the minimum...
    New current report interval: 15
    Current sensitivity: 0.5, 0.5, 0.5
    Setting sensitivity...
    New current sensitivity: 0.01, 0.01, 0.01
    Sensor started!
    State has changed!
    Reading: -0.6, 0.8, 54.8
    Reading: 2.7, 2.9, 53.4
    Reading: 4.2, 3.9, 52.2
    Reading: 4.2, 2.9, 51.9
    Reading: 3.3, 2.3, 51.8
    Reading: 2.3, 1.9, 51.7
    Reading: 1.5
    Reading: 0, 1, 51.5
    Reading: -0.1, 0.9, 51.5
    Reading: -0.2, 0.9, 51.5
    Reading: -0.3, 0.9, 51.5
    Reading: -0.4, 0.8, 51.5
    Reading: -0.5, 0.8, 51.5
    Reading: -0.5, 0.8, 51.5
    Reading: -0.5, 0.8, 51.5
    Reading: -0.5, 0.8, 51.5
    Reading: -0.5, 0.8, 51.5
    Reading: -0.6, 0.8, 51.5
    Reading: -0.6, 0.8, 51.5
    Reading: -0.6, 0.8, 51.5

The C# application did not have this problem, and the two APIs (WinRT and Win32/COM) should in theory be getting their data from the same underlying sensor on the same system.

Could COM be the culprit in stripping away the floating point precision?
I looked into whether it's possible to set the precision on the sensor itself, but there doesn't appear to be a setting for it (I tried the accuracy and resolution properties with no luck).
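For completeness, this is roughly how I queried those properties with ISensor::GetProperty; they come back as a per-data-field IPortableDeviceValues and, as far as I can tell, are read-only, so there was nothing to set (the helper name below is just for illustration):

    #include <windows.h>
    #include <sensorsapi.h>
    #include <sensors.h>
    #include <portabledevicetypes.h>
    #include <propvarutil.h>
    #include <atlbase.h>
    #include <iostream>

    #pragma comment(lib, "Sensorsapi.lib")
    #pragma comment(lib, "Propsys.lib")

    // SENSOR_PROPERTY_RESOLUTION (and SENSOR_PROPERTY_ACCURACY) come back as
    // VT_UNKNOWN: an IPortableDeviceValues with one entry per data field.
    void PrintTiltXResolution(ISensor* sensor)
    {
        PROPVARIANT pv;
        PropVariantInit(&pv);
        if (SUCCEEDED(sensor->GetProperty(SENSOR_PROPERTY_RESOLUTION, &pv)) && pv.vt == VT_UNKNOWN)
        {
            CComQIPtr<IPortableDeviceValues> perField(pv.punkVal);

            PROPVARIANT res;
            PropVariantInit(&res);
            if (perField && SUCCEEDED(perField->GetValue(SENSOR_DATA_TYPE_TILT_X_DEGREES, &res)))
            {
                double value = 0.0;
                PropVariantToDouble(res, &value);   // handles VT_R4 as well as VT_R8
                std::cout << "Tilt X resolution: " << value << "\n";
            }
            PropVariantClear(&res);
        }
        PropVariantClear(&pv);
    }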

Edit:
I'm using std::cout to print the values. I tested whether the print routine was to blame by printing a hardcoded float with more decimal places through the same routine, and it printed correctly. I also looked at the values in the debugger and they were of the form #.#000000# (e.g. 7.70000005), so the truncation is already in the values themselves.

Given that the problem shows up at a decimal place, and the computer works internally in binary, I'm fairly sure this kind of truncation has to come from a printing/formatting routine somewhere, since those are usually the only bits of code that deal in decimal.
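The check on my own print path was essentially this (simplified; the variable names are just for illustration):

    #include <iostream>
    #include <iomanip>
    #include <limits>

    int main()
    {
        // If std::cout were dropping digits, the hardcoded value would be truncated too.
        // Raising the precision above the default 6 significant digits shows it isn't.
        float hardcoded  = 7.7f;      // prints as 7.69999981 at full float precision
        float fromSensor = 51.5f;     // a value as it arrives from the COM data report

        std::cout << std::setprecision(std::numeric_limits<float>::max_digits10)
                  << "hardcoded:  " << hardcoded  << "\n"
                  << "fromSensor: " << fromSensor << "\n";
    }

With the precision raised, the hardcoded value shows its full float digits while the sensor values still come out truncated to one decimal, so the loss happens before they ever reach std::cout.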

