Hey James,
Thanks for your answers on the data considerations between ARM and x86, and also for the link about going from 16-bit to IEEE float. I'll give the link a look, and I understand it's third-party documentation. You may not be able to answer this one, but if you get the
opportunity to check in with one of the local audio architects, one thing I'd like to know is: with WASAPI, how is the data laid out when calling the GetBuffer(...) method on IAudioCaptureClient? Say we have two channels, left and right,
and say it's 16-bit, so each sample is represented by two bytes. Does it get presented like...
1.) LeftByte1,LeftByte2,RightByte1,RightByte2
or reversed...
2.) RightByte1, RightByte2, LeftByte1, LeftByte2
or maybe byte-interleaved like...
3.) LeftByte1, RightByte1, LeftByte2, RightByte2
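In case it's useful context for whoever picks this up, here's a rough sketch of how I'd plan to walk the buffer if layout 1 turns out to be the one (whole left sample, then whole right sample, repeating frame by frame). The StereoFrame16 struct and ProcessCapturedFrames are just my own placeholder names, not anything from the API:

#include <windows.h>
#include <cstdint>

// One stereo frame under layout 1: the full left sample first, then the
// full right sample, each a 16-bit signed integer.
#pragma pack(push, 1)
struct StereoFrame16
{
    int16_t left;   // LeftByte1, LeftByte2
    int16_t right;  // RightByte1, RightByte2
};
#pragma pack(pop)

// pData and numFrames are what IAudioCaptureClient::GetBuffer(...) hands
// back through its ppData and pNumFramesToRead out-parameters.
void ProcessCapturedFrames(const BYTE* pData, UINT32 numFrames)
{
    const StereoFrame16* frames =
        reinterpret_cast<const StereoFrame16*>(pData);

    for (UINT32 i = 0; i < numFrames; ++i)
    {
        int16_t left  = frames[i].left;
        int16_t right = frames[i].right;
        // ... hand left/right off to the rest of the pipeline ...
        (void)left;
        (void)right;
    }
}

If it's actually layout 2 or 3, I'd obviously restructure that, which is why I'm hoping to confirm the ordering up front.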
If you do happen to find that out, it may be enough to put me on the right path. I appreciate all your assistance and support, James, with all my questions. You're a rock star, and it's great to see your level of dedication and support to the local
dev community on these forums! If this is your job, you deserve a raise. Thanks again, and I'm looking forward to your feedback.
If anyone else out there in the community can shed some more light on this topic, I'd love to hear your thoughts as well.
Thanks,
Matthew