mono/stereo buffer formats

  • Question

  • Hi all,

    What's the format of mono/stereo buffers provided to
    waveInAddBuffer through the WAVEHDR structure, or where could I find more information about it?

    Thanks in advance,

    Andre

    Saturday, September 27, 2008 2:41 PM

Answers

  • If nChannels == 2 && wBitsPerSample == 16, then there are 2 bytes for the left speaker followed by 2 bytes for the right speaker.

     

    Be aware that other values for nChannels (i.e. > 2) are possible, but uncommon.  These can be used for subwoofers, back speakers, etc.

    Monday, September 29, 2008 11:58 PM

All replies

  • The format of the data is specified in the WAVEFORMATEX that is passed in the waveInOpen call.  http://msdn.microsoft.com/en-us/library/ms713462(VS.85).aspx gives a decent view of this.

     

    In short, if nChannels is 1 the signal is mono; if it is 2, the signal is stereo.  Multiply nChannels by (wBitsPerSample / 8) and you get the number of bytes it takes to store each sample frame (one sample for every channel).  Multiply that by nSamplesPerSec and you know how many bytes it takes to store one second's worth of audio.

     

    There are variations on this depending on the setting of wFormatTag, but that's the basics.
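
    For example, here is a minimal sketch of filling in a WAVEFORMATEX for plain PCM, with nBlockAlign and nAvgBytesPerSec derived as described above.  The MakePcmFormat helper name and the 16-bit / 44.1 kHz / stereo values in the usage comment are illustrative assumptions, not anything required by the API.

    #include <windows.h>
    #include <mmsystem.h>

    /* Sketch: describe a plain PCM format.  The caller passes the
       resulting structure to waveInOpen. */
    static WAVEFORMATEX MakePcmFormat(WORD channels, DWORD samplesPerSec, WORD bitsPerSample)
    {
        WAVEFORMATEX wfx;
        ZeroMemory(&wfx, sizeof(wfx));
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = channels;
        wfx.nSamplesPerSec  = samplesPerSec;
        wfx.wBitsPerSample  = bitsPerSample;
        wfx.nBlockAlign     = (WORD)(channels * (bitsPerSample / 8));  /* bytes per sample frame */
        wfx.nAvgBytesPerSec = samplesPerSec * wfx.nBlockAlign;         /* bytes per second       */
        wfx.cbSize          = 0;                                       /* no extra format bytes  */
        return wfx;
    }

    /* e.g. WAVEFORMATEX wfx = MakePcmFormat(2, 44100, 16);  16-bit stereo at 44.1 kHz */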

    Saturday, September 27, 2008 8:55 PM
  • And how is a stereo sound buffer organized?

    I guess that if I had a mono buffer, and had set

    WAVEFORMATEX: nChannels = 1 and wBitsPerSample = 16

    then each 16-bit sample (2 bytes) would be played at both speakers at the same time.

    But what about when nChannels = 2? Would even samples be played at the right speaker, and odd samples at the left speaker?

    Thanks,

    Andre
    Monday, September 29, 2008 11:09 PM
  • If nChannels == 2 && wBitsPerSample == 16, then there are 2 bytes for the left speaker followed by 2 bytes for the right speaker.

     

    Be aware that other values for nChannels (i.e. > 2) are possible, but uncommon.  These can be used for subwoofers, back speakers, etc.
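
    In other words, the samples are interleaved frame by frame: left, right, left, right.  As a sketch (assuming a 16-bit stereo format and a WAVEHDR the driver has already filled in), reading the buffer back looks like this:

    #include <windows.h>
    #include <mmsystem.h>

    /* Sketch: walk a 16-bit stereo capture buffer.  Each frame is 4 bytes:
       2 bytes for the left sample followed by 2 bytes for the right one. */
    void WalkStereoBuffer(const WAVEHDR *whdr)
    {
        const short *samples = (const short *)whdr->lpData;
        DWORD frames = whdr->dwBytesRecorded / (2 * sizeof(short));
        DWORD i;
        for (i = 0; i < frames; i++)
        {
            short left  = samples[2 * i];      /* left channel of frame i  */
            short right = samples[2 * i + 1];  /* right channel of frame i */
            (void)left;                        /* ...process the pair here */
            (void)right;
        }
    }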

    Monday, September 29, 2008 11:58 PM
  • How can I then convert a mono, 16-bits-per-sample stream (nChannels == 1 and wBitsPerSample == 16) into a stereo, 16-bits-per-sample one (nChannels == 2 and wBitsPerSample == 16)?

    Am I correct to think that in the mono stream, each 2 bytes correspond to a sample in the following way?

    monoSample[0] = sample_0_least_significant_byte
    monoSample[1] = sample_0_most_significant_byte
    monoSample[2] = sample_1_least_significant_byte
    monoSample[3] = sample_1_most_significant_byte
    ...

    And in the stereo stream?

    stereoSample[0] = left_sample_0_least_significant_byte
    stereoSample[1] = left_sample_0_most_significant_byte
    stereoSample[2] = right_sample_0_least_significant_byte
    stereoSample[3] = right_sample_0_most_significant_byte
    ...

    If so, is the following correct to convert a mono into a stereo stream having just sound at the left side?

    stereoSample[0] = monoSample[0]
    stereoSample[1] = monoSample[1]
    stereoSample[2] = 0
    stereoSample[3] = 0
    stereoSample[4] = monoSample[2]
    stereoSample[5] = monoSample[3]
    stereoSample[6] = 0
    stereoSample[7] = 0
    ...

    Thanks in advance,

    Andre
    Monday, October 13, 2008 9:28 PM
  • Depending on what your source is, you'll also need to change the header info to reflect the new format.
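
    As a sketch of both steps, assuming 16-bit PCM and the left-only layout described above (the MonoToLeftOnlyStereo name and its parameters are illustrative, not from the API):

    #include <windows.h>
    #include <mmsystem.h>

    /* Sketch: expand 16-bit mono samples into 16-bit stereo frames with
       sound on the left channel only, then fix up the format header.
       The stereo buffer must hold 2 * frames samples. */
    void MonoToLeftOnlyStereo(const short *mono, short *stereo, DWORD frames, WAVEFORMATEX *wfx)
    {
        DWORD i;
        for (i = 0; i < frames; i++)
        {
            stereo[2 * i]     = mono[i];  /* left channel gets the mono sample */
            stereo[2 * i + 1] = 0;        /* right channel stays silent        */
        }

        /* The header must now describe a stereo stream. */
        wfx->nChannels       = 2;
        wfx->nBlockAlign     = (WORD)(wfx->nChannels * (wfx->wBitsPerSample / 8));
        wfx->nAvgBytesPerSec = wfx->nSamplesPerSec * wfx->nBlockAlign;
    }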

    Tuesday, October 14, 2008 12:33 AM