waveout control volume of higher bits

  • Question

  • Hey guys.

         The maximum output of 24 bit is much higher than 16 bit.  If I adjust the maximum output I am still overdriving the signal.  The maximum output isn't what needs adjusting.  Any ideas? 

     

    Friday, February 23, 2018 12:09 AM

Answers

  • The maximum output of 24 bit is much higher than 16 bit.  If I adjust the maximum output I am still overdriving the signal.  The maximum output isn't what needs adjusting.  Any ideas?   

    That is correct - it is not the maximum output that needs to be adjusted.  It is the scaling.  For a nominated maximum dB you now need to scale the amplitude for the range of +/-4096 instead of +/-256.
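
    For illustration only, the rescaling described above might look like this sketch; the helper name and the full-scale constants passed in are hypothetical and depend on how your amplitudes are represented:

        ' Sketch: rescale a sample amplitude from one range to another,
        ' e.g. from a +/-256 range to a +/-4096 range.
        Private Function RescaleSample(ByVal sample As Integer,
                                       ByVal oldFullScale As Integer,
                                       ByVal newFullScale As Integer) As Integer
            ' Use a Long for the intermediate product to avoid overflow.
            Return CInt(CLng(sample) * newFullScale \ oldFullScale)
        End Function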


    • Edited by Acamar Friday, February 23, 2018 12:55 AM sp
    • Proposed as answer by Reed KimbleMVP, Moderator Saturday, February 24, 2018 4:14 PM
    • Marked as answer by -tE Sunday, March 4, 2018 10:43 PM
    Friday, February 23, 2018 12:54 AM
  • At its simplest, I needed an explanation of the bit depth distorting. 

    According to the documentation, WAVEFORMATEX is not the correct structure for 24-bit PCM data; it only works for 8 and 16 bit data.  You need to use WAVEFORMATEXTENSIBLE to pass 24 bit PCM data values.
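
    For reference, a managed declaration of WAVEFORMATEXTENSIBLE might look like the sketch below; this mirrors the mmreg.h layout, and the GUID noted in the comment is the standard KSDATAFORMAT_SUBTYPE_PCM value:

        ' Sketch: WAVEFORMATEXTENSIBLE wraps a WAVEFORMATEX whose wFormatTag
        ' is WAVE_FORMAT_EXTENSIBLE (&HFFFE) and whose cbSize is 22.
        <StructLayout(LayoutKind.Sequential)>
        Public Structure WAVEFORMATEXTENSIBLE
            Public Format As WAVEFORMATEX       ' wFormatTag = &HFFFE, cbSize = 22
            Public wValidBitsPerSample As Short ' e.g. 24
            Public dwChannelMask As Integer     ' e.g. 3 = front left + front right
            Public SubFormat As Guid            ' KSDATAFORMAT_SUBTYPE_PCM:
                                                ' 00000001-0000-0010-8000-00AA00389B71
        End Structure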

    It's also worth noting that you appear to be starting with a stereo (2 channel) audio file, which means the byte array contains 3 bytes for the first Y-axis point on the left channel, then 3 bytes for the first Y-axis point on the right channel - so every 3 bytes is a data point for one channel, and every 6 bytes is one increment on the X-axis (3 for left and 3 for right at each point in time).
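
    The layout described above can be walked frame by frame; as a sketch (assuming little-endian signed 24-bit samples in a byte array, here hypothetically named buffer):

        ' Sketch: step through a 24-bit stereo buffer six bytes at a time.
        For i As Integer = 0 To buffer.Length - 6 Step 6
            ' Assemble each 3-byte little-endian sample, then sign-extend it.
            Dim leftSample As Integer = buffer(i) Or (CInt(buffer(i + 1)) << 8) Or (CInt(buffer(i + 2)) << 16)
            If (leftSample And &H800000) <> 0 Then leftSample = leftSample Or &HFF000000
            Dim rightSample As Integer = buffer(i + 3) Or (CInt(buffer(i + 4)) << 8) Or (CInt(buffer(i + 5)) << 16)
            If (rightSample And &H800000) <> 0 Then rightSample = rightSample Or &HFF000000
        Next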

    Again, it appears that this would all be much easier to implement using a managed DirectSound wrapper instead of Winmm.dll.

    https://msdn.microsoft.com/en-us/library/windows/desktop/bb318770(v=vs.85).aspx

    https://slimdx.org/docs/#N_SlimDX_DirectSound


    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

    • Proposed as answer by IronRazerz Sunday, February 25, 2018 3:29 PM
    • Marked as answer by -tE Sunday, February 25, 2018 9:32 PM
    Sunday, February 25, 2018 2:57 PM
    Moderator

All replies

  • Hi,

    I'm not sure I understand your question correctly.
    Do you want to make some software? If not, I'm afraid you are in the wrong place (this is a forum for VB.NET developers).

    Would you explain in more detail what you want to do? 

    Regards,

    Ashidacchi

    Friday, February 23, 2018 12:28 AM
  • That is correct - it is not the maximum output that needs to be adjusted.  It is the scaling.  For a nominated maximum dB you now need to scale the amplitude for the range of +/-4096 instead of +/-256.


    Where? I don't know of this. Does my format structure need this?
    Friday, February 23, 2018 1:16 AM
  • Where? I don't know of this. Does my format structure need this?

    You haven't shown any code or even a description of what you are doing, so expecting anyone to know where you are doing the scaling or what a 'format structure' might be is unreasonable.  But if you were working with a 16-bit source and are now working with a 24-bit source, and you are somehow creating some sort of output from that source, then to maintain that output within the same limits you need to adjust the scaling.

    Friday, February 23, 2018 1:19 AM
  • Understood.  I have found that the buffer size is what needs the adjustment.  It's still not performing right, but by adjusting it the dB is correct.

    Show the code you are using and provide a description of what you are trying to do and what is meant by 'not performing right'.

    Friday, February 23, 2018 2:35 AM
  • You haven't shown any code or even a description of what you are doing, so expecting anyone to know where you are doing the scaling or what a 'format structure' might be is unreasonable.  But if you were working with a 16-bit source and are now working with a 24-bit source, and you are somehow creating some sort of output from that source, then to maintain that output within the same limits you need to adjust the scaling.

    Understood.

        <StructLayout(LayoutKind.Sequential)>
        Public Structure WAVEFORMATEX
            Public wFormatTag As Short
            Public nChannels As Short
            Public nSamplesPerSec As Integer
            Public nAvgBytesPerSec As Integer
            Public nBlockAlign As Short
            Public wBitsPerSample As Short
            Public cbSize As Short
    
        End Structure
    
    
    
       <StructLayout(LayoutKind.Sequential)>
        Public Structure WAVEHDR
            Public lpData As IntPtr
            Public dwBufferLength As Integer
            Public dwBytesRecorded As Integer
            Public dwUser As IntPtr     ' DWORD_PTR in the native header
            Public dwFlags As Integer
            Public dwLoops As Integer
            Public lpNext As IntPtr     ' pointer to the next WAVEHDR
            Public Reserved As IntPtr   ' DWORD_PTR in the native header
        End Structure

    I also have the consts for the waveformat types but don't know where to use them, and maybe that's the problem.  They don't include any formats above 16 bit though.

       
    Dim SplBits as Integer = 24
    Dim SplPerSec as Integer = 44100
    Public Const WAVE_FORMAT_PCM as Integer = 1
    
         Dim wFormat As WAVEFORMATEX
            wFormat.wFormatTag = WAVE_FORMAT_PCM
            wFormat.nChannels = 2
            wFormat.nSamplesPerSec = SplPerSec
            wFormat.wBitsPerSample = SplBits
            wFormat.nBlockAlign = wFormat.nChannels * (wFormat.wBitsPerSample \ 8) ' \ = integer division
            wFormat.nAvgBytesPerSec = (wFormat.nSamplesPerSec * wFormat.nBlockAlign)
            wFormat.cbSize = 0


    Tried this.

     Dim MaxVol As Integer = (2 ^ (16 * (16 / SplBits))) - 1
    
    'left channel'
      volume_msg = ("&H0000" & Hex(MaxVol))
    
    waveOutSetVolume(Intptr, volume_msg)

    My concern is that I am controlling the sound card.  I thought by using the pointer it would control the wave file.  In my app, any wave played before it through the same sound device is affected.  I want to control the wave and be able to have a true mono sound from 2 sources to each side.




    • Edited by -tE Friday, February 23, 2018 2:50 AM
    Friday, February 23, 2018 2:42 AM
  • This wasn't correct.  The sample was just so short it didn't get to max volume.
    Friday, February 23, 2018 2:44 AM
  • My concern is that I am controlling the sound card.  I thought by using the pointer it would control the wave file.  In my app, any wave played before it through the same sound device is affected.  I want to control the wave and be able to have a true mono sound from 2 sources to each side.

    I can't see how that comment relates to the topic of this post.

    You have adjusted the scaling by setting wBitsPerSample to 24.  How have you confirmed that your source is a 24-bit PCM?

    You have shown only one setting for max volume, and that uses a variable that is not described.  Where does that formula come from, is it the same for a 16-bit and 24-bit source, and what value are you using for SplBits? Why are you calculating it by formula?

    Is the problem that the audio card is overdriven at maximum volume, or that the 24-bit PCM is driving the card to a higher level at a given volume setting, compared to the 16-bit PCM?

    Your volume setting does not look right to me - that is supposed to be DWORD.  You are passing a string.  The compiler might convert your string for you, but it should not be trusted.  Specify the exact integer value for this parameter. That will enable you to confirm exactly what that value is (65535, presumably).

    0 can't be a handle so I presume it's a device ID - have you confirmed that device ID is correct?

    Friday, February 23, 2018 3:01 AM
  • Thanks. First, is DWORD a C++-only variable choice? I think it might be.

    Your comment about the PCM value is most likely the problem.  I exported the file myself as a 24 bit 44.1 wave file, same as my 16 bit versions.  But that's what I am trying to ask for help about.  I don't know if I can even do this or need to declare a different PCM format.  All I have is the single PCM const for 16 bit.  I was assuming the winmm.dll would help more with the dithering.  The file plays in media player, but I guess it could really be adjusting the wav. 

    The formula is a try at a ratio between the exponential difference that will occur anyway with the data.  (2^16)-1 is 65535. So by calculating the vast difference in a 24 bit sample's output it seems to lower the volume to the max correctly.  But the file itself is overdriven and distorted.

    Forgot.  The IntPtr is as such... it is in a list.

    Private BufferWaveOutIntptr As New List(Of IntPtr)

    AudioProfile.waveOutOpen(BufferWaveOutIntptr.Item(S), SCO(S), wFormat, Nothing, BufferLocation(S), AudioProfile.CallBacks.CALLBACK_NULL)
    ' this is the volume function that contains the waveOutSetVolume
    selectLRFunction(S)
    AudioProfile.waveOutPrepareHeader(BufferWaveOutIntptr.Item(S), NF, Marshal.SizeOf(NF))
    AudioProfile.waveOutWrite(BufferWaveOutIntptr.Item(S), NF, Marshal.SizeOf(NF))

    The waveOutOpen points the IntPtr to the sound card.  But why am I controlling more than just that instance?  Every instance does the same thing.  These are the 2 issues I am having.  I would love to control the bit depth, but even more to control each file's volume correctly.  I am not sure that waveOut can successfully do that part, but the bit depth should work.

    NF is the buffered data in a WaveHDR structure.

           Dim NF As AudioProfile.WAVEHDR
            NF.lpData = 'whatever'
            NF.dwBufferLength = 'whatever' 
            NF.dwBytesRecorded = 0
            NF.dwUser = 0
            NF.dwFlags = 0



    • Edited by -tE Friday, February 23, 2018 3:33 AM
    Friday, February 23, 2018 3:21 AM
  • First, is DWORD a C++-only variable choice? I think it might be.

    You must pass a DWord - that is required by the method.  The compiler might be converting that string into a DWord for you, but you won't know what value it is calculating.  You should always make your conversions explicit.   I presume that you want to make that value -1.

    Your comment about the PCM value is most likely the problem.  I exported the file myself as a 24 bit 44.1 wave file, same as my 16 bit versions.  But that's what I am trying to ask for help about.

    Exported from what? You need to independently verify the file format.

    The formula is a try at a ratio between the exponential difference that will occur anyway with the data.  (2^16)-1 is 65535. So by calculating the vast difference in a 24 bit sample's output it seems to lower the volume to the max correctly.  But the file itself is overdriven and distorted.

    I don't know what that means.  An 'exponential difference' would relate to relative sound levels, but you are only talking about a max volume.  That is neither a difference nor exponential.  The max volume setting for a channel is 0xffff, and that doesn't depend on the bits per sample.  I can't see there is any point in chasing other issues with the max volume until you know exactly what value you are passing to that method.

    Friday, February 23, 2018 4:38 AM
  • I've read this:

    FFFF is max volume for a 16 bit sample, which is (2^16)-1 = 65535.

    This is because the bits exponentially push the sound pressure.

    This means that FFFF is not the same max for 24 bit.

    It exponentially grows at (2^24)-1 which is 16777215.

    So:

    I tried setting the volume based on the total sound pressure at this.  I passed the variable named SplBits so it could vary from 16 to 32 like:

    db = 2 ^ (16 * (16 / SplBits)) - 1

    This means that at 16 bits it is 65535, but it is lowered in ratio to the number of bits and then calculated "exponentially".

    The rest of the trouble:

    If you see that 16777215 cannot be a hex value of 4 digits, the high and low word is not the right length anyway. 

    This is a big issue for, as I said, I thought I was manipulating the wave or bits, but apparently it is the actual sound card.  I don't see how to control the wave form by this method.  Why am I looking to control it?  The function is literally a waveOut function.  Everything points at the bytes of data, and then I get to this point and can't control them.  Am I wrong?  I have been creating in MIDI for a number of years now, and MIDI has only 128 possible values.  So you can easily move the high and low bits.  I don't see how to do this when the values are so high. 

    In MIDI I found it simpler to (X << 8) or 256 * X.  The declared function parameter was an Integer when I passed the high/low at 65535.  I changed it to UInteger when I had trouble building the value.  It is working, but the UInt building is crazy in VB.  It looks like this right now:

    Dim volume_msg As UInteger
    
    volume_msg = ("&H" & Hex(MaxVol) & Hex(MaxVol))

    This works as long as the hex value has its 4 digits, like FFFF or 0000.  But then I run into a smaller value which is not 4 digits and needs leading 0s.  Now I can't format it as hex and would have to pass a string, but UInt can't use that.
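
    One way to avoid the hex-string building entirely is to pack the two channel words with bit arithmetic; per the waveOutSetVolume documentation the low word is the left channel and the high word is the right channel.  A sketch (MaxVol here stands for whatever channel level, 0 to &HFFFF, you intend):

        ' Sketch: pack two 16-bit channel levels into a single volume DWORD.
        Dim leftLevel As UInteger = CUInt(MaxVol) And &HFFFFUI
        Dim rightLevel As UInteger = CUInt(MaxVol) And &HFFFFUI
        Dim volume_msg As UInteger = (rightLevel << 16) Or leftLevel

    This handles small values like &H0000FFFF without any string formatting or leading-zero concerns.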


    • Edited by -tE Friday, February 23, 2018 6:31 AM
    Friday, February 23, 2018 6:29 AM
  • FFFF is max volume FOR a 16 bit sample which is (2^16)-1 = 65535
    This is because the bits exponentially push the sound pressure.
    This means that FFFF is not the same max for 24 bit.

    The max volume setting used in that method call does not consider the number of bits per sample.  The values (0 to 0xffff for each channel) are the same for any file format and any data size.

    You should not be passing a string to the function. The value you want to pass must be numeric, so that you can see what you are actually using and know that it is the value that you want.  Currently you have no way of telling what is actually being used.

    I'm not sure what change you are making that doesn't work, but it doesn't really matter because using a string for that argument is incorrect.

    Friday, February 23, 2018 9:14 AM
  • I have tried numeric.  It overloads.  These 2 calculations give the same integer.

     
                volume_msg = ("&H" & Hex(MaxVol) & Hex(MaxVol))
    
    
                volume_msg = (MaxVol * MaxVol + 1)
    As well, if one word is 0 you can't multiply against it and have a value.  It has to pass as hex.

    But I am still not understanding the bit conversion.  When I grab a chunk from the file to read, I simply use the freq and bit depth to grab an amount.  Then I am telling the waveEx structure the values I used, which would be 44100 and 24. But it doesn't want to play 24 bit.  Is this the answer?



    I have read about the new audio SDK.  I might have to try that.  It would be one thing to be successful with 24 bit,  but I need to control each wav out and not really the sound card.  I am not really that far into this so it might just be the right direction. 


    • Marked as answer by -tE Friday, February 23, 2018 4:39 PM
    • Unmarked as answer by -tE Friday, February 23, 2018 4:42 PM
    • Edited by -tE Friday, February 23, 2018 5:26 PM
    Friday, February 23, 2018 3:55 PM
  • I have tried numeric.  It overloads.  These 2 calculations give the same integer.

    I have no idea what you are doing with that.  The value you pass to the method must be a number.  If that expression is working for you then it's only because the compiler is converting it to a number for you, but you don't know what number that is.

    If you want to set maximum volume for both channels then you use something like:

            Dim volume_msg As Integer = &HFFFFFFFF ' full setting
        Dim volume_msg As Integer = &H7FFF7FFF ' half setting
            Dim volume_msg As Integer = &H0 ' silence

    If the maximum overloads the device then that simply means that the device can not cope with that level of input.  But you haven't described what you mean by 'overloads', so it might be just that the speakers aren't good enough for the device.

    Friday, February 23, 2018 8:45 PM
  • This thread and some other past ones help explain more about the overall thing if it helps.

    https://social.msdn.microsoft.com/Forums/vstudio/en-US/f0cf7683-4203-4e75-b366-15fc419b247e/winmm-mixer-controls?forum=vbgeneral

    Friday, February 23, 2018 9:08 PM
  • I have figured out a way. You hinted at it, but I'm not sure you meant for me to use it like I am. :)

    4096


    Should I start a different thread about the output of waveOutSetVolume and how to get around it controlling the device instead of the wave?  I think this is answered.  And you should get credit, even though I worked my tail off to find the use!!  :)
    • Edited by -tE Friday, February 23, 2018 9:43 PM
    Friday, February 23, 2018 9:10 PM
  • I just posted the other thread for background.

    I am staying out. I don't understand exactly what you want to do in reality and would not be able to help much in code if I did.

    :)


    PS OHHHHH. So you actually have two sound cards in use to play one wav sound. And one soundcard plays the left channel??? of the recorded wav sound and one soundcard plays right???

    The result is you need to set the volume for each sound card independently (via code) and not use windows sound control etc because it assumes you have only one sound card working and is setting both left and right together?

    Friday, February 23, 2018 9:53 PM
  • ... it assumes you have only one sound card working and is setting both left and right together?

    If that's what's happening it's because the device ID is being used instead of the handle in the call to the method.  As it is being passed as 0 then it must be a device ID.  I would never have picked that setup from the above discussion.
    Friday, February 23, 2018 10:06 PM
  • ... it assumes you have only one sound card working and is setting both left and right together?

    If that's what's happening it's because the device ID is being used instead of the handle in the call to the method.  As it is being passed as 0 then it must be a device ID.  I would never have picked that setup from the above discussion.

    Acamar,

    It's just my latest theory; I am not sure what is happening. Probably not?

    I will stay out and let Zemp answer.

    :)

    PS Zemp, make sure it's clear who you are addressing if it matters. You should mark Acamar as the answer if you want to. Start a new thread or use this one while Acamar is here. Whatever you want. I am staying out....

    Friday, February 23, 2018 10:10 PM
  • Guys,

     My sound card is by a professional audio manufacturer.  It connects to outboard components and allows up to 96 channels.  These units were designed around the time ASIO showed up.  Prior to that the WDM drivers showed up like this:

    Brand of card 1-2

    Brand of card 3-4

    Brand of card 5-6

    etc.

    With ASIO they show up as

    Brand of card Left (1-2)

    Brand of card Right (1-2)

    Brand of card Stereo (1-2)

    etc.

    So with ASIO I have a huge list to choose from, while my app is only showing the first list (read PS).  IF I can mute say right 1-2, but still get sound from another file in just left by muting the right (1-2), then I can go forward.  But as I have mentioned, if I import two files and try this, whichever setting I change controls both waveOut functions.  This isn't right. 

    As well,  no I don't think what I have is wrong yet.  There is too much code to throw up here.  But here is the strategy.

    I fill a buffer.

    I have a function that is called when a percentage of that buffer is done playing based on the bytes.

    This works.

    I have a switch that calculates 2 buffers and recalc 2 when 1 is playing and then opposite when 2 is playing.

    Works.

    I have used an if statement to recall on the play function as well as a callback when either the first choice is below the desired amount of bytes or second if the waveOut returns done.

    Works.

    But now I can't control the waves.  I pass a string which has the IntPtr of the marshalled data to my play function.  In this string I have passed the IntPtr and the buffer size.  I use this to assign the IntPtr by doing some math to give the waveOut its own IntPtr (I removed this in the earlier code because it wouldn't have made sense in the thread).  They are stored in a list that I use to delete allocated memory and keep up to date with the buffers.

    Then when I am defining the structure for the waveOut, I have passed buffer size.  

    This is where the 24 bit hold-up is.  The buffer size is calculated by bit depth and freq, so this should and would change the passed buffer size to the wave structures.  The wave structures are simply told what sample depth and rate to use, and they are calculated based on this.  They should and would match what the string said after my function has calculated the next buffer size.

    But, 2 points.

    I can't seem to control the waveOut of the intended Intptr.

    Second, the 24 bit is distorted.

    Whether 24 bit is too much data is what I have been focusing on.  That is what the 2^16 conversion is about.  But it doesn't alter the wave.  It is the first point.  It is controlling the sound card.

    I guess it could be possible that I haven't stored a unique ID for the  Intptr in my list,  but I am simply doing this:

    Intptr = bufferSize*2 + channelNumber

    So even if the first part was the same,  the channel number is added to the Intptr value.

    I am removing them from the list and reinserting when the buffer size is recalculated.

    I have been reading about writing a 24 bit sample.  And they suggest writing a 32 bit sample with blank data in the last 8 bits.  It is possible that the file needs this readjustment.  But that, I don't think, is my need; rather, the structure would do this.

    PS.  I don't remember installing the WDM driver, which has me curious anyway.  In my other DAW, I see the ASIO only.  Which means I might be able to populate my list easily and access them single channel anyway. But right now I want to learn the winmm.dll.




    • Edited by -tE Saturday, February 24, 2018 12:22 AM
    Saturday, February 24, 2018 12:07 AM
  • ... whichever setting I change controls both waveOut functions.

    Is this the problem of this thread?  All the other description does not seem to be related, and doesn't match the comment in the initial post.

    If that's the problem it is because you are calling the method with the device ID instead of the handle:
    "If a device identifier is used, then the result of the waveOutSetVolume call applies to all instances of the device. If a device handle is used, then the result applies only to the instance of the device referenced by the device handle."  The code you originally posted used a parameter value of 0, so that was obviously a device ID.  You are now using something different, but there is no explanation of where it comes from other than "Intptr = bufferSize*2 + channelNumber", which makes no sense at all for a handle.

    Saturday, February 24, 2018 2:12 AM
  • Can I not assign the IntPtr a number that will then be the IntPtr for the other wave functions?  That was what I was doing.  Or do I have to wait for a response to get that identifier?  I was creating a number to use as an identifier for the functions that followed.
    Where does this IntPtr identifier value return to, and how? This is simple stuff, but in MIDI there are fewer devices, so I would simply put a declared variable such as Dim hmo As IntPtr and then use the hmo in the function. How does this separate out and keep track of countless open functions if I can't assign the value?



    • Edited by -tE Saturday, February 24, 2018 5:31 AM
    Saturday, February 24, 2018 4:09 AM
  • Can I not assign the IntPtr a number that will then be the IntPtr for the other wave functions?

    IntPtr is the name of a type in .NET.  The item you are referring to is the hwo argument of the waveOutSetVolume method. You should not use a variable name of IntPtr.  If you know the device ID you can set that parameter to that value - you were using 0 in your previous code.  But if you do that then the method will set the volume for all instances (channels) of that device.  If you set that value to a handle instead, then it refers to the device instance for that handle.  That handle is returned in the first argument of the waveOutOpen method (provided that WAVE_FORMAT_QUERY is not specified).  It might be available in other ways, but typically would be obtained from that argument. You can't calculate it.
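
    As a sketch of that sequence (the variable names are hypothetical and the P/Invoke declarations are abbreviated), the handle filled in by waveOutOpen is what scopes waveOutSetVolume to a single instance:

        ' Sketch: open one device instance, then set volume on that instance.
        Dim hWaveOut As IntPtr    ' filled in by waveOutOpen on success
        If waveOutOpen(hWaveOut, 0, wFormat, IntPtr.Zero, IntPtr.Zero, CALLBACK_NULL) = 0 Then
            ' Passing the handle (not the device ID 0) affects only this instance.
            waveOutSetVolume(hWaveOut, &HFFFFFFFFUI) ' both channels to maximum
        End If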

    Saturday, February 24, 2018 7:29 AM
  • I understand, I think.  It will play one buffer, but it hangs up on the second play.


    I am not sure that the result is different yet.  I am adding the returned IntPtr to my list AFTER the callback, but it is so shaky on playback that I am not sure it is able to split the wave files' outputs still. 







    • Edited by -tE Saturday, February 24, 2018 7:43 PM
    Saturday, February 24, 2018 6:13 PM
  • I understand, I think.  It will play one buffer, but it hangs up on the SetVolume on the second play.

    This does require the use of the callback, eh?  No other way to get the handle passed?



    It seems to me that the part you don't quite understand is how the data of a sample actually relates to the volume and pitch of the sound you hear.  It would appear that you are trying to use the waveOutSetVolume method to change the amplitude of the sound wave being played, and that's not how it works.  As best as I can tell, this is why you're not quite getting what Acamar is telling you (which is all correct).

    The sound you are playing, whatever it is, is a complex wave form, but for simplicity's sake let us assume you are playing a solid tone and so the wave form is a simple sine wave.  Envision this sine wave on a graph with the wave moving along the X axis such that the peaks and valleys of the wave are measured on the Y axis.  The X axis is time, the Y axis is amplitude or volume.  The number of increments along the Y axis between maximum and minimum amplitude is determined by the bit depth of the sample.  The bytes that make up a time-sample of sound are the series of Y axis values plotted along the X axis over time.  In a 16 bit sample, every two bytes of data represent one Y axis plot point at the given X axis time point.  The increments of the X axis are determined by the sample rate; at 44.1K each X axis increment is 1/44100 seconds long.  When you move to 24 bit encoding, every three bytes become a Y axis point value, which means you can make a taller sine wave.

    This is why playing a sound encoded at 16 bits would sound quiet if played back at 24 bits - the wave form of the sound would be "shrunk" with the peaks and valleys occurring at less than the maximum values.  This is where you would have to scale the values.  You would use BitConverter to change the byte array into an array of Short values, then get the percentage of each value (value/Short.MaxValue, or value/Short.MinValue if negative) and multiply that by the appropriate Integer value (min/max for 24 bits).  Then use BitConverter to convert the Integer array into bytes, dropping the most significant byte of each Integer conversion, to get the scaled 24-bit data array. 
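
    The byte-level conversion described above can be sketched as follows; a left shift by 8 bits is the usual shortcut for that value/Short.MaxValue scaling into the 24-bit range (the function name is hypothetical, and little-endian interleaved data is assumed):

        ' Sketch: widen 16-bit PCM bytes into 24-bit PCM bytes.
        Private Function Widen16To24(ByVal data16 As Byte()) As Byte()
            Dim out24((data16.Length \ 2) * 3 - 1) As Byte
            For i As Integer = 0 To data16.Length - 2 Step 2
                Dim s As Short = BitConverter.ToInt16(data16, i)
                ' Scale into the 24-bit range, then keep the low three bytes,
                ' dropping the most significant byte of the Integer.
                Dim b As Byte() = BitConverter.GetBytes(CInt(s) << 8)
                Dim o As Integer = (i \ 2) * 3
                out24(o) = b(0)
                out24(o + 1) = b(1)
                out24(o + 2) = b(2)
            Next
            Return out24
        End Function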

    When you call the method to set the volume, this just changes the playback volume within the amplifier range of the playback device.  It does not in any way affect the amplitude of the wave form being played.

    So calling the method to set the volume is essentially like turning a volume knob - whatever plays on that device next will still have the same volume level.

    What I'm still not clear on is whether you have actual data values that need to be scaled or if you just need to control the volume level on different simultaneous instances of your audio playback device.  If you are trying to handle different instances of a device then you'll definitely need to query the system and have it tell you the handle it assigned to a given device instance.

    It seems like this whole thing might be a lot easier with the old DirectSound wrapper from DX7 or SlimDX's DirectSound wrapper.  It seems like you may just want multiple instances of SecondaryBuffer to play with their own volume levels.


    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

    Saturday, February 24, 2018 7:47 PM
    Moderator
  • That makes complete sense.  But why do I have to do anything when reading the file into a byte array and telling the structure how to interpret it?  My best guess is that I have to expect the file is actually encoded with more data, like 32 bit.  I don't see how the file's data can't be simply read into a byte array unless that is the case and the samples need to be adjusted.  At this point I am not receiving audio input to write a file - simply playing back the file, which is read into an array.  I am not altering any data here yet.
    • Edited by -tE Saturday, February 24, 2018 11:47 PM
    Saturday, February 24, 2018 11:47 PM
  • But why do I have to do anything when reading the file into a byte array and telling the structure how to interpret it? 
    You don't.  If you have an array of 24-bit values and if you specify the format for the audio source as 24-bit, then the data will be accessed as 24-bits per sample.  That's not your problem.  Your problem is that you can control the volume of the device but not the amplitude of the audio source.  Please look at the responses you already have.  If you pass a device id to the set volume method (eg, 0 in the code you originally posted) then you will set the sound level of the device.  To set the sound level of the device instance (the channel) you must use the handle of that instance. You need to get that handle from the open method. You haven't shown any code where you do that.
    Saturday, February 24, 2018 11:52 PM
  • I am getting that from the open method now.  It is then going in my IntPtr list and is passed to the other waveOut functions.  Starts the player correctly.

    But as I said, the original post has a lot to do with this problem.  Assigning the speakers came up as we went.  I CAN'T play back the 24 bit samples.  They are distorted and too loud.  I can play them back in my DAW and in the media player.

    Sunday, February 25, 2018 12:01 AM
  • I am getting that from the open method now.  It is then going in my IntPtr list and is passed to the other waveOut functions.  Starts the player correctly.

    But does it work to set the volume?

    If you are still unable to set the volume using the handle then there is an error in that code.  You need to show the code that you are using, as well as the actual values being returned.  

    If it now works then additional questions should be asked in a new thread.

    Sunday, February 25, 2018 12:10 AM
  • It isn't working.  But I don't want to bug you guys on this thread.  I get one buffer to play and some other errors that stop it from cycling.  I seriously don't think I can post all the code.  At its simplest I needed an explanation of the bit depth distorting.  When my callback comes back from the open method, I use the returned hwo and place it in the list.  Same list I was using, only now it comes from the callback.  That works for the first play.  It is hanging in another spot that I am working on now.  But the samples are short, so I have trouble toggling the speakers during this since I can't get the buffers to replay.  I have a setting for the full file that I am trying to use to set the speakers, but it is hanging.
    Sunday, February 25, 2018 12:29 AM
  • When my callback comes back with the open method,  I use the return hwo and place it in the list.

    So you are using the data from the callback, not the open.  What type of callback?  What is the code for handling the callback?

    Sunday, February 25, 2018 12:39 AM
  • Huh.  What do you suggest I use if not the callback?

        
    ' S refers to the index of my soundcard list
    ' out_ptrCallback holds the delegate for WaveOut_CallBack
    
       waveOutOpen(device, device, wFormat, out_ptrCallback, S, CALLBACK_FUNCTION)
    
    
    
       Public Function WaveOut_CallBack(ByVal hwo As Integer, ByVal uMsg As UInteger, ByVal dwInstance As Integer, ByVal dwParam1 As Integer, ByVal dwParam2 As Integer) As Integer
    
    
            Dim S As Integer = dwInstance
    
            Select Case uMsg
    
                Case MM_WOM.MM_WOM_OPEN
    
                    MyList.Insert(S, hwo)
    
            End Select
    
            Return 0
    
       End Function
    
    


    • Edited by -tE Sunday, February 25, 2018 4:40 AM
    Sunday, February 25, 2018 3:24 AM
  •   At its simplest I needed an explanation of the bit depth distorting. 

    According to the documentation, WAVEFORMATEX is not the correct structure for 24-bit PCM data; it only works for 8- and 16-bit data.  You need to use WAVEFORMATEXTENSIBLE to pass 24-bit PCM data values.

    It's also worth noting that you appear to be starting with a stereo (2-channel) audio file, which means the byte array contains 3 bytes for the first Y-axis point on the left channel, then 3 bytes for the first Y-axis point on the right channel.  Every 3 bytes is a data point for one channel, so every 6 bytes is one increment on the X-axis (3 for left and 3 for right at each point in time).
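    That framing can be sketched as follows (Python, illustrative only): every 6 bytes of the interleaved stream is one point in time, 3 bytes per channel:

    ```python
    def sample24_to_int(b3: bytes) -> int:
        """Sign-extend a 3-byte little-endian PCM sample to a signed int."""
        v = int.from_bytes(b3, "little")
        return v - (1 << 24) if v & 0x800000 else v

    def deinterleave_24bit_stereo(data: bytes):
        """Split interleaved 24-bit stereo PCM into (left, right) sample lists."""
        left, right = [], []
        for i in range(0, len(data) - 5, 6):                  # 6 bytes per time point
            left.append(sample24_to_int(data[i:i + 3]))       # first 3 bytes: left
            right.append(sample24_to_int(data[i + 3:i + 6]))  # next 3 bytes: right
        return left, right
    ```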

    Again, it appears that this would all be much easier to implement using a managed DirectSound wrapper instead of Winmm.dll.

    https://msdn.microsoft.com/en-us/library/windows/desktop/bb318770(v=vs.85).aspx

    https://slimdx.org/docs/#N_SlimDX_DirectSound


    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

    • Proposed as answer by IronRazerz Sunday, February 25, 2018 3:29 PM
    • Marked as answer by -tE Sunday, February 25, 2018 9:32 PM
    Sunday, February 25, 2018 2:57 PM
    Moderator
  • Reed,

         The Extensible will work?  I would need the constants for the structure, such as those for PCM and maybe MP3.  There are also those for the speakers when in combination.  The single channels are listed in hex.  Can you share those?  Or would they be in an SDK?  I had help with the TIME structure, and the extensible has a UNION also.  In VB there is a need for an OFFSET, I assume.  Would it be four as well?  And do I only need that for the UNION components?  The other 3 components would seem to be at the same location of 0.

     



    • Edited by -tE Sunday, February 25, 2018 3:46 PM
    Sunday, February 25, 2018 3:23 PM
  • You'll need to pull any constant values from the C header file(s) or find them in documentation; I don't have any links to share.  As I've said a number of times, I would not try to do all of this with raw Win32 API calls.  I would use a wrapper over DirectSound to do this and let DirectX take care of these low-level implementation details.

    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

    Sunday, February 25, 2018 3:58 PM
    Moderator
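  • Regarding the union and offsets asked about above: the packed layout of WAVEFORMATEXTENSIBLE can be checked mechanically.  A sketch in Python's ctypes (illustrative; the structure and constants come from the Windows headers), mirroring the byte-packed C declaration — the union sits at offset 18, dwChannelMask at 20, and the SubFormat GUID at 24:

    ```python
    import ctypes

    WAVE_FORMAT_EXTENSIBLE = 0xFFFE  # wFormatTag value for the extensible header
    SPEAKER_FRONT_LEFT = 0x1         # channel-mask bits (combined with Or)
    SPEAKER_FRONT_RIGHT = 0x2

    class WAVEFORMATEX(ctypes.Structure):
        _pack_ = 1  # the multimedia headers pack these on byte boundaries
        _fields_ = [("wFormatTag",      ctypes.c_uint16),
                    ("nChannels",       ctypes.c_uint16),
                    ("nSamplesPerSec",  ctypes.c_uint32),
                    ("nAvgBytesPerSec", ctypes.c_uint32),
                    ("nBlockAlign",     ctypes.c_uint16),
                    ("wBitsPerSample",  ctypes.c_uint16),
                    ("cbSize",          ctypes.c_uint16)]

    class SamplesUnion(ctypes.Union):
        _pack_ = 1  # all three members overlap at the same offset
        _fields_ = [("wValidBitsPerSample", ctypes.c_uint16),
                    ("wSamplesPerBlock",    ctypes.c_uint16),
                    ("wReserved",           ctypes.c_uint16)]

    class WAVEFORMATEXTENSIBLE(ctypes.Structure):
        _pack_ = 1
        _fields_ = [("Format",        WAVEFORMATEX),         # offset 0, 18 bytes
                    ("Samples",       SamplesUnion),         # offset 18 (the union)
                    ("dwChannelMask", ctypes.c_uint32),      # offset 20
                    ("SubFormat",     ctypes.c_ubyte * 16)]  # offset 24, a GUID
    ```

    So the three union members share one offset (18) rather than each needing its own, and the whole packed structure is 40 bytes.  The equivalent VB declaration would use the same explicit offsets.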