8 bit vs 16 bit textures inside kernels

  • Question

  • When working with 8-bit or 16-bit textures inside a C++ AMP kernel, at what accuracy are the calculations performed?

    e.g.

    texture<float_4, 1> tex_8bit(/*...*/);

    parallel_for_each(/*...*/, [&tex_8bit](index<1> idx) restrict(amp)
    {
         float_4 test = tex_8bit[idx] * tex_8bit[idx + 1]; // 4x 8 bit?
    });

    texture<float_4, 1> tex_16bit(/*...*/);

    parallel_for_each(/*...*/, [&tex_16bit](index<1> idx) restrict(amp)
    {
         float_4 test = tex_16bit[idx] * tex_16bit[idx + 1]; // 4x 16 bit?
    });


    • Edited by Dragon89 Tuesday, August 28, 2012 7:36 AM
    Tuesday, August 28, 2012 7:36 AM

Answers

  • texture<float_N, M> only supports 16-bit and 32-bit floats; there is no support for 8-bit floats. The 16-bit floats exist only in the storage of such textures. If you load a float from the texture, what you get is a 32-bit float. If you store a float to such a texture, the texture hardware automatically stores it as a 16-bit float (with a loss of accuracy). So all arithmetic is always performed using 32-bit floats, and in your second example the computation is done with 32-bit accuracy.
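    A minimal sketch of that behaviour (not from the original reply; it assumes the VS2012 concurrency::graphics API, and the names tex16, out16 and the extent size are placeholders): the textures are declared with 16 bits per scalar element, yet every load yields a 32-bit float, the multiply happens on 32-bit values, and only the store rounds the result back to the 16-bit storage format.

    #include <amp.h>
    #include <amp_graphics.h>
    using namespace concurrency;
    using namespace concurrency::graphics;

    void half_precision_storage_demo()
    {
        extent<1> ext(1024);                        // placeholder size
        texture<float, 1> tex16(ext, 16u);          // stored as 16-bit floats
        texture<float, 1> out16(ext, 16u);
        writeonly_texture_view<float, 1> out_view(out16);

        parallel_for_each(ext, [&tex16, out_view](index<1> idx) restrict(amp)
        {
            float v = tex16[idx];                   // load: expanded to a 32-bit float
            out_view.set(idx, v * 2.0f);            // compute in 32-bit; store rounds to 16-bit
        });
    }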

    Thanks,

    Weirong

     
    Tuesday, August 28, 2012 5:13 PM

All replies

  • Is this something specific to C++ AMP or is it the way the GPU hardware works?
    Tuesday, August 28, 2012 7:48 PM