How do you work with numerical data in arrays whose dimensions are unknown at compile time?

  • Question

  • Hello All.

    I am working on scientific software and I want to create arrays whose dimensions vary over the course of an algorithm's execution.  I have read that dynamic memory allocation is not well suited to GPUs.  That said, if my array could have 50 dimensions or 5, and that number is generated dynamically by the program's computations, it doesn't seem to make sense to declare a 50-dimensional array up front at compile time when I may only need 5.  There has to be a balance between simply creating some 'maximum' array dimension and using a malloc-style call, which is not streamlined for GPUs.  I look forward to any advice or pointers to relevant reading on this kind of memory-management question.  Thanks!



    Friday, September 19, 2014 9:40 PM

All replies

  • (Assuming you don't need a sparse array)

    Just allocate a 1-dimensional array and use n-dimensional indexing to access the elements of the n-dimensional array.

    Imagine how you convert a two-dimensional index (x,y) into a linear address.  We can number all the elements such that each element has the linear address (y * upper_bound_of_x + x).

    upper_bound_of_x is the maximum index that x will ever have, plus one.

    Now, for brevity, allow me to write upper_bound_of_x as ub_x.

    In three dimensions we can convert (x,y,z) into: (z * ub_y * ub_x + y * ub_x + x).  Notice that the upper_bound of z is not used.  In the major dimension, the upper bound is not relevant.  And in the minor dimension, the increment is always 1 address per element.

    In general the index can be computed like this:

    address = 0;
    for( int i = 0; i < dimensions; ++i ) {
       address *= upperBound(i);
       address += index(i);
    }
    There are other ways of numbering the elements, too.  You can "swizzle" the addresses of the elements of the array to increase locality, or use a space-filling curve such as a Hilbert curve to derive a linear address from any set of coordinate indices.  These "swizzled" addresses are quite efficient on GPUs, particularly in 3 dimensions.  Obviously you would expect everything, including locality, to break down entirely in 50 dimensions.  Most importantly, you would expect to run out of memory with 50 dimensions: even if the upper bound in every dimension is only 2, and each element occupies only a single byte, 2^50 bytes is still a million gigabytes.  :/
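    One common swizzle (not named in the reply, but a standard choice) is the Morton or Z-order code, which interleaves the bits of the coordinates so that nearby (x, y) pairs land at nearby linear addresses.  A minimal 2-D sketch, assuming coordinates fit in 16 bits:

    ```c
    #include <stdint.h>

    /* Spread the low 16 bits of v into the even bit positions of a 32-bit
       word, using the classic binary-magic-number masks. */
    static uint32_t spread_bits(uint32_t v)
    {
        v &= 0x0000FFFF;
        v = (v | (v << 8)) & 0x00FF00FF;
        v = (v | (v << 4)) & 0x0F0F0F0F;
        v = (v | (v << 2)) & 0x33333333;
        v = (v | (v << 1)) & 0x55555555;
        return v;
    }

    /* 2-D Morton (Z-order) address: x occupies even bits, y odd bits. */
    uint32_t morton2d(uint32_t x, uint32_t y)
    {
        return spread_bits(x) | (spread_bits(y) << 1);
    }
    ```

    For example, morton2d(2, 3) interleaves x = 10b and y = 11b into 1110b = 14.  The same idea extends to 3 dimensions with a three-way interleave, which is the case the reply calls out as working well on GPUs.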

    Wednesday, November 12, 2014 1:53 AM