(Assuming you don't need a sparse array)

Just allocate a 1-dimensional array and use n-dimensional indexing to access the elements of the n-dimensional array.

Imagine how you convert a two-dimensional index (x,y) into a linear address. We can number all the elements such that each element has the linear address (y * upper_bound_of_x + x).

upper_bound_of_x is the maximum index that x will ever have, plus one.
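
To make that concrete, here is a small sketch of the 2-D case. The helper name `linear2d` is mine, not anything standard; it just evaluates the formula above:

```cpp
// Row-major linear address for a 2-D index (y * ub_x + x).
// ub_x is the upper bound of x: the maximum x index plus one.
int linear2d(int x, int y, int ub_x) {
    return y * ub_x + x;
}
```

For example, in an array 4 elements wide, the element at (x=1, y=2) lives at linear address 2 * 4 + 1 = 9.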

*Now, for brevity, allow me to write upper_bound_of_x as ub_x.*

In three dimensions we can convert (x,y,z) into (z * ub_y * ub_x + y * ub_x + x). Notice that the upper bound of z is not used: in the major dimension the upper bound is irrelevant, and in the minor dimension the increment is always 1 address per element.
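
The 3-D formula, sketched the same way (again, the helper name is mine). Factoring it as (z * ub_y + y) * ub_x + x shows why ub_z never appears:

```cpp
// 3-D row-major address: z * ub_y * ub_x + y * ub_x + x,
// written in factored (Horner) form. Note ub_z is never needed.
int linear3d(int x, int y, int z, int ub_x, int ub_y) {
    return (z * ub_y + y) * ub_x + x;
}
```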

In general, the address can be computed like this:

    address = 0;
    for( int i = 0; i < dimensions; ++i ) {
        address *= upperBound(i);
        address += index(i);
    }
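
Here is that loop as a self-contained, compilable function (the name `flatten` and the use of vectors are my choices, not part of any standard API). Dimension 0 is the major dimension; the last dimension is the minor one:

```cpp
#include <cstddef>
#include <vector>

// Flatten an n-dimensional index into a linear address using the
// Horner-style loop above. index[i] is the coordinate in dimension i;
// upperBound[i] is that dimension's extent (max index + 1).
std::size_t flatten(const std::vector<std::size_t>& index,
                    const std::vector<std::size_t>& upperBound) {
    std::size_t address = 0;
    for (std::size_t i = 0; i < index.size(); ++i) {
        address *= upperBound[i];
        address += index[i];
    }
    return address;
}
```

With dimensions ordered (z, y, x), `flatten({3, 2, 1}, {6, 5, 4})` gives the same result as the 3-D formula: 3 * 5 * 4 + 2 * 4 + 1 = 69.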

There are other ways of numbering the elements too. You can "swizzle" the addresses of the elements of the array to increase locality, or use a space-filling curve such as a Hilbert curve to derive a linear address from any set of coordinate indices. These "swizzled" addresses are quite efficient for GPUs, particularly in 3 dimensions. Obviously you would expect everything, including locality, to break down entirely in 50 dimensions. Most importantly, you would expect to run out of memory with 50 dimensions. Even if the upper bound in every dimension is only 2, and each element takes only a single byte, 2^50 bytes is still a million gigabytes. :/
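
As an illustration of one simple swizzle (not the Hilbert curve, which is more involved), here is a sketch of 2-D Morton / Z-order addressing, where the bits of x and y are interleaved so that nearby coordinates tend to get nearby addresses:

```cpp
#include <cstdint>

// Morton (Z-order) address for a 2-D index: interleave the bits of
// x and y. Bit i of x lands at position 2i, bit i of y at 2i + 1.
uint32_t morton2d(uint16_t x, uint16_t y) {
    uint32_t address = 0;
    for (int bit = 0; bit < 16; ++bit) {
        address |= (static_cast<uint32_t>(x >> bit) & 1u) << (2 * bit);
        address |= (static_cast<uint32_t>(y >> bit) & 1u) << (2 * bit + 1);
    }
    return address;
}
```

For example, (x=3, y=3) interleaves to binary 1111, i.e. address 15, so the 2x2 block at the origin occupies addresses 0 through 3 contiguously.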