Arrays, Arrays Not Aries

  • General discussion

  • Hiya all,

 I'd like to get some opinions about these arrays:

    Byte [] chunk = new Byte[65536];
    
    Byte [,] chunk2D = new Byte[256,256];

 I would say they are equal in allocation usage. But perhaps the latter would produce faster or slower code?

    What do you all think?


    • Edited by User3DX Tuesday, April 8, 2014 3:52 AM MisCalc
    Tuesday, April 8, 2014 12:59 AM
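    For readers following along, a minimal self-contained sketch of the two declarations above (this is illustrative only, not the poster's project code). Both allocate 65,536 bytes of element data; the rectangular array just adds per-dimension length bookkeeping:

    ```csharp
    using System;

    class ArrayAllocation
    {
        static void Main()
        {
            // A flat 1D array and a rectangular 2D array of the same total size.
            byte[] chunk = new byte[65536];
            byte[,] chunk2D = new byte[256, 256];

            Console.WriteLine(chunk.Length);         // 65536
            Console.WriteLine(chunk2D.Length);       // 65536 (total element count)
            Console.WriteLine(chunk2D.GetLength(0)); // 256 (rows)
            Console.WriteLine(chunk2D.GetLength(1)); // 256 (columns)
        }
    }
    ```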

All replies


  • Byte [] chunk = new Byte[1024];
    
    Byte [,] chunk2D = new Byte[256,256];

     I would say they are equal in allocation usage.

    Is this part of a test you're taking? Or an assignment? It looks like it could be.

    Do you think that 256 × 256 = 1024?

    - Wayne

    Tuesday, April 8, 2014 1:55 AM
  • Uhm... you don't get my normal friendly "Hiya". My posting is legit: my project from the previous thread ran into an issue, described as a stall or freeze. And to answer Wayne: no, no, and I copy/pasted incorrect code but corrected it later. I think it's fair to ask the gurus out there. Perhaps they know what code works best, for the coder and the compiler. But thanks for the replies anyway.

    Tuesday, April 8, 2014 10:56 AM
  • Two-dimensional arrays are always slower to access. How much slower really depends on what you do with them. Let's say that each additional array dimension adds some overhead, something like 5 instructions per array access. If you do very little work per array element then this overhead may be significant. Though for such small arrays it's unlikely to really matter.

    Tuesday, April 8, 2014 12:26 PM
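    A rough illustration of where that per-access overhead comes from (names hypothetical, and the instruction-count estimate above is the poster's, not measured here). A rectangular `[,]` access makes the runtime compute the flattened index and bounds-check each dimension; with a flat array you do the row-major index math yourself, and in an inner loop you can hoist the multiply:

    ```csharp
    using System;

    class IndexingComparison
    {
        static void Main()
        {
            const int N = 256;
            byte[,] rect = new byte[N, N];
            byte[] flat = new byte[N * N];

            // Rectangular access: runtime computes (row * N + col)
            // and bounds-checks both indices on every access.
            rect[10, 20] = 1;

            // Flat access with manual row-major index math:
            // a single bounds check per access.
            flat[10 * N + 20] = 1;

            Console.WriteLine(rect[10, 20] == flat[10 * N + 20]); // True
        }
    }
    ```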
  • If you do very little work per array element then this overhead may be significant. Though for such small arrays it's unlikely to really matter.


    Elaboration of this may be enlightening. Specifically, why would "very little work" make
    the "overhead ... significant"?

    - Wayne

    Tuesday, April 8, 2014 6:11 PM
  • Click here to find which is faster

    Paul Linton


    Very good article, Paul, and a very appropriate answer to the OP's question.
    Tuesday, April 8, 2014 6:28 PM
  • Well, because you don't have the whole picture of my project, I will give you one "you are excused" for the flaming and crude humor. But it's okay...

    Anyway, my project works with files of great size, eventually terabytes. The starter post was an example of how my project code will read these files, in chunks. The chunk size is currently 8 KB, but my UI thread is stalling: handling the data after reading it from the file is taking a lot of time.

    So, as suggested, I implemented worker threads. All is working, but I want to increase the amount of data handled by each worker thread. Now you see why I am asking.

    Thanks.

    Wednesday, April 9, 2014 12:54 PM
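    A minimal sketch of the chunked-read-plus-worker pattern described above, assuming the thread pool via `Task.Run` (the file, chunk size, and processing step are placeholders, not the poster's actual project). The demo writes a 1 MB temp file so it is self-contained:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;

    class ChunkedReader
    {
        const int ChunkSize = 64 * 1024; // larger chunks than the 8 KB above

        static long processed; // total bytes handled by workers

        static void Main()
        {
            // Self-contained demo: create a 1 MB temp file to read back.
            string path = Path.GetTempFileName();
            File.WriteAllBytes(path, new byte[1024 * 1024]);

            var workers = new List<Task>();
            using (var stream = File.OpenRead(path))
            {
                byte[] chunk = new byte[ChunkSize];
                int read;
                while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
                {
                    // Copy so each worker owns its data while 'chunk' is reused;
                    // the reading thread (the UI thread here) never blocks on processing.
                    byte[] copy = new byte[read];
                    Array.Copy(chunk, copy, read);
                    workers.Add(Task.Run(() => Interlocked.Add(ref processed, copy.Length)));
                }
            }
            Task.WaitAll(workers.ToArray());
            File.Delete(path);
            Console.WriteLine(processed); // 1048576
        }
    }
    ```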
  • but my UI thread is stalling: handling the data after reading it from the file is taking a lot of time.


    So, as suggested, I implemented worker threads. All is working, but I want to increase the amount of data handled by each worker thread. Now you see why I am asking.


    Application Performance - AQtime Pro
    http://automatedqa.com/products/aqtime/

    Profiling of C++-Applications in Visual Studio for Free
    http://www.codeproject.com/Articles/144643/Profiling-of-C-Applications-in-Visual-Studio-for-F

    - Wayne

    Wednesday, April 9, 2014 1:34 PM