Deleted

Answers

  • I think we have already said it before in this forum, but it bears repeating: you can't make your code go faster by blindly making random changes to it. The first thing you need to do is profile it to find out exactly where the bottleneck is.
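
    To make that concrete, here is a minimal sketch of "profile before you optimize". It is written in Python for brevity (in .NET you would reach for a profiler such as the one built into Visual Studio, but the idea is identical), and the `download` / `analyze` / `write_results` functions are hypothetical stand-ins for the phases of your program, not real code from this thread.

    ```python
    import cProfile
    import io
    import pstats
    import time

    def download():          # stand-in for the WebClient phase
        time.sleep(0.05)

    def analyze():           # stand-in for the CPU-bound analysis phase
        return sum(i * i for i in range(200_000))

    def write_results():     # stand-in for the file-writing phase
        time.sleep(0.01)

    profiler = cProfile.Profile()
    profiler.enable()
    download()
    analyze()
    write_results()
    profiler.disable()

    # Print the functions sorted by cumulative time; the top entries
    # tell you which phase is the real bottleneck.
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
    print(out.getvalue())
    ```

    Only after reading a report like this do you know which of the three cases below you are actually in.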

    If most of the time is lost waiting for your WebClients to pump information into your program, then you will not be able to go any faster unless you improve the conditions external to your program, such as getting more bandwidth or connecting to faster servers.
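
    If you are in this network-bound case, one thing that does help (short of more bandwidth) is overlapping the downloads so the waiting happens in parallel. The sketch below simulates that in Python with a thread pool; the URLs are hypothetical and the network latency is faked with `sleep`, but the same pattern applies to issuing several WebClient/HttpClient requests concurrently in .NET.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def fetch(url):
        # Simulated download: a real version would issue an HTTP
        # request; here a sleep stands in for network latency.
        time.sleep(0.1)
        return f"payload from {url}"

    urls = [f"https://example.invalid/file{i}" for i in range(8)]

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(fetch, urls))
    elapsed = time.perf_counter() - start

    # Eight 0.1 s "downloads" overlap, so the batch takes roughly
    # 0.1 s of wall-clock time instead of 0.8 s.
    print(f"{len(results)} downloads in {elapsed:.2f}s")
    ```

    Note that this only hides latency; it cannot push more bytes through the pipe than your bandwidth allows.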

    If most of the time is lost reading or writing your files, you can study the access pattern and cache data into memory and write it or read it all at once (or in big chunks) so that the disk heads don't lose time seeking from one file to another. Or you could use faster disks or more spindles.
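
    A minimal sketch of the "big chunks" idea, again in Python with a throwaway temporary file standing in for your real data: reading and writing a megabyte at a time keeps the disk doing long sequential transfers instead of many small scattered ones.

    ```python
    import os
    import tempfile

    CHUNK = 1024 * 1024  # 1 MiB: read in big chunks, not tiny ones

    # Create a small sample file to copy (stand-in for your real data).
    src = tempfile.NamedTemporaryFile(delete=False)
    src.write(os.urandom(256 * 1024))
    src.close()

    dst_path = src.name + ".copy"
    with open(src.name, "rb") as fin, open(dst_path, "wb") as fout:
        while True:
            chunk = fin.read(CHUNK)   # one large sequential read...
            if not chunk:
                break
            fout.write(chunk)         # ...and one large sequential write

    print(os.path.getsize(dst_path))
    ```

    The equivalent in .NET is using buffered streams and large `Read`/`Write` buffers rather than reading a few bytes at a time.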

    If most of the time is due to the CPU analyzing the content, then you can improve the speed by either optimizing the algorithm or adding more parallelism, up to the point where all the CPUs are busy; you won't improve speed (and in fact you may lose some) if you add any more parallelism beyond that point.
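
    The "cap the parallelism at the number of CPUs" point can be sketched like this. The example is Python, used only to illustrate the capping pattern (for genuinely CPU-bound Python work you would use processes because of the GIL; in .NET, `Parallel.ForEach` or PLINQ spread threads across cores directly), and `analyze` is a hypothetical stand-in for your content analysis.

    ```python
    import os
    from concurrent.futures import ThreadPoolExecutor

    def analyze(n):
        # Stand-in for CPU-heavy content analysis.
        return sum(i * i for i in range(n))

    # Cap the degree of parallelism at the number of CPUs; once every
    # core is busy, extra workers only add scheduling overhead.
    workers = os.cpu_count() or 1

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(analyze, [100_000] * 16))

    print(len(results), "items analyzed with", workers, "workers")
    ```

    Sixteen work items still get processed; they just queue up behind the fixed pool of workers instead of oversubscribing the machine.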

    So, as you can see, the most important step is to first investigate and thoroughly understand what is happening in your program before you make any attempt to change it.

    Sunday, January 27, 2019 12:23 PM
    Moderator

All replies

  • Deleted
    Sunday, January 27, 2019 2:47 PM