Thursday, May 15, 2008

Limitations of today’s Parallelization Approaches

As discussed in detail in earlier posts, physics has stopped the race for ever higher CPU clock speeds. It’s about the heat, period. The hardware (chip) manufacturers’ response is to put more and more cores on a die – what we all call multi-core. I have also tried to list and explain a couple of technologies (and no-gos) for exploiting those architectures; just skim through the blog. Most of these approaches, however, are based on identifying sections of code that can be executed in parallel, loops for instance. Saying “sections of code” implies that serial remnants stay in the source code. And these parts [the percentage of code that tenaciously resists parallelization! :-)] determine the achievable speedup as defined by Amdahl’s Law (see the post on Amdahl's Law).

The dilemma looks a little like the old expectations placed on ever-increasing clock speeds. It will not work to lean back and wait until there are 16 or 32 cores on a single die; that will not fix the problem, because most of the existing parallelization approaches do not scale. Other technologies and strategies have to be developed and applied. And until that happens, it is always a good approach to optimize the source code and to think about effective usage of the cache (memory).
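To make the Amdahl’s Law argument concrete, here is a minimal C sketch that evaluates the speedup formula S(N) = 1 / (s + (1 - s)/N) for a few core counts. The 10% serial fraction is just an illustrative assumption, not a measured value from any real program.

#include <stdio.h>

/* Amdahl's Law: speedup S(N) = 1 / (s + (1 - s) / N),
   where s is the serial fraction and N is the number of cores. */
static double amdahl_speedup(double serial_fraction, int cores)
{
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main(void)
{
    /* Illustrative serial fraction of 10% (an assumption for this example). */
    const double s = 0.10;
    const int core_counts[] = { 2, 4, 8, 16, 32 };
    const int num_counts = sizeof(core_counts) / sizeof(core_counts[0]);

    for (int i = 0; i < num_counts; i++) {
        printf("%2d cores: speedup %.2f\n",
               core_counts[i], amdahl_speedup(s, core_counts[i]));
    }
    /* Even with infinitely many cores the speedup is capped at 1/s = 10x. */
    return 0;
}

With these numbers, 16 cores yield roughly a 6.4x speedup and 32 cores only about 7.8x, while the hard ceiling is 1/s = 10x – which is exactly why waiting for more cores alone does not solve the problem.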
