Friday, February 29, 2008

More Concurrency

I’m receiving a lot of questions about the status of parallelization in today’s world of software development. More and more people are interested in this topic. Some are driven by requirements, some are mixing it up with the 64-bit transition. ;-)

Well, despite all the support in Java, .NET, and frameworks like MPI and OpenMP, it is still a mess. The topic is not addressed adequately with respect to the roadmap of the processor industry, especially for client applications. Functional programming languages and transactional memory are no way out, given their lack of acceptance and the current state of these languages and concepts. On the other hand, increasing the clock speed is no longer a safe haven — everybody should be aware of this. And cache is also just a workaround.

Besides the existing support mentioned earlier, today’s languages and compilers make assumptions that might not hold. Basically, processors and compilers see sequential code, but they do not execute it strictly sequentially: there is always some optimization at work, such as reordering, which CPU designers need in order to gain extra performance. Other pitfalls are deadlocks, the different locking mechanisms, and data corruption when locking fails. And I have not even mentioned complexity yet; the KISS principle (keep it …) is definitely not addressed. This is especially true for lock-free programming, which cannot be recommended for mainstream development.

Talking about my experience, OpenMP should be preferred for simple shared-memory scenarios. Developers should consider the positive impact of cache and must learn to use environment variables, pragmas / compiler directives, and the library functions properly. Of course, the code (basically the loops) must be organized accordingly; this is another precondition for using OpenMP (if it is not met, OpenMP makes no sense). More is to come.
