There is not just Moore’s Law and Murphy’s Law, both well known to IT and non-IT folks, but also Amdahl’s Law, which describes the maximum expected improvement to a system when only parts of the system are improved. It is commonly applied in parallel computing, where it gives the theoretical speedup of a program running on multiple CPUs in parallel. The maximum speedup is limited by the sequential parts of the code. A boundary condition is the assumption that the problem size stays the same when running in parallel. Here comes the formula (with N = the number of processors and P = the fraction of the code that can be parallelized; hence, (1 - P) is the serial part):
Speedup = 1 / ((1 - P) + P/N)
You can play around with this formula. In essence, the sequential part of the code must be minimized to gain speedup. And even a high number of processors has little impact when the fraction of parallel code is low. Anyhow, it’s just a formula. In practice, the speedup also depends on other conditions, such as the communication (mechanisms) between the processors and the cache [because cache is king! ;-)].
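If you want to play with the numbers yourself, here is a minimal Python sketch of the formula (the function name is my own choice, not anything standard):

```python
def amdahl_speedup(p, n):
    """Theoretical speedup per Amdahl's Law.

    p: fraction of the code that can be parallelized (0..1)
    n: number of processors
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even with many processors, the serial 10% caps the speedup at 10x:
for n in (2, 8, 64, 1024):
    print(f"P=0.90, N={n:5d}: speedup = {amdahl_speedup(0.90, n):.2f}")
```

With P = 0.90 the speedup climbs from about 1.8 on two processors to only about 9.9 on 1024 processors, because the serial 10% of the code bounds the speedup at 10x no matter how many CPUs you throw at it.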