Definitions for **"Amdahl's Law"**

A rule first formalised by Gene Amdahl in 1967, which states that if F is the fraction of a calculation that is inherently serial and 1-F the fraction that can be parallelised, then the speedup achievable with P processors is 1/(F + (1-F)/P), which has a limiting value of 1/F as the number of processors tends to infinity. Thus, no matter how many processors are employed, if a calculation has a 10% serial component, the maximum speedup obtainable is 10.

A rule of computer-program scalability, first proposed by Gene Amdahl in 1967. It states that if s is the fraction of a calculation that is inherently serial (i.e. cannot be parallelised), and 1-s the fraction that is parallel, then the maximum possible speedup on P processors is speedup = 1 / (s + (1-s)/P), which in the limit as P tends to infinity gives speedup = 1/s.

A mathematical approximation (not really a law) stating the ideal speedup that should result when a program is executed in parallel: S(f) = 1 / ((1 - f) + f/N), where N is the number of CPUs applied and f is the fraction of the program code that can be executed by parallel threads. When f is small (much of the program executes serially), S(f) is near 1.0; in other words, there is no speedup. When f is near 1.0 (most of the program can be executed in parallel), S(f) is near N; in other words, linear speedup.
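The formula in these definitions can be evaluated directly. The sketch below (the function name `amdahl_speedup` and the chosen processor counts are illustrative, not from the source) uses the first definition's notation, with F the serial fraction and P the processor count, and reproduces the worked claim that a 10% serial component caps speedup at 10:

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's Law: speedup = 1 / (F + (1-F)/P),
    where F is the serial fraction and P the processor count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With a 10% serial component (F = 0.1), the speedup
# approaches the limiting value 1/F = 10 as P grows:
for p in (1, 10, 100, 1000, 10**6):
    print(p, round(amdahl_speedup(0.10, p), 2))
# 1 processor gives 1.0 (no speedup); 10**6 processors give ~10.0
```

Note how quickly the returns diminish: going from 100 to a million processors improves the speedup only from about 9.17 to essentially 10.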