I am starting to get really ticked off at seeing this ‘law’ proclaimed as the end-all, be-all law of computing.
So this guy says he thinks transistor counts will double every 24 months (it used to be 18, and 12 before that).
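For what the claim amounts to arithmetically, doubling every fixed period is just exponential growth. A minimal sketch (the starting count and time span are illustrative, not from the article):

```python
def projected_count(initial, years, doubling_period_years):
    """Project a quantity that doubles every fixed period (Moore's-Law-style growth)."""
    return initial * 2 ** (years / doubling_period_years)

# With a 24-month (2-year) doubling period, something that starts at
# 10 million doubles three times over 6 years, reaching 80 million.
print(projected_count(10e6, 6, 2))
```

Note how sensitive the projection is to the period: shorten it from 24 to 18 months over the same 6 years and you get four doublings instead of three, which is exactly why the number attached to the ‘law’ keeps getting revised.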
- A law doesn’t change to adapt to real-world conditions. Either it accurately describes a phenomenon, or it doesn’t.
- Why do we call it a law, when it’s really more of a precept, a note, something that comes close to reality? Does that make it a proxy, perhaps?
- Why am I getting the sense, from this article in B2.0, that the author is portraying the scientists as trying to make their development efforts fit within the constraints of the law’s timetable (24 months, in this case)?
I’ve heard enough of this Moore’s Law.
Is it because computing is such a new field of research and development (compared to more traditional sciences like astronomy, engineering, etc.) that we feel the need for axioms and all-encompassing ‘laws’ such as this one?
If someone wants hard facts, they only need to take a look at www.research.ibm.com and www.intel.com/research for plenty of details. Now, can we get back to the business of improving technology without feeling a constant need to refer to this?