There have been a lot of stories lately about the end of Moore's law. The law is about the number of transistors on a chip and the technological limits of the fabrication process. There are reasons those limits won't keep being pushed the way they have been in the past, and we can lay them at the feet of John von Neumann and his computer architecture.
The von Neumann computer architecture is fairly straightforward: a CPU fetches a list of instructions from memory and acts upon those instructions. The chip makers have pushed this architecture to its limits over the past 40 years, using the exponentially increasing transistor counts to do all sorts of fancy footwork to make the CPU faster. But there are only so many tricks that can be played within the rules of the architecture, and most of them have already been exploited.
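To make that concrete, here's a minimal sketch of a toy von Neumann machine in C: instructions and data share the same memory, and the CPU does nothing but fetch the next instruction and act on it. The opcodes, the memory layout, and the little program it runs are all invented for illustration, not any real instruction set.

    #include <stdio.h>

    /* Toy von Neumann machine: program and data live in one memory,
     * and the CPU endlessly fetches an instruction and executes it.
     * Opcodes and program are invented purely for illustration. */
    enum { HALT, LOAD, ADD, STORE, PRINT };

    int main(void)
    {
        int mem[32] = {
            LOAD, 20,      /* acc = mem[20]        */
            ADD, 21,       /* acc += mem[21]       */
            STORE, 22,     /* mem[22] = acc        */
            PRINT, 22,     /* print mem[22]        */
            HALT, 0,
            [20] = 2, [21] = 3
        };
        int pc = 0, acc = 0;

        for (;;) {
            int op  = mem[pc++];          /* fetch            */
            int arg = mem[pc++];          /* fetch operand     */
            switch (op) {                 /* decode + execute  */
            case LOAD:  acc = mem[arg];            break;
            case ADD:   acc += mem[arg];           break;
            case STORE: mem[arg] = acc;            break;
            case PRINT: printf("%d\n", mem[arg]);  break;
            case HALT:  return 0;
            }
        }
    }

The whole architecture is that loop; everything else the chip makers have added over the years is machinery to make it run faster.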
The single instruction, single data (SISD) architecture is reaching its zenith. The writing is on the wall: dual-core chips, hyperthreading, and other changes are forcing programmers into a new programming model based on parallel or multiprocessing. It will take some time to shift the software development world to embrace this new bedrock of computing. There are massive market opportunities for those who do it first, and for those who do it right.
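For a rough sense of what that shift looks like in code, here's a sketch (using POSIX threads) of the kind of loop a programmer would once have left serial, split across a handful of worker threads instead. The thread count, array size, and names are arbitrary choices for the example.

    #include <pthread.h>
    #include <stdio.h>

    /* Sum an array by dividing it among worker threads instead of
     * relying on a single ever-faster core.  N and NTHREADS are
     * arbitrary values chosen for the example. */
    #define N        1000000
    #define NTHREADS 4

    static double data[N];

    struct chunk { int lo, hi; double sum; };

    static void *partial_sum(void *arg)
    {
        struct chunk *c = arg;
        for (int i = c->lo; i < c->hi; i++)
            c->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t    tid[NTHREADS];
        struct chunk job[NTHREADS];

        for (int t = 0; t < NTHREADS; t++) {
            job[t].lo  = t * (N / NTHREADS);
            job[t].hi  = (t + 1) * (N / NTHREADS);
            job[t].sum = 0.0;
            pthread_create(&tid[t], NULL, partial_sum, &job[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);   /* wait, then combine */
            total += job[t].sum;
        }
        printf("total = %f\n", total);
        return 0;
    }

The mechanics aren't hard; the hard part is that programmers have to start thinking about which work can safely happen at the same time.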
There are other, even more exotic architectures, including the bitgrid design I write about elsewhere. These require even larger paradigm shifts from programmers, but in exchange they offer much more processing power from the massive transistor counts that will be possible in the future.
It's going to be a fun ride. I look forward to it.