It is the common wisdom these days that future processors will have increasingly many cores but will not run at significantly higher clock rates. The constant drive for increased performance will come from letting the processor do more at one time, rather than from making it run faster. Along the same lines is the increased use of hyperthreading, in which a single processor core has multiple threads of control, permitting one thread to absorb the memory latency of another. I see two problems with this line of development.
The first is that the multiple cores have to share a lot of ancillary hardware. That is fine for a few cores, but as you add more, the level of contention increases. When they all share a memory bus, you don’t need very many cores before contention becomes unbearable. The natural step is to partition memory in some way. Eventually you have to partition everything. Then you have several different computers on a chip: they will have very fast interconnects, but the interconnects will require programming; they won’t be shared memory. I don’t see how to avoid this endpoint. I’m sure a lot of people are thinking about it, but I don’t know what they’ve concluded.
That endpoint leads into the second problem, which is that it’s difficult to program these machines. It’s not like programming a traditional single computer. It’s like programming a set of computers with a fast communication channel. Programming a set of computers is fundamentally different from programming a single computer. But if this future is realized, the only way to continue to get performance gains is to program a set of computers.
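To make that difference concrete, here is a minimal sketch in Go (a language chosen purely for illustration, not anything implied by the argument above). The coordinator and the worker share no memory at all; data moves between them only by being explicitly sent over channels, which stand in for the fast interconnect:

```go
package main

import "fmt"

// worker receives chunks of data over its input channel, computes on its
// own private copy, and sends each result back over its output channel.
// The channels stand in for the fast on-chip interconnect: nothing is
// shared, everything is explicitly communicated.
func worker(in <-chan []int, out chan<- int) {
	for chunk := range in {
		sum := 0
		for _, v := range chunk {
			sum += v
		}
		out <- sum
	}
	close(out)
}

func main() {
	in := make(chan []int)
	out := make(chan int)
	go worker(in, out)

	data := []int{1, 2, 3, 4, 5, 6, 7, 8}
	go func() {
		for _, chunk := range [][]int{data[:4], data[4:]} {
			// Copy before sending, so the worker operates on its own
			// private data rather than on memory shared with main.
			c := make([]int, len(chunk))
			copy(c, chunk)
			in <- c
		}
		close(in)
	}()

	// Collect the partial results as they arrive over the channel.
	total := 0
	for partial := range out {
		total += partial
	}
	fmt.Println("total:", total) // total: 36
}
```

Even in this tiny example, the programmer has to decide how to partition the data, how to ship it, and how to gather the results back together; none of that work exists in the shared-memory version of the same loop.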
In other words, we’re coming to the parallel programming model, only we’re doing it by the back door. We’ll be doing parallel programming on a single computer.
Researchers have been looking at parallel programming for a long time. It was what I studied in my one year in graduate school, back in 1986. In all that time, I think they’ve come up with one really significant result: people find parallel programming difficult.
Although our brains are inherently parallel, our thought processes are single-threaded. We find it difficult to think about several things at once. Cordwainer Smith wrote a nice short story about a person who is able to think on three different levels at once, and one of its lessons is that doing so is very difficult indeed. The challenge of parallel programming is to figure out how to write the program without having to think about several things at once.
There have been various attempts to let people write programs sequentially while automatically extracting the available parallelism. These attempts have generally not succeeded, because the available parallelism is too small to make a significant difference in performance; the sketch below suggests one reason why.
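Here is a hypothetical illustration, again in Go and again purely as a sketch: many ordinary loops carry a dependence from one iteration to the next, so their iterations cannot legally run at the same time, no matter how clever the compiler.

```go
package main

import "fmt"

func main() {
	a := make([]float64, 8)
	for i := range a {
		a[i] = float64(i)
	}

	// Independent iterations: each a[i] is read and written in isolation,
	// so a parallelizing compiler could safely split this loop across cores.
	for i := range a {
		a[i] *= 2
	}

	// Loop-carried dependence: each iteration reads the result of the
	// previous one, so the iterations must run in order. Dependences like
	// this are common in sequential code, which keeps the automatically
	// extractable parallelism small.
	for i := 1; i < len(a); i++ {
		a[i] += a[i-1]
	}

	fmt.Println(a)
}
```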
It will be interesting to see what happens when computer processors hit the limits of people’s programming ability in this way. Perhaps we will work out new ways to write code.