There's an article in Chip Design magazine by Peter Claydon, titled Implementing Multi-Core: The Devil is in the Detail. It makes the standard argument that lots of slower processors can get more computing done, and with less power, than one big high-speed CPU.
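For what it's worth, the arithmetic behind that claim usually runs something like the sketch below. It's a back-of-the-envelope Python model, not a figure from the article; the cubic power-versus-frequency scaling is the usual rule of thumb, since dynamic power goes roughly as V^2 * f and supply voltage scales roughly with frequency.

    # Rule-of-thumb model: dynamic CMOS power scales ~ f^3, because
    # P ~ C * V^2 * f and supply voltage scales roughly with frequency.
    # Throughput is assumed proportional to clock frequency.
    def relative_power(freq_scale: float, n_cores: int) -> float:
        """Power of n_cores cores at freq_scale, relative to one full-speed core."""
        return n_cores * freq_scale ** 3

    # Two cores at half the clock: same aggregate throughput as one
    # full-speed core, but roughly a quarter of the dynamic power.
    print(relative_power(0.5, 2))  # -> 0.25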
Of course, this is true if the computing task is embarrassingly parallel. Graphics, packet switching in a router, high-volume web services, and so on -- sure, no problem. But if the task is not like this, the serial bottleneck will kill the overall performance. Has the history of Thinking Machines been wiped from everyone's memory?
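To put a number on that bottleneck, here's a quick Python sketch of Amdahl's law; the 10% serial fraction is just an illustrative assumption, not data from any particular workload.

    # Amdahl's law: speedup on n cores when a fraction s of the work
    # is inherently serial and cannot be spread across cores.
    def speedup(n_cores: int, serial_fraction: float) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

    # With just 10% of the work serial, even 1024 cores stay under 10x:
    for n in (1, 4, 16, 64, 256, 1024):
        print(f"{n:5d} cores -> {speedup(n, 0.10):5.2f}x")

Even a modest serial fraction caps the speedup at 1/s, no matter how many cores you throw at the problem.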
We've known the problems for four decades. Are people still really falling for this stuff?
DAC 2012: Mystical Confluence: ESL Hockey Stick and The Cup!
Another note from DAC 2012: In Gary Smith's Sunday night pre-DAC talk, he mentioned that in 2011, ESL tools took off – the famous Hockey Stick. See his s...