First, my congratulations to Enrique and his co-authors. Getting any book out the door is more work than anybody who hasn't done it can appreciate. Kudos!
I won't comment on the main content: I'm not really qualified, "business value" books just aren't my cup of tea, and N years (for large N) at IBM left me profoundly allergic to SOA. SOA roolz. OK? It's wonderful. But I've only read one discussion of it, about 5 pages total, that didn't make my skin crawl.
However, part 3 of the download (Chapter 1) contained this passage, which is relevant to the issues of this blog:
...in 2004 Intel found that a successor to the Pentium 4 processor, codenamed Prescott would have hit a "thermal wall."... the successor chip would have run too hot and consumed too much power at a time when power consumption was becoming an important factor for customer acceptance.
Big positive here: This event has been published, by Intel.
However, heat and power consumption "becoming an important factor for customer acceptance" is an excessively delicate way of putting it.
As I recall, that proposed chip family was so hot that all major system vendors actually refused it. They stood up to Intel, a rather major thing to do, and rejected the proposed design as impossible to package in practical, shippable products. This was a huge deal inside Intel, causing the cancellation of all main-line processor projects and the elevation to stardom of a previously denigrated low-power design created by an outlier lab in Israel, far from the center of mass of development.
So history gets exposed in a way that placates the shareholders.
The preface to this book also had some comments about the ancient history of virtualization that reminded me that I was there. Fodder for a future post.
(My thanks to a good friend and ex-co-worker who pointed me to this book and provided the title phrase.)
5 comments:
Shouldn't that be, "History is written by the *vectors*?" :) As you may know, I used to work for Floating Point Systems, and we had endless discussions/debates about all of these issues back in the "good old days" (1980 -- 1990 for me).
In the end, high-speed serial processors ended up ruling the world because parallel programming is at least one and probably more like three orders of magnitude more difficult than serial programming. Sure, the *languages* are better now -- you won't get shot on sight if, for example, you write software in Erlang, Haskell, OCaml, or other languages of the functional or "single-assignment" ilk. But in the end, I think you'll see a hardware solution before parallel programming makes any significant advances over the state of the art in 1990.
Hi, Ed. Thanks for your thoughts. Hopefully I'll stimulate more of them.
I agree with your assessment of parallel vs. serial, but a question:
What kind of "hardware solution" are you thinking of? Transactional memory?
(Right, "vectors". :)
Transactional memory is not "hardware" -- it's software. No, I'm thinking of the stuff we thought about in the good old days but never did anything with: gallium arsenide, indium phosphide, etc. They were always there, just not "cost-effective".
It's got to be easier to productize those than solve NP-complete software problems, don't you think? :)
Ah, you meant punt the whole parallel thing by getting clock rate increases raging again. Sure would be nice.
I don't know enough about the exotic technologies to know if that's a practical option. Maybe. But the industry isn't heading that way right now.
I thought you were referring to some hardware fix for parallel programming. If one exists, I'd really like to know of it.
I know of TM software, but there is TM hardware, too. Sun has a bunch of patents on it, and at one point was suing Azul Systems for infringement in Azul's Vega systems, which certainly have shipped and do work on a large scale: 864 cores.
TM might help, possibly a lot, but it's not a panacea.
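Since TM came up: for anyone who hasn't seen the programming model, here's a minimal sketch of software TM using Haskell's Control.Concurrent.STM (Haskell being one of the languages Ed mentioned). The bank-transfer example and its numbers are invented purely for illustration; the point is that the composed reads and writes commit atomically, and the runtime detects conflicts and retries instead of relying on explicit locks.

    -- A minimal sketch of software TM via Haskell's Control.Concurrent.STM.
    -- The accounts and amounts are made up purely for illustration.
    import Control.Concurrent.STM

    -- Move funds between two shared balances in one atomic transaction.
    -- If another thread commits a conflicting change mid-transaction, the
    -- runtime re-runs this block; there are no explicit locks to take.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      if balance < amount
        then retry                        -- block until 'from' changes
        else do
          writeTVar from (balance - amount)
          modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print              -- 60
      readTVarIO b >>= print              -- 40

The hardware variants aim to run this kind of atomic region in silicon rather than in a software runtime, which is part of why the hardware/software line blurs in these discussions.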
SOA is for architecture what OOP was for development :-)