Multicore is the wave of the future. Cloud Computing is the wave of the future. Do they get along? My take: Eh. Sorta. There are the usual problems with parallel programming support, despite hubbub about parallel languages and runtimes and ever bigger multicores.
Multicore announcements in particular have been rampant recently. Not just the usual drumbeat from Intel and AMD; that continues, out to 8-way, 12-way, and onward to the future. Now more extreme systems are showing their heads, such as ScaleMP announcing vSMP for the Cloud (and also for SMB), a way of gluing together X86 multicore systems into even larger shared-memory (NUMA) systems. 3Leaf is also doing essentially the same thing. Tilera just announced a 100-core chip product, beating Intel and others to the punch. Windows 7 has replaced the locking used in XP, allowing it to "scale to 256 processors" – a statement that tells me (a) they probably did fix a bunch of stuff; and (b) they reserved one whole byte for the processor/thread ID. (Hope that's not cast in concrete for the future, or you'll have problems with your friendly neighborhood ScaleMP'd Tilera.)
So there's a collection of folks outside the Big Two processor vendors who see a whole lot of cores as good. Non-"commodity" systems – by the likes of IBM, Sun, and Fujitsu – have of course been seeing that for a long time, but the low-priced spread is arguably the most important part of the market, and certainly is the only hardware basis I've seen for clouds.
What's in the clouds for multicore?
Amazon's instance types and pricing do take multicore into account: at the low end, a small Linux instance is $0.085/hour for 1 core, nominally 1 GHz; and at the high end, still Linux, an "Extra-large High CPU" instance is $0.68/hour for 8 cores at 2.5 GHz each. So, assuming perfect parallel scaling, that's about 20X the performance for 8X the price, a good deal. (I simplified 1 Amazon compute unit to 1 GHz; Amazon says it's a 1.0-1.2 GHz 2007 Opteron or Xeon.)
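To spell out that arithmetic, here's a quick back-of-the-envelope sketch, using the on-demand prices just quoted and the 1-compute-unit ≈ 1 GHz simplification:

```java
// Back-of-the-envelope check of the 20X-for-8X claim, treating 1 EC2 compute unit
// as roughly 1 GHz and using the late-2009 prices quoted above.
public class Ec2PriceCheck {
    public static void main(String[] args) {
        double smallGHz = 1 * 1.0;      // small instance: 1 core at ~1 GHz
        double smallPrice = 0.085;      // $/hour
        double xlGHz = 8 * 2.5;         // "Extra-large High CPU": 8 cores at 2.5 GHz
        double xlPrice = 0.68;          // $/hour

        System.out.printf("Performance ratio: about %.0fX%n", xlGHz / smallGHz);     // ~20X
        System.out.printf("Price ratio:       about %.0fX%n", xlPrice / smallPrice); // 8X
    }
}
```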
Google App Engine (GAE) just charges per equivalent 1.2 GHz single CPU. What happens when you create a new thread in your Java code (now supported) is… well, you can't. Starting a new thread isn't supported. So GAE basically takes the same approach as Microsoft Azure, which treats multicore as an opportunity to exercise virtualization (not necessarily hardware-level virtualization, by my read), dividing multicores down into single core systems.
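For concreteness, here's what that thread restriction looks like from the programmer's side. This is a minimal sketch, not GAE-specific code: it assumes, as described above, that the sandbox rejects thread creation at runtime, and the exact exception type it throws is an assumption for illustration.

```java
// Minimal sketch: ordinary Java thread creation, which any stock JVM accepts but
// which the App Engine sandbox refuses. The catch clause assumes the refusal
// surfaces as a SecurityException; that exception type is an assumption.
public class ThreadProbe {
    public static void main(String[] args) {
        Runnable work = new Runnable() {
            public void run() {
                System.out.println("doing work on another core");
            }
        };
        try {
            new Thread(work).start();   // disallowed inside the GAE sandbox
            System.out.println("Thread started -- evidently not running on GAE.");
        } catch (SecurityException e) {
            System.out.println("Sandbox refused thread creation: " + e);
        }
    }
}
```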
The difference between AWS, on the one hand, and GAE or Azure on the other, of course makes quite a bit of sense.
GAE and Azure are both PaaS (Platform as a Service) systems, providing an entire application platform; the application is web serving, and the dominant computing models for web serving are all based on throughput, not multicore turnaround.
AWS, in contrast, is IaaS (Infrastructure as a Service): You got the code? It's got the hardware. Just run it. That's any code you can fit into a virtual machine, including shared-memory parallel code, all the way up to big hulking database systems.
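To make "shared-memory parallel code" concrete, here's a toy sketch of the kind of thing you can drop onto an 8-core IaaS virtual machine as-is, but not onto a thread-less PaaS sandbox. The array size and thread count are arbitrary illustration values.

```java
// Toy shared-memory parallelism: 8 threads summing disjoint chunks of one shared array.
// Runs as-is on any multicore VM (e.g., an 8-core EC2 instance); it needs nothing from
// the platform beyond plain Java threads.
public class ParallelSum {
    public static void main(String[] args) throws InterruptedException {
        final int threads = 8;
        final long[] data = new long[8000000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        final long[] partial = new long[threads];   // one slot per worker, no locking needed
        Thread[] workers = new Thread[threads];
        int chunk = data.length / threads;
        for (int t = 0; t < threads; t++) {
            final int id = t;
            final int lo = t * chunk;
            final int hi = (t == threads - 1) ? data.length : lo + chunk;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];  // all threads read the shared array
                    partial[id] = s;
                }
            });
            workers[t].start();
        }

        long total = 0;
        for (int t = 0; t < threads; t++) {
            workers[t].join();
            total += partial[t];
        }
        System.out.println("sum = " + total);
    }
}
```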
Do all God's chillun have to be writing their web code in Erlang or Haskell or Clojure (which turns into Java at runtime) or Ct or whatever before PaaS clouds start supporting shared-memory parallelism? But if PaaS is significant, doesn't it have to support E/H/C/Ct/etc. before those chillun will use them? Do we have a chicken-and-egg problem here? I think this just adds to the already long list of good reasons why parallel languages haven't taken off: PaaS clouds won't support them.
And in the meantime, there is of course a formidable barrier to others hosting their own specialty code on PaaS, like databases and at least some HPC codes. Hence the huge number of Amazon Solution Providers, including large-way SMP users such as Oracle, IBM DB2, and Pervasive, while Google has just a few third-party Python libraries so far.
PaaS is a good thing, but I think that sooner or later it will be forced, like everyone else, to stop ignoring the other wave of the future.
------------------------------------------------------------------
Postscript / Addendum / Meta-note: My apologies for the lack of blog updates recently; I've been busy on things that make money or get me housed. I've something like six posts stacked up to be written, so you can expect more soon.
Comments:
My current feeling is that Google App Engine is about as good a PaaS as we'll get in the mid term.
I even go so far as to state that the actual programming model of a GAE app is really quite similar to the Erlang model, where individual processes work on messages in their mailbox.
In a web app, most messages will be HTTP requests coming from remote clients, but with GAE's task queues, a messaging model can also be used for application-internal processing.
By using the datastore's ACID transactions, processes are effectively isolated, and the programmer can take additional steps to ensure low contention for each of the datastore's individual transactional scopes (entity groups), which should lead to "infinite" parallel processing scalability.
I think that GAE's model is good enough for 80% of uses today (in a worse-is-better way), and truly alternative programming paradigms (parallel Haskell, ...) will take a long time to seriously enter the PaaS space, if ever.
Last I checked Clojure does not know how to transmogrify itself.
GAE does not allow any long-polling (requests are capped at 30 s), which is important for minimizing connection overhead for mobile clients on high-latency networks.
@Anonymous #1 - OK, but Clojure does turn into Java byte code, right? And runs under a JVM? Agreed, that's not exactly the same as transmogrifying into Java source.
Dear Greg,
Your blog is interesting, diverse and funny. Please keep up the good work and find time to unleash the six posts in queue...
Found at Multicoreinfo.com: a pointer to the first in a two-part series on design principles that lead to multicore- and cloud-ready applications. "Multicore processing power and cloud computing are two of the most exciting challenges facing software developers today. Multiple chips or processing cores will enable individual computing platforms to process threads unbelievably fast, and the advent of cloud computing means that your applications could run on multiple distributed systems. In this first half of a two-part article, Appistry engineer Guerry Semones gets you started with the four design principles for writing cloud-ready, multicore-friendly code: atomicity, statelessness, idempotence, and parallelism."
Anyone had a good look at AppScale?
http://appscale.cs.ucsb.edu/
http://code.google.com/p/appscale/
or am I missing the point here?