Comments on "The Perils of Parallel: Why Virtualize? A Primer for HPC Guys" by Greg Pfister

Digvijay "VJ" Singh Rathore (http://digvijaysinghrathore.spaces.live.com/), 2009-10-06 16:51:

Greg, I appreciate your reply to my question in the email, and it does make sense. But, as Rob Peglar says, it still seems like "cat and mouse." It's convenience vs. perfection on a compute, and it's here for the taking.

Rob Peglar (http://www.xiotech.com), 2009-07-26 08:09:

I started in HPC in 1978, and this current debate/viewpoint is strangely reminiscent of the 'real vs. virtual memory' brouhaha. The 'real' advocates liked their base-and-bounds with fixed-size partitions, and no silly virtualization technique, no matter how robust, could sway them. OTOH, the virtual-memory advocates saw it as a way to make multiprogramming easier and actually give batch jobs a larger memory 'space' than they had (read: could afford). The real guys did rollout/rollin and (in some cases) overlays; the virtual guys did paging.

So it is today. Those with infinite real resources to burn will burn them and chide those who don't have such resources. Those without infinite resources will virtualize them and find a way.

There is also perceived geek cred involved. The real guys believe they earn geek cred by doing all sorts of gyrations to control their real resources and their workloads. Likewise, the virtual guys believe they earn geek cred by eschewing those techniques and instead focusing on slick hiding/lying methods. After all, virtualization is the fine art of lying to the layer above you :-)

But as far as accelerators go, there will always be a conflict between the standard and the proprietary. Remember, the accelerator folks are not standing still either, and even though the charts above show 5-years-and-out behavior, that assumes the accelerator does not change during that time.

Cat versus mouse: an ancient and honorable battle :-)

Greg Pfister (https://www.blogger.com/profile/12651996181651540140), 2009-07-19 19:56:

Thanks!

About multi-system SSI: yes, "virtualization" could be, and has been, used to describe it, but I think that usage confuses the issue, because multi-system SSI breaks the normal SMP programming model (as discussed in prior posts). We'll have to agree to disagree.

admiyo (https://www.blogger.com/profile/10559086516587174707), 2009-07-19 19:10:

Clear and concise, well written. As someone who has worked in both HPC and virtualization, I have had to make these points before, but not as coherently as you've laid them out here.

It is worth pointing out that the single-system-image discussion we had before involves a form of virtualization as well, just the reverse of what we are talking about here. Basically, virtualization allows you to break the 1-to-1 relationship between machine and OS instance.

Greg Pfister (https://www.blogger.com/profile/12651996181651540140), 2009-07-08 14:21:

I absolutely agree with your point that for IO virtualization to work within adapters, the adapters must be able to isolate VM use completely. You have a good list in there of what that means: no matter what any VM does, the adapter should (a) not lock up; (b) not affect any other partition's operation (a superset of (a)); (c) not violate the host's memory partitioning; and (d) not put utter garbage on the wire, like bad checksums.

These are stiff requirements. But they're the same requirements CPU/memory virtualization meets now (except maybe for garbage on the wire). There's no reason IO adapters can't meet them, too; the technology is there.

There is a hangup, of course: adapter vendors need to absorb a new skill set and, more importantly, a new mind set. This will involve kicking and screaming. But the additional hardware cost, as in CPUs, really won't be big. Someone just needs to break the dam, providing the support needed for fast IO, and get to charge a premium.

Will that take a long time? I'm not sure. The issue is psychology, not technology.

Anonymous, 2009-07-08 10:52:

Greg, I see it taking some time before native I/O virtualization works robustly in networking/storage/IPC adapters via PCI SR-IOV. The main challenge is that once you let a virtual machine talk directly to the adapter, the adapter must include fault-isolation mechanisms to ensure that a crashing VM does not take down the whole adapter (and therefore the system). This means the adapter must scrupulously check every doorbell write, work-descriptor content, and scatter/gather list a VM issues to it for 'correctness'. The definition of 'correctness' is a little slippery here, and no one has attempted to define it. At minimum, it means "the adapter does not lock up, no matter what the VM fires at it." But should an adapter allow a rogue VM to put garbage on the wire (no lockups, but potentially serious trouble downstream)? Another budding standard, PCI ATS, will have to be running robustly in the host's north bridge to screen out flaky host memory addresses given to the adapter by misbehaving VMs. Frankly, I see it being years before this stuff is ready for prime time.
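The per-request checking described in the last two comments can be sketched in C. This is a minimal, hypothetical illustration of the idea — every structure and field name here is invented for the example, not taken from any real SR-IOV device — showing an adapter-side routine that range-checks a VM-supplied work descriptor and its scatter/gather list against that VM's assigned host-memory window before committing any DMA to it:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-virtual-function state: the host-memory window this
 * VM may reference, configured by the hypervisor at VF assignment time. */
struct vf_context {
    uint64_t mem_base;   /* start of the VM-visible DMA window */
    uint64_t mem_len;    /* length of that window */
    uint32_t max_sg;     /* max scatter/gather entries per descriptor */
};

struct sg_entry {
    uint64_t addr;       /* guest-supplied host address */
    uint32_t len;        /* bytes */
};

struct work_desc {
    uint32_t opcode;
    uint32_t sg_count;
    struct sg_entry sg[16];
};

enum { OP_SEND = 1, OP_RECV = 2 };   /* illustrative opcodes */

/* Return true only if every field the VM supplied is safe to act on.
 * A rejected descriptor is dropped or completed back to the offending
 * VM with an error; it must never stall the adapter, touch another
 * partition's memory, or reach the wire. */
static bool validate_desc(const struct vf_context *vf,
                          const struct work_desc *d)
{
    if (d->opcode != OP_SEND && d->opcode != OP_RECV)
        return false;                      /* unknown command */
    if (d->sg_count == 0 || d->sg_count > vf->max_sg)
        return false;                      /* bound the work per doorbell */

    for (uint32_t i = 0; i < d->sg_count; i++) {
        uint64_t addr = d->sg[i].addr;
        uint32_t len  = d->sg[i].len;

        if (len == 0)
            return false;                  /* degenerate entry */
        if (addr < vf->mem_base)
            return false;                  /* below the VM's window */
        if (addr + len < addr)
            return false;                  /* 64-bit wraparound attempt */
        if (addr + len > vf->mem_base + vf->mem_len)
            return false;                  /* past the VM's window */
    }
    return true;
}
```

In a real system the address screening would largely be delegated to the host IOMMU or PCI ATS translation rather than done by comparing raw ranges in the adapter; the point of the sketch is only that every guest-supplied field gets bounded and checked before the adapter spends any resource on it.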