Oracle’s Exalogic….
is a hardware platform that outperforms the competition with features like a 40 Gb/sec InfiniBand network fabric, 30 x86 compute nodes, 360 Xeon cores (2.93 GHz), 2.8 TB of DRAM and 960 GB of SSD in a full rack. Phew!
Ref: Oracle’s Whitepaper on Exalogic
You can “google” it … search for “Oracle Exalogic” and learn more about the beast, but in short this is a platform that is not only optimized to perform well, but also designed to use fewer resources. So, for example, the power consumption is really low and this is a very green option. Or so says the “Kool-Aid label”.
Application architects have always fretted over network latency, I/O bottlenecks and general hardware issues over the years. While classical “Computer Science” recommends (insists, even) that optimization lies in application and algorithmic efficiency, the reality in enterprise environments is that “Information Processing” applications are often (let’s assume) optimized, and it is hardware issues that cause more problems. Sure, there is no replacement for SQL tuning, code instrumentation and the like, but if you are an enterprise invested in a lot of COTS applications – you just want the damn thing to run! Often the “damn thing” does want to run, but it has limited resources, and “scaling” of those resources is not optimized.
This is especially true for 3-tier applications which, despite being optimized (no “select *” queries or bad sort loops), have to run on hardware that performs great in isolation but, when clustered, does not scale as well as High Performance Computing applications do. Why is that?
The problem lies…
in the common protocols used to move data around. Ethernet, and TCP/IP over it, has been the standard way to make computers and the applications on them “talk”. Let’s just say that this hardware and these protocols can be optimized quite a bit! Well, that’s what has happened with Exalogic and Exadata.
Thanks to some fresh thinking on Oracle’s part, their acquisition of Sun Microsystems, improvements in the Java language (some of the New I/O stuff) and high-performance InfiniBand network switches… there is a new hardware platform in town which is juiced up (can I say “pimped out”?) to perform!
My joy stems from the fact that Oracle is bringing optimizations employed in High Performance Computing to enterprise hardware: the use of collective communication APIs like scatter/gather to improve application I/O throughput and latency. (Fact: 10 Gb/s Ethernet has an MTU of 1.5 KB, while InfiniBand uses a 64 KB MTU for the IP-over-InfiniBand protocol and a 32 KB or larger MTU for the Sockets Direct Protocol.)
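To make the scatter/gather idea concrete at the application level, here is a minimal Java NIO sketch – my own illustration, not Exalogic code; the host, port and the ScatterGatherDemo class name are made up. One read() call scatters incoming bytes across a header buffer and a body buffer, and one write() call gathers both buffers back out without first copying them into a single staging buffer:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ScatterGatherDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint -- replace with a real host/port.
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9000));

        // Scatter: one read() call fills the header buffer first, then the body buffer.
        ByteBuffer header = ByteBuffer.allocateDirect(128);
        ByteBuffer body   = ByteBuffer.allocateDirect(64 * 1024);
        long bytesRead = channel.read(new ByteBuffer[] { header, body });

        // Gather: one write() call drains both buffers in order,
        // avoiding a copy into a single contiguous buffer.
        header.flip();
        body.flip();
        long bytesWritten = channel.write(new ByteBuffer[] { header, body });

        System.out.println("read=" + bytesRead + " written=" + bytesWritten);
        channel.close();
    }
}

This is exactly the pattern the New I/O scatter/gather channels were designed for, and it is the kind of call that benefits directly from a fatter MTU underneath.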
Personally, all this ties in very well with my background in High Performance Computing (see my Master’s report in Computer Science on High Performance Unified Parallel C (UPC) Collectives for Linux/Myrinet Platforms, done at Michigan Tech with Dr. Steve Seidel) … and my experience in enterprise application development and architecture.
…here’s my description of scatter and gather collective communication, written in 2004:
Broadcast using memory pinning and remote DMA access in Linux (basically, the network card can access user space directly and do gets and puts)
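For anyone who hasn’t met these collectives before, here’s a small, hypothetical Java sketch of the data movement they describe – my own illustration, not code from the report, and it assumes the root’s buffer divides evenly among the nodes. Scatter cuts the root’s buffer into one chunk per node; gather is the inverse:

import java.util.Arrays;

public class CollectiveSketch {
    // Scatter: the root's buffer is cut into numNodes equal chunks;
    // chunk i is what node i would receive over the network.
    static byte[][] scatter(byte[] rootBuffer, int numNodes) {
        int chunk = rootBuffer.length / numNodes;
        byte[][] perNode = new byte[numNodes][];
        for (int i = 0; i < numNodes; i++) {
            perNode[i] = Arrays.copyOfRange(rootBuffer, i * chunk, (i + 1) * chunk);
        }
        return perNode;
    }

    // Gather: the inverse -- each node's chunk is concatenated back
    // into a single buffer on the root, in rank order.
    static byte[] gather(byte[][] perNode) {
        int chunk = perNode[0].length;
        byte[] rootBuffer = new byte[chunk * perNode.length];
        for (int i = 0; i < perNode.length; i++) {
            System.arraycopy(perNode[i], 0, rootBuffer, i * chunk, chunk);
        }
        return rootBuffer;
    }

    public static void main(String[] args) {
        byte[] data = "scatter-gather!!".getBytes();    // 16 bytes, divides evenly among 4 "nodes"
        byte[][] chunks = scatter(data, 4);             // each "node" gets 4 bytes
        System.out.println(new String(gather(chunks))); // prints the original data
    }
}

In a real collective library, each chunk would be put into a remote node’s pinned memory via remote DMA rather than copied in-process – which is exactly the kind of shortcut InfiniBand hardware makes cheap.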