Thursday, December 28, 2006

Using embedded platform management with WBEM/CIM: add IPMI to provide "Last Mile" Manageability for CIM-based solutions - Enterprise Networking

CIM has garnered a lot of attention over the years, much to the delight of vendors and end users alike. The DMTF (Distributed Management Task Force), under the leadership of Winston Bumpus, has shepherded a process that addresses the seemingly never-ending dark tunnel of system complexity and vendor differences. By abstracting the data behind that complexity into a common information model, users and systems can exchange management knowledge in an open way, to the benefit of everyone. But what is under the hood of CIM? What practical purpose does it serve in a typical system or network? How does it compare and contrast with other technologies?

IPMI has equally benefited from a multi-vendor consortium. Over the past five years its promoters, Dell, Intel, HP and NEC, have standardized the hardware platform management interface across a broad range of components (power, fans, chassis, controllers, sensors and so on), and the specification is now supported by some 150 other adopters. Both CIM and IPMI thus offer key benefits. But how do they complement each other within the Data Center? How do you exploit these standards and make money doing so?

"Plus ca change, plus c'est la meme chose": The more things change, the more they stay the same. For all the innovation, today's Enterprise Data Center appears to be very similar to the one 10 years ago. Application (and the resulting server) sprawl has always challenged the applications or server administrators to set-up, deploy and manage at the farther corners of the business empire. New technologies still inundate the enterprise with promises of 'breakthrough' ROI--everything is 'web-based.' Everything can be dynamically discovered. Application interoperability is seemingly achieved in a heartbeat. New product offerings promise tantalizing benefits. Symmetrical Multi-Processing (SNIP) clusters simplify management and thus reduce TCO. Deploying new form factors, such as blades technology, appear as the ultimate weapon for dynamic resource allocation, improving price/performance using power saving designs at ever increasing MIPS/rack. Virtualization then takes its place as the new mantra--build systems that can accommodate 'Resources on Demand'--Compute, Storage, I/O are decoupled from the reality of the rack. Grid computing is now 'on tap' in true utility-like fashion. It's enough to make you tear your hair out.

Well, so much for no change--the Data Center remains at the 'center' of the 'data' universe. However, it should come as no surprise that every Data Center is different. Place the same equipment and the same number of people in two identical buildings, with the same goals of reducing TCO and improving reliability, availability and serviceability, and you'll get two very different results. Why? Well, leaving aside human fallibility, most systems are modular to some degree, and, like Lego blocks, the same pieces can be assembled in very different ways. While there are many characteristics to building out a successful Data Center infrastructure, the chances of success increase when using a building-block approach based on products that support standards. Vendors who embrace a modular building-block approach can deliver very real benefits to the Data Center, especially in delivering interoperable management to server and network administrators. Products that support modularity have three common characteristics:

Standard Interface: A standardized interface within the device that allows you to monitor and control its components as and when needed, without regard to the BIOS, OS, processor or color of the box. The ability to manage a modular server as a single system is crucial. This interface then needs to be exposed to support integration with existing management points, which is vital to retaining a holistic and unified view of a complex and dynamic network infrastructure. Finally, this common interface must be 'Always On' (highly available) and programmatic, so that different applications and other standards can support and extend it.
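As a rough sketch of what such a programmatic interface looks like from the management side, the Python fragment below uses the open-source pywbem library to enumerate sensor instances from a WBEM/CIM object manager. The server address, the credentials and the assumption that the CIMOM populates CIM_NumericSensor instances are placeholders for illustration, not requirements of either standard.

```python
# Minimal sketch: read platform sensors through a CIM object manager (CIMOM)
# over WBEM, without caring about the managed box's BIOS, OS or processor.
# Assumes the open-source pywbem library; host and credentials are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    'https://managed-server.example.com:5989',   # placeholder CIMOM address
    ('admin', 'password'),                        # placeholder credentials
    default_namespace='root/cimv2')

# Enumerating the standard class also returns any vendor subclasses,
# which is what lets one console span heterogeneous hardware.
for sensor in conn.EnumerateInstances('CIM_NumericSensor'):
    print(sensor.get('ElementName'), sensor.get('CurrentReading'))
```

The same call works whether the readings ultimately come from IPMI firmware, a service processor or an in-band OS agent, which is the point of funneling everything through one standard interface.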

Platform Instrumentation: Platform instrumentation is key; without a certain level of "embedded intelligence" you'll be left on the outside looking in. And because no network is truly identical or homogeneous, cross-platform management is a pragmatic requirement, best met by using standards-based building blocks. Standards help provide support for the basic hardware elements that make up a system--processor, fans, temperature, power, chassis, etc.--irrespective of the manufacturer or the state of the system.
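To illustrate the 'Always On' side of that instrumentation, the sketch below drives the widely used ipmitool utility from Python to pull sensor data records straight from a baseboard management controller over IPMI's LAN interface, a path that keeps working even when the host OS is down. The BMC address, credentials and chosen sensor type are assumptions for the example, and the raw output format varies from platform to platform.

```python
# Minimal sketch: read platform instrumentation (fans, temperatures, power)
# from the baseboard management controller over IPMI-over-LAN.
# Assumes the ipmitool utility is installed; the BMC address and credentials
# below are placeholders.
import subprocess

def read_sensors(bmc_host, user, password, sensor_type='Temperature'):
    """Return the raw sensor-data-record lines for one sensor type."""
    cmd = [
        'ipmitool', '-I', 'lanplus',
        '-H', bmc_host, '-U', user, '-P', password,
        'sdr', 'type', sensor_type,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

for line in read_sensors('10.0.0.42', 'admin', 'password', 'Fan'):
    print(line)
```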

Support Extensibility: Building blocks also have to embrace differentiation. They need to support the common hardware, firmware and software elements but also allow expansion to support unique ones. They also need to be usable and reusable by other OEMs, OSVs, IHVs and ISVs. This modularity supports differentiation, by allowing features specific to certain markets to be added while existing development IP is retained.
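As a hedged illustration of how such extensions surface to a generic management client, the sketch below again uses pywbem to list every class derived from the standard CIM_Sensor class. A vendor-specific subclass--the name Acme_BladeFanSensor here is invented purely for illustration--would appear in the same enumeration and be manageable through the same standard operations as the classes it extends.

```python
# Minimal sketch: discover vendor extensions of a standard schema class.
# A hypothetical OEM class such as Acme_BladeFanSensor (invented name) would
# be listed alongside the standard sensor classes, so a generic console can
# manage it without vendor-specific code.
import pywbem

conn = pywbem.WBEMConnection(
    'https://managed-server.example.com:5989',   # placeholder CIMOM address
    ('admin', 'password'))                        # placeholder credentials

subclasses = conn.EnumerateClassNames(ClassName='CIM_Sensor',
                                      DeepInheritance=True)
for name in sorted(subclasses):
    print(name)
```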