ARM Ecosystem wants to power the Cloud too

The news surrounding Intel's recent announcement of the Atom Z6xx (aka Moorestown) System-on-a-Chip (SoC) tends to focus on the uphill battle the company faces in the ARM-centric smartphone ecosystem. Intel claims idle power of 21-23 milliwatts for the Z6xx series, compared to 25 mW for a 1 GHz Snapdragon. That works out to roughly 10 days of standby time on a 1500 mAh battery.

What is more interesting is the move to port Windows Server to multi-core ARM processors manufactured at 40 nm, such as those announced by Marvell Technology Group. These chips promise more than a five-fold reduction in power consumption in data centers and cloud environments compared to x86. Think of the headroom an ARM implementation in servers would offer: an Intel Xeon at several hundred dollars versus a $35 ARM quad-core running virtualized Windows Server 2008. Om Malik noted in a recent post that it was "too bad Intel sold its StrongARM technology to Marvell." I agree; Marvell did what Intel didn't have the heart to do.

We think of virtualization in data centers as smart economics in hardware utilization and power consumption, but what happens when server processor cost drops by a factor of 10 and power consumption by a factor of 5? Do we throw hardware at the problem again? Analysts should model the financial scenarios, factoring in VMware licensing costs, the power consumption and footprint of rack space, and application-specific servers vs. general-purpose power-hog blades running VMware.
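The 10-day standby figure is easy to sanity-check from the numbers above. A minimal sketch, assuming a nominal 3.7 V lithium-ion cell voltage (the article does not state one):

```python
def standby_days(battery_mah, idle_mw, cell_voltage=3.7):
    """Days of standby from battery capacity and idle power draw.

    cell_voltage is an assumed nominal Li-ion value, not from the article.
    """
    energy_mwh = battery_mah * cell_voltage  # mAh -> mWh
    hours = energy_mwh / idle_mw             # mWh / mW -> hours
    return hours / 24

# Z6xx at the claimed 23 mW upper bound on a 1500 mAh battery
print(round(standby_days(1500, 23), 1))  # ~10 days
```

At the 21 mW lower bound the same formula gives closer to 11 days, so the claim is consistent either way.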
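The kind of financial model suggested above can be sketched in a few lines. This is a back-of-envelope illustration only: the chip prices come from the article, but the wattages, electricity price, and licensing cost are invented placeholders, not measured figures.

```python
def three_year_cost(chip_price, watts, kwh_price=0.10, license_cost=0.0):
    """Crude 3-year cost of ownership: chip + electricity + licensing.

    All defaults are illustrative assumptions; ignores cooling, rack
    space, memory, storage, and admin costs.
    """
    hours = 3 * 365 * 24
    electricity = watts / 1000 * hours * kwh_price  # W -> kW, then kWh cost
    return chip_price + electricity + license_cost

# Hypothetical x86 blade: mid-range Xeon plus a VMware license (placeholder $)
xeon = three_year_cost(chip_price=400, watts=95, license_cost=3000)
# Hypothetical ARM server: $35 quad-core at ~5x lower power, per the article
arm = three_year_cost(chip_price=35, watts=19)
print(f"x86 blade: ${xeon:,.0f}   ARM server: ${arm:,.0f}")
```

Even with generous assumptions for the x86 side, the gap is dominated by the licensing line item, which is exactly why the virtualization economics question in the post is worth modeling properly.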