
Bicycle powered ARM servers

Photo: Jon Masters pedal-powering an ARM server (source: Red Hat flickr stream)

At this year’s Red Hat Summit, I gave a talk entitled “Hyperscale Cloud Computing with ARM processors” (video coming soon). In the talk, I introduced and gave a live demo of the world’s first bicycle-powered ARM server (an HP Redstone server powered by Calxeda EnergyCore quad-core ARM processors). I wanted to make the point that the (hyperscale) future will be all about energy-efficient technology. The quad-core Calxeda EnergyCore ARM-based chips used in my demo (powering the HP Redstone server system) use only 5W of power at full load, including the RAM, fabric interconnect, and management controller. The (pre-production) test system had 8 of these installed, for a total of 32 ARM cores. At 5W per quad-core, that’s still only 40W to run all of the compute within the server system.

We wanted to visualize the power (pun intended) of low-energy computing in a way that would be both memorable and connect with the audience on a personal level. The idea of using a bicycle was suggested, and I took this very much to heart. Over a period of several weeks (on and off), I designed and built a modified solar power rig, replacing the solar panels with a bicycle generator system based on the Pedal-A-Watt (which was used by the “Amp” energy drink manufacturer during a Super Bowl pre-game event a few years back, along with many riders and batteries, to power the entire pre-game show). The (single-speed) bicycle was attached to a (reverse-diode-protected) generator that connected to a solar charge controller. The charge controller keeps a 12V (35AH) deep-cycle AGM battery (Absorption Glass Mat – safe against leaks and for use in a public environment) trickle-charged while diverting excess load (power that needs to go somewhere other than into heating and destroying the generator) to a “diversion load” – in this case a convenient trucker’s fan (cooling the rider in the process). The battery feeds an inverter of the kind found in trucks and larger automobiles, which is then connected to a smoothing circuit. For the demo, we used a (smallish) UPS as the smoothing circuitry because it provided a guaranteed regulated sine wave output, a buffer against pedal startup/shutdown, and helped avoid continually cycling the (expensive, pre-production prototype) server on and off. If you’re just powering an embedded board or some home electronics, you can skip the UPS part.
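The charge controller’s diversion behavior described above can be sketched as a simple hysteresis rule. This is illustrative only: the threshold values, function name, and hysteresis band are my own assumptions, not the actual controller’s firmware logic.

```python
# Sketch of the diversion-load decision a charge controller makes.
# Thresholds are assumed values for a 12V AGM battery, not real firmware.

BATTERY_FLOAT_V = 13.8   # assumed float voltage for a 12V AGM battery
DIVERT_HYSTERESIS = 0.3  # assumed band to avoid rapidly toggling the relay

def route_power(battery_volts: float, diverting: bool) -> bool:
    """Return True if generator output should go to the dump load
    (the trucker's fan) instead of the battery."""
    if diverting:
        # Keep diverting until the battery sags back below the band
        return battery_volts > BATTERY_FLOAT_V - DIVERT_HYSTERESIS
    # Start diverting once the battery reaches float voltage
    return battery_volts >= BATTERY_FLOAT_V
```

The hysteresis band matters in practice: without it, the controller would chatter between charging and diverting right at the float voltage.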

Video: The initial proof of concept (source: Jon Masters)

In line with the generator, I installed two multi-meters. One captured instantaneous current flowing into the charge controller circuitry, the other captured voltage across the generator. Using a simple (and not entirely ideal, but we can work on that) Power = Voltage x Current type of calculation, it was possible to display a measure of instantaneous power being generated. I used an inexpensive multi-meter model (the TekPower TP4000ZC) that has an RS232 output (in fact, the cheapest multi-meter with such a feature that I could find). Two of these provided the necessary data, which I read using a custom utility I wrote (QtDMM, “multimeter”, and other Linux applications not being adequate for my console-driven needs). I considered graphing the results with gnuplot (and in fact, I did do this) but the visualization wasn’t as straightforward for an audience as a single large power reading. So I wrote a small pygtk (GTK+) application to display the instantaneous power calculation returned by my “multi” software. This is what the audience saw during the demo. It was in fact a single GTK window in which I had horribly hacked the main loop to read the output of my “multi” utility as a pipe on the command line (since it’s my demo, I can violate all the rules of modern graphical programming if I want to).
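The core of the calculation is trivial, but a minimal sketch shows the shape of the pipeline: parse paired voltage/current readings (as my hypothetical stand-in for the “multi” pipe output here; the real utility spoke the meters’ RS232 protocol) and emit P = V × I. The line format and function names are illustrative assumptions.

```python
# Sketch of the instantaneous power calculation fed to the GTK display.
# Real input came from two TP4000ZC meters over RS232; here we just
# parse assumed "voltage current" text lines, one reading per line.

def instantaneous_power(volts: float, amps: float) -> float:
    """Naive P = V * I (assumes DC, simultaneous readings)."""
    return volts * amps

def power_readings(lines):
    """Yield power values (Watts) from an iterable of 'V I' text lines."""
    for line in lines:
        fields = line.split()
        if len(fields) != 2:
            continue  # skip malformed frames
        try:
            v, i = float(fields[0]), float(fields[1])
        except ValueError:
            continue  # skip non-numeric noise from the serial link
        yield instantaneous_power(v, i)
```

For example, `list(power_readings(["12.6 8.0", "13.1 7.5"]))` yields roughly 100.8W and 98.25W. Skipping malformed lines matters with cheap serial meters, which occasionally emit garbage mid-stream.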

Using the rig, we were able to generate instantaneous power readings of up to several hundred Watts, while 100W was quite reasonable with little effort. The bicycle used was a single speed (for the aesthetic) and the lack of gears meant that we didn’t approach the 300-400W maximum that the generator can theoretically put out (a good thing too, because the current-measuring multi-meter I placed in line with the generator has a 10A fuse rating – for a bigger rig, some kind of current-sensing coil might be needed). During the live on-stage demo as part of my Red Hat Summit talk, I appeared to generate much less than 100W at times. This is because the jury-rigged wire attaching the inverter came loose during the demo and we were periodically dumping load out to the fan (there’s a reminder there about the dangers of doing live demos). Since the fan offers little electrical resistance compared with charging the battery, the bike becomes much easier to pedal and you start pedaling very fast, very quickly. In a permanent rig, a better dump load would be a second battery or other resistive load offering similar characteristics to the battery. I fixed the wiring after the demo and subsequent riders at the booth were generating up to 200W of power once again.
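The fuse concern is simple arithmetic worth making explicit: the inline meter carries I = P / V, so its 10A fuse bounds how much power it can safely measure at a given system voltage. A quick sketch (values illustrative):

```python
def current_at(power_watts: float, volts: float) -> float:
    """I = P / V: the current the inline multi-meter must carry."""
    return power_watts / volts

# At a nominal 12V, a 10A fuse limits inline measurement to roughly 120W;
# beyond that, a current-sensing coil (as suggested above) avoids putting
# the meter directly in the generator's path.
```

This is why a bigger rig wants non-intrusive current sensing rather than a fused meter sitting in the main current path.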

Photo: Jon Masters pedal-powering an ARM server (source: Jon Masters)

Here’s the full component list (all described above):

  • Pedal-A-Watt bicycle generator stand, with a single-speed bicycle
  • Solar charge controller
  • 12V (35AH) deep-cycle AGM battery
  • Trucker’s fan (diversion/dump load)
  • Automotive inverter
  • Small UPS (smoothing and regulation)
  • Two TekPower TP4000ZC multi-meters (RS232 output)

The all-important Red Hat cycle jersey is available in the Red Hat “Cool Stuff” store.


Introducing Hyperscale Computing

Those who attended (or read about) Red Hat Summit last week might have noticed my talk, entitled “Hyperscale Cloud Computing with ARM Processors”. In the talk (slides coming soon, video coming soon), I introduced the concept of Hyperscale Computing as an inflection point in the industry that will disrupt the very concept of a server in future systems. Modern servers have come a long way, but they are nonetheless fundamentally based around designs originally created decades ago. Racks of individually connected, high-power, low-density servers (and blades) are installed in modern data centers thousands at a time. Each of these server systems requires its own networking infrastructure, (high) power distribution, HVAC, and maintenance engineers to take care of it when things go wrong. But that’s all so last century.

In the future, we may still use racks, but we will design and integrate at the rack (and datacenter) level. We won’t have individual network ports with spaghetti wiring between servers; we’ll use a fabric interconnect to expose a single (or several) network ports for a whole rack (including built-in resiliency, high availability, and fault tolerance through multi-path fabric connections). We will use System-on-Chip (Server-on-Chip) technology to integrate all but the system RAM and (flash) storage onto the server chip – including all of the IO, offload, GPGPU, etc. – and further down the line Package-on-Package will allow us to integrate some (or all) of the rest. System-on-Chip technology, which began life as an embedded systems technology (but is primed to storm the datacenter in the next few years), allows for massive levels of integration at high density. Combine this with a choice of redesigned or alternative computer architectures (such as ARM, or low-energy x86 designs) requiring little active cooling and you will see 1,000 (possible right now) or even (eventually) 10,000 server nodes in a single rack.

Performance in the Hyperscale world will be gained in aggregate, at low energy, not necessarily through having a small number of beefy and energy-inefficient servers. Although you will still see servers featuring dozens of fully coherent chips with elaborate interconnects and hundreds of cores, hyperscale will be more about having thousands of individual servers, each with a smaller number of cores. This is consistent with a general trend in the industry away from single-core scalability. From the end of the 1970s, computer architecture and silicon process enhancements allowed a level of unparalleled performance growth: year-on-year, single-core performance grew at 52%, versus 25% in the years prior. Now, once again, we have returned to an average performance growth rate of 22% per year (see Computer Architecture, 5th edition, for the statistics). Everyone has given up on linear single-core growth as the strategy, and it’s time to give up on a strategy of single-system coherent designs that are energy inefficient and complex to design. Instead, application-level support for scale-out will allow for the use of much simpler designs. We’ll still have big, beefy servers, but they will be the exception, and not the rule, at least in this space.
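To see how much those growth rates diverge when compounded, here is a quick sketch. The rates are the ones cited above; the function name and the ten-year horizon are my own illustrative choices.

```python
def cumulative_speedup(annual_growth: float, years: int) -> float:
    """Compound an annual performance growth rate over a number of years."""
    return (1.0 + annual_growth) ** years

# Over a decade, 52%/yr compounds to roughly a 66x speedup,
# while 22%/yr compounds to only about 7.3x -- which is why the
# industry turned to scale-out rather than waiting on single cores.
```

The gap is the whole story: when a single core stops doubling every couple of years, aggregate performance across many small nodes becomes the cheaper path.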

A single Hyperscale server node will be formed from at most three pieces:

  • Server-on-Chip (SoC) – CPU, GPU, IO, fabric interconnect, management controller
  • RAM – stacked in the longer term once PoP technology allows for this
  • Storage – individual flash will combine with virtualized fabric-distributed I/O

We won’t have maintenance engineers scurrying around datacenters ripping out servers and replacing parts. Instead, we’ll use Failure-in-Place (also Fail-in-Place, FiP) to allow server nodes to fail and be marked as “bad”, much as we disregard a few dead pixels on a laptop display, or a few bad blocks (or flash erase blocks) on a storage device. When you have 1,000+ server nodes in a single rack (and ultimately many more), the last thing you will see is an SLA that calls for individual node replacement. Instead, a certain number of bad servers will be tolerated before the whole system (or parts of it) is replaced as a scheduled event.
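The Failure-in-Place bookkeeping can be sketched in a few lines: mark nodes bad as they fail, and schedule service only once the tolerated count is exceeded. The class shape and threshold are illustrative assumptions, not any real rack manager’s interface.

```python
# Sketch of Failure-in-Place bookkeeping: failed nodes are simply
# marked bad (like dead pixels or bad blocks); replacement becomes a
# scheduled event only once a tolerance threshold is crossed.

class Rack:
    def __init__(self, nodes: int, tolerated_failures: int):
        self.nodes = nodes
        self.tolerated = tolerated_failures
        self.bad = set()

    def mark_bad(self, node_id: int) -> None:
        """Fail in place: record the node; work is routed elsewhere."""
        self.bad.add(node_id)

    def healthy_nodes(self) -> int:
        return self.nodes - len(self.bad)

    def needs_service(self) -> bool:
        """True once failures exceed what the SLA tolerates, triggering
        scheduled replacement of the rack (or part of it)."""
        return len(self.bad) > self.tolerated
```

Note that `mark_bad` is idempotent (a set), so repeated failure reports for the same node don’t inflate the count.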

For more about Hyperscale Computing, follow this blog, and check out the videos from my Red Hat Summit talk.

