Energy Management Channel

When is running warm – too hot?

Wednesday, March 21st, 2012 by Robert Cravotta

Managing the heat emanating from electronic devices has always been a challenge and design constraint. Mobile devices present an interesting set of design challenges because unlike a server operating in a strictly climate controlled room, users want to operate their mobile devices across a wider range of environments. Mobile devices place additional design burdens on developers because the size and form factor of the devices restrict the options for managing the heat generated while the device is operating.

The new iPad is the latest device whose technical specifications may or may not be compatible with what users expect from their devices. According to Consumer Reports, the new iPad can reach operating temperatures that are up to 13 degrees Fahrenheit higher (when plugged in) than an iPad 2 performing the same tasks under the same operating conditions. Using a thermal imaging camera, Consumer Reports measured a peak temperature of 116 degrees Fahrenheit on the front and rear of the new iPad while it played Infinity Blade II. The peak heat spot was near one corner of the device (image at the referenced article).

A peak temperature like this feels warm to very warm to the touch for short periods of time. However, some people may consider a peak temperature of 116 degrees Fahrenheit too warm for a device that they plan to hold in their hands or rest on their laps for extended periods of time.

There are probably many engineering trade-offs that were considered in the final design of the new iPad. The feasible options for heat sinks or for distributing heat away from the device were probably constrained by the iPad’s thin form factor, dense internal components, and larger battery requirements. Integrating a higher pixel density display definitely constrained how the system and graphics processing were architected to deliver an improvement in display quality while maintaining acceptable battery life.

Are consumer electronics bumping up against the edge of what designers can deliver when balancing small form factors, high performance processing and graphics, acceptable device weight, and long enough battery life? Are there design trade-offs still available to designers to further push where mobile devices can go while staying within the constraints of acceptable heat, weight, size, cost, and performance? Have you ever dealt with a warm-running system that became a system running too hot? If so, how did you deal with it?

What is driving lower data center energy use?

Wednesday, August 3rd, 2011 by Robert Cravotta

A recently released report from a consulting professor at Stanford University finds that the growth in electricity use in data centers over the years 2005 to 2010 is significantly lower than the expected doubling based on the growth rate of data centers from 2000 to 2005. Based on the estimates in an earlier report on electricity usage by data centers, worldwide electricity usage increased by only about 56% over the period from 2005 to 2010 instead of doubling. In contrast, data center electricity use in the United States increased by only 36%.

Based on estimates of the installed base of data center servers for 2010, the report points out that growth in installed volume servers slowed substantially over the 2005 to 2010 period, increasing about 20% in the United States and 33% worldwide. The installed base of mid-range servers fell faster than the 2007 projections predicted, while the installed base of high-end servers grew rapidly instead of declining as projected. While Google’s data centers could not be included in the estimates (because Google assembles its own custom servers), the report estimates that they account for less than 1% of electricity used by data centers worldwide.

The author suggests the lower energy use is due to the impacts of the 2008 economic crisis and improvements in data center efficiency. While I agree that improving data center efficiency is an important factor, I wonder whether the 2008 economic crisis had a first or second order effect on the electricity use of data centers. Did a dip in the growth rate for data services cause the drop in the rate of new server installs, or is the market converging on the optimum ratio of servers to services?

My data service costs are lower than they have ever been – although I suspect we are flirting with a local minimum in data service costs, as it has been harder to renew or maintain discounts for these services this year. I suspect my perceived price inflection point is the result of service capacities finally reflecting service usage. The days of huge excess capacity for data services are fading fast, and service providers may no longer need to sell those services below market rate to gain users of that excess capacity. The migration from all-you-can-eat data plans to tiered or throttled accounts may also be an indication that the excess capacity of data services is finally being consumed.

If the lower than expected energy use of data centers is caused by the economic crisis, will energy use spike back up once we are completely out of the crisis? Is the lower than expected energy use due more to the market converging on the optimum ratio of servers to services – and if so, does the economic crisis materially affect energy use during and after the crisis?

One thing this report was not able to do was ascertain how much work was being performed per unit of energy. I suspect the lower than expected energy use is analogous to the change in manufacturing within the United States, where productivity continues to soar despite significant drops in the number of people actually performing manufacturing work. While counting the number of installed servers is relatively straightforward, determining how the efficiency of their workload is changing is a much tougher beast to tackle. What do you think is the first order effect that is slowing the growth rate of energy consumption in data centers?

What is important when looking at a processor’s low power modes?

Wednesday, June 1st, 2011 by Robert Cravotta

Low power operation is an increasingly important capability of embedded processors, and many processors support multiple low power modes to enable developers to accomplish more with less energy. While low power modes differ from processor to processor, each mode enables a system to operate at a lower power level either by running the processor at lower clock rates and voltages or by removing power from selected parts of the processor, such as specific peripherals, the main processor core, and memory spaces.

An important characteristic of a low power or sleep mode is the current draw while the system is operating in that mode. However, evaluating and comparing the current draw between low power modes on different processors requires you to look at more than just the current draw to perform an apples-to-apples comparison. For example, the time it takes the system to wake up from a given mode can disqualify a processor from consideration in a design. The wake-up time depends on such factors as the settling time of the clock source and of the analog blocks. Some architectures offer multiple clock sources so that a system can perform work at a slower rate while the faster clock source is still settling – further complicating wake-up time comparisons between processors.
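One way to make such a comparison concrete is to look at the energy consumed over a complete sleep/wake/active cycle rather than at the sleep current alone. The following back-of-the-envelope figures are purely illustrative, not taken from any datasheet:

    E_cycle = V x (I_sleep x t_sleep + I_wake x t_wake + I_active x t_active)

    Example at 3 V: 1 µA for 990 ms asleep, 5 mA for a 2 ms wake-up,
    and 5 mA for 8 ms of useful work:
    E_cycle = 3 x (1e-6 x 0.99 + 5e-3 x 0.002 + 5e-3 x 0.008) ≈ 153 µJ

Here the 2 ms wake-up phase alone costs about 30 µJ – roughly ten times the energy of the entire sleep period – which is why a slow-waking part can lose on energy even when its sleep current looks better on paper.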

Another differentiator for low power modes is the granularity they offer for powering peripherals or coprocessors on and off, whether individually or in blocks. Some low power modes remove power from the main processor core and leave an autonomous peripheral controller operating to manage and perform data collection and storage. Low power modes can also differ in which circuits they leave running, such as brown-out detection and the real time clock, and in whether they preserve the contents of RAM or registers. The architectural decisions about which circuits can be powered down depend greatly on the end application, and they provide opportunities for specific processors to best target niche requirements.
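As a sketch of what this granularity looks like in code – with entirely hypothetical register and bit names, since every vendor’s interface differs – a typical sleep entry sequence might read:

    /* Hypothetical register names; only __WFI() is a standard CMSIS intrinsic. */
    void enter_low_power(void)
    {
        PWR->PERIPH_EN &= ~(PERIPH_SPI | PERIPH_PWM); /* gate peripherals not needed asleep */
        PWR->RETAIN    |=  RETAIN_RAM | RETAIN_RTC;   /* keep RAM contents and the RTC alive */
        PWR->WAKE_SRC   =  WAKE_RTC | WAKE_GPIO;      /* select the allowed wake-up sources */
        PWR->MODE       =  MODE_DEEP_SLEEP;           /* pick the target energy mode */
        __WFI();                                      /* wait for interrupt: sleep begins here */
    }

The finer this menu of retention and wake-source options, the closer a developer can trim the sleep current to exactly the circuits the application still needs.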

When you are looking at a processor’s low power modes, what information do you consider most important? When comparing different processors, do you weigh wake-up times, or does current draw trump everything else? How important is your ability to control which circuits are powered on or off?

Boosting energy efficiency – Energy debugging

Monday, April 4th, 2011 by Oyvind Janbu

Using an ultra low power microcontroller alas does not by itself mean that an embedded system designer will automatically arrive at the lowest possible energy consumption.  To achieve this, the important role of software also needs to be taken into account.  Code needs to be optimized, not just in terms of functionality but also with respect to energy efficiency.  Software has perhaps never really been formally identified as an ‘energy drain’ and it needs to be.  Every clock cycle and line of code consumes energy and this needs to be minimized if best possible energy efficiency is to be achieved.

While the first two parts of this article proposed the fundamental ways in which microcontroller design needs to evolve in the pursuit of real energy efficiency, this third part considers how the tools which support them also need to change.  Having tools available that provide developers with detailed monitoring of their embedded systems’ energy consumption is becoming vital for many existing and emerging energy sensitive battery-backed applications.

As a development process proceeds, code size naturally increases and optimizing it for energy efficiency becomes a much harder and time-consuming task.  Without proper tools, identifying a basic code error such as a while loop that should have been replaced with an interrupt service routine can be difficult.  Such a simple code oversight causes a processor to stay active waiting for an event instead of entering an energy saving sleep mode – it therefore matters!  If these ‘energy bugs’ are not identified and corrected during the development phase then they’re virtually impossible to detect in field or burn-in tests.  Add together a good collection of such bugs and they will have an impact on total battery lifetime, perhaps critically so.
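A minimal sketch of this particular energy bug and its fix follows; the register, flag, and handler names are illustrative rather than taken from any specific part:

    /* Energy bug: the CPU polls in active mode, burning current while idle. */
    while (!(UART->STATUS & RX_READY))
        ;                             /* spinning - no work done, energy wasted */
    process_byte(UART->DATA);

    /* Fix: enable the receive interrupt and sleep until it fires. */
    volatile int rx_ready = 0;
    void UART_IRQHandler(void) { rx_ready = 1; }

    UART->IRQ_EN |= RX_READY_IRQ;     /* let the peripheral do the waiting */
    while (!rx_ready)
        __WFI();                      /* CPU sleeps; the ISR wakes it */
    process_byte(UART->DATA);

Functionally the two versions are identical, which is exactly why the bug survives functional testing and only shows up in the battery lifetime.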

In estimating potential battery lifetimes, embedded developers have been able to use spreadsheets provided by microcontroller vendors to get a reasonable estimation of application behavior in terms of current consumption.  Measurements of a hardware setup made with an oscilloscope or multimeter and entered into spreadsheets can be extrapolated to give a pretty close estimation of battery life expectancy.  This approach, however, does not provide any correlation between current consumption and code, or reveal the impact of any bugs – the application’s milliamp rating is OK, but what’s not known is whether it could be any better.

With a logic analyzer, a developer gets greater access to the behavior of the application and begins to recognize that ‘something strange’ is going on.  Yes, it is a ‘code view’ tool that shows data as timing diagrams, protocol decodes, state machine traces, and assembly language or its correlation with source level software; however, it offers no direct relationship with energy usage.

Combine the logic analyzer, the multimeter, and the spreadsheet and you do start to make a decent connection between energy usage and code, but the time and effort spent in setting up all the test equipment (and possibly repeating it identically on numerous occasions), making the measurements and recording them into spreadsheets can be prohibitive if not practically impossible.

Low power processors such as the ARM Cortex-M3, however, already provide an SWO (serial wire output) that can be used to deliver quite sophisticated and flexible debug and monitoring capabilities, which tool suppliers can harness to enable embedded developers to directly correlate code execution with energy usage.

Simple development platforms can be created which permanently sample microcontroller power rail current consumption, convert it, and send it along with voltage and timing data via USB to a PC-based energy-to-code profiling tool.  Courtesy of the ARM core’s SWO pin, the tool can also retrieve program counter information from the CPU.  The coming together of these two streams of data enables true ‘energy debugging’ to take place.

Provided that current measurements have a fairly high dynamic range, say from 0.1µA to 100mA, then it’s possible to monitor a very wide and practical range of microcontroller current consumption.  Once uploaded with the microcontroller object code, the energy profiling tool then has all the information resources it needs to accurately correlate energy consumption with code.
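A simplified sketch of the correlation step shows the principle; the structure and function here illustrate the technique rather than any particular tool’s implementation:

    #include <stddef.h>
    #include <stdint.h>

    /* Function address ranges come from the uploaded object code. */
    typedef struct {
        uint32_t    start, end;     /* [start, end) address range */
        const char *name;           /* C/C++ function name */
        double      energy_uj;      /* accumulated energy, microjoules */
    } func_energy_t;

    /* Attribute one (current, program counter) sample to its function. */
    void attribute_sample(func_energy_t *f, size_t n, uint32_t pc,
                          double i_ma, double v, double dt_s)
    {
        for (size_t k = 0; k < n; k++) {
            if (pc >= f[k].start && pc < f[k].end) {
                f[k].energy_uj += i_ma * v * dt_s * 1000.0; /* mA*V*s = mJ -> µJ */
                break;
            }
        }
    }

Summed over millions of samples, a table like this is what lets the tool rank functions by energy rather than merely by execution time.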

The energyAware Profiler tool from Energy Micro shows the relationship between current consumption, C/C++ code, and the energy used by a particular function. Clicking on a current peak, top right, reveals the associated code, bottom left.

The tool correlates the program-counter value with machine code, and because it is aware of the functions of the C/C++ source program, it can readily indicate how energy use changes as various functions run.  So the idea of a tool that can highlight an energy-hungry code problem to a developer in real time comes to fruition.  The developer watches a trace of power versus time, identifies a surprising peak, clicks on it, and is immediately shown the offending code.

Such an ability to identify and rectify energy drains at an early stage of prototype development will certainly help reduce the overall energy consumption of the end product, and far from adding to the development time, it may well shorten it.

We would all be wise to consider the debug process of low power embedded systems development as becoming a 3-stage cycle from now on:  hardware debugging, software functionality debugging, and software energy debugging.

Microcontroller development tools need to evolve to enable designers to identify wasteful ‘energy bugs’ in software during the development cycle.  Discovering energy-inefficient behavior that endangers battery lifetime during a product field trial is, after all, rather costly and really just a little bit too late!

Low Power Design: Energy Harvesting

Friday, March 25th, 2011 by Robert Cravotta

In an online course about the Fundamentals of Low Power Design I proposed a spectrum of six categories of applications that identify the different design considerations for low power design for embedded developers. The spectrum of low power applications I propose is:

1) Energy harvesting

2) Disposable or limited use

3) Replaceable energy storage

4) Mobile

5) Tethered with passive cooling

6) Tethered with active cooling

This article focuses on the characteristics that affect energy harvesting applications. Future articles will focus on the characteristics of the other categories.

Energy harvesting designs represent the extreme low end of the low power design spectrum. In an earlier article I identified some common forms of energy harvesting that are publicly available and the magnitude of energy typically available for harvesting (usually in the μW to mW range).

Energy harvesting designs are ideal for tasks that take place in locations where delivering power is difficult. Examples include remote sensors, such as might reside throughout a manufacturing building, where the sheer quantity of devices can make performing regular battery replacements infeasible. Also, many of the sensors may be in locations that are difficult or dangerous for an operator to reach. For these reasons, energy harvesting systems usually run autonomously, and they spend the majority of their time in a sleep state. Energy harvesting designs often trade off computation capabilities to fit within a small energy budget because the source of energy is intermittent and/or not guaranteed on a demand basis.

Energy harvesting systems consist of a number of subsystems that work together to provide energy to the electronics of the system. The energy harvester is the subsystem that interfaces with the energy source and converts it into usable and storable electricity. Common types of energy harvesters are able to extract energy from ambient light, vibration, and thermal differentials, as well as ambient RF energy.

The rate of energy captured from the environment by the energy harvester may not be sufficient to allow the system to operate directly; rather, the output of the energy harvester feeds into an energy storage and power management controller that conditions and stores the captured energy in an energy bladder, buffer, capacitor, or battery. Then, when the system is awake, it draws energy from the storage module.

The asymmetry between the rate of collecting energy and the rate of consuming it necessitates that the system execute its functions only on a periodic basis, at a rate that allows enough new energy to be captured and stored between operating cycles. Microcontrollers that support low operating or active power consumption, as well as the capability to quickly switch between the on and off states, are key considerations for energy harvesting applications.
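To make the asymmetry concrete with invented but representative numbers: if the harvester delivers an average of 100 µW while the system draws 10 mA at 3 V (30 mW) when awake, sustainable operation requires a duty cycle of no more than

    D_max = P_harvest / P_active = 100 µW / 30 mW ≈ 0.33%

or roughly 3.3 ms of activity per second – before sleep current and power conversion losses are even counted.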

A consideration that makes energy harvesting designs different from the other categories in the low power spectrum is that the harvested energy must undergo a transformation to be usable by the electronics. This is in contrast to systems that can recharge their energy storage – these systems receive electricity directly in quantities that support operating the system and recharging the energy storage module.

If the available electricity ever becomes insufficient to operate the energy harvesting module, the module may not be able to capture and transform ambient energy even when there is enough energy in the environment. This key operating condition means the decisions about when and how the system turns on and off must take extra precautions against drawing too much energy during operation, or the design risks starving the system into an unrecoverable condition.

Energy harvesting applications are still an emerging application space. As the cost of harvesting modules continues to decrease and their efficiency continues to improve, more applications will become practical to pursue, much as microcontrollers have been replacing mechanical controls within systems for the past few decades.

Boosting energy efficiency – Sleeping and waking

Friday, March 18th, 2011 by Oyvind Janbu

While using a 32-bit processor can enable a microcontroller to stay in a deep-sleep mode for longer, there is nevertheless some baseline power consumption which can significantly influence the overall energy budget. However, historically 32-bit processors have admittedly not been available with useful sub-µA standby modes. With the introduction of power efficient 32-bit architectures, the standby options are now complementing the reduced processing and active time.

Despite the relatively low power consumption many microcontrollers exhibit in deep sleep, the functionality they provide in these modes is often very limited.  Since applications often require features such as real time counters, power-on reset / brown-out detection, or UART reception to be enabled at all times, many microcontroller systems are prevented from ever entering deep sleep because such basic features are only available in an active run mode.  Many microcontroller solutions also offer limited SRAM and CPU state retention in sub-µA standby modes, if any at all.  Other solutions need to turn off or duty-cycle brown-out and power-on reset detectors in order to save power.

In the pursuit of energy efficiency, then, microcontrollers need to provide product designers with a choice of sleep modes offering the flexibility to scale basic resources, and thereby the power consumption, in several defined levels or energy modes.  While energy modes constitute a coarse division of basic resources, additional fine-grained tuning of resources within each energy mode should also be possible by enabling or disabling individual peripheral functions.

There’s little point, though, in offering a microcontroller with tremendously low sleep mode energy consumption if its energy efficiency gains are lost due to the time it takes for the microcontroller to wake up and enter run mode.

When a microcontroller goes from a deep sleep state, where the oscillators are disabled, to an active state, there is always a wake-up period, where the processor must wait for the oscillators to stabilize before starting to execute code.  Since no processing can be done during this period of time, the energy spent while waking up is wasted energy, and so reducing the wake-up time is important to reduce overall energy consumption.

Furthermore, microcontroller applications impose real time demands which often mean that the wake-up time must be kept to a minimum to enable the microcontroller to respond to an event within a set period of time.  Because the latency demanded by many applications is lower than the wake-up time of many existing microcontrollers, the device is often inhibited from going into deep sleep at all – not a very good solution for energy sensitive applications.

A beneficial solution would be to use a very fast RC oscillator that instantly wakes up the CPU and then optionally transfers the clock source to a crystal oscillator if needed. This meets real time demands as well as encouraging run- and sleep-mode duty cycling. Although the RC oscillator is not as accurate as a crystal oscillator, it is sufficient as the CPU’s clock source during crystal start-up.
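In code, the wake sequence might look like the following sketch, where the clock management unit (CMU) register names are hypothetical placeholders:

    void wake_and_switch_clock(void)
    {
        CMU->CLKSEL  = CLK_SRC_HFRCO;        /* run immediately from the fast RC oscillator */
        CMU->XTAL_EN = 1;                    /* start the crystal in the background */

        service_realtime_event();            /* latency-critical work runs on the RC clock */

        while (!(CMU->STATUS & XTAL_READY))
            ;                                /* crystal still settling */
        CMU->CLKSEL = CLK_SRC_XTAL;          /* hand over to the accurate clock source */
    }

The application gets its fast response from the RC oscillator and its timing accuracy from the crystal, without ever burning cycles waiting for start-up.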

We know that getting back to sleep mode is key to saving energy. Therefore the CPU should preferably use a high clock frequency to solve its tasks more quickly and efficiently.  Even if the higher frequency at first appears to require more power, the advantage is a system that is able to return to low power modes in a fraction of the time.

Peripherals however might not need to run at the CPU’s clock frequency.  One solution to this conundrum is to pre-scale the clock to the core and peripherals, thereby ensuring the dynamic power consumption of the different parts is kept to a minimum.  If the peripherals can further operate without the supervision of the CPU, we realize that a flexible clocking system is a vital requirement for energy efficient microcontrollers.

The obvious way for microcontrollers to use less energy is to allow the CPU to stay asleep while the peripherals are active, and so the development of peripherals that can operate with minimum or no intervention from the CPU is another worthy consideration for microcontroller designers.  When peripherals look after themselves, the CPU can either solve other high level tasks or simply fall asleep, saving energy either way.

With advanced sequence programming, routines for operating peripherals previously controlled by the CPU can be handled by the peripherals themselves.  The use of a DMA controller provides a pragmatic approach to autonomous peripheral operation.  Helping to offload CPU workload to peripherals, a flexible DMA controller can effectively handle data transfers between memory and communication or data processing interfaces.

Of course there’s little point in using autonomous peripherals to relieve the burden of the CPU if they’re energy hungry.  Microcontroller makers also need to closely consider the energy consumption of peripherals such as serial communication interfaces, data encryption/decryption engines, display drivers and radio communication peripherals.  All peripherals must be efficiently implemented and optimized for energy consumption in order to fulfill the application’s need for a low system level energy consumption.

Taking the autonomy ideal a step further, the introduction of additional programmable interconnect structures into a microcontroller enable peripherals to talk to peripherals without the intervention of the CPU, thereby reducing energy consumption even further.  A typical example of a peripheral talking to another peripheral would be an ADC conversion periodically triggered by a timer. A flexible peripheral interconnect allows direct hardware interaction between such peripherals, solving the task while the CPU is in its deepest sleep state.
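Putting the timer-to-ADC example together with a DMA transfer gives a feel for how far the CPU can stay out of the loop; as before, every register name below is a hypothetical placeholder:

    #define N_SAMPLES 64
    static volatile uint16_t samples[N_SAMPLES];

    void setup_autonomous_sampling(void)
    {
        TIMER->PERIOD = TICKS_PER_100MS;       /* periodic trigger, no CPU involved */
        PRS->ROUTE    = TIMER0_TRIG_TO_ADC0;   /* peripheral interconnect: timer fires ADC */
        DMA->SRC      = (uint32_t)&ADC->DATA;  /* each result drains straight to RAM */
        DMA->DST      = (uint32_t)samples;
        DMA->COUNT    = N_SAMPLES;
        DMA->IRQ_EN   = DMA_DONE_IRQ;          /* wake the CPU only when the buffer is full */
        enter_deep_sleep();                    /* CPU sleeps through all 64 conversions */
    }

The CPU wakes once per 64 samples instead of once per sample – a difference that shows up directly in the average current.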

The third part of this three part article explores the tools and techniques available for energy debugging.

Boosting energy efficiency – How microcontrollers need to evolve

Monday, February 21st, 2011 by Oyvind Janbu

Whatever the end product, all designers have specific tasks to solve, and their solutions will be influenced by the resources that are available and the constraints of cost, time, physical size, and technology choice.  At the heart of many a good product, the ubiquitous microcontroller often has a crucial influence on system power design, and particularly in a brave new world concerned with energy efficiency, users are entitled to demand greater service from it.  The way microcontrollers are built and operate needs to evolve dramatically if they are to achieve the best possible performance from limited battery resources.

Bearing in mind that the cost of even a typical coin cell battery can be relatively high compared to that of a microcontroller, there are obvious advantages in designing a system that offers the best possible energy efficiency.  First, it can enable designers to reduce the cost and size of a battery.  Second, it can enable designers to significantly extend the lifetime of a battery, consequently reducing the frequency of battery replacement and, for certain products, the frequency, cost, and ‘carbon footprint’ associated with product maintenance call-outs.

Microcontrollers, like many other breeds of electronic components, are these days very keen to stress their ‘ultra low power’ credentials, which is perfectly fine and appropriate where a device’s dynamic performance merits it; however, with a finite amount of charge available from a battery cell, it is how a microcontroller uses energy (i.e. power over the full extent of time) that needs to be more closely borne in mind.

Microcontroller applications improve their energy efficiency by operating in several states – most notably active and sleep modes that consume different amounts of energy.

Product designers need to minimize the product of current and time over all phases of microcontroller operation, throughout both active and sleep periods (Figure 1). Not only does every microamp count, but so does every microsecond that every function takes.  This relationship between current and time makes the comparison of 8- and 16-bit microcontrollers with 32-bit microcontrollers less straightforward. Considering their deep-sleep current consumption alone, it is easy to understand why 8-bit or 16-bit microcontrollers have held an attractive position in energy sensitive applications, where microcontroller duty cycles can be very low.  A microcontroller may after all stay in a deep sleep state for perhaps 99% of the time.
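The arithmetic behind this duty-cycle argument is simple; with illustrative numbers:

    I_avg = D x I_active + (1 - D) x I_sleep
    Example: D = 1%, I_active = 10 mA, I_sleep = 1 µA
    I_avg = 0.01 x 10 mA + 0.99 x 1 µA ≈ 101 µA

Note that even at a 1% duty cycle the active phase dominates the average completely – which is why cutting active time with a faster processor can matter more than shaving another fraction of a microamp off the sleep current.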

However, if product designers are concerned with every microamp and microsecond every function takes, then a 32-bit microcontroller should be considered for even the ‘simplest’ of product designs.  The higher performance of 32-bit processors enables the microcontroller to finish tasks quicker so that it can spend more time in the low-power sleep modes, which lowers overall energy consumption.  32-bit microcontrollers are therefore not necessarily ‘application overkill’.

More than that though, even simple operations on 8-bit or 16-bit variables can need the services of a 32-bit processor if system energy usage goals are to be achieved.  By harnessing the full array of low-power design techniques available today, 32-bit cores can offer a variety of low-power modes with rapid wake-up times that are on par with 8-bit microcontrollers.

There is a common misconception that switching from an 8-bit microcontroller to a 32-bit microcontroller will result in bigger code size, which directly affects the cost and power consumption of end products.  This is borne of the fact that many people have the impression that 8-bit microcontrollers use 8-bit instructions and 32-bit microcontrollers use 32-bit instructions.  In reality, many instructions in 8-bit microcontrollers are 16-bit or 24-bit in length.

The ARM Cortex-M3 and Cortex-M0 processors are based on Thumb-2 technology, which provides excellent code density.  Thumb-2 microcontrollers have 16-bit as well as 32-bit instructions, with the 32-bit instruction functionality a superset of the 16-bit version.  Typical output from a C compiler is around 90% 16-bit instructions, with the 32-bit versions used only when an operation cannot be performed with a 16-bit instruction.  As a result, most of the instructions in an ARM Cortex microcontroller program are 16 bits – smaller than many of the instructions in 8-bit microcontrollers – so a 32-bit processor typically produces less compiled code than 8- or 16-bit microcontrollers.

The second part in this three part series looks deeper at the issues around microcontroller sleep modes.

Is adoption risk real?

Wednesday, January 26th, 2011 by Robert Cravotta

I recently received sales material for solar power panels. According to the literature, I can buy a solar power system that will pay for itself and then continue generating returns for the next 30 to 40 years without the risks associated with investing in stocks. Something about this pitch smacks of overlooking areas of risk when adopting an immature technology. Perhaps I am merely a pessimist on the current state of technology for solar energy, but I think there are significant adoption risks that are analogous to ones that I have had to consider with other technologies.

According to the literature supplied to me, I can buy solar panels with a 30% tax credit and a generous rebate program. Despite the steep discount, these panels will take at least 5 years to reach a breakeven point – and that point assumes I can choose the perfectly sized system for my house. The system comes with a 10 year bumper-to-bumper warranty and a 25 year manufacturer’s warranty. This sounds like a great deal, right?

To reach a breakeven point in 5 years, the solar panels need to provide better than a 14.4% annual rate of return on my initial investment. That is quite a healthy rate of return for any static installation to sustain for many years, but I am willing to consider that it might be realistic. I suspect that this rate of return does not include the costs of dismantling, maintaining, and replacing the solar panels – but for this scenario I am willing to treat those as free activities with no future costs.
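As a sanity check on that figure using plain compound-interest arithmetic: (1 + 0.144)^5 ≈ 1.96, so a 14.4% annual return compounds to roughly double the initial outlay in five years, which is the point at which cumulative gains equal the original investment (the rule of 72 agrees: 72 / 14.4 = 5 years).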

I expect that solar power technology will continue to improve each year – in fact, I expect that the rate of improvement of the solar conversion efficiency might mirror Moore’s law about transistor density in some analogous fashion. If this is true – and the organizations providing the rebates and tax credit subsidies are counting on it – solar power technology will be somewhere between four and eight times more efficient in 5 years than a system I would install today.

This scenario strongly reminds me of the opportunities to buy desktop computers in the 1980s that were ‘future proofed’ by a motherboard designed to accept the next generation processor as a drop-in upgrade whenever you wanted it. My experience with those systems was that the extra complexity on the motherboard was not worth the extra expense.

Additionally, there were other things in the system that also changed that made using the motherboard beyond a few years a bad idea – namely, the operating system kept evolving, the device drivers kept evolving, and both of these provided no support for “old and obsolete” peripherals and modules. It was much cheaper, easier, and safer to buy what you needed when you needed it – and then replace it with the next round of available devices when you needed to upgrade.

Am I inappropriately applying this lesson from the past to solar power? According to my lessons learned, I should wait a few more years and realize the resulting improvement in the cost and efficiency of solar power – in other words, I might come out ahead if I wait and do nothing today. I believe this is the classic condition of an early adopter. Have you experienced this type of trade-off when choosing components and features for your embedded designs?

Energy Harvesting Sources

Friday, June 18th, 2010 by Robert Cravotta

In my previous post about RF energy harvesting, I focused on a model of intentionally broadcasting RF energy to ensure that the ambient energy in the environment was sufficient and consistent enough to power, on demand, devices located in difficult, unsafe, or expensive to reach locations. This approach is the basis for many RFID solutions. Intentionally delivering energy by broadcasting can also simplify the energy harvesting system: when the system only needs to operate in the presence of sufficient energy, the device may not need to implement a method of storing and managing energy during periods when there is too little energy to harvest.

In addition to harvesting RF energy, designers have several options, such as thermal differentials, vibrations, and solar energy, for extracting useful amounts of ambient energy. Which type(s) of energy a designer chooses to harness depends significantly on the specific location of the end device within the environment. The table identifies the magnitude of energy that a properly equipped device might expect to extract if placed in the appropriate location. The table also identifies the opportunities for extracting energy from a user with a wearable device. The amount of energy available from a human user is typically two to three orders of magnitude lower than that available in ideal industrial conditions.

Characteristics of ambient and harvested power energy sources (source: imec)

The Micropower Energy Harvesting paper by R.J.M. Vullers, et al., provides a fair amount of detailed information about each type of energy harvesting approach, which I summarize here. Solar or photovoltaic harvesters can collect energy from both outdoor and indoor light sources. Harvesting outdoor light offers the highest energy density when the device is used in direct sun; however, harvesting indoor light can perform comparably with the other forms of energy harvesting listed in the table. Using photovoltaic harvesting indoors requires fine-tuned cells that accommodate the different spectral composition of the light and the lower level of illumination compared to outdoor lighting.

Harvesting energy from motion and vibration may use electrostatic, piezoelectric, or electromagnetic transducers. All vibration-harvesting systems rely on mechanical components that vibrate with a natural frequency close to that of the vibration source, such as a compressor, motor, pump, or blower, or even fans and ducts, to maximize the coupling between the vibration source and the harvesting system. The amount of energy that is extractable from vibrations usually scales with the cube of the vibration frequency and the square of the vibration amplitude.
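Under that scaling relation (extractable power proportional to f³ × A²), doubling the vibration amplitude quadruples the available power, while doubling the vibration frequency increases it eightfold – which is why harvester placement near a strong, high-frequency source matters so much.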

Harvesting energy with electrostatic transducers relies on a voltage change across a polarized capacitor due to the movement of one moveable electrode. Harvesting energy with piezoelectric transducers relies on motion in the system causing the piezoelectric capacitor to deform, which generates a voltage. Harvesting energy with electromagnetic transducers relies on a change in magnetic flux, due to the relative motion of a magnetic mass with respect to a coil, that generates an AC voltage across the coil.

Harvesting energy from thermal gradients relies on the Seebeck effect, where a junction made from two dissimilar conductors causes current to flow when the conductors are at different temperatures. A thermopile – a device formed by a large number of thermocouples placed between a hot and a cold plate and connected thermally in parallel and electrically in series – is the core element of a thermal energy harvester. The power density of this energy harvesting technique increases as the temperature difference increases.
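In equation form, the open-circuit voltage of a thermopile with n thermocouples of Seebeck coefficient S is approximately V = n × S × ΔT, so the power it can deliver into a matched load grows with the square of the temperature difference (P ∝ V² ∝ ΔT²).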

The majority of these harvesting systems have a relatively large size and are fabricated by standard or fine machining. Advances in the research, development, and commercialization of MEMS promise to decrease the cost and increase the energy collection efficiency of energy harvesting devices.

If you would like to be an information source for this series or provide a guest post, please contact me at Embedded Insights.

[Editor's Note: This was originally posted on the Embedded Master]