Question of the Week Channel

The Question of the Week challenges how designers think about embedded design concepts by touching on topics that cover the entire range of issues affecting embedded developers, such as how and why different trade-offs are made to survive in a world of resource- and time-constrained designs.

How could easing restrictions for in-flight electronics affect designs?

Wednesday, March 28th, 2012 by Robert Cravotta

The FAA (Federal Aviation Administration) has given pilots on some airlines permission to use iPads in the cockpit in place of paper charts and manuals. To gain this permission, those airlines had to demonstrate that the tablets did not emit radio waves that could interfere with aircraft electronics. In this case, the airlines only had to certify the types of devices the pilots wanted to use rather than all of the devices that passengers might want to use. This type of requirements-easing is a first step toward possibly allowing passengers to use electronics during takeoffs and landings.

Devices that use an E-ink display, such as the Amazon Kindle, emit less than 0.00003 volts per meter of radio interference when in use (according to tests conducted at EMT Labs) – well under the 100 volts per meter of electrical interference that the FAA requires airplanes to withstand. Even if every passenger were using a device with emissions this low, the combined emissions would not exceed the minimum shielding requirements for aircraft.

One challenge, though, is whether allowing some devices but not others to operate throughout a flight would create situations where enough passengers accidentally leave on or operate their unapproved devices that, taken together, all of the devices exceed the safety constraints for radio emissions.
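As a rough sanity check (assuming, purely for illustration, a 200-passenger cabin and a naive linear sum of each device's emissions – which overstates how uncorrelated sources actually combine): 200 × 0.00003 volts per meter = 0.006 volts per meter, still more than four orders of magnitude below the 100 volts per meter requirement. The practical concern is less this arithmetic and more that unapproved devices may emit far more than the E-ink readers measured above.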

On the other hand, being able to operate an electronic device throughout a flight would be a huge selling point for many people – and this could give product designers further economic incentive to push their designs within the emission limits that could gain permission from the FAA.

Is the talk about easing the restrictions for using electronic gadgets during all phases of a flight wishful thinking, or is the technology advancing far enough to offer devices that can operate well below the safety limits for unrestricted use on aircraft? I suspect this ongoing dialogue between the FAA, airlines, and electronics manufacturers could yield some worthwhile ideas on how to ensure proper certification, operation, and failsafe functions within an aircraft environment – ideas that could make unrestricted use of electronic gadgets a real possibility in the near future. What do you think will help this idea along, and what challenges need to be resolved before it can become a reality?

When is running warm – too hot?

Wednesday, March 21st, 2012 by Robert Cravotta

Managing the heat emanating from electronic devices has always been a challenge and a design constraint. Mobile devices present an interesting set of design challenges because users want to operate them across a much wider range of environments than a server sitting in a strictly climate-controlled room will ever encounter. Mobile devices place additional design burdens on developers because the size and form factor of the devices restrict the options for managing the heat generated while the device is operating.

The new iPad is the latest device whose technical specifications may or may not be compatible with what users expect from their devices. According to Consumer Reports, the new iPad can reach operating temperatures up to 13 degrees Fahrenheit higher (when plugged in) than an iPad 2 performing the same tasks under the same operating conditions. Using a thermal imaging camera, Consumer Reports measured a peak temperature of 116 degrees Fahrenheit on the front and rear of the new iPad while it was playing Infinity Blade II. The peak heat spot was near one corner of the device (image in the referenced article).

This type of peak temperature is perceived as warm to very warm to the touch for short periods of time. However, some people may consider a peak temperature of 116 degrees Fahrenheit too warm for a device that they plan to hold in their hands or on their lap for extended periods of time.

There are probably many engineering trade-offs that were considered in the final design of the new iPad. The feasible options for heat sinks or for distributing heat away from the device were probably constrained by the iPad’s thin form factor, dense internal components, and larger battery requirements. Integrating a higher pixel-density display certainly constrained how the system and graphics processing were architected to deliver an improvement in display quality while maintaining an acceptable battery life.

Are consumer electronics bumping up against the edge of what designers can deliver when balancing small form factors, high-performance processing and graphics, acceptable device weight, and long enough battery life? Are there design trade-offs still available to designers to further push where mobile devices can go while staying within the constraints of acceptable heat, weight, size, cost, and performance? Have you ever dealt with a system that went from running warm to running too hot? If so, how did you deal with it?

Which processor is beginner friendly?

Wednesday, March 14th, 2012 by Robert Cravotta

My first experience with programming involved mailing punch cards to a computer for batch processing. The results of the run would show up about a week later; the least desirable result was finding out there was a syntax error in one of the cards. I moved up in the world when we gained access to a teletype that allowed us to enter the programs directly to the computer; however, neither of these experiences hinted at the true complexity that embedded programming would entail.

The Z80 was the first processor I worked with that truly exposed its innards to me. A key reason for this was the substantial hobbyist community that had grown up around the Z80. I had (and still have in storage) a cornucopia of technical documents that exposed in detail every part of the system and ways to use each part effectively. When I look back on those memories, I marvel at the amount of information that was available despite the lack of any online connectivity – or in other words, no internet.

I found significant value in being able to examine other people’s code in real applications. Today’s development support often includes application notes and sample code that address a wide range of use cases for a target processor. Online developer communities provide a valuable opportunity for developers to find example material and, even better, to query the community for examples of how to implement a specific function on that target processor.

I would like to confirm that the specific capabilities of the processor are less important to a beginner (because they all provide a good minimum set of functions) and that good development tools, tutorials and sample code, and responsive developer community support matter more.

Which processor (or processors) do you find to be beginner friendly, or which provide the right set of development support to make getting started with the processor faster and easier? Does using an RTOS, operating system, and/or middleware make this easier or harder? Which processors are the best examples of the type of developer community support you find most valuable?

Do you refactor embedded software?

Wednesday, February 29th, 2012 by Robert Cravotta

Software refactoring is an activity where software is transformed in a way that preserves its external behavior while improving its internal structure. I am aware of software development tools that assist with refactoring application software, but it is not clear whether design teams engage in software refactoring for embedded code – especially for control systems.
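To make the idea concrete, here is a minimal sketch (in C, with a hypothetical memory-mapped LED register – the address, pin assignments, and names are purely illustrative) of what a behavior-preserving refactoring might look like in embedded code: the register writes the hardware sees are identical before and after, but the intent is easier to read and modify.

#include <stdint.h>

/* Before: working code, but magic numbers and a repeated register address. */
void update_leds_before(int mode)
{
    if (mode == 1) {
        *(volatile uint32_t *)0x40020014u |= (1u << 5);
    } else if (mode == 2) {
        *(volatile uint32_t *)0x40020014u |= (1u << 6);
    } else {
        *(volatile uint32_t *)0x40020014u &= ~((1u << 5) | (1u << 6));
    }
}

/* After: same external behavior, clearer internal structure. */
#define LED_PORT_ODR   (*(volatile uint32_t *)0x40020014u)  /* hypothetical register */
#define LED_RUN        (1u << 5)
#define LED_FAULT      (1u << 6)

static void led_on(uint32_t pins)  { LED_PORT_ODR |= pins;  }
static void led_off(uint32_t pins) { LED_PORT_ODR &= ~pins; }

void update_leds(int mode)
{
    switch (mode) {
    case 1:  led_on(LED_RUN);               break;
    case 2:  led_on(LED_FAULT);             break;
    default: led_off(LED_RUN | LED_FAULT);  break;
    }
}

The catch for embedded teams, of course, is proving the “same external behavior” part, which is exactly the verification cost discussed below.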

Refactoring was not practiced in the projects I worked on; in fact, the team philosophy was to make only the smallest change necessary to effect the needed change when working with a legacy system. First, we never had the schedule or budget needed just to make the software “easier to understand or cheaper to modify.” Second, changing the software for “cosmetic” purposes could increase downstream engineering effort, especially in verifying that the changes did not break the behavior of the system under all relevant operating conditions. Note that many of the control projects I worked on were complex enough that it was difficult just to ascertain whether the system worked properly or only coincidentally looked like it did.

Most of the material I read about software refactoring assumes the software targets the application layer, is not tightly coupled to a specific hardware target, and is implemented in an object-oriented language such as Java or C++. Are embedded developers performing software refactoring? If so, do you perform it on all types of software, or are there types of software that you definitely include or exclude from a refactoring effort?

Are you looking at USB 3.0?

Wednesday, February 22nd, 2012 by Robert Cravotta

SuperSpeed USB, or USB 3.0, has been available in certified consumer products for the past two years. The serial bus specification includes a 5Gbps signaling rate, which represents a ten-fold increase in data rate over Hi-Speed USB. The interface relies on a dual-bus architecture that enables USB 2.0 and USB 3.0 operations to take place simultaneously, and it provides backward compatibility. Intel recently announced that its upcoming Intel 7 Series Chipset Family for client PCs and Intel C216 Chipset for servers received SuperSpeed USB certification; this may signal that 2012 is an adoption inflection point for the three-year-old specification. In addition to the ten-fold improvement in data transfers, SuperSpeed USB increases the maximum power available via the bus to devices, supports new transfer types, and includes new power management features for lower active and idle power consumption.

As SuperSpeed USB becomes available on more host-like consumer devices, will the need to support the new interface gain more urgency? Are you looking at USB 3.0 for any of your upcoming projects? If so, what features in the specification are most important to you?

Are you using Built-in Self Tests?

Wednesday, February 15th, 2012 by Robert Cravotta

On many of the projects I worked on, it made a lot of sense to implement BISTs (built-in self tests) because the systems either had safety requirements or the cost of executing a test run of a prototype system was high enough to justify the extra cost of making sure the system was in as good a shape as it could be before committing to the test. A quick search for articles about BIST techniques suggests that BIST may not be adopted as a general design technique except in safety-critical, high-margin, or automotive applications. I suspect that my literature search does not reflect reality and/or developers are using a different term for BIST.

A BIST consists of tests that a system can initiate and execute on itself, via software and extra hardware, to confirm that it is operating within some set of conditions. In designs without ECC (error-correcting code) memory, we might include tests to ensure the memory was operating correctly; these tests might be exhaustive or based on sampling, depending on the specifics of each project and the time constraints for system boot-up. To test peripherals, we could use loopbacks between specific pins so that the system could control what the peripheral would receive and confirm that outputs and inputs matched.
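The sketch below illustrates both ideas in C. It is a minimal, hypothetical example – the UART register addresses, bit definitions, and loopback mode are invented for illustration and would need to be replaced with a real part's definitions – but it shows the general shape of a memory check and a peripheral loopback check.

#include <stdint.h>
#include <stddef.h>

/* Walking-1s test over a RAM region that the application is not yet using. */
int bist_ram_walking_ones(volatile uint32_t *base, size_t words)
{
    for (size_t i = 0; i < words; i++) {
        for (uint32_t bit = 0; bit < 32u; bit++) {
            uint32_t pattern = 1u << bit;
            base[i] = pattern;
            if (base[i] != pattern)
                return -1;               /* stuck or shorted data bit */
        }
        base[i] = 0u;
    }
    return 0;
}

/* Loopback test: with the peripheral looped back on itself, transmit a
 * byte and confirm the same byte is received. Registers are hypothetical. */
#define UART_CTRL  (*(volatile uint32_t *)0x40010000u)
#define UART_TX    (*(volatile uint32_t *)0x40010004u)
#define UART_RX    (*(volatile uint32_t *)0x40010008u)
#define UART_STAT  (*(volatile uint32_t *)0x4001000Cu)
#define UART_LOOPBACK_EN  (1u << 0)
#define UART_RX_READY     (1u << 1)

int bist_uart_loopback(void)
{
    int result = -1;                     /* assume failure until proven good */

    UART_CTRL |= UART_LOOPBACK_EN;
    UART_TX = 0xA5u;
    for (uint32_t timeout = 100000u; timeout != 0u; timeout--) {
        if (UART_STAT & UART_RX_READY) {
            result = (UART_RX == 0xA5u) ? 0 : -1;
            break;
        }
    }
    UART_CTRL &= ~UART_LOOPBACK_EN;
    return result;                       /* -1 also covers a timeout */
}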

We often employed a longer and a shorter version of the BIST to accommodate boot-time requirements. The longer version was usually activated manually or only as part of a cold start (possibly with an override signal). The short version might be activated automatically upon a cold or warm start. Despite the effort we put into designing, implementing, and testing the BIST, as well as developing responses for when a BIST failed, we never actually experienced a BIST failure.
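A boot sequence along those lines might look something like the following sketch, where the reset-cause register, the override check, and the two test suites are all hypothetical placeholders for whatever a particular system provides.

#include <stdint.h>

#define RESET_CAUSE     (*(volatile uint32_t *)0x40000004u)  /* hypothetical register */
#define RESET_POWER_ON  (1u << 0)

/* Application-provided hooks (declared here only to keep the sketch compilable). */
extern int  bist_override_requested(void);   /* e.g., jumper or stored flag */
extern int  run_long_bist(void);             /* exhaustive checks, slow     */
extern int  run_short_bist(void);            /* quick sanity checks, fast   */
extern void enter_failsafe_mode(void);       /* system-specific response    */

void run_boot_bist(void)
{
    int failed;

    /* Cold starts (or an explicit request) get the long, exhaustive BIST;
     * warm starts get the short version to meet boot-time requirements. */
    if ((RESET_CAUSE & RESET_POWER_ON) || bist_override_requested())
        failed = run_long_bist();
    else
        failed = run_short_bist();

    if (failed)
        enter_failsafe_mode();
}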

Are you using BIST in your designs? Are you specifying your own test sets, or are you relying on built-in tests that reside in BIOS or third-party firmware? Are BISTs a luxury or a necessity with consumer products? What are appropriate actions that a system might make if a BIST failure is detected?

Do you ever think about endianness?

Wednesday, February 8th, 2012 by Robert Cravotta

I remember when I first learned about this thing called endianness as it pertains to ordering the higher- and lower-order bytes of data that consumes more than a single byte. The two most common ordering schemes were big and little endian. Big endian stores the most significant bytes ahead of the least significant bytes; little endian stores data in the opposite order, with the least significant bytes ahead of the most significant bytes. The times when I was most aware of endianness were when we were defining data communication streams (telemetry data in my case) that transferred data from one system to another that did not use the same type of processor. The other context where knowing endianness mattered was when the program needed to perform bitwise operations on data structures (usually for execution efficiency purposes).
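For readers who have never had to care, the short C sketch below shows both sides of the issue: detecting the host's byte order, and serializing a 32-bit telemetry word into a fixed wire order with shifts so the result is the same no matter which kind of processor runs it. (The frame layout is invented purely for illustration.)

#include <stdint.h>
#include <stdio.h>

static int host_is_little_endian(void)
{
    uint16_t probe = 0x0102u;
    return *(uint8_t *)&probe == 0x02u;  /* low-order byte stored first */
}

/* Writing bytes with shifts, rather than casting a pointer to the value,
 * keeps the wire format independent of the host's endianness. */
static void put_u32_big_endian(uint8_t *out, uint32_t value)
{
    out[0] = (uint8_t)(value >> 24);
    out[1] = (uint8_t)(value >> 16);
    out[2] = (uint8_t)(value >> 8);
    out[3] = (uint8_t)value;
}

int main(void)
{
    uint8_t frame[4];

    put_u32_big_endian(frame, 0x12345678u);
    printf("host is %s endian; wire bytes: %02X %02X %02X %02X\n",
           host_is_little_endian() ? "little" : "big",
           frame[0], frame[1], frame[2], frame[3]);
    return 0;
}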

If what I hear from semiconductor and software development tool providers is correct, only a very small minority of developers deal with assembly language anymore. Additionally, I suspect that most designers are not involved in driver development anymore either. With the abstractions that compiled languages and standard drivers offer, does endianness affect how software developers write their code? In other words, are you working with data types that abstract how the data is stored and used, or are you implementing functions in a way that requires you to know how your data is internally represented? Have software development tools successfully abstracted this concept away from most developers?

Are software development tools affecting your choice of 8-bit vs. 32-bit processors?

Wednesday, February 1st, 2012 by Robert Cravotta

I have always proposed that the market for 8-bit processors would not fade away – in fact, there are still a number of market niches that rely on 4-bit processors (such as clock movements and razors that sport a vibrating handle for shaving). The smaller processor architectures can support the lowest price points and the lowest energy consumption years before the larger 32-bit architectures can offer anything close to parity with the smaller processors. In other words, I believe there are very small application niches for which even 8-bit processors are currently too expensive or energy-hungry.

Many marketing reports have identified that the available software development tool chains play a significant role in whether a given processor architecture is chosen for a design. It seems that the vast majority of resources spent evolving software development tools are focused on the 32-bit architectures. Is this difference in how software development tools for 8- and 32-bit processors are evolving affecting your choice of processor architectures?

I believe the answer is not as straightforward as some processor and development tool providers would make it out to be. First, 32-bit processors are generally much more complex to configure than 8-bit processors, so the development environments, which often include drivers and configuration wizards, are nearly a necessity for 32-bit processors and almost a non-issue for 8-bit processors. Second, the types of software that 8-bit processors are used for are generally smaller and contend with less system-level complexity. Additionally, as embedded processors continue to find their way into ever smaller tasks, the software may need to be even simpler than current 8-bit software to meet the energy requirements of the smallest subsystems.

Do you feel there is a significant maturity difference between software development tools targeting 8- and 32-bit architectures? Do you think there is, or will be, a widening gap in the capabilities of software development tools targeting different size processors? Are software development tools affecting your choice between an 8-bit and a 32-bit processor, or are other considerations, such as the need for additional performance headroom for future-proofing, driving your decisions?

Do you employ “Brown M&Ms” in your designs?

Wednesday, January 25th, 2012 by Robert Cravotta

I have witnessed many conversations where someone accuses a vendor of forcing customers to use only their own accessories, parts, or consumables as a way to extract the largest amount of revenue out of the customer base. A non-exhaustive list of examples of such products includes parts for automobiles, ink cartridges for printers, and batteries for mobile devices. While there may be some situations where a company is trying to own the entire vertical market around their product, there is often a reasonable and less sinister explanation for requiring such compliance by the user – namely to minimize the number of ways an end user can damage a product and create avoidable support costs and bad marketing press.

The urban legend that the rock band Van Halen employed a contract clause requiring a venue to provide a bowl of M&Ms backstage, but with all of the brown candies removed, is not only true but provides an excellent example of such a non-sinister explanation. According to the autobiography of David Lee Roth (the band’s lead singer), the bowl of M&Ms with all of the brown candies removed was a nearly costless way for the band to test whether the people setting up their stage had followed all of the details in their extensive setup and venue requirements. If the band found a single brown candy in the bowl, they ordered a complete line check of the stage before they would agree that the entire stage setup met their safety requirements.

This non-sinister explanation is consistent with the types of products that I hear people complain about when they say the vendor is merely locking them into its consumables for higher revenues. However, when I examine the details, I usually see a machine, such as an automobile, that requires tight tolerances on every part; otherwise, small variations in non-approved components can combine to create unanticipated oscillations in the body of the vehicle. In the case of printers, variations in the ink formula can gum up the mechanical portions of the system across the wide range of temperature and humidity environments in which printers are operated. And mobile device providers are very keen to keep the rechargeable batteries in their products from exploding and hurting their customers.

Do you employ some clever “Brown M&M” in your design that helps signal when components may not play together well? This could be as simple as performing a version check of the software before allowing the system to go into full operation. Or is the concept of “Brown M&Ms” just a story to cover greedy practices by companies?
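As a sketch of what that simplest case could look like in C (the module names, version numbers, and the function that reads the version are all hypothetical – the point is only the shape of the check):

#include <stdint.h>
#include <stdbool.h>

#define REQUIRED_MODULE_MAJOR  3u   /* interface version this build was qualified against */
#define MINIMUM_MODULE_MINOR   2u

struct module_version {
    uint8_t major;
    uint8_t minor;
};

/* Application-provided hook that queries the attached module over its bus. */
extern bool read_module_version(struct module_version *v);

/* Returns true only if the attached module is one this firmware was tested with;
 * otherwise the system should stay in a limited or safe mode. */
bool module_is_compatible(void)
{
    struct module_version v;

    if (!read_module_version(&v))
        return false;                         /* no response at all */
    if (v.major != REQUIRED_MODULE_MAJOR)
        return false;                         /* breaking interface change */
    return v.minor >= MINIMUM_MODULE_MINOR;   /* older revisions were never qualified */
}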

Are you using accelerometers and/or gyroscopes?

Wednesday, January 18th, 2012 by Robert Cravotta

My daughter received a Nintendo 3DS for the holidays. I naively expected the 3D portion of the handheld gaming machine to be a 3D display in a small form factor. Wow, was I wrong. The augmented reality games combine the 3D display with the position and angle of the gaming machine. In other words, what the system displays changes to reflect how you physically move the game machine around.

Another use of embedded accelerometers and/or gyroscopes that I have heard about is to enable a system to protect itself when it is dropped. When the system detects that it is falling, it has a brief moment in which it tries to lock down the mechanically sensitive portions of the system so that it incurs a minimum of damage to the components inside when it hits the ground.
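The detection side of that idea can be quite simple; the sketch below (in C, with hypothetical sensor-read and lockdown hooks and illustrative threshold values) flags free fall when the measured acceleration magnitude stays well below 1 g for several consecutive samples.

#include <math.h>

#define FREEFALL_THRESHOLD_G  0.4f   /* well below the ~1 g of normal handling */
#define FREEFALL_SAMPLES      5      /* ~25 ms at a 200 Hz sample rate */

/* Application-provided hooks (hypothetical). */
extern void read_accel_g(float *x, float *y, float *z);
extern void park_sensitive_mechanics(void);

/* Call once per accelerometer sample. */
void freefall_monitor_step(void)
{
    static int low_g_count = 0;
    float x, y, z;

    read_accel_g(&x, &y, &z);
    float magnitude = sqrtf(x * x + y * y + z * z);

    if (magnitude < FREEFALL_THRESHOLD_G) {
        if (++low_g_count >= FREEFALL_SAMPLES)
            park_sensitive_mechanics();   /* lock down before impact */
    } else {
        low_g_count = 0;                  /* back to normal handling */
    }
}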

Gyroscopes can be used to stabilize images viewed/recorded via binoculars and cameras by detecting jitter in the way the user is holding the system and making adjustments to the sensor subsystem.
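A heavily simplified, single-axis version of that stabilization loop might look like the following; the gyro and lens-actuator functions, the focal length, and the leak factor are hypothetical, and a real design would handle multiple axes, calibration, and intentional panning far more carefully.

#include <math.h>

#define SAMPLE_PERIOD_S  0.001f    /* 1 kHz gyro sampling (illustrative) */
#define FOCAL_LENGTH_MM  28.0f     /* hypothetical lens */

/* Application-provided hooks (hypothetical). */
extern float read_gyro_pitch_rate_rad_per_s(void);
extern void  set_lens_shift_mm(float shift_mm);

/* Call once per gyro sample. */
void stabilizer_step(void)
{
    static float jitter_angle_rad = 0.0f;

    /* Integrate the angular rate to estimate how far the camera has tilted. */
    jitter_angle_rad += read_gyro_pitch_rate_rad_per_s() * SAMPLE_PERIOD_S;

    /* Leak the estimate slowly toward zero so slow, intentional motion
     * is not fought as if it were jitter. */
    jitter_angle_rad *= 0.999f;

    /* Shift the lens to move the image back by an equivalent amount. */
    set_lens_shift_mm(-FOCAL_LENGTH_MM * tanf(jitter_angle_rad));
}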

As the prices of accelerometers and gyroscopes continue to benefit from the scale of their adoption in gaming systems, the opportunities for including them in other embedded systems improve. Are you using accelerometers and/or gyroscopes in any of your designs? Are you aware of any innovative forms of inertial sensing and processing that might provide inspiration for new capabilities for other embedded developers?