Machine Sensing Channel

Machine sensing is at the core of enabling designers to build autonomous systems. If a system cannot recognize subtle context clues, users cannot trust it to work autonomously in complex environmental conditions. This series focuses on the state of the art in machine sensing technologies and on the signal processing advances that take sensing to the next level of subtlety.

Are you using accelerometers and/or gyroscopes?

Wednesday, January 18th, 2012 by Robert Cravotta

My daughter received a Nintendo 3DS for the holidays. I naively expected the 3D portion of the handheld gaming machine to be a 3D display in a small form factor. Wow, was I wrong. The augmented reality games combine the 3D display with the position and angle of the gaming machine; in other words, what the system displays changes to reflect how you physically move the game machine around.

Another use of embedded accelerometers and/or gyroscopes that I have heard about is to enable a system to protect itself when it is dropped. When the system detects that it is falling, it has a brief moment in which it tries to lock down the mechanically sensitive portions of the system so that it incurs minimal damage to sensitive components when it impacts the ground.
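A minimal sketch of how such drop protection might detect the onset of a fall, assuming the accelerometer reports per-axis readings in units of g (the function name and threshold are illustrative, not from any particular part's datasheet):

```python
import math

def is_free_fall(ax, ay, az, threshold_g=0.3):
    """Detect free fall from one accelerometer sample.

    During a drop the device accelerates with gravity, so all three
    axes read near 0 g; at rest the vector magnitude is ~1 g.
    Inputs are in units of g; threshold_g is an assumed tuning value.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude < threshold_g
```

In practice the firmware would require the condition to hold for several consecutive samples before locking down the mechanics, to avoid reacting to momentary jolts.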

Gyroscopes can be used to stabilize images viewed/recorded via binoculars and cameras by detecting jitter in the way the user is holding the system and making adjustments to the sensor subsystem.

As the price of accelerometers and gyroscopes continues to benefit from the scale of their adoption in gaming systems, the opportunities for including them in other embedded systems improve. Are you using accelerometers and/or gyroscopes in any of your designs? Are you aware of any innovative forms of inertial sensing and processing that might provide inspiration for new capabilities for other embedded developers?

The Express Traffic Lane (It’s Not the Computer, It’s How You Use It)

Friday, September 24th, 2010 by Max Baron

Less than a week ago, a section of the diamond lane on California’s southbound Interstate 680 freeway became sensor-controlled or camera-computerized. A diamond lane, for those of us not familiar with the term, is an express traffic lane reserved for high-occupancy automobiles or for vehicles that use environmentally friendly fuels or less gasoline.

Also known as the carpool lane, the diamond lane is usually marked by white rhombuses (diamonds) painted on the asphalt to warn solo drivers that they are not allowed to use it. The diamond lane provides fast, free commuting for carpoolers, motorcyclists, and diamond lane sticker owners. Solo drivers must use the remaining lanes, which are usually slower during periods of peak traffic. These single drivers, however, are now allowed to use a section of the diamond lane on California’s southbound Interstate 680 freeway, but they have to pay for it.

The camera-computerized or sensor-activated system introduced just a few days ago doesn’t make sense considering the state of the art of available technology.

Here is how the complex system works. An automobile carrying only its driver must have a FasTrak transponder, allowing a California-designated road authority to charge a fee for using this newly created toll diamond lane. Mounted on a car’s windshield, the FasTrak transponder uses RFID technology to read the data required to subtract the passage fee from the car owner’s prepaid debit account. The fee reflects the traffic level and changes according to the time of day; it is displayed on pole-mounted digital displays.
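The charging scheme described above, a fee that varies with traffic level and time of day and is debited from a prepaid account, could be sketched as follows (the rates, peak hours, and congestion scaling here are assumptions for illustration, not the actual FasTrak tariff):

```python
def charge_toll(account_balance, hour, congestion_level):
    """Sketch of a congestion-priced toll debit.

    account_balance: prepaid balance in dollars
    hour: hour of day, 0-23
    congestion_level: 0.0 (empty lane) to 1.0 (saturated), assumed
    Returns (new_balance, fee_charged).
    """
    base_fee = 0.50                                      # illustrative base rate
    peak = 2.0 if 6 <= hour < 10 or 15 <= hour < 19 else 1.0
    fee = round(base_fee * peak * (1 + congestion_level), 2)
    if account_balance < fee:
        raise ValueError("insufficient prepaid balance")
    return account_balance - fee, fee
```

The pole-mounted displays would simply publish the fee that this pricing function currently yields.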

To avoid being charged when there are also passengers in the automobile, a FasTrak transponder owner must remove the transponder from the car’s windshield. A solo driver caught by traffic enforcement (the California Highway Patrol) without a FasTrak transponder is fined for using the diamond lane without paying for the privilege. Other schemes, implemented for instance at a toll plaza, trigger a camera to take a photo of the delinquent vehicle and its license plate, after which a violation notice is sent to the registered owner of the vehicle.

Considering the complexity of the system from the viewpoint of existing digital cameras, embedded computers, and cellular telephony, and the presence of police enforcement on the freeway, one has to wonder about the necessity of FasTrak devices or police involvement.

If we are to follow descriptions found in publications such as San Jose’s Mercury daily newspaper (reference) and the freeway’s information brochure (reference), the system seems unnecessarily disconnected: an RFID tag is used to pay for solo driving, but police have to check whether a vehicle without a transponder is occupied by just the driver or by additional people. If a FasTrak-less solo driver is detected, police must stop the delinquent car and write a ticket. Based on the description, it seems that the cameras or sensors implemented are unable to differentiate between illegal solo drivers and multiple-passenger cars. If true, these cameras or sensors use technology that was state of the art in the 1990s. They only seem able to detect a large object well enough to report to police the number of vehicles using the lane without transponders, leaving the rest to law enforcement.

Today’s embedded computers equipped with simple cameras can read numbers and words; FasTrak transponders should not be required. Existing systems can identify human shapes and features in cars well enough to differentiate between multiple occupants and solo drivers, and with adequate software and illumination they can continue to function correctly despite most adverse weather or lighting conditions. The word “CARPOOL”, displayed by the driver of a multiple-person car, could be read by the computer to ensure that the system does not charge for use of the toll lane. The license plate of a solo driver’s automobile can be linked in a database to a debit account or to the name and address of the owner.

We estimate the price of a pole-mounted low-power system of this kind, including wireless communication, at a pessimistic $9,800, as follows:

- ruggedized camera: $1,500
- video road image recognition engine plus software, such as the one Renesas designed for automobiles (reference): $2,000
- controlling embedded computer, including a real-time OS: $900
- wireless communication block: $600
- components for remote testing, upgrades, and servicing: $1,000
- battery and power supply: $1,000
- solar panels, if required: $800
- enclosure: $2,000
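Tallying the itemized estimate confirms the $9,800 figure:

```python
# Component costs from the estimate above, in dollars.
components = {
    "ruggedized camera": 1500,
    "video road image recognition engine plus software": 2000,
    "controlling embedded computer with real-time OS": 900,
    "wireless communication block": 600,
    "remote testing, upgrade, and servicing components": 1000,
    "battery and power supply": 1000,
    "solar panels (if required)": 800,
    "enclosure": 2000,
}

total = sum(components.values())
print(f"estimated per-pole system cost: ${total:,}")  # prints $9,800
```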

In a modern system there would be no fines, just charges made to driver bank accounts if so elected, or monthly statements to be paid along with the electrical, gas, and other services for which monthly payments have found acceptance. But have we been told everything? Do we really know what types of systems are looking today at the traffic on freeway 680’s toll-enabled express lane? This may be just step one.

Giving machines a fine sense of touch

Tuesday, September 14th, 2010 by Robert Cravotta

Two articles published online on the same day (September 12, 2010) in Nature Materials describe the efforts of two research teams, at UC Berkeley and Stanford University, that have each developed and demonstrated a different approach to building artificial skin that can sense very light touches. Both systems have reached a pressure sensitivity comparable to what a human relies on to perform everyday tasks: they can detect pressure changes of less than a kilopascal, an improvement over earlier approaches that could only detect pressures of tens of kilopascals.

The Berkeley approach, dubbed “e-skin”, uses germanium/silicon nanowire “hairs” that are grown on a cylindrical drum and then rolled onto a sticky polyimide film substrate; the substrate can also be made from plastics, paper, or glass. The nanowires are deposited onto the substrate to form an orderly structure. The demonstrated e-skin covers a 7x7cm surface with an 18×19 pixel matrix; each pixel contains a transistor made of hundreds of the nanowires. A pressure-sensitive rubber was integrated on top of the matrix to support sensing. The flexible matrix is able to operate from less than a 5V power supply, and it has continued operating after being subjected to more than 2,000 bending cycles.
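Reading such a pixel matrix amounts to scanning the rows and columns and thresholding each transistor’s response. A sketch, assuming a hypothetical `read_pixel` driver callback that returns a normalized pressure signal per pixel (the callback, dimensions as parameters, and threshold are illustrative):

```python
def scan_pressure_matrix(read_pixel, rows=18, cols=19, threshold=0.05):
    """Scan an e-skin-style pixel matrix and list the pixels under pressure.

    read_pixel(r, c) is an assumed driver callback returning a normalized
    signal change for the transistor at row r, column c (the pressure-
    sensitive rubber layer modulates each transistor under load).
    """
    touched = []
    for r in range(rows):
        for c in range(cols):
            if read_pixel(r, c) > threshold:
                touched.append((r, c))
    return touched
```

The resulting list of active pixels is the raw input that any higher-level touch recognition algorithm would then interpret.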

In contrast, the Stanford approach sandwiches a thin film of rubber molded into a grid of tiny pyramids, packing up to 25 million pyramids per cm2, between two parallel electrodes. The pyramid grid makes the rubber behave like an ideal spring, supporting compression and rebound fast enough to distinguish between multiple touches that follow each other in quick succession. Pressure on the sensor compresses the rubber film and changes the amount of electrical charge it can store, which enables the controller to detect the change in the sensor. According to the team, the sensor can detect the pressure exerted by a 20mg bluebottle fly carcass placed on the sensor. The Stanford team has been able to manufacture a sheet as large as 7x7cm, similar to the Berkeley e-skin.
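The charge-storage change follows from the parallel-plate relation C = ε0·εr·A/d: compressing the rubber shrinks the electrode gap d, which raises the capacitance the controller measures. A sketch of that relationship (the dielectric constant and dimensions here are assumed for illustration, not Stanford’s published values):

```python
def capacitance_pf(area_mm2, gap_um, eps_r=3.0):
    """Parallel-plate capacitance of the rubber dielectric sandwich.

    C = eps0 * eps_r * A / d. Compressing the pyramids reduces the gap d,
    raising C, which the readout electronics interpret as pressure.
    eps_r is an assumed relative permittivity for the rubber.
    Returns capacitance in picofarads.
    """
    EPS0 = 8.854e-12                  # vacuum permittivity, F/m
    area_m2 = area_mm2 * 1e-6
    gap_m = gap_um * 1e-6
    return EPS0 * eps_r * area_m2 / gap_m * 1e12
```

For a 1 mm2 electrode, squeezing the gap from 10 µm to 8 µm raises the capacitance by 25%, a change well within the reach of ordinary capacitance-to-digital readout.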

I am excited by these two recent developments in machine sensing. The uses for this type of touch sensing are endless, spanning industrial, medical, and commercial applications. A question comes to mind: these are both sheets (arrays) of multiple sensing points, so how similar will the detection and recognition algorithms be to the touch interface and vision algorithms being developed today? Or will interpreting this type of touch sensing require a completely different approach and thought process?