July/Aug 2009

Whales, warships, and wolves. Cows, cars, and crashes. Friends, foes, and failures. All of these things have played a role in what we today call M2M (machine-to-machine) technology. For concepts thought of as leading-edge today, the history goes back less than a century. The roots of M2M are buried deep in the development of other technologies, often created for military applications. A look back at where M2M originated and how we got to where we are today is a fascinating, and often surprising, exercise.

Many Pillars, One Foundation
Machine-to-machine technology is often wide-ranging and all-encompassing, but for all intents and purposes can fall into one of “Six Pillars” as defined by M2M magazine: RFID (radio frequency identification), telematics, telemetry, sensor networking, smart services, and remote monitoring.

If you want to know how M2M came to be, the single foundation stone would probably be the transistor and the microchips it spawned. In the sense that today’s technology is almost exclusively based on the microprocessor and complementary chips, which in turn are collections of transistors, this would be a logical leap. But before there were transistors, in the era of vacuum tubes and wire looms, there were attempts at what would evolve into M2M.

Take RFID. The concept is that radio waves emitted by a reader are reflected back by a passive tag, or answered with a powered, amplified transmission by an active tag, giving the reader information such as a unique code or identification. The tag itself is a transponder, a combined transmitter and responder. Most RFID is low powered and short range and is used to track items moving past or near the reader. Pallets going through a doorway to a truck can trigger inventory increments or decrements based on the IDs of the associated tags; a cow with a tag attached to its collar or ear entering a stall alerts the machinery to dispense the proper feed based on the cow's output and nutritional needs.
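The dock-door scenario above amounts to event-driven bookkeeping: a tag read moves a count from one location to another. A minimal sketch, with hypothetical tag IDs (real systems key on EPC codes and commercial middleware):

```python
# Hypothetical stock ledgers keyed by tag ID.
warehouse_stock = {"TAG-PALLET-17": 1, "TAG-PALLET-18": 1}
truck_stock = {}

def on_tag_read(tag_id: str) -> None:
    """Dock-door reader event: a pallet passing the antenna is
    decremented from the warehouse and incremented on the truck."""
    if warehouse_stock.get(tag_id, 0) > 0:
        warehouse_stock[tag_id] -= 1
        truck_stock[tag_id] = truck_stock.get(tag_id, 0) + 1

on_tag_read("TAG-PALLET-17")
```

The reader never needs to know what is on the pallet; the tag ID alone drives the inventory adjustment.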

In the mid-1930s, British military research developed what became known as RADAR, a coined shorthand for RAdio Detection And Ranging, in response to fears of German aerial assault. An antenna transmits pulses of radio waves that bounce off any object in their path and return to the dish, where they are detected. The time it takes for the reflected waves to return to the dish enables the system to calculate how far away the object is, its velocity, and other characteristics. The invention of radar (the term has become part of the language and is rarely capitalized) followed on patents and research in many countries by such technology luminaries as Nikola Tesla.
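The ranging arithmetic is simple: the pulse travels out and back at the speed of light, so the one-way distance is half the round-trip path. A back-of-the-envelope sketch, not any particular radar's signal processing:

```python
C = 299_792_458  # speed of light, meters per second

def radar_range_m(round_trip_s: float) -> float:
    """Range to a target from a pulse's round-trip time.

    The pulse travels out and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_s / 2

# An echo arriving 200 microseconds after transmission
# puts the target roughly 30 km away.
range_m = radar_range_m(200e-6)
```

Velocity falls out of the same idea: successive range measurements (or, in practice, the Doppler shift of the return) show how fast the target is closing.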

Are You Friendly?
A similar technology was IFF—Identification, Friend or Foe—which was used to, logically, identify friendly aircraft during wartime. The term is somewhat inaccurate since IFF can only positively identify friendly aircraft, those with the properly encoded “tags,” but not hostile ones.

If an IFF interrogation receives no reply, the interrogated aircraft can only be treated as suspicious, not as a positively identified foe.

IFF might be considered secondary radar. Primary radar bounces a radio pulse off an aircraft to determine position; with IFF, position is determined by comparing the antenna's bearing with the delay between the interrogation pulse and the received reply pulses. Early IFF and radar sets were both large, cumbersome devices. Today, these devices can fit into a package no bigger than a desktop computer. That miniaturization, of course, is a result of the transistor and the microchips that followed.
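Bearing plus delay gives a position. A simplified sketch of that geometry, assuming (purely for illustration) that the transponder's fixed internal turnaround time has already been subtracted, so the remaining delay is pure round-trip propagation:

```python
import math

C = 299_792_458  # speed of light, m/s

def transponder_position(bearing_deg: float, delay_s: float):
    """Estimate an aircraft's position, in meters east and north of
    the interrogator, from the antenna bearing (degrees clockwise
    from north) and the interrogation-to-reply propagation delay."""
    slant_range = C * delay_s / 2          # out-and-back path
    theta = math.radians(bearing_deg)
    east = slant_range * math.sin(theta)
    north = slant_range * math.cos(theta)
    return east, north

# A reply 200 microseconds later on a bearing of due east
# places the aircraft about 30 km east of the antenna.
east, north = transponder_position(90.0, 200e-6)
```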

RFID, a variation on IFF concepts, started gaining traction as an application in the 1970s. Dr. Jeremy Landt is chief scientist at TransCore, www.transcore.com, Harrisburg, Pa., and one of the original five scientists from Los Alamos National Laboratory who developed RFID for the federal government. He recalls, “The 1970s were characterized primarily by developmental work. Intended applications were for animal tracking, vehicle tracking, and factory automation. Examples of animal tagging efforts were the microwave systems at Los Alamos and the inductive systems in Europe.”

He continues, “Transportation efforts included work at Los Alamos and by the Intl. Bridge, Tunnel and Turnpike Association (IBTTA) and the United States Federal Highway Admin. The latter two sponsored a conference in 1973 which concluded there was no national interest in developing a standard for electronic vehicle identification. This was an important decision since it permitted a variety of systems to develop, which was good, because RFID technology was in its infancy.”

Tests of RFID for collecting tolls had been going on for many years, and the first commercial application began in 1987 in Norway and was quickly followed in the United States by the Dallas North Turnpike in 1989. Also during this time, the Port Authority of New York and New Jersey began commercial operation of RFID for buses going through the Lincoln Tunnel. RFID was finding a home with electronic toll collection, and new players were arriving daily. Using the IFF analogy, the toll collectors were checking to see if the buses and cars were friend (i.e., toll payers) or foe (scofflaws).

Where’s My Car?
Telematics, like the other pillars of M2M, is not a technology by itself but a collection of technologies combined to solve a problem or ease a burden. Having the ability to communicate machine-to-machine is the basic tenet of M2M, after all, and telematics creates an environment where sensors onboard a vehicle collect data and relay that information to an off-site collection point via wireless technology, either satellite or cellular networks. That distribution of data can be automatic or human-initiated.

Until the cellular network was sufficiently built out—satellite networks being much more expensive to use—the advantages of telematics were, as is often the case, primarily enjoyed by the largest companies and government entities such as the military. With the ubiquitous nature of cellular communications today, that is no longer the case. Prices and availability are changing and new applications for the concept are being explored.

According to Dennis Foy, author of Automotive Telematics, telematics is the convergence of telecommunications and information processing. The term later evolved to refer to automation in automobiles: emergency warning systems, GPS (global positioning system) navigation, integrated hands-free cell phones, wireless safety communications, and automatic driving assistance systems.

While the term telematics has been co-opted by the automotive industry, the technology is also in general use. The science is applied in wireless technologies and computational systems and is covered by the IEEE 802.11p standard.

Foy makes the point that live, valid, and accurate traffic information is considered a must-have application by many end users—by that he means drivers—and telematics allows navigation to make child’s play out of avoiding congestion hotspots. Time and money savings are tremendous, especially for fleet operators. A side benefit is safety: Knowing where vehicles are can aid in their recovery if stolen or the rescue of the occupants in case of accident off the beaten path.

If any application of telematics comes to mind among the general population, it has a name: OnStar. Sensors in the vehicle notify OnStar when there is a problem with the vehicle such as an accident or a system failure. Steve Millstein, president and CEO, ATX Group, www.atxg.com, Irving, Texas, says, “Telematics for over a decade has been defined by a single business model pioneered by ATX and our industry colleagues at General Motors who provide the OnStar service. We anticipate that over the next few years we will reach 2.5 million subscribers just based on overall telematics offerings within the automotive industry, and there are other factors that could launch even more significant growth for us.”

However, as the industry embarks on its 12th year, future growth, exciting as it is, is not the heart of the telematics story. Experts have different opinions on this topic.
Millstein projects, “Telematics services will become less about responding to an event such as a crash, a stolen or disabled vehicle, or a lost driver, for example, and more about an always-on experience wherever you are mobile. It will be less about technology and more about personalization of services and strict protection of vehicle owners’ personal information and preferences, delivering to them a unique ownership experience compatible with the OEM’s brand values.”

Among the technologies often associated with telematics is GPS navigation and location-based systems that can be used to track, via satellite, a vehicle—or person—equipped with the proper device. We have become conditioned by films and TV to assume that “Big Brother” can—for example—spot our car among the tens of thousands on the road in Los Angeles at rush hour, pinpointing it with red crosshairs on a computer screen. True? Perhaps not yet but the bits (or bytes) and pieces of the technology are certainly out there.

Another use is the ankle bracelet tracking device that allows law enforcement to know, moment-by-moment, where a prisoner is—unless it fails. And fail they do, more often than the police would like. Highly publicized cases where prisoners under house arrest and shackled with tracking bracelets were able to remove them and commit crimes go back nearly 20 years and have caused a number of jurisdictions to stop using the technology.

Tracking Rockets and Wolves
While tracking prisoners hasn’t always worked out, telemetry has had a much better track(ing) record. Good results, for example, have come from tracking the migration paths of wild animals using collar-attached locators. Radio tracking uses low-powered radio signals from the collar to locate animals and follow their movements. More than 700 wolves have been tracked in northern Minnesota in this way since 1968.

A radio transmitter housed in a collar is attached to the animal. Using telemetry, data is collected and transmitted over a distance to a researcher who uses a radio receiver and directional antenna to home in on the signal and follow it to the animal. Wolves travel so far and wide that biologists usually use airplanes to track them.
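Taking a bearing with a directional antenna from two known points is enough to fix a transmitter's position where the two bearing lines cross. A sketch of that standard direction-finding calculation, not specific to any one wildlife study:

```python
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Locate a transmitter from bearings (degrees clockwise from
    north) taken at two known points, each given as (east, north).
    Returns the intersection (east, north), or None if the
    bearing lines are parallel and never cross."""
    x1, y1 = p1
    x2, y2 = p2
    # Unit direction vectors for each bearing line.
    d1x, d1y = math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg))
    d2x, d2y = math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg))
    denom = d1x * d2y - d1y * d2x
    if abs(denom) < 1e-12:
        return None
    # Distance along line 1 to the intersection point.
    t = ((x2 - x1) * d2y - (y2 - y1) * d2x) / denom
    return x1 + t * d1x, y1 + t * d1y

# Bearings of 45 degrees from the origin and 315 degrees from a
# point 10 km east cross 5 km east, 5 km north of the origin.
fix = triangulate((0.0, 0.0), 45.0, (10_000.0, 0.0), 315.0)
```

In the field, bearing error makes the crossing a region rather than a point, which is one reason biologists prefer aircraft: a plane can close the distance until the signal itself leads them to the animal.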

Telemetry is an enabling technology for large complex systems such as missiles and spacecraft because it allows automatic monitoring, alerting, and record-keeping necessary for safe, efficient operations. Space agencies such as NASA use telemetry systems to collect data from operating spacecraft and satellites. Telemetry is vital in the development phase of missiles, satellites, and aircraft because testing can often result in destruction of the craft and, without constant transmission of vital data, the causes might never be known. Engineers need critical system parameters in order to analyze (and improve) the performance of the system.

In many ways, telemetry was a vital source of intelligence during the Cold War when Soviet missiles were tested. For this purpose, the U.S. operated a listening post in Iran to eavesdrop on the telemetry signals from Soviet missile tests. Eventually, the Soviets discovered this and encrypted their telemetry signals. Naturally, the roles were also reversed and telemetry was a vital intelligence source for the Soviets who would operate listening ships in Cardigan Bay to eavesdrop on British missile tests carried out there.

Whale or Warship?
It may sound like a child’s riddle, but it can be a serious problem: What’s the difference between a whale and a submarine? Various sensors have been deployed around the oceans of the world to try to determine whether a large mass is a whale or a threatening warship. One of the largest networks is SOSUS, the SOund SUrveillance System. SOSUS provides deep-water, long-range detection capability and had success during the Cold War tracking submarines by their faint acoustic signals. It consists of long, fixed, high-gain hydrophone arrays in the deep ocean basins.

With the advances in submarine warfare during World War II, the need for timely detection of undersea threats was made a high priority in Anti-Submarine Warfare (ASW). As technology of the time progressed, it was recognized that shore-based monitoring stations were the answer to the problem since they could be made basically impervious to destruction, foul weather, and ambient self-generated noise. Since the early 1950s both the Atlantic Ocean and Pacific Ocean have been under the coverage of SOSUS, with long acoustic sensors (hydrophones) installed across the ocean bottom at key locations.

With the development of quieter submarines and counter-tactics to evade SOSUS, newer technologies have been implemented over the years to keep up with the threat. Faster processors, higher capacity storage devices, and “cleaner code” have enabled the advancement of the art of locating undersea threats.

Dry land has had its share of sensor networks for military use as well. The Remotely Monitored Battlefield Sensor System (REMBASS) uses passive sensors that can be unattended for up to 30 days. The sensors are normally in an idle mode with very low power use, but when a target comes into range, the sensors note a change in the ambient energy level (seismic/acoustic, thermal, and/or magnetic) and are activated. The sensors identify the target (as a person or vehicle), format this data into short digital messages, and transmit the messages to a monitoring device. Information received at the monitoring device is decoded and displayed, showing target classification and direction of travel. The sensors send a test message on initial power-up to verify operational status and repeaters send periodic test messages.
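The wake-then-classify-then-report cycle described above can be sketched in a few lines. The threshold values and message fields here are invented for illustration; actual REMBASS signal processing is far more involved:

```python
from dataclasses import dataclass

# Hypothetical ambient-energy change that wakes a sleeping sensor.
WAKE_THRESHOLD = 3.0

@dataclass
class Detection:
    """The short digital message a sensor radios to the monitor."""
    sensor_id: int
    target_class: str   # "person" or "vehicle"
    heading: str        # direction of travel

def classify(level_change: float) -> str:
    """Crude stand-in for target classification: vehicles disturb
    the seismic/acoustic field far more than people do."""
    return "vehicle" if level_change > 10.0 else "person"

def poll(sensor_id: int, baseline: float, reading: float, heading: str):
    """One sensor cycle: stay idle unless the ambient level shifts
    past the wake threshold, then format a detection message."""
    change = abs(reading - baseline)
    if change < WAKE_THRESHOLD:
        return None  # remain in low-power idle mode
    return Detection(sensor_id, classify(change), heading)

msg = poll(7, baseline=1.0, reading=16.0, heading="NE")
```

The idle-until-disturbed design is what makes 30 days of unattended battery life plausible: the sensor spends nearly all its time doing nothing.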

Operator calculations, based upon the sensor data, can be used to determine target location, speed, direction of travel, and number of targets.

Monitoring Remotely
Remote monitoring has been around in one form or another for a long time—even if that monitoring was done manually by a technician in a truck. As equipment gained complexity, and communications technology improved, having a machine send messages—simple data packets or alarms—added to the value of the machines. Two-way communication wasn’t necessary; it was usually enough to get a warning that something was malfunctioning.

In the 1950s, industrial equipment lacked the computer and microchip capabilities common today. Controls consisted of relay boxes, simple on/off switches with interval programming. The PLC (programmable logic controller) later added multiple layers to the ladder logic that relays had used to control machines.

When microprocessors and communications capabilities came on the scene, central control consoles with flashing lights for warnings and status info started to be popular. Railroads were among the early adopters of simple remote monitoring, using wired pressure switches at intervals along the track to record the presence or passing of equipment; the monitoring board showed the location of trains and the status of switches and controls with varied colored lights.

Similar boards popped up in industrial applications, especially the process plants where diagrams of fixed machinery—distillation towers, cooling and heating tanks, etc.—could be shown with their status indicated by lights. The next stage was distributed control and SCADA (supervisory control and data acquisition). SCADA systems perform data collection and control at the supervisory level.

Early SCADA systems used mainframe technology and required human operators to make decisions, take action, and maintain the information systems. Because this increased the human labor cost, early SCADA systems were very expensive to maintain. Today, SCADA is generally much more automated, and therefore more cost-efficient.
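At its core, supervisory-level data collection is a scan loop: read each point, compare it against its limits, annunciate the exceptions. A minimal sketch with hypothetical tag names and limits, not any vendor's SCADA API:

```python
# Hypothetical high limits for two process points.
HIGH_LIMITS = {"tower_temp_c": 180.0, "tank_pressure_kpa": 550.0}

def scan(readings: dict) -> list:
    """One supervisory scan: compare each point against its
    high limit and return the alarms to annunciate."""
    alarms = []
    for tag, value in readings.items():
        limit = HIGH_LIMITS.get(tag)
        if limit is not None and value > limit:
            alarms.append(f"{tag} HIGH: {value} > {limit}")
    return alarms

# One scan with an overheating distillation tower.
active_alarms = scan({"tower_temp_c": 191.5, "tank_pressure_kpa": 430.0})
```

Early mainframe-era SCADA left the "take action" step to a human operator watching the board; automating that response is the main change the paragraph above describes.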

Services Getting Smart
Probably the newest M2M application, and therefore the one with the shortest history, is what is often thought of as smart services: remote control and monitoring beyond the plant fence. While SCADA and other control systems in a plant can receive data from the equipment being monitored, and then send control action signals to the machines as needed—primarily over wires and often automatically, especially in the case of safety related alarms—these actions usually take place within a confined area such as a plant or refinery. What if the equipment is scattered around the world? A wired environment is impossible. No problem, now that the Internet and cellular networks are ubiquitous.

Using a wired or wireless connection to the Internet that is usually available at any commercial, industrial, or warehouse site, equipment with communications modules can do an “ET”—call home when in trouble. And in mobile applications, or where Internet access is unavailable, cellular networks are ready to fill in. Whether it is a copier that needs repair or a semi-trailer that has a malfunction in its refrigeration system, notification by cellular or Internet connection can alert local maintenance or the manufacturer of the need for immediate service.
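The "call home" itself is typically nothing more than a short, structured message sent over whatever link is available. A sketch of what such a message might look like; the device ID, fault codes, and field names are illustrative, not a real protocol:

```python
import json

def build_alert(device_id: str, fault_code: str, detail: str) -> str:
    """Format the short 'call home' message a field unit might
    send over a cellular or Internet link on detecting a fault."""
    return json.dumps({
        "device": device_id,
        "fault": fault_code,
        "detail": detail,
    })

# A refrigerated trailer reporting cargo temperature drift.
alert = build_alert("REEFER-042", "TEMP_HIGH",
                    "cargo temp 9.5C, setpoint 4.0C")
```

On the receiving end, the same structure lets a dispatch system route the fault to local maintenance or the manufacturer without a human reading raw sensor data.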

“An M2M solution that enables automated back-and-forth flow of data from machines to machines, machines to enterprises, and machines to people is feasible,” says John Tillotson, senior director of strategy and business development for the Global Smart Services group of Qualcomm Enterprise Services, www.qualcomm.com/qes, San Diego, Calif. “Companies can use wireless M2M to monitor and control their assets—fixed and mobile—resulting in improved worker productivity, more effective asset utilization, enhanced customer service, and much more.”

OmniTRACS is one of Qualcomm’s earliest offerings of smart services and wireless M2M. The system consists of wireless devices installed on semi-trailer trucks that “talk” to computers located in a NOC (network operations center), enabling transportation carriers to monitor driver performance; schedule and plan vehicle maintenance more effectively; and improve customer service.

From the 1930s to the 1980s, the technical history of early M2M has evolved in close relationship to two overriding needs: military surveillance and industrial (and agricultural) productivity. As the communications networks expanded and miniaturized, and the data processing capabilities of chips followed suit, M2M leaped ahead by combining technologies into complex, responsive and valuable systems in a wide range of applications. And under it all is the foundation, the transistor, without which we’d never have the communications and control capabilities we have today with M2M.


Tom Inglesby is a contributing writer for M2M magazine.
