NIST's Top Tips for Evaluating Emerging Technologies

After five years of evaluating emerging military technologies, a team of NIST researchers is more than familiar with the difficulties and rewards of this process. The team recently released the lessons it learned from those assessments.

July 1, 2011

As anyone in the IT industry knows, evaluating the potential of emerging technologies can be a difficult and frustrating process. Yet it’s an essential part of the job for many industry executives, military planners, research managers and others.

[Image] The SCORE framework allows technology evaluators to assess the potential of new tools.

With these professionals in mind, researchers from NIST (National Institute of Standards and Technology) have released a series of important “lessons learned” after five years of evaluating emerging military technologies for DARPA (Defense Advanced Research Projects Agency).

The NIST team also described the framework it created for evaluating the performance and utility of emerging technologies. Named “System, Component, and Operationally Relevant Evaluations” (SCORE), this collaborative set of software and criteria assesses technologies from a variety of perspectives.

NIST developed SCORE to review intelligent systems, the growing category of technologies that includes robots, sensor networks and smart appliances.
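The article does not describe SCORE's internals, but the three-level idea behind its name is easy to picture in code. The sketch below is purely illustrative: the class names, metric labels, scores, and per-level averaging are hypothetical stand-ins, not NIST's actual software or aggregation method.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical sketch of a SCORE-style, multi-level evaluation record.
# The level names mirror "System, Component, and Operationally Relevant
# Evaluations"; the metrics and scores below are invented examples.

@dataclass
class Evaluation:
    level: str        # "component", "system", or "operational"
    metric: str       # e.g., "translation_accuracy", "sensor_uptime"
    score: float      # normalized to the range [0, 1]

@dataclass
class TechnologyAssessment:
    name: str
    evaluations: list = field(default_factory=list)

    def add(self, level: str, metric: str, score: float) -> None:
        self.evaluations.append(Evaluation(level, metric, score))

    def summary(self) -> dict:
        """Average the recorded scores within each evaluation level."""
        levels = ("component", "system", "operational")
        return {
            lvl: mean(e.score for e in self.evaluations if e.level == lvl)
            for lvl in levels
            if any(e.level == lvl for e in self.evaluations)
        }

# Example: a two-way translator assessed at all three levels.
assessment = TechnologyAssessment("two-way translator")
assessment.add("component", "speech_recognition_accuracy", 0.91)
assessment.add("system", "end_to_end_translation_accuracy", 0.78)
assessment.add("operational", "field_exercise_task_success", 0.64)
print(assessment.summary())
```

Keeping component, system, and operational results separate, rather than collapsing them into a single number, reflects SCORE's stated goal of assessing a technology from several perspectives.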

"Intelligent systems can respond to conditions in an uncertain environment,” explained Craig Schlenoff, acting head of NIST's Systems Integration Group, according to a statement, “be it a battlefield, a factory floor, or an urban highway system—in ways that help the technology accomplish its intended purpose."

Schlenoff’s team used SCORE to assess technologies developed under two DARPA programs. The first, ASSIST, involves wearable sensors for soldiers, including video cameras, microphones and global positioning devices. The second, TRANSTAC, focuses on improving two-way translation systems for speakers of different languages. For both initiatives, the SCORE criteria enabled faster, more targeted improvements.

The NIST team’s lessons center largely on maximizing the contributions of developers and testers while minimizing bias in the results.

"There is often a balancing act between creating the evaluation environment in a way that shows the system in the best possible light vs. having an environment that is as realistic as possible," they wrote in their report, which was published in the International Journal of Intelligent Control and Systems.

The researchers also emphasized the importance of flexibility and the inevitable nature of trade-offs caused by logistics, cost and other factors.

"The main lesson," Schlenoff said, "is that the extra effort devoted to evaluation planning can have a huge effect on how successful the evaluation will be. Bad decisions made during the design can be difficult and costly to fix later on."

Source: Smarter Technology