A Smarter and Safer Military

Many top defense technologies get their start in Tech's labs.

Artificial clotting agents for minimizing blood loss from battle injuries. A radar jamming system that literally learns on the fly. Security systems designed specifically to protect utilities from cyber attacks. Georgia Tech has been on the cutting edge of defense research and development for decades, and these are just a few of the latest technologies being developed on campus for military purposes. Tech researchers may not specialize in big-ticket, high-profile weapons projects—those are usually the purview of major defense contractors—but they play an instrumental role in developing the underlying technologies and platforms to help determine how future wars may be fought and how lives may be spared. 

“We have the license to think ahead of what might be next, to think about the military scenarios that might eventually involve the United States, and the kind of technologies that can be most useful,” says Steve Cross, executive vice president for research at Georgia Tech. As such, Tech ranks among the top 10 academic institutions in the country supporting the U.S. Department of Defense, the broader defense industry and the intelligence community, Cross says.

Bryan Clark, senior fellow at the Center for Strategic and Budgetary Assessments—a nonprofit public policy think tank—agrees about Tech’s crucial role in defense R&D. Clark says Tech stands apart from most research universities in its strong relationship with the U.S. government and defense contractors. Only a few universities such as Tech “act as kind of an adjunct to government labs,” Clark says. Through the Georgia Tech Research Institute (GTRI) and its nationally ranked academic units, the Institute as a whole conducts a broad array of both basic and applied research, including in areas such as unmanned systems, computer miniaturization, electromagnetics and cyber warfare, that’s beyond the norm for academic institutions.

In FY2013, Tech received a total of $640 million in research awards from all sources. Of that, the DoD granted the Institute $301.4 million for defense research, with the lion’s share ($263.6 million) going to GTRI. Established as the Engineering Experiment Station in 1934, GTRI took off in World War II when researchers, supported by faculty at the School of Physics and the School of Electrical Engineering, started work on microwave technology in support of military radar development. Then, with the beginning of the Cold War, researchers deepened their involvement in electronic warfare while adding computers to their arsenal. Most recently, GTRI has focused much of its effort on countering cyber attacks. “That’s been a large growth area here as the cyber threat has become a much greater risk,” Cross says.

Though defense-funded technology developed at Tech clearly has a military use in mind, there are civilian applications as well. For example, a pattern-recognition software program designed to anticipate an adversary’s moves could be used to analyze a patient’s electronic medical history and suggest courses of treatment. Similarly, all the robotics, manufacturing, sensor and computer vision technologies employed in a weapons system could help “automate an industrial processing plant, which may be dirty, smelly and really an unsafe place,” Cross says.

In the stories to come, we examine some of the newest defense technologies being researched and developed at Tech.

Staunching Blood Loss on the Battlefield

Tech biomedical engineers are developing a new medical treatment that could save the lives of troops who have suffered serious injury on the battlefield. Funded by the Department of Defense (DoD) and the National Institutes of Health, a GT team is developing artificial blood platelets that, when injected intravenously, would force wounds to form a scab much faster than the human body does on its own.

“The goal is to develop technology to help wounded warriors stop bleeding,” says Thomas Barker, associate professor of biomedical engineering. The hope is that, for frontline troops fighting at remote locations, the artificial blood platelets could significantly reduce combat fatalities. Barker imagines scenarios where the injection could even be taken prophylactically by soldiers.

“Obviously the technology allows you to treat combat wounds, but we think you might be able to take this before battle to boost your clotting system, too,” he says.

Artificial Platelets

The artificial blood platelets, which are composed of hydrogels, are activated by the body’s own mechanisms when a person is wounded. “The particles stick to the wound where bleeding is occurring and help the body stop blood loss,” Barker says.

Barker and his team have been working on the new technology for about two years and have already demonstrated the artificial blood platelets in animal models. Those trials showed that the artificial platelets could clot blood 30 percent faster than the body’s own natural processes alone. However, the researchers have yet to perform human testing.

In the civilian world, the ramifications of this research are even broader. Not only could artificial blood platelets help stop blood loss from injuries, they could also improve recovery from surgery and aid those suffering from blood disorders.

“They could be used following massive trauma or in patients with clotting disorders like hemophilia, or to solve clotting problems associated with chemotherapy,” Barker says.

The platelets research is currently in preclinical trials, and Tech’s biomedical engineering team is in discussion with the Food and Drug Administration (FDA) to move the process forward. Though the technology is still in its early stages, with the FDA’s approval, it could find its way into the hands of doctors both on and off the battlefield soon.

“We’re hoping that these blood platelets could be available and making a positive impact within a few years,” Barker says.

Protecting Public Utilities from Cyber Attacks

There are many ways to attack a nation’s weak points. One of the sneakier methods is to hack the computer network of a public utility and wreak all sorts of sabotage, such as programming turbines to fail or, in the case of a power company, causing a blackout.

Last September, the Department of Energy awarded a consortium led by the Georgia Tech Research Institute (GTRI) and other Georgia Tech academic departments $5 million to develop a security suite to safeguard electrical utilities from cyber attack. At present, a determined hacker can insert malicious commands into the industrial control system of a utility, trip a circuit breaker and cause an electrical grid to fail, says GTRI research scientist Seth Walters, one of the principal investigators on the three-year project.

The idea, of course, is to stop such malicious commands. But how can you identify them?


“One of the challenges is that the electrical grid and the utilities on it are all designed to operate in a certain way,” Walters says. “And part of their fundamental operations includes command-and-control messages that might be dangerous at one particular time but harmless at another time.”

In the past, IT security professionals tended to apply traditional corporate network solutions to the electrical grid. However, this cut-and-paste approach did not quite work out given the differences between the two systems, Walters says.

One major issue is that a utility network has serious data sequencing requirements to function properly. “You can’t just take an enterprise system security tool and apply it to an industrial control system network because you might harm the timeliness of information that’s being transmitted,” he says.

To defend a utility, a software engineer needs to understand the industrial control system network well enough to distinguish a harmful command-and-control message from an innocuous one. That means installing the right intrusion detection sensors and having a simulation tool that can model, in the moment, what a command will do to the grid in the near future.

“The key to this technology is the ability to perform faster-than-real-time simulation of the system,” says Sakis Meliopoulos, MS EE 74, PhD EE 76, professor of electrical and computer engineering at Tech. “This means we need to determine what will happen to the system for the next one to two minutes with computations that can be performed in fractions of a second.”

This approach will require more computing power from the network. In addition, there will be time delays on commands, but those, Meliopoulos says, “will be minimal.”
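As a rough illustration of this vetting idea, the sketch below holds each incoming command while a toy simulation projects the grid a couple of minutes ahead, and passes only commands whose projected effects stay within safe limits. Every name, number and dynamic here is invented for illustration; the actual system relies on a real power-system simulator.

```python
# Hypothetical sketch of faster-than-real-time command vetting.
# simulate_grid() stands in for a real power-flow simulator; the
# names, limits and two-minute horizon are illustrative assumptions.

def simulate_grid(state, command, horizon_s=120, step_s=1):
    """Project grid frequency over `horizon_s` seconds after `command`.

    Toy model: shedding load pushes frequency up; a real simulator
    would solve the network's dynamic equations instead.
    """
    freq = state["frequency_hz"]
    load = state["load_mw"] - command.get("load_shed_mw", 0)
    trace = []
    for _ in range(0, horizon_s, step_s):
        # Toy dynamics: frequency drifts toward a load-dependent setpoint.
        target = 60.0 + 0.001 * (state["generation_mw"] - load)
        freq += 0.5 * (target - freq)
        trace.append(freq)
    return trace

def command_is_safe(state, command, band=(59.5, 60.5)):
    """Accept a command only if simulated frequency stays in-band."""
    lo, hi = band
    return all(lo <= f <= hi for f in simulate_grid(state, command))

state = {"frequency_hz": 60.0, "generation_mw": 1000.0, "load_mw": 1000.0}
benign = {"load_shed_mw": 10}      # small load shed: frequency barely moves
malicious = {"load_shed_mw": 900}  # mass breaker trip: frequency runs away

print(command_is_safe(state, benign))     # True
print(command_is_safe(state, malicious))  # False
```

The point of the sketch is the ordering: the simulation runs to completion in far less time than the horizon it models, so the vetting delay on each command stays small.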

The project is set to begin in another month or two.

Testing UAV Sensors on the Cheap

Got an unmanned aerial vehicle (UAV) sensor payload in need of testing? Well, Georgia Tech is set to offer defense customers an experimental aircraft on which to place it—at a fraction of the cost it would take to integrate that same payload on a conventional UAV.

The new test bed is called the GTRI Airborne Unmanned Sensor System (GAUSS). “It gives us the ability to offer proof of principle tests to customers at a price that’s reasonable, at a schedule that’s reasonable,” says Mike Brinkmann, MS EE 91, principal research engineer for sensor packages for the Georgia Tech Research Institute (GTRI).

GAUSS is based on the Griffon Aerospace Outlaw ER test UAV, which Tech purchased from Griffon and subsequently modified. The test bed has a 16-foot wingspan and weighs about 140 pounds, with a 35-pound payload capacity. Under Georgia Tech’s authorization from the Federal Aviation Administration (FAA), GAUSS can operate at a maximum ceiling of 5,000 feet, but it is capable of flying higher.


Some of the modifications GTRI researchers made to the Outlaw ER are immediately apparent. “In particular, we put pods on the wings to carry the radar system and power supply, and we made some modifications internally,” says Mike Heiges, AE 85, MS AE 86, PhD AE 89, GTRI’s principal aircraft research engineer for the project.

To prove it can test a variety of sensors on GAUSS, GTRI is integrating three different systems: a visible-light camera; an RF signal detection package; and a four-channel, side-looking radar designed to map the ground.

The radar is one of the first systems with these capabilities designed to be fitted on an aircraft as small as the GAUSS, and should be flying onboard it soon. “The two sensors that we have—the signals recorder and also the radar—we’re hoping will open some doors for GTRI to conduct sponsored research with a number of customers that would like to have combinations or variations on those things,” Brinkmann says.

Heiges adds that GTRI has an advantage over potential competitors because the Institute has authorizations from the FAA to fly the GAUSS at several locations around the country.

“That’s a huge deal,” Brinkmann says.

Radar Jamming with an “Angry Kitten” 

The rules of electronic warfare are simple. Make the most of the electromagnetic spectrum and deny the other guys access to it. In other words, jam them. But actual radar jamming is easier said than done given the emergence of frequency-hopping radar and communications networks being used by today’s military aircraft.

So Georgia Tech Research Institute engineers began work last June on integrating machine-learning algorithms into Angry Kitten, a developmental jamming system designed to employ new electronic attack and shielding techniques.

Angry Kitten

The Angry Kitten team hopes that, by incorporating an adaptive learning approach into jammers, they will get a system “that can think on the fly” and overcome the electronic protection of advanced targets, says GTRI research engineer Stan Sutphin, MS ECE 12.

The result of three years of internal R&D projects, Angry Kitten probes the vulnerabilities of friendly sensor systems before they are deployed on the battlefield. In addition, Angry Kitten serves as a test bed for new forms of electronic attack, which might be used against an opponent. In doing so, it explores techniques and technologies not employed in jammers built under programs of record, which tend to focus on broader bandwidth and more power.

The current challenge the GTRI electronic warfare tool is tackling is waveform-agile systems.

The standard approach to jamming is to first identify the target and then choose a corresponding electronic attack from a library of jamming techniques. However, this attack-by-rote does not account for enemy adaptation. As emitters—communications systems and radars—get more advanced, they behave less predictably and finding “a canned response for them gets to be very difficult,” Sutphin says.

By contrast, a machine-learning algorithm teaches the jammer to learn from past experiences, so that when it encounters the same type of target again, its response will be more sophisticated and, hopefully, faster and more successful. If a technique failed the last time, a jammer might try a variant, watch how the target responds and adjust accordingly in a feedback loop.
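One simple way to build such a feedback loop is a bandit-style selection strategy, sketched below. This is a hedged illustration of the general idea only, not Angry Kitten's actual algorithm (which is not public): the technique names, success rates and epsilon-greedy rule are all assumptions.

```python
import random

# Minimal epsilon-greedy loop for picking jamming techniques, in the
# spirit of the adaptive approach described above. All specifics here
# are invented for illustration.

class AdaptiveJammer:
    def __init__(self, techniques, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {t: {"tries": 0, "wins": 0} for t in techniques}

    def choose(self):
        # Mostly exploit the best-scoring technique; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda t:
                   self.stats[t]["wins"] / max(1, self.stats[t]["tries"]))

    def observe(self, technique, jam_succeeded):
        # Feedback loop: each engagement updates the technique's record.
        self.stats[technique]["tries"] += 1
        if jam_succeeded:
            self.stats[technique]["wins"] += 1

jammer = AdaptiveJammer(["barrage", "spot", "sweep", "deceptive"])
# Pretend "deceptive" works 80% of the time and the others 20%.
for _ in range(500):
    t = jammer.choose()
    jammer.observe(t, random.random() < (0.8 if t == "deceptive" else 0.2))

best = max(jammer.stats, key=lambda t:
           jammer.stats[t]["wins"] / max(1, jammer.stats[t]["tries"]))
print(best)  # almost always "deceptive" after enough engagements
```

The feedback loop is the essential part: every engagement, win or lose, sharpens the jammer's estimate of what works against that class of emitter.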

“There has been a huge interest from the Department of Defense in Angry Kitten-like technology,” Sutphin says, noting that the Defense Advanced Research Projects Agency (DARPA) is pursuing its own Adaptive Radar Countermeasures program, which takes a similar approach.

Helping Helicopters Fight a Dread Enemy: Ice

The American Helicopter Society does not give just anyone a Howard Hughes Award. In April, Lakshmi Sankar, MS AE 75, PhD AE 77, associate chair of Georgia Tech’s School of Aerospace Engineering, shared this honor with a government-industry team seeking to model ice formation on helicopter rotors—an effort that aims to improve flight safety, reduce the cost of all-weather certification and help develop the U.S. military’s Future Vertical Lift helicopters.

Ice formation on the blades of a helicopter is a serious problem. “The leading edge is very important for lift production,” Sankar says. “If you have a big chunk of ice over the leading edge, then the rotor may stall and the helicopter will lose altitude.”

What’s more, uneven ice formation on the blades can cause vibrations, putting stress on components, and ice flying off the main rotor can damage the tail rotor or another sensitive part of the helicopter. Finally, even if the worst does not happen, ice on the blades increases drag on the helicopter and increases fuel consumption.

Helicopter Rotor Ice

Airplanes typically rely on anti-icing technology to melt ice on their wings. However, helicopters have limited heating capabilities given their small engines, which supply the electricity for the heaters on the blades. “So this is a very important issue, to be able to predict how much ice will accumulate, how much will it melt, is it going to break or fly off because of the centrifugal forces on the blade,” Sankar says.

In 2011, Georgia Tech partnered with NASA Glenn Research Center and leading aerospace companies to work on the High Fidelity Icing Analysis and Validation for Rotorcraft project. As part of that project, Sankar and his team developed a software model that combines aerodynamics with the structural dynamics of a rotor blade bending under a load—and then combined it with LEWICE, a NASA Glenn program that models ice accretion.

If the model is proven accurate, certifying helicopters for all-weather operations will be cheaper because fewer test flights will be required. Likewise, it could prevent mid-development redesigns of rotor blades because the computer model could test designs even before a vehicle is built. In addition, Sankar sees the model supporting the development of Future Vertical Lift, the next generation of U.S. military helicopters.

So far, the model has fared well against wind tunnel and flight tests, but more research is required. “Hopefully, the government will give us some more funding,” Sankar says.

Fighting the Hidden Effects of Bomb Blasts

Time will tell whether the data collected from soldiers caught in roadside bombings will correlate with late-onset brain injuries. But thanks to the Integrated Blast Effects Sensor Suite (I-BESS), developed at the Georgia Tech Research Institute (GTRI), medical professionals may have some incident histories from which to draw conclusions about the effects of high G-force acceleration and overpressure on the human body.

In July 2011, the U.S. Army Rapid Equipping Force tapped GTRI to create what became I-BESS. Then-vice chief of staff of the U.S. Army Gen. Peter Chiarelli wanted to address the “invisible injury”—traumatic brain injury—and he needed to do it quickly. The opportunity to collect data was disappearing due to the impending drawdown in Afghanistan, says Brian Liu, EE 05, head of the Advanced Human Integration Branch at GTRI’s Electronic Systems Laboratory.


Racing against the clock, GTRI researchers started fielding I-BESS in the summer of 2012. For dismounted soldiers, a mix of accelerometers, gyros and pressure transducers was installed into standard vests and headgear. These devices record, time-stamp and measure the effects of an encounter with a roadside bomb. Similarly, there are sensors affixed to the hull of soldiers’ vehicles and inserted inside their seats, with all the systems uploading data to a central storage unit.

“So the system not only is on the soldier, but it’s also on the frame of the vehicle and also on the seat of the vehicle. And those are all integrated and time-tagged so that the data would allow you to go back and reconstruct which soldier was in which seat and what the soldier experienced,” Liu says.

In developing I-BESS, Liu and his team looked to leverage componentry already used in the commercial world. However, the wholesale borrowing of equipment was simply not possible. They couldn’t just take, for example, a vehicle sensor used in NASCAR races to record crash data because the crumpling of a car frame “is a very different event, dynamically, from an explosion,” he says.

Since I-BESS’s initial fielding, GTRI has already collected data from troops and passed it along to the Army’s Joint Trauma Analysis and Prevention of Injury in Combat Program. Now the GTRI team is discussing next steps. “We are working with the Army to look at some of their requirements for future soldier sensor systems that are not identical to I-BESS, but are similar in nature and similar in mission,” he says.

Saving Energy and Money on Military Outposts 

Transporting fuel to a remote U.S. military outpost in Afghanistan is no easy feat. There are unpaved roads; there are Taliban ambushes. So when the fuel does arrive, it’s best used sparingly.

That’s why researchers at Georgia Tech are working with the Office of Naval Research to develop computer modeling tools that optimize energy consumption at forward operating bases, says Yogendra Joshi, the John M. McKenney and Warren D. Shiver Distinguished Chair at Georgia Tech’s Woodruff School of Mechanical Engineering.

To fully appreciate the situation, consider that it reportedly takes 22 gallons of fuel per day to sustain a soldier or Marine in the field, and thanks to the difficult logistical situation in Afghanistan, the price per gallon is astronomical. “We’re talking about fuel that is not $3.90 per gallon, but about an estimated $200 per gallon delivered at a forward operating base,” Joshi says.
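Joshi’s figures make for stark arithmetic. A quick, purely illustrative calculation:

```python
# The fuel numbers cited above, turned into a per-soldier cost
# comparison. Illustrative arithmetic only.

GALLONS_PER_SOLDIER_PER_DAY = 22
PRICE_DELIVERED_FOB = 200.00  # dollars/gallon at a forward operating base
PRICE_STATESIDE = 3.90        # dollars/gallon at a U.S. pump

fob_cost = GALLONS_PER_SOLDIER_PER_DAY * PRICE_DELIVERED_FOB
home_cost = GALLONS_PER_SOLDIER_PER_DAY * PRICE_STATESIDE

print(f"per soldier per day at a FOB: ${fob_cost:,.2f}")    # $4,400.00
print(f"per soldier per day stateside: ${home_cost:,.2f}")  # $85.80
```

At those rates, even a modest percentage reduction in consumption saves thousands of dollars per soldier per week.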

The software tools Joshi and his team are working on would allow the military to use its liquid fuels as efficiently as possible by simulating the power consumption of appliances found on a given base. Georgia Tech is focusing its efforts on heating, cooling, lighting and energy storage technologies because of its significant resident expertise in those fields, Joshi says. The idea is to optimize the electricity consumption by those systems.

Saving Energy

Once the software tools are proven to match real-world power consumption at remote bases, they could be scaled up and applied to large installations, potentially reducing liquid fuel consumption significantly. In addition, the models, while being developed for the military, could also be used for disaster relief or other civil applications—anywhere that might be “off-grid,” Joshi says.

The project started this January and is projected to run for four years.

Designing Super Long-Lasting Computers

Think of the perfect embedded computer. Think of a computer so energy-efficient that it can last 75 times longer than today’s systems. Researchers at Georgia Tech are helping the Defense Advanced Research Projects Agency (DARPA) develop such a computer as part of an initiative called Power Efficiency Revolution for Embedded Computing Technologies, or PERFECT.

“The program is looking at how do we come to a new paradigm of computing where running time isn’t necessarily the constraint, but how much power and battery that we have available is really the new constraint,” says David Bader, executive director of high-performance computing at the School of Computational Science and Engineering.


If the project is successful, it could result in computers far smaller and orders of magnitude more efficient than today’s machines. It could also mean that the computer mounted tomorrow on an unmanned aircraft or ground vehicle, or even worn by a soldier, would use less energy than a larger device while still being as powerful.

Georgia Tech’s part in the DARPA-led PERFECT effort is called GRATEFUL, which stands for Graph Analysis Tackling power-Efficiency, Uncertainty and Locality. Headed by Bader and co-investigator Jason Riedy, GRATEFUL focuses on algorithms that would process vast stores of data and turn it into a graphical representation in the most energy-efficient way possible.

The ultimate goal is to get an algorithmic framework that delivers supercomputer capabilities on a small, power-restricted platform.

One approach to reducing power consumption is to reduce the level of data collection. For example, when looking for a needle in a haystack, you don’t necessarily need to inspect every piece of hay. “What we’re looking at is collecting the minimal data necessary to make accurate decisions,” Bader says.
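That sampling idea can be sketched in a few lines: draw items from a stream only until a running estimate is statistically tight enough, then stop. The confidence-interval test and data model below are generic textbook choices, not details of the GRATEFUL algorithms.

```python
import random

# Illustrative sketch of "collect only the data you need": instead of
# scanning an entire stream, sample items until the decision reaches a
# chosen precision. The thresholds here are assumptions.

def estimate_fraction(stream, margin=0.05, confidence_z=1.96,
                      max_samples=10_000):
    """Estimate the fraction of 'interesting' items (1s) in a stream,
    sampling only until the confidence interval is tight enough."""
    hits = 0
    n = 0
    for item in stream:
        n += 1
        hits += item
        if n >= 30:  # need a minimum sample before trusting the interval
            p = hits / n
            half_width = confidence_z * (p * (1 - p) / n) ** 0.5
            if half_width <= margin:
                break
        if n >= max_samples:
            break
    return hits / n, n

random.seed(42)
# A stream where ~10% of items are interesting (e.g. suspicious logins).
stream = (1 if random.random() < 0.1 else 0 for _ in range(10_000))
p, used = estimate_fraction(stream)
print(f"estimated {p:.2f} using {used} of 10,000 items")
```

The energy saving comes from the early exit: the loop typically touches a small fraction of the 10,000 items before the estimate stabilizes.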

For now, the Tech team is applying GRATEFUL to social network analysis. But that same technology could also be used for any number of security applications, such as identifying hackers trying to break into a network. And, eventually, the technology developed under GRATEFUL could find its way onto smaller, more efficient computers in unmanned aerial vehicles or worn by soldiers.

The team is currently one year into a potentially five-year effort. Bader says most of the work is still in the elementary stages, but the team is developing proofs of concept software. “Our goal is to create architecture-independent software that can run across multiple hardware platforms and still perform extremely well,” he says.

Pushing the Envelope with Mini-Radars

The Institute has a long history of developing radar systems, one that harkens back to World War II. Today, Tech researchers are working to make radar systems much smaller and lighter than anything currently deployed, thanks to a new type of wideband, tunable, true time delay.

Modern active electronically scanned array (AESA) radars are complex affairs. They rely on electrical delay cabling to help keep all the electromagnetic pulses firing to and from individual transmit/receive modules in proper sequence.

But separating electrical pulses, which travel essentially at the speed of light, requires lots of cabling—meters of it, in fact. And all this cabling consumes space, weight and power inside the radar, says Kyle Davis, EE 09, MS ECE 13, a research engineer at the Georgia Tech Research Institute (GTRI). By contrast, Georgia Tech’s new wideband, tunable, true time delay, which is a device used to slow down an electronic signal, uses thin strips of coated film to convert radio frequency (RF) energy into sound waves and then back into RF energy.

Sound waves travel far slower than light waves, so the new device does not require meters of spooling between Point A and Point B to create a signal delay. “Our lengths are on the order of micrometers,” says Davis, adding that this translates into smaller, lighter radars.
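The size advantage follows directly from the velocity ratio. The back-of-the-envelope sketch below uses textbook approximations (signal speed in coaxial cable, longitudinal sound speed in zinc oxide), not GTRI’s measured values:

```python
# Rough comparison of the physical length needed for the same time
# delay in an electrical cable versus an acoustic device.

EM_VELOCITY = 2.0e8        # m/s, signal speed in typical coax (~0.66c)
ACOUSTIC_VELOCITY = 6.0e3  # m/s, approx. sound speed in zinc oxide

def line_length(delay_s, velocity):
    """Physical length needed to produce a given time delay."""
    return delay_s * velocity

delay = 10e-9  # a 10-nanosecond delay
print(f"coax cable: {line_length(delay, EM_VELOCITY):.1f} m")
print(f"acoustic:   {line_length(delay, ACOUSTIC_VELOCITY) * 1e6:.0f} um")
# The electrical line is meters long; the acoustic one is tens of
# micrometers. That roughly 30,000x velocity ratio is the size
# advantage Davis describes.
```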

Facilitating the operation of the Institute’s acoustic time-delay device are film-based strips of material. “Our materials were sputtered thin films: metals and dielectrics and a piezoelectric layer of zinc oxide,” says Ryan Westafer, CmpE 05, MS ECE 06, PhD ECE 11, a GTRI project research engineer.

Time Delay

Davis says the RF energy-acoustic wave technology being developed at Tech could be used for beam-steering in AESA radars. In addition, it could be leveraged for electronic warfare beam-forming and beam-steering, power amp linearization, electronic countermeasures, radar and antenna testing, and RF interferometers.

The wideband, tunable, true time delay project started about three years ago using internal Institute funding, and with enough money, the team could finish development relatively quickly. “The current components have a path to being very robust,” Davis says. “If a program had sufficient funding to rapidly mature this technology, two to three years would not be out of the question.” All the team needs to make this happen is an external funding partner.

Studying Animals to Build Smarter Robots

Can studying the mating behavior of birds help the U.S. military develop better unmanned systems? That’s what Ronald Arkin, a roboticist at Georgia Tech’s College of Computing, and other researchers aim to find out as part of the U.S. Navy-funded Heterogeneous Unmanned Networked Teams (HUNT) Project.

Initiated in 2008, the HUNT Project is a multi-phased study that looks at assorted animal interactions—from wolves stalking an elk to squirrels hiding acorn caches—as inspiration for developing new algorithms to guide intelligent autonomous systems. For now, Arkin has been working with computer models and little bots in the lab. But things can always scale up to larger, more robust unmanned vehicles.

“That’s the beauty of the basic research,” he says. “It’s not limited to a physical type of platform.”

One of the earliest subjects of HUNT was “lekking” behavior in birds, in which a group of males gathers around—but not too closely—a very handsome specimen (a “hotshot”) in order to mate with females. This became the basis for seeing how one could distribute autonomous systems behind enemy lines “without using strict formation control” but in a way that “maximizes the likelihood of encounter” with the enemy, Arkin says.

In 2010 and 2011, Arkin and his team moved on to wolf packs. Initially, they thought the wolves coordinated with each other when hunting elk. But Dan MacNulty, a professor of wildlife ecology at Utah State University, disabused them of that notion. “When we brought Dan in the first time, he informed us that there is no coordination,” Arkin says. “They are all individual, greedy agents.”

So how exactly did they work as a pack without explicit rules or communication? One possible explanation was that a predator chasing down an elk indicated to the others that the hunted animal was weak. So applying a probabilistic model to the stage of a hunt, Arkin tried to “replicate that behavior in robotic systems to see if we could do the same sort of thing both in simulations and platforms.” And he succeeded.
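The “individual, greedy agents” idea can be sketched with a toy simulation: each wolf joins the chase independently, based only on a public cue (how long the elk has been pursued), with no messages between wolves. The probabilities below are invented for illustration; Arkin’s models are considerably richer.

```python
import random

# Toy model of pack-like behavior without coordination: every agent
# follows the same private rule, and the "pack" emerges on its own.

random.seed(7)

def join_probability(chase_time):
    """A longer chase signals a weaker elk, so joining looks cheaper."""
    return min(1.0, 0.05 + 0.02 * chase_time)

def simulate_hunt(n_wolves=8, max_time=60):
    chasing = set()
    for t in range(max_time):
        for w in range(n_wolves):
            if w not in chasing and random.random() < join_probability(t):
                chasing.add(w)  # independent, greedy decision
        if len(chasing) == n_wolves:
            return t  # the whole "pack" converged without communicating
    return max_time

print(simulate_hunt())  # all wolves end up chasing, with no messages sent
```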


Following on the wolf pack research, Arkin then looked at bird mobbing, in which birds gather to drive off a stronger predator. Did it make sense for a weak bird to feign strength and participate in the mobbing? His simulations demonstrated that under certain conditions, yes, it did. And those same lessons could be applied to a low-power robot or one that’s out of ammo.

Arkin is now looking more broadly at robot deception. But, he explains, ultimately all of the pieces of HUNT relate to one another as examples of biologically inspired group behaviors.
