July 1, 2025

How Watson College is helping to lead the robotics revolution

Thanks to advances in technology and artificial intelligence, we are closer than ever to having robots in our daily lives

PhD students Zainab Altaweel, left, and Yohei Hayamizu work in Associate Professor Shiqi Zhang's lab at Watson College's School of Computing, where they are learning how to control two new humanoid robots. Image Credit: Jonathan Cohen.

Ever since The Jetsons and similar futuristic visions, we’ve imagined working side by side with robots to help with everyday tasks.

Thanks to advances in technology and artificial intelligence, that day is closer than ever — and Watson College researchers are at the forefront of those innovations. Here are just a few examples of what they’re working on.

‘Cobots’ for manufacturing

As the manufacturing sector upgrades to Industry 4.0, which draws on advances such as artificial intelligence, smart systems, virtualization, machine learning and the internet of things, Associate Professor Christopher Greene, MS ’98, PhD ’01, from the School of Systems Science and Industrial Engineering researches collaborative robotics, or “cobots,” as part of his larger goal of continual process improvement.

“In layman’s terms,” he says, “it’s about trying to make everybody’s life easier.”

Most automated robots on assembly lines are programmed to perform just a few repetitive tasks, with no sensors intended for working side by side with humans. Some functions require pressure pads or light curtains for limited interactivity, but those are added separately.

Through the Watson Institute for Systems Excellence (WISE), Greene has led projects for factories that make electronic modules using surface-mount technology, as well as done research for automated pharmacies that sort and ship medications for patients who fill their prescriptions by mail. He also works on cloud robotics, which allows users to control robots from anywhere in the world.

Human workers are prone to errors, but robots can perform a task thousands of times in exactly the same way, such as gluing a piece onto a product with the precise amount of pressure required to make it stick firmly without breaking it. They can also be more accurate when it matters most, although humans are still required to program and maintain the automated equipment.

“Assembling pill vials with the right quantities is done in an automated factory,” Greene says. “Cobots are separating the pills, they’re putting them in bottles, they’re attaching labels and putting the caps on them. They’re putting it into whatever packaging there is, and it’s going straight to the mail. All these steps have to be correct, or people die. A human being can get distracted, pick up the wrong pill vial or put it in the wrong package. If you correctly program a cobot to pick up that pill bottle, scan it and put it in a package, that cobot will never make a mistake.”
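
To make the idea concrete, here is a minimal sketch of the pick-scan-verify-place cycle Greene describes. The SimulatedCobot class, its method names and the drug codes are hypothetical stand-ins rather than any vendor's actual API or real prescription data; the point is simply that the barcode check runs on every cycle, so a mismatched vial is rejected instead of shipped.

```python
# Minimal sketch of a pick-scan-verify-place cycle for a pharmacy cobot.
# The SimulatedCobot class and the drug codes below are illustrative stand-ins,
# not a real vendor API or real prescription data.
from dataclasses import dataclass

@dataclass
class Order:
    patient_id: str
    expected_code: str          # barcode the prescription calls for

class SimulatedCobot:
    """Stand-in for a real cobot controller, used only to make the example run."""
    def __init__(self, staging_tray):
        self.staging_tray = list(staging_tray)   # vials waiting to be processed
        self.rejects, self.packages = [], []

    def pick_next_vial(self):
        return self.staging_tray.pop(0)

    def scan_barcode(self, vial):
        return vial                               # a real scanner would read the label

    def divert_to_reject_bin(self, vial):
        self.rejects.append(vial)

    def label_cap_and_package(self, vial, patient_id):
        self.packages.append((patient_id, vial))

def fill_order(cobot, order):
    """Package a vial only if its barcode matches the order; otherwise reject it."""
    vial = cobot.pick_next_vial()
    if cobot.scan_barcode(vial) != order.expected_code:
        cobot.divert_to_reject_bin(vial)          # never guess: a mismatch is rejected
        return False
    cobot.label_cap_and_package(vial, order.patient_id)
    return True

# The wrong vial on the tray is caught by the scan instead of reaching the mail.
cobot = SimulatedCobot(staging_tray=["NDC-1111", "NDC-2222"])
print(fill_order(cobot, Order("patient-42", "NDC-2222")))   # False: first vial rejected
print(fill_order(cobot, Order("patient-42", "NDC-2222")))   # True: correct vial packaged
```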

The rise in robots’ abilities and usefulness, he adds, will lead to shifts in the labor force.

“Everybody’s always asking, ‘Is it going to put people out of a job?’ I tell my students, ‘Not if you learn how to be the one who programs or maintains the cobot,’” he says. “The cobot is going to break down because, over time, that’s just what happens to machinery, and it can’t fix itself.”

Helping humans and robots work together

If humans and robots are going to get along well, they need a common language, or they must at least share common ground about problem-solving.

Shiqi Zhang, an associate professor at the School of Computing, studies the intersection of AI and robotics, and he especially wants to ensure that service robots work smoothly alongside humans in collaborative environments.

There’s just one problem — and it’s a big one: “Robots and humans don’t work well with each other right now,” he says. “They don’t trust each other. Humans don’t know what robots can do, and robots have no idea about the role of humans.”

In his research group, Zhang and his students focus on everyday scenarios — such as homes, hospitals, airports and shopping centers — with three primary themes: robot decision-making, human/robot interaction and robot task-motion planning. Zhang uses language and graphics to show how the AI makes decisions and why humans should trust those decisions.

“AI’s robot system is not transparent,” he says. “When the robot is trying to do something, humans have no idea how it makes the decision. Sometimes humans are too optimistic about robots, and sometimes it’s the other way round — so one way or the other, it’s not a good ecosystem for a human/robot team.”

One question for software and hardware designers improving AI/human collaborations is how much information needs to be shared to optimize productivity. There should be enough so humans can make informed decisions, but not so much they are overwhelmed with unnecessary information.

Zhang is experimenting with augmented reality (AR), which allows users to perceive the real world overlaid with computer-generated information. Unlike the entirely computer-generated experience of virtual reality (VR), AR keeps the real world in view: someone on a factory floor stacked with boxes and crates could pull out a tablet or put on a pair of AR-enhanced glasses to see where the robots are, so accidents can be avoided.

“Because these robots are closely working with people, safety becomes a huge issue,” Zhang says. “How do we make sure the robot is close enough to provide services but keeping its distance to follow social norms and be safe? There is no standard way to enable this kind of communication. Humans talk to each other in natural language, and we use gestures and nonverbal cues, but how do we get robots to understand?”
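
As a rough illustration of the kind of distance rule Zhang is talking about, the sketch below picks a high-level behavior based on how far away the nearest person is. The thresholds, function names and behavior labels are assumptions made up for this example, not values or code from Zhang's lab.

```python
import math

# Illustrative proxemics check: the distance thresholds below are assumptions,
# not measurements from any human/robot study.
PERSONAL_SPACE_M = 1.2   # stay outside this unless handing something over
SERVICE_RANGE_M = 3.0    # close enough to offer help

def distance(a, b):
    """Euclidean distance between two (x, y) positions in meters."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_behavior(robot_pos, people_positions):
    """Pick a high-level behavior based on how close the nearest person is."""
    if not people_positions:
        return "patrol"
    nearest = min(distance(robot_pos, p) for p in people_positions)
    if nearest < PERSONAL_SPACE_M:
        return "stop_and_yield"      # too close: hold still and let the person pass
    if nearest < SERVICE_RANGE_M:
        return "offer_service"       # within a comfortable conversational range
    return "approach_slowly"         # far away: close the gap at reduced speed

# Example: one person 2 meters away, so the robot offers service rather than approaching.
print(choose_behavior((0.0, 0.0), [(2.0, 0.0)]))
```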

‘Bugs’ on the ocean

Futurists predict that more than 1 trillion autonomous nodes will be integrated into all human activities by 2035 as part of the “internet of things.” Soon, pretty much any object — big or small — will feed information to a central database without the need for human involvement.

What makes this idea tricky is that 71% of the Earth’s surface is covered in water, and aquatic environments pose critical environmental and logistical issues. To address these challenges, the U.S. Defense Advanced Research Projects Agency (DARPA) has started a program called the Ocean of Things.

As part of that initiative, Professor Seokheun “Sean” Choi, Assistant Professor Anwar Elhadad, PhD ’24, and PhD student Yang “Lexi” Gao from the Department of Electrical and Computer Engineering developed a self-powered “bug” that can skim across the water, and they hope it will revolutionize aquatic robotics.

Over the past decade, Choi has received research funding from the Office of Naval Research to develop bacteria-powered biobatteries that have a possible 100-year shelf life. The new aquatic robots use similar technology because it is more reliable than solar, kinetic or thermal energy systems under adverse conditions.

“When the environment is favorable for the bacteria, they become vegetative cells and generate power,” Choi says, “but when the conditions are not favorable — for example, it’s really cold or the nutrients are not available — they go back to spores. In that way, we can extend the operational life.”

The research showed power generation close to 1 milliwatt, which is enough to operate the robot’s mechanical movement and any sensors that could track environmental data such as water temperature, pollution levels, the movements of commercial vessels and aircraft, and the behaviors of aquatic animals. The next step in refining these aquatic robots is testing which bacteria will be best for producing energy under stressful ocean conditions.

“We used very common bacterial cells, but we need to study further to know what is actually living in those areas of the ocean,” Choi says.

Diving under the sea

For Assistant Professor Monika Roznere ’18, developing robots for underwater environments brings unique challenges. Operating beneath the surface is different from operating above it: GPS and Bluetooth don’t work, and transmitting any signal to exchange information can be tricky.

The good news, she says, is that many avenues to possible solutions are unexplored, offering the potential for truly groundbreaking research.

While a computer science undergraduate at Watson, Roznere worked on computer vision with Distinguished Professor Lijun Yin. At the same time, her older sister was earning a PhD in ecology at a university in Ohio and needed to get scuba certified as part of her research on mussels. Roznere decided that sounded like fun and took a scuba class.

Her career path took a turn while she was pursuing her PhD at Dartmouth College: A professor asked her if she wanted to join his underwater robotics lab.

“He said, ‘Are you scuba certified? Do you want to dive with robots?’ I was like, ‘Yeah!’” Roznere says with a laugh. “That’s when I changed my focus from computer graphics — I’m going to swim with robots!”

A few tools can help robots get around under the sea. Thanks to war films set on military submarines, most of us know about sonar, which sends out audible “pings” and listens for echoes to calculate the distance and direction of objects around it.

However, a high-end sonar system can cost up to $30,000, won’t give a full image of what it detects, doesn’t render colors and can be noisy because of the mechanics required. Cameras work, of course, but the deeper a robot goes, the less sunlight reaches it. And because water absorbs longer (red) wavelengths of light faster than shorter ones, everything looks blue and green.

“My research is about being creative with the lower-cost options that we have — a simple camera and a simple sonar,” Roznere says. “How good can we get compared to high-end sensors? Can we create an algorithm that helps the robot figure out something is 5 meters away? What does it look like in my camera view?”
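
A back-of-the-envelope version of that calculation: sound travels at roughly 1,500 meters per second in seawater, so the range to an object is half the round-trip echo time multiplied by that speed. The sketch below is a deliberate simplification, assuming a single clean echo and a nominal sound speed; it ignores the noise, multipath reflections and beam-width issues a real underwater robot has to handle.

```python
# Simplified time-of-flight range estimate from a single sonar ping.
# Assumes a nominal speed of sound in seawater; the real value varies with
# temperature, salinity and depth, and real sonar returns are noisy.
SPEED_OF_SOUND_SEAWATER_M_S = 1500.0

def range_from_echo(round_trip_time_s: float) -> float:
    """Distance to the target: the echo travels out and back, so halve the path."""
    return SPEED_OF_SOUND_SEAWATER_M_S * round_trip_time_s / 2.0

# An echo that returns after about 6.7 milliseconds puts the object near 5 meters away.
print(f"{range_from_echo(0.0067):.1f} m")   # ~5.0 m
```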

Her underwater focus has its perks — including trips to tropical Barbados to field-test the latest innovations — and she enjoys working with colleagues from multidisciplinary backgrounds as they try to solve problems from a variety of angles.

“A researcher once told me that if you are trying to hire a roboticist for a very challenging problem, get a marine roboticist, because they’ve already overcome all these difficult challenges,” Roznere says. “How do you make a car autonomous in a snowy environment where you can’t see the road and there are snowflakes everywhere? That’s me! I get sediment floating around and fish flying in front of the cameras because I have lights, and they love lights.”

An eye in the sky

Thanks to improved technology and reduced cost, more robots have taken to the skies over the past 20 years — and drones are more than just a fun hobby.

Assistant Professor Jayson Boubin from the School of Computing uses that bird’s-eye view to find and analyze issues on the ground, including invasive species and landmines. To aid his research, he develops AI software that integrates the latest advances in edge computing.

“What is intelligence? What makes a machine smart?” he asks. “For me, intelligence is all about perception, understanding and decision-making. The smartest people we know are the ones who can understand their environments, understand the problems they’re trying to solve and then solve them with incisive decision-making. I try to make my drones do that.”

The challenge is accomplishing this level of autonomy given the limited weight, processing power and battery life of drones, also called unmanned aerial vehicles or UAVs. Boubin’s solution is to determine where tasks fit on the edge/cloud continuum — in other words, figuring out what data is essential to keep onboard the drone and what can be transmitted elsewhere for processing and storage. Other complicating factors are latency (how long it takes for a signal to transmit to “base” and return with further instructions) and what to do in rural areas without adequate cell coverage.
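
One way to picture that edge/cloud tradeoff: for each processing task, estimate whether uploading the data and waiting for a remote answer actually beats running the computation on the drone itself. The function and numbers below are illustrative assumptions for this article, not measurements or code from Boubin's lab.

```python
# Toy edge-vs-cloud offloading decision. All figures are illustrative assumptions.

def should_offload(data_mb: float,
                   uplink_mbps: float,
                   onboard_seconds: float,
                   cloud_seconds: float,
                   round_trip_latency_s: float) -> bool:
    """Offload only if the total remote time (upload + compute + latency)
    beats computing the result on the drone itself."""
    if uplink_mbps <= 0:           # no usable link, e.g., a rural area with no coverage
        return False
    upload_time = (data_mb * 8) / uplink_mbps      # megabytes -> megabits
    remote_time = upload_time + cloud_seconds + round_trip_latency_s
    return remote_time < onboard_seconds

# Example: a 40 MB image batch over a weak 5 Mbps link is faster to process onboard.
print(should_offload(data_mb=40, uplink_mbps=5,
                     onboard_seconds=12, cloud_seconds=1.5,
                     round_trip_latency_s=0.2))   # False: keep it on the drone
```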

“That’s the thesis of my research area — making UAVs as smart as possible within those very real and complicated constraints,” he says.

With funding from the Air Force Research Laboratory, Boubin explores two main areas where drones can make a difference. One is ecology and agriculture, with UAVs providing overall views of forests or farmers’ fields. By looking for anomalies, drones can spot insidious insect pests such as the spotted lanternfly (which harms trees and vineyards), the brown marmorated stink bug (which attacks various fruits and vegetables) and the Japanese beetle (which strips leaves on soybean plants).

He also focuses on locating landmines and other unexploded ordnance in former war zones, work that could also help find the remains of combatants who are missing in action. Later this year, he hopes to conduct experiments with replica (nonexplosive) landmines at one of the University’s soccer fields.

Projects like these are more appealing to him than programming drones for leisure activities or warfare: “I like to have drones solve problems that I think have significance and that fulfill me when I attempt to solve them.”

But do we really need that robot?

Stephanie Tulk Jesso, an assistant professor in the School of Systems Science and Industrial Engineering, researches the interactions between humans and technology as well as more general ideas of human-centered design: in short, asking people what they want from a product, rather than just forcing them to use something unsuitable for the task.

As an example, she points to one of her past projects to mitigate fall risks in hospital settings. Could robots take patients to the bathroom and free up time for human staff? Research showed that nurses, who are primarily the ones escorting patients to the bathroom, didn’t want that because of the nuanced human needs involved in that kind of care.

“A roboticist wants to build a robot. An AI scientist wants to build AI,” she says. “I can say, ‘Oh, that’s a bad idea — let’s not.’”

One big problem: We project human qualities onto robots that look like us, even though they perceive and evaluate the world very differently. Then when they do something we don’t predict or fail to meet our expectations, we are disappointed — or, worse, companies continue to use them despite their inadequacies because of the money already spent to purchase them.

Another thing holding back robotics, Tulk Jesso believes, is Moravec’s paradox, an AI theory proposed in the 1980s by computer scientist Hans Moravec. He observed that mimicking higher-level thinking — playing chess, doing math or writing essays — is inherently easier for AI than innate skills like perception and motor functions that have been finely honed through evolutionary natural selection over millions of years.

In everyday environments outside of a highly controlled lab, “a 3-year-old child can navigate, pick up objects and arrange things on a table much better than the most sophisticated robot in the world right now,” she says.

Circling back to healthcare, Tulk Jesso thinks there are tasks that robots could do successfully. They could fetch items for their human coworkers or help patients in isolation rooms, allowing nurses to focus on patient care without spending extra time and energy putting on personal protective equipment every time someone has a low-level need such as an extra pillow or blanket.

“I do think there are opportunities where robots really could do something helpful,” she says. “But we as a society need to temper all of this unchecked enthusiasm and methodically evaluate what this technology is even capable of. Otherwise, in five years we’re not going to use robots — or when we see them, everybody’s going to groan and be like, ‘Great, there’s another stupid robot.’”