In today’s manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail. 

But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Shah envisions robotic assistants performing tasks that would otherwise hinder a human’s efficiency, particularly in airplane manufacturing. 

“If the robot can provide tools and materials so the person doesn’t have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person,” says Shah, who leads the Interactive Robotics Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “It’s really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory.” 

A robot working in isolation simply follows a set of preprogrammed instructions to perform a repetitive task. Working with humans is a different matter: each mechanic at the same station of an aircraft assembly plant may prefer to work differently, and Shah says a robotic assistant would have to adapt effortlessly to an individual's particular style to be of any practical use.

Now Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual’s preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney in July. 

“It’s an interesting machine-learning human-factors problem,” Shah says. “Using this algorithm, we can significantly improve the robot’s understanding of what the person’s next likely actions are.”

Taking wing

As a test case, Shah’s team looked at spar assembly, a process of building the main structural element of an aircraft’s wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, a mechanic applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualized: For example, one mechanic may choose to apply sealant to every hole before hammering in bolts, while another may like to completely finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes. 
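To make that workflow concrete, here is a minimal sketch, in Python, of how the spar-assembly sequence and its one hard constraint might be represented. The action names, timings, and three-minute drying window are taken from the description above; everything else is an illustrative assumption, not the researchers' code.

```python
from dataclasses import dataclass

SEALANT_DRY_TIME = 180  # seconds; a sealed hole must be bolted before the sealant dries


@dataclass
class Action:
    kind: str    # "seal", "bolt", or "wipe"
    hole: int    # index of the predrilled hole
    time: float  # seconds since the start of the task


def dried_out_holes(actions):
    """Return holes whose sealant dried before a bolt was hammered in."""
    sealed_at = {}
    late = []
    for a in actions:
        if a.kind == "seal":
            sealed_at[a.hole] = a.time
        elif a.kind == "bolt" and a.hole in sealed_at:
            if a.time - sealed_at.pop(a.hole) > SEALANT_DRY_TIME:
                late.append(a.hole)
    # holes that were sealed but never bolted also dry out
    late.extend(sealed_at.keys())
    return late


# One mechanic seals every hole first, then goes back and bolts them all.
batch_style = [Action("seal", h, 20 * h) for h in range(3)] + \
              [Action("bolt", h, 60 + 20 * h) for h in range(3)]
print(dried_out_holes(batch_style))  # [] -- every bolt lands within the 180-second window
```

Either work style is acceptable to such a checker as long as no sealed hole waits more than three minutes for its bolt.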

The researchers say robots such as FRIDA, designed by Swiss robotics company ABB, may be programmed to help in the spar-assembly process. FRIDA is a flexible robot with two arms capable of a wide range of motion that Shah says can be manipulated to either fasten bolts or paint sealant into holes, depending on a human’s preferences. 

To enable such a robot to anticipate a human's actions, the group first developed a computational model in the form of a decision tree. Each branch of the tree represents a choice the mechanic may make: after applying sealant, for example, does the person hammer in a bolt, or move on to seal the next hole?

“If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt?” Shah says. “There are many branches.”
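One simple way to picture those branches (a hedged sketch, not the group's actual model) is as conditional probabilities of the mechanic's next action given what just happened. The state names, action names, and numbers below are assumptions for illustration only.

```python
# Illustrative branching model: after each observed event, the robot looks up
# how likely each of the mechanic's possible next actions is.
next_action_probs = {
    # after the robot places a bolt, does the person hammer it or wait?
    "robot_placed_bolt": {"hammer_bolt": 0.7, "wait_for_next_bolt": 0.3},
    # after applying sealant, finish this hole or seal the next one?
    "applied_sealant":   {"hammer_bolt": 0.5, "seal_next_hole": 0.5},
}


def most_likely_next(state):
    """Pick the branch with the highest probability for the current state."""
    branches = next_action_probs[state]
    return max(branches, key=branches.get)


print(most_likely_next("robot_placed_bolt"))  # -> "hammer_bolt"
```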

Using the model, the group performed human experiments, training a laboratory robot to observe an individual’s chain of preferences. Once the robot learned a person’s preferred order of tasks, it then quickly adapted, either applying sealant or fastening a bolt according to a person’s particular style of work. 
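A rough way to picture that kind of preference learning (again a sketch under assumed names, not the paper's algorithm) is to count the action-to-action transitions a particular mechanic actually performs and renormalize them into the branch probabilities above.

```python
from collections import defaultdict


def learn_preferences(observed_sessions):
    """Estimate P(next action | previous action) from observed work sessions.

    `observed_sessions` is a list of action-name sequences; the names are
    illustrative assumptions, e.g. ["seal_hole", "hammer_bolt", ...].
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in observed_sessions:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    # normalize counts into probabilities the robot can act on
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }


# A mechanic who always finishes one hole completely before moving on:
sessions = [["seal_hole", "hammer_bolt", "wipe_sealant"] * 3]
prefs = learn_preferences(sessions)
print(prefs["seal_hole"])  # {'hammer_bolt': 1.0} -- seal is always followed by a bolt
```

Once the estimated probabilities favor one branch strongly, the robot can commit to the complementary action, such as staging the next bolt or applying sealant ahead of the person.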

Working side by side

In a real-life manufacturing setting, Shah envisions robots and humans undergoing an initial training session off the factory floor. Once the robot learns a person's work habits, its counterpart on the floor can be programmed to recognize that same person and initialize the appropriate task plan. Shah adds that workers in many existing plants already wear radio-frequency identification (RFID) tags, a potential way for robots to identify individuals.
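In such a setup, the learned model could simply be keyed by the worker's tag. The lookup below is a hypothetical sketch; the article does not describe a profile store, and the tag IDs and profiles are invented for illustration.

```python
# Hypothetical lookup: map an RFID tag read on the factory floor to the
# preference model learned for that worker during off-line training.
worker_profiles = {
    "rfid:00A3F1": {"seal_hole": {"hammer_bolt": 1.0}},                        # finishes each hole in turn
    "rfid:00B772": {"seal_hole": {"seal_next_hole": 0.9, "hammer_bolt": 0.1}}, # seals in batches
}


def task_plan_for(tag_id, default=None):
    """Return the stored preference model for the scanned worker, if any."""
    return worker_profiles.get(tag_id, default)


print(task_plan_for("rfid:00A3F1"))
```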

Steve Derby, associate professor and co-director of the Flexible Manufacturing Center at Rensselaer Polytechnic Institute, says the group’s adaptive algorithm moves the field of robotics one step closer to true collaboration between humans and robots. 

“The evolution of the robot itself has been way too slow on all fronts, whether on mechanical design, controls or programming interface,” Derby says. “I think this paper is important — it fits in with the whole spectrum of things that need to happen in getting people and robots to work next to each other.” 

Shah says robotic assistants may also be programmed to help in medical settings. For instance, a robot may be trained to monitor lengthy procedures in an operating room and anticipate a surgeon's needs, handing over scalpels and gauze depending on the doctor's preference. While such a scenario may be years away, with the right algorithms robots and humans may eventually work side by side.

“We have hardware, sensing, and can do manipulation and vision, but unless the robot really develops an almost seamless understanding of how it can help the person, the person’s just going to get frustrated and say, ‘Never mind, I’ll just go pick up the piece myself,’” Shah says. 

This research was supported in part by Boeing Research and Technology and conducted in collaboration with ABB.
