Can robots be taught to analyse and deal with complex tasks in their own way, have a more dynamic thought process, and therefore adapt to changing scenarios? Pioneering new research could stand to make AI and robotics more human-like.
Machine learning may be essential to future technological innovations and could accelerate human achievement. Credit: maxuser / Shutterstock
A team at the Chalmers University of Technology in Sweden, working alongside researchers from NVIDIA, hopes to break new ground in machine learning by enabling robots to offer different solutions in a changeable environment. This flexibility is not traditionally seen in robots, and the researchers posit it could allow machines to work more closely with humans in the future.
Utilising AI, one of the core themes of the Fourth Industrial Revolution and central to optimising manufacturing and industry processes in the near future, the team hopes to open up new and exciting opportunities for AI in real-world settings that may require a more lateral approach.
One of the biggest concerns with the digital transformation is that machines will ultimately replace humans, a fear that drives some fierce opposition to concepts such as Industry 4.0. But many in the tech sphere are thinking ahead, attempting to examine these issues before they become major problems.
“Robots that work in human environments need to be adaptable to the fact that humans are unique, and that we might all solve the same task in a different way. An important area in robot development, therefore, is to teach robots how to work alongside humans in dynamic environments,” says Maximilian Diehl, a Doctoral Student at the Department of Electrical Engineering at Chalmers University of Technology and a main researcher behind the project.
For example, when a person lays a table or cleans their home, they may approach the situation from a different angle each time, accommodating a number of variables that a machine may not be able to take into account.
A chair could be misplaced, which invites multiple responses, from moving around it in different ways to pushing it back into place. One diner could be left-handed, which means their cutlery would need to be positioned differently. We may take breaks, or swap the hand we lay the table with.
Robots do not think in the same way humans do. They require precise planning, which makes them very efficient in systems that flow in the same pattern, such as on manufacturing lines.
One way to think about it is that many such variables would otherwise have to be manually input by humans beforehand. The point of this research is to explore whether the AI that controls the robot is able to think for itself, allowing for lateral thinking without human aid and bringing robots a step closer to thinking as we do.
The Chalmers team wanted to explore whether it was possible to teach a robot a more humanlike approach to solving tasks: extracting general information rather than task-specific details, to allow for a more flexible approach in pursuit of a long-term goal. They refer to this as "explainable AI".
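The idea of extracting general rather than specific information can be sketched in code. The following is a minimal, hypothetical illustration, not the Chalmers team's actual implementation: two demonstrations with different exact coordinates are reduced to the same abstract, human-readable goal, which is the kind of general fact a flexible robot would store.

```python
# Hypothetical sketch: abstracting specific demonstration data into a
# general goal description. All names and thresholds are illustrative.

def extract_goal_predicates(final_poses):
    """Turn exact (x, y, z) object poses from one demonstration into
    general facts about the final state, such as on(red, blue)."""
    predicates = set()
    for obj, (x, y, z) in final_poses.items():
        for other, (ox, oy, oz) in final_poses.items():
            if obj == other:
                continue
            # "on top of" holds when the objects roughly share an x/y
            # position and obj ended up higher than other.
            if abs(x - ox) < 0.02 and abs(y - oy) < 0.02 and z > oz:
                predicates.add(f"on({obj},{other})")
    return predicates

# Two demonstrations with different exact coordinates...
demo_a = {"red": (0.10, 0.20, 0.10), "blue": (0.10, 0.20, 0.05)}
demo_b = {"red": (0.51, 0.32, 0.12), "blue": (0.50, 0.33, 0.06)}

# ...yield the same general goal, independent of precise positions.
print(extract_goal_predicates(demo_a))  # {'on(red,blue)'}
print(extract_goal_predicates(demo_b))  # {'on(red,blue)'}
```

The robot keeps only the predicate, so any plan that re-establishes `on(red,blue)` counts as success, regardless of where on the table the stack ends up.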
For the tests, the team had participants stack cubes in VR around 12 times each, expecting each participant to introduce slight differences, with every movement tracked by laser scanners.
“When we humans have a task, we divide it into a chain of smaller sub-goals along the way, and every action we perform is aimed at fulfilling an intermediate goal. Instead of teaching the robot an exact imitation of human behaviour, we focused on identifying what the goals were, looking at all the actions that the people in the study performed,” says Karinne Ramirez-Amaro, Assistant Professor at the Department of Electrical Engineering.
From this, the team was able to build up a "thought bank" in the robot, effectively allowing it to keep track of different movement or placement options while it completed the task.
The AI then used a planning tool to carry out the task, allowing it to plan how it would stack the blocks even if surrounding conditions changed.
Should the situation change slightly, the machine would apparently draw on its thought bank and adapt its processes, which hints that robots may be able to be taught to think somewhat laterally.
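The thought-bank-and-planner loop described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the project's real architecture: the bank stores several learned ways to reach the same sub-goal, and the planner falls back to an alternative when the preferred one no longer fits the scene.

```python
# Hypothetical sketch of a "thought bank": multiple stored alternatives
# for one sub-goal, each with a precondition on the current scene.
# All names (actions, preconditions) are illustrative assumptions.

THOUGHT_BANK = {
    "grasp(cube)": [
        {"action": "grasp_from_top", "requires": "clear_above"},
        {"action": "grasp_from_side", "requires": "clear_side"},
    ],
}

def plan_step(sub_goal, scene):
    """Pick the first stored alternative whose precondition holds
    in the current scene; return None if no known option applies."""
    for option in THOUGHT_BANK[sub_goal]:
        if scene.get(option["requires"], False):
            return option["action"]
    return None

# Normal scene: the top of the cube is clear, so the preferred
# top grasp is chosen.
print(plan_step("grasp(cube)", {"clear_above": True, "clear_side": True}))
# Changed scene: something now blocks the top, so the planner adapts
# by selecting the side grasp instead of failing outright.
print(plan_step("grasp(cube)", {"clear_above": False, "clear_side": True}))
```

The design choice worth noting is that adaptation here comes from having several learned alternatives per sub-goal, not from inventing new behaviour on the fly, which matches the article's point that the approach still relies on planning.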
The solution still requires the specific planning typically associated with machine learning, but it does offer an exciting insight into whether machines can adapt to changing situations. The ability to think truly independently, as it were, likely remains in the realm of science fiction.
The researchers claim the robot had a "92% success rate" after its first demonstration. When all 12 demonstrations were added to the bank, this increased to 100%.
"It might still take several years until we see genuinely autonomous and multi-purpose robots, mainly because many individual challenges still need to be addressed, like computer vision, control, and safe interaction with humans. However, we believe that our approach will contribute to speeding up the learning process of robots, allowing the robot to connect all of these aspects and apply them in new situations," Deihl concluded.