When Robots See – An Additional Sense for Better Performance
With constant precision and speed, robots tirelessly perform even the most monotonous work with absolute reliability around the clock. The European standard EN 775 defines robots as "automatically controlled, reprogrammable, multi-purpose handling devices with several degrees of freedom". Because of these capabilities, they have proven over recent decades to be extremely useful and economical in numerous industrial sectors, and they are being deployed with increasing frequency around the globe: according to the World Robotics Report of the International Federation of Robotics (IFR), 422,000 robots were delivered worldwide in 2018, corresponding to a record sales volume of 16.5 billion US dollars. The IFR expects growth in 2019 to hold steady at that record level and projects a remarkably high average growth of 12 percent per year from 2020 to 2022. The global frontrunner in the use of these flexible handling devices is Singapore, with 831 robots per 10,000 employees in 2018; Germany holds a respectable third place with a density of 338 robots per 10,000 employees. Robotics thus plays a significant role in the high degree of automation that is essential for the economical production of all kinds of goods, particularly in high-wage countries.
Achieving even more with the power of sight
Modern industrial robots usually carry a number of sensors, for example to detect the presence of gripped parts or to stop their movements immediately when a collision is imminent. However, the data captured by such traditional sensors provides only limited information. Systems with image processing offer clear benefits, since they can capture and evaluate considerably more detail. Combined with a vision system, and using evaluated camera images as a basis, robots gain considerably better decision-making capabilities and can react flexibly to unexpected situations. This is especially important in the high-growth segment of so-called cobots: these collaborative robots were developed for direct cooperation with humans and therefore operate without a shielding protective device, so the safe prevention of accidents is the top priority to avoid any risk to the health of human colleagues. Robots can also cause high costs and idle time, for example when incorrect movements damage workpieces or other automation equipment. Here too, camera systems help increase the reliability of systems with integrated robots.
In addition to avoiding such unwanted situations, "robots that see" offer many other advantages: they enable more flexible processes, since the evaluated image data can be used to precisely control the robot's movements. Even simple tasks such as gripping a component from a defined position can fail without image processing if the component does not arrive exactly where the robot is designed to pick it up. For a robotic system enhanced with a vision system, this is often no problem: a camera takes an image of the inaccurately positioned component, the subsequent image analysis calculates its deviation from the expected position, and the corrected 2D or 3D gripping coordinates are then forwarded to the robot's control system. Within process-dependent limits, this method ensures that components are picked up reliably. The ultimate challenge in picking up components is so-called bin picking: for a robot to grip parts that lie unsorted in a container, sophisticated vision systems are required. They detect the next pickable component, determine its exact 3D position and forward this information to the robot. Given the current state of technology, this task would in many cases be unsolvable without image processing.
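The 2D correction step described above can be sketched as follows. This is a minimal illustration, not a real vision pipeline: the scale factor, camera-to-robot rotation and offset are hypothetical calibration values, and the detected pixel position would in practice come from the image analysis.

```python
import numpy as np

# Hypothetical calibration data: image scale (mm per pixel) and the
# rotation/translation mapping the camera's image plane onto the
# robot's base frame. Real values come from a calibration procedure.
MM_PER_PX = 0.25
THETA = np.deg2rad(90.0)  # camera axes rotated 90 deg w.r.t. robot axes (assumed)
CAM_TO_ROBOT = np.array([[np.cos(THETA), -np.sin(THETA)],
                         [np.sin(THETA),  np.cos(THETA)]])
CAM_ORIGIN_IN_ROBOT = np.array([300.0, 150.0])  # mm, assumed mounting offset

def grip_coordinates(detected_px):
    """Convert a component position detected in the image (pixels)
    into corrected 2D gripping coordinates in the robot's base frame (mm),
    ready to be forwarded to the robot's control system."""
    xy_mm = np.asarray(detected_px, dtype=float) * MM_PER_PX
    return CAM_TO_ROBOT @ xy_mm + CAM_ORIGIN_IN_ROBOT

# A part found at pixel (100, 200) maps to base-frame coordinates (250, 175) mm
# under the assumed calibration.
print(grip_coordinates([100, 200]))
```

Whether such a correction suffices depends on the process-dependent limits mentioned above; 3D bin picking additionally requires depth data and pose estimation, which this 2D sketch deliberately omits.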
Setting up the vision system properly is crucial
Which image processing system can be optimally combined with a robot in a particular application depends on several factors. One fundamental criterion is the camera's positioning in the system: it can be permanently installed above a robot cell ("off-arm"), for example, or attached directly to the robot arm ("on-arm"). In the second scenario, the robot's "sense of sight" is available very close to the action, or even at the gripper, but the constant movements demand that the camera be as light as possible and highly robust against acceleration and vibration, and that the cable routing be well-designed and suitable for robots.
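The mounting choice also changes how a detected point reaches the robot's coordinate system. The following sketch, in simplified 2D with assumed calibration values, contrasts the two cases: an off-arm camera uses one fixed camera-to-base transform, while an on-arm camera must chain the robot's current pose with a hand-eye calibration result.

```python
import numpy as np

def hom(angle_deg, t):
    """Build a 2D homogeneous transform (3x3) from a rotation angle
    in degrees and a translation vector."""
    a = np.deg2rad(angle_deg)
    T = np.eye(3)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:2, 2] = t
    return T

# Off-arm: camera fixed above the cell -> one constant camera-to-base
# transform (values assumed for illustration).
T_BASE_CAM_FIXED = hom(0.0, [400.0, 0.0])

# On-arm: the camera moves with the arm, so the chain is
# base <- flange (current robot pose) <- camera (hand-eye calibration).
T_FLANGE_CAM = hom(0.0, [0.0, 50.0])  # assumed hand-eye result

def point_in_base_off_arm(p_cam):
    """Fixed camera: one static transform suffices."""
    return (T_BASE_CAM_FIXED @ [*p_cam, 1.0])[:2]

def point_in_base_on_arm(p_cam, T_base_flange):
    """Moving camera: the robot's current pose enters the calculation."""
    return (T_base_flange @ T_FLANGE_CAM @ [*p_cam, 1.0])[:2]
```

The on-arm case makes the practical consequence visible: every image must be paired with the robot pose at the moment of capture, which is one reason reliable, robot-suitable cabling and synchronization matter for this architecture.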
Before a "seeing" robot can be developed, it's also necessary to answer the fundamental question of whether a traditional industrial camera or a so-called intelligent camera (smart camera) is the better choice for the task. In smart cameras, the captured images are analyzed directly in the camera housing, whereas industrial cameras transmit their images to a PC system for analysis, which generally enables higher precision and speed in the image processing. Both architectures have their advantages and disadvantages, so criteria such as the required precision, the speed of processes and movements, the type of industrial environment and the resulting protection class required of the vision system, the load-bearing capacity of the robot, the preferred communication interfaces and other conditions determine which image processing system is the optimal solution for a given application.
However, the camera isn't the only decisive factor in the successful use of vision systems in robotics. Lighting is an important element of any image processing system: only with lighting that is optimally adjusted to the task can cameras record images of the quality required for reliable subsequent analysis. Optics also play an important role in image capture. In robot vision applications with on-arm architectures, it must be ensured that vibrations and accelerations do not alter settings such as the aperture. If the working distances change frequently, autofocus lenses can be an expedient solution. Particularly in on-arm applications, even the vision system's cabling has an important influence on the stability of the entire system: because of the robot's constant movements, cables specifically resistant to torsion and bending, or even drag chains, should be used to guarantee that communication functions correctly at all times.
In addition to selecting the optimal vision hardware for a "seeing" robot, software plays a key role in making such systems economically successful. As a rule, the robot, any required grippers, the cameras and sometimes even the lighting systems each come with their own proprietary controls. Integrating all the subsystems involved, programming and controlling them, and ensuring well-functioning communication on all levels therefore require sophisticated design. The total cost of implementing such a system often depends largely on the duration of development. The decisive factor is therefore whether these complex tasks can be accomplished with minimal effort using the available software and compatible programming tools.