Bin-picking made simple
12 September 2016
Suzanne Gill reports on a combined vision and robotic solution that could help to cost-effectively automate bin-picking applications in the food sector.
In food sector bin-picking applications, items such as fruit or vegetables often need to be sorted, removed from a bin or tote and placed into individual containers for packaging fulfilment or further manufacturing processes. The job is boring, repetitive and fast-paced, while also demanding high accuracy and consistency – all characteristics that make it suitable for automation. However, human workers can look at a bin full of products and immediately identify the best way to pick up each item based on its shape and position, avoiding the edge of the bin as they reach inside; the complexity of automating these actions in the food sector has traditionally been prohibitive.
Automated bin-picking applications require a 3D camera, vision software and a robot arm with an appropriate gripper. The real challenge in food applications has been the software. The camera, mounted overhead, scans a bin full of products; the vision software then analyses the image, selects an appropriate item to pick based on its position, shape and ease of access, and communicates that information to the robot arm so it can pick up the item.
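The scan-analyse-pick loop described above can be sketched in a few lines. This is purely illustrative – the data structures and the selection heuristic (prefer accessible items lying on top) are assumptions, not Toshiba Machine's actual software:

```python
from dataclasses import dataclass

@dataclass
class Item:
    x: float              # position in the bin (mm) - illustrative frame
    y: float
    z: float              # height of the item's surface (mm)
    accessibility: float  # 0..1 score: how clear the approach is

def select_pick(items):
    """Choose a target: most accessible first, then highest (lying on top).
    A real system would also weigh approach vectors and bin clearance."""
    if not items:
        return None
    return max(items, key=lambda it: (it.accessibility, it.z))

# One cycle: the camera scan would populate `items`, and the selected
# target's pose would then be sent to the robot controller.
items = [Item(100, 50, 20, 0.4), Item(200, 80, 35, 0.9), Item(150, 60, 35, 0.9)]
target = select_pick(items)
```

In this sketch the loop simply repeats after each pick until `select_pick` returns `None`, i.e. the bin is empty.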
Nigel Smith, managing director at TM Robotics, explains further: “Traditionally, CAD data has been required by the vision system to allow it to compare product with data. However, fresh produce is rarely of a consistent size, shape or colour, which has made it difficult for vision systems to work with.”
The bin-picking challenges continue. The robot arm needs full six-axis movement so it can use varying approach vectors to reach into the bin without hitting the box sides, and then pick up items that may be lying on top of each other in a range of positions and orientations. The camera needs to scan, process and communicate data quickly enough to coordinate the robot's actions, and the images need to be clear enough to show more than just the outlines of the items in the bin. For the robot gripper to approach a targeted item effectively, the position and orientation of items that may be jumbled or overlapping also need to be identified.
While advances in six-axis robot arms and high-speed 3D camera systems have addressed many of these issues, vision system software has continued to be a stumbling block. Typical vision software is expensive and complicated, requiring professional CAD programming to ‘teach’ the robot to recognise models. Even after initial programming, it can be difficult for the system to recognise multiple models in a single bin, or to recognise the models’ positions in the bin in order to identify the ideal approach vector and picking point for the robot. If the application changes, the system must be reprogrammed.
A new approach, implemented by Toshiba Machine, is said to overcome these challenges and make automated bin-picking a reality – even for smaller volume and highly variable applications.
Easy-to-use solution
“In the food industry much of the equipment on the plant floor will be used by non-specialist vision system users, so any solution needs to be easy to use,” continued Smith. “Our solution consists of a robot, cameras, projector and software. Icons at the top of the TS Vision software program – which can be visualised on a standard PC – walk untrained users through each step of repositioning the robot arm and measuring the bin, so no special training or programming expertise is required.”
Image capture, processing and parallax operations are performed inside the camera. The camera offers an accuracy of ±0.07mm at a height of 700mm, with a measurement field of 350mm x 280mm and a depth of field of 600mm to 800mm. To enhance the camera’s accuracy, a projector shines a random light pattern into the bin, highlighting the surfaces of the items inside and giving the camera additional position and orientation data for more accurate identification.
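The quoted working volume amounts to a simple bounds check on where a part can be measured. Only the figures below come from the article; the camera-centred millimetre coordinate frame and the function itself are assumptions for illustration:

```python
def in_measurement_volume(x, y, dist):
    """Check whether a point lies inside the stated measurement field
    (350 x 280 mm) and depth of field (600-800 mm from the camera).
    Coordinates are camera-centred, in mm - an assumed convention."""
    return abs(x) <= 350 / 2 and abs(y) <= 280 / 2 and 600 <= dist <= 800

# A part directly under the camera at the nominal 700 mm height is measurable;
# one 200 mm off-axis falls outside the 350 mm-wide field.
ok = in_measurement_volume(0, 0, 700)
off_axis = in_measurement_volume(200, 0, 700)
```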
The vision software offers easy model registration without requiring complex CAD data. The software registers a model by capturing an image of the item with the camera and simply enclosing it with the mouse. After sample workpieces have been captured multiple times in different positions and orientations, the vision system automatically generates composite model data. If there are multiple parts, the recognition rate for a single item can be improved by using the mouse to mask unnecessary parts.
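Conceptually, a composite model built from several captures can be matched by scoring a new observation against every registered view and keeping the best score. The sketch below uses made-up feature vectors and a simple distance-based similarity; it is not the TS Vision algorithm, just the general idea of multi-view registration:

```python
def register_model(captures):
    """Build a composite model from several captures of the same item in
    different positions/orientations. Here each capture is simply a
    feature vector; the composite keeps all views."""
    return list(captures)

def match_score(model, observed):
    """Similarity of an observation to the model: the best score against
    any registered view, using 1 / (1 + Euclidean distance)."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    return max(1.0 / (1.0 + dist(view, observed)) for view in model)

# Three captures of the same sample workpiece, then a new observation
# that closely resembles the first view scores highly.
model = register_model([(10.0, 20.0), (10.5, 19.5), (30.0, 5.0)])
score = match_score(model, (10.2, 19.8))
```

Masking unnecessary parts, as the article describes, would correspond here to dropping the masked features from each capture before registration.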
Calibration of the camera and the robot base-coordinate system is also simple. The camera captures images of the model multiple times in different positions and orientations while it is held by the robot, and the vision software calculates the part’s position and attitude.
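At its simplest, calibrating the two coordinate systems means solving for the transform between them from paired observations of the same part. The sketch below estimates only a translation offset, assuming the axes are already aligned – a real calibration also solves for rotation – and every name in it is illustrative:

```python
def estimate_offset(robot_pts, camera_pts):
    """Estimate the translation between robot-base and camera coordinates
    from paired observations of the same part held by the robot.
    Axes are assumed aligned; averaging reduces measurement noise."""
    n = len(robot_pts)
    dx = sum(r[0] - c[0] for r, c in zip(robot_pts, camera_pts)) / n
    dy = sum(r[1] - c[1] for r, c in zip(robot_pts, camera_pts)) / n
    return dx, dy

# The robot presents the part at several poses; each pose is known in
# robot coordinates and measured by the camera in its own frame.
robot = [(100.0, 200.0), (150.0, 250.0), (120.0, 210.0)]
camera = [(40.0, 90.0), (90.0, 140.0), (60.0, 100.0)]
offset = estimate_offset(robot, camera)   # -> (60.0, 110.0)
```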
Multiple picking points can be identified and an optimum picking point selected. During this process, multiple models can be registered and parameters adjusted. The system also allows the user to measure the bin position, opening area and height with the mouse. This allows the software to guide the robot arm along the most effective approach vector, so that the arm does not collide with the bin and the gripper tip can pick up parts without fouling the box sides.
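Selecting an optimum picking point from several candidates can be reduced to a clearance calculation against the measured bin opening. The sketch below is one plausible heuristic – reject points the gripper cannot reach without touching a wall, then prefer the one with the most clearance – and is an assumption, not the vendor's method:

```python
def clearance(point, bin_min, bin_max):
    """Smallest distance (mm) from a candidate picking point to any
    bin wall, given the measured bin opening corners."""
    (x, y), (x0, y0), (x1, y1) = point, bin_min, bin_max
    return min(x - x0, x1 - x, y - y0, y1 - y)

def best_pick(points, bin_min, bin_max, gripper_radius):
    """Choose the candidate with the most wall clearance, rejecting any
    the gripper could not reach without touching the bin."""
    safe = [p for p in points
            if clearance(p, bin_min, bin_max) >= gripper_radius]
    return max(safe, key=lambda p: clearance(p, bin_min, bin_max),
               default=None)

# Three candidate points in a 350 x 280 mm bin opening: the two near
# the walls are rejected for a 25 mm gripper, leaving the central one.
picks = [(10.0, 10.0), (175.0, 140.0), (340.0, 270.0)]
choice = best_pick(picks, (0.0, 0.0), (350.0, 280.0), gripper_radius=25.0)
```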
The system is able to incorporate multiple robotic options, depending on application needs, and can also interface with other systems on the production floor. A Cartesian robot, for example, could fill and transport totes while a SCARA or six-axis robot picks items for packaging or binning.
Cycle time for items to be picked can vary depending on the situation, with a typical cycle time of three seconds, providing an optimised balance between processing speed and accuracy. If only one workpiece is present per image or per tray, cycle time can be as fast as 0.7 seconds, while an image full of workpieces can still be processed in as little as five seconds.