
Pick and Place with 2D Cameras

Robot-supported pick and place is widely used in manufacturing, assembly and packaging processes where objects must be accurately detected for further processing, picked and then placed in a different position. Object localization is carried out via 2D cameras, which send the position data directly to the robot.

Pick and Place versus Bin Picking – What is the Difference?

Pick and Place: Precise Localization and Picking

In pick-and-place applications, 2D cameras locate the objects to be picked and pass their position coordinates on to the robot controller. The robot can then pick the objects precisely and place them in a different position.
Pick and place is suited to simple, fast applications where the objects are separated and lie on a single level.

Bin picking: Reaching into a Bin

Bin picking refers to an automated application in which robots pick randomly arranged objects from containers. This requires 3D sensors that detect the position of the objects in space and pass the information on to the robot.
Bin picking with 3D sensors is ideal if the objects are not separated and are arranged in random positions.

Pick and Place with Machine Vision Devices – The Advantages

Flexible Camera Mounting Options

No two applications are the same! Depending on the application, machine vision hardware can be mounted statically above the gripping area or directly on the robot arm. Static mounting of the camera is particularly suitable for space-critical applications or small robot arms with limited installation space, while mounting the camera on the robot arm increases the flexibility of the application.

Static camera

Camera on robot arm

Direct Communication Between Camera and Robot

The 2D camera communicates directly with robots from the manufacturer Universal Robots (UR) via the URCap – no separate interface needs to be developed and no additional hardware needs to be connected. This enables very quick and easy setup of the Robot Vision application.
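Conceptually, the camera transmits each pick pose to the robot as a small message. The following sketch parses a hypothetical semicolon-separated pose message (x, y, angle); the actual URCap handles this exchange internally, and the real message format may differ:

```python
def parse_pose(message: str) -> tuple[float, float, float]:
    """Parse a hypothetical 'x;y;angle' camera message into floats.

    Assumed units: millimetres for x/y, degrees for the rotation angle.
    """
    x, y, rz = (float(part) for part in message.strip().split(";"))
    return x, y, rz

# Example: a camera result string as it might arrive over the network
pose = parse_pose("125.40;-38.25;90.00")
```

The parsed tuple would then be converted to a robot target pose by the controller.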

Seamless Integration into the uniVision Ecosystem

The wenglor uniVision 3 toolbox offers various modules for object detection, such as localization and pattern matching. Both modules allow different object types to be taught in; several – even different – objects can be found and sorted, for example by position. Variable offsets in x and y can also be defined, which greatly increases the flexibility of the pick-and-place application: depending on the application, the exact point at which the robot should grip the object can be specified – at the tip, in the middle or at the rear. The results, such as the gripping position or the contour of the detected object, can be displayed individually and flexibly in the web-based visualization.
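The idea behind such a grip-point offset can be sketched as follows: the offset is defined in the object's own coordinate frame, so it must be rotated by the detected object angle before being added to the detected position. This is an illustrative example, not the uniVision implementation:

```python
import math

def grip_point(x: float, y: float, angle_deg: float,
               offset_x: float, offset_y: float) -> tuple[float, float]:
    """Shift the detected object position by an offset given in the
    object's own coordinate frame (rotated by the detected angle)."""
    a = math.radians(angle_deg)
    gx = x + offset_x * math.cos(a) - offset_y * math.sin(a)
    gy = y + offset_x * math.sin(a) + offset_y * math.cos(a)
    return gx, gy

# Grip 20 mm behind the detected centre of an object rotated by 90 degrees
gx, gy = grip_point(100.0, 50.0, 90.0, -20.0, 0.0)
```

Because the offset rotates with the object, the same taught-in grip point (tip, middle or rear) is hit regardless of how the object lies in the image.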

Reduced Cycle Times Thanks to Multi-Object Recording

In the wenglor uniVision 3 software, several – even different – objects can be detected with a single image capture and their coordinates passed on to the robot control system. The robot can then grip the objects one after the other without having to return to the image-capture position before each grip. This reduces cycle time and greatly increases the efficiency of the pick-and-place application.
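The resulting picking loop can be sketched as draining a queue of poses produced by one capture; the pose values and robot calls below are hypothetical placeholders:

```python
from collections import deque

# Poses from a single image capture: (x, y, angle) – hypothetical values
detected = deque([(125.4, -38.2, 90.0), (80.1, 12.7, 0.0), (43.9, 55.0, 45.0)])

picks = 0
while detected:                    # pick every object found in one capture
    x, y, rz = detected.popleft()  # next object – no new image needed
    # robot.move_to(x, y, rz); robot.grip(); robot.place(...)  # pseudo-calls
    picks += 1
```

Only when the queue is empty does the robot return to the image-capture position for the next capture, which is where the cycle-time saving comes from.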

Precise Calibration with Accurate Calibration Targets

The 2D camera and robot only need to be calibrated once during setup. Highly accurate calibration targets are available in various sizes and materials (glass for transmitted-light applications, carbon fiber for incident-light applications). The success of the calibration can be checked directly via a verification step.
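The principle behind such a calibration is mapping image pixels to robot coordinates. A minimal illustrative sketch (not the uniVision implementation) fits an affine transform from three non-collinear point pairs measured on a calibration target:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def calibrate(pixel_pts, robot_pts):
    """Fit an affine map (pixel -> robot coordinates) from three
    non-collinear point pairs using Cramer's rule."""
    M = [[px, py, 1.0] for px, py in pixel_pts]
    d = det3(M)
    params = []
    for axis in (0, 1):                 # solve for the x row, then the y row
        rhs = [p[axis] for p in robot_pts]
        row = []
        for col in range(3):            # Cramer: swap one column with rhs
            Mc = [r[:] for r in M]
            for i in range(3):
                Mc[i][col] = rhs[i]
            row.append(det3(Mc) / d)
        params.append(row)
    return params                       # [[a, b, tx], [c, d, ty]]

def to_robot(params, px, py):
    """Apply the fitted affine map to a pixel coordinate."""
    (a, b, tx), (c, d, ty) = params
    return a * px + b * py + tx, c * px + d * py + ty

# Hypothetical calibration pairs: pixel positions vs. robot positions (mm)
params = calibrate([(0, 0), (100, 0), (0, 100)],
                   [(10, 20), (60, 20), (10, 70)])
```

A verification step then checks the fit against an additional known point, e.g. comparing `to_robot(params, 50, 50)` with its measured robot position.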

Compatible Hardware

The uniVision 3 machine vision software runs on the following wenglor image processing products:

Smart Cameras B60
