
3D Sensor Technologies

The three-dimensional detection of objects plays a central role in automation, since the downstream processing step must know an object's position, size and shape. The path to a 3D point cloud is a multi-step process that can be carried out using different measurement techniques.

Triangulation and Structured Light

Triangulation is a method of obtaining depth information. The illumination source and the camera are mounted a defined distance apart and aligned to a common point, forming a triangle with a so-called triangulation angle. Depth can be calculated from this angle: the greater the angle, the more precise the depth information that can be acquired. The triangulation angle also causes illuminated objects to cast shadows (shading), or the object hides the background so that it is no longer visible to the camera (obstruction). Depth information can therefore only be output for areas that are neither shaded nor obstructed. wenglor's 3D sensors work with structured light and triangulation: a light source and a camera are aligned to a common point to form the triangulation triangle, and a 3D point cloud is created by projecting a sequence of different patterns onto the object.
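As an illustrative sketch of the underlying geometry (not wenglor's actual algorithm, and with assumed baseline and angle values), the depth of a point can be recovered by intersecting the camera ray and the illumination ray:

```python
import math

def triangulate_depth(baseline_m: float, cam_angle_deg: float, proj_angle_deg: float) -> float:
    """Depth of the intersection point of two rays in a simplified 2D
    triangulation setup. Camera at x = 0 and projector at x = baseline both
    look toward the object; the angles are measured against the baseline.
    Depth follows from z = b / (cot(alpha) + cot(beta))."""
    a = math.radians(cam_angle_deg)
    b = math.radians(proj_angle_deg)
    return baseline_m / (1.0 / math.tan(a) + 1.0 / math.tan(b))

# Assumed values: 0.2 m baseline, both rays at 75 degrees to the baseline.
print(f"depth = {triangulate_depth(0.2, 75.0, 75.0):.3f} m")  # depth = 0.373 m
```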
Structured light is an illumination technique in which the light source projects a known pattern, often grids or stripes. Depth and surface information can be derived from the way the patterns are deformed on the object. Structured light is a measurement method with high-precision resolutions of less than 10 μm, so even the finest hairline cracks or the smallest structures invisible to the human eye can be identified. 3D sensors typically use pattern sequences such as binary stripe images (Gray code patterns) or phase images.
The Gray code pattern consists of a sequence of light and dark stripes that become progressively finer. By tracking the intensity sequence at each pixel with a camera, the code can be detected and a depth range assigned. Phase images, on the other hand, are sinusoidal wave patterns projected onto the object; a digital micromirror device, for example, can be used to generate the patterns. The phase of the wave is shifted from image to image, and the recorded phase sequence yields depth information with the help of the camera.
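A minimal sketch of the standard N-step phase-shifting evaluation (the synthetic images, offset and modulation values below are assumed for illustration, not taken from a real sensor):

```python
import numpy as np

def wrapped_phase(images: list) -> np.ndarray:
    """Per-pixel wrapped phase from N sinusoidal patterns whose phase is
    shifted by 2*pi/N between recordings (standard N-step formula):
        phi = atan2(-sum_n I_n * sin(delta_n), sum_n I_n * cos(delta_n))
    The result lies in (-pi, pi] and still has to be unwrapped (e.g. with
    the help of a Gray code sequence) before it maps to absolute depth."""
    n = len(images)
    deltas = 2.0 * np.pi * np.arange(n) / n
    stack = np.stack(images).astype(np.float64)
    num = np.tensordot(np.sin(deltas), stack, axes=1)
    den = np.tensordot(np.cos(deltas), stack, axes=1)
    return np.arctan2(-num, den)

# Synthetic 4-step check: five "pixels" with known phase, assumed offset 128
# and modulation 100.
true_phi = np.linspace(0.0, 3.0, 5)
imgs = [128 + 100 * np.cos(true_phi + 2 * np.pi * k / 4) for k in range(4)]
print(wrapped_phase(imgs))  # ~ [0.  0.75  1.5  2.25  3.]
```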

Passive Stereo

In this method, two cameras view the same object from different angles. The distance to a point can be determined from the two viewing angles, i.e. from the disparity between the images. The difficulty is identifying the same point in both cameras, which is why this method performs poorly on low-contrast surfaces such as a white wall.
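As an illustrative sketch with assumed camera parameters: once the same point has been identified in both images of a rectified stereo pair, its depth follows directly from the disparity.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from disparity for a rectified stereo pair: Z = f * B / d.

    focal_px      focal length in pixels
    baseline_m    distance between the two cameras in metres
    disparity_px  horizontal offset of the same point in the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Assumed values: 1400 px focal length, 10 cm baseline, 35 px disparity.
print(f"Z = {stereo_depth(1400, 0.10, 35):.2f} m")  # Z = 4.00 m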

Active Stereo

The setup is the same as for passive stereo. The only difference is that a pattern (e.g. randomly distributed points) is projected onto the object, which makes it easier to match a point between the two cameras.
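The following sketch shows, under simplified assumptions (rectified images, a single image row, sum-of-absolute-differences cost), why the projected texture helps: on a uniform surface all candidate matches have similar cost, while a random pattern makes the correct match stand out.

```python
import numpy as np

def find_disparity(left_row, right_row, x: int,
                   patch: int = 5, max_disp: int = 64) -> int:
    """Estimate the disparity of pixel x by comparing a small patch around
    x in the left row against patches shifted by d in the right row, using
    a sum-of-absolute-differences (SAD) cost. On a textureless surface all
    costs are nearly equal and the match is ambiguous; a projected random
    pattern makes the minimum-cost match distinct."""
    left_row = left_row.astype(np.int32)
    right_row = right_row.astype(np.int32)
    half = patch // 2
    ref = left_row[x - half:x + half + 1]
    costs = [np.abs(ref - right_row[x - d - half:x - d + half + 1]).sum()
             for d in range(min(max_disp, x - half) + 1)]
    return int(np.argmin(costs))

# "Projected" random texture: the shifted copy is found at disparity 12.
rng = np.random.default_rng(0)
row = rng.integers(0, 256, 200)
print(find_disparity(row, np.roll(row, -12), x=100))  # 12
```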

Time of Flight

In this method, the distance between the object and the sensor is determined from the light's transit time. The sensor emits light pulses, which the object reflects back. The distance follows from the round-trip time of the pulses: d = c · t / 2, where c is the speed of light. This allows depth information such as object structures or distances to be determined.
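A short worked example of this relation (the 20 ns round-trip time is an assumed value):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance from pulse round-trip time: the light travels to the
    object and back, so the one-way distance is c * t / 2."""
    return C * round_trip_s / 2.0

# A round trip of 20 ns corresponds to an object about 3 m away.
print(f"{tof_distance(20e-9):.2f} m")  # 3.00 m
```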

Comparison of 3D Technologies

The Three-Dimensional Nature of the 3D Sensor

The 3D sensor projects several patterns onto the object to be measured and records them with a camera. The object is thus captured in three dimensions and digitized as a 3D point cloud. Since neither the object nor the sensor is in motion, objects can be captured quickly and extremely precisely.

1) High-resolution camera
2) Light engine
3) X, Y = measuring range
4) Z = working range

Illumination: Light Engines for Ideal Illumination

The illumination source can be a laser or an LED. Lasers generate light with a high degree of temporal and spatial coherence and a narrowband spectrum; the light generated by a laser can be brought into a specific shape via optics. An LED, in contrast, produces broadband light with hardly any coherence. LEDs are easier to handle and cover a broader range of wavelengths than laser diodes. Any pattern can be generated using digital light processing (DLP) technology. The combination of LED and DLP makes it possible to create different patterns quickly and effectively, which makes it ideal for structured-light 3D technology.

Image Recording: Perfect Picture with CMOS Power

The object is recorded in two dimensions using a high-resolution camera. Modern cameras have a photosensitive semiconductor chip based on CMOS or CCD technology. The chip consists of many individual cells (pixels); modern chips have several million of them, allowing two-dimensional detection of the object. Because of its better performance, CMOS technology is the one used in 3D sensors.

3D Point Cloud: From Application to the Finished Image

The pattern sequence of the structured light is recorded by the camera; the package containing all images is called the image stack. The depth information for each point (pixel) can be determined from the images of the individual patterns. Since the camera has several million pixels and resolves gray levels at each of them, several megabytes of data are generated in a short time. This amount of data can be processed on a powerful industrial PC or internally in the sensor with an FPGA. The advantage of internal calculation is speed, while calculation on a PC allows greater flexibility. The result of the calculation is a 3D point cloud.
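To make the data volume concrete: a camera with 5 million pixels recording, say, 20 patterns at 8 bits per pixel already produces around 100 MB per image stack (assumed figures). As an illustrative sketch, not wenglor's internal pipeline, the following shows how a Gray code image stack can be decoded into a per-pixel stripe index, the intermediate step from which depth is then triangulated:

```python
import numpy as np

def decode_gray_stack(stack: np.ndarray, threshold: float = 127.0) -> np.ndarray:
    """Decode an image stack of n Gray-code stripe patterns into an
    integer stripe index per pixel.

    stack: array of shape (n, height, width), coarsest pattern first.
    Each image is binarized, the n bits form a Gray code word, and the
    Gray code is converted to a plain binary index."""
    bits = (stack > threshold).astype(np.uint32)   # (n, h, w) of 0/1
    index = np.zeros(stack.shape[1:], dtype=np.uint32)
    prev = np.zeros_like(index)
    for bit in bits:                               # Gray -> binary: b_i = g_i XOR b_{i-1}
        prev = prev ^ bit
        index = (index << 1) | prev
    return index

# Tiny self-check with assumed 3-bit patterns over 8 stripe columns.
cols = np.arange(8)
gray = cols ^ (cols >> 1)                          # Gray code of each column
stack = np.stack([(255 * ((gray >> (2 - i)) & 1)).reshape(1, 8) for i in range(3)])
print(decode_gray_stack(stack))                    # [[0 1 2 3 4 5 6 7]]
```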

Integration: From Sensor to Application

The 3D point cloud is calculated from the captured images, either in the sensor itself or on an industrial PC. For straightforward integration, software development kits (SDKs) from the manufacturer or standardized interfaces such as GigE Vision are used.
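What application code built on such an SDK typically expects can be sketched as an interface; the names below (Sensor3D, trigger, get_point_cloud) are illustrative placeholders, not a real wenglor or GigE Vision API:

```python
from typing import Protocol
import numpy as np

class Sensor3D(Protocol):
    """Interface the application expects; any concrete implementation
    (a vendor SDK wrapper or a GigE Vision client) can satisfy it. All
    method names here are illustrative placeholders, not a real API."""
    def trigger(self) -> None: ...            # project patterns, record the image stack
    def get_point_cloud(self) -> np.ndarray:  # (N, 3) array of X, Y, Z coordinates
        ...

def measure(sensor: Sensor3D) -> np.ndarray:
    """Application logic stays independent of the concrete SDK."""
    sensor.trigger()
    return sensor.get_point_cloud()
```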

Use of Monochrome Illumination

The use of monochrome illumination makes it possible to suppress disturbing ambient light effectively with optical filters. The illumination can also be optimized for maximum efficiency and intensity.
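A rough worked example with assumed figures: a bandpass filter matched to the illumination wavelength passes a narrow band, while ambient daylight is spread over the whole visible range.

```python
# Assumed figures: 10 nm filter passband vs. roughly 300 nm of visible ambient light.
passband_nm, visible_band_nm = 10.0, 300.0
print(f"broadband ambient light transmitted: {100 * passband_nm / visible_band_nm:.1f} %")  # ~3.3 %
```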
