Triangulation and Structured Light
The Gray code pattern consists of a sequence of stripes, projected alternately light and dark, that become progressively finer. By tracking the intensity progression at each pixel with a camera, the pattern can be decoded and a depth range assigned to that pixel. Phase images, on the other hand, are sinusoidal fringe patterns projected onto the object; a digital micromirror device, for example, can be used to generate them. The phase of the sine wave is shifted from image to image, and the recorded phase sequence yields depth information for each camera pixel.
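As an illustration of how such a pattern sequence can be decoded, the following minimal Python/NumPy sketch recovers a coarse stripe index from a Gray code image stack and a wrapped phase from a phase-shift sequence. The function names, the thresholding scheme, and the assumed pattern form I_k = A + B·cos(φ − 2πk/N) are illustrative choices, not the sensor's actual processing.

```python
import numpy as np

def decode_gray_code(images, threshold):
    """Decode a stack of Gray code stripe images into per-pixel stripe indices.

    images:    list of 2D arrays, one per Gray code pattern (coarse to fine)
    threshold: per-pixel brightness threshold (e.g. mean of full-on/full-off frames)
    """
    bits = [(img > threshold).astype(np.uint32) for img in images]
    # Gray-to-binary conversion: keep the first bit, XOR-accumulate the rest.
    code = bits[0]
    acc = bits[0]
    for b in bits[1:]:
        acc = acc ^ b
        code = (code << 1) | acc
    return code  # coarse stripe index per pixel

def decode_phase_shift(images):
    """Wrapped phase from N >= 3 fringe images of the form A + B*cos(phi - 2*pi*k/N)."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return np.arctan2(num, den)  # wrapped phase in (-pi, pi], refined by the Gray code
```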
Passive Stereo
In this procedure, two cameras view the same object at an angle. The distance of a point can be determined from the different viewing angles by triangulation. The difficulty is identifying the same point in both camera images; the method therefore performs poorly on low-contrast surfaces such as a plain white wall.
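The triangulation step can be sketched in a few lines. The example below uses OpenCV block matching on already rectified images; the file names, focal length, and baseline are placeholder values rather than real camera parameters.

```python
import cv2
import numpy as np

# Rectified left/right images from the two cameras (grayscale).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching searches for the same point in both images along each row.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # in pixels

# Triangulation: depth Z = f * B / d (focal length in pixels, baseline in metres).
f, B = 1200.0, 0.10          # example values only
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```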
Active Stereo
The setup is the same as for passive stereo. The only difference is that a pattern (e.g. randomly distributed points) is projected onto the object, which makes it easier to match a point between the two cameras.
Time of Flight
In this procedure, the distance between the object and the sensor is determined from the transit time of light. The sensor emits light pulses, which are reflected by the object; the time until the reflected pulse returns determines the distance. This yields depth information such as the structure or distance of objects.
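A minimal sketch of the underlying relation: the measured round-trip time of a pulse is converted to distance with d = c·t/2 (the numbers below are only an illustration).

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from the measured round-trip time of a light pulse.

    The pulse travels to the object and back, so the one-way
    distance is half the path covered in the measured time.
    """
    return C * round_trip_time_s / 2.0

# Example: a pulse returning after 10 nanoseconds corresponds to roughly 1.5 m.
print(tof_distance(10e-9))  # ~1.499 m
```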
Comparison of 3D Technologies
| | Structured light | Passive stereo | Active stereo | Time of flight |
|---|---|---|---|---|
| Resolution | | | | |
| Accuracy | | | | |
| Ambient light | | | | |
| Reading speed | | | | |
| Low-contrast objects | | | | |
| Obstruction/shading | | | | |
The Three-Dimensional Nature of the 3D Sensor
The 3D sensor projects several patterns onto the object to be measured and records them with a camera. The object is thereby captured in three dimensions and digitized as a 3D point cloud. Neither the object nor the 3D sensor moves during the measurement, so objects can be captured quickly and very precisely. The measuring range (X, Y) and the working range (Z) together span the measurement volume; a short sketch of this follows the list below.
1) High-resolution camera
2) Light engine
3) X, Y = measuring range
4) Z = working range
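As an illustration of how the measuring range (X, Y) and the working range (Z) bound the usable measurement volume, the following sketch crops a point cloud to an assumed volume; the range values are placeholders, not sensor specifications.

```python
import numpy as np

# Example measurement volume (illustrative values, not sensor specifications):
X_RANGE = (-0.20, 0.20)   # measuring range in X, metres
Y_RANGE = (-0.15, 0.15)   # measuring range in Y, metres
Z_RANGE = (0.30, 0.50)    # working range in Z, metres

def crop_to_measuring_volume(points: np.ndarray) -> np.ndarray:
    """Keep only the points of an (N, 3) cloud that lie inside the volume."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (
        (X_RANGE[0] <= x) & (x <= X_RANGE[1])
        & (Y_RANGE[0] <= y) & (y <= Y_RANGE[1])
        & (Z_RANGE[0] <= z) & (z <= Z_RANGE[1])
    )
    return points[mask]
```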
3D Object Measurement Simplifies Automobile Production
Illumination: Light Engines for Ideal Illumination
The illumination source can be a laser or an LED. Lasers generate light with a high degree of temporal and spatial coherence and a narrowband spectrum; the light can be shaped into a specific pattern via optics. An LED, in contrast, produces broadband light with hardly any coherence. LEDs are easier to handle and are available in more wavelengths than laser diodes. Any pattern can be generated using digital light processing (DLP) technology. The combination of LED and DLP makes it possible to create different patterns quickly and effectively, which makes it well suited for structured-light 3D technology.
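To illustrate the kind of pattern sequence such a light engine can display, the sketch below generates Gray code stripes and phase-shifted sinusoidal fringes as 8-bit images. The resolution, bit count, and fringe period are assumed example values, not properties of a specific DLP device.

```python
import numpy as np

WIDTH, HEIGHT = 1920, 1080   # example micromirror resolution (assumed)

def gray_code_patterns(bits: int = 11):
    """Vertical stripe patterns encoding the column index in Gray code.

    11 bits cover 1920 columns (2**11 = 2048), ordered coarse to fine.
    """
    cols = np.arange(WIDTH)
    gray = cols ^ (cols >> 1)            # binary-reflected Gray code of each column
    patterns = []
    for b in reversed(range(bits)):
        row = ((gray >> b) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(row, (HEIGHT, 1)))
    return patterns

def phase_shift_patterns(period_px: int = 32, steps: int = 4):
    """Sinusoidal fringes shifted by 2*pi/steps between successive images."""
    x = np.arange(WIDTH)
    patterns = []
    for k in range(steps):
        fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px - 2 * np.pi * k / steps)
        patterns.append(np.tile((fringe * 255).astype(np.uint8), (HEIGHT, 1)))
    return patterns
```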
Image Recording: Perfect Picture with CMOS Power
The object is recorded in two dimensions using a high-resolution camera. Modern cameras typically use a photosensitive semiconductor chip based on CMOS or CCD technology; such a chip consists of many individual cells (pixels), and modern chips have several million of them, allowing two-dimensional capture of the object. Because of its better performance, CMOS technology is the one used in 3D sensors.
3D Point Cloud: From Capture to the Finished Image
The pattern sequence of the structured light is recorded by the camera; the complete set of images is called the image stack. The depth information for each point (pixel) can be determined from the images of the individual patterns. Since the camera has several million pixels and records a gray level for each of them, several megabytes of data are generated in a short time. This data can be processed on a powerful industrial PC or internally in the sensor with an FPGA: internal calculation is faster, while calculation on the PC allows greater flexibility. The result of the calculation is a 3D point cloud.
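Once a depth value has been decoded for each pixel, the point cloud follows from the pinhole camera model. The sketch below back-projects a depth map using assumed intrinsics (fx, fy, cx, cy); it illustrates the geometry only, not the sensor's internal FPGA implementation.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a per-pixel depth map into an (N, 3) point cloud.

    depth: 2D array of Z values per pixel (0 where no depth could be decoded)
    fx, fy, cx, cy: pinhole intrinsics of the high-resolution camera
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels without a valid depth
```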
Integration: From Sensor to Application
The 3D point cloud is calculated from the captured images, either in the sensor or on an industrial PC. Software development kits (SDKs) from the manufacturer or standardized interfaces such as GigE Vision are used for easy integration.
Use of Monochrome Illumination
Monochrome illumination makes it possible to effectively suppress interfering ambient light with optical filters. The illumination can also be optimized for maximum efficiency and intensity.