
Technology of Smart Cameras and Vision Sensors

Smart cameras and vision sensors make it easy to set up an image processing application: they are complete image processing systems in sensor format. With intuitive operation and a broad feature set, a wide range of industrial image processing tasks can be carried out without the complexity of assembling and configuring a PC-based vision system.

What Is a Smart Camera?

Smart cameras unite image recording and evaluation in a single housing. Optics and illumination are often not permanently installed and can be configured individually. This results in a range of applications comparable to a conventional PC-based vision system. Smart cameras typically ship with a software environment that ranges from simple configuration tools to extensive packages comparable to full image processing suites.

B60 smart camera with auto-focus and C mount

How Does a Smart Camera Work?

Smart cameras are characterized by combining the recording and evaluation of images in a compact, robust housing. The built-in processor evaluates the recorded raw image data internally and outputs a result directly (e.g. good/bad part). Combined with powerful software, they can solve a wide range of tasks. The device is usually accessed via an Ethernet interface, and the application is created via a graphical user interface. By combining intelligent hardware with powerful software, in some cases even with the option of individual programming, users receive a high-performance solution for their application. As a complete solution, the smart camera makes setting up an image processing project much easier.
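The direct result output over Ethernet is often a simple text telegram that a PLC or PC client parses. As a hedged illustration only, the telegram layout `RESULT;<OK|NOK>;<value>` below is an assumption for this sketch, not a documented protocol of any particular device:

```python
# Hypothetical result telegram parser for a smart camera's Ethernet output.
# The format "RESULT;<OK|NOK>;<measured value>" is assumed for illustration;
# real devices define their own telegram layout.

def parse_result(telegram: str) -> tuple[bool, float]:
    """Split a semicolon-separated result telegram into (pass, value)."""
    tag, verdict, value = telegram.strip().split(";")
    if tag != "RESULT":
        raise ValueError(f"unexpected telegram: {telegram!r}")
    return verdict == "OK", float(value)

good, measured = parse_result("RESULT;OK;12.5")   # good part, measured 12.5
```

In practice such telegrams would arrive over a TCP socket opened to the camera's Ethernet interface; the parsing step stays the same.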

What Is the Difference Between Smart Cameras and Vision Sensors?

The distinction between vision sensors and smart cameras is not always clear, as the boundary between the two is fluid.

What Is a Vision Sensor?

Vision sensors are particularly compact devices that already include suitable optics in addition to the illumination. They are typically limited in resolution and computing power and are optimally matched to a particular application. The software can be configured quickly, even without specialist knowledge of industrial image processing. Pre-trained neural networks are increasingly used, enabling the user to carry out simple pass/fail classifications with only a few reference images. The areas of application are usually limited to simple identification tasks, presence checks and simple measurement applications.

When to Use C Mount Cameras and When to Use Auto-Focus Cameras?

The optics of a camera define the resulting field of view at a given working distance. For the majority of industrial image processing applications, these parameters are fixed because the object size and installation situation are known. This is why C mount lenses are used here: the right lens is chosen based on working distance, object size and sensor size. A vision calculator supports this selection.
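Under thin-lens assumptions, the required focal length follows from the magnification m = sensor width / field of view and the working distance. A minimal sketch of that calculation (all dimensions in mm; the sensor width and field of view below are example values, not product data):

```python
def focal_length(sensor_width: float, fov_width: float,
                 working_distance: float) -> float:
    """Thin-lens approximation: f = WD * m / (1 + m), with m = sensor / FOV."""
    m = sensor_width / fov_width          # optical magnification
    return working_distance * m / (1 + m)

# Example: ~7.2 mm wide sensor, 150 mm field of view, 400 mm working distance
f = focal_length(7.2, 150.0, 400.0)       # ~18.3 mm
# In practice, the nearest standard C mount focal length would be chosen.
```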



If at least one of the basic optical parameters is variable, the focus must adapt to this change as quickly as possible. Devices with auto-focus make it possible to teach different focus positions. When inspecting packages of different sizes, for example, the working distance changes, so a camera with auto-focus is required.
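Teaching focus positions can be pictured as storing a lens setting per taught working distance and recalling the closest one at run time. The class and method names below (`FocusPresets`, `teach`, `recall`) are invented for this sketch and do not correspond to any real device API:

```python
class FocusPresets:
    """Store taught focus positions keyed by working distance (mm)."""

    def __init__(self) -> None:
        self.presets: dict[float, int] = {}   # distance -> lens motor position

    def teach(self, distance_mm: float, lens_position: int) -> None:
        self.presets[distance_mm] = lens_position

    def recall(self, distance_mm: float) -> int:
        """Return the lens position taught for the nearest known distance."""
        nearest = min(self.presets, key=lambda d: abs(d - distance_mm))
        return self.presets[nearest]

presets = FocusPresets()
presets.teach(200.0, 1450)          # small package, close working distance
presets.teach(600.0, 3100)          # large package, far working distance
position = presets.recall(210.0)    # nearest taught distance is 200 mm
```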
 

Smart Cameras with C mount

A B60 smart camera with C mount in use at a constant working distance.

Smart Cameras with auto-focus

How Does Auto-Focus Work?

Devices with auto-focus ensure high-resolution images even at changing distances by automatically adjusting their focus to selected image areas. A basic distinction is made between mechanical and software-based technologies. Mechanical auto-focus includes motor, liquid lens and piezo variants, while software-based auto-focus is divided into contrast and phase detection.

Mechanical Function

Classic auto-focus is based on a motor that moves the lens elements.
Liquid lens auto-focus uses a liquid lens that deforms under pressure. The liquid is controlled by an electromagnet that either attracts or repels it, changing the curvature of the lens and thus its focal length.
Piezo auto-focus is based on the piezoelectric effect. Piezoelectric materials deform when an electrical voltage is applied; conversely, they generate a voltage when stretched or compressed. In auto-focus, this effect is used to move the lens elements.

Software-Based Functions

Most compact cameras use contrast auto-focus. Sharpness is measured on the image sensor itself by analysing differences in brightness and color, and the lens is adjusted until optimal sharpness is achieved. The contrast method always overshoots the point of maximum sharpness slightly and then turns back, so it moves the lens element back and forth.
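Contrast auto-focus is essentially a hill-climbing search over lens positions: step while the contrast metric improves, detect the drop past the peak, and step back. A simplified sketch, where a quadratic `sharpness` function stands in for a real image contrast measurement:

```python
def contrast_af(sharpness, pos: float, step: float = 1.0) -> float:
    """Hill-climb the sharpness metric; the overshoot past the peak is
    what makes contrast auto-focus hunt back and forth."""
    best = sharpness(pos)
    # Probe both sides once to pick the climbing direction.
    direction = 1.0 if sharpness(pos + step) >= sharpness(pos - step) else -1.0
    while True:
        nxt = pos + direction * step
        s = sharpness(nxt)
        if s <= best:          # overshot the peak: step back and stop
            return pos
        pos, best = nxt, s

# Stand-in contrast metric with peak sharpness at lens position 5
focus = contrast_af(lambda x: -(x - 5.0) ** 2, pos=0.0)
```

A real implementation would compute the metric from the sensor image (e.g. gradient or variance of brightness) and use coarse-to-fine step sizes.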
Phase auto-focus is mainly used in photography, where focus is measured by a dedicated AF sensor. Using line and cross sensors, it calculates from angles and distances in which direction and how far the lens needs to be adjusted. This removes the need to move the lens element back and forth.

Which Technology Is Best for the Application? The Differences at a Glance

What Is the Significance of Integrated Illumination?

The lighting module is mounted on the B60 smart camera without tools.
Illumination is essential when using smart cameras and vision sensors. To compensate for weak or inhomogeneous ambient light, smart cameras and vision sensors with auto-focus are usually equipped with integrated illumination. The illumination modules are often exchangeable and can be swapped directly in the field depending on the application. This is usually incident light, as integrated illumination cannot be aligned independently of the camera. To create lighting that is as homogeneous as possible and free of reflections, individual segments can be controlled separately on some models. This makes it possible to simulate different illumination angles, especially at short working distances, and thus to achieve diffuse exposure or bring out specific features. External illumination is often used at greater working distances and in through-beam applications.

Which Resolution Fits Which Application?

0.4 megapixel (VGA): simple applications, e.g. presence checks
1.6 megapixels: assembly checks, optical character recognition, etc.
5 megapixels: applications that require high accuracy, e.g. measurements and inspections
≥ 12 megapixels: highest-precision inspections
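A common rule of thumb links field of view and feature size to the required resolution: the smallest feature should span a minimum number of pixels. The factor of 3 pixels per feature below is an assumed example value; the right factor depends on the task:

```python
def required_megapixels(fov_w: float, fov_h: float,
                        smallest_feature: float,
                        pixels_per_feature: int = 3) -> float:
    """Megapixels needed so the smallest feature (same unit as the FOV)
    spans `pixels_per_feature` pixels in the image."""
    px_w = fov_w / smallest_feature * pixels_per_feature
    px_h = fov_h / smallest_feature * pixels_per_feature
    return px_w * px_h / 1e6

# 100 x 80 mm field of view, 0.5 mm smallest feature
mp = required_megapixels(100.0, 80.0, 0.5)   # ~0.29 MP: VGA-class is enough
```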

What Is an Image Chip?

The image chip (also known as the image sensor) is an electronic component that is sensitive to light. Incoming light (photons) is converted into an electrical charge by the photoelectric effect. Monochrome sensors are used primarily in industrial settings because they cause less data traffic. These are usually complementary metal oxide semiconductors, or CMOS sensors for short.
Exploded view of a B60 smart camera showing the image chip.

What Does the Size of an Image Chip Depend On?

The sensors for industrial image processing are available in different sizes depending on the resolution. Technically, bigger is better, but larger sensors are less practical, especially in compact cameras with limited installation space. The market is tending toward smaller sensor sizes, as ever-better manufacturing processes minimize the disadvantages of smaller image chips. A smaller image chip also leaves less space for the individual pixels. The larger a single pixel, the more light it can absorb and the less light needs to be supplied to the application. Because exposure times in image processing are often short, e.g. in fast dynamic applications, particular attention must be paid to the balance between the number and size of pixels.
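The trade-off between pixel count and pixel size can be made concrete: for a fixed sensor width, the pixel pitch shrinks as resolution grows, and the light collected per pixel drops roughly with the square of the pitch. A sketch with assumed example dimensions, not product data:

```python
def pixel_pitch_um(sensor_width_mm: float, h_pixels: int) -> float:
    """Pixel pitch in micrometres for a given sensor width and resolution."""
    return sensor_width_mm / h_pixels * 1000.0

# The same 7.2 mm wide sensor at two resolutions
pitch_vga = pixel_pitch_um(7.2, 640)     # ~11.3 um per pixel
pitch_5mp = pixel_pitch_um(7.2, 2592)    # ~2.8 um per pixel
# Light gathered per pixel scales roughly with pitch squared:
ratio = (pitch_vga / pitch_5mp) ** 2     # the VGA pixel collects ~16x more light
```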

When Are Color Image Chips Used?

A color camera, i.e. a camera with a color image chip, is required in very few cases. Working with a color image chip is only advisable when features need to be detected via small color differences. Monochrome sensors have significantly higher light sensitivity than color image chips, and their lower data traffic has a positive effect on process time.
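The data-traffic difference is easy to quantify for uncompressed frames: an 8-bit monochrome image carries one byte per pixel, while a demosaiced RGB image carries three. (Raw Bayer data from a color sensor is smaller, but is typically expanded to three channels before processing.) A minimal sketch:

```python
def frame_bytes(width: int, height: int, channels: int, bit_depth: int = 8) -> int:
    """Uncompressed frame size in bytes."""
    return width * height * channels * bit_depth // 8

mono  = frame_bytes(2592, 1944, channels=1)   # 5 MP monochrome
color = frame_bytes(2592, 1944, channels=3)   # 5 MP RGB: 3x the data to move
```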
 