General Questions about Machine Vision Software
Machine vision software is used to solve industrial image processing tasks with wenglor machine vision products.
wenglor offers the following machine vision software:
- wenglor uniVision 3 software
- wenglor Discovery Tool software
- uniVision 2 software
- VisionApp 360 Software
- VisionApp Demo 3D Software
- Support software
New software versions are provided for functional enhancements, performance optimizations and bug fixes.
Frequently Asked Questions about wenglor uniVision 3
wenglor uniVision 3 is software for setting up wenglor machine vision products to solve tasks in the field of industrial image processing. The development environment enables users to evaluate data automatically (e.g. image evaluation) via graphical user interfaces and by creating configurations instead of conventional programming. wenglor uniVision 3 thus qualifies as a low-code or no-code platform.
Registered users can download and install the uniVision 3 software free of charge via the product detail page DNNF023.
wenglor uniVision 3 is based on the functionality of uniVision 2, but contains numerous new functions, optimizations and bug fixes. uniVision 2 and 3 also support different devices:
- uniVision 2: weQube B50 smart camera, 2D/3D profile sensors, BB1C5 control unit
- uniVision 3: B60 smart camera, MVC machine vision controller
The wenglor uniVision 3 software is supported by the B60 smart camera and the MVC machine vision controller. It is the standard software for all future wenglor machine vision devices.
wenglor uniVision 3 has a toolbox with numerous software modules that can be added flexibly to the job and linked to each other as required.
Templates are predefined uniVision jobs for a specific task (e.g. code reading) that can be loaded onto the uniVision product. The most important modules are already saved and linked in the templates so that only a few parameters need to be adjusted.
Although no programming knowledge is required to operate wenglor uniVision 3, basic knowledge of machine vision processing and parameterization is required.
wenglor uniVision 3 software requires a PC with Windows 10 or Windows 11. Details can be found in the technical data on the product detail page.
Software and firmware updates for uniVision 3 are published several times a year in order to expand the functional scope of uniVision devices and to continuously improve their stability and performance.
Yes, provided that the devices are compatible with uniVision 3, new software and firmware updates can be installed quickly and easily on the corresponding device via the device website. More detailed descriptions of the update process can be found in the operating instructions of the respective device.
Most modules available in wenglor uniVision 3 software can be used several times within a job and can be combined as required. Exceptions are the “Spreadsheet” and “Image Deep OCR” modules (for the B60 smart camera), as well as the interfaces, which can only be used once per job.
Yes, the software can also be expanded later with separate license packages.
Yes, profiles can be read into uniVision offline via Teach Plus or via simulation mode if they are in PLY format. For example, profiles can be saved with the VisionApp Demo 3D software and simulated in uniVision 3.
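For illustration, the sketch below reads a simple ASCII PLY profile into a NumPy array. The file name and the assumption of a plain ASCII header with x/y/z vertices are hypothetical; real profile files may be binary PLY or contain additional properties.

```python
import numpy as np

def read_ascii_ply(path):
    """Read x/y/z vertices from a simple ASCII PLY file (header + vertex list)."""
    with open(path) as f:
        lines = f.read().splitlines()
    assert lines[0].strip() == "ply", "not a PLY file"
    n_vertices, body_start = 0, 0
    for i, line in enumerate(lines):
        token = line.strip()
        if token.startswith("element vertex"):
            n_vertices = int(token.split()[-1])
        elif token == "end_header":
            body_start = i + 1
            break
    # Each vertex line starts with x y z; any extra properties are ignored.
    pts = [list(map(float, l.split()[:3]))
           for l in lines[body_start:body_start + n_vertices]]
    return np.asarray(pts)

profile = read_ascii_ply("profile.ply")  # placeholder file; shape (n_vertices, 3)
```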
Yes, in Teach Plus mode, data (images or profiles) can easily be saved, deleted and loaded via the Image Container Viewer in uniVision. In simulation mode, data (images or profiles) are read from a fixed folder path on the PC.
The DNNF023 uniVision 3 software contains an offline simulator, which can largely be used free of charge and without licensing. The DNNL022 license is only required if the following modules are to be used offline:
- Image Code 1D
- Image Code 2D
- Image Deep OCR
- Image Pattern Match
- HALCON Script
The wenglor uniVision 3 software can also be used offline for simulation without a device in two different ways:
- Teach Plus mode
- Simulation mode
The Teach Plus mode can be used, for example, to optimize projects with good and bad images taken with the camera. Quick tests for evaluating the software can also be carried out using the examples stored in the software. The offline simulation mode allows the software to be evaluated with image or profile files that were recorded with third-party hardware or generated synthetically.
The visualization of a job can be configured flexibly and freely. Results can be displayed directly in the image as an overlay, for example. Visualization is web-based and can be used on any device with a browser.
wenglor uniVision 3 supports all relevant interfaces to control systems and robots so that uniVision devices can be integrated quickly and easily.
With uniVision 3, one robot connection for welding or for robot vision can be set up per process instance. The B60 smart camera can therefore be connected to one robot, while the MVC machine vision controller allows up to 16 individual robot connections for welding and/or robot vision.
Yes, uniVision 3 has templates for common joint types that make it easier to set up the job.
In addition to a robot, an MLxL 2D/3D profile sensor, an MVC machine vision controller and the uniVision Robotics license package (included in the MVCV001 variant or licensable later via the DNNL026 license package) are required for weld seam tracking with uniVision 3.
With Robot Vision, the camera and robot are calibrated to each other by hand-eye calibration using a calibration object. The camera can be mounted statically or on the end effector of the robot. Professional, rigid and temperature-resistant calibration objects are available in different sizes. For the calibration routine, various positions in which the camera sees the calibration object must be taught in so that the relationship between camera and robot can be determined.
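As a generic illustration of the principle (not wenglor's implementation), the following sketch solves the hand-eye problem with OpenCV's cv2.calibrateHandEye. The pose data here is random placeholder input that only demonstrates the call; real input must be consistent pose pairs from the robot controller and from detecting the calibration object in the images.

```python
import cv2
import numpy as np

# Random placeholder poses for 10 taught-in positions. In practice:
# gripper2base comes from the robot controller (flange pose in base frame),
# target2cam from detecting the calibration object in each camera image.
rng = np.random.default_rng(0)
R_gripper2base, t_gripper2base, R_target2cam, t_target2cam = [], [], [], []
for _ in range(10):
    R1, _ = cv2.Rodrigues(rng.normal(size=3))
    R2, _ = cv2.Rodrigues(rng.normal(size=3))
    R_gripper2base.append(R1)
    t_gripper2base.append(rng.normal(size=(3, 1)))
    R_target2cam.append(R2)
    t_target2cam.append(rng.normal(size=(3, 1)))

# Solves the classic AX = XB problem for the camera pose relative to the
# gripper (eye-in-hand mounting).
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam
)
print(R_cam2gripper, t_cam2gripper)
```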
The uniVision 3 software enables communication with robots from different manufacturers. The open Robot Vision API can also be used to connect to robots whose type is not yet officially supported.
Robot Vision is supported in wenglor uniVision 3 by the B60 smart camera and by the MVC machine vision controller with the machine vision cameras of the BBVK or BBZK series.
Calibration plates are used in measuring applications to correct optical distortion and ensure precise conversion from pixel to millimeter values. This takes place in the wenglor uniVision 3 software via the Image Calibration module.
In addition, a calibration plate enables simple and fast calibration in robot vision applications. This process also involves coordinate matching, thus eliminating the distortion caused by the optics. For precise calibration, the calibration plate should be completely within the camera's field of view and cover at least half of it. Paper-printed calibration patterns result in inaccurate calibration. Opaque plates (e.g. ZVZJ001) are suitable for incident light applications, transparent plates (e.g. ZVZJ005) for transmitted light applications.
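To illustrate the pixel-to-millimeter conversion in principle, the sketch below fits a plane homography between hypothetical plate markers detected in the image and their known metric positions. This is a simplified stand-in: the actual Image Calibration module and its lens-distortion model are not documented here.

```python
import cv2
import numpy as np

# Detected plate markers in pixel coordinates (placeholder values) and the
# known metric positions of those markers on the calibration plate in mm.
px = np.array([[102.0, 98.0], [812.3, 101.5], [809.9, 611.2], [104.4, 608.8]])
mm = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 75.0], [0.0, 75.0]])

H, _ = cv2.findHomography(px, mm)  # pixel -> mm mapping in the plate plane

def to_mm(point_px):
    """Convert an image point to metric plate coordinates."""
    p = np.array([point_px[0], point_px[1], 1.0])
    q = H @ p
    return q[:2] / q[2]

print(to_mm((450.0, 350.0)))  # measured point converted to mm
```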
If the relationship between the camera and robot does not change, recalibration is not necessary.
In wenglor uniVision 3 software, data from several different objects can also be found with a single image capture and sent to the robot in order to optimize the cycle time for pick-and-place tasks. This means that the robot has to move into the detection position less frequently and can directly grasp other objects that have already been found.
For picking in pick-and-place applications, any desired offsets in x and y can also be set up in wenglor uniVision 3 software so that the object can be gripped at its tip, for example.
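The geometry behind such a pick offset is simple: the offset is defined in the object's own coordinate frame and rotates with the detected object. A minimal sketch with hypothetical values, for illustration only (not the uniVision implementation):

```python
import math

def grip_point(obj_x, obj_y, obj_angle_deg, offset_x, offset_y):
    """Apply a pick offset defined in the object's own coordinate frame.

    The offset rotates with the detected object, so the gripper always
    targets the same feature (e.g. the tip), regardless of orientation.
    """
    a = math.radians(obj_angle_deg)
    gx = obj_x + offset_x * math.cos(a) - offset_y * math.sin(a)
    gy = obj_y + offset_x * math.sin(a) + offset_y * math.cos(a)
    return gx, gy

# Object found at (250 mm, 120 mm), rotated 30 degrees; grip 40 mm toward its tip.
print(grip_point(250.0, 120.0, 30.0, 40.0, 0.0))
```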
In the Pick and Place application, an individual object height can be set up for each object type so that different object types can be gripped at different heights.
Various object types can be easily taught in using wenglor uniVision 3 software. The object type can then be sent directly to the robot.
Teaching in objects is easiest in wenglor uniVision 3 using the Pattern Match and Locator modules.
The “Device Robot Vision” module in wenglor uniVision 3 enables direct communication between 2D cameras and robots.
Yes, the execution of HALCON scripts can also be tied to a specific device. This prevents a project containing a HALCON script from simply being copied to other devices.
HALCON scripts can be encrypted to prevent unwanted changes to the script.
Data such as taught-in contour models can be stored permanently and independently of the platform in the HALCON dictionary.
There are numerous examples of HALCON scripts that show in a simple format which data types are supported and how applications can be implemented easily.
The typical workflow for working with HALCON scripts is as follows:
- Recording of a Teach+ file with real data
- Creation of the HALCON script with the recorded data in the HDevelop software
- Loading the HALCON Script into wenglor uniVision 3 software in the HALCON Script module
The image data required to create a HALCON script is recorded in a Teach+ file with real data.
The uniVision ecosystem enables flexible data exchange between all uniVision modules and the HALCON Script module. Numerous interfaces (e.g. PROFINET, EtherNet/IP) are available on the uniVision product, so results from the HALCON Script module can be output directly via the uniVision interfaces. The flexible web-based visualization also enables individual display of results from the HALCON Script module – even directly in the image!
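As an illustration of receiving such results on the PC or controller side, here is a minimal TCP client sketch. It assumes a plain TCP interface with newline-terminated ASCII telegrams; the IP address, port and message framing are placeholders, as the actual telegram format depends on the configured uniVision interface.

```python
import socket

# Placeholder address and port; the real values depend on the device setup.
DEVICE_IP, PORT = "192.168.100.1", 32001

with socket.create_connection((DEVICE_IP, PORT), timeout=5) as s:
    buffer = b""
    while True:
        data = s.recv(4096)
        if not data:
            break
        buffer += data
        # Assumed framing: one newline-terminated ASCII telegram per result.
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            print("result:", line.decode(errors="replace"))
```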
Yes, the HDevEngine is already running on uniVision devices. HALCON scripts can thus be run directly on uniVision devices. This allows the focus to be placed on the application solution (software)!
HALCON scripts can also be created with other HALCON versions. However, the compatibility information with the version HALCON 22.11 used on uniVision devices must be observed.
uniVision devices run version HALCON 22.11.
The following data types can be transferred from uniVision modules to the HALCON Script module (inputs) or returned from the HALCON Script module to other uniVision modules (outputs):
- Iconic variables:
  - Images
  - Regions
  - XLDs
- Control variables:
  - Integer
  - Real
  - String
Yes, the standard software modules in wenglor uniVision 3 software can be combined with HALCON scripts as desired. Flexible data exchange between the modules is possible!
wenglor uniVision 3 thus enables a combination of parameterization and programming:
- Parameterization: Standard tasks can easily be completed with the standard uniVision modules from the uniVision toolbox.
- Programming: Complex tasks can be solved in HDevelop with HALCON scripts.
HALCON scripts created in MVTec’s HDevelop software can be loaded in the HALCON Script module in the uniVision software and run on the uniVision product (e.g. B60). The HDevEngine required for this is already pre-installed on the uniVision product.
Frequently Asked Questions about AI-Powered Software
Yes, all data is stored in Europe in accordance with the GDPR. In paid plans, all rights remain with the user. The cloud storage is BSI C5 certified, and users can delete their data themselves after the end of the plan.
All data is encrypted using TLS and AES-256, stored multiple times redundantly and backed up automatically. This protects it against loss and unauthorized access.
The AI Lab supports different formats (e.g. JPEG, BMP) and resolutions. These are adjusted automatically. For best results, all images should be of comparable quality.
An existing model cannot be retrained – every training is always based on the complete data set.
Yes, several B60 smart cameras can upload images to the same data set in parallel via weHub, as long as the plan has free “connected devices” available.
The division into training and test data takes place automatically. A training session with ≤ 500 images at 320 px usually takes approx. 5 minutes. Results may vary slightly between different training sessions, as random elements increase robustness.
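As a generic illustration of why repeated trainings can differ slightly, the sketch below performs a randomized train/test split; how the AI Lab actually splits the data internally is not documented here, and the file names and labels are placeholders.

```python
from sklearn.model_selection import train_test_split

images = [f"img_{i:04d}.jpg" for i in range(500)]  # placeholder file names
labels = [i % 2 for i in range(500)]               # two placeholder classes

train_x, test_x, train_y, test_y = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=None
)
# With random_state=None, each run draws a different split. Such random
# elements are one reason repeated trainings yield slightly different results.
print(len(train_x), len(test_x))
```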
The upload rate depends on the network, image size and number of devices. Several frames per second per device are common. The following applies to the classification: 1 credit = up to 5,000 images, 2 credits = up to 10,000 images, then further credits per 5,000 images.
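Based on the stated scheme, the credit demand for a classification training works out to a simple ceiling division, as in this small sketch (the function name is hypothetical):

```python
import math

def classification_credits(num_images: int) -> int:
    """Credits per the stated scheme: 1 credit covers up to 5,000 images,
    and each further 5,000 images (or part thereof) cost one more credit."""
    return max(1, math.ceil(num_images / 5000))

print(classification_credits(4000))   # -> 1
print(classification_credits(12000))  # -> 3
```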
Images can be uploaded in full resolution, but are automatically scaled to the appropriate input size for training and execution. In the AI Lab, only model sizes that can be executed on the respective hardware can be selected. The AI model size is derived from the AI input image size and the AI model architecture and directly influences the inference speed.
By default, the AI Lab generates quantized networks, as they run faster on the B60 smart camera. Custom, non-quantized ONNX models can also be used, but they are usually less efficient. Detailed size and performance specifications for ONNX models can be found on GitHub.
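For orientation, a minimal onnxruntime inference sketch on the PC is shown below. The model file name and input shape are placeholders; this is a generic illustration of running an ONNX model, not the on-device execution path of the B60.

```python
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder file name for any exported ONNX model.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

# Build a dummy input, replacing dynamic dimensions with 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```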
There is no fixed limit on the number of classes. Recommendation: as many as necessary, as few as possible. It is important to have as balanced a ratio of images per class as possible. A minimum of 5 images per class is required, and a minimum of 50 images is recommended for reliable results.
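Checking the class balance before training is straightforward; the sketch below applies the stated thresholds (minimum 5, recommended 50 images per class) to a hypothetical label list.

```python
from collections import Counter

# labels: one class name per training image (placeholder data).
labels = ["ok"] * 60 + ["scratch"] * 55 + ["dent"] * 4

for cls, n in Counter(labels).items():
    status = "ok" if n >= 50 else ("minimum met" if n >= 5 else "too few")
    print(f"{cls}: {n} images ({status})")
# A strongly unbalanced class (here: 'dent') should be topped up before training.
```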
The connection between the AI Lab and uniVision takes place via weHub, via which images are uploaded to the AI Lab and trained AI models are transferred back to uniVision.
A permanent Internet connection is not necessary as weHub buffers data. AI models can only be used with wenglor hardware for inference. However, training data can also be created with third-party devices. The AI Lab is not optimized for smartphones or tablets.
The AI Lab is also designed for AI beginners who want to create their own AI models. The ONNX module is aimed at experienced AI users with their own network architectures or when image data is not allowed to leave the company network. Both the “Image ONNX” module and the AI Lab are included in the “uniVision AI” license package.
No proprietary hardware or expert knowledge is required to train AI models in the cloud. This saves investments and resources. Cloud training offers scalable computing power, remote access, data backup and flexible costs – unlike training on local PCs or edge devices that are not optimized for this.
Cloud-trained AI models are more complex, more accurate and can handle large amounts of data before being distributed to devices for inference. Edge AI models are smaller, more efficient and deliver fast results directly on the device, but often achieve lower accuracy.
The evaluation report is available for checking the AI model quality. It shows the most important key figures for accuracy and serves as proof, e.g. for a factory acceptance.
Plans can be extended by stacking: identical licenses add runtime and credits. An ongoing plan can be replaced by another plan at any time. If limits such as storage, users or credits are reached, you can either delete data that is no longer needed or switch to a higher plan.
Frequently Asked Questions about the wenglor Discovery Tool
The wenglor Discovery Tool is software for searching for and finding wenglor machine vision devices in the network. The software also makes it possible to adapt the network configuration of the machine vision devices to match the network configuration of the system or PC.
The wenglor Discovery Tool is intended as standard software for all wenglor machine vision devices. On the hardware side, the B60 smart camera and the 3D sensors of the ShapeDrive G4 series MLASx1x and MLBSx1x are currently supported.
The wenglor Discovery Tool software requires a PC with Windows 10 or Windows 11. For further details on the system requirements for operating the software, please refer to the Technical Data section on the product detail page of the wenglor Discovery Tool software.
Yes, wenglor Discovery Tool software finds all supported devices, even if they are in a different subnet.
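As a generic illustration of how such discovery mechanisms typically reach devices outside the PC's subnet, here is a UDP broadcast sketch. The probe payload and port are entirely hypothetical and do not describe wenglor's actual discovery protocol.

```python
import socket

# Hypothetical probe payload and port, for illustration only.
PROBE, PORT = b"DISCOVER", 19999

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(2.0)
    s.sendto(PROBE, ("255.255.255.255", PORT))
    try:
        while True:
            data, addr = s.recvfrom(4096)
            print("device answered:", addr, data)
    except socket.timeout:
        pass
# A link-level broadcast reaches devices on the same physical network even
# if their IP configuration does not match the PC's subnet.
```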
wenglor Discovery Tool software indicates the status normal operation, warning or error with details of the warning or error message. Example: The temperature of the machine vision device exceeds a critical value. A warning message with the detailed information “Temperature is too high” then appears in the software.
It is often difficult to find the network settings in your PC’s settings. wenglor Discovery Tool software displays the network settings of the PC directly without having to call up the computer settings.
It is easy to jump to the device website using wenglor Discovery Tool software. There is no need to remember the IP address of the device.
wenglor Discovery Tool software can be used to assign a name of choice to each device so that it is easy to distinguish between several devices.
Frequently Asked Questions about the weHub Software
weHub is software for detecting and managing wenglor machine vision devices in the network. It enables adaptation of the network configuration, automated upload of images to the AI Lab, and download of AI models from the AI Lab to the wenglor machine vision hardware.
weHub replaces the wenglor Discovery Tool: it offers the same functions, such as device search and network configuration for wenglor machine vision devices, plus a bridge function that connects the cloud-based AI Lab with the offline devices.
weHub is the standard software for all wenglor machine vision devices. On the hardware side, the B60 smart camera, the MVC machine vision controller and the 3D sensors of the ShapeDrive G4 series are currently supported.
A PC with the Windows 10 or Windows 11 operating system is required for using weHub. Please refer to the “Technical Data” section on the product details page for further details on the system requirements for operating the software.
Yes, weHub finds all supported devices, even if they are in a different subnet.
weHub indicates the status normal operation, warning or error with details of the warning or error message. Example: The temperature of the machine vision device exceeds a critical value. A warning message with detailed information “Temperature is too high” then appears in the software.
It is often difficult to find the network settings in your PC settings. weHub displays the network settings of the PC directly without having to call up the PC settings.
The device website can be accessed easily via weHub. The IP address of the device is not required.
weHub can be used to assign a name of your choice to each device so that it is easy to distinguish between several devices.