Automatic detection, traceability and quality control of products are becoming increasingly important tasks across all stages of production. These trends are aided by inexpensive cameras and high-performance computers, which enable the use of image processing technology in more areas than ever before.
Image processing has several advantages over visual inspection by humans, especially for quality control purposes. Optical inspection based on image processing can be carried out in a highly repeatable and deterministic manner. Measurements of parts down to the micrometre range, which are almost impossible for humans to perform, can be implemented easily.
In automation applications, image processing has traditionally been handled separately and was often outsourced to external system integrators. Meanwhile, programmable logic controller (PLC) programmers have branched out into numerous disciplines, including motion control, safety technology, measurement technology and robotics. Today it is possible to combine all of those functions in a single control system on one computer. Image processing, however, has typically remained in a black box on a separate high-performance computer, with specific configuration tools and programming languages, or has been implemented directly in specially configured smart cameras.
The downside to using a separate computer is that even the smallest changes require input from a specialist instead of the PLC programmer, resulting in avoidable costs. Where a third-party system integrator is involved, it also means that the expertise remains external. In addition, the communication between image processing and the control system has to be coordinated, which is an error-prone process. As a result, exact timing in image processing cannot be ensured: external processes, such as the operating system, can affect the processing and transmission times, so the results may not reach the controller within the required time span.
The new TwinCAT Vision software combines both worlds in one integrated system. Configuration, especially of the cameras, is carried out in the same tool as the configuration of fieldbuses and motion axes. For programming, the familiar PLC programming languages can be used. This yields substantial engineering cost savings, since no special programming language has to be learned and no special configuration tool is required. Not only are the communication challenges between image processing and control eliminated; the two can also interact directly, opening up entirely new application possibilities. Everything is integrated into one tool and one runtime environment – this is the core innovation offered by TwinCAT Vision.
Personal computer (PC)-based automation combines all control functions on a PC platform and therefore inherently benefits from a Gigabit Ethernet interface. GigE Vision, based on Gigabit Ethernet, is a communication standard that enables reliable and fast transmission of image data from cameras. TwinCAT Vision provides a real-time-capable driver for the Ethernet interface, which makes image data available directly in the controller memory. With support for GigE Vision, TwinCAT Vision is also an open system that allows cameras from a large number of manufacturers to be used.
The first step after a connection is established usually involves configuration of the camera. Manufacturers of cameras with the GigE Vision interface provide a configuration description in GenApi format. The TwinCAT Vision configuration tool reads the parameters and makes them available to the user in a clearly arranged manner. Configuration changes, such as adjusting the exposure time and setting a region of interest, can be made quickly and easily, and the results can be observed in the live image of the camera.
In addition to the camera configuration tool, TwinCAT Vision provides another tool for geometric camera calibration. This determines the parameters for describing the mapping from image coordinates to real-world coordinates and vice versa. It also makes it possible to relate positions in the images to real-world coordinates and to convert measurement results from pixels into the metric system. In addition to perspective distortions, non-linear distortions of the lens are taken into account, which can be observed in the form of visible deformations in the image.
As part of the camera calibration, one or more images of a suitable calibration pattern are required initially. These images can be acquired directly in the engineering tool, or existing images can be imported. After specifying the calibration pattern, the parameters are calculated automatically. In addition to the standard two-dimensional patterns, such as the chessboard pattern or the symmetrical or asymmetrical circle patterns, users can also read in their own patterns. These may also be three-dimensional patterns. As an alternative to using the calibration tool, camera calibration can be performed in the PLC.
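As a sketch of how calibration in the PLC could look, the following Structured Text fragment collects the detected pattern points and estimates the camera parameters. Both function names, F_VN_DetectCalibrationPattern and F_VN_CalibrateCamera, are hypothetical placeholders following the library's F_VN_ naming convention, not confirmed API.

```iecst
// Hypothetical sketch of in-PLC camera calibration; the two F_VN_*
// function names are illustrative placeholders, not confirmed API.
VAR
    ipImageCalib   : ITcVnImage;      // image of the calibration pattern
    ipPointsImage  : ITcVnContainer;  // detected 2D pattern points
    ipPointsWorld  : ITcVnContainer;  // known real-world pattern coordinates
    ipCameraParams : ITcVnContainer;  // resulting intrinsic and distortion parameters
    hr             : HRESULT;
END_VAR

hr := F_VN_DetectCalibrationPattern(ipImageCalib, ipPointsImage, hr);
hr := F_VN_CalibrateCamera(ipPointsImage, ipPointsWorld, ipCameraParams, hr);
// the resulting parameters allow measurement results to be converted
// from pixels into real-world coordinates
```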
Image Processing in the PLC
Through GigE Vision, the raw images are transferred directly from the camera to the router memory of the PLC. For this purpose, the camera has to be set to the image acquisition state and, depending on the camera configuration, individual images must be triggered. The function block FB_VN_GevCameraControl is available for this procedure.
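The acquisition sequence can be sketched in Structured Text. FB_VN_GevCameraControl is named in the text; the state query, the enum values and the GetCurrentImage call are modelled on the TwinCAT Vision API and should be treated as assumptions to be checked against the current documentation.

```iecst
// Cyclic PLC code: bring the camera into the acquisition state and fetch
// the latest image. Apart from FB_VN_GevCameraControl, the identifiers
// are assumptions modelled on the TwinCAT Vision API.
VAR
    fbCamera  : FB_VN_GevCameraControl;  // camera control function block
    eState    : ETcVnCameraState;        // current camera state (assumed enum)
    ipImageIn : ITcVnImage;              // received raw image
    hr        : HRESULT;
END_VAR

fbCamera.GetState(eState);
IF eState = TCVN_CS_OPENED THEN
    fbCamera.StartAcquisition();               // switch to the image acquisition state
ELSIF eState = TCVN_CS_ACQUIRING THEN
    hr := fbCamera.GetCurrentImage(ipImageIn); // succeeds once a new image has arrived
END_IF
```

A camera trigger, whether issued in software or via a hardware output such as the EL2262, then causes the next image to be captured and delivered into ipImageIn.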
For very precise trigger timing, the timestamp-based EL2262 output terminal is available in the Beckhoff I/O system; it can be used to send a hardware trigger signal to the camera with microsecond accuracy. Since everything takes place in real time within a precisely defined temporal context, image acquisition can be synchronised with high precision to, for example, the positions of an axis – a frequently occurring task for PLC programmers.
Many cameras can also send output signals at previously defined events, such as the start of image capture. These signals can be acquired with a digital input terminal from Beckhoff and then used in the PLC for precise synchronisation of further processes.
TwinCAT Vision offers a new image processing library in the PLC that contains numerous image processing algorithms. For example, images can be scaled or converted during preprocessing to the desired colour space, and certain characteristics can be highlighted or suppressed by means of filter functions.
The image can then be binarised by means of thresholding, followed by contour tracing on the resulting image. The contours found in this way can be filtered based on their characteristics, resulting in a selection of interesting image contours or image regions, which, in turn, are suitable for object identification and measurement. With a previously calibrated camera, the feature points can also be transformed back into the world coordinate system, so that position and measurement data can be specified accurately in real-world coordinates.
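The processing chain described above can be sketched as a sequence of chained function calls. F_VN_Threshold and F_VN_FindContours follow the library's F_VN_ naming convention, but their exact signatures are assumptions.

```iecst
// Minimal processing chain: binarise the preprocessed image, then trace
// the contours. Signatures and enum values are assumptions.
VAR
    ipImageIn  : ITcVnImage;      // preprocessed grey-value image
    ipImageBin : ITcVnImage;      // binarised result
    ipContours : ITcVnContainer;  // detected contours
    hr         : HRESULT;
END_VAR

hr := F_VN_Threshold(ipImageIn, ipImageBin, 128, 255, TCVN_TT_BINARY, hr);
hr := F_VN_FindContours(ipImageBin, ipContours, hr);
// further steps (contour filtering, measurement, transformation into
// world coordinates) continue the same chain of calls
```

The sketch assumes a chaining pattern in which each call receives the previous HRESULT and is skipped after an error, so a single error check at the end of the chain suffices.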
By integrating TwinCAT Vision into the TwinCAT real-time environment, the timing of image processing functions can be monitored through watchdogs, which interrupt the functions after a defined period of time or at a certain point in time from the start of a processing cycle. At the same time, the user is provided with any partial results that may be available at the time. In addition, suitable image processing functions can be automatically allocated to multiple CPU cores for parallel processing by means of so-called job tasks, so that TwinCAT Vision makes optimum use of the multi-core capabilities in TwinCAT 3.
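At the call site, such time monitoring might look as follows, assuming an extended function variant that accepts a watchdog timeout and a dedicated return code for an exhausted time budget; both the parameter and the return code are assumptions for illustration.

```iecst
// Sketch of time-monitored execution; the timeout parameter and the
// return code S_WATCHDOG_TIMEOUT are assumptions for illustration.
VAR
    ipImageIn        : ITcVnImage;    // preprocessed input image
    ipImageBin       : ITcVnImage;    // binarised result (possibly partial)
    nWatchdogTimeout : UDINT := 2000; // assumed time budget in microseconds
    hr               : HRESULT;
END_VAR

hr := F_VN_ThresholdExp(ipImageIn, ipImageBin, 128, 255, TCVN_TT_BINARY,
                        nWatchdogTimeout, hr);
IF hr = S_WATCHDOG_TIMEOUT THEN
    // the function was interrupted; react to the partial result
    // within the same cycle
END_IF
```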
During analysis and visualisation, the results can be presented in the form of images and not only as binary data. Beforehand, results such as position information can be written and drawn into the images. Typical use cases include colour-coded marking of the filtered image contours or the good/bad marking of parts; the user is limited only by the image boundaries. The images can be displayed directly in TwinCAT Engineering in the so-called ADS Image Watch or, for the end user, in TwinCAT HMI.
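Annotating a result image and handing it over for display could look like this. F_VN_DrawContours, F_VN_PutText and F_VN_TransmitImage are modelled on the library's naming convention; their signatures and the types used here are assumptions.

```iecst
// Draw results into the image and publish it for the ADS Image Watch or
// the HMI; function names, signatures and types are assumptions.
VAR
    ipImageRes  : ITcVnImage;             // result image to annotate
    ipContours  : ITcVnContainer;         // contours to mark
    ipImageDisp : ITcVnDisplayableImage;  // image shown in ADS Image Watch
    aColorOk    : TcVnVector4_LREAL := [0, 255, 0, 0];  // green = good part
    hr          : HRESULT;
END_VAR

hr := F_VN_DrawContours(ipContours, ipImageRes, aColorOk, 2, hr);  // colour-coded marking
hr := F_VN_PutText('OK', ipImageRes, 20, 40, TCVN_FT_HERSHEY_SIMPLEX, 1, aColorOk, hr);
hr := F_VN_TransmitImage(ipImageRes, ipImageDisp, hr);             // make displayable
```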
One Universal Tool
TwinCAT Vision combines classic automation technology with image processing, making it especially user-friendly. On the engineering side, camera configuration and geometric camera calibration are carried out directly in TwinCAT Engineering; no other tools are required. Image processing is programmed in the languages PLC programmers already use, so no special programming language has to be learned. It is also possible to respond to the results of image processing directly in the PLC, right away in the next line of code. By triggering the camera from within the real-time environment, image capture and PLC or motion control can be fully synchronised. The image processing algorithms are computed in real time in TwinCAT, ensuring task-synchronous execution and time monitoring through watchdogs. TwinCAT Vision leverages the multi-core capabilities of TwinCAT 3 to automatically execute algorithms on multiple cores whenever they are available.
TwinCAT Vision is aimed at users who have to handle vision tasks within the control system. It is also suitable for users who need a high degree of synchronisation among image processing, PLC and motion control. Since delays in processing are eliminated and the processing of algorithms is time-monitored, the system is able to respond directly and deterministically.
Classic image processing tasks, such as finding, recognising or measuring parts, can be performed easily with TwinCAT Vision. In addition to PLC, motion, robotics and measurement technology, TwinCAT users can now add image processing to the list of integrated functions in the TwinCAT system.
Hall 7 Stand A18