Robust image acquisition
Many system designers assume that nothing will go wrong within the system as a whole, and that problems such as over-triggering, bus contention, or corruption from external electrical noise will never occur. It is true that many applications simply do not need this functionality, but when developing a robust inspection system, or when the design approaches the limits of a particular interface technology, a robust acquisition framework becomes essential. Some manufacturers have evolved robust mechanisms that give vision system designers feedback when system errors occur, allowing them to take corrective action. Without these technologies, the system could crash, or could continue running without anyone being aware that something has failed or that an image has been corrupted. In this section we outline the features required to make a system robust against unusual system events, such as Teledyne DALSA's "Trigger-to-Image-Reliability" framework (T2IR), Allied Vision's "Secure Image Signature" (SIS) and features found in the GigE Vision and USB3 Vision standards.
Trigger and acquisition counters and call-backs
A common and serious design flaw is the assumption that every trigger will cause an image to be captured. It is often assumed that if the camera cycle time is faster than the triggering rate, nothing can go wrong. This does not take into account what happens if triggers begin to arrive faster than the maximum system speed, or if noise in the system generates spurious triggers and thereby unexpected requests for images. In many acquisition systems such triggers are simply ignored, so a trigger produces no inspection and products pass through the system unchecked. In other systems a spurious trigger can reset the acquisition halfway through the capture process, resulting in corrupt images or, in the worst case, a system crash if the DMA and memory management become confused or overflow. Good implementations manage the missed trigger correctly, without corruption, and notify the application of the problem via a call-back or a counter that can be monitored. The application can then take appropriate action. Applications that capture synchronously from multiple cameras require a detailed signal and data check to catch possible faults. Typical events reported to the application include:
| Event | Description |
|---|---|
| Trigger | Notifies the host application that a trigger has occurred. |
| Double trigger | Notifies the application that two triggers occurred in close proximity and one was ignored. |
| Start of frame/field | Notifies the host that an image capture has begun. |
| End of frame/field | Notifies the host that an image acquisition transfer to memory has completed. |
| Start of transfer | Notifies the host that an image transfer to PC memory has begun. |
| End of transfer | Notifies the system that the image transfer to PC memory has completed. |
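These counters and call-backs can be monitored in application software. The following Python sketch illustrates the idea; the `AcquisitionMonitor` class and its event names are hypothetical stand-ins for the call-back interface a vendor SDK would expose:

```python
# Sketch of an event-driven acquisition monitor. The event names mirror the
# table above; a real driver would invoke on_event() from its call-backs.
from collections import Counter

class AcquisitionMonitor:
    """Counts acquisition events and flags anomalies such as double triggers."""

    def __init__(self):
        self.counts = Counter()

    def on_event(self, name):
        self.counts[name] += 1
        if name == "double_trigger":
            # Two triggers arrived in close proximity and one was ignored:
            # warn so the application can mark the product as uninspected.
            print("WARNING: trigger lost - product may pass unchecked")

    def in_sequence(self):
        # Every accepted trigger should eventually produce a completed transfer.
        return (self.counts["trigger"]
                == self.counts["end_of_transfer"] + self.counts["double_trigger"])

monitor = AcquisitionMonitor()
for event in ["trigger", "start_of_frame", "end_of_frame",
              "start_of_transfer", "end_of_transfer"]:
    monitor.on_event(event)
print(monitor.in_sequence())  # → True: one trigger, one completed transfer
```

Comparing the trigger counter against the transfer counter, as `in_sequence()` does, is one simple way for an application to detect that an inspection was silently skipped.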
Time stamping
Knowing when an image is captured can be useful in a number of situations. A time stamp does not necessarily represent an actual time: it can use any input that enables images to be marked and synchronised to an external event. For example, in applications that use conveyor belts or moving products, time stamping an image with the output from an encoder that tracks the movement of the product allows a reject mechanism further down the line to reject the correct item, even if the conveyor changes speed, or starts and stops. Another scenario could involve a multi-camera acquisition system that requires later off-line processing. If the images carry a common acquisition time stamp, the post-processor can be sure of the relationship between them. Synchronising camera time stamps can now be achieved using the PTP protocol found in the GigE Vision standard. More can be found about this in section 6.9.3. Time stamping can also carry useful system information such as trigger and acquisition counters, so that an application can be sure that an image is valid, is in sequence, or has not been missed.
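The multi-camera scenario can be illustrated with a small sketch that pairs images from two cameras by their acquisition time stamps for off-line processing. The function, the tick values and the pairing tolerance are all illustrative assumptions:

```python
# Sketch: pairing images from two cameras by acquisition time stamp.
# Time stamps are device ticks; TOLERANCE is an assumed pairing window.
TOLERANCE = 5  # max tick difference for two frames to count as simultaneous

def pair_by_timestamp(cam_a, cam_b, tolerance=TOLERANCE):
    """Return (a, b) time-stamp pairs differing by at most `tolerance`."""
    pairs = []
    remaining = list(cam_b)
    for ts_a in cam_a:
        # Find the closest unmatched frame from the second camera.
        match = min(remaining, key=lambda ts_b: abs(ts_b - ts_a), default=None)
        if match is not None and abs(match - ts_a) <= tolerance:
            pairs.append((ts_a, match))
            remaining.remove(match)
    return pairs

print(pair_by_timestamp([100, 200, 300], [102, 198, 450]))
# → [(100, 102), (200, 198)]; the 300-tick frame has no partner
```

The unmatched frame at tick 300 is exactly the kind of anomaly a post-processor can only detect when a common time stamp exists.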
Data monitoring and CRC checking
In high-speed acquisition systems, external signal errors or noise can sometimes cause spurious data clocks and data errors. Good interface designs monitor these signals and check that they are as expected. For instance, some interface solutions for LVDS, analogue and CameraLink can detect and report if any line or frame has too few or too many pixels, thus enabling the system to take action.
This monitoring also enables the system to recover immediately if an event occurs. Interfaces such as USB or GigE apply a CRC check. CRC stands for Cyclic Redundancy Check: a checksum is computed for a file or data packet and is then used to detect errors after transmission or storage. The drawback is that the camera needs a processor to build the CRC into the data stream, and the host must then read the whole image in order to prove that the checksum matches. On interfaces such as FireWire or Gigabit Ethernet, the interface chips already include hardware CRC engines that check each packet of data. With clever driver programming, the number of expected packets per image is monitored as well, so that CRC checking correctly identifies and reports corrupt images. Beyond identifying transmission errors, GigE Vision and USB3 Vision enable the host to request that corrupt or lost packets be resent, so that valid data can be guaranteed. Similar technologies are used in the CoaXPress and CameraLink HS interfaces; in fact, CameraLink HS takes this to the next level with an automatic resend correction capability. For reference, the only machine vision acquisition standard not to deploy CRC data integrity checking is CameraLink.
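The per-packet check-and-resend idea can be sketched in a few lines of Python. Here `zlib.crc32` stands in for the hardware CRC engine, and the packet format is invented purely for illustration:

```python
# Sketch of per-packet CRC checking with resend requests, as performed in
# hardware by GigE Vision / USB3 Vision interface chips.
import zlib

def make_packet(seq, payload: bytes):
    # The sender attaches a CRC computed over the payload.
    return {"seq": seq, "payload": payload, "crc": zlib.crc32(payload)}

def receive_image(packets, expected_count):
    """Verify every packet's CRC; return good payloads and seq numbers to resend."""
    good = {}
    for p in packets:
        if zlib.crc32(p["payload"]) == p["crc"]:
            good[p["seq"]] = p["payload"]
    # Packets that are missing or failed the CRC are flagged for a resend request.
    resend = [s for s in range(expected_count) if s not in good]
    return good, resend

packets = [make_packet(0, b"row0"), make_packet(1, b"row1"), make_packet(2, b"row2")]
packets[1]["payload"] = b"rowX"  # simulate corruption in transit
good, resend = receive_image(packets, expected_count=3)
print(resend)  # → [1]: the corrupt packet is identified for a resend
```

Monitoring `expected_count` against the packets actually received is what lets the driver tell a truncated image apart from a complete one, rather than silently delivering partial data.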
Control and synchronisation
In recent years the frame grabber or interface has come to do more than just image acquisition, and this trend is finding its way into cameras with digital interfaces. The timing between trigger, acquisition, exposure control, illumination strobe and reject control was previously the responsibility of the system integrator and usually required custom hardware, because PC timing latency was not good enough; where PC timing was used, these latencies often led to inconsistent system behaviour. Now many of these functions are found inside the acquisition interface or the camera, making system configuration much easier and more robust. Using additional inputs such as reject sensors, encoders or internal counters, these advanced acquisition solutions can manage the timing between product detection, system control and the subsequent delayed reject. An example of this control is shown in the following illustration.
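The delayed reject management just described can be sketched as an encoder-count queue: each failed inspection schedules a gate actuation a fixed number of encoder counts downstream, so the reject fires on position rather than on unreliable PC timing. The offset value and function names are assumptions for illustration:

```python
# Sketch: encoder-based delayed reject, as handled inside an advanced
# acquisition interface. REJECT_OFFSET is a hypothetical value for the
# distance (in encoder counts) between the camera and the reject gate.
from collections import deque

REJECT_OFFSET = 500

pending_rejects = deque()  # encoder counts at which the gate should fire

def on_inspection(encoder_count, passed):
    """Called per inspected image with its encoder-count 'time stamp'."""
    if not passed:
        pending_rejects.append(encoder_count + REJECT_OFFSET)

def on_encoder_tick(encoder_count):
    """Called as the conveyor moves; fires the gate at the right position."""
    fired = []
    while pending_rejects and pending_rejects[0] <= encoder_count:
        fired.append(pending_rejects.popleft())
    return fired  # encoder positions at which the gate actuates

on_inspection(encoder_count=1000, passed=False)
on_inspection(encoder_count=1200, passed=True)
print(on_encoder_tick(1600))  # → [1500]: failed item rejected 500 counts later
```

Because the queue is keyed to encoder counts rather than wall-clock time, the correct item is rejected even if the conveyor slows, stops or restarts between camera and gate.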
Before the introduction of this technology, a vision system designer spent a lot of time building circuitry that synchronised trigger, reject and lighting strobe control. One recommendation when selecting an acquisition interface would be to think of the system as a whole. What appears to be the cheapest solution at the outset could end up costing more in the long run due to the additional time, effort and hardware needed to develop an acquisition control and reject system that is suitably robust.
In recent years, Gigabit Ethernet's ability to place cameras up to about 100 metres away from the processing PC has demanded even greater flexibility. It allows host PCs to be moved off the factory floor and into clean, cool server room environments. As a result, the traditional integration of synchronisation and reject control functionality on the PC's acquisition board is no longer viable, because the PC may now be situated some distance away from the physical process.
Clearly an alternative solution is required in this situation. Ethernet-based timing controllers now allow multiple cameras, triggers and reject gates to be synchronised over a network, including cameras used on encoder-driven conveyors. This gives rise to a new type of vision system: the network-centric vision system. Temporal synchronisation between cameras has also been incorporated into the GigE Vision 2.0 standard through the introduction of PTP (Precision Time Protocol) to allow for such possibilities.
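At the heart of PTP is a simple offset calculation based on the four time stamps of one Sync/Delay_Req exchange between master and slave clocks, assuming a symmetric network path. A minimal sketch:

```python
# The clock-offset calculation used by PTP (IEEE 1588), which GigE Vision 2.0
# adopts to synchronise camera time stamps across a network.

def ptp_offset(t1, t2, t3, t4):
    """Offset of the slave clock relative to the master clock.

    t1: master sends Sync         t2: slave receives Sync
    t3: slave sends Delay_Req     t4: master receives Delay_Req
    Assumes the network delay is the same in both directions.
    """
    return ((t2 - t1) - (t4 - t3)) / 2

# Example: slave clock running 100 ticks ahead, 10-tick path delay each way.
# Sync sent at master time 1000 arrives at slave time 1110; Delay_Req sent at
# slave time 1200 arrives at master time 1110.
print(ptp_offset(t1=1000, t2=1110, t3=1200, t4=1110))  # → 100.0
```

Once each camera has corrected its clock by this offset, images from different cameras carry time stamps in a common timebase, which is what makes the network-centric synchronisation described above possible.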