Cameras are available that use a prism to split the incoming light by wavelength into two beams, each captured by its own sensor. This method enables simultaneous capture and subsequent transmission over two channels.
A sensor using a Bayer Mosaic filter that captures only visible light can be used in conjunction with a monochrome sensor that integrates the near infrared (NIR) light. Using the output of the Bayer sensor, it is possible to analyse an object based on visible light. The NIR channel enables detection of features that appear only in the longer-wave near infrared range and are invisible to the human eye. Because longer-wave light penetrates deeper into the material of the object, defects lying beneath the surface can be detected, and the contrast necessary for subsequent processing in software is provided.
Either a monochrome sensor or a colour sensor using a Bayer Mosaic filter captures one view of a scene, while a second sensor captures a different wavelength band. This could be two visible-light sensors, or one visible-light sensor and one IR sensor. The light from the scene is separated into the two images by a special dichroic prism and is then captured by the corresponding sensors.
Both images are registered and co-site aligned to allow accurate feature comparison. Thus real and independent multispectral information is captured over the two channels. With this approach there is no need for a complex and mechanically demanding installation with two separate cameras and an externally aligned beam splitter.
In addition, 2-chip versions with two monochrome sensors are available. If these two sensors are operated with different exposure times, HDR images can be created using the software's image fusion functionality.
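The fusion of two differently exposed frames can be sketched as follows. This is a minimal illustration, not the camera software's actual algorithm: each frame is converted to a relative radiance estimate by dividing by its exposure time, and the long exposure (better signal-to-noise ratio) is preferred wherever it is not saturated. The function name, the saturation threshold, and the exposure times are assumptions for illustration.

```python
import numpy as np

def fuse_hdr(img_short, img_long, t_short, t_long, sat=0.95):
    """Fuse two monochrome frames (pixel values scaled to [0, 1]) into
    one HDR radiance estimate.

    Dividing each frame by its exposure time puts both on a common
    radiance scale; the long exposure is used where it is below the
    saturation threshold, the short exposure elsewhere.
    """
    rad_short = img_short.astype(np.float64) / t_short
    rad_long = img_long.astype(np.float64) / t_long
    return np.where(img_long < sat, rad_long, rad_short)
```

In practice the exposure ratio would be calibrated rather than taken from nominal shutter settings, and a smooth blending weight would replace the hard saturation cut-off to avoid visible seams.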
For applications where space is very limited, such as medical endoscopy or space-constrained industrial quality control tasks, remote head cameras can provide a solution. These cameras use very small sensor heads which can be fed into the smallest openings together with the signal cable. High-frequency raw sensor data is transferred over a long distance to a control unit using special data cables.
The control unit provides all standard camera functions such as white balancing, shutter timing and gain. Some models even allow the user to exchange camera heads or cable lengths without readjustment, saving both effort and cost. Some ultra-small models contain all the electronics in the micro head, eliminating the need for a separate control unit.
Low light cameras are often used in a range of scientific and medical applications, in addition to uses within the security and defence markets. Standard sensors offer good low light performance; however, some applications require an even higher level of performance.
In the past, cameras with image intensifiers were used for such applications, whereas nowadays Electron Multiplied CCD (EMCCD) cameras are installed. These use an extra level of amplification within the sensor in combination with Peltier (thermoelectric) cooling, and provide excellent quality images at minimal light levels.
Cameras with additional Peltier cooling reduce the thermally generated noise significantly, improving signal-to-noise ratio and sensitivity. Cooled cameras are used in applications such as fluorescence microscopy, astronomy and gel electrophoresis.
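The benefit of cooling can be estimated with a common rule of thumb (an assumption here, not a figure from the text): sensor dark current roughly doubles for every 6–8 °C of temperature increase, so cooling reduces it exponentially. The function below sketches this relationship; the 7 °C doubling temperature is an illustrative default.

```python
def dark_current_ratio(delta_t_c, doubling_temp_c=7.0):
    """Estimate the remaining fraction of dark current after cooling a
    sensor by `delta_t_c` degrees Celsius.

    Uses the rule of thumb that dark current doubles roughly every
    `doubling_temp_c` degrees, i.e. cooling by 35 degrees with a 7-degree
    doubling temperature leaves 2**-5 = 1/32 of the dark current.
    """
    return 2.0 ** (-delta_t_c / doubling_temp_c)
```

A Peltier stage cooling the sensor 35 °C below ambient would, under this model, cut thermally generated signal to about 3% of its uncooled value.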
3D cameras allow three-dimensional information to be captured from target objects. The most common technique uses laser profiling in combination with onboard preprocessing. 3D laser profiling is based on the triangulation principle: a camera monitors the laser line projected onto an object and calculates the height information from the deformation of the line profile. As the target passes under the camera, multiple profiles are used to construct a three-dimensional image. The 3D information can be calculated onboard the camera or on the host PC system.
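The core of the onboard preprocessing is locating the laser line in each sensor column. A minimal sketch of one common approach (an intensity-weighted centroid, assumed here rather than any specific camera's algorithm) is shown below; the deviation of each column's line position from a reference row encodes the height at that point.

```python
import numpy as np

def line_positions(frame):
    """Find the laser-line row in each sensor column, with sub-pixel
    precision, via an intensity-weighted centroid.

    `frame` is a 2-D monochrome image containing one bright laser line.
    Returns a 1-D array with one row coordinate per column; columns with
    no signal return 0.
    """
    rows = np.arange(frame.shape[0], dtype=np.float64)[:, None]
    weights = frame.astype(np.float64)
    total = weights.sum(axis=0)
    return (rows * weights).sum(axis=0) / np.where(total > 0, total, 1.0)
```

Real implementations typically threshold the frame first and restrict the centroid to a window around the brightest pixel, so that ambient light and speckle do not pull the estimate off the line.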
In a typical set-up, the laser is situated directly above the surface being profiled, and the camera is angled at approximately 30° with respect to the laser. The precise geometrical relationship between the laser and camera can be altered to provide better height resolution, for instance by increasing the angle between the camera and the laser. There is a trade-off to consider, however: a smaller angle allows more light to reach the camera, providing more stable results.
Height resolution can be calculated for any given set-up as is shown in the following examples:
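As a worked illustration of such a calculation: with a vertical laser and a camera tilted at angle θ from the laser axis, a height change dz shifts the line on the sensor by roughly m·dz·sin(θ), where m is the optical magnification. The smallest resolvable height step is therefore the height change that moves the line by one pixel. The numbers below (5 µm pixels, 0.1× magnification) are assumed for illustration, not taken from the text.

```python
import math

def height_resolution_mm(pixel_size_mm, magnification, theta_deg):
    """Smallest resolvable height step for a triangulation set-up:
    the height change that shifts the laser line by one pixel,
    dz = pixel_size / (m * sin(theta))."""
    return pixel_size_mm / (magnification * math.sin(math.radians(theta_deg)))

# 5 micron pixels, 0.1x magnification (illustrative values):
print(height_resolution_mm(0.005, 0.1, 30))  # ~0.100 mm per pixel
print(height_resolution_mm(0.005, 0.1, 45))  # ~0.071 mm per pixel
```

Consistent with the geometry described above, increasing the angle from 30° to 45° improves the height resolution, at the cost of less light reaching the camera.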
This image shows the front of a mobile phone case captured using a laser profiling set-up. A significant amount of detailed height information is present in the image, particularly remarkable considering the shallow nature of the surface features. This image has been pseudo-coloured to assist in visualising the object's height information.
Software for 3D image analysis is often able to output images using a relative coordinate system, removing the need for complicated six-axis matching. The method involves converting the image into so-called 'point clouds' that can be compared directly, which significantly simplifies the subsequent analysis.
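The conversion step can be sketched as follows (a minimal illustration, assuming a height-map image with known sampling steps along both axes, not any particular software package's API): pixel indices become x and y coordinates, and pixel values become z, yielding an N×3 point cloud.

```python
import numpy as np

def height_map_to_point_cloud(height_map, x_step, y_step):
    """Convert a 2-D height-map image into an N x 3 point cloud.

    Column and row indices are scaled by the known sampling steps to
    give x and y in real-world units; the pixel value supplies z. The
    resulting points live in a relative coordinate system and can be
    compared directly between scans.
    """
    rows, cols = np.indices(height_map.shape)
    return np.column_stack((cols.ravel() * x_step,
                            rows.ravel() * y_step,
                            height_map.ravel()))
```

From here, two scans of the same part can be compared point-for-point, or fed to a registration step if their relative pose is unknown.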