Machine vision

This article reviews some theoretical foundations of machine vision systems. If you would like to learn about practical applications, please follow the link.

Today, machine vision systems are among the most advanced and fastest-growing areas of manufacturing automation. With such electronic aids, the product quality control process can be handed over to computing machines. Unsurprisingly, this delegation leads to reduced costs and improved product quality.

Machine vision in Russia is a relatively young field (about ten years old), which is why many of its methods and applications are still at the fundamental research stage. This does not, however, prevent the advantages of machine vision from being used today. Such systems are seldom built to be universal: current developments solve specific problems and usually operate alongside other line equipment, such as transport and machining devices.

A typical machine vision system consists of several devices. The actual configuration (operating speed, software methods) is determined solely by the requirements of the task at hand.


Figure 1: General configuration of a machine vision system

A standard machine vision system (Fig. 1) usually consists of an inspected object (product), a transport system, a positioning arrangement, one or several video cameras for shooting the object, a lighting system, and a computing unit.

Depending on the technical specification, the inspected object may be the product itself, one of its parts, or the presence of a marking on it (for example, checking for an excise label on bottles). The inspection target can be the object's geometric parameters or its qualitative characteristics (dust, dirt, scratches, cracks, etc.).

The transport system moves objects through the shooting points and other line components. The speed of movement is chosen based on the required line throughput and the technical capabilities of the optical and electronic parts of the machine vision system. If the transport speed is too high, the computing unit will not have time to process the information received from the cameras, or the cameras will not transfer it in full. Belt, carousel, and other conveyor types are used.
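The trade-off between conveyor speed and camera capability can be checked with simple arithmetic. Here is a minimal sketch; the object spacing and line speed are illustrative assumptions, not values from the text:

```python
# Hypothetical sanity check of conveyor speed against camera speed.
# All figures below are illustrative assumptions.

def required_frame_rate(line_speed_m_s: float, object_pitch_m: float) -> float:
    """Shots per second needed so that every object, spaced
    object_pitch_m apart on the conveyor, passes a shooting point."""
    return line_speed_m_s / object_pitch_m

# Bottles spaced 0.25 m apart on a conveyor running at 1.5 m/s:
fps_needed = required_frame_rate(1.5, 0.25)
print(fps_needed)  # 6.0 shots per second, well within typical camera limits
```

If the result exceeds the camera's maximum shooting speed, either the conveyor must be slowed down or a faster camera chosen.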

The positioning arrangement detects the object in the camera's shooting zone. Some tasks impose strict accuracy requirements on it, since displacement of the object within a shot is undesirable and can adversely affect the results. Conventionally it is an optical sensor consisting of a transmitter (a semiconductor laser) and a receiver (a photocell) in one unit, and a mirror reflecting the signal in the other. Separate sensors are often required for each shooting location and rejection point. The sensor type may vary with the object material: some sensors work incorrectly with transparent objects (glass), and sensors differ in power and, consequently, in the allowed distance between the transmitter/receiver and the reflector.

The video cameras are the most important part of a machine vision system. When the object passes through the shooting zone, the position sensor triggers a shot, which is taken over the exposure time; this time varies and depends on the object's lighting, its speed, and the camera's capabilities. The photo is transferred to the computing unit for processing and output of the result. The following camera parameters are particularly important:

  1. Image resolution (in megapixels). Higher resolution improves the accuracy of the results (fine details become visible in the photo). However, it slows both the transfer from camera to computer and the image processing (the more information the computing unit must process at once, the longer the process takes). Higher-resolution cameras also require more light or a longer exposure time. This matters because if the object moves too quickly and the exposure time is long, the image will be blurred.
  2. Maximum shooting speed (in shots per second). If line throughput must be increased, the object's speed on the conveyor also increases, and such systems need cameras with fast response times (150-600 shots per second). When high-resolution shots must be transferred in a short time, the bandwidth of the channel to the computing unit may be insufficient. In such cases an intermediate device (a frame grabber) that buffers shots internally may be necessary; it effectively increases the channel capacity.
  3. Camera lens. The choice of lens depends directly on the object's dimensions and the size of the inspected area. Most modern cameras accept interchangeable lenses via a threaded mount (C-mount). A lens is characterized by its focal length, which determines the angle of view, and by its minimum f-number, which sets the balance between depth of field and light gathering.
  4. Data transfer interface. For most cameras, Gigabit Ethernet is sufficient: it is widely used and carries up to 1000 Mbit of information per second (about 120 Mbyte, or 120 megapixels at 8-bit depth). For example, for the 8-bit monochrome camera Dalsa GENIE HM640 with a resolution of 640×480 (0.3 megapixels), such a channel is enough for 400 shots per second under the most favourable conditions; the manufacturer limits the camera's shooting speed to 300 shots. If an analogous camera has, say, Full HD resolution (1920×1080, corresponding to 2 megapixels), then Gigabit Ethernet allows at most 60 shots per second. Transferring 400 shots per second from such a camera requires a channel with a capacity of 6400 Mbit per second (800 Mbyte per second). In this case a frame grabber is necessary: an ordinary board inserted into a fast computer slot (for example, PCI Express). In addition, high speeds impose limitations on the length of the connection cables.
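The interface arithmetic in item 4 can be reproduced with a small helper. This is a sketch only: the 120 Mbyte/s payload figure is the approximation used above, and the exact results differ slightly from the rounded numbers quoted in the text.

```python
# Frames per second that a channel of a given payload bandwidth can carry,
# following the Gigabit Ethernet arithmetic in the text.

def max_fps(channel_mbyte_per_s: float, width: int, height: int,
            bits_per_pixel: int = 8) -> float:
    """Upper bound on shots per second for the given resolution."""
    frame_bytes = width * height * bits_per_pixel / 8
    return channel_mbyte_per_s * 1_000_000 / frame_bytes

# Gigabit Ethernet payload, approximated as 120 Mbyte/s:
print(round(max_fps(120, 640, 480)))    # 391 (the text rounds to 400)
print(round(max_fps(120, 1920, 1080)))  # 58  (the text rounds to 60)
```

The same helper shows why 400 shots per second at Full HD needs roughly 800 Mbyte/s and, consequently, a frame grabber on a fast bus.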

The lighting system. A particular task usually requires a specific way of illuminating the area of interest. Typically, high-power LEDs are used in pulsed mode, with the ability to control light intensity. The direction and density of the light flux (focused or diffuse) are very important; where necessary, lights are combined with various diffusers (glass, white, matte). For example, to inspect glass for inhomogeneity (ream), a line light with alternating coloured zones may be placed behind the inspected glass object: in places where ream is present, the colour zones change direction.

The computing unit. The computing unit is a computer or a controller (DSP) capable of performing mathematical calculations on the photos received from the cameras. The photos arrive via communication channels at the software, which processes and analyzes the received data. The program then returns the result of its work: either a positive or negative answer ("yes/no") to some question (for example: is there an excise label on the bottle?), or numerical values (object dimensions, sizes of chips, scratches, etc.). The computing unit is characterized by its processing speed. The time the system may spend on calculations varies only within a narrow range, so under high load (complex calculations, many cameras, high-resolution shots) a faster computing unit may be required to achieve the necessary throughput. Components should therefore be chosen with this in mind.

Let us walk through the analysis of a single object.
The transport system delivers an object to the machine's input. When it reaches the position monitored by the position sensor, signals are sent to the corresponding cameras and the lighting system, and the camera shoots the object. This is the so-called exposure: it lasts a certain period (the exposure time) during which the camera's sensor accumulates charge (light). This duration determines how bright the object will look. After the exposure is complete, the captured data are transferred to the computing hardware via the communication channels.
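The blur mentioned earlier, caused by the object moving while the shutter is open, can be quantified directly. A minimal sketch; the speed and exposure values are illustrative assumptions:

```python
# Motion blur estimate: how far the object travels during the exposure.
# The figures below are hypothetical, for illustration only.

def blur_mm(object_speed_m_s: float, exposure_time_us: float) -> float:
    """Distance (mm) the object moves while the shutter is open."""
    return object_speed_m_s * exposure_time_us / 1000.0

# An object moving at 2 m/s shot with a 100 microsecond exposure:
print(blur_mm(2.0, 100.0))  # 0.2 mm of blur
```

If the computed blur exceeds the size of the smallest defect to be detected, either the exposure must be shortened (with more light) or the object slowed down.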
The next step is processing the shot, which consists of several intermediate stages. Data filtration can be described as image simplification: low-level processing procedures are applied to the shot to remove unnecessary information (for example, conversion to monochrome, noise removal, edge detection). Defects, dimensions, and other characteristics are then extracted. These methods are often highly resource-intensive, and processing them takes considerable time (milliseconds). Finally, the computing unit sends the measurement results to the actuators, which in turn perform rejection, notification, and other operations.
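The filtration stage described above can be illustrated with a toy pipeline. This is a minimal pure-Python sketch, not production code: the synthetic 4×4 frame, the fixed threshold level, and the forward-difference edge detector are all assumptions chosen for brevity; a real system would use an optimized image-processing library.

```python
# A minimal sketch of "image simplification": monochromization,
# crude binarization, and a simple edge detector.

def to_gray(rgb_image):
    """Monochromization: average the colour channels of each pixel."""
    return [[sum(px) / 3 for px in row] for row in rgb_image]

def threshold(gray, level=128):
    """Binarize around a fixed level, discarding fine intensity noise."""
    return [[1 if v >= level else 0 for v in row] for row in gray]

def edges(binary):
    """Mark pixels whose right or lower neighbour differs
    (a simple forward-difference edge detector)."""
    h, w = len(binary), len(binary[0])
    return [[1 if (x + 1 < w and binary[y][x] != binary[y][x + 1]) or
                  (y + 1 < h and binary[y][x] != binary[y + 1][x]) else 0
             for x in range(w)] for y in range(h)]

# A 4x4 test frame: a bright square on a dark background.
frame = [[(200, 200, 200) if 1 <= x <= 2 and 1 <= y <= 2 else (30, 30, 30)
          for x in range(4)] for y in range(4)]
result = edges(threshold(to_gray(frame)))
for row in result:
    print(row)  # prints the edge map row by row
```

The output marks the boundary between the bright square and the background; a real pipeline would then measure the detected regions and pass the results to the actuators.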