Detailed Description
Clear imaging plane P
For an image capturing device, such as a camera, a video camera, or a scanner, optical principles dictate that the distance and range over which the device can image clearly are fixed as long as the physical conditions of its optics and the model and position of its electronic components remain unchanged.
As shown in figs. 1A to 1C, the area that the camera 101 can photograph clearly is defined as the clear imaging plane P. In general, the shape of the clear imaging plane P depends on the hardware of the camera 101, in particular the shape of the image sensor; it is drawn as a rectangle in figs. 1A to 1C for illustrative purposes only, and in practice the clear imaging plane P may be any shape. The size of the clear imaging plane P, and its distance and angle relative to the camera 101, are determined by the relative arrangement of the lens and the image sensor.
Image acquisition system
The image acquisition system comprises at least an image acquisition device and a processor; in the following, a camera is taken as an example of the image acquisition device.
The clear image plane P of the camera is parallel to the surface of the target object
In the case of macro imaging, the target object 104 often needs to be photographed by the camera at a short distance.
As shown in fig. 1A, reference numeral 102 denotes a carrying device on which the target object 104 is placed, and 103 denotes a positioning block that fixes the position of the target object 104. If the part of the target object 104 to be photographed is exactly planar and lies exactly in the clear imaging plane P, a complete and clear image of that part can be obtained.
However, if the position of the target object 104 changes as shown in fig. 1B (or the object has no planar structure to begin with), then when the camera 101 photographs the object 104, the small depth of field of the camera in close-range imaging prevents the parts of the object 104 outside the clear imaging plane P from being imaged clearly.
In this case, a complete and clear image of the object 104 cannot be obtained even by adjusting the focal length of the camera 101 or the relative distance between the camera 101 and the object 104.
The clear image plane P of the camera is not parallel to the surface of the target object
The invention provides an image acquisition system comprising: a camera 101, a driving device 107, and a processor (not shown in fig. 1A and 1B).
The camera 101 is used to capture an image of a subject. The camera 101 may be a zoom camera or a fixed focus camera.
The driving device 107 can drive the camera 101, the target object 104, or both at the same time. In this embodiment, the driving device 107 is a motor. In a specific embodiment, the driving device includes a carrying device 102 for contacting, holding, or transporting the target object 104, and the carrying device 102 is driven by the driving device to move the object. This has the advantage that the camera 101 does not have to be adjusted frequently.
As shown in figs. 1B and 1C, when the clear imaging plane P of the camera is not parallel to the surface of the target object, the clear imaging plane P cannot completely coincide with the surface of the target object no matter how the focal length or object distance is adjusted. Geometrically, the clear imaging plane P of the camera and the surface of the target object then share only a single line of intersection. In this case, only part of the picture taken by the camera is clear, namely the part at the intersection of the clear imaging plane P and the surface of the target object.
Take as an example acquiring an image of the surface of a cube whose surface is not parallel to the clear imaging plane P of the camera; the clear imaging plane P then intersects the surface of the target object.
Identifying sharp portions in an image by peaks of image parameter gradients
As an alternative embodiment, the steps of acquiring the cube image and identifying its clear portion are as follows:
Acquire an original image of the cube surface. In this image, the part at the intersection of the clear imaging plane P of the camera and the surface of the object is clear, while the other parts are blurred. The image acquired in this step may be a grayscale or color image.
Denoise the original image of the cube surface to obtain a denoised image. Denoising reduces the influence of noise generated by the hardware or the environment on subsequent image processing.
Calculate the absolute value of the parameter gradient between adjacent pixels or pixel groups in the denoised image. The parameter may be a gray value, a luminance value, or another parameter such as contrast or saturation.
Determine the gradient peak point x_max from the absolute gradient values calculated in the previous step, for example by means of a Gaussian function. Within the same picture, a "blurred" pixel can be regarded as taking the average of the pixels around it, which reduces its contrast with the surrounding pixels and produces a blurred visual effect; a "clear" pixel contrasts more vividly with its surroundings, reflects more detail, and looks visually sharper. Excluding the influence of noise, the larger the absolute value of the gradient within a picture, the sharper the image at that location. Alternatively, the gradient peak point may be determined by calculation methods other than a Gaussian function. The location of the peak point x_max is the clear portion of the picture.
In the process of calculating the gradient values, the values may be arranged in order according to the positions of the pixels/pixel groups in the picture, so that each gradient value corresponds to a specific position. Then, when the peak point x_max is obtained, the position of the peak point in the picture, that is, the specific position of the clear portion in the image, is obtained at the same time.
This peak determination may be implemented using a gradient processing unit in the processor.
Judging the clear part by the peak of an image parameter works particularly well when photographing objects with rich surface texture.
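The gradient-peak judgment described above can be sketched in Python. This is a minimal illustration on a single grayscale scan line; the function names are invented for illustration, and a simple Gaussian-smoothed argmax stands in for the full peak-fitting procedure:

```python
import math

def gaussian_kernel(radius=2, sigma=1.0):
    """Discrete Gaussian weights, normalized to sum to 1."""
    w = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def sharpest_position(row):
    """Return the index of the gradient peak in one grayscale scan line.

    The absolute gradient between adjacent pixels is smoothed with a
    Gaussian kernel (suppressing isolated noise spikes), and the peak
    location is taken as the sharpest part of the line.
    """
    grad = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
    k = gaussian_kernel()
    r = len(k) // 2
    smooth = [
        sum(k[j + r] * grad[min(max(i + j, 0), len(grad) - 1)]
            for j in range(-r, r + 1))
        for i in range(len(grad))
    ]
    return max(range(len(smooth)), key=smooth.__getitem__)
```

Applied row by row (or to pixel groups), the returned index gives the position of the clear portion within the picture, as described above.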
Identifying sharp portions in an image by computing intersections
Establishing a coordinate system
As an alternative embodiment, the line segment representing the clear imaging plane P of the camera and the surface contour of the target object are represented in the same coordinate system. The coordinate system may be two-dimensional, with its plane perpendicular to the clear imaging plane P of the camera.
The position of the camera is fixed, and the parameters of its lens and image sensor are unchanged, so the position in the coordinate system of the line segment p corresponding to the clear imaging plane P of the camera is also fixed. Once the x axis, the y axis, and the coordinate origin are selected, every point on the line segment p has a fixed coordinate value, and the line segment p can be represented by a linear function over a certain interval. The intersection of the line segment p with the surface contour of the target object is located at the position corresponding to the clear portion of the image.
Obtaining coordinate information of surface contour of target object
As shown in fig. 2, coordinate information of the surface profile of the object is acquired using a scanning camera 201. The scanning camera is positioned above, typically directly above, the target object. The motion mechanism causes relative motion between the scanning camera and the target object to enable the scanning camera to traverse the object surface. In this schematic view, the scanning camera 201 is mounted on a rail 202 by a motion mechanism. The shooting direction of the scanning camera is perpendicular to the plane of the coordinate system so as to obtain the projection profile of the target object in the coordinate system. During the relative movement of the two, the distance between the scanning camera 201 and the plane of the coordinate system is kept constant. The scanning camera 201 may employ a line camera or an area camera. The direction of relative motion between the camera and the target object is defined as the x-direction, perpendicular to which is the y-direction.
Linear array camera for acquiring object surface contour coordinate information
A linear array camera is a camera using a linear array image sensor. Compared with an area-array camera, it offers higher image resolution, but each exposure captures only a single line of the image, so subsequent processing is required to obtain a complete image of the object to be photographed.
Acquiring coordinate information of the object's surface profile with a linear array camera comprises the following steps:
Scanning by the linear array camera: the camera has a fixed range of motion, scans continuously at fixed interval distances, and outputs images, each of which is a single line.
Threshold processing: subtract the unobstructed background image from the image obtained by the linear array camera, and compare the difference (or its absolute value) with a preset threshold. When the difference lies within a certain range, the line image is judged not to contain the target object, and its gray value is set to a fixed value. When the difference exceeds that range, the line image is regarded as an image of the target object; its gray value is set to another fixed value, and the two end points of the target object in the line image are recorded. This operation yields a binary image of the area covered by the object, which is more convenient to process. Since only the two end points of each line image are recorded for forming the object contour, other useless information is discarded and the storage burden on the system is reduced.
Obtaining the x-direction coordinate: the motion mechanism produces relative motion between the linear array camera and the target object, and a displacement acquisition device such as an encoder acquires the displacement of the motion mechanism, and hence of the target object, at any moment; the encoder thus outputs the displacement corresponding to each line image. Because the shooting direction of the linear array camera is perpendicular to the plane of the coordinate system, the encoder position corresponding to a line image is that line image's x coordinate.
Obtaining the y-direction coordinates: the length of the scan line of the linear array camera is fixed. Setting the y coordinate of one end of the scan line to 0, the y coordinates of the two end points in a line image are obtained by calculating their distances from that end of the scan line.
Smoothing: the successive frames obtained by the linear array camera are spliced together, and the spliced contour information is smoothed by filtering to obtain the curve of the entire object edge. Smoothing filtering eliminates errors introduced by single-frame image processing, so that the extracted contour reflects the real object more accurately and facilitates subsequent processing.
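The thresholding, coordinate-assignment, and smoothing steps above can be sketched as follows. This is a hedged Python illustration: the function names, the per-pixel y spacing, and the simple moving-average filter are assumptions for illustration, not the system's actual implementation.

```python
def process_line_image(line, background, threshold, encoder_x, y_per_pixel=1.0):
    """Threshold one line image against the background; if the object is
    present, return its x coordinate (from the encoder) and the y coordinates
    of its two end points, otherwise return None."""
    binary = [1 if abs(p - b) > threshold else 0 for p, b in zip(line, background)]
    if 1 not in binary:
        return None                                   # line contains no object
    first = binary.index(1)                           # end point nearer y = 0
    last = len(binary) - 1 - binary[::-1].index(1)    # the other end point
    return (encoder_x, first * y_per_pixel, last * y_per_pixel)

def smooth_contour(values, window=3):
    """Moving-average smoothing of the stitched contour values."""
    half = window // 2
    return [
        sum(values[max(0, i - half): i + half + 1])
        / len(values[max(0, i - half): i + half + 1])
        for i in range(len(values))
    ]
```

Each returned triple contributes one x position and two y end points to the stitched contour, which is then smoothed.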
Area-array camera for obtaining object surface contour coordinate information
The area-array camera adopts an area-array image sensor. The imaging area is a plane, and a complete image of an object to be shot can be obtained through one-time shooting.
Acquiring coordinate information of the object's surface contour with an area-array camera comprises the following steps:
Shooting by the area-array camera: the target object is placed within the shooting range of the area-array camera so that a complete image of the target object is obtained in a single shot. If the image acquired by the area-array camera is not complete, a complete image of the target object can be assembled by following the working steps of the linear array camera.
Threshold processing: the thresholding steps are similar to those for the linear array camera.
Obtaining the x- and y-direction coordinates: the position of the area-array camera is fixed, and so is its shooting range. Defining a boundary point of the area-array camera's imaging range as the origin (0, 0), the x and y coordinates of each contour point represented in the binary image are obtained by calculating its distance from that boundary point.
Smoothing: the smoothing process is similar to that for the linear array camera described above.
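The area-array coordinate extraction can be illustrated as follows. This is a minimal Python sketch; the function name, the top-left origin convention, and the 4-neighbor boundary test are illustrative assumptions.

```python
def contour_points(image, background, threshold, pixel_size=1.0):
    """Binarize one area-array frame against the background image and return
    the (x, y) coordinates of the object's boundary pixels, taking one corner
    of the imaging range as the origin (0, 0)."""
    h, w = len(image), len(image[0])
    binary = [[1 if abs(image[r][c] - background[r][c]) > threshold else 0
               for c in range(w)] for r in range(h)]
    pts = []
    for r in range(h):
        for c in range(w):
            # a contour point is an object pixel on the frame border or
            # adjacent to at least one non-object pixel
            if binary[r][c] and (
                r in (0, h - 1) or c in (0, w - 1)
                or 0 in (binary[r - 1][c], binary[r + 1][c],
                         binary[r][c - 1], binary[r][c + 1])
            ):
                pts.append((c * pixel_size, r * pixel_size))
    return pts
```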
Calculating intersections to obtain sharp locations in the image
As an alternative embodiment, the steps of acquiring the cube image and identifying its clear portion are as follows:
an original image of the cube surface is acquired. The acquired image can be a gray scale image or a color image.
The scanning camera scans the target object to acquire the coordinate information of its edge contour. The coordinate information of the line segment p, the projection of the clear imaging plane P of the camera in the coordinate system, is calibrated in advance.
The intersection points between the line segment p and the edge contour of the target object are then calculated. The image at each intersection is the clear part of the original image, and the processor thereby knows the relative position of the clearly imaged portion within the target object.
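The intersection calculation can be sketched as follows. This is a minimal Python illustration of intersecting the calibrated segment p with a polyline contour; it ignores degenerate cases where the segment merely touches a contour vertex.

```python
def segment_contour_intersections(p1, p2, contour):
    """Intersect the line segment p (p1 -> p2, the projection of the clear
    imaging plane) with a polyline contour; return the intersection points."""
    def cross(o, a, b):
        # signed area of triangle (o, a, b): sign tells which side b is on
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = []
    for a, b in zip(contour, contour[1:]):
        d1, d2 = cross(p1, p2, a), cross(p1, p2, b)   # contour ends vs. p
        d3, d4 = cross(a, b, p1), cross(a, b, p2)     # p ends vs. contour edge
        if (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0):
            t = d3 / (d3 - d4)                        # parameter along p1 -> p2
            pts.append((p1[0] + t * (p2[0] - p1[0]),
                        p1[1] + t * (p2[1] - p1[1])))
    return pts
```

Each returned point marks a position on the segment p, and hence a position in the image that is clearly focused.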
Laser marking of sharp imaging planes to identify sharp portions in images
When the clear imaging plane of the camera is not parallel to the surface of the target object, and a clear part still exists in the image acquired by the camera, it can be considered, approximately and geometrically, that the clear imaging plane P of the camera intersects the surface of the target object in a line.
Laser light is used to mark the position of the clear imaging plane: at any moment, the position on the target object surface illuminated by the laser is the intersection line of the clear imaging plane P of the camera and the object surface. When the laser appears in the image captured by the camera, the illuminated position in the image is the clear portion of the image.
As shown in fig. 1B, as one embodiment, the laser emitting device 108 is disposed to the side of the clear imaging plane P of the camera and comprises a plurality of laser emitters; the laser beams they emit are parallel to one another and lie in the same plane, forming a laser area that covers the clear imaging plane P of the camera. As another alternative, as shown in fig. 1C, the laser emitting device 108 is disposed above the clear imaging plane P and emits a planar laser sheet from top to bottom. Since the laser sheet has a certain width, the part of the target object surface that coincides with the laser area still shows a laser spot, owing to diffuse reflection from the surface itself, rather than being completely blocked by the object, and the laser area covers the clear imaging plane P of the camera. In this embodiment, the laser is red laser light with a wavelength of 650 nm; as other alternative embodiments, the laser may be green or another color.
As a way of judging the sharp portion of an image from the laser spots, the camera is a multi-color camera comprising a first channel and a second channel. In this embodiment, the camera is a color camera with three RGB channels; the laser is a red laser with a wavelength of 650 nm, the first channel is the R channel, and the second channel is the G channel or the B channel. Note that the first and second channels defined here may each comprise several channels; for example, the second channel may be the G and B channels together. The processor comprises a channel selection unit for selecting a channel of the multi-color camera. The processor extracts the first channel image acquired by the multi-color camera and identifies the laser spot area in it. It then extracts the second channel image and maps the laser spot area of the first channel image onto the second channel image: the laser spot area in the first channel image corresponds to the sharp image area in the second channel image. As an alternative embodiment, when the laser is green, the first channel is the G channel. Using a multi-color camera to judge the laser spot area and the clear image position separately prevents the laser illumination from interfering with the image information, which facilitates subsequent operations such as flaw detection.
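The channel-matching judgment can be sketched as follows. This is a minimal Python illustration for a red laser; the function name and the simple red-excess threshold are assumptions for illustration, not the system's actual spot-detection method.

```python
def laser_sharp_region(r_channel, g_channel, spot_threshold=100):
    """Find red-laser spot pixels in the first (R) channel and map them onto
    the second (G) channel: those (x, y) coordinates are the sharp region."""
    region = []
    for y, (r_row, g_row) in enumerate(zip(r_channel, g_channel)):
        for x, (r, g) in enumerate(zip(r_row, g_row)):
            if r - g > spot_threshold:   # strong red excess marks the laser line
                region.append((x, y))
    return region
```

The second-channel image at the returned coordinates is then used for subsequent processing, free of the laser's interference.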
As an alternative embodiment, the light intensity distribution of the laser area is made uniform using an optical element such as a grating. A uniform laser intensity distribution does not excessively disturb the surface information of the target object, so the laser need not be filtered out with a multi-color camera.
Marking the clear imaging plane with laser light adapts to different shooting scenes and dispenses with equipment such as a scanning camera.
Image fusion
To obtain a complete image of the object's surface, the processor fuses the clear images corresponding to different positions on the surface into one complete, clear image. Before image fusion, the processor has already acquired, by the systems and methods described above, the relative position of each clearly imaged portion with respect to the other clearly imaged portions of the same image.
As an embodiment, the camera is mounted on the driving device so that its shooting position changes as the driving device moves. As an embodiment, the driving device comprises a mounting seat, a stepping motor, a transmission mechanism, wheels, and a guide rail. The camera is fixedly connected to the mounting seat. The stepping motor may be arranged inside or outside the mounting seat. One end of the transmission mechanism is connected to the wheels and the other end to the stepping motor; the stepping motor drives the transmission mechanism, which in turn drives the wheels. The wheels move along the guide rail, which determines their trajectory. During one shooting pass, the driving device moves in one direction along the trajectory of the guide rail, and the camera advances by the same distance at each step under the drive of the stepping motor. The camera photographs the surface of the object at the successive positions along the driving device's path. The processor receives the pictures, obtains the clear part of each picture by judging gradient peaks or calculating intersections, and splices the clear parts together in order. (D in the drawing indicates the direction of relative movement between the object and the camera.)
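The splicing of clear parts in shooting order can be sketched as follows. This is a minimal Python illustration; the (position, strip) representation, with positions derived from the stepping-motor step count, is an assumption for illustration.

```python
def stitch_sharp_strips(strips):
    """Stitch the sharp strips cut from successive shots into one image.

    `strips` is a list of (position, strip) pairs, where each strip is a list
    of pixel rows taken from the sharp part of one shot and each position
    records where along the travel direction the shot was taken. Strips are
    ordered by position and concatenated.
    """
    full = []
    for _pos, strip in sorted(strips, key=lambda s: s[0]):
        full.extend(strip)
    return full
```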
As an alternative embodiment, the motor of the drive device may be a servomotor.
As another embodiment, the driving device may drive the target object or the camera to rotate around a rotation axis, which is parallel to the clear imaging plane P of the camera. In this way, images of different parts of the surface of the target object can also be acquired.
As another alternative embodiment, the camera is fixed in position and the driving device moves the object to be photographed. The camera takes multiple pictures with the target object at different positions. The processor receives the pictures, obtains the clear part of each by judging gradient peaks or calculating intersections, and splices the clear parts together in order. In this case, the driving device may be an industrial production line, rollers, a transfer cart, or the like.
In summary, the driving device drives the camera and/or the target object so that, over time, every portion of the target object to be photographed appears in the clear imaging plane P of the camera. As an alternative embodiment, the image acquisition system may further include a positioning device in data connection with the driving device and the processor, respectively, so that the processor can match the clear portion of an image with the corresponding portion of the target object.
As an alternative embodiment, the camera may be connected to the processor. The camera transmits the captured picture to the processor, which processes the image to obtain the information about the object's surface contained in it. This information includes flaw information, characters and marks on the surface, and the color, texture, and pattern of the surface. The flaw information at least includes whether a flaw exists in the image and the location of any flaw in the image.
Judging the clear part of the image by calculating the position of the intersection of the clear imaging plane and the object surface adapts to various shooting scenes and is unaffected by illumination conditions or the object's surface texture.
Multiple cameras shoot together
The image acquisition system may include several cameras to improve shooting efficiency. For example, when the target object is rotated by the rotary driving device 107 and there is only one camera, the driving device usually needs to rotate 360° to acquire a complete image of the object's surface. With two oppositely arranged cameras and the target object placed between them, the clear imaging planes of the two cameras being perpendicular to the same plane, the driving device only needs to rotate 180°, halving the shooting time. With three cameras whose center lines form 120° angles with one another, the planes of the clear imaging planes of each pair intersecting obliquely, the driving device only needs to rotate 120° to complete the shooting.
On-line detection system
Industrial inspection tends to demand high detection standards. If image acquisition and detection are to replace manual visual inspection, the camera must photograph the object to be inspected at close range so as to acquire finer image information of its surface.
As shown in figs. 3A and 3B, an industrial inspection system based on image acquisition includes an industrial production line 30, a position learning subsystem 31, an image acquisition subsystem 32, and a processor 33.
In the following, an on-line detection system for glass edges is taken as an example to describe in detail an on-line detection system that includes the image acquisition system of the present invention. In this embodiment, the glass 39 to be detected is sheet glass. As an alternative embodiment, the detection system can also detect surface flaws of objects other than glass, such as wood, steel, and stone.
The position learning subsystem 31 acquires coordinate information of the edge of the glass to be detected; it can be omitted when clear parts of images are identified by computing gradient peaks of image parameters or by marking the clear imaging plane with laser light. The image acquisition subsystem 32 includes one or more cameras for acquiring images of the glass edge. The processor 33 is in data connection with at least the position learning subsystem 31 and the image acquisition subsystem 32 and performs the various image and data processing operations.
This industrial inspection system obtains a clear image of the surface of the object to be inspected by machine, saving labor and improving detection accuracy. With this system, the object to be inspected can be placed at any position and in any orientation on the industrial production line.
Industrial production line
The industrial production line 30 carries the glass being photographed. It can convey the object to be inspected on a conveying surface along a straight conveying line in at least one conveying direction. In this embodiment, the industrial production line moves the glass placed on it.
The industrial production line comprises a conveying unit used for conveying a product to be detected.
As an alternative embodiment, the industrial production line comprises a roller frame and several conveying rollers; the conveying rollers are rotatably connected to the roller frame, and their axes lie in the same plane.
As other alternative embodiments, the industrial production line may also include a conveyor such as a conveyor belt or a transfer car.
The surface of the industrial production line can be made of anti-skid rubber or fitted with suction cups to increase the friction between the line and the conveyed glass, thereby improving conveying efficiency.
Position learning subsystem 31
The position learning subsystem acquires coordinate information of the product to be detected; it can be omitted when clear parts of images are identified by computing gradient peaks of image parameters or by marking the clear imaging plane with laser light.
A coordinate system is established with the conveying surface of the industrial production line as the plane of the coordinate system. The coordinate system may be two-dimensional. The contour of an object placed on the industrial production line projects as a closed figure in this coordinate system. The conveying direction of the industrial production line is taken as the x direction, and the direction perpendicular to it as the y direction. The position learning subsystem needs to know at least the relative position between the product to be detected and the camera.
Front camera
As shown in fig. 3B, the front camera 312 is disposed above, generally directly above, the industrial production line 30. As an alternative embodiment, the front camera 312 may be positioned according to actual needs; in any arrangement, however, its imaging range must cover the relevant area of the industrial production line. The front camera 312 is spaced a certain distance from the industrial production line 30, its lens faces the line, and its optical axis is perpendicular to the plane in which the axes of the conveying rollers lie.
The front camera may be a line scan camera (line camera), or an area camera. The scanning camera is used for acquiring coordinate information of an object to be measured.
As an alternative embodiment, the front camera 312 is a line scan camera.
Acquisition of industrial production line position information (glass contour x coordinate value)
The glass to be measured is placed on the industrial production line and moves along with the industrial production line. When the position information of the industrial production line is obtained at any time, the x-coordinate value of the glass to be measured at that time can be obtained on the coordinate plane.
In one embodiment, the images acquired by the front camera are examined, and the x coordinate of the part of a given glass sheet first captured by the front camera is set to 0.
As an alternative embodiment, a position detection device is arranged along the y direction of the industrial production line. As one implementation, the position detection device is a photoelectric gate comprising a signal emitting device on one side of the industrial production line and a signal receiving device arranged opposite it; the line connecting the two is perpendicular to the conveying line of the industrial production line, and the x coordinate of any point on this connecting line is set to 0. When unobstructed, the receiving device of the photoelectric gate continuously receives the light signal emitted by the emitting device; when obstructed, it does not.
The photoelectric gate is connected to the processor, and the distance between the photoelectric gate and the front camera remains constant, so the x coordinate of the front camera's scanning area can also be obtained.
Acquiring coordinates with an encoder
As a displacement detecting device, an encoder comprises a code disc and a reading unit and converts an angular or linear displacement into an electrical signal.
As shown in fig. 3B and fig. 4, the industrial inspection system based on image acquisition includes an encoder 311, which may be a rotary encoder, including an absolute value encoder and an incremental encoder.
The code disc of the encoder can be connected to a motor through a coupling, or to a transmission device that directly drives the movement of the industrial production line. When the encoder is connected to the motor, it directly obtains the motor's rotation information, and the corresponding motion coordinate of the industrial production line is obtained through conversion by the reduction ratio.
The encoder is connected to the processor and transmits information to it. Taking the first image acquired by the scanning camera, or the triggering of the photoelectric gate, as the zero reference, the motion coordinate of the industrial production line recorded by the encoder at any time is the x coordinate of the front point of the glass.
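The conversion from encoder reading to the x coordinate of the glass front point can be sketched as follows. This is a minimal Python illustration; the parameter names and the reduction-ratio convention (motor revolutions per roller revolution) are assumptions for illustration.

```python
def encoder_x_coordinate(count, zero_count, counts_per_rev,
                         roller_circumference, reduction_ratio=1.0):
    """Convert an encoder reading into the x coordinate of the glass front point.

    `zero_count` is the reading latched when the photoelectric gate (or the
    scanning camera's first image) signalled the front edge of the glass;
    `reduction_ratio` is the motor-to-roller gear reduction.
    """
    revolutions = (count - zero_count) / counts_per_rev   # motor revolutions
    return revolutions / reduction_ratio * roller_circumference
```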
Obtaining the position information of the industrial production line is a primary and key step of the whole detection process; an error here may make the outputs of all subsequent steps wrong, so the accuracy of the information acquired by the encoder must be ensured.
As an alternative to the photoelectric gate, the processor calculates the real-time speed of the industrial production line from the data transmitted by the encoder. The moment the real-time speed drops suddenly is judged to be the moment glass is placed on the line. The motion coordinate of the industrial production line at that moment is marked 0, so the x coordinate of the front point of the glass is 0 at that time.
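The speed-drop judgment can be sketched as follows. This is a minimal Python illustration; the function name and the fixed drop threshold are assumptions for illustration.

```python
def glass_arrival_index(speeds, drop_threshold):
    """Return the index of the first speed sample where the line speed drops
    suddenly (by more than `drop_threshold` between consecutive samples),
    taken as the moment glass is placed on the line (x = 0); None if no
    such drop occurs."""
    for i in range(1, len(speeds)):
        if speeds[i - 1] - speeds[i] > drop_threshold:
            return i
    return None
```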
As an alternative embodiment, the glass edge detection system includes a plurality of encoders mounted at different locations or on different components of the industrial production line. For example, in the case of two encoders, the main encoder 311a and the auxiliary encoder 311b are connected to the motor and to the line's transmission mechanism, respectively. If the information obtained by the auxiliary encoder 311b is consistent with that of the main encoder 311a, or the error is within a certain range, the encoder information is considered accurate. The use of a main encoder and an auxiliary encoder allows encoder faults to be found in time and improves detection precision.
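The main/auxiliary consistency check can be sketched as below. The count-to-millimetre scale factors and the tolerance are illustrative assumptions, not values from the text.

```python
def check_encoders(main_counts, sub_counts, main_mm_per_count, sub_mm_per_count, tol_mm=0.5):
    """Convert both encoder readings to line displacement (mm) and flag a
    fault when they disagree by more than tol_mm.
    Returns (consistent, main_displacement_mm)."""
    d_main = main_counts * main_mm_per_count
    d_sub = sub_counts * sub_mm_per_count
    return abs(d_main - d_sub) <= tol_mm, d_main
```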
Coordinate acquisition using brushless motor parameters
A brushless motor is controlled by electronic commutation commands, and its speed can be controlled by a pulse-width-modulation signal. Owing to this characteristic, the brushless motor controller can obtain the rotation angle of the motor by counting the control signals, and thereby obtain the motion information of the industrial production line.
When the photoelectric sensor outputs the zero coordinate, the glass enters the detection area; the brushless motor controller records the pulse width and the number of control signals of the brushless motor, and from these calculates the motion coordinate of the industrial production line at any time, i.e. the x coordinate of the front point of the glass.
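The conversion from control-signal count to line coordinate can be sketched as follows. Pulses per revolution, reduction ratio, and roller circumference are illustrative parameters, not values given in the text.

```python
def line_coordinate_mm(pulse_count, pulses_per_rev, reduction_ratio, roller_circumference_mm):
    """Motor revolutions obtained from the counted control signals, divided
    by the reduction ratio, give roller revolutions and hence the line
    displacement (the x coordinate of the glass front point)."""
    motor_revs = pulse_count / pulses_per_rev
    roller_revs = motor_revs / reduction_ratio
    return roller_revs * roller_circumference_mm
```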
Time interval of front camera shooting
Whether an encoder or a brushless motor is adopted, the real-time speed of the industrial production line can be obtained. Because of differing loads placed on the industrial production line, or differences in the lubrication of its mechanical devices, the line does not always move at a uniform speed. The processor therefore calculates the real-time movement speed of the industrial production line, computes the time required for the line to move a fixed interval at that speed, and triggers the shooting of the front camera at the corresponding time point. With this triggering mode, the pictures taken by the front camera meet the requirements of post-processing. For example, when the front camera is a line-scan camera, this ensures that the line moves a constant interval between successive exposures; when the front camera is an area-array camera, this ensures that each shot obtains a complete image of the glass surface.
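Scheduling the trigger times for a constant line-displacement interval can be sketched as below, assuming (as the text does) that the current real-time speed is treated as constant over the short planning horizon.

```python
def trigger_times(t_now, speed_mm_s, interval_mm, n_shots):
    """Times at which to fire the front camera so successive exposures are
    separated by a constant line displacement interval_mm, assuming the
    current real-time speed holds over the horizon."""
    dt = interval_mm / speed_mm_s
    return [t_now + k * dt for k in range(1, n_shots + 1)]
```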
The specific shooting interval when a line-scan camera is adopted depends on the imaging quality required. For example, for glass to be measured of large size, the shooting frequency of the line-scan camera can be relatively reduced, lowering energy consumption and heat generation. Conversely, the shooting frequency can be increased to improve the fineness of the imaging.
After the front camera finishes shooting, the coordinate information of any point of the glass edge at any moment can be obtained by combining the industrial production line coordinates corresponding to the encoder readings or the brushless motor control parameters.
Image acquisition subsystem 32
The image acquisition subsystem acquires images, and the processor identifies the sharp part of each acquired image by finding the gradient peak of the image, by calculating the intersection position, or by marking the clear imaging plane with a laser. In this embodiment, the image acquisition subsystem 32 is configured to acquire images of the edge of the glass to be measured, and may include a plurality of cameras 321.
In one embodiment, the image capturing subsystem 32 is disposed behind the position learning subsystem 31, and the glass to be measured passes through the position learning subsystem 31 and then passes through the image capturing subsystem 32 when being conveyed. By the arrangement, when the image acquisition subsystem 32 acquires an image, the coordinate information of the edge of the glass to be detected is known, so that the subsequent processing operation is facilitated.
Working process of single camera in image acquisition subsystem
In the case of detecting a glass edge flaw, the position and direction in which the glass is placed on the industrial production line are often arbitrary.
As an embodiment, as shown in the figure, the position of the camera is fixed, the center of the camera's imaging range is level with the industrial production line, and, viewed from directly above, the projection of the camera's clear imaging plane P at least partially covers the industrial production line. During the movement of the industrial production line, the position of the camera remains unchanged.
The glass to be measured is placed on the industrial production line in an arbitrary direction, so the glass edge is generally not parallel to the camera's clear imaging plane P. In the glass edge image captured by the camera, the image at the intersection of the glass edge and the clear imaging plane P is sharp, while the other portions of the image are not.
As an optional implementation, the processor controls the shooting of the camera according to the real-time position of the glass to be measured acquired by the position acquisition subsystem: when the front point of the glass is about to enter, or enters, the camera's shooting area, the processor controls the camera to start shooting.
In order to ensure that each point of the glass edge can intersect with a clear imaging plane P of the image acquisition subsystem 32 during the movement of the glass to be measured, as an optional implementation manner, a limiting mechanism can be arranged at the edge of the industrial production line to prevent the glass edge to be measured from exceeding the width range of the industrial production line; as another alternative, an alarm device/function may be introduced: when the front camera 312 judges that the shape and the position information of the glass to be measured exceed the width range of the industrial production line, an alarm is given.
Shooting multiple pictures and carrying out image fusion
The camera has a limited depth of field, so the sharp part of each picture it takes covers only a limited range.
In order to obtain a complete image of the glass edge, the camera takes a series of pictures at specific moments, and the sharp part of each image is spliced to form a complete image of the glass edge.
Record the horizontal center point of the clear imaging region in the i-th frame image as x_i.
Centered on x_i, an image patch of width W_x and height W_y is cropped, where x_i, W_x, and W_y are all in pixels. The sharp range in the image taken by the camera is [x_i - W_x/2, x_i + W_x/2].
The parameters used in the following formulas are: image distance u; object distance v; belt moving speed V; distance the object moves perpendicular to the camera optical axis S_v; corresponding distance the image moves on the sensor side S_u; abscissa of the image on the sensor plane x (the horizontal pixel position of the point in the picture); sensor pixel size d; glass thickness T; lens focal length f; sensor tilt angle θ; and reference values x_0 and u_0, meaning that the horizontal coordinate x_0 on the sensor corresponds to an image distance u_0 of the lens.
In the method of identifying the sharp portion of the image through the peak point of the image gradient, the region of the sharp range in the image must be calculated. This requires the belt moving speed V, the angle α of the camera optical axis relative to the belt, and the exposure interval t between two frames.
In the method of identifying the sharp portion through the gradient peak point of the image:
W_x = S_u/cos(θ)/d, W_y = (u*T)/(v*d)
where S_u = u*S_v/v, v = 1/(1/f - 1/u), u = u_0 + Δu, Δu = (x_i - x_0)*sin(θ), S_v = V*sin(α)*t, and t is the shooting interval of the camera.
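The formulas above can be evaluated directly. A sketch using the text's symbols (angles in radians); the numeric inputs in the usage example are purely illustrative, not values from the text.

```python
import math

def sharp_window(x_i, x0, u0, f, V, alpha, t, theta, T, d):
    """W_x, W_y (pixels) and the sharp range [x_i - W_x/2, x_i + W_x/2],
    following the gradient-peak formulas in the text."""
    du = (x_i - x0) * math.sin(theta)   # Δu = (x_i - x_0)*sin(θ)
    u = u0 + du                         # u = u_0 + Δu
    v = 1.0 / (1.0 / f - 1.0 / u)       # lens equation: v = 1/(1/f - 1/u)
    S_v = V * math.sin(alpha) * t       # object-side displacement
    S_u = u * S_v / v                   # image-side displacement
    W_x = S_u / math.cos(theta) / d
    W_y = (u * T) / (v * d)
    return W_x, W_y, (x_i - W_x / 2, x_i + W_x / 2)
```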
In the algorithm where the sharp part of the image is obtained by calculating the intersection position:
W_x = S_u/cos(θ)/d, W_y = (u*T)/(v*d)
where S_u = u*S_v/v, v = 1/(1/f - 1/u), u = u_0 + Δu, and S_v = V*sin(α)*t.
In this method, the rear camera selects an appropriate moment to take pictures according to the glass edge position information provided by the front camera and the reading of the encoder.
The processor controls the shooting of the image acquisition subsystem according to the known relative position between the product to be measured and the camera. The moment at which the sharp position of the camera's image is x_i is calculated as follows:
Assume the encoder counts at the moments of the two camera shots are C_(i-1) + C_pb and C_i + C_pb; the displacement of the glass between the two shots is then (C_i - C_(i-1))*S_c, where S_c is the line displacement corresponding to one encoder count.
Because the camera sensor is tilted by the angle θ, the pixel size in the x-axis direction of the image plane is d*cos(θ), while the pixel size in the y-axis direction remains d.
The distance the object moves perpendicular to the optical axis of the camera is S_v = (C_i - C_(i-1))*S_c*sin(α).
Then, from the edge point position (X_i, Y_i + S_pb) provided by the front camera and the distance D, the object distance of the camera is obtained as v = (D + X_i)/sin(α). The image distance u is then obtained from the lens focal-length formula 1/f = 1/u + 1/v.
This further gives x_i = (u - u_0)/sin(θ) + x_0.
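The chain of calculations in this subsection can be sketched as follows. Variable names follow the text; the numeric values in the test are illustrative, and S_c is taken, as above, to be the line displacement per encoder count.

```python
import math

def sharp_position_x(C_prev, C_i, S_c, alpha, D, X_i, f, u0, x0, theta):
    """Object distance v = (D + X_i)/sin(α); image distance u from the lens
    equation 1/f = 1/u + 1/v; then x_i = (u - u0)/sin(θ) + x0.
    Also returns the object displacement S_v = (C_i - C_prev)*S_c*sin(α)."""
    S_v = (C_i - C_prev) * S_c * math.sin(alpha)
    v = (D + X_i) / math.sin(alpha)
    u = 1.0 / (1.0 / f - 1.0 / v)
    x_i = (u - u0) / math.sin(theta) + x0
    return x_i, S_v
```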
The controller controls the camera to shoot according to this calculation and the information obtained by the position acquisition subsystem, so that the images taken at successive times/positions satisfy the requirements of image splicing.
A plurality of sharp images corresponding to different positions of the glass edge are fused to obtain a complete glass edge image within the shooting range of a single camera.
If the glass is currently moving from right to left, the [x - D/2, x + D/2] portion of the current frame is cropped and placed on the right side of the output image.
If the glass is currently moving from left to right, the [x - D/2, x + D/2] portion of the current frame is cropped and placed on the left side of the output image.
The above two steps are repeated until a complete sharp image is obtained.
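The splicing rule above can be sketched on one-dimensional pixel rows. The frame format and the crop width D are illustrative; images are modelled here as plain lists.

```python
def stitch_sharp_strips(frames, moving_right_to_left):
    """frames: list of (row_pixels, x, D). Crop the sharp strip of width D
    centred on x from each frame; append it on the right of the output when
    the glass moves right-to-left, on the left otherwise."""
    out = []
    for pixels, x, D in frames:
        strip = pixels[x - D // 2 : x + D // 2]
        out = out + strip if moving_right_to_left else strip + out
    return out
```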
Sensor/lens tilt
When detecting surface flaws of an object, the object to be measured may be placed on the industrial production line in any direction. The image acquisition and detection system obtains the image of the object surface from the sharp image at the coincidence between the object surface and the clear imaging plane P of the camera. Image acquisition devices include cameras, video cameras, scanners, and the like, whose clear imaging planes P have a certain length. The object surface flaw detection system needs a camera with a larger depth-of-field range, so that a sufficiently wide area of the industrial production line can be covered.
In order to obtain a larger depth of field, a method of replacing the lens and/or the image sensor may be used. But a larger depth of field range means more cost.
As an alternative embodiment, the image acquisition subsystem in the industrial inspection system employs a camera with an offset configuration.
As shown in fig. 5, the camera having the offset configuration includes a housing 501, a lens 502, and an image sensor 503. The image sensor is coupled to a mounting shaft 504, which is rotatably mounted to the housing 501. The mounting shaft protrudes from the upper portion of the housing 501, and a rotary knob 505 is fixedly connected to the exposed part of the shaft. As an alternative embodiment, an angle code disc 506 is disposed around the rotary knob to indicate the rotation angle of the image sensor. The lens 502 defines a main optical axis and the image sensor 503 defines a sensing plane; a line passing through the mounting axis 504 and perpendicular to the main optical axis forms an angle α with the sensing plane. Rotating the knob 505 rotates the mounting shaft 504, which in turn rotates the image sensor 503 with it. The current tilt angle of the image sensor can be read directly from the angle code disc 506.
Alternatively, the lens 502 may also be rotated to change the tilt relationship between the image sensor and the lens.
Taking the image sensor tilt as an example, the focus range of the sensor is expressed by the following formula:
[(u + sin(α)*x)/(u + sin(α)*x - 1), u/(u - 1)]
where u is the image distance, α is the rotation angle of the image sensor, x is the normalized size of the image sensor along the radial direction of the rotation axis (x = X/f, where X is the real size and f the focal length), and sin(α)*x is the projection of the sensor along the optical axis.
As can be seen from the formula, for given x and u, a larger α lowers the near bound (u + sin(α)*x)/(u + sin(α)*x - 1) and hence gives a larger depth of field. When the angle α ranges from 5° to 65° (equivalently, when the angle formed by the intersection of the main optical axis of the lens and the sensing plane ranges from 25° to 85°), a good imaging effect can be obtained.
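The focus-range formula can be checked numerically. A sketch assuming, as the normalization x = X/f implies, that distances are expressed in units of the focal length:

```python
import math

def focus_range(u, alpha_deg, x):
    """Focus range [(u + sin(α)*x)/(u + sin(α)*x - 1), u/(u - 1)] of the
    tilted sensor; u and x are assumed normalized by the focal length."""
    s = math.sin(math.radians(alpha_deg)) * x
    return (u + s) / (u + s - 1), u / (u - 1)

def depth_of_field(u, alpha_deg, x):
    """Width of the focus range; grows with the tilt angle α."""
    near, far = focus_range(u, alpha_deg, x)
    return far - near
```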
According to the imaging principle of the lens, the camera with offset structure has a longer length of the clear imaging plane P and a larger depth of field range compared with a common camera using the same lens and image sensor.
The longer clear imaging plane P and larger depth-of-field range allow the imaging range of the offset-configuration camera to cover as much of the industrial production line as possible. Because the position of the glass on the industrial production line is uncertain, adopting the offset-configuration camera ensures, at the same hardware cost, that any point on the industrial production line coincides with some point of the camera's clear imaging plane P at some moment during the motion, and therefore that any position on the glass edge can be sharply imaged at some moment.
At the same time, the larger depth-of-field range yields more sharp content when shooting the surface of an object with depth. For example, when shooting curved glass, a camera with the offset structure can capture a sharp macro image of the whole glass surface in one shot at a suitable shooting angle, whereas a conventional macro camera, with its relatively small depth of field, may not be able to image the different positions of the curved glass sharply at the same time.
Camera with offset structure
Multiple cameras shoot together
Often, objects have multiple surfaces, and there may be mutual occlusion between the multiple surfaces of the object or different locations on the surface. At this time, a single camera cannot acquire images of multiple surfaces of the object at the same time.
The image acquisition subsystem 32 is used to acquire a complete image of the glass edge. Image acquisition subsystem 32 includes a plurality of cameras. Multiple cameras may be disposed on the same horizontal plane. Generally, a plurality of cameras are distributed on the periphery of an industrial production line and are arranged on two sides of a conveying straight line of the industrial production line. The sharp imaging planes P of any two cameras are not coincident.
As shown in fig. 6A, as an alternative embodiment, a plurality of identical cameras are uniformly distributed around a certain point (the point is referred to as a central point) on the center line in the width direction of the industrial production line, and an included angle formed by a connecting line between any two adjacent cameras and the central point is consistent, which can also be described as that the plurality of cameras are uniformly distributed on the circumference of a certain circle with the central point as the center. The arrangement is such that each camera is centrally symmetrical about the centre point. The centrosymmetric relationship makes the installation and replacement of the whole image acquisition subsystem 32 easier and more convenient.
As a modification of the above embodiment, the number of the plurality of cameras is an even number. The even number of cameras are uniformly distributed on two sides of a conveying straight line of the industrial production line. In this case, the cameras are not only centrosymmetric but also axisymmetric. The difficulty of assembling and replacing the image acquisition subsystem is further reduced by adopting the arrangement of an even number of cameras.
Shooting with multiple cameras simultaneously makes it possible to obtain images of different edges of the glass at the same time. If too few cameras are used, images of the different glass edges cannot all be acquired; if too many are used, installing them on the industrial production line becomes complicated and the cost rises with the number of cameras.
Each camera has a certain shooting angle β when shooting the glass to be measured at different positions, and the camera can work normally only when β > 0°. When a plurality of cameras are used, the shooting angle β of each camera must therefore satisfy β > 0°. For the case where the cameras are uniformly distributed, the relationship between the number of cameras n and the limit shooting angle β_max is:
β_max = 90° - 180°/n
The cameras can work normally only when β_max > 0°. Since n must be an integer, the formula gives n ≥ 3, i.e. at least 3 cameras are needed to meet the shooting requirement. In that case, at least two cameras lie on the same side of the transport line.
β_max can also be regarded as the incidence angle of the camera: the larger β_max is, the closer the camera's shooting direction is to perpendicular to the object surface, and the better the shooting effect.
As the table below shows, the larger n is, the larger β_max becomes and the better the average imaging quality; that is, more cameras give a better imaging effect.
Number of cameras n | β_max | Angle improvement
3 | 30° | -
4 | 45° | 50%
5 | 54° | 20%
6 | 60° | 11%
7 | 64.3° | 7%
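The table values follow directly from β_max = 90° - 180°/n; a quick numerical check, taking the "angle improvement" column as the relative gain of β_max over the previous camera count:

```python
def beta_max_deg(n):
    """Limit shooting angle for n uniformly distributed cameras."""
    return 90.0 - 180.0 / n

def angle_improvement_pct(n):
    """Relative improvement of beta_max over the (n-1)-camera layout."""
    return 100.0 * (beta_max_deg(n) - beta_max_deg(n - 1)) / beta_max_deg(n - 1)
```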
However, the larger the number of cameras, the higher the cost and the more complicated the assembly.
Based on multiple experimental results, and considering together the absolute imaging quality, the improvement in imaging quality, and the hardware cost of the whole system for different numbers of cameras, the optimal state b of the whole system is expressed by the following formula:
b=8n-n 2
as can be seen from this equation, when n =4, b takes the maximum value.
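Maximizing b over the admissible integer camera counts (n ≥ 3, per the earlier constraint) can be sketched as:

```python
def best_camera_count(candidates=range(3, 8)):
    """Optimal camera count n maximizing the system score b = 8n - n^2."""
    return max(candidates, key=lambda n: 8 * n - n * n)
```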
When n is 4, the cameras can be arranged on the two sides of the industrial production line, with the cameras and the glass to be measured in the same horizontal plane. Moreover, because the cameras on both sides are set at the same distance from the industrial production line, the difficulty of assembly is reduced.
With 4 cameras, β_max is improved by 50% compared with 3 cameras. According to the experimental results, the improvement in imaging quality is greatest when the number of cameras in the image acquisition subsystem increases from 3 to 4.
Mounting position of camera
As shown in fig. 6A, taking 3 cameras as an example, fig. 6B-6D show the relationship between the clear imaging planes of the cameras in different image capturing subsystems.
In one embodiment, the main optical axes of the three cameras are parallel to the same plane. Such an arrangement can prevent distortion of the captured picture due to the tilt of the camera in the longitudinal direction.
The clear imaging planes P of the respective cameras must satisfy the following: during the relative motion between the industrial production line and the cameras, every point on the surface of the target object intersects some clear imaging plane P. Equivalently, take a plane perpendicular to the clear imaging planes P as a projection plane (in this embodiment the transport plane can serve as the projection plane) and treat the projection of each clear imaging plane P on it approximately as a line segment; then the trajectory of any point of the target surface on the projection plane must intersect one of these projected line segments.
The target is placed on the industrial production line, and the edge of the target does not exceed the edge of the industrial production line when viewed from the top. By the arrangement, no matter what direction and position the object is placed on the industrial production line, the edge of the object cannot exceed the plane of the industrial production line, namely: there will be no completely non-imageable portions of the object surface (e.g., portions that are out of the plane of the industrial line will be completely non-imageable).
In addition, whether for observation by the human eye or imaging by a device, there is the problem of the observed object occluding itself front-to-back. In the scene of shooting the glass edge, as shown in fig. 6B-6C, when the glass is placed in certain orientations, a complete image of the glass edge cannot be obtained because of this occlusion. In the scenario of fig. 6B, the image acquisition subsystem cannot acquire the image of the right edge of the rectangular glass 39 to be measured; in the scenario of fig. 6C, it cannot acquire the image of the left edge of the rectangular glass 39.
According to a number of tests, as shown in fig. 6D, when the projected line segments of the several clear imaging planes P enclose a closed region on the projection plane, a complete sharp image of the measured product can be obtained as long as every point of the product's projection on the transport plane passes through that closed region during transport, i.e. the measured product passes completely through the closed region.
A plurality of cameras satisfying the above conditions can obtain a complete image of the glass edge, regardless of the direction and position of the glass on the industrial production line.
Selection of best camera
The positions of the cameras are fixed, so the shooting angle of each camera is fixed. Since the glass edges may be occluded and the glass is placed at an arbitrary angle on the industrial production line, in general not every camera can completely capture an image of a given edge or position of the glass.
After the glass edge image and the corresponding position information are obtained through the position learning subsystem, the glass edge image is processed to obtain the normal vector at any position on it. The direction of the normal vector gives the best shooting angle for that point.
If the glass has a plurality of edges, the average normal vector of each edge is calculated, and the best shooting angle, and hence the best camera, is determined from the average normal vector corresponding to each edge.
If the glass edge is a closed curve, the normal vectors at fixed-spacing positions along the edge can be ordered by their angle relative to a reference direction, and the best camera is selected according to the angle of the normal vector at each position.
As an alternative embodiment, the image of the glass edge is a closed figure, and the normal vector at each point on the closed figure is obtained. The normal-vector angle ranges assigned to the cameras can be the same or different, but the sum of the angle ranges of all the cameras should be no less than 360°, so that any position on any glass edge image has at least one corresponding camera. When the image of the glass edge is collected, the image collected by the camera corresponding to the normal vector at that point of the glass edge is the optimal image.
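Sector-based camera selection by edge-normal direction can be sketched as below, assuming n uniformly spaced cameras, with camera 0 aimed along an (illustrative) reference angle of 0°.

```python
def best_camera_index(normal_angle_deg, n_cameras, camera0_angle_deg=0.0):
    """Each camera covers a 360/n sector of edge-normal directions; return
    the index of the camera whose sector contains the given normal angle."""
    sector = 360.0 / n_cameras
    # Shift by half a sector so each camera's sector is centred on its axis.
    rel = (normal_angle_deg - camera0_angle_deg + sector / 2.0) % 360.0
    return int(rel // sector)
```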
As an alternative embodiment, the plurality of cameras are uniformly distributed around a point (the point is called a center point) on the center line in the width direction of the industrial production line, and the included angle formed by the connecting line between two adjacent cameras and the center point is consistent, which can also be described as that the plurality of cameras are uniformly distributed on the circumference of a circle with the center point as the center. The arrangement is such that each camera is centrally symmetrical about the centre point. Where n cameras are involved, each camera selects the corresponding best normal vector in the range of 360/n. By means of the arrangement, the angle ranges of the normal vectors corresponding to the cameras are the same, namely the workload of each camera is the same, and stable operation of the image acquisition subsystem is facilitated. As shown in fig. 3B, as an embodiment, the industrial detection system based on image acquisition adopts a central symmetry arrangement of 4 cameras, which are evenly distributed on two sides of a transport straight line of an industrial production line, and an included angle between central axes of every two cameras is 90 °.
Selecting the corresponding camera according to the normal vector of the glass edge yields the glass edge image with the best imaging quality, while limiting each camera's working time and reducing its workload.
Glass edge flaw determination
Fig. 7A-7B show glass edge images taken with a flaw in the edge of the glass being measured. The form of the flaw may be different for different types of glass.
As an example, in this embodiment the edge of the glass under test should, in the normal case, be frosted. The main type of glass edge flaw is then "uncovered frosting", where the flaw portion produces specular reflection. Whether the camera images an unfrosted flaw as bright or dark depends on the angle of the specular surface and the incident direction of the light source.
If the light from the light source, the flaw, and the camera do not form a specular reflection relationship, the non-flaw portion produces diffuse reflection and part of the light is reflected toward the camera, while the flaw portion produces specular reflection whose reflected direction does not point at the camera, so no light from the source reaches the camera from the flaw. In this case, as shown in fig. 7A, the flaw appears darker than the non-flaw portion, so the unfrosted flaw appears black.
FIG. 7B shows another case in which a flaw is present at the edge of the glass. Here the light source, the flaw, and the camera happen to form a specular reflection relationship, and the light from the source is reflected entirely into the camera. The non-flaw portion still produces diffuse reflection, with only part of the light reaching the camera. In this case, the flaw appears brighter than the non-flaw portion. Therefore, with this type of light source, "uncovered frosting" flaws can appear either bright or dark.
As an alternative solution, two different judgment thresholds are set when judging flaws of the glass edge: a dark threshold and a bright threshold. When the brightness value at some position of the glass edge is below the dark threshold, the position is judged to be a flaw; when the brightness value is above the bright threshold, the position is likewise judged to be a flaw.
As another alternative, the average brightness of the entire glass edge is first calculated. The brightness values at the different positions of the edge are then subtracted from the average, and the absolute value of the difference is taken. If the absolute value exceeds a certain threshold, the brightness at that position is too bright or too dark, i.e. it is a specular reflection position, and a flaw is judged to exist there. The brightness values referred to here include not only brightness values in color images but also gray-scale values in black-and-white images.
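Both decision rules above can be sketched as follows. The brightness row and the threshold values in the usage example are illustrative.

```python
def flaw_mask_thresholds(brightness, dark_thresh, bright_thresh):
    """Dual-threshold rule: a position is a flaw if its brightness is below
    the dark threshold or above the bright threshold."""
    return [b < dark_thresh or b > bright_thresh for b in brightness]

def flaw_mask_deviation(brightness, max_abs_dev):
    """Mean-deviation rule: a position whose brightness deviates from the
    edge's average by more than max_abs_dev is a specular (flaw) position."""
    mean = sum(brightness) / len(brightness)
    return [abs(b - mean) > max_abs_dev for b in brightness]
```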
Light source
When the above judging method is used, if the brightness difference between flaw and non-flaw is too small, the threshold is difficult to choose and the flaw judgment is not accurate enough.
To solve this problem, as an alternative embodiment, referring to fig. 8A, a stripe-shaped light source 81 is disposed above the clear image plane P of the camera. From a top view perspective, each strip light source can cover a clear image plane P of one or more image capture units in image capture subsystem 32. The strip light source may include point light sources or area light sources uniformly distributed on the strip light source. By adopting the arrangement, when the image acquisition subsystem acquires an image, the clear part in the image can be irradiated by light source light rays with the same brightness and angle, so that the absolute value of the brightness difference value between each flaw position and each non-flaw position is increased, the brightness contrast between the flaws and the non-flaw positions is improved, and the flaws are judged more accurately. The strip-shaped light source can be close to the industrial production line as much as possible on the premise of not interfering the movement of objects on the industrial production line, so that the loss of brightness is reduced.
Flaws may also exist at the intersection of the edge of the frosted glass and the glass surface, which are known as "flash".
In another alternative embodiment, the light emitted by the light source is parallel light that covers the clear imaging plane P of the camera. The light source can be arranged beside the camera, on the same side of the industrial production line as the camera, or light sources can be arranged on both sides of the industrial production line. With this arrangement, light reaches the glass edge all the way to its intersection with the glass surface, so burst edges can be detected, widening the application range of the online detection system.
As another alternative, referring to fig. 8B, the light source 82 is an annular surface light source shaped like the side surface of a cylinder or of a rectangular parallelepiped. The light source surrounds the glass to be measured, and the entire inner wall of the light source emits uniform light. This arrangement guarantees that the light emitted by the inner wall, a flaw position on the glass edge, and some camera in the image acquisition subsystem always form a specular reflection relationship. Thus, no matter in what orientation the glass to be measured is placed on the industrial production line, a flaw on the glass edge specularly reflects light from some position on the light source into some camera in the image acquisition subsystem, so the flaw is always visually highlighted in the acquired image.
As an alternative embodiment, the light source combines an annular surface light source with a top-cover surface light source. Compared with an annular light source alone, this structure improves the detection of burst edges.
When the light source is arranged, the lower edge of its light-emitting part must be flush with, or lower than, the lower end of the glass edge to guarantee the effect of the light source. However, a light source placed this low may impede the movement of the glass on the industrial production line.
As an alternative embodiment, as shown in fig. 8C, the industrial line 30 is stepped in side view, narrow at the top and wide at the bottom, so that it has a highest plane during operation. The annular light source (or capped annular light source) 83 is disposed around this highest plane. The lower end of the annular light source 83 may be flush with, or slightly below, the level of the highest plane of the industrial line 30, while keeping a distance from the surface of the industrial line 30 that is greater than the maximum thickness of the glass 39 to be measured. When the glass 39 is transported onto the highest plane, the light source surrounds it. Because the distance between the lower end of the annular light source 83 and the line surface is greater than the maximum glass thickness, the light source does not need to move as the glass travels along the line and does not interfere with its movement; at the same time, the lower edge of its light-emitting part remains flush with or below the lower end of the glass edge, ensuring the full range of incident light angles.
As an alternative embodiment, shown in fig. 8D, the annular light source 84 can move up and down. A telescopic rod 841, which may be driven by a cylinder, hydraulics, or a motor, is attached to the upper end of the light source. When the glass to be measured moves below the light source 84, the industrial production line 30 stops; the telescopic rod 841 then descends until the lower edge of the light-emitting part of the light source is flush with the lower end of the glass 39. After the glass edge image is acquired, the telescopic rod 841 lifts and the line resumes movement. Correspondingly, the industrial production line 30 can be driven by a servo motor, with a fixed area on the line for placing the glass 39. The servo motor moves the line a fixed distance each cycle, so the light source 84 can be raised and lowered at a fixed time interval, simplifying the control procedure and reducing the error probability. A pressure feedback device or a distance measuring device may also be provided on the light source to ensure that it does not crush the glass 39 as it descends.
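The stop / lower / capture / lift / resume sequence, together with the pressure feedback mentioned above, could be sketched as follows. All driver objects and method names (`line`, `rod`, `camera`, etc.) are hypothetical interfaces for illustration, not a real hardware API.

```python
import time

def capture_with_liftable_light(line, rod, camera, safe_pressure=5.0):
    """One inspection cycle for the liftable ring light of fig. 8D."""
    line.stop()                  # servo-driven line halts at the fixed area
    rod.lower()                  # telescopic rod 841 descends
    while rod.is_moving():
        # pressure feedback: do not press on and crush the glass
        if rod.pressure() > safe_pressure:
            rod.stop()
            break
        time.sleep(0.01)
    image = camera.capture()     # acquire the glass edge image
    rod.raise_()                 # telescopic rod 841 lifts
    line.resume()                # production line resumes movement
    return image
```

Because the servo line advances a fixed distance per cycle, this routine can simply be invoked on a fixed time interval rather than being triggered by a sensor.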
Since the light source itself is opaque, the image acquisition subsystem 32 generally needs to be installed between the light source and the glass to be measured in order to acquire an image of the glass. Moreover, since the cameras in the image acquisition subsystem 32 are also opaque, a camera tends to block part of the light from the light source. If there are multiple cameras and two of them are in a reflective relationship with a glass edge flaw, the resulting image of the flaw will again appear dark.
To solve the above problem, as an alternative solution, a light source is provided on the camera 321. It may be arranged to the side of the camera lens as a surface light source, so that the angle of its outgoing light at least compensates for the portion of the annular light source blocked by the camera. The light source may also be annular and mounted around the camera lens; this arrangement further ensures that the light it emits makes up for the light blocked by the camera.
Flaw detection
After a sharp image of the glass edge is obtained, the processor 33 detects flaws from the image.
Flaw detection method including the front camera 312
1. Flaw detection without image stitching
In this mode, flaw detection is performed on the clear image acquired by the image acquisition subsystem at each shooting moment, without stitching images, to judge whether the image of the glass edge acquired at that moment contains a flaw. Because a detection system that includes the front camera 312 can directly obtain the position of the clear image on the glass edge at any time, the spatial position of a flaw is known as soon as the flaw is detected at the clearly imaged position. In this way, both the presence of a flaw on the glass edge and its position can be known simultaneously.
The specific determination process is shown in fig. 9A.
In step S911, a reference value of brightness or grey scale is determined. The reference value may be the average brightness or grey scale of the edge of the previous piece of glass. As another embodiment, it may be determined before testing from a flawless piece of glass.
In step S912, the absolute value of the difference between the average brightness or grey scale of the current picture and the reference value is calculated.
In step S913, it is judged whether the absolute value exceeds a reasonable error range. If not, the position of the glass edge is considered free of flaws. If the absolute value falls outside the reasonable error range, a flaw is considered present at that position of the glass edge, and the process proceeds to step S914.
In step S914, the spatial information of the glass edge corresponding to the flaw picture is obtained. This spatial information may be obtained by the position acquisition subsystem 31.
In step S915, feedback is given. The feedback may be an audible and visual alarm, may be sent over a wired connection to a quality inspector's work computer, or may be sent over a wireless connection to an external mobile terminal such as a smart watch or a smart phone.
This method directly judges whether the clear image obtained at each moment contains flaws; it skips the image stitching step and is efficient and fast.
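The per-frame check in steps S911 to S915 can be sketched as below. This is a minimal sketch, assuming each frame's mean brightness and its encoder-derived edge position are supplied by the acquisition and position subsystems; the function names and the tolerance value are illustrative, not part of the embodiment.

```python
def check_frame(frame_mean, reference, tolerance, edge_position):
    """Steps S912-S914: compare a frame's mean brightness with the
    reference value; return (has_flaw, flaw_position)."""
    deviation = abs(frame_mean - reference)   # S912
    if deviation <= tolerance:                # S913: within error range
        return False, None
    return True, edge_position                # S914: attach spatial info

def run_inspection(frames, reference, tolerance=20):
    """frames: iterable of (mean_brightness, edge_position) pairs.
    Yields flaw positions for feedback (S915)."""
    for frame_mean, position in frames:
        has_flaw, where = check_frame(frame_mean, reference, tolerance, position)
        if has_flaw:
            yield where  # feedback: alarm / inspector's computer / mobile terminal
```

The reference value would be set once per batch (step S911), e.g. from the previous flawless edge, and each incoming frame is then judged independently.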
2. Flaw detection after image stitching
In the flaw detection mode without image stitching, each detection is relatively independent, so the result depends heavily on earlier steps. For example, the reference value depends on brightness and grey-scale information of glass measured beforehand, which varies with ambient illumination, so the precision of flaw detection changes with natural lighting. Likewise, the position information of the clearly imaged part depends entirely on the front camera 312 and the encoder 311; if either of them produces a deviation, the obtained position information deviates as well.
As another glass edge flaw detection method, as shown in fig. 9B, the steps are as follows:
in step S921, the images are stitched to form a sharp image of the complete glass edge.
In step S922, an average value of the brightness or gradation of the entire image obtained in step S921 is calculated. When the obtained picture is a color picture, the average value is a brightness average value; when the acquired picture is a black-and-white picture, the average value is a gray average value.
In step S923, the brightness or grey scale of each pixel (or pixel group) in the complete image obtained in step S921 is read in a specific scan order and compared with the average value obtained in step S922. The comparison may compute the absolute value of the difference between the pixel's brightness or grey scale and the average value; alternatively, that absolute difference may be divided by the average value, which effectively eliminates the influence of illumination conditions on the judgment of flaw positions. If the comparison result for a pixel exceeds a certain threshold, the pixel position is considered a flaw position.
In step S924, the relative position of the flaw on the glass edge is obtained from the scan order used in step S923. If available, the spatial information acquired by the position acquisition subsystem 31 may be received and integrated with this relative position.
In step S925, the flaw position is fed back to the quality inspector. The feedback may be sent over a wired connection to the inspector's work computer, or over a wireless connection to an external mobile terminal such as a smart watch or a smart phone.
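Steps S921 to S925 can be sketched as below for a stitched edge image held as a 2-D array of brightness or grey values. This is a minimal sketch: the relative threshold is illustrative, and the normalised comparison |pixel − mean| / mean is the illumination-invariant variant described in step S923.

```python
import numpy as np

def detect_flaws_stitched(image, rel_threshold=0.4):
    """Return (row, col) coordinates of flaw pixels, in scan order.
    Implements steps S922-S924 on the stitched image of S921."""
    img = np.asarray(image, dtype=float)
    mean = img.mean()                          # S922: global average
    relative_dev = np.abs(img - mean) / mean   # S923: normalised comparison
    flaw_mask = relative_dev > rel_threshold
    # S924: the row-major scan order gives each flaw's relative
    # position along the glass edge.
    return list(zip(*np.nonzero(flaw_mask)))
```

Dividing by the mean means a uniformly brighter or darker exposure shifts every pixel and the mean together, leaving the ratio, and hence the flaw decision, unchanged.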
Flaw detection method without front camera 312
1. Flaw detection without image stitching
In this mode, flaw detection is performed, without image stitching, on the clear image acquired by the image acquisition subsystem at each shooting moment, to judge whether the image of the glass edge acquired at that moment contains a flaw. Because a flaw detection method without the front camera 312 can hardly obtain the position of the clearly imaged part at a given time, this method is usually used only to determine whether the glass edge has a flaw at all.
This method directly judges whether the clear image obtained at each moment contains flaws; it skips the image stitching step and is efficient and fast.
2. Flaw detection after image stitching
Performing flaw detection after the images are stitched improves detection precision on the one hand, and on the other hand yields the position of a flaw from its relation to other parts of the object under test. With this method, the position of the flaw can be obtained even without position information from the position acquisition subsystem.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.