CN111474184B - AOI character defect detection method and device based on industrial machine vision - Google Patents
- Publication number
- CN111474184B (application CN202010306399.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- detected
- character
- template
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/956—Inspecting patterns on the surface of objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an AOI character defect detection method based on industrial machine vision, which comprises the following steps: acquiring an image to be detected; preprocessing the image to be detected to obtain a position-corrected region-of-interest image containing characters; segmenting the region-of-interest image of the image to be detected to obtain a character area image; performing template matching between the character area image of the image to be detected and a pre-established reference image template to obtain a template matching result; judging whether the image to be detected has character defects according to the template matching result; and outputting the character defect judgment result. By using template matching, the invention can detect defects in the ROI region even with low-pixel gray-scale images, thereby improving detection efficiency and accuracy and reducing the labor cost of inspection.
Description
Technical Field
The invention relates to the technical field of image processing and machine vision, in particular to an AOI character defect detection method and device based on industrial machine vision.
Background
Machine vision uses machines in place of the human eye for measurement and judgment; it integrates information easily and is a basic technology of computer-integrated manufacturing. The most basic feature of a machine vision system is that it increases the flexibility and automation of production. In dangerous working environments unsuitable for manual operation, or in situations where human vision cannot meet the requirements, machine vision is often used in its place. At the same time, in mass repetitive industrial production, machine vision inspection can greatly improve production efficiency and the degree of automation.
AOI (Automated Optical Inspection) is the inspection, based on optical principles, of common defects encountered in solder production. At present, industrial production generally requires certain character information to be printed on a product, and defects may occur when printing in a machine-operated environment, which reduces the pass rate of products at quality inspection and affects the production schedule. The traditional approach inspects the character information printed on the product manually, which is slow, labor-intensive and time-consuming. Inspection methods using machine vision are limited by the quality requirements on the captured picture, and their recognition accuracy is often low.
Disclosure of Invention
The invention aims to provide an AOI character defect detection method and device based on industrial machine vision, which combine image processing with machine vision and use AOI character defect detection technology to identify character defects on the product surface, reducing the precision required of the camera lens and improving the accuracy and efficiency of the recognition results.
The technical scheme adopted by the invention is as follows.
In one aspect, the present invention provides an AOI character defect detection method, including:
acquiring an image to be detected;
preprocessing an image to be detected to obtain an interested area image with corrected position and containing characters;
separating the interested region image of the image to be detected to obtain a character region image;
carrying out template matching on the character area image of the image to be detected and a pre-established reference image template to obtain a template matching result;
judging whether the image to be detected has character defects or not according to the template matching result;
and outputting a character defect judgment result.
In the method, the pre-established reference image template is obtained by training on a plurality of character image samples containing the characters to be detected; the template creation and training process for the reference image may use the prior art. The parameters available from template matching include a value indicating whether matching model characters can be found in the image to be detected, and the number of character defects; whether character defects exist can be judged according to the number of character defects.
Optionally, the template matching includes: calculating a matching score Score, wherein if Score is 0, the characters of the reference image do not exist in the image to be detected, and if Score is not 0, the characters of the reference image can be found in the image to be detected;
in response to Score not being equal to 0, comparing the character area of the image to be detected with the character area of the reference image to obtain and output the number of character defects, NumError;
judging whether character defects exist in the image to be detected according to the template matching result as follows: if NumError is larger than 0, character defects exist in the image to be detected; otherwise, no character defects exist in the image to be detected. The matching score Score is a number between 0 and 1 and is an approximate measure of the proportion of the template visible in the search image.
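The decision rule above can be sketched as follows (a minimal illustration; the function and return strings are hypothetical, not from the patent):

```python
def judge_defect(score: float, num_error: int) -> str:
    """Decision rule from the template matching result: Score == 0 means
    the model characters were not found at all; otherwise NumError > 0
    flags a character defect."""
    if score == 0:
        return "characters not found in image under test"
    return "defective" if num_error > 0 else "no character defect"
```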
Optionally, the method further comprises:
in response to NumError being 0 in the template matching result, calculating the center coordinates of the character area image of the image to be detected, comparing them with the center coordinates of the character area in the reference image corresponding to the reference image template, and calculating the offset of the image to be detected;
and comparing the calculated offset with a set threshold; in response to the offset being greater than the set threshold, outputting result information indicating a character defect, and otherwise outputting result information indicating a successful match. That is, an image to be detected that passes template matching but has a large offset can still be determined to have a character defect.
Optionally, the offset threshold is set to a plurality of different values, each threshold corresponding to a character defect degree value; the character defect result information includes an offset degree flag bit whose value is the character defect degree value; the value of the finally output offset degree flag bit is the defect degree value of the threshold that is smaller than and closest to the offset.
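One plausible reading of this multi-threshold flag bit, sketched with a hypothetical data structure (a `{threshold: degree}` mapping is an assumption, not the patent's representation):

```python
def offset_degree(offset: float, thresholds: dict):
    """Return the defect-degree value of the largest threshold not
    exceeding the offset, or None if the offset is below every
    threshold (i.e. the match is considered successful)."""
    passed = [t for t in sorted(thresholds) if t <= offset]
    return thresholds[passed[-1]] if passed else None
```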
Optionally, the foreground and the background of the image of the region of interest of the image to be detected are separated by a Blob analysis algorithm, so as to obtain a character region image.
Optionally, the method further includes, for the region-of-interest image of the image to be detected, removing the image frame using a Halcon operator that selects regions by feature value, then cutting out the region containing only the characters with a domain-reduction operator, and further applying dilation to the region. The Halcon algorithms are existing algorithms, and the dilation ensures that the characters are completely contained within the region.
Optionally, the creating of the reference image template includes:
determining a reference image corresponding to an image to be detected and a plurality of training sample images with known character defects;
determining mark position points and calibration points of the reference image, and determining a character area image;
creating an image template based on the character area image of the reference image;
preprocessing and separating images of each training sample according to the images to be detected to obtain character area images;
and performing matching training on the image template using the character area images of the training samples so as to adjust the parameters of the template matching process, such that the template matching results after parameter adjustment agree with the known character defects of the corresponding training sample images. The mark position points and calibration points are determined in order to correct the positions of the training sample images and the images to be detected.
In the template training of the invention, the template is inspected in order to adjust the parameters of the matching process, so that the image template obtains more accurate matching results in actual template matching.
The template matching algorithm can refer to the prior art. The inputs of the algorithm mainly comprise: Image, ModelID, AngleStart, AngleExtent, ScaleMin, ScaleMax, MinScore, NumMatches, MaxOverlap, SubPixel, NumLevels, Greediness. The parameter Image is the reference image, and ModelID is the model name. The parameters AngleStart and AngleExtent determine the range of rotations within which the model is searched. The parameters ScaleMin and ScaleMax determine the range of scales within which the model is searched. The parameter MinScore specifies the minimum score a potential match must achieve to be considered an instance of the model in the image. The parameter SubPixel determines whether instances are extracted with sub-pixel precision. The number of pyramid levels used in the search is determined by NumLevels. The parameter Greediness determines how "greedily" the search is performed. The outputs of the algorithm are mainly: Row, Column, Angle, Scale, Score. The position, rotation, and scale of a found instance of the model are returned in Row, Column, Angle, and Scale. The row and column coordinates refer to the position of the origin of the shape model in the search image. The score of each found instance is returned in Score. The score is a number between 0 and 1, an approximate measure of how much of the model is visible in the image; for example, if half of the model is occluded, the score cannot exceed 0.5.
Optionally, the method further includes storing the reference image and the image to be detected in different folders respectively;
when the image to be detected is preprocessed, the position correction comprises the following steps:
acquiring mark position points and calibration points of a stored reference image;
acquiring mark position points and calibration points of a stored image to be detected;
and enabling the mark position points of the image to be detected and the reference image to coincide through affine transformation.
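The position-correction steps above can be sketched as a rigid rotation-plus-translation computed from the two mark points of each image (an illustrative sketch; the function name and argument layout are assumptions, not the patent's code):

```python
import math

def align_point(p, m1, m2, m1p, m2p):
    """Map a point p from the reference frame onto the frame of the
    image under test, using the mark points M1, M2 of the reference
    image and M1', M2' of the image under test."""
    # rotation angle: difference between the orientations of M1->M2
    theta = (math.atan2(m2p[1] - m1p[1], m2p[0] - m1p[0])
             - math.atan2(m2[1] - m1[1], m2[0] - m1[0]))
    xr, yr = p[0] - m1[0], p[1] - m1[1]   # coordinates relative to M1
    # rotate, then translate so that M1 lands on M1'
    xp = xr * math.cos(theta) - yr * math.sin(theta) + m1p[0]
    yp = xr * math.sin(theta) + yr * math.cos(theta) + m1p[1]
    return xp, yp
```

With identical mark points the mapping is the identity; with mark points shifted by a constant it is a pure translation.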
Optionally, the mark position points of the reference image are defined as $M_1(x_1, y_1)$ and $M_2(x_2, y_2)$ and its calibration point as $P(x, y)$; the mark position points of the image to be detected are $M'_1(x'_1, y'_1)$ and $M'_2(x'_2, y'_2)$ and its calibration point is $P'(x', y')$.
The affine transformation is as follows.
A coordinate system $x_0 y_0$ is established with the point $M_1$ as reference; with the point $M'_1$ as reference, a translated coordinate system $x_T y_T$ and a rotated coordinate system $x_R y_R$ of the coordinate system $x_0 y_0$ are established. In the coordinate system $x_0 y_0$, the relative coordinates of the point $P$ are $P(x_r, y_r)$ and those of the point $P'$ are $P'(x'_r, y'_r)$, where $x_r = x - x_1$, $y_r = y - y_1$, $x'_r = x' - x'_1$, $y'_r = y' - y'_1$.
The point $P$ is transformed twice between coordinate systems to obtain the point $P'$, whose relative coordinates in the coordinate system $x_0 y_0$ are:
$$x' - x_1 = x_r \cos\theta - y_r \sin\theta + x_0, \qquad y' - y_1 = x_r \sin\theta + y_r \cos\theta + y_0 \qquad (1)$$
where
$$x_0 = x'_1 - x_1, \qquad y_0 = y'_1 - y_1 \qquad (2)$$
The machine coordinates of $P'$ obtained from equations (1) and (2) are:
$$x' = x_r \cos\theta - y_r \sin\theta + x'_1, \qquad y' = x_r \sin\theta + y_r \cos\theta + y'_1 \qquad (3)$$
The rotation angle of the coordinate system is:
$$\theta = \arctan\frac{y'_2 - y'_1}{x'_2 - x'_1} - \arctan\frac{y_2 - y_1}{x_2 - x_1} \qquad (4)$$
Substituting equation (4) into equation (3) gives the machine coordinates of the point $P'$, and the position of the image to be detected is corrected according to the machine coordinates of $P'$.
The two coordinate-system transformations from the point $P$ to the point $P'$ are: one translation transformation taking the coordinate system $x_0 y_0$ to the coordinate system $x_T y_T$, and one rotation transformation taking the coordinate system $x_T y_T$, rotated by the angle $\theta$, to the coordinate system $x_R y_R$. For convenience of calculation, the coordinate transformation may be regarded as a rotation transformation followed by a translation transformation, which yields equation (1).
Optionally, the template matching includes:
taking the character area image of the reference image as the search template T, whose size is defined as m × n pixels, and the character area image of the image to be detected as the searched image S, whose size is defined as W × H pixels;
overlaying the search template T on the searched image S and moving it to search, with the search range:
1 ≤ i ≤ W − m
1 ≤ j ≤ H − n
moving several pixels at each step, the region of the searched image covered by the search template being the subimage S_ij, where the subscripts i, j are the coordinates of the upper-left corner of the subimage S_ij on the searched image S;
calculating the matching degree between the search template T and the subimage S_ij after each movement;
selecting the subimage with the best matching degree, and taking its coordinates and a set surrounding range as a new search range;
within the new search range, moving a single pixel at each step and calculating the matching degree between the search template T and the subimage S_ij;
and, within the new search range, selecting the subimage with the best matching degree, and further calculating the matching score and the number of defects.
Optionally, after each movement, the matching degree between the search template T and the subimage S_ij is measured by one of the following formulas:
$$D(i, j) = \sum_{s=1}^{m} \sum_{t=1}^{n} \left[ S_{ij}(s, t) - T(s, t) \right]^2 \qquad (5)$$
or
$$D(i, j) = \sum_{s=1}^{m} \sum_{t=1}^{n} \left| S_{ij}(s, t) - T(s, t) \right|$$
The smaller D(i, j) is, the better the matching degree.
For example, with the matching degree calculated by equation (5), expanding equation (5) gives:
$$D(i, j) = \sum_{s=1}^{m} \sum_{t=1}^{n} S_{ij}(s, t)^2 + \sum_{s=1}^{m} \sum_{t=1}^{n} T(s, t)^2 - 2 \sum_{s=1}^{m} \sum_{t=1}^{n} S_{ij}(s, t)\, T(s, t)$$
As the expansion shows, the second term is a constant related only to the template, while the first and third terms depend on the original image, and both values change as the template moves over it. When the value of D(i, j) is minimum, the target has been found. However, this is very inefficient: the amount of computation is very large when the template is moved one pixel at a time.
Therefore, the invention optimizes the algorithm. Observation of actual template matching runs shows that the matching error drops rapidly near the matching point and differs markedly from other positions. In view of this characteristic, the invention adopts a combined coarse-fine matching algorithm, which quickly locks onto the approximate region of the matching point and greatly reduces the total number of matching operations. The specific implementation is as follows: coarse matching is performed by jumping several points at a time, i.e. moving several pixels per step, to roughly frame the matching area; the nearby area is then searched point by point to obtain the best matching point. The amount of computation can be reduced to less than one third, and the target extraction effect is quite good.
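The coarse-fine search described above can be sketched as follows (an illustrative NumPy stand-in, not the patent's implementation; `step` and `radius` are assumed tuning values, and 0-based indices are used rather than the 1-based ranges of the text):

```python
import numpy as np

def coarse_to_fine_match(S, T, step=4, radius=4):
    """Coarse-to-fine template search.
    S: searched image (H x W), T: template (n x m).
    The coarse pass jumps `step` pixels at a time; the fine pass
    re-searches a `radius` neighbourhood of the coarse winner
    pixel by pixel. Returns (i, j) of the best-matching subimage."""
    H, W = S.shape
    n, m = T.shape
    Tf = T.astype(float)

    def ssd(i, j):  # matching degree D(i, j); smaller is better
        patch = S[i:i + n, j:j + m].astype(float)
        return float(((patch - Tf) ** 2).sum())

    # coarse pass: move several pixels at a time
    i0, j0 = min(((i, j)
                  for i in range(0, H - n + 1, step)
                  for j in range(0, W - m + 1, step)),
                 key=lambda ij: ssd(*ij))
    # fine pass: single-pixel steps around the coarse winner
    return min(((i, j)
                for i in range(max(0, i0 - radius), min(H - n, i0 + radius) + 1)
                for j in range(max(0, j0 - radius), min(W - m, j0 + radius) + 1)),
               key=lambda ij: ssd(*ij))
```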
Advantageous effects
By using template matching, the invention can detect ROI-region defects under low-pixel gray-scale images, and can quickly screen out defective character images through AOI detection even with few samples and low pixel counts, improving detection efficiency and accuracy while reducing the cost of manual inspection. Application of the invention can improve product quality and the production efficiency of the whole production line, and promote the digitization and standardization of products.
Drawings
FIG. 1 is a schematic diagram of a pre-processing process of a reference image and an image to be detected;
FIG. 2 is a schematic diagram of an image of the region to be detected, wherein the left side is the camera view and the right side is the framed and cropped rectangular ROI region;
FIG. 3 is a diagram illustrating a detection result in an application example of the present invention;
fig. 4 and fig. 5 are schematic diagrams illustrating principle flows of two different embodiments of the offset determination method according to the present invention.
Detailed Description
The following further description is made in conjunction with the accompanying drawings and the specific embodiments.
The inventive concept of the invention is as follows: a reference image is determined for the characters to be detected; the reference image is position-calibrated, cropped, and otherwise processed, and the character region in the reference image is converted into a Region of Interest (ROI) to improve subsequent detection precision; template creation and training are then performed based on the ROI of the reference image. During detection, the image to be detected is first acquired and subjected to affine transformations such as scaling, rotation and translation so that its calibration points coincide with those of the reference image; a template matching algorithm is then used to judge whether the characters of the reference image exist in the character area image of the image to be detected and whether they are defective; further, the character offset in the image to be detected can be calculated, and character defects further determined from the offset.
Example 1
Based on the aforementioned inventive concept, the present embodiment is an AOI character defect detection method, including:
acquiring an image to be detected;
preprocessing an image to be detected to obtain an interested area image with corrected position and containing characters;
separating the images of the interested areas of the images to be detected to obtain character area images;
carrying out template matching on the character area image of the image to be detected and a pre-established reference image template to obtain a template matching result;
judging whether the image to be detected has character defects or not according to the template matching result;
and outputting a character defect judgment result.
In the method, the pre-established reference image template is obtained by training on a plurality of character image samples containing the characters to be detected; the template creation and training process for the reference image may use the prior art. The parameters available from template matching include a value indicating whether matching model characters can be found in the image to be detected, and the number of character defects; whether character defects exist can be judged according to the number of character defects. In the template creation process, the character area is converted into a region of interest, which improves detection precision and enables defect detection in the ROI region under low-pixel gray-scale images.
Examples 1 to 1
Based on embodiment 1, this embodiment specifically introduces an AOI character defect detection method based on industrial machine vision.
Template matching is built on the creation of the reference image template, the position correction of the image to be detected, and the selection of the character region. As shown in FIG. 1, before actual detection, a reference image must first be determined and acquired, and its calibration point and mark points determined. When detection is needed, the image to be detected is first acquired; affine transformation is performed via the mark points of the image to be detected so that they coincide with those of the reference image, realizing position correction; a rectangular image area containing the characters is then obtained by cropping, as shown in FIG. 2.
Referring to fig. 4, in the present embodiment, the method is mainly implemented as follows.
First, reference image acquisition and processing
(1) Creating a new folder named Standard for storing the png format reference image;
(2) shooting with a 5-megapixel industrial camera, and storing the acquired reference image in the Standard folder;
(3) performing mark point position calibration, area selection and image cropping on the reference image by threshold judgment and shape selection to obtain the region of interest of the reference image, which is stored in a new image_processing folder;
(4) and segmenting the foreground and the background of the image by using a binarization algorithm to obtain a character area image.
This embodiment uses a Blob analysis method for preprocessing the reference image, comprising: calculating the area Aera_original of the reference image; applying Gaussian filtering to the reference image to make it smooth and easy to process; binarizing the image using a binarization threshold with the "LightDark" coefficient set to "Dark"; and calculating the area Value of the character region after binarization. The difference between Aera_original and Value is then computed: if the difference is smaller than Value, the image is color-inverted and gray-value segmentation is performed with the binarization threshold; otherwise, gray-value segmentation is performed directly with the binarization threshold. Because frame interference may occur when the ROI is selected, the frame must be removed by an algorithm that selects regions according to feature values. After the frame is removed, the region containing only the characters is cut out by reducing the image to that specific region, and the region is dilated to ensure that the characters are completely contained in the selected area.
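As a rough illustration, the binarize-and-maybe-invert logic described above could look like the following (an assumed NumPy stand-in for the Halcon Blob tools; the Gaussian filtering and frame-removal steps are omitted):

```python
import numpy as np

def binarize_characters(img, thresh=128):
    """Segment dark characters by thresholding. If the dark region
    covers more than half of the image area (the Aera_original vs.
    Value comparison in the text), the polarity is assumed reversed
    and the mask is inverted."""
    dark = img < thresh            # 'Dark' binarization
    if dark.mean() > 0.5:          # character area exceeds half the image
        dark = ~dark               # reverse-color case
    return dark.astype(np.uint8)   # 1 = character foreground
```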
Template creation and training
Creating an image template based on the character area image of the reference image;
the image templates are matched and trained by utilizing a plurality of groups of character image samples, so that the process parameters of the template matching algorithm are adjusted, the high-precision matching of the image templates is ensured, and on the basis, the parameters such as NumLevels and Greenesses are adjusted to improve the matching speed.
The template matching algorithm inputs mainly comprise: Image, ModelID, AngleStart, AngleExtent, ScaleMin, ScaleMax, MinScore, NumMatches, MaxOverlap, SubPixel, NumLevels, Greediness. The parameter Image is the reference image, and ModelID is the model name. The parameters AngleStart and AngleExtent determine the range of rotations within which the model is searched. The parameters ScaleMin and ScaleMax determine the range of scales within which the model is searched. The parameter MinScore specifies the minimum score a potential match must achieve to be considered an instance of the model in the image. The parameter SubPixel determines whether instances are extracted with sub-pixel precision. The number of pyramid levels used in the search is determined by NumLevels. The parameter Greediness determines how "greedily" the search is performed. The outputs of the algorithm are mainly: Row, Column, Angle, Scale, Score. The position, rotation, and scale of a found instance of the model are returned in Row, Column, Angle, and Scale. The row and column coordinates refer to the position of the origin of the shape model in the search image. The score of each found instance is returned in Score. The score is a number between 0 and 1, an approximate measure of how much of the model is visible in the image; for example, if half of the model is occluded, the score cannot exceed 0.5.
The invention inspects the template to check the applicability of the parameters and find suitable values; moreover, the template contour obtained from this inspection can be used in the subsequent matching. The frame is then converted to an XLD contour, and a shape template (shape model) is created. When creating the shape template, setting parameters such as NumLevels, Contrast and Metric is important, and the optimal values need to be selected through repeated debugging on test pictures.
Through test verification, when the values of the parameters NumLevels and Greediness of template matching are respectively 5 and 0.9, the matching speed is fastest and the precision is highest.
Thirdly, acquiring and processing the image to be detected
1. A Processing folder is created for storing the images to be detected in png format; the images to be detected are shot with a 5-megapixel industrial camera, and the acquired images are stored in the two respective folders.
2. The mark points and calibration point of the image to be detected are obtained, with coordinates $M'_1(x'_1, y'_1)$, $M'_2(x'_2, y'_2)$ and $P'(x', y')$; the image to be detected is made to coincide with the mark points of the reference image through affine transformations such as translation, scaling and rotation, and the image is then cropped to size.
The coordinates of the reference image mark points mark1 and mark2 and of the calibration point are $M_1(x_1, y_1)$, $M_2(x_2, y_2)$ and $P(x, y)$ respectively. The affine transformation process is:
A coordinate system $x_0 y_0$ is established with the point $M_1$ as reference; with the point $M'_1$ as reference, a coordinate system $x_T y_T$ and a coordinate system $x_R y_R$ are established. In the coordinate system $x_0 y_0$, the relative coordinates of the point $P$ are $P(x_r, y_r)$ and those of the point $P'$ are $P'(x'_r, y'_r)$, where $x_r = x - x_1$, $y_r = y - y_1$, $x'_r = x' - x'_1$, $y'_r = y' - y'_1$.
The point $P$ is transformed twice between coordinate systems to obtain the point $P'$: one coordinate translation takes the coordinate system $x_0 y_0$ to the coordinate system $x_T y_T$, and one coordinate rotation takes the coordinate system $x_T y_T$, rotated by the angle $\theta$, to the coordinate system $x_R y_R$.
For convenience of calculation, the coordinate transformation can also be regarded as a rotation followed by a translation, giving the relative coordinates of the point $P'$ in the coordinate system $x_0 y_0$ after the transformation:
$$x' - x_1 = x_r \cos\theta - y_r \sin\theta + x_0, \qquad y' - y_1 = x_r \sin\theta + y_r \cos\theta + y_0 \qquad (1)$$
where
$$x_0 = x'_1 - x_1, \qquad y_0 = y'_1 - y_1 \qquad (2)$$
Combining equations (1) and (2), the machine coordinates of $P'$ are
$$x' = x_r \cos\theta - y_r \sin\theta + x'_1, \qquad y' = x_r \sin\theta + y_r \cos\theta + y'_1 \qquad (3)$$
and the rotation angle of the coordinate system is
$$\theta = \arctan\frac{y'_2 - y'_1}{x'_2 - x'_1} - \arctan\frac{y_2 - y_1}{x_2 - x_1} \qquad (4)$$
Substituting equation (4) into equation (3) finally gives the machine coordinates of the point $P'$, realizing the position correction of the image to be detected.
After the position is corrected, rectangular ROI area selection and cropping are carried out, leaving the part containing the character area, i.e. the character area image, specifically as follows:
A rectangular region-of-interest operator is obtained according to the absolute coordinates of the image; the image is divided into rectangular regions containing only characters, which are cropped and stored in the image_processing folder, with the background color still present in the image at this point. For the region-of-interest images of the reference image and of the image to be processed, binarization segmentation can be applied to each before template matching to obtain the respective character area images used for template matching; alternatively, the character area images can be obtained in the respective preprocessing stages.
In addition, Gaussian filtering can be applied to smooth the image edges and reduce the influence of the low pixel count on image processing;
for the character area image obtained by binarization segmentation, regions of other possible interfering noise are removed and only the region containing the characters is retained; the region is then dilated and enlarged according to the rectangular selection area to ensure that the characters are completely contained in it. Experiments verify that the effect is best when the expansion coefficient of the character area selection is 2.
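One plausible reading of the "expansion coefficient of 2" step, sketched as growing the character bounding box to twice its size (a hypothetical interpretation, not the patent's Halcon code):

```python
import numpy as np

def crop_with_margin(mask, factor=2):
    """Take the bounding box of the character pixels in a binary mask,
    then grow it by `factor` (height and width scaled up, clipped to
    the image) so the characters are fully contained in the crop.
    Returns (y0, y1, x0, x1) as half-open slice bounds."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    h, w = y1 - y0 + 1, x1 - x0 + 1
    my, mx = (h * (factor - 1)) // 2, (w * (factor - 1)) // 2
    Y0, Y1 = max(0, y0 - my), min(mask.shape[0], y1 + my + 1)
    X0, X1 = max(0, x0 - mx), min(mask.shape[1], x1 + mx + 1)
    return Y0, Y1, X0, X1
```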
Fourth, template matching
The character area image of the image to be detected is matched against the image template that was created from the character area of the reference image and then trained, and whether the character area of the image to be detected has character defects is judged.
The basic principle of the template matching algorithm is to overlay the search template T (m × n pixels) on the searched image S (W × H pixels) and translate it over the image; the region of the searched image covered by the template is the subimage S_ij, where the subscripts i, j are the coordinates of the upper-left corner of the subimage on the searched image S. The search range is:
1≤i≤W-m
1≤j≤H-n
Template matching is completed by comparing the similarity of T and Sij. The matching degree between the template T and the subgraph Sij can be measured in either of the following two ways:

    D(i, j) = Σu Σv [Sij(u, v) − T(u, v)]²

or

    R(i, j) = Σu Σv Sij(u, v)·T(u, v) / sqrt( Σu Σv [Sij(u, v)]² · Σu Σv [T(u, v)]² )

where the sums run over all template pixels, 1 ≤ u ≤ m, 1 ≤ v ≤ n. Expanding the first equation gives:

    D(i, j) = Σu Σv [Sij(u, v)]² + Σu Σv [T(u, v)]² − 2 Σu Σv Sij(u, v)·T(u, v)

As the expansion shows, the second term is a constant that depends only on the template, while the first and third terms depend on the searched image and change as the template moves over it. When D(i, j) reaches its minimum, the target has been found. However, this exhaustive search is very inefficient, since the full sum must be recomputed for every single pixel the template moves.
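The exhaustive D(i, j) search can be illustrated with a brute-force NumPy sketch. The function name and test data are hypothetical; a real implementation would use optimized routines rather than Python loops:

```python
import numpy as np

def ssd_map(S, T):
    """D(i, j) = sum over the template window of (Sij - T)^2,
    computed by brute force for every allowed top-left corner (i, j).
    Note that (T ** 2).sum() — the template energy — is the same for
    every (i, j): it is the constant term of the expansion."""
    H, W = S.shape
    n, m = T.shape
    D = np.empty((H - n + 1, W - m + 1))
    for i in range(H - n + 1):
        for j in range(W - m + 1):
            win = S[i:i + n, j:j + m]
            D[i, j] = ((win - T) ** 2).sum()
    return D

rng = np.random.default_rng(0)
S = rng.integers(0, 255, (30, 30)).astype(float)
T = S[12:20, 5:15].copy()          # plant the template inside S at (12, 5)
D = ssd_map(S, T)
i, j = np.unravel_index(D.argmin(), D.shape)
```

The minimum of D lands on the planted location, where the window equals the template exactly and D is zero.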
Observation of actual template matching results shows that the matching error drops rapidly near the matching point and differs markedly from other positions. Exploiting this property, a combined coarse-and-fine matching algorithm quickly locks onto the approximate region of the matching point, greatly reducing the total number of match evaluations. The specific implementation is as follows: coarse matching is performed by jumping several pixels at a time to roughly frame the matching area, and the best matching point is then found by searching the nearby area point by point. Namely:
after the initial search range is determined, the template is moved several pixels at a time, and the region of the searched image covered by the search template is taken as the subgraph Sij, where subscripts i and j are the coordinates of the upper-left corner of the subgraph on the searched image S;
the matching degree between the search template T and the subgraph Sij is calculated after each move;
the subgraph with the best matching degree is selected, and its coordinates together with a set surrounding range are taken as the new search range;
within the new search range, the template is moved a single pixel at a time, and the matching degree between the search template T and the subgraph Sij is calculated;
within the new search range, the subgraph with the best matching degree is selected, and the matching degree value and the number of defects are then calculated.
Practice shows that this algorithmic improvement reduces the computation of template matching to less than one third of the original, while the target extraction effect remains very good.
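The coarse-and-fine strategy above can be sketched as follows. The step size, fine-search radius, and smooth test image are illustrative assumptions; the sketch uses the SSD measure D(i, j) as the matching degree (smaller is better):

```python
import numpy as np

def ssd(S, T, i, j):
    """Matching error of template T against the subgraph of S at (i, j)."""
    n, m = T.shape
    return ((S[i:i + n, j:j + m] - T) ** 2).sum()

def coarse_to_fine(S, T, step=4, radius=6):
    """Coarse pass: evaluate the error only every `step` pixels.
    Fine pass: exhaustive single-pixel search in a window of `radius`
    around the coarse winner."""
    H, W = S.shape
    n, m = T.shape
    best, bi, bj = float("inf"), 0, 0
    for i in range(0, H - n + 1, step):           # coarse, jumping search
        for j in range(0, W - m + 1, step):
            d = ssd(S, T, i, j)
            if d < best:
                best, bi, bj = d, i, j
    for i in range(max(0, bi - radius), min(H - n, bi + radius) + 1):
        for j in range(max(0, bj - radius), min(W - m, bj + radius) + 1):
            d = ssd(S, T, i, j)                   # fine, point-by-point search
            if d < best:
                best, bi, bj = d, i, j
    return bi, bj, best

# hypothetical smooth test image (a Gaussian blob), template planted at (13, 7)
X, Y = np.meshgrid(np.arange(30.0), np.arange(30.0))
S = 200.0 * np.exp(-((X - 14) ** 2 + (Y - 16) ** 2) / 60.0)
T = S[13:21, 7:17].copy()
bi, bj, best = coarse_to_fine(S, T)
```

Because the matching error drops smoothly near the true position, the coarse pass lands on a nearby grid point and the fine pass recovers the exact location; only a fraction of the positions are ever evaluated.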
A matching Score is obtained through template matching. If the Score equals 0, the characters of the reference image are not present in the image to be detected; if the Score is not 0, the characters of the reference image can be found in the image to be detected. The matching Score is a number between 0 and 1 and is an approximate measure of the proportion of the template that is visible in the search image.
In this embodiment, whether the image to be detected has character defects is judged from the template matching result as follows: if NumError is greater than 0, the image to be detected has character defects; otherwise, it has no character defects.
Fifthly, character offset judgment
For the case where NumError is 0 in the template matching result, a preliminary judgment of no character defect can be made first; character offset judgment then detects character defects further, finding images to be detected with a large offset.
The character offset judgment specifically comprises the following steps:
the center coordinates of the character region image of the image to be detected are calculated and compared with the center coordinates of the character region in the reference image corresponding to the reference image template, and the offset of the image to be detected is calculated. The offset may be computed as follows: calculate the center coordinates of the character region in the image to be detected and of the character region in the reference image, and take the difference of the two column vectors as the offset value;
the calculated offset is compared with a set threshold; if the offset is greater than the set threshold, result information indicating a character defect is output, otherwise result information indicating a successful match is output.
The specific offset determination can be configured as needed. For example, one way is: set the offset threshold to several different values such as 0 and C0, each threshold corresponding to a character defect degree value; the character defect result information includes an offset degree flag bit whose value is the character defect degree value; the finally output value of the offset degree flag bit is the character defect degree value corresponding to the threshold closest to, but smaller than, the offset. As shown in fig. 4, a positive offset threshold C0 and a negative offset threshold -C0 are determined according to the characters to be detected. When C > C0, the character is shifted considerably to the right, the judgment flag is 3, and the result is NG; when C < -C0, the character is shifted considerably to the left, the judgment flag is 2, and the result is NG; when -C0 <= C <= C0, the character position is within the normal range, the judgment flag is 0, and the result is PASS.
As shown in fig. 5, another method may also be:
When the offset is greater than 0 and less than C0, the character offset is small, the flag bit flag is 0, and PASS can be output to indicate that the character passed defect-free detection. When the offset is greater than C0, the character offset is large and flag is 2; if the offset exceeds an even larger threshold, flag is 3. In these cases NG can be output to indicate that the character is defective and failed detection.
In this manner, the offset is computed as a non-negative value.
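The flag assignment of fig. 4 (flag 3 for a large right shift, flag 2 for a large left shift, flag 0 for PASS) can be sketched directly. The function name, threshold, and centre-column values are hypothetical:

```python
def offset_flag(c, c0):
    """Map the signed column offset c between the detected and reference
    character centres to the judgment flags of fig. 4:
    3 = large right shift (NG), 2 = large left shift (NG), 0 = PASS."""
    if c > c0:
        return 3, "NG"
    if c < -c0:
        return 2, "NG"
    return 0, "PASS"

# centre columns of the character region: reference vs image under test
ref_center, det_center = 120.0, 133.5      # illustrative values
flag, verdict = offset_flag(det_center - ref_center, c0=10.0)
```

Here the signed offset 13.5 exceeds C0 = 10, so the character is judged shifted right and NG.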
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implements the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present invention has been described with reference to particular illustrative embodiments, it is to be understood that the invention is not limited to the disclosed embodiments and is intended to cover the various modifications and equivalent arrangements that may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (9)
1. An AOI character defect detection method is characterized by comprising the following steps:
acquiring an image to be detected;
preprocessing an image to be detected to obtain an interested area image with corrected position and containing characters;
separating the images of the interested areas of the images to be detected to obtain character area images;
carrying out template matching on the character area image of the image to be detected and a pre-established reference image template to obtain a template matching result;
judging whether the image to be detected has character defects or not according to the template matching result;
outputting a character defect judgment result;
wherein the creating of the reference image template comprises:
determining a reference image corresponding to an image to be detected and a plurality of training sample images with known character defects;
determining mark position points and calibration points of the reference image, and determining a character area image;
creating an image template based on the character area image of the reference image;
preprocessing and separating images of each training sample according to the images to be detected to obtain character area images;
and performing matching training on the image template with the character region images of the training samples to adjust the parameters of the template matching process, so that the template matching results after parameter adjustment conform to the known character defects of the corresponding training sample images.
2. The method of claim 1, wherein template matching comprises: calculating a matching Score, wherein if the Score is 0, characters in the reference image do not exist in the image to be detected, and if the Score is not 0, the characters in the reference image can be found in the image to be detected;
in response to that Score is not equal to 0, comparing the character area of the image to be detected with the character area of the reference image to obtain and output a number NumError of character defects;
judging whether character defects exist in the image to be detected according to the template matching result: if NumError is greater than 0, character defects exist in the image to be detected.
3. The method of claim 2, further comprising:
responding to the NumErro which is 0 in the template matching result, calculating the center coordinates of the character area image of the image to be detected, comparing the center coordinates with the center coordinates of the character area in the reference image corresponding to the reference image template, and calculating the offset of the image to be detected;
and comparing the calculated offset with a set threshold, and outputting corresponding result information with character defects in response to the fact that the offset is greater than the set threshold, otherwise outputting corresponding result information with successful matching.
4. The method as claimed in claim 3, wherein the threshold value of the offset is set to a plurality of different values, each threshold value corresponds to a character defect degree value, the character defect result information includes an offset degree flag bit, and the value of the flag bit is the character defect degree value; the value of the finally output offset degree zone bit is a character defect degree value which is smaller than the offset and corresponds to a threshold value close to the offset.
5. The method as claimed in claim 1, wherein a Blob analysis algorithm is used for separating foreground and background of the image of the region of interest of the image to be detected, so as to obtain a character region image;
the method further comprises: removing the image frame from the region-of-interest image of the image to be detected by using a feature-value region selection operator of the Halcon algorithm, then cropping out the area containing only the characters by using an operator that reduces the image to a specific region, and performing expansion processing on that area to obtain the final character region image.
6. The method of claim 1, further comprising storing the reference image and the image to be detected in different folders, respectively;
when the image to be detected is preprocessed, the position correction comprises the following steps:
acquiring mark position points and calibration points of a stored reference image;
acquiring mark position points and calibration points of a stored image to be detected;
and enabling the mark position points of the image to be detected and the reference image to coincide through affine transformation.
7. The method of claim 6, wherein the mark position points of the reference image are denoted M1(x1, y1) and M2(x2, y2) and its calibration point is P(x, y); the mark position points of the image to be detected are denoted M1'(x1', y1') and M2'(x2', y2') and its calibration point is P'(x', y');
the affine transformation is:
a coordinate system x0y0 is established with the point M1 as reference; a translated coordinate system xTyT and a rotated coordinate system xRyR of the coordinate system x0y0 are established with the point M1' as reference; then, in the coordinate system x0y0, the relative coordinate of the point P is P(xr, yr) and the relative coordinate of the point P' is P'(xr', yr'), where xr = x - x1, yr = y - y1, xr' = x' - x1', yr' = y' - y1';
after the point P undergoes the two coordinate-system transformations, the relative coordinates of the resulting point P' in the coordinate system x0y0 are:

    (x0 + xr', y0 + yr')    (1)

wherein x0 = x1' - x1, y0 = y1' - y1;
and:

    xr' = xr·cos θ - yr·sin θ
    yr' = xr·sin θ + yr·cos θ    (2)

The machine coordinates of P' obtained from equations (1) and (2) are:

    x' = x1 + x0 + xr·cos θ - yr·sin θ
    y' = y1 + y0 + xr·sin θ + yr·cos θ    (3)

The rotation angle θ of the coordinate system is:

    θ = arctan((y2' - y1')/(x2' - x1')) - arctan((y2 - y1)/(x2 - x1))    (4)

Equation (4) is substituted into equation (3) to obtain the machine coordinates of the point P', and the position of the image to be detected is corrected according to the machine coordinates of the point P'.
8. The method of claim 1, wherein template matching comprises:
taking a character area image of a reference image as a search template T, defining the pixel of the character area image as m multiplied by n, taking a character area image of an image to be detected as a searched image S, and defining the pixel of the character area image as W multiplied by H pixels;
the search template T is overlaid on the searched graph S for moving search, and the search range is as follows:
1≤i≤W-m
1≤j≤H-n
moving several pixels at a time, and taking the region of the searched image covered by the search template as a subgraph Sij, where subscripts i, j are the coordinates of the upper left corner of the subgraph Sij on the searched image S;
after each movement, respectively calculating the matching degree of the search template T and the subgraph Sij;
selecting a subgraph with the optimal matching degree, and taking the coordinate and the surrounding set range of the subgraph as a new search range;
in a new search range, moving a single pixel element each time, and calculating the matching degree of a search template T and a subgraph Sij;
and in the new search range, selecting the sub-graph with the excellent matching degree, and further calculating the matching degree value and the defect number.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010306399.1A CN111474184B (en) | 2020-04-17 | 2020-04-17 | AOI character defect detection method and device based on industrial machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111474184A CN111474184A (en) | 2020-07-31 |
CN111474184B true CN111474184B (en) | 2022-08-16 |
Family
ID=71754001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010306399.1A Active CN111474184B (en) | 2020-04-17 | 2020-04-17 | AOI character defect detection method and device based on industrial machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111474184B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114078109A (en) * | 2020-08-13 | 2022-02-22 | 鸿富锦精密电子(天津)有限公司 | Image processing method, electronic device, and storage medium |
CN112270329A (en) * | 2020-10-30 | 2021-01-26 | 北京华维国创电子科技有限公司 | Accurate MARK point acquisition and identification algorithm based on multi-level algorithm fusion |
CN112651972A (en) * | 2020-11-11 | 2021-04-13 | 北京平恒智能科技有限公司 | Positioning method using integral constraint of double positioning |
CN112782179A (en) * | 2020-12-22 | 2021-05-11 | 北京市新技术应用研究所 | Method and system for detecting defects of light-reflecting surface of product |
CN112557412A (en) * | 2020-12-25 | 2021-03-26 | 无锡歌迪亚自动化科技有限公司 | Automatic code-spraying printing defect detection system and detection method |
CN112381827B (en) * | 2021-01-15 | 2021-04-27 | 中科慧远视觉技术(北京)有限公司 | Rapid high-precision defect detection method based on visual image |
CN112837302B (en) * | 2021-02-09 | 2024-02-13 | 广东拓斯达科技股份有限公司 | Method and device for monitoring state of die, industrial personal computer, storage medium and system |
CN113111868B (en) * | 2021-03-16 | 2023-08-18 | 广州大学 | Character defect detection method, system, device and storage medium |
CN113160154B (en) * | 2021-04-08 | 2024-07-02 | 杭州电子科技大学 | Method and system for detecting paint spraying defects of five-star feet based on machine vision |
CN113295617B (en) * | 2021-05-18 | 2022-11-25 | 广州慧炬智能科技有限公司 | Multi-target offset detection method without reference point |
CN113327204B (en) * | 2021-06-01 | 2024-03-15 | 中科晶源微电子技术(北京)有限公司 | Image calibration method, device, equipment and storage medium |
CN113487538B (en) * | 2021-06-08 | 2024-03-22 | 维库(厦门)信息技术有限公司 | Multi-target segmentation defect detection method and device and computer storage medium thereof |
CN113609897A (en) * | 2021-06-23 | 2021-11-05 | 阿里巴巴新加坡控股有限公司 | Defect detection method and defect detection system |
CN113689397A (en) * | 2021-08-23 | 2021-11-23 | 湖南视比特机器人有限公司 | Workpiece circular hole feature detection method and workpiece circular hole feature detection device |
CN113538427B (en) * | 2021-09-16 | 2022-01-07 | 深圳市信润富联数字科技有限公司 | Product defect identification method, device, equipment and readable storage medium |
CN114354491A (en) * | 2021-12-30 | 2022-04-15 | 苏州精创光学仪器有限公司 | DCB ceramic substrate defect detection method based on machine vision |
CN114549504A (en) * | 2022-03-01 | 2022-05-27 | 安徽工业技术创新研究院六安院 | Appearance quality detection method based on machine vision |
TWI824473B (en) * | 2022-04-08 | 2023-12-01 | 鴻海精密工業股份有限公司 | Method and device for deleting data by visual detection, electronic device, and computer-readable storage medium |
CN116129435B (en) * | 2023-04-14 | 2023-08-08 | 歌尔股份有限公司 | Character defect detection method, device, equipment and storage medium |
CN117011167B (en) * | 2023-07-03 | 2024-08-20 | 广东盈科电子有限公司 | Image positioning correction method, system and device for nixie tube and storage medium |
CN118366167A (en) * | 2024-04-16 | 2024-07-19 | 广东奥普特科技股份有限公司 | Character defect detection method and related equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101464951A (en) * | 2007-12-21 | 2009-06-24 | 北大方正集团有限公司 | Image recognition method and system |
CN108982508A (en) * | 2018-05-23 | 2018-12-11 | 江苏农林职业技术学院 | A kind of plastic-sealed body IC chip defect inspection method based on feature templates matching and deep learning |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101464951A (en) * | 2007-12-21 | 2009-06-24 | 北大方正集团有限公司 | Image recognition method and system |
CN108982508A (en) * | 2018-05-23 | 2018-12-11 | 江苏农林职业技术学院 | A kind of plastic-sealed body IC chip defect inspection method based on feature templates matching and deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111474184B (en) | AOI character defect detection method and device based on industrial machine vision | |
CN111243032B (en) | Full-automatic detection method for checkerboard corner points | |
CN107543828B (en) | Workpiece surface defect detection method and system | |
CN109785316B (en) | Method for detecting apparent defects of chip | |
CN108776140B (en) | Machine vision-based printed matter flaw detection method and system | |
CN106803244B (en) | Defect identification method and system | |
WO2018068415A1 (en) | Detection method and system for wrong part | |
CN106446894B (en) | A method of based on outline identification ball-type target object location | |
US6778703B1 (en) | Form recognition using reference areas | |
CN109426814B (en) | Method, system and equipment for positioning and identifying specific plate of invoice picture | |
US8019164B2 (en) | Apparatus, method and program product for matching with a template | |
CN105718931B (en) | System and method for determining clutter in acquired images | |
CN111222507B (en) | Automatic identification method for digital meter reading and computer readable storage medium | |
CN106296587B (en) | Splicing method of tire mold images | |
CN110765992A (en) | Seal identification method, medium, equipment and device | |
CN113903024A (en) | Handwritten bill numerical value information identification method, system, medium and device | |
CN111027538A (en) | Container detection method based on instance segmentation model | |
CN106203431A (en) | A kind of image-recognizing method and device | |
CN110288040B (en) | Image similarity judging method and device based on topology verification | |
CN113989604A (en) | Tire DOT information identification method based on end-to-end deep learning | |
CN116342525A (en) | SOP chip pin defect detection method and system based on Lenet-5 model | |
CN112419225B (en) | SOP type chip detection method and system based on pin segmentation | |
JP2002140713A (en) | Image processing method and image processor | |
CN111325106A (en) | Method and device for generating training data | |
KR101766787B1 (en) | Image correction method using deep-learning analysis bassed on gpu-unit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||