CN112233116B - Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description - Google Patents


Info

Publication number
CN112233116B
CN112233116B (application number CN202011438083.4A)
Authority
CN
China
Prior art keywords
image
concave
gray level
convex
occurrence matrix
Prior art date
Legal status
Active
Application number
CN202011438083.4A
Other languages
Chinese (zh)
Other versions
CN112233116A (en)
Inventor
邱增帅
王罡
潘正颐
侯大为
Current Assignee
Changzhou Weiyizhi Technology Co Ltd
Original Assignee
Changzhou Weiyizhi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Changzhou Weiyizhi Technology Co Ltd filed Critical Changzhou Weiyizhi Technology Co Ltd
Priority to CN202011438083.4A priority Critical patent/CN112233116B/en
Publication of CN112233116A publication Critical patent/CN112233116A/en
Application granted granted Critical
Publication of CN112233116B publication Critical patent/CN112233116B/en


Classifications

    • G06T7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T7/13: Image analysis; segmentation; edge detection
    • G06T7/136: Image analysis; segmentation; edge detection involving thresholding
    • G06F18/23213: Pattern recognition; clustering techniques; non-hierarchical techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G01N21/8851: Investigating the presence of flaws or contamination; scan or image signal processing specially adapted therefor, e.g. for detecting different kinds of defects
    • G01N2021/8854: Grading and classifying of flaws
    • G01N2021/8874: Taking dimensions of defect into account
    • G01N2021/8883: Scan or image signal processing involving the calculation of gauges, generating models

Abstract

The invention discloses a concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description, comprising the following steps: first, thresholding the original image; second, extracting the target image; third, fitting the target image; fourth, correcting the target image; and fifth, identifying concave-convex marks. The method extracts the contour of the workpiece with an edge extraction method; optimizes the contour through feature extraction, clustering and straight-line fitting, which resolves the problem of workpiece offset; and maps the workpiece into a standard rectangular image. Concave-convex marks are then detected by feature judgment and multi-template matching. The method effectively improves the detection rate of concave-convex marks on the workpiece surface and has great practical value.

Description

Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
Technical Field
The invention relates to the technical field of machine vision detection of surface defects of industrial production lines, in particular to a concave-convex mark vision detection method based on neighborhood decision and gray level co-occurrence matrix description.
Background
Manual visual inspection is the most common defect detection method, but it is time-consuming, and its results vary with the inspector's condition, so it cannot meet the efficiency and accuracy requirements of industrial production.
Scratches and concave-convex marks often appear on object surfaces. They vary in length, direction and depth, are often confounded with the natural textures or patterns of the product surface, and their features are therefore difficult to extract accurately.
Edge detection algorithms usually adopt the Laplacian, Canny, Sobel and Prewitt operators to detect scratches on a product surface. These operators detect specific scratch images well, but when the inspected surface has complex textures or the scratch contrast is low, edge features are hard to extract, easily causing false or missed detections.
The Kokaram algorithm is one of the most commonly used scratch detection methods. It first constructs the cosine distribution of the scratch brightness decay and screens candidates using median filtering and the Hough transform, then acquires the scratch skeleton using Gibbs sampling to decide whether a candidate is a true or false scratch; however, this method is susceptible to noise interference and is time-consuming.
Template matching is another common defect detection method: a template is created from a standard image and scratches are then detected by shape-based template matching. It is generally used for defect detection against complex backgrounds, but it is easily affected by gray-scale variation, and the algorithm fails when the matching target is rotated.
Chinese invention patent document CN107462587A (application number CN201710775649.4, filed 31 August 2017, applicant: South China University) discloses a precise visual inspection system and method for concave-convex mark defects of a flexible IC substrate, in which complete dense point cloud data is obtained, candidate point areas are segmented and extracted, and the presence of concave-convex mark defects is then analyzed. However, the apparatus of that patent is quite complex and costly.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to solve the problems in the background art, a concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description is provided. The method can overcome the large identification errors caused by dim lighting, target object offset, shooting angle and the like, effectively improves the detection rate of concave-convex marks on the workpiece surface, and has great practical value.
The technical scheme adopted by the invention for solving the technical problems is as follows: a concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description comprises the following steps:
firstly, thresholding an original image;
secondly, extracting a target image;
thirdly, fitting a target image;
step four, correcting the target image;
and a fifth step of identifying concave-convex marks.
More specifically, in the above-described aspect, in the first step, the original image is subjected to local adaptive thresholding to obtain a binarized image.
More specifically, in the above technical solution, in the second step, the target image can be segmented by performing edge extraction on the binarized image through an edge extraction algorithm.
More specifically, in the above technical solution, in the third step, the straight-line pixel clusters v_1, v_2, …, v_n of the target image are identified; the expression of a straight-line pixel cluster is

y = bx + a

where b is the slope and a is the intercept, and the clusters v_n are clustered according to a and b.
More specifically, in the above technical solution, the clustering results fall into four classes, respectively L_l, L_r, R_l, R_r; straight lines are then fitted to L_l, L_r, R_l, R_r separately, yielding the fitted lines l_1, l_2, l_3, l_4. The intersection points of the four straight lines are four, which are respectively: the upper-left corner (x_lu, y_lu), the lower-left corner (x_ld, y_ld), the upper-right corner (x_ru, y_ru), and the lower-right corner (x_rd, y_rd); the fitted image is a trapezoid image.
More specifically, in the above technical solution, according to the four intersection points, the distances of the vertical sides of the quadrangle are first obtained:

d_lv = √((x_lu − x_ld)² + (y_lu − y_ld)²),  d_rv = √((x_ru − x_rd)² + (y_ru − y_rd)²)

and the average distance of the vertical sides is then calculated as H̄ = (d_lv + d_rv)/2.
More specifically, in the above technical solution, according to the four intersection points, the distances of the horizontal sides of the quadrangle are first obtained:

d_uh = √((x_lu − x_ru)² + (y_lu − y_ru)²),  d_dh = √((x_ld − x_rd)² + (y_ld − y_rd)²)

and the average distance of the horizontal sides is then calculated as W̄ = (d_uh + d_dh)/2.
More specifically, in the above technical solution, W̄ and H̄ are taken as the width and length of the rectangle, and the original trapezoid image is then mapped into the rectangular image by an image correction method.
More specifically, in the above technical solution, the image texture is extracted first and features are computed; the template is rectangular, with length L ∈ [1, H̄] and width W ∈ [1, W̄], so there are H̄ × W̄ templates in total; each template is traversed over the rectangular image, and concave-convex marks are judged through similarity comparison.
The invention has the beneficial effects that: the concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description, in particular for industrial components, extracts the contour of the workpiece by an edge extraction method; optimizes the contour through feature extraction, clustering and straight-line fitting, which solves the problem of workpiece offset; maps the workpiece into a standard rectangular image; and detects concave-convex marks by feature judgment and multi-template matching. It effectively improves the detection rate of concave-convex marks on the workpiece surface and has great practical value. The invention also avoids false defect alarms caused by rotation, translation, scaling and the like, and distinguishes well between concave-convex mark defects and other defects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an original image.
Fig. 2 is a binarized image.
Fig. 3 is an image after edge extraction.
Fig. 4 is the rectangular image to which the image is mapped.
Fig. 5 is a flowchart of a method of visually inspecting a dent mark.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 5, the visual inspection method for concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description is used for detecting surface concave-convex marks on an automatic production line. It involves image recognition, segmentation and feature extraction techniques, and in particular a method for mapping an oblique image to a rectangular image and a method for detecting concave-convex marks. It specifically comprises the following steps:
the first step, thresholding the original image: the original image is subjected to local adaptive thresholding processing to be changed into a binary image, namely a black-and-white image.
The binarization method is a local adaptive threshold method. The thresholding image is actually a binary operation on a gray level image, and the fundamental principle is to judge whether an image pixel is 0 or 255 by using a set threshold value, so the setting of the threshold value is important in image binarization. The binarization of the image is divided into global binarization and local adaptive binarization, and the difference is whether the threshold value is unified in one image or not. In order to better process the image, local binarization is selected.
An ideal adaptive threshold algorithm also works well for images with uneven illumination. To compensate for brightness, the brightness of each pixel needs to be normalized when deciding whether a pixel is black or white. The invention adopts an adaptive threshold based on the Wall algorithm, a moving-average method. The algorithm principle is as follows:
The basic idea of the algorithm is to traverse the image while computing a moving average of the pixels. If a pixel is significantly below this average, it is set to black; otherwise it is set to white.
Suppose g_s(n) is the sum of the last s pixels at point n:

g_s(n) = Σ_{i=0}^{s−1} f(n − i)    (1)

where i indexes the image points and f(n − i) is the pixel in the image at point n − i.

The binary value T(n) is determined by comparing the pixel value f(n) at point n with the share (100 − t)/100 of the average pixel value g_s(n)/s: if f(n) is greater than that share, the output T(n) is 0; if it is less, T(n) is 1:

T(n) = 0, if f(n) > (g_s(n)/s)·((100 − t)/100); otherwise T(n) = 1    (2)

where f(n) is the pixel located at point n and t is a set percentage.
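The moving-average scheme of equations (1)-(2) can be sketched in a few lines of NumPy. This is an illustrative 1-D scan over a single pixel sequence, not the patent's implementation; the window size s and percentage t below are assumed values.

```python
import numpy as np

def moving_average_threshold(row, s=8, t=15):
    """Binarize a 1-D pixel sequence with a trailing moving average.

    Follows equations (1)-(2): a pixel is set to 1 (black) when it falls
    below (100 - t)% of the mean of the previous s pixels, else 0 (white).
    """
    row = np.asarray(row, dtype=float)
    out = np.zeros(row.shape, dtype=np.uint8)
    g = 0.0                      # running sum g_s(n) of the last s pixels
    for n, f_n in enumerate(row):
        g += f_n
        if n >= s:
            g -= row[n - s]      # drop the pixel that left the window
        win = min(n + 1, s)      # effective window length near the start
        if f_n < (g / win) * (100 - t) / 100:
            out[n] = 1           # below the discounted average -> black
    return out

# A bright run followed by a dark dip: the dip should be marked black.
print(moving_average_threshold([200, 200, 200, 200, 60, 200], s=4, t=15))
```

The same decision rule applied row by row over a 2-D image gives the local adaptive binarization of the first step.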
The second step, target image extraction: edge extraction is performed on the binary image by an edge extraction algorithm to segment the target image. The edge extraction algorithm is the Canny operator.
The edge of an image is the part of a local region with a significant brightness change; the gray profile of such a region can generally be regarded as a step, i.e. the gray value changes sharply within a small buffer zone from one value to another, quite different, value. The edge portion concentrates most of the information of the image, so edge extraction is required; the invention adopts the Canny edge detection algorithm. The algorithm principle is as follows:
First, the input image is smoothed with a Gaussian filter. Let f(x, y) denote the input image and G(x, y) a Gaussian function:

G(x, y) = e^(−(x² + y²)/(2σ²))    (3)

Convolving the two forms the smoothed image:

f_s(x, y) = G(x, y) * f(x, y)    (4)

Second, the gradient magnitude image and angle image are computed:

M(x, y) = √(g_x² + g_y²)    (5)

α(x, y) = arctan(g_y / g_x)    (6)

where g_x = ∂f_s/∂x and g_y = ∂f_s/∂y.

Third, non-maximum suppression is applied to the gradient magnitude image:
a. find the discrete direction d_k closest to α(x, y);
b. if M(x, y) is less than at least one of its two neighbors along d_k, set g_N(x, y) = 0 (suppression); otherwise set g_N(x, y) = M(x, y). This yields the non-maximum-suppressed image g_N(x, y).
Fourth, edges are detected and connected using double-threshold processing and connectivity analysis. g_N(x, y) is thresholded to reduce false edge points; the Canny algorithm uses two thresholds, a low threshold T_L and a high threshold T_H (Canny suggests a high-to-low threshold ratio of 2:1 or 3:1):

g_NH(x, y) = g_N(x, y) ≥ T_H    (7)

g_NL(x, y) = g_N(x, y) ≥ T_L    (8)

g_NL(x, y) = g_NL(x, y) − g_NH(x, y)    (9)

The nonzero pixels of g_NH(x, y) and g_NL(x, y) can be considered "strong" and "weak" edge pixels, respectively. The pixels in g_NH(x, y) are edge points of the image; a pixel in g_NL(x, y) is a candidate point and is marked as an edge point if it is adjacent to an edge point. The specific steps are as follows:
(1) locate the next unvisited edge pixel p in g_NH(x, y);
(2) mark as valid edge pixels all pixels in g_NL(x, y) that are 8-adjacent to p;
(3) if all nonzero pixels in g_NH(x, y) have been visited, go to step (4); otherwise return to step (1);
(4) set to zero all pixels in g_NL(x, y) that were not marked as valid edge pixels.
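The double-threshold processing of equations (7)-(9) and the linking steps (1)-(4) can be sketched with NumPy. This is a simplified sketch; the threshold values and the test array are illustrative, and in practice a library call such as OpenCV's cv2.Canny bundles all four stages.

```python
import numpy as np

def hysteresis(gN, t_low, t_high):
    """Double-threshold edge linking per equations (7)-(9).

    Strong pixels (>= t_high) are edges; weak pixels (>= t_low but
    < t_high) survive only if 8-adjacent to a strong pixel.
    """
    strong = gN >= t_high                      # g_NH, eq (7)
    weak = (gN >= t_low) & ~strong             # g_NL after eq (9)
    valid = np.zeros_like(weak)
    rows, cols = gN.shape
    for r, c in zip(*np.nonzero(strong)):      # steps (1)-(3)
        r0, r1 = max(r - 1, 0), min(r + 2, rows)
        c0, c1 = max(c - 1, 0), min(c + 2, cols)
        valid[r0:r1, c0:c1] |= weak[r0:r1, c0:c1]
    return strong | valid                      # step (4): drop unmarked weak pixels

gN = np.array([[0, 40, 0, 0],
               [0, 90, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 40]])
# With t_low=30, t_high=80: the 90 is strong, the adjacent 40 is kept,
# and the isolated 40 at (3, 3) is suppressed.
print(hysteresis(gN, 30, 80).astype(int))
```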
Third step, fitting of the target image:
the method for identifying the straight line is Hough transformation. In the image after edge extraction, the invention adopts Hough transformation to detect straight lines.
The basic principle of Hough transform is to change a given curve in the original image space into a point in the parameter space by means of curve representation using the duality of points and lines. This translates the detection problem for a given curve in the original image into a peak problem in the search parameter space. I.e. converting the detected global characteristic into a detected local characteristic.
Let it be known that a line is drawn on a black-and-white image, and the position of the line is required. The equation for a straight line can be expressed in y = kx + b, where k and b are parameters, slope and intercept, respectively. Past a certain point (x)0,y0) All the parameters of the straight line satisfy the equation y0=kx0+ b. I.e. point (x)0,y0) A cluster of straight lines is defined. Equation y0=kx0+ b is a straight line on the plane of the parameter k-b, (or equation b = -x)0*k+y0The corresponding straight line). Thus, a foreground pixel on the x-y plane of the image corresponds to a line on the parameter plane.
Through the Hough transform, all straight-line pixel clusters in the image can be detected. The expression of a straight-line pixel cluster is y = bx + a, where b is the slope and a is the intercept, and the clusters v_n are grouped according to a and b. The clustering algorithm adopted by the invention is the K-Means algorithm, implemented in the following steps:
(1) randomly select k points as cluster centers;
(2) compute the distance from each point to each of the k cluster centers and assign the point to the nearest center, forming k clusters;
(3) recalculate the centroid (mean) of each cluster;
(4) repeat steps (2)-(3) until the centroid positions no longer change or a set number of iterations is reached.
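The K-Means steps above, applied to the (a, b) line parameters, can be sketched as follows. This is a minimal NumPy implementation with fixed initial centers for reproducibility; the sample slopes and intercepts are illustrative, not the patent's data.

```python
import numpy as np

def kmeans(points, centers, iters=20):
    """Plain K-Means: assign each point to its nearest center, recompute centroids."""
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):                       # iterate until convergence
        d = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)                # assignment to nearest center
        new = np.array([points[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(len(centers))])
        if np.allclose(new, centers):            # centroids stopped moving
            break
        centers = new                            # recompute the means
    return labels, centers

# Each point is (intercept a, slope b) of a detected line; a group of steep
# lines and a group of flat lines should fall into two different clusters.
pts = [(0.0, 10.0), (0.2, 9.8), (100.0, 0.1), (99.5, -0.1)]
labels, centers = kmeans(pts, centers=[(0.0, 0.0), (50.0, 5.0)])
print(labels)
```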
According to the target image, the straight-line pixel clusters v_1, v_2, …, v_n of the target image are identified. The expression of a straight-line pixel cluster (the functional relation between x and y) is:

y = bx + a    (10)

In this formula there are two undetermined parameters: a represents the intercept and b represents the slope. A pixel cluster contains m groups of data (x_1, y_1), (x_2, y_2), …, (x_m, y_m). The clusters v_n are clustered according to a and b. The clustering result is four classes; that is, all pixel clusters in the image can be grouped into four classes L_l, L_r, R_l, R_r. Straight lines are then fitted to L_l, L_r, R_l, R_r separately, yielding the fitted lines l_1, l_2, l_3, l_4.
The least squares method finds the best functional match to the data by minimizing the sum of squared errors. Unknown parameters are easily obtained by least squares, and the sum of squared errors between the fitted and actual data is minimized.
The invention uses the least squares method to fit the observation data to a straight line. When estimating parameters by least squares, the sum of squared deviations of the observed values y_i is required to be minimal. For straight-line fitting, the quantity minimized is:

Q = Σ_{i=1}^{m} (y_i − a − b·x_i)²    (11)

where m is the number of given discrete points to be fitted. Taking partial derivatives of the above with respect to a and b and setting them to zero gives:

∂Q/∂a = −2 Σ_{i=1}^{m} (y_i − a − b·x_i) = 0    (12)

∂Q/∂b = −2 Σ_{i=1}^{m} (y_i − a − b·x_i)·x_i = 0    (13)

which rearranges to the system of equations:

m·a + b·Σx_i = Σy_i
a·Σx_i + b·Σx_i² = Σx_i·y_i    (14)

Solving this system yields the best estimates â and b̂ of the line parameters a and b:

b̂ = (m·Σx_i·y_i − Σx_i·Σy_i) / (m·Σx_i² − (Σx_i)²)    (15)

â = (1/m)·Σy_i − b̂·(1/m)·Σx_i    (16)
A least-squares straight line is fitted to each of L_l, L_r, R_l, R_r, giving the lines l_1, l_2, l_3, l_4:

l_j: y = â_j + b̂_j·x,  j = 1, 2, 3, 4    (17)

The four straight lines have four intersection points, which are respectively: the upper-left corner (x_lu, y_lu), the lower-left corner (x_ld, y_ld), the upper-right corner (x_ru, y_ru), and the lower-right corner (x_rd, y_rd); the fitted image is a trapezoid image.
Fourth step, target image correction:
First, the distances of the four sides of the quadrilateral image are calculated from the four intersection points:

d_lv = √((x_lu − x_ld)² + (y_lu − y_ld)²)
d_rv = √((x_ru − x_rd)² + (y_ru − y_rd)²)
d_uh = √((x_lu − x_ru)² + (y_lu − y_ru)²)
d_dh = √((x_ld − x_rd)² + (y_ld − y_rd)²)    (18)

where d_lv and d_rv are the distances of the vertical sides of the quadrilateral image, and d_uh and d_dh are the distances of the horizontal sides.

Then the average distance of the vertical sides is calculated:

H̄ = (d_lv + d_rv)/2    (19)

and the average of the horizontal sides:

W̄ = (d_uh + d_dh)/2    (20)

W̄ and H̄ are taken as the width and length of the rectangle. The coordinates of the upper-left corner of the rectangle are (0, 0), and the other three corners are (W̄, 0), (0, −H̄) and (W̄, −H̄). The original trapezoid image is then mapped into the rectangular image by an image correction method. The present invention uses an affine transformation.
An affine transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates that preserves the "straightness" and "parallelism" of a two-dimensional figure. It can be composed of a series of atomic transformations, including translation, scaling, flipping, rotation and shearing. Such a transformation can be represented by a 3 × 3 matrix M that maps the original coordinates (x, y) to the new coordinates (x′, y′), i.e.

(x′, y′, 1)ᵀ = M·(x, y, 1)ᵀ    (21)
Through the affine transformation, the image region inside the fitted quadrilateral is converted into a right-angled rectangular image and the image is corrected; at the same time the background portion is cropped away and the target region retained, which saves considerable time in further image processing and reduces some false detections.
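The matrix of equation (21) can be recovered from three corner correspondences by solving a linear system; an affine map is fixed by three point pairs, the fourth corner then following from parallelism. The sketch below maps three trapezoid corners onto the rectangle corners (0, 0), (W̄, 0), (0, −H̄) using pure NumPy rather than an OpenCV call, and the corner values are illustrative.

```python
import numpy as np

def affine_from_pairs(src, dst):
    """Solve for the 2x3 affine matrix A with dst = A @ [x, y, 1]^T."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    S = np.hstack([src, np.ones((3, 1))])   # homogeneous source points
    # Solve S @ X = dst for X (3x2); the affine matrix is its transpose.
    A = np.linalg.solve(S, dst).T
    return A

# Illustrative trapezoid corners -> rectangle corners (W = 100, H = 50).
src = [(3.0, 2.0), (104.0, 5.0), (1.0, -49.0)]
dst = [(0.0, 0.0), (100.0, 0.0), (0.0, -50.0)]
A = affine_from_pairs(src, dst)
p = np.array([3.0, 2.0, 1.0])
print(A @ p)                # the first source corner lands on (0, 0)
```

In practice the resulting matrix would be handed to an image-warping routine (e.g. OpenCV's cv2.warpAffine) to resample the trapezoid region into the rectangle.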
Fifth step, concave-convex mark recognition:
The traversal method is a sliding-window method, and the feature extraction method is the gray level co-occurrence matrix.
The invention extracts the image texture using the gray level co-occurrence matrix. Assume the image resolution is M × N; then an element of the gray level co-occurrence matrix is

P(i, j) = #{ [(x_1, y_1), (x_2, y_2)] | f(x_1, y_1) = i, f(x_2, y_2) = j, x_2 = x_1 + d·cos θ, y_2 = y_1 + d·sin θ }    (22)

In the formula: (x_1, y_1) is the reference point; (x_2, y_2) is the offset point; f(x_1, y_1) = i means the gray value of the reference point is i; f(x_2, y_2) = j means the gray value of the offset point is j; d is the offset of the offset point; and θ is the offset angle of the offset point.
The contrast, entropy, energy and inverse difference moment of the gray level co-occurrence matrix are selected as characteristic values.
Contrast reflects the depth and clarity of the image's texture grooves. The larger the contrast, the deeper the texture grooves and the clearer the visual effect; the smaller the contrast, the shallower the grooves and the more blurred the visual effect. The contrast expression is

Con = Σ_i Σ_j (i − j)²·P(i, j)    (23)

Entropy reflects the amount of information an image contains. The larger the entropy, the more information the image contains; the smaller the entropy, the less. The entropy expression is

Ent = −Σ_i Σ_j P(i, j)·log P(i, j)    (24)

Energy reflects the uniformity of the image's gray-level distribution. The more concentrated the distribution, the larger the energy; the more dispersed, the smaller. The expression of energy is

Asm = Σ_i Σ_j P(i, j)²    (25)

The inverse difference moment reflects the homogeneity of the image texture and measures its local variation. A large value means the texture lacks variation between different regions and is locally uniform:

IDM = Σ_i Σ_j P(i, j)/(1 + (i − j)²)    (26)
The characteristic values of contrast, entropy, inverse difference moment and energy are calculated at the offset angles θ = 0°, 45°, 90° and 135°, and the average of each characteristic value over the offset angles is computed:

Con_avg = (1/4)·Σ_{i=1}^{4} Con_i,  Ent_avg = (1/4)·Σ_{i=1}^{4} Ent_i,  Asm_avg = (1/4)·Σ_{i=1}^{4} Asm_i,  IDM_avg = (1/4)·Σ_{i=1}^{4} IDM_i    (27)

where Con_i (i = 1, 2, 3, 4) are the contrasts corresponding to θ = 0°, 45°, 90° and 135°, Ent_i the entropies, Asm_i the energies, and IDM_i the inverse difference moments.
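Equations (22)-(26) can be sketched with a tiny NumPy gray level co-occurrence matrix. The 4-level test image, the single offset, and the normalization to probabilities are illustrative choices (libraries such as scikit-image's graycomatrix offer the same four angles directly).

```python
import numpy as np

def glcm_features(img, dx, dy, levels):
    """Co-occurrence counts for one offset, plus the four features (23)-(26)."""
    img = np.asarray(img)
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < rows and 0 <= y2 < cols:
                P[img[x, y], img[x2, y2]] += 1   # eq (22): count gray-pair occurrences
    P /= P.sum()                                  # normalize counts to probabilities
    i, j = np.indices(P.shape)
    con = ((i - j) ** 2 * P).sum()                # contrast, eq (23)
    ent = -(P[P > 0] * np.log(P[P > 0])).sum()    # entropy, eq (24)
    asm = (P ** 2).sum()                          # energy, eq (25)
    idm = (P / (1 + (i - j) ** 2)).sum()          # inverse difference moment, eq (26)
    return con, ent, asm, idm

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
con, ent, asm, idm = glcm_features(img, dx=0, dy=1, levels=4)
print(round(con, 3), round(asm, 3))
```

Averaging the returned tuple over the offsets (0, 1), (−1, 1), (−1, 0), (−1, −1) reproduces the four-angle averages of equation (27).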
The characteristic values corresponding to each offset angle, together with the average of each characteristic value, form a feature vector that serves as the basis for the subsequent feature judgment. The feature vector has 20 characteristic values (4 features at each of 4 angles, plus the 4 averages).
Because the concave-convex marks on the images differ in size, the invention adopts multi-template scanning to ensure detection accuracy. Image texture and features are extracted first. The template is rectangular, with length L ∈ [1, l_r] and width W ∈ [1, w_r], where l_r and w_r denote the length and width of the corrected rectangular image, so there are l_r × w_r templates in total. Each template traverses the rectangular image, and the concave-convex marks are judged through similarity comparison.
The method maps an irregular polygonal image into a rectangle, that is, an inclined image into a rectangular image. Feature extraction is performed on each template to obtain a feature vector; the Euclidean distances between this feature vector and those of its eight neighborhoods are calculated and compared with a standard threshold, and a voting judgment over the eight neighborhoods is made through multi-template sliding traversal: if the Euclidean distance between the feature vector of the central template and that of a neighborhood is smaller than the given threshold, a positive vote is cast, otherwise a negative vote. Finally, the concave-convex mark attribute of the central template is determined by counting the positive and negative votes: if the number of positive votes is greater than or equal to the number of negative votes, the region is not a concave-convex mark; if the number of negative votes is greater than the number of positive votes, it is a concave-convex mark. The invention can be used for visual detection of concave-convex marks on surfaces in industrial production lines.
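The single-template voting decision can be sketched as follows, assuming precomputed feature vectors; the function name `is_bump_mark` and its parameters are illustrative:

```python
import numpy as np

def is_bump_mark(center_vec, neighbor_vecs, threshold):
    """Vote over the eight neighbors of a central template: a neighbor
    casts a positive vote when the Euclidean distance between its feature
    vector and the center's is below the threshold, otherwise a negative
    vote. The center is flagged as a concave-convex mark only when
    negative votes outnumber positive votes."""
    center = np.asarray(center_vec, dtype=float)
    pos = sum(1 for v in neighbor_vecs
              if np.linalg.norm(center - np.asarray(v, dtype=float)) < threshold)
    neg = len(neighbor_vecs) - pos
    return neg > pos
```

A template whose texture agrees with its surroundings collects positive votes and is left unmarked; one that differs from most of its neighborhood is flagged.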
The invention uses machine vision to detect surface defects, achieves high detection precision and recognition efficiency, can overcome adverse conditions such as dim illumination and image deviation, computes quickly, requires no training data, and meets the detection requirements of industrial production.
To verify the effectiveness of the method, surface defect detection was tested with an industrial production line vision camera. Image data is acquired automatically by a monocular camera, and all data is then transmitted to a computer. The image collected by the camera, the surface of the test object, is shown in fig. 1. First, adaptive thresholding is applied to the acquired image to obtain fig. 2. Canny edge extraction is then performed on the binarized image to extract the contour of the detected object; the original image is an irregular polygon, as shown in fig. 3. Straight lines in the image are extracted by Hough transformation and classified into four classes by a clustering algorithm, and a straight line is fitted to each class by the least squares method (equation (28)). The intersection points of the four fitted lines are: (609, -5757), (4091, -5668), (522, -289), (4066, -299). The distances between the vertical edges and between the horizontal edges are then calculated, along with their averages; the average vertical distance is taken as the vertical side of the rectangle and the average horizontal distance as the horizontal side. Finally, the image area within the quadrilateral is converted into a rectangular image by affine transformation, as shown in fig. 4.
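The fitting and intersection step can be sketched as below; the function names `fit_line` and `intersection` are illustrative, and a practical implementation would fit near-vertical edges as x = f(y) instead of y = f(x) to avoid near-infinite slopes:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = b*x + a to contour points (n x 2)."""
    pts = np.asarray(points, dtype=float)
    b, a = np.polyfit(pts[:, 0], pts[:, 1], 1)   # degree-1 polynomial
    return b, a

def intersection(line1, line2):
    """Intersection of y = b1*x + a1 and y = b2*x + a2 (b1 != b2)."""
    (b1, a1), (b2, a2) = line1, line2
    x = (a2 - a1) / (b1 - b2)
    return x, b1 * x + a1
```

The four corner points of the quadrilateral are obtained by intersecting each pair of adjacent fitted lines in this way.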
The contrast, entropy, energy and inverse differential moment of the gray level co-occurrence matrix at the offset angles \theta = 0°, 45°, 90°, 135°, together with the mean of each characteristic value, are selected as the characteristic values.
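One common convention maps the four offset angles to unit pixel displacements, with image rows indexed downward; the mapping in `OFFSETS` and the function name `glcm` below are assumptions for illustration, not the patent's exact definition:

```python
import numpy as np

# Assumed angle-to-displacement convention at distance 1 (rows run
# downward); libraries differ in how they orient these offsets.
OFFSETS = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}

def glcm(img, dx, dy, levels):
    """Normalized gray level co-occurrence matrix of a quantized image
    for the pixel offset (dx, dy)."""
    img = np.asarray(img)
    h, w = img.shape
    p = np.zeros((levels, levels), dtype=float)
    # Count co-occurring gray level pairs over all valid pixel positions.
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()
```

Evaluating the four texture descriptors on the matrices for all four offsets, plus their means, yields the 20-element feature vector described above.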
Because the concave-convex marks on the images differ in size, the invention adopts multi-template scanning to ensure detection accuracy. The template is rectangular, with length L ∈ [1, l_r] and width W ∈ [1, w_r], where l_r and w_r denote the length and width of the corrected rectangular image; there are l_r × w_r templates in total, and each traverses the rectangular image.
Feature extraction is performed on each template to obtain a feature vector, and the Euclidean distances between this feature vector and the eight-neighborhood feature vectors are calculated and compared with the standard threshold; regions where the number of negative votes exceeds the number of positive votes are concave-convex marks, as shown in fig. 4.
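Applied over the whole image, the decision becomes a map over a grid of per-template feature vectors; `mark_map` is an illustrative name, and the voting rule follows the description above (negative votes outnumbering positive votes flags a mark):

```python
import numpy as np

def mark_map(features, threshold):
    """Given per-window feature vectors of shape (m, n, k), vote each
    interior window against its eight neighbors: a neighbor within
    `threshold` Euclidean distance casts a positive vote, otherwise a
    negative vote; the window is flagged when negatives outnumber
    positives."""
    f = np.asarray(features, dtype=float)
    m, n, _ = f.shape
    marks = np.zeros((m, n), dtype=bool)
    for y in range(1, m - 1):
        for x in range(1, n - 1):
            pos = neg = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue        # skip the center itself
                    d = np.linalg.norm(f[y, x] - f[y + dy, x + dx])
                    if d < threshold:
                        pos += 1
                    else:
                        neg += 1
            marks[y, x] = neg > pos
    return marks
```

Repeating this scan for each template size and combining the resulting maps gives the multi-template detection result.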
The above description is only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto. Any equivalent substitution or change made, within the technical scope disclosed by the present invention, by a person skilled in the art according to the technical solution and inventive concept of the present invention shall fall within the scope of the present invention.

Claims (7)

1. A concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description is characterized by comprising the following steps:
the first step, thresholding the original image: carrying out local self-adaptive thresholding on the original image to obtain a binary image;
the second step, target image extraction: performing edge extraction on the binary image through an edge extraction algorithm to segment a target image;
third step, fitting of the target image: fitting straight lines according to the contour points, clustering the straight lines, and fitting the contour points contained in the clustered straight lines again;
fourth step, target image correction: mapping the target image into a standard rectangular image;
using an affine transformation, represented by a 3 × 3 matrix T, which transforms the original coordinates (x, y) into the new coordinates (x', y'), i.e.

(x', y', 1)^T = T (x, y, 1)^T ;

the image area within the quadrilateral is converted into a right-angled rectangular image through the affine transformation, realizing the correction of the image while cropping the background and retaining the target area;
fifth step, concave-convex mark recognition: detecting concave-convex marks by feature judgment and multi-template matching; selecting the contrast, entropy, energy and inverse differential moment of the gray level co-occurrence matrix as characteristic values; mapping the irregular polygonal image into a rectangle, namely mapping the inclined image into the rectangular image; performing feature extraction on the template to obtain a feature vector, calculating the Euclidean distance between the feature vector and the eight-neighborhood feature vectors, comparing it with a standard threshold, and performing voting judgment on the eight neighborhoods through multi-template sliding traversal: if the Euclidean distance between the feature vector of the central template and that of an eight-neighborhood is smaller than the given threshold, casting a positive vote, otherwise casting a negative vote; and finally determining the concave-convex mark attribute of the central template by counting the positive and negative votes: if the number of positive votes is greater than or equal to the number of negative votes, the region is not a concave-convex mark; if the number of negative votes is greater than the number of positive votes, it is a concave-convex mark.
2. The visual detection method of concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description according to claim 1, characterized in that: in the third step, straight-line pixels v_1, v_2, …, v_n of the target image are identified from the target image, the expression of a straight line being y = b x + a, where b is the slope and a is the intercept; v_1, …, v_n are clustered according to a and b.
3. The visual detection method of concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description according to claim 2, characterized in that: the clustering results are four classes, respectively L_l, L_r, R_l, R_r; straight lines are then fitted to L_l, L_r, R_l and R_r respectively, and the four fitted straight lines have four intersection points, respectively: the upper left corner (x_lu, y_lu), the lower left corner (x_ld, y_ld), the upper right corner (x_ru, y_ru) and the lower right corner (x_rd, y_rd); the fitted image is a trapezoid image.
4. The visual detection method of concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description according to claim 3, characterized in that: firstly, the distances of the vertical sides of the quadrangle are calculated from the four intersection points,

d_l = \sqrt{(x_lu - x_ld)^2 + (y_lu - y_ld)^2}, \quad d_r = \sqrt{(x_ru - x_rd)^2 + (y_ru - y_rd)^2},

and then the average distance of the vertical sides is calculated,

\bar{d}_v = (d_l + d_r) / 2 .
5. The visual detection method of concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description according to claim 4, characterized in that: firstly, the distances of the horizontal sides of the quadrangle are calculated from the four intersection points,

d_u = \sqrt{(x_lu - x_ru)^2 + (y_lu - y_ru)^2}, \quad d_d = \sqrt{(x_ld - x_rd)^2 + (y_ld - y_rd)^2},

and then the average distance of the horizontal sides is calculated,

\bar{d}_h = (d_u + d_d) / 2 .
6. The visual inspection method of concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description according to claim 5, characterized in that: the average horizontal distance and the average vertical distance are taken as the length and width of the rectangle, and the original trapezoid image is then mapped into the rectangular image by the image correction method.
7. The visual detection method of concave-convex marks based on neighborhood decision and gray level co-occurrence matrix description according to claim 6, characterized in that: image texture and features are extracted first; the template is rectangular, with length L ∈ [1, l_r] and width W ∈ [1, w_r], where l_r and w_r denote the length and width of the rectangular image, so there are l_r × w_r templates in total; each template traverses the rectangular image, and the concave-convex marks are judged through similarity comparison.
CN202011438083.4A 2020-12-11 2020-12-11 Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description Active CN112233116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011438083.4A CN112233116B (en) 2020-12-11 2020-12-11 Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description


Publications (2)

Publication Number Publication Date
CN112233116A CN112233116A (en) 2021-01-15
CN112233116B true CN112233116B (en) 2021-08-03

Family

ID=74124603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011438083.4A Active CN112233116B (en) 2020-12-11 2020-12-11 Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description

Country Status (1)

Country Link
CN (1) CN112233116B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359190B (en) * 2021-12-23 2022-06-14 武汉金丰塑业有限公司 Plastic product molding control method based on image processing
CN114111652A (en) * 2021-12-24 2022-03-01 格林美股份有限公司 Battery module flatness detection device and method based on machine vision
CN115131322B (en) * 2022-07-04 2023-04-07 浙江省建设装饰集团有限公司 Method for detecting surface defects of aluminum plate on outer vertical surface of building
CN115147733B (en) * 2022-09-05 2022-11-25 山东东盛澜渔业有限公司 Artificial intelligence-based marine garbage recognition and recovery method
CN116990323B (en) * 2023-09-26 2023-12-05 睿同科技有限公司 High-precision printing plate visual detection system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101135652A (en) * 2007-10-15 2008-03-05 清华大学 Weld joint recognition method based on texture partition
CN106355185A (en) * 2016-08-30 2017-01-25 兰州交通大学 Method for rapidly extracting steel rail surface area under condition of vibration
EP3528261A1 (en) * 2018-02-14 2019-08-21 China Medical University Hospital Prediction model for grouping hepatocellular carcinoma, prediction system thereof, and method for determining hepatocellular carcinoma group
CN111062915A (en) * 2019-12-03 2020-04-24 浙江工业大学 Real-time steel pipe defect detection method based on improved YOLOv3 model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050481B (en) * 2014-06-17 2017-05-03 西安电子科技大学 Multi-template infrared image real-time pedestrian detection method combining contour feature and gray level
CN105938563A (en) * 2016-04-14 2016-09-14 北京工业大学 Weld surface defect identification method based on image texture


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Defect Segmentation of Ultrasonic Aluminum Bonding Joint Based on Region Growing and Level-Set;Zhou Xing 等;《2018 20th International Conference on Electronic Materials and Packaging》;20181220;正文第1-4页 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant