CN111275697B - Battery silk-screen quality detection method based on ORB feature matching and LK optical flow method - Google Patents
- Publication number
- CN111275697B (publication); CN202010085060.3A / CN202010085060A (application)
- Authority
- CN
- China
- Prior art keywords
- image
- silk
- screen
- detected
- template
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0008—Industrial image inspection checking presence/absence
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T5/80—Geometric correction
- G06T7/11—Region-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T7/269—Analysis of motion using gradient-based methods
- G06V10/757—Matching configurations of points or features
- G06T2207/10016—Video; Image sequence
- G06T2207/30108—Industrial image inspection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a battery silk-screen quality detection method based on ORB feature matching and the LK optical flow method. The method collects battery silk-screen image data, preprocesses it and extracts the battery silk-screen region; builds templates by rectangular partitioning, comprising an illustration part template and a character part template, and extracts and matches features of the template silk screen and the silk screen to be detected with the ORB algorithm to locate the silk-screen contents; detects defects with a morphology-based image difference method and, if a false alarm occurs, performs secondary detection with a distortion-correction method based on the L-K optical flow method; if no false alarm occurs, it outputs the result image and detection data and executes the sorting operation. The method offers good real-time performance and a high detection rate: by improving the traditional difference-image method and introducing the optical flow method into printing defect detection, adaptability to non-precise printing is greatly improved.
Description
Technical Field
The invention belongs to the technical field of machine vision automatic surface detection, and particularly relates to a battery silk-screen quality detection method based on ORB feature matching and an LK optical flow method.
Background
Detecting the silk screen/bar code of the battery is an important step in battery assembly and processing; different battery models differ in character type (Chinese, Korean, English, numerals, etc.), character format, illustration content and bar code format. At present, printing defects in the battery silk screen/bar code are caused by factors such as fixtures, equipment and personnel, and the defect types mainly include: bar code defects/distortion/skew/blur/ghosting/stains/color difference, silk-screen defects/skew/blur/ghosting/stains/color difference, mismatch between silk-screen and bar code information, and bar code size or bar code/silk-screen position not meeting specification requirements.
Traditional battery silk-screen appearance inspection relies mainly on manual measurement with the naked eye, a magnifier or a CCD camera. Because it is inevitably affected by factors such as the inspector's state of mind, environmental noise and degree of concentration, the accuracy and real-time performance of the results are difficult to guarantee. Manual inspection is simple to operate, but it is slow, prone to misjudgment and lacks objective standards.
In order to realize automatic detection of the screen printing quality of the battery, scholars at home and abroad carry out a great deal of research, and a plurality of classical methods such as a global template matching method, a pixel-by-pixel hierarchical detection method, a neural network algorithm, a wavelet transformation detection method, a Gabor transformation algorithm, a feature extraction method and the like are developed. However, the above method mainly has two problems:
1) the algorithm is too complex, the detection consumes long time, and the method is not suitable for being applied to the detection on a production line in a factory;
2) the characteristics of the application object are single, and the universality of the method and the detection capability of the method on the complex object need to be enhanced.
The battery silk-screen types detected by the invention are numerous, the applicable method differs for each pattern or character, and high processing precision is required; traditional classical methods cannot achieve good results here.
Disclosure of Invention
The invention aims to solve the above technical problems by providing a battery silk-screen quality detection method based on ORB (Oriented FAST and Rotated BRIEF) feature matching and the L-K (Lucas-Kanade) optical flow method, which can detect silk-screen appearance quality on a factory assembly line in real time with high accuracy, and improves to a certain extent the automation level of battery silk-screen quality detection in the domestic battery manufacturing industry.
The invention adopts the following technical scheme:
a battery silk-screen quality detection method based on ORB feature matching and an LK optical flow method comprises the following steps:
s1, collecting battery silk-screen image data, and preprocessing to extract a battery silk-screen area;
S2, modeling by rectangular partitioning, the model comprising an illustration part template and a character part template; extracting and matching features of the template silk screen and the silk screen to be detected based on the ORB algorithm to locate the silk-screen contents;
S3, detecting with a morphology-based image difference method; if a false alarm occurs, performing secondary detection with a distortion-correction detection method based on the L-K optical flow method; if no false alarm occurs, outputting the result image and detection data and executing the sorting operation.
Specifically, step S1 specifically includes:
s101, creating a cross rectangular window according to the size of an original image, and cutting the original image to obtain boundary information of a silk-screen area of a battery cell;
S102, calculating a binarization threshold with the Otsu algorithm to preliminarily separate the silk-screen area from the other backgrounds, offsetting the threshold appropriately according to the image gray features to separate the cell silk-screen area from the background accurately, then removing edge burrs with a morphological opening operation to eliminate fine edge segmentation errors;
s103, performing minimum circumscribed rectangle fitting on the region extracted in the step S102 by using a related algorithm, and cutting the original image according to a fitting result to obtain an accurate battery silk-screen region;
and S104, carrying out gray level correction on the image to be detected, and adjusting the image to be detected to be the same as the gray level of the template image.
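The Otsu threshold computation in step S102 can be sketched in NumPy as follows. This is a minimal illustration, not the patent's implementation; the threshold offset and the morphological opening that follow in S102 are omitted, and the function name is an assumption:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu binarization threshold for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean gray level
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                # ignore degenerate thresholds
    sigma_b = (mu_t * omega - mu) ** 2 / denom  # between-class variance
    return int(np.nanargmax(sigma_b))
```

In step S102 the returned value would then be shifted by an empirically chosen offset before binarization.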
Further, step S104 specifically includes:
S1041, graying the image: the original three-channel image is converted into a single-channel grayscale image using the psychological grayscale formula:
gray=0.299*red+0.587*green+0.114*blue
S1042, dividing the foreground region R1 and the background region R2 with a threshold segmentation method; calculating the gray mean of each region, executing this for both the image to be detected and the template image. The gray mean is calculated as:

M = (1/F) * Σ_{p∈Ri} g(p)

where Ri is the region extracted after threshold segmentation, p is a pixel point in the region, g(p) is the gray value at point p, and F is the total number of pixel points in Ri;
S1043, performing gray transformation on the image to be detected: from the template foreground gray mean M1, the template background gray mean M2, the foreground gray mean T1 of the image to be detected and the background gray mean T2 of the image to be detected, a gray scaling coefficient Mult and a gray translation coefficient Add are calculated, and the two coefficients are used to map and correct the original gray of the image to be detected, as follows:
Mult=(M1-M2)/(T1-T2)
Add=M1-Mult*T1
g'=g*Mult+Add
where Mult is the gray scale scaling coefficient, Add is the gray scale translation coefficient, and g' is the new gray scale value after mapping.
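The graying of S1041 and the Mult/Add mapping of S1043 can be sketched as follows. Function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def to_gray(rgb):
    """Psychological grayscale formula from S1041 (weights sum to 1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def gray_correction(test_img, t_fg, t_bg, m_fg, m_bg):
    """S1043 sketch: linear map aligning the test image's foreground and
    background gray means (t_fg, t_bg) with the template's (m_fg, m_bg)."""
    mult = (m_fg - m_bg) / (t_fg - t_bg)   # gray scaling coefficient
    add = m_fg - mult * t_fg               # gray translation coefficient
    out = test_img.astype(np.float64) * mult + add
    return np.clip(out, 0.0, 255.0)        # keep the result in 8-bit range
```

By construction the mapping sends T1 to M1 and T2 to M2, so foreground and background brightness of the image under test match the template after correction.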
Specifically, step S2 specifically includes:
s201, extracting oFAST characteristic points;
s202, constructing an rBRIEF feature descriptor;
and S203, matching the characteristic points.
Further, in step S201, the gray value of point P is compared with the gray values of the 16 pixels on a circle of radius 3 centered on P; if the difference between the pixel value of P and those of n consecutive pixels on the circle exceeds the threshold t, P is a feature point.
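The segment test described above can be sketched in Python. This is a simplified FAST check without the orientation component of oFAST; the circle offsets are the standard radius-3 Bresenham ring, and the defaults for t and n are illustrative assumptions:

```python
import numpy as np

# (row, col) offsets of the 16 pixels on a Bresenham circle of radius 3.
CIRCLE16 = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
            (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t=20, n=9):
    """Segment test: P is a feature point if n consecutive circle pixels
    are all brighter than img[r,c]+t or all darker than img[r,c]-t."""
    center = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE16]
    for sign in (1, -1):                 # bright arc first, then dark arc
        flags = [sign * (v - center) > t for v in ring]
        run = 0
        for f in flags + flags:          # doubled list handles circular wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```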
Further, in step S202, N point pairs are selected around the key point P, the comparison results of the N point pairs are combined to be used as a descriptor, and all the point pairs are compared to generate a binary string with a length of N.
Further, in step S203, a hamming distance between the descriptors is used as a similarity measure, the hamming distance between two equal-length binary strings is the number of different characters at corresponding positions of the two character strings, and a similarity threshold is set to screen matching point pairs; and after the matching result is obtained, registering the template and the image to be detected according to the corresponding relation between the template image and the ORB characteristic points in the image to be detected, and matching each rectangular sub-template in the template image with each part in the image to be detected.
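Steps S202–S203 — a BRIEF-style binary descriptor compared by Hamming distance — can be sketched as follows. This is a simplified illustration without ORB's rotation compensation (the "r" in rBRIEF); all names are assumptions:

```python
import numpy as np

def brief_descriptor(img, r, c, pairs):
    """One bit per point pair: bit = 1 if the first sample is darker.
    `pairs` is a list of ((dr1, dc1), (dr2, dc2)) offsets around (r, c)."""
    bits = 0
    for (dr1, dc1), (dr2, dc2) in pairs:
        bits = (bits << 1) | int(img[r + dr1, c + dc1] < img[r + dr2, c + dc2])
    return bits

def hamming(a, b):
    """Number of differing bits between two equal-length binary descriptors."""
    return bin(a ^ b).count("1")

def match(desc_a, desc_b, max_dist):
    """Nearest-neighbour matching, keeping pairs under a Hamming threshold."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j))
    return matches
```

Because the descriptor is a bit string, the similarity measure reduces to an XOR and a popcount, which is what makes ORB matching fast enough for on-line inspection.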
Specifically, in step S3, the detection of the illustration part is as follows:
s3011, obtaining a corresponding point relation according to a matching result, calculating a rigid body transformation matrix, aligning the sub-template image with an image to be detected, and segmenting detection contents from a background by using a threshold segmentation method;
S3012, performing morphological erosion and dilation on the segmented silk-screen content, and obtaining extra-print and missing-print defects through the forward and reverse differences between the product to be detected (T) and the template (M). Extra-print defects: T(eroded) − M(dilated); missing-print defects: M(eroded) − T(dilated);
S3013, performing connectivity analysis and threshold screening on the difference image subjected to difference in the step S3012, wherein the area exceeding the threshold is a defect area;
and S3014, marking the defect by using the center of the defect as a circle center and using the minimum circumscribed circle of the defect.
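The erosion/dilation difference of S3012 can be sketched with a 3×3 structuring element in NumPy. This is a minimal illustration; the patent does not specify the element size, and all names are assumptions:

```python
import numpy as np

def dilate3(b):
    """3x3 binary dilation: OR of the 8-neighbourhood shifts."""
    p = np.pad(b, 1)
    h, w = b.shape
    out = np.zeros_like(b)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
    return out

def erode3(b):
    """3x3 binary erosion as the complement of dilating the complement
    (pixels outside the image are treated as foreground)."""
    return (1 - dilate3(1 - b)).astype(b.dtype)

def defect_masks(test_bin, tmpl_bin):
    """S3012 sketch: extra-print = eroded test minus dilated template;
    missing-print = eroded template minus dilated test."""
    extra = (erode3(test_bin) > 0) & (dilate3(tmpl_bin) == 0)
    missing = (erode3(tmpl_bin) > 0) & (dilate3(test_bin) == 0)
    return extra, missing
```

Eroding the minuend and dilating the subtrahend gives a one-pixel tolerance on both sides, which suppresses the contour false positives a plain difference would produce.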
Specifically, in step S3, the text defect detection process includes the following steps:
s3015, affine transformation registration;
S3016, performing skeleton extraction on the characters: the extracted character strokes are thinned to a uniform single-pixel width, retaining only their topological structure;
S3017, performing '米'(Mi)-shaped translation on the extracted characters, i.e., translating them in the 8 directions up, down, left, right, upper-left, upper-right, lower-left and lower-right and taking the image difference each time, obtaining 8 difference images;
s3018, performing intersection operation on the 8 difference images obtained in the step S3017, and taking the finally reserved area as a real defect area;
s3019, connectivity analysis threshold screening and defect marking.
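The '米'-shaped translation and intersection of S3017–S3018 can be sketched as follows. For illustration the template skeleton is shifted rather than the test characters (the difference maps are equivalent up to direction); all names are assumptions:

```python
import numpy as np

# 8 one-pixel shifts: up, down, left, right and the four diagonals.
DIRS8 = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

def mi_translation_defects(test_bin, tmpl_bin):
    """Difference the test skeleton against the template skeleton shifted
    one pixel in 8 directions, then intersect the 8 difference images;
    only pixels unmatched in every direction survive as real defects."""
    result = np.ones(test_bin.shape, dtype=bool)
    for dr, dc in DIRS8:
        shifted = np.roll(np.roll(tmpl_bin, dr, axis=0), dc, axis=1)
        diff = (test_bin > 0) & (shifted == 0)  # present in test, absent in shifted template
        result &= diff
    return result
```

A stroke that is merely offset by one pixel is matched by at least one of the 8 shifts and vanishes from the intersection, while a genuinely extra stroke survives every shift.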
Specifically, in step S3, the template image and the image to be detected are regarded as continuous frame images in the video, and irregular micro-distortion occurring in the image to be detected is regarded as local micro-motion, and the detection process is as follows:
S3021, calculating the optical flow field between the two images with the L-K optical flow method; the optical flow vector V(i,j) at any position (i, j) of the field has the structure:

V(i,j) = (x, y)

where x represents the row offset of the image to be detected at point (i, j) and y represents the column offset;
S3022, decomposing each optical flow vector V(i,j) into its components x and y, forming two data sets {x} and {y};
s3023, performing threshold segmentation on the image to be detected, acquiring a silk-screen foreground area T, and counting coordinates (i, j) of all points in the area T;
s3024, calculating a corresponding point set, comparing the image to be detected with the template image, and moving the pixel point at the coordinate (i, j) to the coordinate (i + x, j + y) due to distortion deformation;
s3025, taking (i, j) and (i + x, j + y) as corresponding point sets, calculating a projective transformation matrix A, and performing projective transformation on the silk screen to be tested by using the matrix to realize the correction of micro deformation;
and S3026, performing subtraction processing on the corrected image to be detected and the template image again, performing defect separation and extraction processing, and marking finally.
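Steps S3023–S3025 — building correspondences from the flow field and fitting a projective matrix A — can be sketched as follows. The L-K flow computation itself is assumed given as a dense (H, W, 2) array; the DLT least-squares estimate shown here is one standard way to obtain A, though the patent does not prescribe the estimation method, and all names are assumptions:

```python
import numpy as np

def flow_correspondences(flow, fg_mask):
    """S3023/S3024 sketch: pair each foreground coordinate (i, j) with its
    flow-displaced position (i + x, j + y)."""
    ij = np.argwhere(fg_mask)          # (N, 2) foreground coordinates
    disp = flow[fg_mask]               # (N, 2) flow vectors at those points
    return ij.astype(float), ij + disp

def fit_homography(src, dst):
    """Estimate a 3x3 projective matrix A mapping src -> dst by direct
    linear transform (DLT); src, dst are (N, 2) arrays, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    A = vt[-1].reshape(3, 3)           # null vector = flattened homography
    return A / A[2, 2]
```

Warping the silk screen under test with A (step S3025) then cancels the micro-distortion before the image difference of S3026 is taken again.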
Compared with the prior art, the invention has at least the following beneficial effects:
the battery silk-screen quality detection method based on the ORB feature matching and the LK optical flow method is good in instantaneity, higher in detection rate than similar methods, and capable of correcting irregular distortion by introducing the optical flow method, so that adaptability of silk-screen quality detection is greatly enhanced.
Furthermore, after the image is preprocessed, the background redundancy is eliminated, the image quality is improved, and the subsequent detection can be carried out.
Furthermore, after gray level correction is carried out, the silk screen to be detected and the template silk screen can be adjusted to be basically the same gray level, and the influence of illumination change on detection is greatly reduced.
Furthermore, after the sub-templates are matched in a blocking mode, the positioning precision of each sub-template can be improved, false alarm caused by position and angle errors among the contents of all the templates in silk-screen printing is prevented, and meanwhile matching based on the ORB features has high accuracy and real-time performance.
Furthermore, the image feature points can be extracted quickly, efficiency and time are saved, and preconditions are provided for constructing an rBRIEF descriptor.
Furthermore, a feature descriptor in the form of a binary character string is constructed, the form is simple, the storage space is greatly saved, the matching time is reduced, meanwhile, the descriptor has invariance to rotation, and the adaptability of the algorithm is improved.
Furthermore, the similarity of the binary descriptors is measured by calculating the Hamming distance, the method is simple and efficient, and the corresponding points of the template and the image to be detected are screened out, so that matching and positioning are completed.
Furthermore, during the detection of the picture-in-picture, the false defects of the outline can be effectively eliminated by performing difference and corrosion expansion processing on the image, and the real defects of the picture-in-picture can be accurately separated.
Furthermore, extracting the character skeleton during character detection effectively avoids false alarms caused by uneven stroke thickness, and the '米'(Mi)-shaped translation strategy further reduces false-alarm interference caused by poor consistency of character positions.
Furthermore, the template and the silk screen to be detected are regarded as continuous frame images, micro distortion occurring in the silk screen can be accurately corrected through L-K optical flow calculation and projection transformation, and false alarm interference of the distortion is eliminated.
In conclusion, the method has the advantages of good real-time performance and high detection rate, and the adaptability and the detection rate to non-accurate printing are greatly improved by improving the traditional differential image method and introducing the optical flow method into the field of printing defect detection.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a pre-processing image, wherein (a) is an original image and (b) is a pre-processed image;
FIG. 3 is a flow chart of the preprocessing;
FIG. 4 is a cross segmentation flow chart, in which (a) is cross windowing, (b) is Otsu binarization, (c) is the Otsu threshold offset, and (d) is the morphological opening operation;
FIG. 5 is a gray scale calibration chart, wherein (a) is a template image, (b) is a pre-calibration image to be tested, and (c) is a post-calibration image to be tested;
FIG. 6 is a block diagram;
FIG. 7 is a flow chart of template matching based on ORB features;
FIG. 8 is a schematic diagram of oFAST detection;
FIG. 9 is a graph showing the results of oFAST detection;
FIG. 10 is a schematic diagram of a pair of rBRIEF points;
FIG. 11 is a diagram of the ORB algorithm matching results;
FIG. 12 is a flow chart of defect isolation extraction;
FIG. 13 is an illustration detection flow diagram;
FIG. 14 is a text detection flow chart;
FIG. 15 is a schematic view of a translation strategy;
FIG. 16 is a schematic diagram of distorted silk-screen detection, in which (a) is a coincidence-degree comparison and (b) is the difference-image detection result;
FIG. 17 is a schematic view of an optical flow field;
FIG. 18 is a flow chart of L-K optical flow detection;
FIG. 19 is a schematic diagram of a silk-screen calibration.
Detailed Description
The invention provides a battery silk-screen quality detection method based on ORB feature matching and the LK optical flow method. The original image collected by the camera contains considerable background interference and noise, and the degree of inclination varies, so at this stage the original image must undergo cropping correction, affine transformation, gray correction and the like. Then comes the template matching stage: features of the template silk screen and the silk screen to be detected are extracted and matched based on the ORB algorithm to locate the silk-screen contents. Finally comes the defect separation and extraction stage: a morphology-based difference image method is first used for primary detection, and for silk screens with irregular distortion a distortion-correction detection method based on the L-K optical flow method is used for secondary detection.
Referring to fig. 1, the method for detecting the screen printing quality of a battery based on ORB feature matching and LK optical flow method of the present invention includes the following steps:
s1, image preprocessing
The original image collected by the camera contains more background interference and noise, and the inclination degrees are different, so the original image needs to be subjected to trimming correction, affine transformation, gray correction and the like at the present stage; as shown in fig. 2, (a) is an original image, and (b) is a preprocessed image.
Referring to fig. 3, the original image view field content includes interference regions such as a background stage, a fixture, a metal electrode plate, etc., which affect subsequent detection, and therefore, a preprocessing is required to extract a cell screen printing region, and the extraction steps are as follows:
s101, creating a cross rectangular window according to the size of an original image, and cutting the original image to obtain boundary information of a silk-screen area of a battery cell;
referring to fig. 4, through the above processing, information of four boundaries, i.e., the upper boundary, the lower boundary, the left boundary and the right boundary, of the battery silk-screen area is successfully obtained, and a large amount of background redundancy irrelevant to detection is removed.
S102, calculating a binarization threshold with the Otsu algorithm to preliminarily separate the silk-screen area from the other backgrounds, offsetting the threshold appropriately according to the image gray features to separate the cell silk-screen area from the background accurately, and finally removing edge burrs with a morphological opening operation to eliminate fine edge segmentation errors; the processing effect is shown in FIG. 4;
s103, performing minimum external rectangle fitting on the extracted area by using a relevant algorithm, and cutting an original image according to a fitting result to obtain an accurate battery silk-screen area;
and S104, performing gray level correction on the image to be detected, and adjusting the gray level to be the same as that of the template image so as to solve the influence caused by unstable illumination and improve the detection accuracy, which is important for subsequent detection.
The gray level correction method adopted by the invention specifically comprises the following steps:
S1041, graying the image: the original three-channel image is converted into a single-channel grayscale image. Since the human eye is most sensitive to green and least sensitive to blue, the psychological grayscale formula is adopted:
gray=0.299*red+0.587*green+0.114*blue (1)
S1042, dividing the foreground region R1 and the background region R2 with a threshold segmentation method; calculating the gray mean of each region, executing this for both the image to be detected and the template image. The gray mean calculation is shown in formula 2:

M = (1/F) * Σ_{p∈Ri} g(p)    (2)

where Ri is the region (foreground or background) extracted after threshold segmentation, p is a pixel point in the region, g(p) is the gray value at point p, and F is the total number of pixel points in Ri;
S1043, performing gray transformation on the image to be detected. The above calculations yield four key values: the template foreground (silk-screen content) gray mean M1, the template background gray mean M2, the foreground (silk-screen content) gray mean T1 of the image to be detected and the background gray mean T2 of the image to be detected. Based on these four values, the gray scaling coefficient Mult and the gray translation coefficient Add are calculated, and the two coefficients are used to map and correct the original gray of the image to be detected, per the following formulas:
Mult=(M1-M2)/(T1-T2) (3)
Add=M1-Mult*T1 (4)
g'=g*Mult+Add (5)
wherein Mult is a gray scale scaling coefficient, Add is a gray scale translation coefficient, g' is a new gray scale value after mapping, and the effect of the original image after the above processing is shown in fig. 5, wherein, (a) is a template image, (b) is an image to be measured (before correction), and (c) is an image to be measured (after correction); as can be seen from fig. 5, the characteristics of the image to be detected, such as contrast, brightness, etc., after the gray level transformation are very close to the template image, which provides good conditions for the implementation of the subsequent template matching algorithm and the defect separation and extraction.
S2, template matching stage
Extracting and matching the characteristics of the template silk screen and the silk screen to be detected based on an ORB algorithm to realize the positioning of silk screen contents;
referring to fig. 6, after the preprocessing is finished, the silk-screen contents of each part on the image to be detected need to be identified and positioned, the ORB feature detection running time is far superior to the SIFT algorithm and the SURF algorithm, and the ORB feature detection running time can be applied to real-time feature detection.
Modeling adopts a rectangular partitioning mode comprising illustration part templates and character part templates. All sub-templates are established in sequence: after the modeling of the current sub-template is finished, it is covered by a rectangular mask with gray value 0 before modeling continues, which is why the rectangular frames in the figure overlap. The ORB (Oriented FAST and Rotated BRIEF) template matching algorithm is then used for matching and positioning, which is mainly divided into the following three steps:
(1) detecting oFAST characteristic points;
(2) rBRIEF feature description;
(3) and matching the characteristic points.
Referring to fig. 7, the detection process and the technical details are as follows:
s201, extracting oFAST characteristic points;
referring to fig. 8, FAST is a fast corner feature detection algorithm. Specifically, an interest point is judged against the gray values of 16 pixels in the surrounding area, usually taken on a circle, as follows:
the gray value of point P is compared with the gray values of the 16 pixel points on the circle of radius 3 centered on P; if the differences between the pixel value of P and the pixel values of n consecutive pixel points on the circle all exceed a threshold t, point P is considered a feature point.
The oFAST algorithm has good scale invariance and rotation invariance, and the characteristic points of a certain sub-image extracted by the oFAST algorithm are shown in FIG. 9.
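The segment test described above can be sketched in a few lines of numpy. This is an illustrative toy: the circle offsets are the usual radius-3 Bresenham ring and the default n = 9 corresponds to the common FAST-9 variant, both assumptions rather than values fixed by the patent:

```python
import numpy as np

# The 16 pixels of a Bresenham circle of radius 3 around P, as (row, col) offsets.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t=20, n=9):
    """Segment test of the oFAST detector: P is a feature point if n
    consecutive circle pixels are all brighter than P + t, or all darker
    than P - t (the run may wrap around the circle)."""
    p = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (1, -1):                     # brighter run, then darker run
        flags = [sign * (v - p) > t for v in ring]
        run = 0
        for f in flags + flags[:n - 1]:      # extend the list to catch wrap-around runs
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

A dark pixel surrounded by a bright ring passes the test; a uniform patch does not.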
S202, constructing an rBRIEF feature descriptor;
After obtaining the feature points, we need to describe their attributes in some way; this description is called a feature descriptor, and the ORB algorithm uses the rBRIEF algorithm to compute it. The operation is to select N point pairs in a certain pattern around the keypoint P and combine the comparison results of the N point pairs as the descriptor. Comparing all point pairs generates a binary string of length N, typically 128, 256 or 512.
Referring to fig. 10, a circle O is drawn with the key point P as center and d as radius, and the N point pairs are selected in a certain pattern within circle O. Assuming the currently selected 4 point pairs are as shown in FIG. 10, comparing the relative gray values of Ai and Bi in each pair describes the pairs as a binary string of length 4, for example 1011. When computing the rBRIEF descriptor, ORB establishes a 2-dimensional coordinate system with the key point as origin and the line connecting the key point to the centroid of the point region as the X axis, which gives the descriptor rotation invariance.
S203, matching the characteristic points;
The main characteristic of the ORB algorithm is its high computation speed. The invention adopts the Hamming distance between descriptors as the similarity measure during matching: the Hamming distance between two equal-length binary strings is the number of positions at which the corresponding characters differ, and thus represents the degree of similarity of the two strings; matching point pairs are screened by setting a similarity threshold. The matching result obtained from the Hamming distance is shown in fig. 11.
And after the matching result is obtained, registering the template and the image to be detected according to the corresponding relation between the template image and the ORB characteristic points in the image to be detected. And completing the work of the template matching stage, and matching each rectangular sub-template in the template image with each part in the image to be detected.
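The Hamming-distance matching of step S203 can be sketched as follows, over packed-bit descriptors of the kind ORB produces (32 bytes = 256 bits). The brute-force nearest-neighbour strategy and the threshold value 64 are our assumptions; a production system would typically use an optimized matcher such as OpenCV's `BFMatcher` with `NORM_HAMMING`:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors,
    given as uint8 arrays of packed bits."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(des_tpl, des_test, max_dist=64):
    """Brute-force matching: for each template descriptor take the
    nearest test descriptor; keep the pair only if its distance passes
    the similarity threshold, as in step S203."""
    matches = []
    for i, d in enumerate(des_tpl):
        dists = [hamming(d, e) for e in des_test]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches
```

The surviving (template index, test index) pairs then feed the registration between template and image to be detected.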
S3 defect separation and extraction based on optical flow method and morphological image difference method
Firstly, uniformly detecting all silk screens by using a morphology-based differential method, and if the detection result is normal, executing sorting operation; if a large number of false alarms appear in the detection result, secondary detection is carried out by using a distortion correction detection method based on an L-K optical flow method, so that the false judgment is eliminated.
Referring to fig. 12, the flowchart and the technical details of the defect separation and extraction stage are as follows:
s301, an image difference method defect analysis algorithm based on morphology;
the silk-screen content is divided into two types of pictures and characters, and sub-algorithms are respectively designed according to the respective characteristics of the pictures and the characters so as to achieve the best detection effect.
Referring to fig. 13, the inset area has concentrated ink and simple lines, and the following detection process is designed according to the characteristic:
s3011, according to the matching result, a corresponding point relation can be obtained, a rigid body transformation matrix is calculated according to the corresponding point relation, the sub-template image is aligned with the image to be detected, and the detection content is segmented from the background by using a threshold segmentation method;
and S3012, performing morphological erosion and dilation on the segmented silk-screen content to prevent false contour defects caused by non-overlapping pattern boundaries. The multi-printing and missing-printing defects are obtained through the two-way difference of the product to be detected and the template, as follows:
multi-printing defects:
T(eroded) - M(dilated)
missing-printing defects:
M(eroded) - T(dilated)
wherein T is the silk-screen content region of the product to be detected and M is that of the template.
S3013, performing connectivity analysis and threshold screening on the difference image subjected to difference making, wherein the area exceeding the threshold is a defect area;
s3014, defect positioning and marking: and marking the defect by using the minimum circumcircle of the defect by taking the center of the defect as the center of a circle.
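Steps S3011-S3014 come down to the two morphological differences above. A minimal numpy sketch follows; the square structuring element, its size k, and the wrap-around shifting via `np.roll` are simplifications for illustration and behave correctly away from image borders:

```python
import numpy as np

def dilate(mask, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element."""
    out = np.zeros_like(mask)
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            out |= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    return out

def erode(mask, k=1):
    """Binary erosion, the dual of dilation."""
    out = np.ones_like(mask)
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            out &= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    return out

def print_defects(test_mask, tpl_mask, k=1):
    """Two-way difference of step S3012: eroding one mask before
    subtracting the dilated other absorbs small boundary misalignment,
    so only genuine defects survive."""
    multi = erode(test_mask, k) & ~dilate(tpl_mask, k)     # T(eroded) - M(dilated)
    missing = erode(tpl_mask, k) & ~dilate(test_mask, k)   # M(eroded) - T(dilated)
    return multi, missing
```

An extra ink blob in the tested mask survives as a multi-printing defect, while a one-pixel boundary shift produces nothing, which is exactly the purpose stated in S3012.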
The above describes the detection method for the illustration part; the lines of the character part are fine and complex, and its defects are finer.
Referring to fig. 14, the text portion defect detection method has the following steps:
s3015, affine transformation registration: same as the corresponding step for the illustration part;
s3016, skeleton extraction: skeleton extraction is adopted because the character lines are fine and complicated and their thickness is not uniform; dilation-erosion processing would damage the topological structure of the characters and its parameters would be difficult to tune, so the method of the invention extracts the skeleton of the characters instead. The extracted text lines are unified to a single-pixel width and only the topological structure is retained, so they are unaffected by stroke thickness;
s3017, image translation difference: the extracted characters are translated in a '米' (rice-character) pattern, namely in the 8 directions up, down, left, right, upper-left, upper-right, lower-left and lower-right, and the images are then subtracted to obtain 8 difference images. Translation differencing is used because the positional consistency of the silk-screen content is poor: a trace position deviation exists, and a single direct difference causes a large number of false alarms, as shown in fig. 15;
s3018, difference-image intersection operation: performing an intersection operation on the 8 difference images obtained in the previous step, the finally retained region being taken as the real defect region;
s3019, connectivity analysis, threshold screening and defect marking: same as the corresponding steps for the illustration part;
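Steps S3017-S3018 can be sketched as follows. One assumption is labeled in the code: besides the 8 '米'-shaped directions the text names, the unshifted position is also intersected, so a stroke that aligns exactly with the template is not flagged:

```python
import numpy as np

# The eight "rice"-character (米) directions of S3017, plus the center
# position (our addition so an exactly aligned stroke is not flagged).
SHIFTS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1),
          (-1, -1), (-1, 1), (1, -1), (1, 1)]

def translation_difference(test_skel, tpl_skel):
    """Intersect the difference images of the skeleton under test against
    the template skeleton shifted one pixel in each direction (S3018):
    a pixel survives only if no one-pixel jitter of the template can
    explain it, i.e. it is a real defect rather than position deviation."""
    defect = np.ones_like(test_skel, dtype=bool)
    for dr, dc in SHIFTS:
        shifted = np.roll(np.roll(tpl_skel, dr, axis=0), dc, axis=1)
        defect &= test_skel & ~shifted   # one difference image per shift
    return defect
```

A stroke displaced by one pixel produces no defect pixels, while an isolated spurious dot survives every intersection.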
s302 distortion correction detection algorithm based on L-K optical flow method
Through the above detection, most defects can be detected normally. However, for some silk-screen products the printing process introduces slight distortion of the printed content that causes no visual abnormality to a human observer; such products belong to the category of good printed products, yet under the difference-image method of section 3.1 they would be judged defective, which is a difficulty of the detection task. Fig. 16 is a schematic diagram of slight distortion: in diagram (a) the coincidence between the template silk-screen and the silk-screen to be detected is low and the distortion is irregular, yet the silk-screen to be detected is a good product; direct detection produces a large number of false alarms, as shown in diagram (b) on the right. The invention provides a distortion correction detection algorithm based on the L-K optical flow method, creatively introducing the optical flow method into the field of surface defect detection; it can accurately detect products with slight distortion and prevent false alarms.
The detection process and the technical details are as follows:
the optical flow is the instantaneous velocity of the pixel motion of a moving spatial object on the observation imaging plane. The optical flow method finds the correspondence between the previous frame and the current frame by using the change of pixels in an image sequence over time and the correlation between adjacent frames, and thereby calculates the motion information of objects between adjacent frames. The instantaneous rate of change of the gray level at a given coordinate point on the two-dimensional image plane is generally defined as the optical flow vector; when the time interval is small (for example, between two consecutive video frames), the optical flow is also equivalent to the displacement of the target point. A schematic diagram of the optical flow field calculated from a silk-screen image is shown in fig. 17.
Because it is convenient to apply to a sparse set of points of the input image, the invention selects the L-K optical flow method; the algorithm rests on the following three assumptions:
(1) the brightness between adjacent frames is constant;
(2) the frame taking time of adjacent video frames is continuous, or the motion of an object between the adjacent frames is relatively small;
(3) the space consistency is kept; i.e. the pixel points of the same sub-image have the same motion.
The basic idea is as follows:
let I(x, y, t) be the luminance at a point P(x, y) on the image at time t, and I(x+dx, y+dy, t+dt) the luminance at the corresponding point P' at time t+dt; according to the brightness constancy assumption, we obtain:
I(x,y,t)=I(x+dx,y+dy,t+dt) (9)
from the definition of the optical flow, (u, v) = (dx/dt, dy/dt), a Taylor expansion yields:
Ixu+Iyv+It=0 (10)
the above equation is the basic optical flow constraint equation; a single equation cannot solve for the two unknowns, so another constraint condition must be introduced to solve for the two velocity components. According to assumption (3), the n pixels of a neighbourhood share the same motion, which yields n such equations; combining them by the least square method gives the L-K optical flow:

(u, v)^T = (A^T A)^(-1) A^T b (11)

wherein A = [Ix1 Iy1; ...; Ixn Iyn] stacks the spatial gradients of the n pixels, b = -(It1, ..., Itn)^T stacks their temporal gradients, and (u, v) is the sought L-K optical flow.
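The least-squares step just described can be sketched directly with `numpy.linalg.lstsq`; the arrays here stand for the Ix, Iy, It gradient values of the n neighbourhood pixels (a sketch of the estimation step only, not of gradient computation):

```python
import numpy as np

def lk_flow(ix, iy, it):
    """Solve the n stacked brightness-constancy constraints
    Ix*u + Iy*v + It = 0 (formula 10) for one neighbourhood
    by least squares, yielding the L-K optical flow (u, v)."""
    A = np.stack([ix, iy], axis=1)   # n x 2 matrix of spatial gradients
    b = -np.asarray(it)              # move It to the right-hand side
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

On synthetic gradients generated from a known motion, the solver recovers that motion exactly.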
Referring to fig. 18, the template image and the image to be detected are regarded as continuous frame images in the video, and irregular micro distortion occurring in the image to be detected is regarded as local micro motion, and the detection process is as follows:
s3021, an optical flow field between the two images is calculated by using the L-K optical flow method; as shown in FIG. 17, an optical flow vector Vi,j is present at an arbitrary position (i, j) of the optical flow field, with the structure:
Vi,j=(x,y) (12)
wherein, the x value represents the row offset of the image to be measured at the (i, j) point, and the y value represents the column offset;
s3022, decomposing the optical flow vector Vi,j to obtain its components x and y, forming two data sets {x} and {y};
s3023, performing threshold segmentation on the image to be detected, acquiring a silk-screen foreground area T, and counting coordinates (i, j) of all points in the area T;
s3024, calculating a corresponding point set, comparing the image to be detected with the template image, and moving the pixel point at the coordinate (i, j) to the coordinate (i + x, j + y) due to distortion deformation;
and S3025, taking (i, j) and (i+x, j+y) as corresponding point sets, calculating a projective transformation matrix A, and performing projective transformation on the silk screen to be detected with this matrix to correct the slight deformation; the corrected image is shown in FIG. 19. As can be seen from fig. 19, the coincidence between the silk screen to be detected and the template silk screen is now high and the distortion phenomenon is basically eliminated, which facilitates subsequent detection;
and S3026, performing subtraction processing on the corrected image to be detected and the template image again, performing defect separation and extraction processing similar to that in the step S301, and performing final marking.
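The projective fit of step S3025 from the flow-derived corresponding points can be sketched with the standard direct linear transform (DLT). Dense flow computation and the final warp are omitted, and the DLT itself is the textbook method rather than anything specific to the patent:

```python
import numpy as np

def fit_projective(src, dst):
    """Estimate the 3x3 projective transformation A of step S3025 from
    corresponding point sets via the direct linear transform: each
    correspondence (x, y) -> (x', y') contributes two rows to a
    homogeneous system M h = 0, solved by SVD."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    M = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(M)
    H = vt[-1].reshape(3, 3)       # null vector = transformation up to scale
    return H / H[2, 2]

def apply_projective(H, pts):
    """Map 2-D points through H in homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

In the full pipeline, `src` would hold the foreground coordinates (i, j) and `dst` the flow-displaced coordinates (i+x, j+y); warping the silk screen to be detected with the fitted matrix then cancels the slight distortion before the final difference of S3026.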
After the processing of the method, the normal detection of the trace distortion silk screen printing can be realized, the false alarm is not generated, the practicability of the method is greatly improved, the real-time detection of an enterprise assembly line can be met, and the higher detection accuracy can be ensured.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Online testing with the battery silk-screen quality detection method based on ORB feature matching and the LK optical flow method on 1000 silk-screens to be detected correctly classified 981 of them, a detection rate of 98.1%; 11 good silk-screens were judged defective and 8 defective silk-screens were judged good.
The detection rate of the traditional global template matching method for the same sample is only 49%, and the traditional global template matching method is completely not suitable for detecting the inaccurate type printed matter.
In conclusion, compared with the traditional detection method, the detection method has high detection rate and good real-time performance, has strong adaptability to non-precise printed products such as distortion, position angle errors and the like, and can accurately separate real defects without generating false alarm.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.
Claims (7)
1. A battery silk-screen quality detection method based on ORB feature matching and an LK optical flow method is characterized by comprising the following steps:
s1, collecting battery silk-screen image data, and preprocessing to extract a battery silk-screen area;
s2, modeling by adopting a rectangular partitioning mode, wherein the modeling comprises an inserting picture part template and a character part template, extracting and matching the characteristics of the template silk screen and the silk screen to be detected based on an ORB algorithm, and realizing the positioning of silk screen contents;
s3, detecting the image difference image based on morphology, and if false alarm occurs, performing secondary detection by using a distortion correction detection method based on an L-K optical flow method; if false alarm does not occur, outputting a result image and detection data, executing sorting operation, and specifically detecting the picture-inserting part as follows:
s3011, obtaining a corresponding point relation according to a matching result, calculating a rigid body transformation matrix, aligning the sub-template image with an image to be detected, and segmenting detection contents from a background by using a threshold segmentation method;
s3012, performing morphological erosion and dilation on the segmented battery silk-screen content, and obtaining the multi-printing defects and missing-printing defects through the two-way difference of the product to be detected and the template, wherein the multi-printing defects are: T(eroded) - M(dilated); the missing-printing defects are: M(eroded) - T(dilated); T(eroded) is the eroded silk-screen content region of the battery to be detected, M(dilated) is the dilated silk-screen content region of the template, M(eroded) is the eroded silk-screen content region of the template, and T(dilated) is the dilated silk-screen content region of the battery to be detected;
s3013, performing connectivity analysis and threshold screening on the difference image subjected to difference in the step S3012, wherein the area exceeding the threshold is a defect area;
s3014, marking the defect by using the minimum circumscribed circle of the defect with the center of the defect as the center of a circle;
the text part defect detection process is as follows:
s3015, affine transformation registration;
s3016, performing skeleton extraction processing on the characters, unifying the extracted text lines to a single-pixel width and retaining only their topological structure;
s3017, performing '米' (rice-character)-shaped translation processing on the extracted characters, namely image differencing in the 8 directions up, down, left, right, upper-left, upper-right, lower-left and lower-right, to obtain 8 difference images;
s3018, performing intersection operation on the 8 difference images obtained in the step S3017, and taking the finally reserved area as a real defect area;
s3019, communicating analysis threshold value screening and defect marking;
the template image and the image to be detected are regarded as continuous frame images in the video, irregular micro-distortion occurring in the image to be detected is regarded as local micro-motion, and the detection process is as follows:
s3021, calculating an optical flow field between the two images by using an L-K optical flow method, wherein at any position (i, j) of the optical flow field the optical flow vector Vi,j has the structure:
Vi,j=(x,y)
wherein, the x value represents the row offset of the image to be measured at the (i, j) point, and the y value represents the column offset;
s3022, decomposing the optical flow vector Vi,j to obtain components x and y, forming two data sets {x} and {y};
s3023, performing threshold segmentation on the image to be detected, acquiring a battery silk-screen foreground area T, and counting coordinates (i, j) of all points in the area T;
s3024, calculating a corresponding point set, comparing the image to be detected with the template image, and moving the pixel point at the coordinate (i, j) to the coordinate (i + x, j + y) due to distortion deformation;
s3025, taking (i, j) and (i + x, j + y) as corresponding point sets, calculating a projective transformation matrix A, and performing projective transformation on the silk screen of the battery to be tested by using the matrix to realize correction of micro deformation;
and S3026, performing subtraction processing on the corrected image to be detected and the template image again, performing defect separation and extraction processing, and marking finally.
2. The method for detecting the quality of the battery silk screen based on the ORB feature matching and the LK optical flow method as claimed in claim 1, wherein the step S1 is specifically as follows:
s101, creating a cross rectangular window according to the size of an original image, and cutting the original image to obtain boundary information of a battery silk-screen area;
s102, calculating a binarization threshold value by using an Ostu algorithm, preliminarily separating a battery silk-screen area from other backgrounds, properly offsetting the threshold value by combining with image gray features to accurately separate the battery silk-screen area from the other backgrounds, removing edge burrs through morphological open operation, and eliminating edge fineness segmentation errors;
s103, performing minimum external rectangle fitting on the region extracted in the step S102, and cutting the original image according to a fitting result to obtain an accurate battery silk screen region;
and S104, carrying out gray level correction on the image to be detected, and adjusting the image to be detected to be the same as the gray level of the template image.
3. The method for detecting the quality of the battery silk screen based on the ORB feature matching and the LK optical flow method as claimed in claim 2, wherein the step S104 is specifically as follows:
s1041, graying the image, converting the original three-channel image into a single-channel gray image, and adopting a psychology gray formula:
gray=0.299*red+0.587*green+0.114*blue
s1042, dividing the foreground region R1 and the background region R2 by using a threshold segmentation method; respectively calculating the gray level mean values of the foreground and background regions, and executing the operations on both the image to be detected and the template image, the gray level mean value being calculated as:

M = (1/F) * Σ_{p∈Ri} g(p)

wherein Ri is a region extracted after threshold segmentation, p is a pixel point in the region, g(p) is the gray value at point p, and F is the total number of pixel points in Ri;
s1043, carrying out gray level transformation on the image to be detected: from the template foreground gray mean M1, the template background gray mean M2, the foreground gray mean T1 of the image to be detected and the background gray mean T2 of the image to be detected, the gray scaling coefficient Mult and the gray translation coefficient Add are respectively calculated, and the original gray levels of the image to be detected are mapped and corrected with the two coefficients, specifically as follows:
Mult=(M1-M2)/(T1-T2)
Add=M1-Mult*T1
g'=g*Mult+Add
where Mult is the gray scale scaling coefficient, Add is the gray scale translation coefficient, and g' is the new gray scale value after mapping.
4. The method for detecting the quality of the battery silk screen based on the ORB feature matching and the LK optical flow method as claimed in claim 1, wherein the step S2 is specifically as follows:
s201, extracting oFAST characteristic points;
s202, constructing an rBRIEF feature descriptor;
and S203, matching the characteristic points.
5. The method as claimed in claim 4, wherein in step S201, the gray value of P point is compared with the gray values of 16 pixels in the neighborhood with the radius of 3 and the center being P, and if the difference between the pixel value of P point and the pixel values of n consecutive pixels in the neighborhood on the circle is greater than the threshold t, then P is a feature point.
6. The method of claim 4, wherein in step S202, N point pairs are selected around the key point P, the comparison results of the N point pairs are combined to be a descriptor, and all the point pairs are compared to generate a binary string with a length of N.
7. The battery silk-screen quality detection method based on ORB feature matching and LK optical flow method of claim 4, wherein in step S203, the Hamming distance between descriptors is used as similarity measure, the Hamming distance between two equal-length binary strings is the number of different characters at the corresponding positions of two character strings, and the matching point pairs are screened by setting a similarity threshold; and after the matching result is obtained, registering the template and the image to be detected according to the corresponding relation between the template image and the ORB characteristic points in the image to be detected, and matching each rectangular sub-template in the template image with each part in the image to be detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010085060.3A CN111275697B (en) | 2020-02-10 | 2020-02-10 | Battery silk-screen quality detection method based on ORB feature matching and LK optical flow method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010085060.3A CN111275697B (en) | 2020-02-10 | 2020-02-10 | Battery silk-screen quality detection method based on ORB feature matching and LK optical flow method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111275697A CN111275697A (en) | 2020-06-12 |
CN111275697B true CN111275697B (en) | 2022-04-22 |
Family
ID=71002140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010085060.3A Active CN111275697B (en) | 2020-02-10 | 2020-02-10 | Battery silk-screen quality detection method based on ORB feature matching and LK optical flow method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111275697B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560858B (en) * | 2020-10-13 | 2023-04-07 | 国家计算机网络与信息安全管理中心 | Character and picture detection and rapid matching method combining lightweight network and personalized feature extraction |
CN113063731B (en) * | 2021-03-24 | 2023-01-20 | 上海晨兴希姆通电子科技有限公司 | Detection system and detection method for rotary disc type glass cover plate silk screen printing |
CN113191348B (en) * | 2021-05-31 | 2023-02-03 | 山东新一代信息产业技术研究院有限公司 | Template-based text structured extraction method and tool |
CN113516584B (en) * | 2021-09-14 | 2021-11-30 | 风脉能源(武汉)股份有限公司 | Image gray processing method and system and computer storage medium |
CN114120197B (en) * | 2021-11-27 | 2024-03-29 | 中国传媒大学 | Ultra-high definition video abnormal signal detection method for 2SI mode transmission |
CN114612469B (en) * | 2022-05-09 | 2022-08-12 | 武汉中导光电设备有限公司 | Product defect detection method, device and equipment and readable storage medium |
CN116228746A (en) * | 2022-12-29 | 2023-06-06 | 摩尔线程智能科技(北京)有限责任公司 | Defect detection method, device, electronic apparatus, storage medium, and program product |
CN116310289B (en) * | 2023-05-12 | 2023-08-08 | 苏州优备精密智能装备股份有限公司 | System and method for on-line measurement of ink-jet printing and real-time adjustment of printing position angle |
CN117495852B (en) * | 2023-12-29 | 2024-05-28 | 天津中荣印刷科技有限公司 | Digital printing quality detection method based on image analysis |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751465A (en) * | 2015-03-31 | 2015-07-01 | 中国科学技术大学 | ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint |
CN107389701A (en) * | 2017-08-22 | 2017-11-24 | 西北工业大学 | A kind of PCB visual defects automatic checkout system and method based on image |
CN107680095A (en) * | 2017-10-25 | 2018-02-09 | 哈尔滨理工大学 | The electric line foreign matter detection of unmanned plane image based on template matches and optical flow method |
CN108122256A (en) * | 2017-12-25 | 2018-06-05 | 北京航空航天大学 | It is a kind of to approach under state the method for rotating object pose measurement |
CN108355987A (en) * | 2018-01-08 | 2018-08-03 | 西安交通大学 | A kind of screen printing of battery quality determining method based on piecemeal template matches |
CN108986093A (en) * | 2018-07-19 | 2018-12-11 | 常州宏大智能装备产业发展研究院有限公司 | Cylinder or flat screen printing machine network blocking defect detection method based on machine vision |
CN110264445A (en) * | 2019-05-30 | 2019-09-20 | 西安交通大学 | The screen printing of battery quality determining method of piecemeal template matching combining form processing |
CN110503633A (en) * | 2019-07-29 | 2019-11-26 | 西安理工大学 | A kind of applique ceramic disk detection method of surface flaw based on image difference |
WO2019245320A1 (en) * | 2018-06-22 | 2019-12-26 | 삼성전자주식회사 | Mobile robot device for correcting position by fusing image sensor and plurality of geomagnetic sensors, and control method |
CN110672617A (en) * | 2019-09-14 | 2020-01-10 | 华南理工大学 | Method for detecting defects of silk-screen area of glass cover plate of smart phone based on machine vision |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107662872B (en) * | 2016-07-29 | 2021-03-12 | 奥的斯电梯公司 | Monitoring system and monitoring method for passenger conveyor |
-
2020
- 2020-02-10 CN CN202010085060.3A patent/CN111275697B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751465A (en) * | 2015-03-31 | 2015-07-01 | University of Science and Technology of China | ORB (Oriented FAST and Rotated BRIEF) image feature registration method based on LK (Lucas-Kanade) optical flow constraint |
CN107389701A (en) * | 2017-08-22 | 2017-11-24 | Northwestern Polytechnical University | Image-based automatic PCB visual defect inspection system and method |
CN107680095A (en) * | 2017-10-25 | 2018-02-09 | Harbin University of Science and Technology | Power line foreign object detection in UAV images based on template matching and optical flow |
CN108122256A (en) * | 2017-12-25 | 2018-06-05 | Beihang University | Method for pose measurement of a rotating object in the approach state |
CN108355987A (en) * | 2018-01-08 | 2018-08-03 | Xi'an Jiaotong University | Battery screen-printing quality detection method based on block template matching |
WO2019245320A1 (en) * | 2018-06-22 | 2019-12-26 | Samsung Electronics Co., Ltd. | Mobile robot device for correcting position by fusing image sensor and plurality of geomagnetic sensors, and control method |
CN108986093A (en) * | 2018-07-19 | 2018-12-11 | Changzhou Hongda Intelligent Equipment Industry Development Research Institute Co., Ltd. | Machine-vision-based mesh blocking defect detection method for rotary or flat screen printing machines |
CN110264445A (en) * | 2019-05-30 | 2019-09-20 | Xi'an Jiaotong University | Battery screen-printing quality detection method combining block template matching with morphological processing |
CN110503633A (en) * | 2019-07-29 | 2019-11-26 | Xi'an University of Technology | Decal ceramic plate surface flaw detection method based on image difference |
CN110672617A (en) * | 2019-09-14 | 2020-01-10 | South China University of Technology | Method for detecting defects of silk-screen area of glass cover plate of smart phone based on machine vision |
Non-Patent Citations (6)
Title |
---|
"A Hybrid 3D Registration Method of Augmented Reality for Intelligent Manufacturing";Xian Yang et al.;《IEEE Access》;20191216;pp. 181867-181883 * |
"AR Based on ORB Feature and KLT Tracking";Jie Ren et al.;《Applied Mechanics and Materials》;20130731;pp. 333-340 * |
"A moving target detection and tracking algorithm based on an improved optical flow method";Li Chengmei et al.;《Chinese Journal of Scientific Instrument》;20180531;pp. 249-256 * |
"ORB feature registration algorithm based on LK optical flow constraints";Liu Jiawei et al.;《Proceedings of the 34th Chinese Control Conference (Vol. C)》;20150731;pp. 3914-3919 * |
"Research on visual odometry based on optical flow tracking and feature matching";Jia Zhe;《China Masters' Theses Full-text Database, Information Science and Technology》;20190815;I138-952 * |
"Research on label defect detection algorithms based on machine vision";Yin Hongjie;《China Masters' Theses Full-text Database, Engineering Science and Technology I》;20190615;B024-274 * |
Also Published As
Publication number | Publication date |
---|---|
CN111275697A (en) | 2020-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111275697B (en) | Battery silk-screen quality detection method based on ORB feature matching and LK optical flow method | |
CN108460757B (en) | Mobile phone TFT-LCD screen Mura defect online automatic detection method | |
CN109308700A (en) | Visual recognition defect inspection method based on printed characters | |
CN104568986A (en) | Method for automatically detecting printing defects of remote controller panel based on SURF (Speeded-Up Robust Features) algorithm | |
WO2017181724A1 (en) | Inspection method and system for missing electronic component | |
CN111242896A (en) | Color printing label defect detection and quality rating method | |
CN112149543B (en) | Building dust recognition system and method based on computer vision | |
CN109886960A (en) | Machine-vision-based glass edge defect detection method | |
CN109472271A (en) | Printed circuit board image contour extraction method and device | |
CN111667475B (en) | Machine vision-based Chinese date grading detection method | |
CN113034488A (en) | Visual detection method of ink-jet printed matter | |
CN115205223A (en) | Visual detection method and device for transparent object, computer equipment and medium | |
CN116168218A (en) | Circuit board fault diagnosis method based on image recognition technology | |
CN111354047A (en) | Camera module positioning method and system based on computer vision | |
CN115272350A (en) | Method for detecting production quality of computer PCB mainboard | |
CN110533660B (en) | Method for detecting silk-screen defect of electronic product shell | |
Ouji et al. | Chromatic/achromatic separation in noisy document images | |
CN108898584B (en) | Image analysis-based full-automatic veneered capacitor welding polarity discrimination method | |
CN111860500A (en) | Shoe print wear area detection and edge tracing method | |
CN110619331A (en) | Color distance-based color image field positioning method | |
CN115619725A (en) | Electronic component detection method and device, electronic equipment and automatic quality inspection equipment | |
CN115588208A (en) | Full-line table structure identification method based on digital image processing technology | |
CN114998346A (en) | Waterproof cloth quality data processing and identifying method | |
CN113506297B (en) | Printing data identification method based on big data processing | |
CN111798429B (en) | Visual inspection method for defects of printed matter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||