CN115100158A - Printing detection method for realizing accurate image matching - Google Patents

Printing detection method for realizing accurate image matching

Info

Publication number
CN115100158A
CN115100158A
Authority
CN
China
Prior art keywords
image
matching
layer
pyramid
target product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210774009.2A
Other languages
Chinese (zh)
Inventor
孟凤
申磊
邓红丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Daheng Image Vision Co ltd
Original Assignee
Beijing Daheng Image Vision Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Daheng Image Vision Co ltd filed Critical Beijing Daheng Image Vision Co ltd
Priority to CN202210774009.2A priority Critical patent/CN115100158A/en
Publication of CN115100158A publication Critical patent/CN115100158A/en
Pending legal-status Critical Current

Classifications

    All under G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
    • G06T7/001 Industrial image inspection using an image reference approach (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection; G06T7/0004 Industrial image inspection)
    • G06T7/11 Region-based segmentation (G06T7/10 Segmentation; Edge detection)
    • G06T7/136 Segmentation; Edge detection involving thresholding (G06T7/10 Segmentation; Edge detection)
    • G06T2207/20224 Image subtraction (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details; G06T2207/20212 Image combination)
    • G06T2207/30144 Printing quality (G06T2207/30 Subject of image; Context of image processing; G06T2207/30108 Industrial image inspection)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a printing detection method for realizing accurate image matching. The method comprises the following steps: acquiring a target product image and a template image, and generating an image pyramid for each; starting from the topmost layer of the image pyramids, performing block-by-block traversal matching between the target product image and the corresponding layer of the template image to determine the mapping relation between each region of the target product image and the template image; mapping all pixel values of the target product image to the corresponding pixel points in the template image; performing a difference operation on the mapped target product image and the template image, and outputting a difference image; and detecting defects based on the difference image. The invention achieves accurate alignment of image pixel points and avoids detection misjudgment of the target product image caused by stretching deformation. It not only distinguishes small detection deformation errors and improves detection precision, but also, by optimizing the algorithm steps and the detection content, reduces the detection time below the existing standard.

Description

Printing detection method for realizing accurate image matching
Technical Field
The invention relates to the field of printing detection, in particular to a printing detection method for realizing accurate image matching.
Background
Generally, the detection algorithm in a printing quality detection system selects a defect-free collected image as the template image. During detection, it finds the geometric transformation from the target product image, collected from the inspected sample, to the template image through local positioning, and aligns the target product image with the template image after the geometric transformation. The geometric transformation is mostly an affine or perspective transformation, applying the same transformation relation to all pixels of the image. Patents CN110503633A "A method for detecting surface defects of decal ceramic discs based on image difference", CN112288734A "A method for detecting surface defects of decal fabrics based on image processing", and CN113436215A "Method and apparatus for detecting foreground objects, storage media, and electronic apparatuses" all compute an affine transformation from feature point matching. These methods first extract feature points and generate feature descriptors, then match the feature points, select the best matching point pairs through RANSAC to compute an affine transformation matrix, apply the affine transformation to one image and difference it against the other, and finally find defects or targets through binarization, morphology, blob analysis, and similar operations. The affine transformation in these three patents scales, rotates, and translates the image using the same positional transformation relation for every pixel.
However, a printing quality detection system faces a variety of products: some exhibit different degrees of stretching deformation due to their material, and some have overprint deviation in the printed content; slight stretching deformation and overprint deviation should still be judged as good products. If the affine transformation methods of the three patents above are used, the image is only scaled, rotated, and translated, so it is inevitable that after some parts of the image are aligned, other parts cannot be. As a result, detection precision decreases or false alarms increase, which harms the user's experience with the printing detection system and reduces the detection system's efficiency.
Disclosure of Invention
In view of the above problems, the present invention provides a printing quality detection method that can precisely align the target product image with the template image, is unaffected by stretching and deformation of the image, and meets the system's high real-time requirement in terms of detection time.
In order to improve the precision of printing quality detection, and to avoid the loss of detection precision and the increase of false alarms caused by deformation of the printed matter, the invention provides a printing quality detection method that accurately aligns the patterns of the sample under test and the template sample.
Specifically, the invention provides a printing detection method for realizing accurate image matching, which comprises the following steps:
step 1, collecting a target product image and a template image, and respectively generating respective image pyramids based on the target product image and the template image;
Step 2: starting from the topmost layer of the image pyramids, searching for the matching block in the target product image corresponding to each matching block of the template image, determining the mapping relation from the feature points in each matching block of the target product image to the corresponding points of the template image in the same layer, and transferring the upper-layer mapping relation to the lower pyramid layer for further search and matching, until the mapping relation between each region of the target product image and the template image is determined;
step 3, mapping all pixel values in the matching blocks and the target product image to corresponding pixel points in the template image based on the mapping relation determined by the matching blocks;
step 4, performing differential operation on the mapped target product image and the template image, and outputting a differential image;
and 5, detecting defects based on the difference image.
Preferably, the step 2 includes:
2.1, generating image data of each layer in a template image pyramid by using the template image;
2.2, calling the layer with the lowest resolution in the template image pyramid, and dividing that layer into matching blocks;
2.3, for each matching block in the layer of the template image pyramid, searching for a corresponding matching block in the product image pyramid respectively to obtain a mapping relation of each matching block in the layer for the template image pyramid and the product image pyramid;
and 2.4, transferring the mapping relation of each matching block in the upper layer to the next layer, repeating the step 2.3 in the layer to match the matching blocks, and determining the mapping relation of the matching blocks between the product image of the layer and the template image until determining the mapping relation of the matching blocks in the image of the lowest layer.
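Assuming grayscale images stored as numpy arrays, the coarse-to-fine loop of steps 2.1 to 2.4 can be sketched for a single matching block as follows. Function and parameter names here are illustrative, not taken from the patent; the sketch estimates one block's offset rather than a full per-block mapping.

```python
import numpy as np

def block_ssd(tpl, img, r, c):
    """Mean squared gray difference between template block tpl and the
    same-size window of img whose top-left corner is (r, c)."""
    if r < 0 or c < 0:
        return np.inf
    h, w = tpl.shape
    win = img[r:r + h, c:c + w]
    if win.shape != tpl.shape:
        return np.inf
    return float(np.mean((tpl - win) ** 2))

def coarse_to_fine_offset(template_pyr, product_pyr, block_pos, block_size, radius=2):
    """Steps 2.1-2.4 for one matching block: start at the lowest-resolution
    (topmost) layer, find the best offset by neighbourhood search, then
    double the offset when moving to the next finer layer and refine it.
    Pyramid lists are ordered [full resolution, ..., coarsest]."""
    u, v = 0, 0
    for level in range(len(template_pyr) - 1, -1, -1):  # coarsest layer first
        scale = 2 ** level
        r0, c0 = block_pos[0] // scale, block_pos[1] // scale
        bs = max(block_size // scale, 2)
        tpl = template_pyr[level][r0:r0 + bs, c0:c0 + bs]
        best, best_d = (u, v), np.inf
        for du in range(-radius, radius + 1):
            for dv in range(-radius, radius + 1):
                d = block_ssd(tpl, product_pyr[level], r0 + u + du, c0 + v + dv)
                if d < best_d:
                    best_d, best = d, (u + du, v + dv)
        u, v = best
        if level:                 # transfer to next finer layer (2:1 per axis)
            u, v = 2 * u, 2 * v
    return u, v
```

A product image shifted by two pixels relative to the template is recovered exactly: the coarse layer finds offset (1, 1), which is doubled and confirmed at full resolution as (2, 2).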
Preferably, step 2.4 includes: for each matching block in the upper-layer image, determining the position of each feature point of the block in the template image pyramid and in the product image pyramid; determining the scaling relation between the upper pyramid layer and the next layer; based on that scaling relation, computing the transferred position of each feature point of the corresponding matching block in the next-layer template and product images; and, taking the transferred position of the feature points in each matching block as the initial position, searching in its vicinity to find the best matching position of each matching block in that layer.
Preferably, in the target image pyramid and the template image pyramid, the resolution is sequentially increased from the top layer of the pyramid to the bottom layer of the pyramid.
Preferably, the step 5 includes, for the difference image, delineating a candidate defect region by threshold segmentation, and screening out real defects by blob analysis of region features.
Preferably, determining in step 2.4 the mapping relation between the feature points in each matching block of the target product image and the corresponding points of the template image in the same layer includes: calculating the gray difference between a preset number of pixels near the feature point in the target product image and a preset number of pixels near the matching point in the template image, and computing the mapping relation of the feature points with the minimum gray difference or gray variance as the optimization criterion.
Preferably, the method further includes: during feature point matching in the current pyramid layer, if the gray variance among the feature points is smaller than a preset value, skipping one pyramid layer.
Preferably, for deformation causing small-range offsets, the method further comprises aligning patterns whose offsets are within a certain number of pixels.
Preferably, the feature point matching in step 3 and the difference operation in step 4 are performed by using a gray scale difference method.
It should be noted that the "image pyramid" mentioned in the present invention refers to a multi-level set of images generated from the same image, with resolutions running from low to high: the top pyramid layer has the lowest resolution, and each layer downward increases the resolution by a predetermined ratio.
The beneficial effects brought by the invention at least comprise one of the following:
1. the method solves the problem that the existing detection method has local misalignment in the affine transformation process, thereby improving the detection precision of the system;
2. compared with the existing detection method, the method has the advantages that the range of the detected object is expanded on the basis of realizing the existing detection effect, the method can be suitable for detecting various presswork under the conditions of local deformation and the like, and the detection on the object which is soft and easy to deform is more accurate;
3. the invention belongs to the algorithm innovation on the basis of the original equipment, can be realized without adding software and hardware equipment, and is simple and easy to operate.
Drawings
FIG. 1 is a flow chart of a conventional print quality inspection method
FIG. 2 is a template image in an embodiment of the present invention
FIG. 3 is an image of a target product according to an embodiment of the present invention
FIG. 4 is a differential image obtained using a prior art print quality inspection method
FIG. 5 is a detection flow chart of the detection method of the present invention
FIG. 6 is a difference image obtained by the detection method of the present invention
FIG. 7 is a comparison graph of local effects of the difference images of FIGS. 4 and 6
Detailed Description
The present invention is further illustrated by the following examples, which are purely exemplary and do not limit the scope of the invention as defined by the appended claims, which those skilled in the art may interpret after reading the present disclosure.
Examples
To address the problems of existing image detection methods, the invention provides a printing quality detection method with accurate image alignment. It effectively aligns the target product image with the template image, avoids spurious defects in the difference image caused by image matching problems even when the image has some deformation or distortion, and meets the system's high real-time requirement for detection time. The printing quality inspection flow of the present invention is described below with reference to fig. 5. As shown in the figure, the detection method of this embodiment specifically includes the following steps:
Step 1: photographing and scanning the target product, and processing the result with application software on a graphics workstation to obtain a complete target product image; photographing and scanning the template in the same way to obtain a complete template image; and generating an image pyramid for each, where the pyramid layers run from high resolution at the bottom to low resolution at the top. For example, an image pyramid may comprise 3 layers, the top-layer pyramid image having the lowest resolution (e.g. 256) and the bottom-layer pyramid image the highest (e.g. 1024);
Step 2: performing traversal matching between the target product image and the corresponding layer of the template image, starting from the topmost pyramid layer: dividing the template image into several matching blocks; for each matching block, traversing the target product image in the same layer to find the corresponding matching block; determining the mapping relation from the feature points (or pixel points) in each matching block to the corresponding points in the template image; and transferring the upper-layer mapping relation to the lower pyramid layer for further search and matching, until the mapping relation between each region of the target product image and the template image is determined;
Step 3: based on the mapping relation determined for each matching block, mapping all pixel values in the matching blocks of the target product image to the corresponding pixel points in the template image. For example, in the bottom-layer image the target image is divided into a number of 8 × 8 matching blocks; for a given matching block, the corresponding block is determined from the feature points inside it. If, say, the block matches the region of 64 pixels centered on the four points (410,205), (411,205), (410,206), and (411,206) in the template image, the pixel values of the pixels in the block are mapped to the corresponding pixels of the template image, and the corresponding pixel values in the template image are updated.
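The block-to-template mapping of step 3 can be sketched as follows. The function name and the per-block offset dictionary are assumptions for illustration; one plausible convention, consistent with the gray difference formula later in the description, is that (u, v) is the offset of the target image relative to the template, so a target block at (bi, bj) maps back to (bi-u, bj-v) in the template frame.

```python
import numpy as np

def map_blocks(target, offsets, block=8):
    """Write each matched block of the target image into the template's
    coordinate frame. offsets maps a block's top-left corner (bi, bj) to
    its matched offset (u, v); missing blocks default to (0, 0)."""
    mapped = np.zeros_like(target)
    h, w = target.shape
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            u, v = offsets.get((bi, bj), (0, 0))
            src = target[bi:bi + block, bj:bj + block]
            r0 = max(bi - u, 0)              # template-frame position of block
            c0 = max(bj - v, 0)
            dst = mapped[r0:r0 + src.shape[0], c0:c0 + src.shape[1]]
            dst[...] = src[:dst.shape[0], :dst.shape[1]]
    return mapped
```

With all offsets zero the mapped image equals the target; a block with a nonzero offset is shifted back into registration with the template before the difference operation of step 4.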
Step 4, performing differential operation on the mapped target product image and the template image, and outputting a differential image;
Step 5: performing defect detection based on the difference image. First, a threshold is set: regions of the difference image whose gray difference is larger than the threshold are marked as candidate defect regions, and regions whose gray difference is smaller are marked as non-defect regions. Second, blob analysis of region features filters out the candidates that do not meet the requirements; the remaining candidate defect regions are recorded as real defects.
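A minimal sketch of step 5's threshold segmentation and blob analysis, assuming a numpy difference image; the function name, the 4-connectivity, and the minimum-area criterion are illustrative choices (the patent only says region features are used to filter candidates).

```python
import numpy as np

def detect_defects(diff, thresh=30, min_area=4):
    """Threshold the difference image into candidate defect regions, then
    keep only 4-connected blobs of at least min_area pixels."""
    mask = diff > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    defects = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, blob = [(i, j)], []      # flood fill one blob
                seen[i, j] = True
                while stack:
                    r, c = stack.pop()
                    blob.append((r, c))
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            stack.append((nr, nc))
                if len(blob) >= min_area:       # blob analysis: area filter
                    defects.append(blob)
    return defects
```

A 3 × 3 bright patch survives the area filter while an isolated bright pixel is rejected as noise.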
Steps 2 and 3 are the core steps of the present invention, and therefore, a detailed description thereof will be made herein. As shown in fig. 5, a dotted line frame is a specific flow corresponding to step 2 in this embodiment, which is specifically as follows:
2.1: generating the image data of each layer of the template image pyramid from the template image, and the image data of each layer of the target image pyramid from the target product image. The template image pyramid is generated as follows: record the resolution of the template image as W*H, and take the template image of resolution W*H as the first layer of the template image pyramid; to generate the second layer, merge every 2*2 pixels of the first-layer pyramid image into 1 pixel whose gray value is the mean gray of the 4 pixels, and take the merged image of resolution W/2*H/2 as the second pyramid layer; generate the third layer from the second-layer image by merging every 4 pixels into 1 pixel in the same way, giving a pyramid image of resolution W/4*H/4; and repeat the procedure to generate the fourth layer (resolution W/8*H/8) and the fifth layer (resolution W/16*H/16), completing a template image pyramid of 5 layers in total. The target image pyramid is then generated in the same way, taking the target product image as its first layer and generating its second to fifth layers by the same method; the resolutions of its first to fifth layers are, in order, W*H, W/2*H/2, W/4*H/4, W/8*H/8, and W/16*H/16.
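The 2*2 mean-merge pyramid of step 2.1 can be sketched directly with numpy slicing (the function name is illustrative; odd-sized edges are cropped, an assumption the patent does not address):

```python
import numpy as np

def build_pyramid(img, levels=5):
    """Build an image pyramid: each level merges 2x2 pixel blocks into one
    pixel whose gray value is the mean of the four, so level k has
    resolution (W / 2**k) x (H / 2**k)."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2   # crop odd edge
        a = a[:h, :w]
        merged = (a[0::2, 0::2] + a[0::2, 1::2] +
                  a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
        pyr.append(merged)
    return pyr   # pyr[0] is full resolution, pyr[-1] the coarsest layer
```

For an 8 × 8 image the second layer is 4 × 4 and its top-left pixel is the mean of the original top-left 2 × 2 block.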
2.2: calling the layer with the lowest resolution (the topmost layer) of the target image pyramid and dividing it into several matching blocks, then calling the layer with the lowest resolution (the topmost layer) of the template image pyramid and dividing it into matching blocks in the same way;
2.3: for each matching block in this layer, extracting the feature points inside the block from the product image pyramid, matching the extracted feature points against the template image, finding the position in the template pyramid image corresponding to each matching block of the product image in this layer, and thereby determining the mapping relation between the two;
2.4: transferring the mapping relation of each matching block in the upper layer to the next layer, repeating step 2.3 in that layer with the matched feature points as centers, and determining the mapping relation between the matching blocks of this layer, until the mapping relation of the matching blocks in the lowest-layer image is determined.
Specifically, the transfer works as follows: for each matching block of the upper-layer image, determine the position of each of its feature points in the template image pyramid and the product image pyramid; determine the scaling relation between the upper pyramid layer and the next layer; based on that scaling relation, compute the transferred position of each feature point of the corresponding matching block in the next-layer template and product images; then, taking the transferred position of the feature points in each matching block as the initial position, search in its vicinity to find the best matching position of each matching block in that layer. The transfer is based on the scaling. For example, suppose that at the topmost layer a feature point lies at (100,20) in the target image and at (115,31) in the template image, and the scaling of the next layer relative to this layer is 4:1, where the ratio refers to the total number of pixels, i.e. the number of pixels in each of the length and width directions doubles. Then, in the next layer, the feature point of the target image is searched for near position (200,40), and the feature point of the template image near position (230,62).
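The "transfer then search nearby" refinement can be sketched as follows, assuming numpy grayscale images, a mean-squared gray difference criterion, and illustrative names: the transferred offset (coordinates doubled from the coarser layer) is the initial position, and a small neighbourhood around it is scanned for the best match.

```python
import numpy as np

def refine_match(I, J, block, init_offset, radius=2):
    """Search a (2*radius+1)^2 neighbourhood around the transferred offset
    and return the offset (u, v) minimising the mean squared gray
    difference between template block R in I and the shifted block in J."""
    r0, c0, h, w = block                         # top-left corner and size of R
    tpl = I[r0:r0 + h, c0:c0 + w].astype(np.float32)
    best, best_d = init_offset, np.inf
    for du in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            u, v = init_offset[0] + du, init_offset[1] + dv
            if r0 + u < 0 or c0 + v < 0:
                continue                          # window would leave the image
            win = J[r0 + u:r0 + u + h, c0 + v:c0 + v + w]
            if win.shape != tpl.shape:
                continue
            d = float(np.mean((tpl - win.astype(np.float32)) ** 2))
            if d < best_d:
                best, best_d = (u, v), d
    return best
```

For a product image shifted one pixel down and right, starting the search from the transferred offset (0, 0) recovers the true offset (1, 1).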
In the further searching process, determining the mapping relation between the feature points in each matching block of the target product image and the corresponding points of the template image in the same layer includes: calculating the gray difference between a preset number of pixels near the feature point in the target product image and a preset number of pixels near the matching point in the template image, and computing the mapping relation of the feature points with the minimum gray difference or gray variance as the optimization criterion.
Preferably, the method further includes: during feature point matching in the current pyramid layer, if the gray variance among the feature points is smaller than a preset value, skipping one pyramid layer.
Preferably, in the printing quality detection system, the contents of the target product image and the template image are consistent, and considering the deviation of small-range deformation, in the invention, the pyramid layer number is reduced, and the calculation amount of the block matching part is reduced.
Preferably, only the one or two pyramid layers with the largest resolution need to be kept, which halves the computation; patterns with a deviation within 20 pixels can still be aligned, and deviations in the printing detection system fall within this range, so the modification does not affect the accuracy of alignment between the mapped image and the template image.
Preferably, the feature point matching in step 3 and the difference operation in step 4 use the simplest gray difference method to compute similarity, which optimizes the computation time and increases the detection speed.
The optimized gray difference is computed as follows:
1. According to the position of the matching block in the image, take 8 pixels of 8-bit image data at a time from the template image and from the target product image, convert them to 32-bit floating point numbers, and store them in a register.
2. Subtract the detection data from the template data in the register and square the difference.
3. Accumulate the squared values in the register in sequence.
4. Finally, add the 8 partial sums in the register together and take the mean to obtain the gray difference of the two matching blocks.
The calculation formula is shown as follows, wherein I is a template image, J is a target product image, the gray difference of two image blocks is D, and the smaller the D value is, the higher the similarity of the two matching blocks is.
D = (1/n) * sum over (r,c) in R of [ I(r,c) - J(r+u, c+v) ]^2
In the formula, R represents a matching block on the template image, (R, c) is the position of the pixel in the matching block on the template image, and n is the number of pixels in the matching block. (u, v) is the positional offset of the target product image relative to the template image.
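As a sanity check of the formula, D can be sketched directly in numpy (the function name is illustrative; the patent's optimized version processes 8 pixels at a time in registers, while numpy vectorises the whole block at once):

```python
import numpy as np

def gray_difference(I, J, block, offset):
    """Mean squared gray difference D between matching block R on the
    template image I and the same-size block of the target product image J
    shifted by (u, v); a smaller D means a higher similarity."""
    r0, c0, h, w = block                  # top-left corner and size of R in I
    u, v = offset                         # offset of J relative to I
    tpl = I[r0:r0 + h, c0:c0 + w].astype(np.float32)
    det = J[r0 + u:r0 + u + h, c0 + v:c0 + v + w].astype(np.float32)
    return float(np.mean((tpl - det) ** 2))
```

A constant gray offset of 2 between the two blocks gives D = 4, the squared difference averaged over the block.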
Comparative example
Next, a description will be given of an inspection flow in the conventional print quality inspection system with reference to the drawings. Fig. 1 shows a flow chart of a prior art detection method. As shown in the figure, the detection process specifically includes the following steps:
step 1: acquiring image data of a target product and image data of a template image through scanning;
step 2: aligning the target product image with the template image to obtain target product image data, wherein the target product image corresponds to the template image;
step 3, performing difference operation on the aligned target product image and the template image, and outputting a difference image, wherein the part with large gray value on the difference image is the position where the defect possibly exists;
and 4, detecting defects based on the difference image.
In step 4, real defects are screened out by threshold segmentation of the difference image and blob analysis of region features. In the threshold segmentation, a threshold is set: regions whose gray difference on the difference image is larger than the threshold are marked as candidate defect regions, and regions whose gray difference is smaller are marked as non-defect regions. The blob analysis of region features then filters out the candidates that do not meet the requirements, leaving the real defects.
As shown in fig. 1, the dashed line frame is a specific flow corresponding to step 2, and the step further includes the following steps:
a-1, dividing the target product image and the template image into n multiplied by n areas with equal size, and sequencing a plurality of areas of the target product image according to a certain sequence;
a-2, selecting a first area from the target product image, finding matching points in the template image through characteristic point comparison, determining a mapping relation between the matching points, and calculating an affine transformation matrix;
a-3, performing geometric transformation on the target product image through the obtained transformation matrix, and aligning a first region of the transformed target product image with a corresponding part in the template image;
a-4, repeating the operations from the step a-1 to the step a-3 for n multiplied by n areas in sequence until all the areas are matched and aligned.
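The affine-matrix computation of step a-2 can be sketched as a least-squares fit over matched point pairs. This is a hedged stand-in for the prior methods' feature-point pipeline: in practice RANSAC would first reject outlier pairs, and here the matrix is simply fit to all given pairs; names are illustrative.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix M mapping src_pts to dst_pts,
    i.e. dst ~ M @ [x, y, 1]^T for each source point (x, y)."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    return M.T                                      # 2x3 affine matrix
```

Fitting points related by "scale by 2, translate by (3, 4)" recovers exactly that matrix, which is then applied to every pixel of the region, the uniform treatment whose limits the present invention addresses.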
Taking the product in fig. 3 as an example, fig. 2 is the template image, 736 pixels wide and 464 pixels high, and the computed difference image is shown in fig. 4. In fig. 4, positions with large gray values (white) are misaligned, and positions with small gray values (black) are well aligned. In some of the marked white boxes in fig. 4, the gray values are small and the patterns are aligned; in the patterns at the upper and lower parts of the product, as in the other marked white boxes, the gray values are large and the patterns cannot be aligned, the largest measured deviation on the image reaching 2 pixels. It can be seen that the existing partitioning algorithm cannot fundamentally solve the system's local misalignment problem.
The detection time in this comparative example was about 5 milliseconds.
Still taking the product in fig. 3 as an example, the print quality inspection time using the local reduction method of the present invention is 4.4 ms, below the time requirement of the print quality inspection system.
Fig. 6 shows the difference between the target product image after mapping by the local reduction method of the present invention and the template image. Compared with the difference image of the existing detection result in fig. 4, which does not use the algorithm of the present invention, the area with large gray values is significantly reduced. The algorithm of the invention thus achieves local alignment of the images, and the deviation between the aligned patterns is less than 1 pixel.
To compare the prior art and the present invention more intuitively and show the effect on the offsets, fig. 7 gives partial enlargements of the difference images of fig. 4 and fig. 6. The two large images at the top of fig. 7 are, from left to right, the existing difference image and the difference image obtained with the present invention. The same positions in the two images carry 5 marked white boxes; the contents of the boxes are cropped out as the small images below the large ones, and the leading marks of the small images correspond to the marks of the boxes in the large images. In each pair of small images, the left is the crop of the existing difference image and the right is the crop, at the same position, of the difference image using the algorithm of the invention. The small images show more clearly the alignment effect before and after the local reduction method of the present invention.
While the principles of the invention have been described in detail in connection with its preferred embodiments, those skilled in the art will understand that the foregoing embodiments are merely illustrative implementations of the invention and do not limit its scope. The details of the embodiments shall not be interpreted as limiting; any obvious change based on the technical solution of the invention, such as an equivalent alteration or simple substitution, falls within its spirit and scope.

Claims (9)

1. A printing detection method for realizing accurate matching of images is characterized by comprising the following steps:
step 1, collecting a target product image and a template image, and respectively generating respective image pyramids based on the target product image and the template image;
step 2, starting from the topmost layer of the image pyramid, searching for the matching block in the target product image corresponding to each matching block of the template image, determining the mapping relation between each feature point in each matching block of the target product image and the corresponding point in the template image at the same layer, transferring the upper-layer mapping relation to the next layer of the pyramid, and continuing the matching search until the mapping relation between the target product image and each region of the template image is determined;
step 3, mapping all pixel values in the matching blocks of the target product image to the corresponding pixel points in the template image, based on the mapping relations determined for the matching blocks;
step 4, performing a difference operation on the mapped target product image and the template image, and outputting a difference image;
and 5, detecting defects based on the difference image.
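The five steps of claim 1 can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation: the 2x2-mean pyramid, the choice of four levels, and the fixed defect threshold are all assumptions made for the sketch.

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Step 1 sketch: level 0 is full resolution; each higher level
    halves the resolution by 2x2 block averaging (assumed scheme)."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        pyr.append((a[0::2, 0::2] + a[1::2, 0::2] +
                    a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return pyr

def difference_image(mapped_target, template):
    """Step 4 sketch: pixel-wise absolute difference of the
    already-aligned (mapped) target image and the template."""
    return np.abs(mapped_target.astype(np.float32) -
                  template.astype(np.float32))

def detect_defects(diff, thresh=30.0):
    """Step 5 sketch: threshold the difference image to obtain
    candidate defect pixels (the fixed threshold is illustrative)."""
    return diff > thresh
```

A perfectly mapped target would yield an all-zero difference image and no candidate defects; steps 2 and 3 (block matching and mapping) are what drive the difference toward zero for defect-free regions.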
2. The printing detection method for realizing the accurate matching of the images according to claim 1, wherein the step 2 comprises the following steps:
2.1, generating image data of each layer in a template image pyramid by using the template image;
2.2, calling a layer with the minimum resolution in the template image pyramid, and dividing the layer into matching blocks;
2.3, for each matching block in this layer of the template image pyramid, searching for the corresponding matching block in the product image pyramid, and obtaining the mapping relation of each matching block in this layer between the template image pyramid and the product image pyramid;
and 2.4, transferring the mapping relation of each matching block in the upper layer to the next layer, repeating step 2.3 in that layer to match the matching blocks, and determining the mapping relation of the matching blocks between the product image and the template image of that layer, until the mapping relation of the matching blocks in the lowest-layer image is determined.
3. The print detection method according to claim 1,
the step 2.4 comprises: for each matching block in the upper-layer image, determining the position of each feature point of the matching block in the template image pyramid and in the product image pyramid; determining the scaling relation between the upper-layer pyramid image and the next-layer pyramid image; based on the scaling relation, determining the transferred position of each feature point in the next-layer template image and in the corresponding matching block of the product image; and, taking the transferred position of the feature points of each matching block as the initial position, searching nearby to find the best matching position of each matching block in that layer.
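The coarse-to-fine transfer of claim 3 can be sketched as follows; the factor-2 scaling between adjacent pyramid levels, the patch size, and the use of the sum of absolute gray differences (SAD) as the matching cost are all assumptions for illustration, not details confirmed by the patent.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute gray differences between two patches."""
    return float(np.abs(a.astype(np.float32) - b.astype(np.float32)).sum())

def refine_match(template, product, tpos, coarse_ppos, patch=8, radius=2):
    """Transfer a coarse match to the next (finer) pyramid level.

    tpos: top-left corner of the template patch at this level.
    coarse_ppos: matched position found at the coarser level; the
    factor-2 scaling between adjacent levels is an assumed convention.
    Searches a (2*radius+1)^2 window around the transferred position
    and returns the product position with minimal SAD."""
    ty, tx = tpos
    ref = template[ty:ty + patch, tx:tx + patch]
    y0, x0 = coarse_ppos[0] * 2, coarse_ppos[1] * 2  # transferred position
    best, best_pos = None, (y0, x0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if (y < 0 or x < 0 or
                    y + patch > product.shape[0] or
                    x + patch > product.shape[1]):
                continue
            cost = sad(ref, product[y:y + patch, x:x + patch])
            if best is None or cost < best:
                best, best_pos = cost, (y, x)
    return best_pos
```

Because the search window at each level only needs to absorb the residual error of the coarser level, a small radius suffices, which is what makes the pyramid scheme fast.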
4. The print inspection method of claim 1, wherein the image resolution increases from the top pyramid level to the bottom in both the target product image pyramid and the template image pyramid.
5. The printing inspection method according to claim 1, wherein the step 5 comprises delimiting candidate defect regions in the difference image by threshold segmentation, and screening real defects by blob analysis of the region characteristics.
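The blob analysis of claim 5 can be sketched as connected-component labeling of the thresholded difference image followed by filtering on region characteristics; the 4-connectivity and the minimum-area criterion used here are illustrative assumptions, as the claim does not specify which region characteristics are used.

```python
import numpy as np
from collections import deque

def blobs(mask, min_area=4):
    """4-connected component labeling of a binary candidate-defect
    mask, keeping regions whose pixel area is at least min_area
    (area is one example of a screening characteristic)."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            q, comp = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while q:  # breadth-first flood fill of one component
                y, x = q.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(comp) >= min_area:
                regions.append(comp)
    return regions
```

Isolated noise pixels surviving the threshold are discarded by the area criterion, while genuine defects, which span multiple connected pixels, are retained.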
6. The print inspection method of claim 1, wherein the step of determining the mapping relation between the feature points in each matching block of the target product image and the corresponding points in the template image at the same layer comprises: calculating the gray difference between a predetermined number of pixels near the feature point in the target product image and a predetermined number of pixels near the matching point in the template image, and determining the mapping relation of the feature point by taking the minimum gray difference or the minimum gray variance as the optimization criterion.
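The two optimization criteria of claim 6 can be sketched as follows; the 5x5 neighborhood and the candidate-offset enumeration are assumptions for illustration. The variance criterion is worth noting because, unlike the raw gray difference, it is insensitive to a uniform brightness offset between the product and the template.

```python
import numpy as np

def gray_difference(t_patch, p_patch):
    """Mean absolute gray difference between the two neighborhoods."""
    d = t_patch.astype(np.float32) - p_patch.astype(np.float32)
    return float(np.abs(d).mean())

def gray_variance(t_patch, p_patch):
    """Variance of the gray differences; a constant brightness
    offset contributes nothing to this cost."""
    d = t_patch.astype(np.float32) - p_patch.astype(np.float32)
    return float(d.var())

def best_offset(template, product, pos, n=5, radius=1, cost=gray_variance):
    """Evaluate the chosen cost over an n x n neighborhood for each
    candidate offset and return the offset minimizing it."""
    y, x = pos
    ref = template[y:y + n, x:x + n]
    cands = {}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            p = product[y + dy:y + dy + n, x + dx:x + dx + n]
            if p.shape == ref.shape:
                cands[(dy, dx)] = cost(ref, p)
    return min(cands, key=cands.get)
```

With a uniformly brighter product image, `gray_variance` still locates the correct (zero) offset, whereas `gray_difference` would report a nonzero cost everywhere.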
7. The print inspection method of claim 1, further comprising, during the feature point matching on the current-layer pyramid image, skipping one or more pyramid levels if the gray variance among a plurality of feature points is less than a predetermined value.
8. The print inspection method of claim 2, further comprising, for distortion causing misalignment over a small range, aligning the offset patterns to within a certain number of pixels.
9. The printing detection method for realizing precise image matching according to claim 6, wherein the feature point matching in step 3 and the difference operation in step 4 are performed by using a gray scale difference method.
CN202210774009.2A 2022-07-01 2022-07-01 Printing detection method for realizing accurate image matching Pending CN115100158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210774009.2A CN115100158A (en) 2022-07-01 2022-07-01 Printing detection method for realizing accurate image matching


Publications (1)

Publication Number Publication Date
CN115100158A true CN115100158A (en) 2022-09-23

Family

ID=83294655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210774009.2A Pending CN115100158A (en) 2022-07-01 2022-07-01 Printing detection method for realizing accurate image matching

Country Status (1)

Country Link
CN (1) CN115100158A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108982508A (en) * 2018-05-23 2018-12-11 江苏农林职业技术学院 A kind of plastic-sealed body IC chip defect inspection method based on feature templates matching and deep learning
CN111028213A (en) * 2019-12-04 2020-04-17 北大方正集团有限公司 Image defect detection method and device, electronic equipment and storage medium
CN112508826A (en) * 2020-11-16 2021-03-16 哈尔滨工业大学(深圳) Printed matter defect detection method based on feature registration and gradient shape matching fusion
CN113592831A (en) * 2021-08-05 2021-11-02 北京方正印捷数码技术有限公司 Method and device for detecting printing error and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yin Lina: "Research on visual inspection technology for printing defects on wine bottle surfaces", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, 15 January 2019 (2019-01-15), pages 38-55 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894841A (en) * 2023-09-08 2023-10-17 山东天鼎舟工业科技有限公司 Visual detection method for quality of alloy shell of gearbox
CN116894841B (en) * 2023-09-08 2023-11-28 山东天鼎舟工业科技有限公司 Visual detection method for quality of alloy shell of gearbox
CN117495852A (en) * 2023-12-29 2024-02-02 天津中荣印刷科技有限公司 Digital printing quality detection method based on image analysis
CN117495852B (en) * 2023-12-29 2024-05-28 天津中荣印刷科技有限公司 Digital printing quality detection method based on image analysis

Similar Documents

Publication Publication Date Title
CN105608671B (en) A kind of image split-joint method based on SURF algorithm
CN115100158A (en) Printing detection method for realizing accurate image matching
CN109410207B (en) NCC (non-return control) feature-based unmanned aerial vehicle line inspection image transmission line detection method
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
CN110136120B (en) Silk-screen printing sample plate size measuring method based on machine vision
CN115170669B (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN110111387B (en) Dial plate characteristic-based pointer meter positioning and reading method
CN115184380B (en) Method for detecting abnormity of welding spots of printed circuit board based on machine vision
CN111340701A (en) Circuit board image splicing method for screening matching points based on clustering method
CN112734761B (en) Industrial product image boundary contour extraction method
CN112419260A (en) PCB character area defect detection method
CN114022439A (en) Flexible circuit board defect detection method based on morphological image processing
CN116704516B (en) Visual inspection method for water-soluble fertilizer package
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN116433733A (en) Registration method and device between optical image and infrared image of circuit board
CN110009615A (en) The detection method and detection device of image angle point
CN111354047A (en) Camera module positioning method and system based on computer vision
CN112183325A (en) Road vehicle detection method based on image comparison
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN113406111B (en) Defect detection method and device based on structural light field video stream
CN114187253A (en) Circuit board part installation detection method
CN112833821B (en) Differential geometric three-dimensional micro-vision detection system and method for high-density IC welding spots
CN115311293B (en) Rapid matching method for printed matter pattern
CN114092448B (en) Plug-in electrolytic capacitor mixed detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination