CN115311293B - Rapid matching method for printed matter pattern - Google Patents
Rapid matching method for printed matter pattern
- Publication number
- CN115311293B CN115311293B CN202211245047.5A CN202211245047A CN115311293B CN 115311293 B CN115311293 B CN 115311293B CN 202211245047 A CN202211245047 A CN 202211245047A CN 115311293 B CN115311293 B CN 115311293B
- Authority
- CN
- China
- Prior art keywords
- key point
- image
- key
- points
- key points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30144—Printing quality
Abstract
The invention relates to the technical field of data processing, and in particular to a rapid matching method for printed matter patterns. The method uses electronic equipment for identification and completes pattern matching of printed matter with an artificial intelligence system in the production field. First, a printed matter image is collected by electronic equipment and processed to obtain a plurality of key points. Then, the distance, shape and center rate relationships between the superpixel block to which each key point belongs and its adjacent superpixel blocks are processed to obtain a key point credibility for each key point; high-quality key points are screened out based on this credibility, and key point descriptors are generated. Finally, the printed matter image and the printed matter template image are matched based on the key point descriptors. In a printed matter pattern matching scene, an invariance intersection is obtained through superpixel segmentation, high-quality key points are screened out according to the relationship between the key points and the invariance intersection, and these high-quality key points are used to realize rapid matching of printed matter patterns.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular to a rapid matching method for printed matter patterns.
Background
Printed matter is influenced by many factors during production; inaccurate overprinting or printing errors exceeding the specification tolerance easily generate large numbers of defective products and bring great losses to enterprises, so real-time matching and timely loss control are very important. When a pattern printing problem is found, post-printing processing is stopped, avoiding waste and loss. Manual quality inspection of printed matter is inefficient and its results are affected by subjective factors. With the rapid development of machine vision, matching between a template and a printed pattern can be completed with high accuracy by constructing key point descriptors based on the SIFT algorithm on a computer; these descriptors are robust to scale, illumination and rotation. However, this technique has two limitations: first, the Gaussian difference operator is sensitive to noise, so the detected key points contain unstable points that need to be removed; second, generating and matching descriptors for a large number of key points takes a long time, so key point matching cannot be performed quickly.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a method for quickly matching a pattern of a printed matter, which adopts the following technical solutions:
obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched;
constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks;
calculating the center rate of the key points according to the position relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the superpixel block to which the key point belongs and the adjacent superpixel blocks; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel blocks, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility;
screening a plurality of high-quality key points from the plurality of key points based on the key point credibility; generating key point descriptors based on the high-quality key points; and matching the key point descriptors corresponding to the image to be matched with the key point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image.
Preferably, the detecting local extreme points in the gaussian difference space and screening out key points in the difference image includes:
and for any group corresponding to the Gaussian difference pyramid, detecting the difference image of the same group adjacent up and down except the difference image of the bottom layer and the top layer in each group, selecting any local extreme point as a target point, comparing the target point with the adjacent local extreme point of the same scale and a plurality of local extreme points corresponding to the adjacent scales up and down, and when the pixel value of the target point is the maximum value or the minimum value of the plurality of points, taking the target point as a key point.
Preferably, the calculating the center rate of the key point according to the position relationship between the key point and the invariance intersection includes:
the calculation formula of the heart rate is as follows:
wherein, the first and the second end of the pipe are connected with each other,the center rate of the ith key point; />Is an exponential function with a natural constant as a base number; />The abscissa of the ith key point is; />The ordinate of the ith key point; />The abscissa of the geometric center point of the invariance intersection to which the ith key point belongs; />The ordinate of the geometric center point of the invariance intersection to which the ith key point belongs.
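One plausible reading of the center rate — assumed here to be the exponential of the negative Euclidean distance between the key point and the centroid of its invariance intersection, since the original formula image is not reproduced — can be sketched as:

```python
import math

def center_rate(keypoint, intersection_centroid):
    """Center rate of a key point: exp of the negative Euclidean distance
    to the geometric centre of its invariance intersection, so a key
    point lying exactly on the centre scores 1.0 and the score decays
    toward 0 with distance."""
    dx = keypoint[0] - intersection_centroid[0]
    dy = keypoint[1] - intersection_centroid[1]
    return math.exp(-math.hypot(dx, dy))
```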
Preferably, the obtaining of the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block includes:
the calculation formula of the distance difference degree is as follows:
wherein the content of the first and second substances,the distance difference degree of the ith key point; />The number of adjacent superpixel blocks of the superpixel block to which the ith key point belongs; />The abscissa of the ith key point is; />Is the ordinate of the ith key point; />The abscissa of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the ith key point belongs; />Is the ordinate of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the ith key point belongs.
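One plausible reading — the mean Euclidean distance from the key point to the centroids of the neighbouring superpixel blocks (an assumption, since the original formula image is not reproduced) — can be sketched as:

```python
import math

def distance_difference(keypoint, neighbour_centroids):
    """Mean Euclidean distance from a key point to the geometric centres
    of the superpixel blocks adjacent to the block it belongs to."""
    total = 0.0
    for cx, cy in neighbour_centroids:
        total += math.hypot(keypoint[0] - cx, keypoint[1] - cy)
    return total / len(neighbour_centroids)
```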
Preferably, the calculating the shape difference degree according to the area intersection ratio of the super pixel block to which the key point belongs and the adjacent super pixel block includes:
eroding the larger of the superpixel block to which the key point belongs and the adjacent superpixel block until the area difference between the two superpixel blocks is minimal;
aligning the centroid of the superpixel block to which the key point belongs with the centroid of the adjacent superpixel block and superposing the two blocks to obtain the corresponding area intersection ratio;
the calculation formula of the shape difference degree is as follows:
wherein the content of the first and second substances,the shape difference degree of the ith key point; />The number of adjacent superpixels of the superpixel block to which the ith key point belongs; />The area intersection ratio of the super pixel block to which the ith key point belongs and the corresponding t-th adjacent super pixel block.
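A minimal sketch of this comparison, under two stated assumptions: blocks are represented as sets of integer pixel coordinates aligned by rounding the centroid offset (the erosion step of the patent is omitted), and the shape difference is taken as the mean complement of the intersection-over-union:

```python
def area_iou(block_a, block_b):
    """Intersection-over-union of two superpixel blocks (pixel sets)
    after translating block_b so the centroids roughly coincide."""
    def centroid(block):
        n = len(block)
        return (sum(x for x, _ in block) / n, sum(y for _, y in block) / n)
    ax, ay = centroid(block_a)
    bx, by = centroid(block_b)
    dx, dy = round(ax - bx), round(ay - by)
    shifted = {(x + dx, y + dy) for x, y in block_b}
    return len(block_a & shifted) / len(block_a | shifted)

def shape_difference(block, neighbours):
    # mean complement of the IoU over all adjacent blocks (assumed form)
    return sum(1 - area_iou(block, nb) for nb in neighbours) / len(neighbours)
```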
Preferably, the screening of the plurality of high-quality key points from the plurality of key points based on the confidence level of the key points includes:
obtaining the key point credibility of each key point, sorting the credibilities in ascending order to obtain a key point credibility sequence, and finding the lower quartile of the sequence; the key points whose credibility lies at or above the lower quartile are the high-quality key points.
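A minimal sketch of this screening rule (using a simple index-based lower-quartile estimate, since the patent does not specify a quartile convention):

```python
def screen_high_quality(keypoints, credibilities):
    """Keep the key points whose credibility lies at or above the lower
    quartile of the ascending credibility sequence."""
    sequence = sorted(credibilities)
    lower_quartile = sequence[len(sequence) // 4]  # simple estimate
    return [kp for kp, c in zip(keypoints, credibilities)
            if c >= lower_quartile]
```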
Preferably, the matching the key point descriptor corresponding to the image to be matched and the key point descriptor corresponding to the print template image to obtain the matching degree of the print image includes:
for a key point descriptor in a key point descriptor set corresponding to the printed matter template image, acquiring a Euclidean distance between the key point descriptor corresponding to the printed matter template image and a key point descriptor corresponding to the image to be matched, wherein the key point descriptor with the Euclidean distance smaller than a preset distance threshold is a matching point; and the proportion of the matching points is the matching degree of the image of the printed matter.
Preferably, the preprocessing the image of the printed matter to obtain the image to be matched includes:
and performing semantic segmentation on the presswork image to obtain an image to be matched.
Preferably, the constructing a gaussian difference pyramid from the image to be matched includes:
and constructing a Gaussian difference pyramid from the image to be matched based on an SIFT algorithm.
The embodiment of the invention at least has the following beneficial effects:
the invention relates to the technical field of data processing. Firstly, obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched; constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks; calculating the center rate of the key points according to the position relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel block, wherein the product of the distance difference degree, the heart rate and the shape difference degree is the key point credibility degree; screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating a key point descriptor based on the high-quality key points; key points are positioned based on an SIFT algorithm, and images with the same size and different fuzzy degrees are subjected to superpixel segmentation to obtain an invariance intersection; and (4) screening the key points according to the relation between the key points and the invariant feature set and the superpixel blocks, searching high-quality invariant key points, and constructing a few and accurate key point descriptor. 
The key point credibility, derived from the position relation between the key point and the invariance intersection, is used as a weight; key points with larger weights have better quality. The advantages of this weighting are: it is targeted, so strongly invariant key points are obtained after screening; and restricting attention to high-quality key points accelerates matching and realizes rapid detection of printed pattern quality.
The key point descriptors corresponding to the image to be matched are matched with the key point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image. Based on the SIFT algorithm, in a printed matter pattern matching scene, the invariance intersection is obtained through superpixel segmentation, the center rate of each key point is calculated, the difference degree is calculated in terms of both distance and shape, and the key points are weighted by combining the center rate and the difference degrees; key points with larger weights are more important. High-quality key points are screened out according to the relationship between the key points and the invariance intersection, and these high-quality key points are used to realize rapid detection of printed pattern quality.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for fast matching printed patterns according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its intended objects and their effects, the following gives a detailed description of a rapid matching method for printed matter patterns, its specific implementation, structure, features and effects, in conjunction with the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment of the invention provides a specific implementation of a rapid matching method for printed matter patterns, which is suitable for xx. It addresses two problems of constructing key point descriptors based on the SIFT algorithm: the Gaussian difference operator is sensitive, so the obtained key points contain unstable points that need to be removed; and generating and matching descriptors for many key points takes a long time, so key point matching cannot be performed quickly. On the basis of SIFT key points, the method performs superpixel segmentation on images of the same size but different blur degrees to obtain the invariance intersection of the superpixel segmentation blocks, calculates the center rate of each key point, calculates the difference degree in terms of both distance and shape, and weights the key points by combining the center rate and the difference degrees; key points with larger weights are more important. The advantages of this weighting are: it is targeted, so strongly invariant key points are obtained after screening; and restricting attention to high-quality key points accelerates matching and realizes rapid detection of printed pattern quality.
The following describes a specific scheme of the method for quickly matching a printed pattern provided by the present invention in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for fast matching of a printed pattern according to an embodiment of the present invention is shown, where the method includes the following steps:
and S100, acquiring a printed matter image, and preprocessing the printed matter image to obtain an image to be matched.
First, the original image, i.e., the printed matter template image produced after design, is obtained. In the printing process flow, printed patterns are output by the printing equipment after printing and laid flat on the production and transport line for further processing, so a camera acquisition system is placed above the output port to acquire printed matter images of the printed products downward in real time. The captured printed matter image may contain background regions of no interest, such as the production line. Therefore, the printed matter image is preprocessed to obtain the image to be matched; specifically, semantic segmentation is performed on the printed matter image, the pixel values of the uninteresting background region are set to 0, and the pixel values of the printed pattern region of interest are kept unchanged, obtaining the image to be matched of the printed matter.
The processes of obtaining the printed matter template image, acquiring the printed matter image in real time, and preprocessing the printed matter image are thus completed.
S200, constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; and acquiring invariance intersection corresponding to the overlapping area of the superpixel blocks.
A Gaussian difference pyramid is constructed from the image to be matched, specifically based on the SIFT algorithm, and a plurality of difference images are acquired. It should be noted that constructing the Gaussian difference pyramid with the SIFT algorithm is a known technique for those skilled in the art and is not described here again. The SIFT algorithm comprises applying Gaussian blur to the image to be matched, constructing the difference pyramid, locating the key points, and constructing the key point descriptors. Key points should have strong invariance, but because the Gaussian difference operator is sensitive to noise and edges, there are falsely detected key points with weak stability, so the detected key points require further examination. Superpixel segmentation of the same specification is performed on the difference images with different blur degrees to determine the invariance intersection. The degree of invariance is determined according to the position relation between the key points, the invariance intersection, and the superpixel blocks, and high-quality key points are screened out from the plurality of key points.
The specific process of screening out high-quality key points comprises the following steps: (1) construct a Gaussian difference pyramid and detect extreme values to obtain key points; superpixel-segment the difference images and obtain the invariance intersection based on the difference images of the same group; (2) calculate the center rate according to the position relation between the key points and the invariance intersection, and combine it with the distances to the superpixel blocks to obtain the key point credibility; (3) screen a plurality of high-quality key points from the key points based on the credibility.
Constructing the Gaussian difference pyramid and detecting extreme values to obtain key points, and superpixel-segmenting the difference images to acquire the invariance intersection from the difference images of the same group, proceeds as follows:
the Gaussian pyramid comprises two parameters of group number and layer number, images in different groups are different in size, and fuzzy parameters of images in different layers are different. And (4) differentiating the images with the same size to obtain a Gaussian difference pyramid, and detecting and positioning the key points on each layer of differential image through extreme values. Furthermore, superpixel segmentation can aggregate pixels with similar characteristics, superpixel segmentation with the same specification is carried out on a plurality of difference images in the same group, intersection is obtained through a series of superpixel blocks in corresponding positions, invariance intersection can be obtained, and pixel points in the invariance intersection have stability which key points should have. Specifically, the method comprises the following steps:
constructing a Gaussian pyramid based on SIFT algorithm, and obtaining the Gaussian pyramid by downsamplingImages to be matched of different resolution in groups, each group having>The layers are images to be matched. Differentiating the images to be matched of all layers with the same size and different fuzzy degrees in each group to obtain a Gaussian difference pyramid, and obtaining the->Group difference images, each group having->The layered difference image.
Local extreme points are detected in the Gaussian difference space, and the key points in the difference images are screened out, ensuring that the detected local extreme points are extreme points both in the scale plane and in the scale space. For any group of the Gaussian difference pyramid, every difference image except the bottom and top layers is examined together with the adjacent difference images above and below it in the same group: any local extreme point is selected as a target point and compared with its 8 neighbours at the same scale and the 9 corresponding points in each of the two adjacent scales; if the target point is the maximum or minimum of these 26 comparison points, it is regarded as a key point of the image at that scale. Each layer of difference image thus has its located key points.
Further, performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining an invariance intersection corresponding to the overlapping area of the superpixel blocks, specifically:
by the first in a Gaussian difference pyramidThe group difference image is the study object and is compared with the group>Performing superpixel segmentation on the layer differential image, dividing the layer differential image into 100 regions, marking serial numbers of superpixel blocks in one differential image from top left to bottom right to obtain a series of superpixel blocks with the same size but different fuzzy degrees, recording the area of the superpixel blocks as S, and judging whether the area of the superpixel blocks is greater than or equal to S or not according to the standard value of the area of the super pixel blocks>Group is/are>The serial number in the layer image is->Is counted and/or counted>. The specific process of performing superpixel segmentation comprises the following steps: (1) Setting the number of super pixel blocks->Uniformly distributing seed points to obtain an initialized clustering center; (2) Calculating gradient values of all pixel points in the neighborhood of the initialized clustering center 3 x 3, moving the seed points to the place with the minimum gradient, and finishing the updating of the clustering center; searching in the neighborhood of each cluster center point, calculating the distance between the pixel point and the cluster center, and distributingA category label; (3) If for all +>And (4) each clustering center is iterated and updated in the previous step, the position of each clustering center is kept unchanged, and the iteration is finished.
Further, in the n-th group of the Gaussian difference pyramid, consider the group of superpixel blocks with sequence number k. Although the regions represented by the same sequence number are located close to each other, the sizes and shapes of the blocks differ somewhat between layers, which is caused by the different blur degrees of the images. Superpixel segmentation classifies local image pixels, and as the blur degree changes, the details in the image also change; only the regions of stable extremum pixel points do not change greatly. Therefore, in the n-th group, the overlapping area of the group of superpixel blocks with sequence number k is calculated to obtain the invariance intersection; that is, the invariance intersection corresponding to the overlapping region of the superpixel blocks is obtained.
The formula for computing the invariance intersection is:

B(n, k) = S(n, 1, k) ∩ S(n, 2, k) ∩ … ∩ S(n, m, k)

wherein B(n, k) is the invariance intersection of the k-th superpixel block in the n-th group; m is the number of difference image layers in the group to which the superpixel block belongs; and S(n, j, k) is the region of the superpixel block with sequence number k in the j-th layer image of the n-th group.
Taking each group of the Gaussian difference pyramid as the study object, for the superpixel blocks with serial number k, the invariance intersection is computed over difference image layers 1 to m, obtaining the invariance intersection U_k of that region across difference images with different degrees of blur. The pixels in this intersection have strong invariance and therefore should be used as key points, which facilitates matching.
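As a sketch, the invariance intersection U_k can be computed from per-layer superpixel label maps by intersecting the pixel masks of the block numbered k. The helper name and the label-map input format are assumptions, not the patent's notation:

```python
import numpy as np

def invariance_intersection(label_maps, k):
    """Overlap region U_k of the superpixel with serial number k across all
    difference-image layers of one group (equally sized integer label maps)."""
    mask = label_maps[0] == k
    for lab in label_maps[1:]:
        mask &= (lab == k)           # keep only pixels shared by every layer
    return mask                      # boolean map of the invariance intersection
```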
This completes the process of constructing the Gaussian difference pyramid, locating key points in the difference images, and obtaining the invariance intersection through superpixel segmentation.
Step S300, calculating the center rate of the key points according to the positional relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the superpixel block to which a key point belongs and its adjacent superpixel blocks; and calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel blocks, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility degree.
In the Gaussian difference pyramid, key points are preliminarily obtained through extremum detection; however, owing to the sensitivity of Gaussian operators, wrongly detected key points with weak stability exist, and further screening is needed. Superpixel segmentation is performed on difference images of the same specification, and the intersection of the series of segmentation results with the same group and the same serial number is calculated to obtain the invariance intersection containing stable pixel points. The center rate is calculated from the positional relation between the preliminarily obtained key points and the invariance intersection to measure stability, and key points with strong stability are screened out. The superpixel block to which each key point belongs is then compared with the surrounding adjacent superpixel blocks, and the difference degree is calculated in terms of both distance and shape to measure the extremum. Analyzing the center rate together with the distance difference degree and the shape difference degree of the superpixel blocks yields the credibility of each key point, and key points of higher quality are selected to complete the rapid matching of the pattern.
Firstly, calculating the center rate of the key points according to the position relation between the key points detected in the Gaussian difference pyramid and the invariance intersection, and preparing for subsequently calculating the credibility of the key points.
Within each group of the Gaussian difference pyramid, the image resolution is the same, and each layer of difference image has key points obtained through extremum detection. By means of vertical projection, the key points obtained from layer 2 to the top layer of the group are projected onto the layer-1 image, and each detected key point is marked. All the invariance intersections found in the previous step are likewise shown on layer 1 as the effective regions of the key points, where points located inside a region have stronger stability and points located at the edge of a region have weaker stability. Therefore, the center rate is calculated according to the positional relation between a key point and the invariance intersection. For a key point not contained in any invariance intersection, the key point lies outside the effective regions and its center rate is 0; for a key point inside some invariance intersection, the geometric center point of that invariance intersection is acquired, and the center rate is determined from the distance between the key point and the geometric center.
The formula for calculating the center rate is:

Z_i = exp( -sqrt( (x_i - x_o)^2 + (y_i - y_o)^2 ) )

wherein Z_i is the center rate of the i-th key point; exp is an exponential function with the natural constant e as its base; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; x_o is the abscissa of the geometric center point of the invariance intersection to which the i-th key point belongs; y_o is the ordinate of the geometric center point of the invariance intersection to which the i-th key point belongs.
The center rate corresponding to a key point is determined by the distance from the key point to the geometric center point of its invariance intersection: the greater that distance, the smaller the center rate Z_i; conversely, the smaller the distance, the larger the corresponding center rate. The same operation is performed on the key points in each group of difference images in the Gaussian difference pyramid to obtain the center rates of all key points.
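A minimal sketch of this step, assuming the invariance intersections are available as boolean masks on the layer-1 image (the function name `center_rate` is our own):

```python
import numpy as np

def center_rate(keypoint, intersections):
    """Center rate Z_i = exp(-distance to the geometric center of the
    invariance intersection containing the key point); 0 if in none."""
    y, x = keypoint
    for mask in intersections:
        if mask[y, x]:
            pts = np.argwhere(mask)
            cy, cx = pts.mean(axis=0)          # geometric center of the intersection
            return float(np.exp(-np.hypot(y - cy, x - cx)))
    return 0.0                                 # outside every effective region
```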
Further, the center rate of the key points, the distance difference degree and the shape difference degree of the adjacent super-pixel blocks are combined to obtain the credibility degree of the key points. Obtaining the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block; and calculating the shape difference degree according to the area intersection ratio of the super pixel block to which the key point belongs and the adjacent super pixel block.
The higher the center rate of a key point, the closer the point lies to the center of the invariance intersection and the stronger its stability. On this basis, SIFT key points also need to be extrema; that is, they need to differ markedly from the pixel points of adjacent regions. For a key point, the distance difference degree and the shape difference degree are calculated with respect to the geometric centers of its adjacent superpixel blocks. The larger the distance difference degree and the shape difference degree, the larger the difference between the key point and the surrounding pixels, and the stronger the extremum. Accordingly, only key points whose center rate is greater than a preset center rate threshold undergo the subsequent key point credibility calculation; key points whose center rate is less than or equal to the preset center rate threshold do not. In the embodiment of the present invention, the value of the preset center rate threshold is 0.7; in other embodiments an implementer may adjust the value according to the actual situation. Since a high-quality key point is an extreme point with strong stability, the center rate Z_i of the key point is combined with the distance difference degree and the shape difference degree relative to the surrounding adjacent superpixel blocks to obtain the credibility degree R_i of the key point. Specifically: the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility degree.
The calculation formula of the distance difference degree is:

J_i = (1/n) Σ_{t=1}^{n} sqrt( (x_i - x_t)^2 + (y_i - y_t)^2 )

wherein J_i is the distance difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; x_t is the abscissa of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the i-th key point belongs; y_t is the ordinate of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the i-th key point belongs.
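The averaged-distance formula above reduces to a one-liner; a sketch (assuming neighbor centers are given as (row, column) pairs):

```python
import numpy as np

def distance_difference(keypoint, neighbor_centers):
    """J_i: mean Euclidean distance from the key point to the geometric
    centers of its adjacent superpixel blocks."""
    y, x = keypoint
    return float(np.mean([np.hypot(y - cy, x - cx) for cy, cx in neighbor_centers]))
```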
The method for acquiring the shape difference degree comprises: firstly, eroding the larger of the superpixel block to which the key point belongs and the adjacent superpixel block until the area difference between the two superpixel blocks is minimal; then, for the eroded superpixel blocks, superposing the centroid of the superpixel block to which the key point belongs with the centroid of the adjacent superpixel block, and obtaining the corresponding area intersection ratio.
The calculation formula of the shape difference degree is:

X_i = (1/n) Σ_{t=1}^{n} (1 - M_t)

wherein X_i is the shape difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; M_t is the area intersection ratio of the superpixel block to which the i-th key point belongs and the corresponding t-th adjacent superpixel block.
Wherein the value range of the number n of adjacent superpixel blocks of the superpixel block to which the key point belongs is (2, 9). The key point credibility degree R_i in the formula depends on the product of the distance difference degree, the shape difference degree and the center rate. The distance difference degree is the average distance from the key point to the geometric centers of the surrounding adjacent superpixel blocks; the shape difference degree is the shape difference between the key point's superpixel block and the surrounding adjacent superpixel blocks. For a superpixel block, its neighborhood consists of the closest superpixel blocks that surround it, and the adjacent superpixel blocks are these neighborhood superpixel blocks. The greater the distance difference degree and the shape difference degree, the stronger the extremum; the greater the center rate, the stronger the stability; and the greater the credibility degree R_i obtained as the product of the three, the more pronounced the extremum and stability of the key point, the better its quality, and the higher its credibility.
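The erosion-then-align-then-IoU procedure for the shape difference degree can be sketched as follows. The 4-neighborhood erosion, the shift-by-centroid alignment, and the (1 - IoU) averaging are our reading of the description, labeled as assumptions:

```python
import numpy as np

def binary_erode(mask):
    """One step of 4-neighborhood binary erosion (no SciPy dependency)."""
    m = np.pad(mask, 1)
    return (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
            & m[1:-1, :-2] & m[1:-1, 2:])

def aligned_iou(a, b):
    """Erode the larger block until the area gap is minimal, superpose the
    centroids, and return the area intersection-over-union ratio."""
    a, b = a.astype(bool), b.astype(bool)
    while a.sum() > b.sum():                     # shrink whichever block is larger
        e = binary_erode(a)
        if abs(int(e.sum()) - int(b.sum())) >= abs(int(a.sum()) - int(b.sum())) or not e.any():
            break
        a = e
    while b.sum() > a.sum():
        e = binary_erode(b)
        if abs(int(e.sum()) - int(a.sum())) >= abs(int(b.sum()) - int(a.sum())) or not e.any():
            break
        b = e
    ca = np.argwhere(a).mean(axis=0)             # centroid of the key point's block
    cb = np.argwhere(b).mean(axis=0)             # centroid of the neighbor block
    dy, dx = np.round(ca - cb).astype(int)       # shift needed to superpose them
    b_shift = np.zeros_like(b)
    ys, xs = np.nonzero(b)
    ys, xs = ys + dy, xs + dx
    keep = (ys >= 0) & (ys < b.shape[0]) & (xs >= 0) & (xs < b.shape[1])
    b_shift[ys[keep], xs[keep]] = True
    union = (a | b_shift).sum()
    return (a & b_shift).sum() / union if union else 0.0

def shape_difference(block, neighbors):
    """X_i: mean of (1 - IoU) over the n adjacent superpixel blocks."""
    return float(np.mean([1.0 - aligned_iou(block, nb) for nb in neighbors]))
```

The credibility degree then follows directly as the product R_i = J_i * Z_i * X_i.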
This completes the process of obtaining the key point credibility degree by combining the center rate of the key points within the invariance intersection with the distance difference degree and the shape difference degree between the key points and the adjacent superpixel blocks.
S400, screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating a key point descriptor based on the high-quality key points; and matching the key point descriptor corresponding to the image to be matched with the key point descriptor corresponding to the presswork template image to obtain the matching degree of the presswork image.
And screening key points based on the credibility of the key points, constructing a key point descriptor, and calculating the matching degree of the printed product and the template pattern.
Firstly, a plurality of high-quality key points are screened from the plurality of key points based on the key point credibility degree, specifically: the credibility degree of each key point is obtained, the credibility degrees are sorted from small to large to obtain a key point credibility sequence, and the lower quartile of this sequence is found; the key points whose credibility degree lies at or above the lower quartile are the high-quality key points. A quartile, also called a quartile point, refers in statistics to one of the three division points obtained when all values are arranged from small to large and divided into four equal parts; the value at the 25% position is called the lower quartile, and the value at the 75% position is called the upper quartile.
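The lower-quartile screening step can be sketched in a few lines (the helper name is hypothetical; NumPy's default linear interpolation stands in for the patent's unspecified quartile convention):

```python
import numpy as np

def screen_keypoints(keypoints, credibilities):
    """Keep the key points whose credibility degree lies at or above the
    lower quartile of the sorted credibility sequence."""
    q1 = np.percentile(credibilities, 25)        # lower quartile (25% position)
    return [kp for kp, r in zip(keypoints, credibilities) if r >= q1]
```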
Based on the high-quality key points, key point descriptors are generated by the general SIFT algorithm, yielding a key point descriptor set. The printed matter template image and the image to be matched corresponding to the printed matter image each have their own key point descriptor set.
And matching the key point descriptor corresponding to the image to be matched with the key point descriptor corresponding to the presswork template image to obtain the matching degree of the presswork image. Specifically, the method comprises the following steps: for a key point descriptor in a key point descriptor set corresponding to a printed matter template image, acquiring the Euclidean distance between the key point descriptor corresponding to the printed matter template image and the key point descriptor corresponding to the image to be matched, wherein the key point descriptor with the Euclidean distance smaller than a preset distance threshold is a matching point; and the proportion of the matching points is the matching degree of the image of the printed matter. The preset distance threshold is set by the practitioner according to different actual situations.
That is, the key point descriptor sets of the printed matter template image and of the image to be matched are obtained; for each key point descriptor of the printed matter template image, the Euclidean distance is used to search for a matching point of the current key point descriptor in the key point descriptor set of the image to be matched; the proportion of matching points, i.e., the share of matching points in the whole key point descriptor set, is taken as the matching degree of the printed matter pattern. After the matching degree of the printed matter image is obtained, the printed matter image is matched with the plurality of printed matter template images based on that degree.
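The matching-degree computation described above can be sketched as follows, with descriptors stored row-wise in NumPy arrays (function and parameter names are assumptions):

```python
import numpy as np

def matching_degree(template_desc, image_desc, dist_thresh):
    """Fraction of template descriptors whose nearest descriptor in the image
    to be matched (Euclidean distance) falls below the preset threshold."""
    matched = 0
    for d in template_desc:
        dists = np.linalg.norm(image_desc - d, axis=1)   # distance to every candidate
        if dists.min() < dist_thresh:
            matched += 1
    return matched / len(template_desc)
```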
In summary, the present invention relates to the field of data processing technology. Firstly, a printed matter image is obtained and preprocessed to obtain an image to be matched; a Gaussian difference pyramid is constructed from the image to be matched, yielding a plurality of difference images; local extreme points are detected in the Gaussian difference space, and key points are screened out in the difference images; superpixel segmentation is performed on any layer of difference image in any group of the Gaussian difference pyramid to obtain a plurality of superpixel blocks; the invariance intersection corresponding to the overlapping area of the superpixel blocks is obtained; the center rate of the key points is calculated according to the positional relation between the key points and the invariance intersection; the distance difference degree is obtained according to the distance between the superpixel block to which a key point belongs and the adjacent superpixel blocks; the shape difference degree is calculated according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel blocks, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility degree; a plurality of high-quality key points are screened from the plurality of key points based on the key point credibility degree; key point descriptors are generated based on the high-quality key points; and the key point descriptors corresponding to the image to be matched are matched with the key point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image.
The method is based on the SIFT algorithm, invariance intersection is obtained through superpixel segmentation under a printed matter pattern matching scene, high-quality key points are screened out according to the relation between the key points and the invariance intersection, and the quality of the printed matter pattern is rapidly detected by using the high-quality key points.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. The processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.
Claims (8)
1. A method for fast matching of a printed pattern, the method comprising the steps of:
obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched;
constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks;
calculating the center rate of the key points according to the positional relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the superpixel block to which the key point belongs and the adjacent superpixel block; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel block, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility degree;
screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating a key point descriptor based on the high-quality key points; matching the key point descriptor corresponding to the image to be matched with the key point descriptor corresponding to the presswork template image to obtain the matching degree of the presswork image;
wherein the calculation formula of the center rate is:

Z_i = exp( -sqrt( (x_i - x_o)^2 + (y_i - y_o)^2 ) )

wherein Z_i is the center rate of the i-th key point; exp is an exponential function with the natural constant e as its base; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; x_o is the abscissa of the geometric center point of the invariance intersection to which the i-th key point belongs; y_o is the ordinate of the geometric center point of the invariance intersection to which the i-th key point belongs.
2. The method for fast matching of printed patterns according to claim 1, wherein the detecting local extreme points in the gaussian difference space and screening out key points in the difference image comprises:
and for any group corresponding to the Gaussian difference pyramid, detecting the difference image of the same group adjacent up and down except the difference image of the bottom layer and the top layer in each group, selecting any local extreme point as a target point, comparing the target point with the adjacent local extreme point of the same scale and a plurality of local extreme points corresponding to the adjacent scales up and down, and when the pixel value of the target point is the maximum value or the minimum value of the plurality of points, taking the target point as a key point.
3. The method for fast matching of printed patterns according to claim 1, wherein the obtaining of the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block comprises:
the calculation formula of the distance difference degree is as follows:
wherein the content of the first and second substances,the distance difference degree of the ith key point; />The number of adjacent superpixel blocks of the superpixel block to which the ith key point belongs; />The abscissa of the ith key point is; />The ordinate of the ith key point; />The abscissa of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the ith key point belongs; />Is the ordinate of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the ith key point belongs.
4. The method for fast matching of printed matter patterns according to claim 1, wherein the calculating of the degree of shape difference according to the area intersection ratio of the super pixel block to which the key point belongs and the adjacent super pixel block comprises:
corroding the super-pixel block with larger area in the super-pixel block to which the key point belongs and the adjacent super-pixel blocks until the area difference value of the two super-pixel blocks is minimum;
superposing the centroid of the super-pixel block to which the key point belongs and the centroid of the adjacent super-pixel block to obtain the corresponding area intersection ratio;
the calculation formula of the shape difference degree is:

X_i = (1/n) Σ_{t=1}^{n} (1 - M_t)

wherein X_i is the shape difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; M_t is the area intersection ratio of the superpixel block to which the i-th key point belongs and the corresponding t-th adjacent superpixel block.
5. The method as claimed in claim 1, wherein said screening a plurality of high-quality key points from a plurality of key points based on the credibility of the key points comprises:
obtaining the credibility degree of each key point, sorting the credibility degrees from small to large to obtain a key point credibility sequence, and finding the lower quartile of the key point credibility sequence, wherein the key points whose credibility degree lies at or above the lower quartile are the high-quality key points.
6. The method for quickly matching a pattern of a printed matter according to claim 1, wherein the step of matching the key point descriptor corresponding to the image to be matched with the key point descriptor corresponding to the template image of the printed matter to obtain the matching degree of the image of the printed matter comprises the steps of:
for the key point descriptors in the key point descriptor set corresponding to the printed matter template image, acquiring Euclidean distances between the key point descriptors corresponding to the printed matter template image and the key point descriptors corresponding to the image to be matched, wherein the key point descriptors with the Euclidean distances smaller than a preset distance threshold are matched points; and the proportion of the matching points is the matching degree of the image of the printed matter.
7. The method for quickly matching a printed pattern according to claim 1, wherein the preprocessing the printed image to obtain an image to be matched comprises:
and performing semantic segmentation on the presswork image to obtain an image to be matched.
8. The method for fast matching of printed patterns according to claim 1, wherein the constructing of the gaussian difference pyramid from the image to be matched comprises:
and constructing a Gaussian difference pyramid from the image to be matched based on an SIFT algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211245047.5A CN115311293B (en) | 2022-10-12 | 2022-10-12 | Rapid matching method for printed matter pattern |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115311293A CN115311293A (en) | 2022-11-08 |
CN115311293B true CN115311293B (en) | 2023-03-28 |
Family
ID=83868281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211245047.5A Active CN115311293B (en) | 2022-10-12 | 2022-10-12 | Rapid matching method for printed matter pattern |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115311293B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117764912A (en) * | 2023-11-08 | 2024-03-26 | 东莞市中钢模具有限公司 | Visual inspection method for deformation abnormality of automobile part die casting die |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709948A (en) * | 2016-12-21 | 2017-05-24 | 浙江大学 | Quick binocular stereo matching method based on superpixel segmentation |
CN106960442A (en) * | 2017-03-01 | 2017-07-18 | 东华大学 | Based on the infrared night robot vision wide view-field three-D construction method of monocular |
CN115100014B (en) * | 2022-06-24 | 2023-03-24 | 山东省人工智能研究院 | Multi-level perception-based social network image copying and moving counterfeiting detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||