CN115311293B - Rapid matching method for printed matter pattern - Google Patents


Info

Publication number
CN115311293B (application CN202211245047.5A)
Authority
CN
China
Prior art keywords
key point, image, key points
Legal status: Active
Application number
CN202211245047.5A
Other languages
Chinese (zh)
Other versions
CN115311293A (en)
Inventor
丁如荣
Current Assignee
Nantong Dongding Color Printing Packaging Factory
Original Assignee
Nantong Dongding Color Printing Packaging Factory
Priority date
Filing date
Publication date
Application filed by Nantong Dongding Color Printing Packaging Factory
Priority to CN202211245047.5A
Publication of CN115311293A
Application granted
Publication of CN115311293B
Legal status: Active

Classifications

    • G06T7/0004 Industrial image inspection
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T7/11 Region-based segmentation
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/75 Organisation of the matching processes, e.g. coarse-fine or multi-scale approaches
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30144 Printing quality


Abstract

The invention relates to the technical field of data processing, in particular to a rapid matching method for printed matter patterns. The method uses electronic equipment for identification and completes pattern matching of printed matter with an artificial-intelligence system in the production field. First, a printed matter image is collected with electronic equipment and processed to obtain a number of key points. Next, the distance, shape, and center rate of the superpixel block to which each key point belongs, relative to its adjacent superpixel blocks, are processed to obtain a key-point credibility for each key point; high-quality key points are screened out based on this credibility, and key-point descriptors are generated from them. Finally, the printed matter image is matched against the printed matter template image based on the key-point descriptors. In a printed-matter pattern-matching scene, an invariance intersection is obtained through superpixel segmentation, high-quality key points are screened out according to the relationship between the key points and the invariance intersection, and these high-quality key points are used to achieve rapid matching of printed matter patterns.

Description

Rapid matching method for printed matter pattern
Technical Field
The invention relates to the technical field of data processing, in particular to a rapid matching method for printed matter patterns.
Background
Printed matter is affected by many factors during production; inaccurate overprinting or printing errors exceeding specification easily generate large numbers of defective products and bring great losses to enterprises, so real-time matching and timely loss-stopping are very important. When a problem is found in pattern printing, post-printing processing is stopped so that waste and loss can be avoided. Manual quality inspection of printed matter is inefficient and its results are affected by subjective factors. With the rapid development of machine vision, matching between a template and a printed pattern can be completed with high accuracy by constructing key-point descriptors with a computer based on the SIFT algorithm; these descriptors are robust to scale, illumination and rotation angle. However, this technique has two limitations: first, the difference-of-Gaussians operator is sensitive, so the detected key points include unstable points that need to be removed; second, generating and matching descriptors for many key points takes a long time, so key-point matching cannot be performed quickly.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a method for quickly matching a pattern of a printed matter, which adopts the following technical solutions:
obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched;
constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks;
calculating the center rate of the key points according to the positional relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the superpixel block to which the key point belongs and the adjacent superpixel blocks; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel blocks, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key-point credibility;
screening a number of high-quality key points from the key points based on the key-point credibility; generating key-point descriptors based on the high-quality key points; and matching the key-point descriptors corresponding to the image to be matched with the key-point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image.
Preferably, the detecting local extreme points in the gaussian difference space and screening out key points in the difference image includes:
and for any group corresponding to the Gaussian difference pyramid, detecting the difference image of the same group adjacent up and down except the difference image of the bottom layer and the top layer in each group, selecting any local extreme point as a target point, comparing the target point with the adjacent local extreme point of the same scale and a plurality of local extreme points corresponding to the adjacent scales up and down, and when the pixel value of the target point is the maximum value or the minimum value of the plurality of points, taking the target point as a key point.
Preferably, the calculating the center rate of the key point according to the position relationship between the key point and the invariance intersection includes:
the calculation formula of the heart rate is as follows:
Figure 44037DEST_PATH_IMAGE002
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE003
the center rate of the ith key point; />
Figure 102123DEST_PATH_IMAGE004
Is an exponential function with a natural constant as a base number; />
Figure DEST_PATH_IMAGE005
The abscissa of the ith key point is; />
Figure 475335DEST_PATH_IMAGE006
The ordinate of the ith key point; />
Figure DEST_PATH_IMAGE007
The abscissa of the geometric center point of the invariance intersection to which the ith key point belongs; />
Figure 883927DEST_PATH_IMAGE008
The ordinate of the geometric center point of the invariance intersection to which the ith key point belongs.
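As a minimal sketch of the center-rate computation described above, assuming plain numpy and taking the functional form as an exponential decay of the key point's Euclidean distance to the intersection center (the function and argument names are illustrative, not the patent's):

```python
import numpy as np

def center_rate(keypoint, intersection_center):
    """Center rate of a key point: exp(-d), where d is the Euclidean
    distance from the key point to the geometric center of the
    invariance intersection it belongs to."""
    kx, ky = keypoint
    cx, cy = intersection_center
    return float(np.exp(-np.hypot(kx - cx, ky - cy)))
```

A key point lying exactly on the intersection center gets rate 1.0, and the rate decays toward 0 as the key point moves away from the center.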
Preferably, the obtaining of the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block includes:
the calculation formula of the distance difference degree is as follows:
Figure 26195DEST_PATH_IMAGE010
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE011
the distance difference degree of the ith key point; />
Figure 192865DEST_PATH_IMAGE012
The number of adjacent superpixel blocks of the superpixel block to which the ith key point belongs; />
Figure 256636DEST_PATH_IMAGE005
The abscissa of the ith key point is; />
Figure 924378DEST_PATH_IMAGE006
Is the ordinate of the ith key point; />
Figure DEST_PATH_IMAGE013
The abscissa of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the ith key point belongs; />
Figure 734202DEST_PATH_IMAGE014
Is the ordinate of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the ith key point belongs.
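The distance difference degree can be sketched in numpy as the mean Euclidean distance from the key point to the neighbouring block centers (an illustrative reading of the variable descriptions above; names are assumptions):

```python
import numpy as np

def distance_difference(keypoint, neighbor_centers):
    """Mean Euclidean distance from the key point to the geometric
    centers of the T adjacent superpixel blocks."""
    pts = np.asarray(neighbor_centers, dtype=float)
    d = np.hypot(pts[:, 0] - keypoint[0], pts[:, 1] - keypoint[1])
    return float(d.mean())
```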
Preferably, the calculating the shape difference degree according to the area intersection ratio of the super pixel block to which the key point belongs and the adjacent super pixel block includes:
eroding the larger of the superpixel block to which the key point belongs and each adjacent superpixel block until the area difference between the two superpixel blocks is minimal;
superposing the centroid of the superpixel block to which the key point belongs and the centroid of the adjacent superpixel block, and obtaining the corresponding area intersection ratio;
the calculation formula of the shape difference degree is as follows:
Figure 461987DEST_PATH_IMAGE016
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE017
the shape difference degree of the ith key point; />
Figure 294945DEST_PATH_IMAGE012
The number of adjacent superpixels of the superpixel block to which the ith key point belongs; />
Figure 828694DEST_PATH_IMAGE018
The area intersection ratio of the super pixel block to which the ith key point belongs and the corresponding t-th adjacent super pixel block.
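A hedged numpy sketch of the erode-align-intersect procedure described above; the 3x3 erosion, the centroid-rounding alignment, and the `1 - IoU` reading of "shape difference" are all assumptions for illustration, not the patent's exact implementation:

```python
import numpy as np

def erode3x3(mask):
    """One step of 3x3 binary erosion, implemented with array shifts only:
    a pixel survives when its whole 3x3 neighbourhood is foreground."""
    h, w = mask.shape
    p = np.pad(mask.astype(bool), 1)
    out = np.ones((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def aligned_area_iou(a, b):
    """Erode the larger of two superpixel masks while that keeps shrinking
    the area gap, superpose the centroids, then return the area IoU."""
    a, b = a.astype(bool), b.astype(bool)
    while a.sum() != b.sum():
        big_is_a = a.sum() > b.sum()
        big, small = (a, b) if big_is_a else (b, a)
        eroded = erode3x3(big)
        gap_now = abs(int(big.sum()) - int(small.sum()))
        gap_new = abs(int(eroded.sum()) - int(small.sum()))
        if not eroded.any() or gap_new >= gap_now:
            break  # erosion no longer reduces the area difference
        if big_is_a:
            a = eroded
        else:
            b = eroded
    # shift b so the two centroids coincide, then compute the IoU
    ca = np.array(np.nonzero(a), dtype=float).mean(axis=1)
    cb = np.array(np.nonzero(b), dtype=float).mean(axis=1)
    dy, dx = np.round(ca - cb).astype(int)
    b = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
    return float((a & b).sum() / (a | b).sum())

def shape_difference(ious):
    """Mean shape difference over the adjacent blocks; the 1 - IoU form is
    one plausible reading of the patent's description."""
    return float(np.mean([1.0 - r for r in ious]))
```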
Preferably, the screening of the plurality of high-quality key points from the plurality of key points based on the confidence level of the key points includes:
obtaining the key-point credibility of each key point, sorting the credibility values from small to large to obtain a credibility sequence, and finding the lower quartile of the sequence; the key points whose credibility lies above the lower quartile are the high-quality key points.
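The lower-quartile screening can be sketched with `np.percentile` (the linear interpolation of the percentile and the inclusive `>=` comparison are assumptions of this sketch):

```python
import numpy as np

def screen_keypoints(keypoints, credibility):
    """Keep the key points whose credibility lies at or above the lower
    quartile (25th percentile) of all credibility values."""
    cred = np.asarray(credibility, dtype=float)
    lower_quartile = np.percentile(cred, 25)
    return [kp for kp, c in zip(keypoints, cred) if c >= lower_quartile]
```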
Preferably, the matching of the key-point descriptors corresponding to the image to be matched with the key-point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image includes:
for each key-point descriptor in the descriptor set corresponding to the printed matter template image, obtaining the Euclidean distance between it and the key-point descriptors corresponding to the image to be matched, wherein a descriptor whose Euclidean distance is smaller than a preset distance threshold is a matching point; the proportion of matching points is the matching degree of the printed matter image.
Preferably, the preprocessing the image of the printed matter to obtain the image to be matched includes:
and performing semantic segmentation on the presswork image to obtain an image to be matched.
Preferably, the constructing a gaussian difference pyramid from the image to be matched includes:
and constructing a Gaussian difference pyramid from the image to be matched based on an SIFT algorithm.
The embodiment of the invention at least has the following beneficial effects:
the invention relates to the technical field of data processing. Firstly, obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched; constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks; calculating the center rate of the key points according to the position relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel block, wherein the product of the distance difference degree, the heart rate and the shape difference degree is the key point credibility degree; screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating a key point descriptor based on the high-quality key points; key points are positioned based on an SIFT algorithm, and images with the same size and different fuzzy degrees are subjected to superpixel segmentation to obtain an invariance intersection; and (4) screening the key points according to the relation between the key points and the invariant feature set and the superpixel blocks, searching high-quality invariant key points, and constructing a few and accurate key point descriptor. 
The key-point credibility, derived from the positional relation between the key points and the invariance intersection, is used as a weight, and key points with larger weight have better quality. The advantages of weighting are: it is targeted, so strongly invariant key points can be obtained after screening; and restricting attention to the high-quality key points speeds up matching and enables rapid detection of printed-pattern quality.
The key-point descriptors corresponding to the image to be matched are matched with those corresponding to the printed matter template image to obtain the matching degree of the printed matter image. Based on the SIFT algorithm, in a printed-matter pattern-matching scene, the method obtains the invariance intersection through superpixel segmentation, calculates the center rate of the key points, calculates the difference degree from the two aspects of distance and shape, and weights the key points by combining the center rate and the difference degree; key points with larger weight are more important. High-quality key points are screened out according to the relation between the key points and the invariance intersection and are used to achieve rapid detection of printed-pattern quality.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for fast matching printed patterns according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its intended objects and their effects, a detailed description of a rapid matching method for printed matter patterns, its specific implementation, structure, features and effects is given below in conjunction with the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment of the invention provides a specific implementation of a rapid matching method for printed matter patterns, which is suitable for xx. It addresses the problems that, when key-point descriptors are constructed based on the SIFT algorithm, the difference-of-Gaussians operator is sensitive, so the obtained key points contain unstable points that need to be removed, and that generating and matching descriptors for many key points takes a long time, so key-point matching cannot be performed quickly. On the basis of SIFT key points, the method performs superpixel segmentation on images of the same size but different blur degrees to obtain the invariance intersection of the superpixel blocks, calculates the center rate of the key points, calculates the difference degree from the two aspects of distance and shape, and weights the key points by combining the center rate and the difference degree; key points with larger weight are more important. The advantages of weighting are: it is targeted, so strongly invariant key points can be obtained after screening; and restricting attention to high-quality key points speeds up matching and enables rapid detection of printed-pattern quality.
The following describes a specific scheme of the method for quickly matching a printed pattern provided by the present invention in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for fast matching of a printed pattern according to an embodiment of the present invention is shown, where the method includes the following steps:
and S100, acquiring a printed matter image, and preprocessing the printed matter image to obtain an image to be matched.
First, the original image, namely the printed matter template image produced after design, is obtained. In the printing process flow, printed patterns are output by the printing equipment after printing and laid flat on the production and transport line for further processing, so a camera acquisition system is placed above the output port to acquire printed matter images of the printed products downward in real time. Since a captured printed matter image may contain background, such as the production line, that is not of interest, the printed matter image is preprocessed to obtain the image to be matched. Specifically: semantic segmentation is performed on the printed matter image, the pixel values of the uninteresting background region are set to 0, the pixel values of the printed pattern region of interest are kept unchanged, and the image to be matched is obtained.
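The background-zeroing step can be sketched as follows; `foreground_mask` here stands in for the output of whatever semantic-segmentation model is used, which the patent does not specify:

```python
import numpy as np

def mask_background(image, foreground_mask):
    """Zero out the uninteresting background region while keeping the
    printed-pattern region of interest unchanged. `foreground_mask`
    stands in for the output of any semantic-segmentation step."""
    out = image.copy()
    out[~foreground_mask] = 0
    return out
```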
This completes obtaining the printed matter template image, acquiring the printed matter image in real time, and preprocessing the printed matter image.
S200, constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; and acquiring invariance intersection corresponding to the overlapping area of the superpixel blocks.
A Gaussian difference pyramid is constructed from the images to be matched, specifically: the Gaussian difference pyramid is constructed from the image to be matched based on the SIFT algorithm, and a number of difference images are obtained. It should be noted that constructing the Gaussian difference pyramid with the SIFT algorithm is a technique well known to those skilled in the art and is not described again here. The SIFT algorithm comprises Gaussian-blurring the image to be matched, constructing the difference pyramid, locating key points, and constructing key-point descriptors. Key points should have strong invariance, but because the difference-of-Gaussians operator is sensitive to noise and edges, weakly stable key points are falsely detected, and the detected key points therefore require further examination. Superpixel segmentation of the same specification is performed on difference images of different blur degrees to determine the invariance intersection. The degree of invariance is then determined from the positional relation between the key points, the invariance intersection and the superpixel blocks, and high-quality key points are screened out from the key points.
The specific process of screening high-quality key points from the key points comprises the following steps: (1) construct the Gaussian difference pyramid and detect extrema to obtain key points; superpixel-segment the difference images and obtain the invariance intersection based on the difference images of the same group. (2) Calculate the center rate from the positional relation between the key points and the invariance intersection, then combine the center rate with the distances to the superpixel blocks to obtain the key-point credibility. (3) Screen a number of high-quality key points from the key points based on the key-point credibility.
Constructing the Gaussian difference pyramid and detecting extrema to obtain key points, superpixel-segmenting the difference images, and obtaining the invariance intersection based on the difference images of the same group proceed specifically as follows:
the Gaussian pyramid comprises two parameters of group number and layer number, images in different groups are different in size, and fuzzy parameters of images in different layers are different. And (4) differentiating the images with the same size to obtain a Gaussian difference pyramid, and detecting and positioning the key points on each layer of differential image through extreme values. Furthermore, superpixel segmentation can aggregate pixels with similar characteristics, superpixel segmentation with the same specification is carried out on a plurality of difference images in the same group, intersection is obtained through a series of superpixel blocks in corresponding positions, invariance intersection can be obtained, and pixel points in the invariance intersection have stability which key points should have. Specifically, the method comprises the following steps:
constructing a Gaussian pyramid based on SIFT algorithm, and obtaining the Gaussian pyramid by downsampling
Figure DEST_PATH_IMAGE019
Images to be matched of different resolution in groups, each group having>
Figure 173918DEST_PATH_IMAGE020
The layers are images to be matched. Differentiating the images to be matched of all layers with the same size and different fuzzy degrees in each group to obtain a Gaussian difference pyramid, and obtaining the->
Figure 462817DEST_PATH_IMAGE019
Group difference images, each group having->
Figure DEST_PATH_IMAGE021
The layered difference image.
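One group of the difference pyramid can be sketched in plain numpy; the separable Gaussian blur, the default `sigma0 = 1.6` and $\sqrt{2}$ scale step, and all names are illustrative assumptions rather than the patent's exact parameters:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in plain numpy (edge-replicated borders)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(img.astype(float), radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def dog_group(img, n_layers, sigma0=1.6, scale_step=2 ** 0.5):
    """One group of a difference-of-Gaussians pyramid: n_layers blurred
    versions of the same-size image, differenced pairwise into
    n_layers - 1 difference images."""
    blurred = [gaussian_blur(img, sigma0 * scale_step ** i) for i in range(n_layers)]
    return [blurred[i + 1] - blurred[i] for i in range(n_layers - 1)]
```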
Local extreme points are detected in the difference-of-Gaussians space, and key points are screened out in the difference images, ensuring that the detected local extreme points are extrema both in the image plane and in scale space. For any group of the Gaussian difference pyramid, each difference image except the bottom and top layers is examined together with the vertically adjacent difference images of the same group: a point is compared with its 8 neighbors at the same scale and the 9 corresponding points in each of the scales above and below, 26 points in all; if the point is the maximum or minimum among them, it is regarded as a key point of the image at that scale. Each layer of difference image thus has its located key points.
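The 26-neighbour extremum test can be sketched as follows (a strict-extremum check on a 3x3x3 block of the stacked difference images; names are assumptions, and real SIFT implementations add further refinement steps not shown here):

```python
import numpy as np

def is_keypoint(dog, layer, y, x):
    """Return True when pixel (y, x) of the given difference-image layer is
    a strict maximum or minimum over its 26 neighbours: 8 in the same
    layer plus 9 in each of the layers directly above and below."""
    v = dog[layer, y, x]
    cube = dog[layer - 1:layer + 2, y - 1:y + 2, x - 1:x + 2]  # 3x3x3 block
    unique = (cube == v).sum() == 1  # v occurs only at the centre
    return (v == cube.max() and unique) or (v == cube.min() and unique)
```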
Further, performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining an invariance intersection corresponding to the overlapping area of the superpixel blocks, specifically:
Taking the $m$-th group of difference images in the Gaussian difference pyramid as the object of study, superpixel segmentation is performed on each of its $n-1$ layers of difference images, dividing every layer into 100 regions. The superpixel blocks within one difference image are numbered from top-left to bottom-right, yielding series of superpixel blocks of the same numbering but different blur degrees; the superpixel block with sequence number $k$ in the $l$-th layer of the $m$-th group is denoted $S_{k,l}^{m}$, with $S$ recording its area.

The specific process of superpixel segmentation is: (1) set the number of superpixel blocks $K$ and distribute the seed points uniformly to obtain initialized cluster centers; (2) compute the gradient values of all pixels in the 3×3 neighborhood of each initialized cluster center and move the seed point to the position with the minimum gradient, completing the cluster-center update; then search in the neighborhood of each cluster center, compute the distance between each pixel and the cluster center, and assign category labels; (3) if, after an iteration of the previous step, the positions of all $K$ cluster centers remain unchanged, the iteration ends.
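Steps (1) and (2) of the segmentation procedure can be sketched in numpy; the square seed grid, the forward-difference gradient, and the function name are assumptions of this sketch, not the patent's exact scheme:

```python
import numpy as np

def init_slic_seeds(gray, seeds_per_axis):
    """Place seed points on a uniform grid, then move each seed to the
    minimum-gradient position inside its 3x3 neighbourhood."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gy, gx)  # gradient magnitude
    step_y, step_x = h // seeds_per_axis, w // seeds_per_axis
    seeds = []
    for i in range(seeds_per_axis):
        for j in range(seeds_per_axis):
            # initial grid position, kept one pixel away from the border
            y = min(max(step_y // 2 + i * step_y, 1), h - 2)
            x = min(max(step_x // 2 + j * step_x, 1), w - 2)
            patch = grad[y - 1:y + 2, x - 1:x + 2]
            dy, dx = np.unravel_index(patch.argmin(), patch.shape)
            seeds.append((y + dy - 1, x + dx - 1))
    return seeds
```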
Further, in the $m$-th group of the Gaussian difference pyramid, consider the group of superpixel blocks sharing sequence number $k$. Although the regions represented by the same sequence number are located close to one another, the size and shape of the blocks differ somewhat between layers; this is caused by the different blur degrees of the images. Superpixel segmentation classifies local image pixels, and as the blur degree changes, the details in the image change as well; only the regions around stable extremum pixels do not change greatly. Therefore, in the $m$-th group, the overlapping area of the group of superpixel blocks with sequence number $k$ is computed to obtain the invariance intersection; that is, the invariance intersection corresponding to the overlapping region of the superpixel blocks is obtained.
The formula for computing the invariance intersection is:

J(g,k) = A(g,1,k) ∩ A(g,2,k) ∩ … ∩ A(g,L,k)

wherein J(g,k) is the invariance intersection of the k-th superpixel block in the g-th group; L is the number of layers in the group to which the superpixel block belongs; A(g,l,k) is the region of the superpixel block with serial number k in the l-th layer image of the g-th group.
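The intersection above can be sketched as a pixelwise AND over boolean masks of the same-numbered superpixel block taken from each layer of the group (the function name is an assumption):

```python
import numpy as np

def invariance_intersection(masks):
    """J(g,k): pixelwise intersection of the boolean region masks
    A(g,1,k), ..., A(g,L,k) of one superpixel block across all layers."""
    out = masks[0].copy()
    for m in masks[1:]:
        out &= m        # a pixel survives only if it lies in every layer's block
    return out
```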
Taking the g-th group of the Gaussian difference pyramid as the object of study, for the superpixel block with serial number k, the invariance intersection is calculated over the differential image layers from 1 to L, obtaining the invariance intersection J(g,k) of the region across differential images with different blurring degrees. The pixels within this intersection possess strong invariance and are therefore the ones to be used as key points, which facilitates matching.
This completes the process of constructing the Gaussian difference pyramid, locating the key points in the difference images, and obtaining the invariance intersection through superpixel segmentation.
Step S300, calculating the center rate of the key points according to the position relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the superpixel block to which the key point belongs and the adjacent superpixel blocks; and calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel blocks, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility degree.
In the Gaussian difference pyramid, the key points are preliminarily obtained through extremum detection, but owing to the sensitivity of the Gaussian operator, wrongly detected key points with weak stability exist, and further screening is needed. Superpixel segmentation is performed on the differential images of the same specification, and the intersection of the series of segmentation results with the same group and the same serial number is calculated to obtain the invariance intersection containing the stable pixel points. The center rate is calculated from the position relation between each preliminarily obtained key point and the invariance intersection to measure stability, and key points with strong stability are screened out. The superpixel block to which a key point belongs is then compared with its surrounding adjacent superpixel blocks, and the difference degrees are calculated from the two aspects of distance and shape to measure the extremum property. By analyzing the center rate together with the distance difference degree and the shape difference degree of the superpixel blocks, the credibility of each key point is obtained, and key points of higher quality are selected to complete the rapid matching of the pattern.
Firstly, calculating the center rate of the key points according to the position relation between the key points detected in the Gaussian difference pyramid and the invariance intersection, and preparing for subsequently calculating the credibility of the key points.
In the g-th group of the Gaussian difference pyramid, the image resolution is the same, and each layer of differential image has key points obtained through extremum detection. By means of vertical projection, the key points obtained from layer 2 to layer L of the g-th group are projected onto the layer-1 image, and each detected key point is denoted P_i. All the invariance intersections J(g,k) obtained in the previous step are likewise represented on layer 1 as the effective regions of the key points, where points located inside a region have stronger stability and points located at the edge of a region have weaker stability. Therefore, the center rate is calculated according to the position relation between a key point P_i and the invariance intersections J(g,k). A key point that lies in no invariance intersection is outside the effective region, and its center rate is 0; for a key point P_i lying within a certain invariance intersection, the geometric center point of that invariance intersection is obtained, and the center rate C_i is determined according to the distance from the key point to the geometric center.
The formula for calculating the center rate is:

C_i = exp( -sqrt( (x_i - X_i)^2 + (y_i - Y_i)^2 ) )

wherein C_i is the center rate of the i-th key point; exp is the exponential function with the natural constant e as its base; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; X_i is the abscissa of the geometric center point of the invariance intersection to which the i-th key point belongs; Y_i is the ordinate of the geometric center point of the invariance intersection to which the i-th key point belongs.
The center rate C_i corresponding to a key point is determined by the distance from the key point to the geometric center point of its invariance intersection. The greater this distance, the smaller the center rate C_i; conversely, the smaller the distance from the key point to the geometric center point of the invariance intersection, the larger the corresponding center rate. The same operation is performed on the key points in each group of difference images in the Gaussian difference pyramid to obtain the center rates of all key points.
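A brief sketch of the center rate computation, assuming the exp(-distance) form reconstructed above (the function name is illustrative; points outside every invariance intersection receive rate 0 and are handled by the caller):

```python
import numpy as np

def center_rate(x, y, cx, cy):
    """C_i = exp(-d), where d is the Euclidean distance from key point
    (x, y) to the geometric centre (cx, cy) of its invariance intersection."""
    return float(np.exp(-np.hypot(x - cx, y - cy)))
```

A key point sitting exactly on the geometric center gets the maximum rate 1, and the rate decays toward 0 as the point approaches the region edge, matching the stated monotonic behaviour.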
Further, the center rate of the key points, the distance difference degree and the shape difference degree of the adjacent super-pixel blocks are combined to obtain the credibility degree of the key points. Obtaining the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block; and calculating the shape difference degree according to the area intersection ratio of the super pixel block to which the key point belongs and the adjacent super pixel block.
The higher the center rate of a key point, the closer the point is to the center of the invariance intersection, and the stronger its stability. On this basis, a SIFT key point must also be an extreme point, that is, it must differ considerably from the pixel points of its adjacent regions. For a key point P_i, the distance difference degree and the shape difference degree are calculated with respect to the geometric centers of its adjacent superpixel blocks. The larger the distance difference degree and the shape difference degree, the larger the difference between the key point and the surrounding pixels, and the stronger the extremum property. Accordingly, only key points whose center rate is greater than a preset center rate threshold undergo the subsequent key point credibility calculation, and key points whose center rate is less than or equal to the preset center rate threshold do not. In the embodiment of the present invention, the preset center rate threshold is 0.7; in other embodiments, the implementer may adjust this value according to the actual situation. Since a high-quality key point is an extreme point with strong stability, the center rate C_i of the key point is combined with the distance difference degree and the shape difference degree relative to the surrounding adjacent superpixel blocks to obtain the key point credibility R_i. Specifically, the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility.
The calculation formula of the distance difference degree is:

D_i = (1/n) * Σ_{t=1}^{n} sqrt( (x_i - a_t)^2 + (y_i - b_t)^2 )

wherein D_i is the distance difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; a_t is the abscissa of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the i-th key point belongs; b_t is the ordinate of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the i-th key point belongs.
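The distance difference degree can be sketched as the mean Euclidean distance from the key point to the geometric centers of its adjacent superpixel blocks (function name assumed):

```python
import numpy as np

def distance_difference(x, y, neighbor_centers):
    """D_i: mean Euclidean distance from key point (x, y) to the geometric
    centres (a_t, b_t) of its n adjacent superpixel blocks."""
    c = np.asarray(neighbor_centers, dtype=float)
    return float(np.mean(np.hypot(c[:, 0] - x, c[:, 1] - y)))
```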
The method for acquiring the shape difference degree comprises the following steps: firstly, corroding a superpixel block with a larger area in the superpixel block to which the key point belongs and the adjacent superpixel blocks until the area difference value of the two superpixel blocks is minimum; for the corroded super-pixel block, the centroid of the super-pixel block to which the key point belongs and the centroid of the adjacent super-pixel block are overlapped, and the corresponding area intersection ratio is obtained;
The calculation formula of the shape difference degree is:

S_i = (1/n) * Σ_{t=1}^{n} (1 - U_{i,t})

wherein S_i is the shape difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; U_{i,t} is the area intersection ratio of the superpixel block to which the i-th key point belongs and the corresponding t-th adjacent superpixel block.
The number n of adjacent superpixel blocks of the superpixel block to which a key point belongs takes values in the range (2, 9). In the formula, the key point credibility R_i depends on the product of the distance difference degree, the center rate and the shape difference degree. The distance difference degree is the average distance from the key point to the geometric centers of the surrounding adjacent superpixel blocks; the shape difference degree is the shape difference between the key point's superpixel block and the surrounding adjacent superpixel blocks. For a superpixel block, its neighborhood consists of the closest superpixel blocks that surround it; these adjacent superpixel blocks are also called neighborhood superpixel blocks. The greater the distance difference degree and the shape difference degree, the stronger the extremum property; the greater the center rate, the stronger the stability; and the greater the key point credibility R_i obtained as the product of the three, the stronger the extremum property and stability of the key point, the better its quality, and the higher its reliability.
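Assuming the shape difference degree is the mean complement of the per-neighbour area intersection-over-union ratios (the patent states only that it is derived from those ratios), the credibility product can be sketched as:

```python
def shape_difference(ious):
    """S_i: one plausible reading -- the mean of (1 - IoU) over the n
    adjacent superpixel blocks (an assumption, not verbatim from the patent)."""
    return sum(1.0 - v for v in ious) / len(ious)

def keypoint_credibility(dist_diff, center_rate, shape_diff):
    """R_i: the product of distance difference, center rate and shape
    difference, as stated in the patent."""
    return dist_diff * center_rate * shape_diff
```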
This completes the process of obtaining the key point credibility by combining the center rate of the key point within the invariance intersection with the distance difference degree and the shape difference degree between the key point's superpixel block and its adjacent superpixel blocks.
Step S400, screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating key point descriptors based on the high-quality key points; and matching the key point descriptors corresponding to the image to be matched with the key point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image.
And screening key points based on the credibility of the key points, constructing a key point descriptor, and calculating the matching degree of the printed product and the template pattern.
Firstly, a plurality of high-quality key points are screened from the plurality of key points based on the key point credibility. Specifically, the credibility of each key point is obtained, and the credibility values are sorted from small to large to obtain a key point credibility sequence; the lower quartile of this sequence is found, and the key points whose credibility lies above the lower quartile are the high-quality key points. Here, a quartile (also called a quartile point) is, in statistics, one of the three division points obtained by arranging all values from small to large and dividing them into four equal parts: the value at the 25% position is called the lower quartile, and the value at the 75% position is called the upper quartile.
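The lower-quartile screening can be sketched with NumPy's percentile function (linear interpolation, NumPy's default, is assumed):

```python
import numpy as np

def screen_keypoints(credibilities):
    """Keep the indices of key points whose credibility lies above the
    lower quartile (25th percentile) of the credibility sequence."""
    q1 = np.percentile(credibilities, 25)
    return [i for i, c in enumerate(credibilities) if c > q1]
```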
Based on the high-quality key points, key point descriptors are generated by the general SIFT algorithm to obtain a key point descriptor set. The printed matter template image and the image to be matched corresponding to the printed matter image each have a corresponding key point descriptor set.
The key point descriptors corresponding to the image to be matched are matched with the key point descriptors corresponding to the printed matter template image to obtain the matching degree of the printed matter image. Specifically: for each key point descriptor in the key point descriptor set corresponding to the printed matter template image, the Euclidean distance between that descriptor and the key point descriptors corresponding to the image to be matched is acquired, and a descriptor whose Euclidean distance is smaller than a preset distance threshold is a matching point; the proportion of matching points is the matching degree of the printed matter image. The preset distance threshold is set by the implementer according to the actual situation.
That is, the key point descriptor sets of the printed matter template image and of the image to be matched are obtained; for each key point descriptor of the printed matter template image, the Euclidean distance is used to search for its matching point in the key point descriptor set of the image to be matched; and the ratio of the matching points, namely the proportion of matching points within the whole key point descriptor set, is taken as the matching degree of the printed matter pattern. After the matching degree of the printed matter image is obtained, the printed matter image is matched against the plurality of printed matter template images based on the matching degree.
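The matching degree can be sketched as the fraction of template descriptors whose nearest descriptor in the image to be matched (by Euclidean distance) falls below the preset threshold (function and parameter names are illustrative):

```python
import numpy as np

def matching_degree(template_desc, target_desc, dist_threshold):
    """Fraction of template descriptors with a sub-threshold nearest
    neighbour among the target image's descriptors."""
    t = np.asarray(template_desc, dtype=float)
    s = np.asarray(target_desc, dtype=float)
    matched = 0
    for d in t:
        # nearest-neighbour Euclidean distance to any target descriptor
        if np.min(np.linalg.norm(s - d, axis=1)) < dist_threshold:
            matched += 1
    return matched / len(t)
```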
In summary, the present invention relates to the field of data processing technology. Firstly, obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched; constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks; calculating the center rate of the key points according to the position relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the superpixel block to which the key point belongs and the adjacent superpixel block; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel block, wherein the product of the distance difference degree, the center rate and the shape difference degree is the credibility degree of the key point; screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating a key point descriptor based on the high-quality key points; and matching the key point descriptors corresponding to the images to be matched with the key point descriptors corresponding to the printed matter template images to obtain the matching degree of the printed matter images.
The method is based on the SIFT algorithm, invariance intersection is obtained through superpixel segmentation under a printed matter pattern matching scene, high-quality key points are screened out according to the relation between the key points and the invariance intersection, and the quality of the printed matter pattern is rapidly detected by using the high-quality key points.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. The processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (8)

1. A method for fast matching of a printed pattern, the method comprising the steps of:
obtaining a printed matter image, and preprocessing the printed matter image to obtain an image to be matched;
constructing a Gaussian difference pyramid from the image to be matched, and obtaining a plurality of difference images; detecting local extreme points in a Gaussian difference space, and screening out key points in a difference image; performing superpixel segmentation on any layer of differential images in any group in the Gaussian differential pyramid to obtain a plurality of superpixel blocks; obtaining invariance intersection corresponding to the overlapping area of the superpixel blocks;
calculating the center rate of the key points according to the position relation between the key points and the invariance intersection; obtaining the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block; calculating the shape difference degree according to the area intersection ratio of the superpixel block to which the key point belongs and the adjacent superpixel block, wherein the product of the distance difference degree, the center rate and the shape difference degree is the key point credibility degree;
screening a plurality of high-quality key points from the plurality of key points based on the credibility of the key points; generating a key point descriptor based on the high-quality key points; matching the key point descriptor corresponding to the image to be matched with the key point descriptor corresponding to the presswork template image to obtain the matching degree of the presswork image;
wherein the calculation formula of the center rate is:

C_i = exp( -sqrt( (x_i - X_i)^2 + (y_i - Y_i)^2 ) )

wherein C_i is the center rate of the i-th key point; exp is the exponential function with the natural constant e as its base; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; X_i is the abscissa of the geometric center point of the invariance intersection to which the i-th key point belongs; Y_i is the ordinate of the geometric center point of the invariance intersection to which the i-th key point belongs.
2. The method for fast matching of printed patterns according to claim 1, wherein the detecting local extreme points in the gaussian difference space and screening out key points in the difference image comprises:
and for any group corresponding to the Gaussian difference pyramid, detecting the difference image of the same group adjacent up and down except the difference image of the bottom layer and the top layer in each group, selecting any local extreme point as a target point, comparing the target point with the adjacent local extreme point of the same scale and a plurality of local extreme points corresponding to the adjacent scales up and down, and when the pixel value of the target point is the maximum value or the minimum value of the plurality of points, taking the target point as a key point.
3. The method for fast matching of printed patterns according to claim 1, wherein the obtaining of the distance difference degree according to the distance between the super pixel block to which the key point belongs and the adjacent super pixel block comprises:
the calculation formula of the distance difference degree is:

D_i = (1/n) * Σ_{t=1}^{n} sqrt( (x_i - a_t)^2 + (y_i - b_t)^2 )

wherein D_i is the distance difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; x_i is the abscissa of the i-th key point; y_i is the ordinate of the i-th key point; a_t is the abscissa of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the i-th key point belongs; b_t is the ordinate of the geometric center point of the t-th adjacent superpixel block of the superpixel block to which the i-th key point belongs.
4. The method for fast matching of printed matter patterns according to claim 1, wherein the calculating of the degree of shape difference according to the area intersection ratio of the super pixel block to which the key point belongs and the adjacent super pixel block comprises:
eroding the larger of the superpixel block to which the key point belongs and the adjacent superpixel block until the area difference between the two superpixel blocks is minimal;
superposing the centroid of the super-pixel block to which the key point belongs and the centroid of the adjacent super-pixel block to obtain the corresponding area intersection ratio;
the calculation formula of the shape difference degree is:

S_i = (1/n) * Σ_{t=1}^{n} (1 - U_{i,t})

wherein S_i is the shape difference degree of the i-th key point; n is the number of adjacent superpixel blocks of the superpixel block to which the i-th key point belongs; U_{i,t} is the area intersection ratio of the superpixel block to which the i-th key point belongs and the corresponding t-th adjacent superpixel block.
5. The method as claimed in claim 1, wherein said screening a plurality of high-quality key points from a plurality of key points based on the credibility of the key points comprises:
obtaining the credibility of the key points of each key point, sequencing the credibility of the key points from small to large to obtain a credibility sequence of the key points, and finding out the lower quartile of the credibility sequence of the key points, wherein the key points with the credibility of the key points above the lower quartile are high-quality key points.
6. The method for quickly matching a pattern of a printed matter according to claim 1, wherein the step of matching the key point descriptor corresponding to the image to be matched with the key point descriptor corresponding to the template image of the printed matter to obtain the matching degree of the image of the printed matter comprises the steps of:
for the key point descriptors in the key point descriptor set corresponding to the printed matter template image, acquiring Euclidean distances between the key point descriptors corresponding to the printed matter template image and the key point descriptors corresponding to the image to be matched, wherein the key point descriptors with the Euclidean distances smaller than a preset distance threshold are matched points; and the proportion of the matching points is the matching degree of the image of the printed matter.
7. The method for quickly matching a printed pattern according to claim 1, wherein the preprocessing the printed image to obtain an image to be matched comprises:
and performing semantic segmentation on the presswork image to obtain an image to be matched.
8. The method for fast matching of printed patterns according to claim 1, wherein the constructing of the gaussian difference pyramid from the image to be matched comprises:
and constructing a Gaussian difference pyramid from the image to be matched based on an SIFT algorithm.
CN202211245047.5A 2022-10-12 2022-10-12 Rapid matching method for printed matter pattern Active CN115311293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211245047.5A CN115311293B (en) 2022-10-12 2022-10-12 Rapid matching method for printed matter pattern

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211245047.5A CN115311293B (en) 2022-10-12 2022-10-12 Rapid matching method for printed matter pattern

Publications (2)

Publication Number Publication Date
CN115311293A CN115311293A (en) 2022-11-08
CN115311293B true CN115311293B (en) 2023-03-28

Family

ID=83868281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211245047.5A Active CN115311293B (en) 2022-10-12 2022-10-12 Rapid matching method for printed matter pattern

Country Status (1)

Country Link
CN (1) CN115311293B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117764912A (en) * 2023-11-08 2024-03-26 东莞市中钢模具有限公司 Visual inspection method for deformation abnormality of automobile part die casting die

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709948A (en) * 2016-12-21 2017-05-24 浙江大学 Quick binocular stereo matching method based on superpixel segmentation
CN106960442A (en) * 2017-03-01 2017-07-18 东华大学 Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN115100014B (en) * 2022-06-24 2023-03-24 山东省人工智能研究院 Multi-level perception-based social network image copying and moving counterfeiting detection method

Also Published As

Publication number Publication date
CN115311293A (en) 2022-11-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant