CN117152219A - Image registration method, system and equipment - Google Patents

Image registration method, system and equipment

Info

Publication number
CN117152219A
Authority
CN
China
Prior art keywords
image
comparison
descriptors
descriptor
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311087856.2A
Other languages
Chinese (zh)
Inventor
滕达
樊坤
王文通
于延锁
余卫勇
刘强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Petrochemical Technology
Original Assignee
Beijing Institute of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Petrochemical Technology filed Critical Beijing Institute of Petrochemical Technology
Priority to CN202311087856.2A priority Critical patent/CN117152219A/en
Publication of CN117152219A publication Critical patent/CN117152219A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image registration method, system and device in the technical field of image processing. The image registration method comprises: obtaining a specified number of sub-regions from an image to be detected and the corresponding comparison regions from a template image, and performing feature point detection on the sub-regions and comparison regions in a multithreaded parallel manner to obtain a target descriptor set of the sub-regions and a comparison descriptor set of the comparison regions; matching descriptors in the target descriptor set with descriptors in the comparison descriptor set to obtain a matching point coordinate set, and computing a transformation matrix from the matching point coordinates, wherein a descriptor is the feature vector of a feature point; and obtaining the registered image to be detected by using the transformation matrix. The method improves the speed of feature point detection and descriptor generation and shortens the time required for image registration.

Description

Image registration method, system and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image registration method, system, and apparatus.
Background
With rapid advances in industrial technology and ever-increasing production scale, ensuring the quality of printed matter has become a very challenging task. Detecting printed-matter defects accurately and in a timely manner, so that fewer defective products reach the market while cost and manpower are saved, is therefore a problem that urgently needs to be solved.
Existing printed-matter defect detection techniques mainly comprise traditional machine vision methods and deep learning methods. Among the traditional machine vision methods, the image difference method is most commonly used: it judges whether a printed matter has defects by comparing the registered image to be detected with the template image. In the image registration process, feature point detection and descriptor generation are the most critical steps and directly affect the final detection result. The SIFT algorithm is considered one of the most representative and effective feature point detection algorithms: it detects key points in the image and computes their descriptors, has good rotation, scale and illumination invariance, and performs well in detecting printed-matter defects. However, traditional SIFT has high computational complexity and is therefore slow, so it cannot meet real-time requirements.
In the traditional machine vision approach, the traditional SIFT algorithm detects feature points and generates descriptors over the whole image to be detected and the whole template image, so the computational complexity is high, image registration takes a long time, and real-time requirements cannot be met.
Disclosure of Invention
Therefore, the application provides the image registration method, the system and the equipment, which can improve the speed of image feature point detection and descriptor generation and shorten the time required by image registration.
In order to achieve the above purpose, the application adopts the following technical scheme:
in a first aspect, the present application provides an image registration method, comprising:
acquiring a specified number of subareas from an image to be detected, acquiring a comparison area corresponding to the subareas from a template image, and detecting characteristic points of the specified number of subareas and the specified number of comparison areas in a multithreading parallel mode to obtain a target description subset of the subareas and a comparison description subset of the comparison area;
matching descriptors in the target descriptor set with descriptors in the contrast descriptor set to obtain a matching point coordinate set, and calculating by using the matching point coordinate set to obtain a transformation matrix, wherein the descriptors are feature vectors of feature points;
and obtaining the registered image to be detected by using the transformation matrix.
Further, matching the descriptors in the target descriptor set with the descriptors in the contrast descriptor set to obtain a matching point coordinate set, including:
acquiring one descriptor in a target descriptor set as a first descriptor, and calculating the distances between the first descriptor and all descriptors in a contrast descriptor set respectively to obtain a plurality of description distances;
sorting the description distances from small to large, selecting descriptors in the comparison descriptor set corresponding to the first description distance as second descriptors, and selecting descriptors in the comparison descriptor set corresponding to the second description distance as third descriptors;
and acquiring a distance ratio of the first description distance to the second description distance, and if the distance ratio is smaller than a preset threshold, determining that the first descriptor and the second descriptor are matched to obtain a matching point coordinate set.
Further, obtaining a specified number of sub-regions from an image to be detected, obtaining a comparison region corresponding to the sub-regions from a template image, and detecting feature points of the specified number of sub-regions and the specified number of comparison regions in a multithreading parallel manner to obtain a target description subset of the sub-regions and a comparison description subset of the comparison region, wherein the method comprises the following steps:
acquiring a first sub-area and a second sub-area from an image to be detected, acquiring a first comparison area and a second comparison area from a template image, and detecting characteristic points of the first sub-area, the second sub-area, the first comparison area and the second comparison area in a multithreaded parallel manner to obtain a first descriptor set of the first sub-area, a second descriptor set of the second sub-area, a third descriptor set of the first comparison area and a fourth descriptor set of the second comparison area;
combining the first descriptor set and the second descriptor set to obtain a target descriptor set;
combining the third descriptor set and the fourth descriptor set to obtain a comparison descriptor set.
Further, after the characteristic point detection is performed on the specified number of sub-areas and the specified number of comparison areas by adopting a multithreading parallel mode, the method further comprises the following steps:
obtaining a control feature point set of a control area;
and caching the comparison feature point set and the comparison description subset to generate registration data of the template image.
Further, before dividing the image to be measured into a specified number of sub-regions, the method includes:
and sequentially performing image scaling processing, image denoising processing and image contrast adjustment on the image to be detected and the template image to obtain the image to be detected and the template image in a specified format.
Further, after obtaining the registered image to be measured by using the transformation matrix, the method further comprises:
obtaining a difference image by using the registered image to be detected and the template image;
sequentially performing threshold division and connected domain analysis on the difference image to obtain the number and position coordinates of the defect areas;
and obtaining a defect detection result of the image to be detected by using the number and the position coordinates of the defect areas.
In a second aspect, there is provided an image registration system comprising:
the detection module is used for acquiring a specified number of subareas from the image to be detected, acquiring a comparison area corresponding to the subareas from the template image, and detecting characteristic points of the specified number of subareas and the specified number of comparison areas in a multithreading parallel mode to obtain a target description subset of the subareas and a comparison description subset of the comparison area;
the transformation matrix calculation module is used for matching the descriptors in the target descriptor set with the descriptors in the contrast descriptor set to obtain a matching point coordinate set, calculating the transformation matrix by using the matching point coordinate set, wherein the descriptors are feature vectors of the feature points;
and the registration module is used for obtaining registered images to be detected by using the transformation matrix.
In a third aspect, there is provided an image registration apparatus comprising:
a processor and a memory;
the processor is connected with the memory through a communication bus;
the processor is used for calling and executing the program stored in the memory;
a memory for storing a program for performing at least one image registration method of any one of the first aspects.
The technical scheme provided by the application can comprise the following beneficial effects:
the image registration method acquires a specified number of subareas from an image to be detected, acquires comparison areas corresponding to the subareas from a template image, and detects characteristic points of the specified number of subareas and the specified number of comparison areas in a multithreading parallel mode to obtain a target descriptor set of the subareas and a comparison descriptor set of the comparison areas; the method has the advantages that the characteristic point detection is carried out on a plurality of target areas in a multithreading parallel mode, the description subsets of the plurality of target areas and the comparison area can be obtained simultaneously, compared with the characteristic point detection carried out on a single area in sequence, the detection time is further shortened, the time required by the whole image registration process is shortened, and the registration instantaneity is improved. Meanwhile, the application caches the obtained comparison feature point set and comparison descriptor set of the template image to obtain registration data, and after detection starts, the cached registration data is directly called to register the image to be detected, so that repeated detection of feature points of the template image is not needed, the number of threads of multi-thread detection is reduced, and the detection efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates a flowchart of a method of image registration according to an exemplary embodiment;
FIG. 2 is an image to be measured shown according to an exemplary embodiment;
FIG. 3 is a template image shown according to an exemplary embodiment;
FIG. 4 is another flow chart illustrating an image registration method according to an exemplary embodiment;
FIG. 5 is a binary image shown according to an exemplary embodiment;
FIG. 6 is a defect profile diagram, shown according to an exemplary embodiment;
FIG. 7 is a graph illustrating defect detection results in an image to be tested, according to an exemplary embodiment;
FIG. 8 is a block diagram schematic diagram of an image registration system, shown according to an exemplary embodiment;
FIG. 9 is a block diagram schematic diagram of an image registration apparatus, shown according to an example embodiment;
fig. 10 is another block diagram schematic diagram of an image registration system, shown according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
In the defect detection of the printed matter, the image registration is used for realizing the accurate alignment of the image to be detected and the template image, ensuring that the images have the same content at the same position, eliminating the space offset between the images, and enabling the subsequent steps of image difference, image segmentation, defect analysis and the like to be accurately and reliably carried out.
Before the defect detection of the printed matter, image acquisition is carried out, and an image with high definition is acquired as a template image. After the detection starts, the conveyor belt continuously feeds the printed matter to be detected into the image acquisition equipment, and the image acquisition equipment continuously acquires the image to be detected. In the whole detection process, template images are acquired once, all images to be detected in the detection process are subjected to image registration with the template image, and then image difference, image segmentation, connected domain analysis and other processes are performed, so that a defect detection result is obtained.
The image acquisition device can be an industrial linear array camera (charge coupled device, CCD), the CCD camera is used for image acquisition in an industrial production site, the CCD camera has the characteristics of high resolution and rapid acquisition, the details of a printed matter can be accurately captured in high-speed motion, and a high-quality image without any defects is selected as a template. Through the accurate image acquisition process, clear and real image data is provided, and a reliable basis is provided for subsequent defect detection of printed matters.
Before image registration, the image to be detected and the template image are required to be subjected to image preprocessing operation, wherein the image preprocessing operation comprises image scaling processing, image denoising processing and image contrast adjustment of the image to be detected and the template image in sequence, so that the image to be detected and the template image with the specified format are obtained, and the image registration result can be more accurate.
In an exemplary embodiment, the image scaling process uses bilinear interpolation to scale the template image and the image to be detected to the same size according to a specified ratio. Specifically, the bilinear interpolation algorithm computes a weighted average of the neighbouring points around each pixel, so the gray value of the new pixel is estimated more accurately and image detail and quality are preserved.
The calculation formula (1) of bilinear interpolation is as follows:
f(x′, y′) = (1 − dx) * (1 − dy) * f(x1, y1) + dx * (1 − dy) * f(x2, y1) + (1 − dx) * dy * f(x1, y2) + dx * dy * f(x2, y2)    (1)
f(x, y) represents the pixel value at coordinates (x, y) in the original image; f(x′, y′) denotes the pixel value at the scaled-image coordinates (x′, y′); (x1, y1), (x2, y1), (x1, y2) and (x2, y2) are the four neighbouring points around the mapped position; dx denotes the interpolation weight in the x direction and dy the interpolation weight in the y direction.
Experiments prove that when the image size is scaled to 650×650, the registration accuracy is not affected, and the effect is optimal; and the image can be scaled to other sizes according to the actual sizes of the image to be measured and the template image.
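As an illustrative sketch only, the scaling step can be written with OpenCV; the file names below are assumed for the example, and the 650×650 size is taken from the experiment above:

```python
import cv2

TARGET_SIZE = (650, 650)  # (width, height) found to work well in the experiment above

def scale_image(image):
    # cv2.INTER_LINEAR performs the bilinear interpolation of formula (1):
    # each new pixel is a weighted average of its four neighbours in the source image.
    return cv2.resize(image, TARGET_SIZE, interpolation=cv2.INTER_LINEAR)

template_scaled = scale_image(cv2.imread("template.png"))       # hypothetical file name
test_scaled = scale_image(cv2.imread("to_be_detected.png"))     # hypothetical file name
```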
Illustratively, the image denoising process reduces the amplitude of high frequency noise by weighted averaging the neighborhood pixels using a gaussian filter algorithm. In the actual detection process of the printed matter, image noise is derived from unexpected signal fluctuation in the image acquisition, transmission and storage processes, the amplitude of high-frequency noise can be effectively reduced through image denoising processing, the effective denoising effect is achieved, noise interference is reduced, and the image quality and definition are improved.
The formula (2) of the gaussian filtering algorithm is:
I_filtered(x, y) = Σ_{i=−k}^{k} Σ_{j=−k}^{k} G(i, j) * I(x + i, y + j)    (2)
I_filtered(x, y) is the pixel value of the filtered image at the (x, y) position, k is the radius of the Gaussian filter, I(x + i, y + j) is the pixel value of the original image at the (x + i, y + j) position, and G(i, j) is the filter coefficient of the Gaussian filter at the (i, j) position. G(i, j) is calculated according to a Gaussian function, the Gaussian function being formula (3):
G(i, j) = (1 / (2πσ²)) * exp(−(i² + j²) / (2σ²))    (3)
σ is the standard deviation of the Gaussian function and controls the degree of blurring of the filter; a larger standard deviation leads to a stronger blurring effect.
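A minimal sketch of the denoising step, assuming OpenCV; the kernel radius and σ below are illustrative values rather than values fixed by the method:

```python
import cv2

def denoise(image, kernel_radius=2, sigma=1.0):
    # The kernel size is 2*k + 1; cv2.GaussianBlur applies the weighted
    # neighbourhood average of formulas (2) and (3).
    ksize = 2 * kernel_radius + 1
    return cv2.GaussianBlur(image, (ksize, ksize), sigma)
```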
The image contrast is illustratively adjusted by histogram equalization techniques. The pixel values of the image are transformed, and the histogram distribution of the original image is changed into a uniform distribution form, so that the contrast of the image is enhanced, the details and defects of the image are highlighted, and the visibility of the features is improved. The formula (4) of histogram equalization is as follows:
s = T(r) = (L − 1) * Σ_{j=0}^{r} p(j)    (4)
r represents the gray level of a pixel in the original image, s is the corresponding gray level after equalization, L is the number of gray levels, and p(j) is the probability of a pixel in the original image having the pixel value j.
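A minimal sketch of the contrast adjustment step, again assuming OpenCV:

```python
import cv2

def enhance_contrast(image):
    # Histogram equalization (formula (4)) works on a single-channel image,
    # so the image is converted to grayscale first.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```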
Referring to fig. 1, fig. 1 is a flowchart illustrating an image registration method according to an exemplary embodiment, the image registration method including the steps of:
s101, acquiring a specified number of subareas from an image to be detected, acquiring a comparison area corresponding to the subareas from a template image, and detecting characteristic points of the specified number of subareas and the specified number of comparison areas in a multithreading parallel mode to obtain a target description subset of the subareas and a comparison description subset of the comparison area.
Referring to fig. 2 and 3, fig. 2 is an image to be measured according to an exemplary embodiment; FIG. 3 is a template image shown according to an exemplary embodiment.
And acquiring a specified number of sub-areas in the image to be detected, wherein the specified number is a positive integer and is more than or equal to 2.
As shown in fig. 2, several scattered sub-regions with distinct features are selected from the image to be detected, and their size and number are determined, for example region 1, region 2, region 3 and region 4. The image to be detected contains a character defect, which is not shown in the figure.
In particular, the size and number of sub-regions depends on the size of the image and the registration requirements. For example, the size of the image to be measured is relatively large, but registration accuracy is required to be high, and several sub-areas can be selected more. The sub-regions should have image content that provides good feature points, which are image locations with uniqueness and stability, such as edges, contours, corner points, etc.
As shown in fig. 3, comparison regions corresponding to those of the image to be detected, for example region 5, region 6, region 7 and region 8, are selected in the template image, ensuring that the sub-regions of the image to be detected and the selected comparison regions in the template image correspond one to one, that is, they cover the same image content; here region 1 corresponds to region 5, region 2 to region 6, region 3 to region 7, and region 4 to region 8.
The method does not detect the characteristic points of the whole image to be detected and the template image, only detects a plurality of target areas in the image to be detected, and greatly shortens the image detection time, thereby reducing the time required for detecting the characteristic points.
Feature point detection and descriptor generation are performed simultaneously, in a multithreaded parallel manner, on each sub-region of the image to be detected and on each comparison region of the template image, yielding the target descriptor set and target feature point set of the sub-regions, and the comparison descriptor set and comparison feature point set of the comparison regions. Specifically, a SIFT feature detector may be used for feature point detection.
The first sub-region and the second sub-region are obtained from the image to be detected, the first comparison region and the second comparison region are obtained from the template image, and the characteristic point detection is carried out on the first sub-region, the second sub-region, the first comparison region and the second comparison region in a multithread parallel mode.
And obtaining a plurality of characteristic points of a first sub-region and a second sub-region of the image to be detected in a multithreading parallel mode, generating descriptors according to the obtained characteristic points, and further obtaining a first descriptor set and a first characteristic point set of the first sub-region, and a second descriptor set and a second characteristic point set of the second sub-region.
And obtaining a plurality of characteristic points of a first comparison area and a second comparison area of the template image in a multithreading parallel mode, generating descriptors according to the obtained characteristic points, and further obtaining a third description subset and a third characteristic point set of the first comparison area, and a fourth description subset and a fourth characteristic point set of the second comparison area.
And combining the first descriptor set and the second descriptor set to obtain a target descriptor set. And combining the first characteristic point set and the second characteristic point set to obtain a target characteristic point set. Combining the third descriptor set and the fourth descriptor set to obtain a comparison descriptor set. And combining the third characteristic point set and the fourth characteristic point set to obtain a comparison characteristic point set.
The characteristic point detection is carried out on a plurality of target areas in a multithread parallel mode, the description subsets of the plurality of target areas and the comparison area can be obtained simultaneously, and compared with the characteristic point detection carried out on a single area in sequence, the detection time is further shortened.
Specifically, four threads are created, for example thread one, thread two, thread three and thread four. Thread one performs feature detection on the first sub-region of the image to be detected to obtain the first descriptor set and first feature point set. Thread two performs feature detection on the second sub-region of the image to be detected to obtain the second descriptor set and second feature point set. Thread three performs feature detection on the first comparison region of the template image to obtain the third descriptor set and third feature point set, and thread four performs feature detection on the second comparison region of the template image to obtain the fourth descriptor set and fourth feature point set. The results are merged to obtain the target descriptor set and target feature point set of the image to be detected, and the comparison descriptor set and comparison feature point set of the template image.
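A minimal sketch of this parallel detection scheme, assuming OpenCV's SIFT implementation and Python's ThreadPoolExecutor; the region tuples passed to detect_parallel are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

import cv2
import numpy as np

sift = cv2.SIFT_create()

def detect_region(image, region):
    # region = (x, y, w, h): detect SIFT feature points and compute descriptors
    # only inside the sub-region, then shift the keypoints back to
    # full-image coordinates so that matching works in a common frame.
    x, y, w, h = region
    keypoints, descriptors = sift.detectAndCompute(image[y:y + h, x:x + w], None)
    for kp in keypoints:
        kp.pt = (kp.pt[0] + x, kp.pt[1] + y)
    return keypoints, descriptors

def detect_parallel(image, regions):
    # One worker per region, i.e. thread one ... thread four in the text above.
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        results = list(pool.map(lambda r: detect_region(image, r), regions))
    keypoints = [kp for kps, _ in results for kp in kps]
    descriptors = np.vstack([d for _, d in results if d is not None])
    return keypoints, descriptors
</antml>```

In current OpenCV builds the Python GIL is released inside detectAndCompute, so the threads can genuinely run in parallel, which is what makes this layout faster than detecting the regions one after another.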
After the characteristic points of the appointed number of sub-areas and the appointed number of comparison areas are detected in a multithreading parallel mode, the obtained comparison characteristic point set and the comparison description subset are cached to obtain the registration data of the template image, and the registration data of the template image can be directly called subsequently, so that the calculation time is further shortened.
In the actual operation process of detecting the defects of the printed matter, before starting detection, an image to be detected and a template image are required to be obtained, feature point detection is carried out on the image to be detected and the template image, a corresponding feature point set and a corresponding description subset are generated, and a comparison feature point set and a comparison description subset of the template image are used as registration data to be stored. After the detection is started, the images to be detected are subjected to defect detection one by one, registration data of the template images are directly called, registration between the images to be detected and the template images is carried out, only feature point detection is needed to be carried out on the images to be detected during registration, a target feature point set and a target description subset are generated, the template images do not need to be processed again, repeated processing on the template images is reduced, the number of threads parallel to multithreading is reduced, and the defect detection efficiency of printed matters is further improved.
For example, detecting feature points of the image to be detected and the template image in a multithreaded parallel manner requires four threads; after detection starts, the registration data of the template image is called directly to register the image to be detected, so feature point detection is only needed on the image to be detected and only two threads are required for the parallel processing.
S102, matching descriptors in the target descriptor set with descriptors in the contrast descriptor set to obtain a matching point coordinate set, and calculating by using the matching point coordinate set to obtain a transformation matrix, wherein the descriptors are feature vectors of feature points.
In an exemplary embodiment, one descriptor in the target descriptor set is obtained as a first descriptor, and distances between the first descriptor and all descriptors in the contrast descriptor set are calculated, so as to obtain a plurality of description distances. And sorting the description distances from small to large, selecting the descriptors in the comparison description subset corresponding to the first description distance as second descriptors, and selecting the descriptors in the comparison description subset corresponding to the second description distance as third descriptors.
The second descriptor is the descriptor in the comparison descriptor set that is closest to the first descriptor (the nearest-neighbour distance). The third descriptor is the second-closest descriptor in the comparison descriptor set (the next-nearest-neighbour distance).
Dividing the first description distance by the second description distance to obtain a distance ratio, if the distance ratio is smaller than a preset threshold, determining that the first descriptor and the second descriptor are matched to be effective matching points, putting the first descriptor into a first matching point coordinate set, and putting the second descriptor into a second matching point coordinate set. And finding out all the effective matching points of the target description subset and the contrast description subset to obtain a matching point coordinate set.
Specifically, all valid matching points can be obtained using a k-nearest-neighbour (KNN) algorithm.
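A minimal sketch of this matching step, assuming OpenCV's brute-force matcher; the ratio value 0.7 stands in for the preset threshold and is illustrative:

```python
import cv2
import numpy as np

def match_descriptors(target_desc, target_kps, control_desc, control_kps, ratio=0.7):
    # For every descriptor in the target set, knnMatch returns its two nearest
    # neighbours in the comparison set (the first and second description distances).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(target_desc, control_desc, k=2)
    src_pts, dst_pts = [], []
    for nearest, second in matches:
        # Ratio test: keep the pair only if the nearest neighbour is clearly
        # closer than the second-nearest one.
        if nearest.distance < ratio * second.distance:
            src_pts.append(target_kps[nearest.queryIdx].pt)
            dst_pts.append(control_kps[nearest.trainIdx].pt)
    return np.float32(src_pts), np.float32(dst_pts)
```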
The transformation matrix is a matrix describing the transformation relationship in two-dimensional space between the image to be detected and the template image. It contains the parameters of transformations such as rotation, translation and scaling, enables pixel-level alignment and registration, and ensures that the positions of the image to be detected and the template image are consistent in two-dimensional space. Based on the obtained matching point coordinate set, a set of correct matching points is selected from it using a predetermined algorithm, such as random sample consensus (RANdom SAmple Consensus, RANSAC), to calculate the optimal transformation matrix.
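A minimal sketch of this step, assuming OpenCV's RANSAC-based homography estimation:

```python
import cv2

# src_pts and dst_pts come from the matching step above; 5.0 is an illustrative
# RANSAC reprojection-error threshold, not a value fixed by the method.
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
```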
S103, obtaining the registered image to be detected by using the transformation matrix.
Specifically, the pixels of the image to be measured are mapped onto the blank image with the same size as the template image according to the transformation matrix obtained in the step S102 by using perspective transformation, and the pixels of the image to be measured are rearranged according to the positional relationship on the template image, so that the registered image to be measured, with the relative positions of the pixels in the template image kept consistent, is obtained.
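A minimal sketch of this warping step, assuming OpenCV and the variable names used in the earlier sketches:

```python
import cv2

# Warp the image to be detected into the template's coordinate frame using the
# transformation matrix H obtained in step S102.
h, w = template_scaled.shape[:2]
registered = cv2.warpPerspective(test_scaled, H, (w, h))
```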
Referring to fig. 4, fig. 4 is another flow chart illustrating an image registration method according to an exemplary embodiment.
As shown in fig. 4, the method comprises the following steps:
s401, acquiring a specified number of subareas from an image to be detected, acquiring a comparison area corresponding to the subareas from a template image, and detecting characteristic points of the specified number of subareas and the specified number of comparison areas in a multithreading parallel mode to obtain a target description subset of the subareas and a comparison description subset of the comparison area.
S402, matching descriptors in the target descriptor set with descriptors in the contrast descriptor set to obtain a matching point coordinate set, and calculating by using the matching point coordinate set to obtain a transformation matrix, wherein the descriptors are feature vectors of feature points.
S403, obtaining the registered image to be detected by using the transformation matrix.
The image registration method has been described in detail in the above embodiment by implementing step S401, step S402 and step S403 to obtain registered images to be measured.
After the registered image to be detected is obtained, the method further comprises the steps of sequentially carrying out image difference, image segmentation, connected domain analysis and the like on the registered image to be detected to obtain the number and position coordinates of the defect areas, and further obtaining a defect detection result.
S404, obtaining a difference image by using the registered image to be detected and the template image.
Registering the image to be detected and the template image to obtain a registered image to be detected, and carrying out image difference processing on the registered image to be detected and the template image to obtain a difference image. Specifically, the pixel points at the corresponding positions of the registered image to be detected and the template image are subtracted, and the absolute value of the subtracted result is taken to obtain a difference image. The difference image displays the difference between the image to be detected and the template image, highlighting the potential defect area.
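A minimal sketch of the image difference step, assuming OpenCV and the variables from the earlier sketches:

```python
import cv2

# Pixel-wise absolute difference between the registered image and the template;
# larger values indicate potential defect regions.
diff = cv2.absdiff(registered, template_scaled)
```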
And S405, sequentially carrying out threshold segmentation and connected domain analysis on the difference image to obtain the number and position coordinates of the defect areas.
Referring to fig. 5, fig. 5 is a binary image shown according to an exemplary embodiment. As shown in fig. 5, the difference image is thresholded to obtain a binary image, which further highlights the defective parts of the image to be detected.
Specifically, a threshold is set to divide the pixel values of the difference image into two categories: pixels greater than the threshold are set to a first preset value, which may be 255, and pixels less than or equal to the threshold are set to a second preset value, which may be 0. The difference image is converted into a binary image using this threshold; pixels with the first preset value represent regions with obvious differences, and pixels with the second preset value represent regions where the differences are insignificant.
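A minimal sketch of the threshold segmentation step, assuming OpenCV; the threshold 30 is illustrative:

```python
import cv2

# Pixels above the threshold become 255 (the first preset value) and the rest
# become 0 (the second preset value).
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
```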
Referring to fig. 6, fig. 6 is a defect profile diagram according to an exemplary embodiment. As shown in fig. 6, the defects satisfying the requirements are obtained through screening by using connected domain analysis.
In the application, a defect that meets the requirements can be understood as a region whose connected-domain contour area reaches the area threshold. In a specific operation, an area threshold is set, the area of each connected-region contour is calculated, contours whose area is smaller than the area threshold are filtered out, and contours whose area is greater than or equal to the area threshold are retained. The regions corresponding to the retained contours are the defects and are regarded as defect regions.
And carrying out defect analysis by using the connected domain to obtain the number and position coordinates of the defect regions. Specifically, edge detection is performed on the obtained binary image by using an edge detection operator algorithm, adjacent pixel points with the same color (the pixel value is a first preset value) are identified, and the pixel points form a connected domain. And marking and grouping the communicated pixel points by utilizing the connectivity of the pixels to form different communicated domains. And screening out obvious defect areas from different connected areas by using the area threshold.
In order to display the position and shape of defects conveniently during printed-matter defect detection, the minimum axis-aligned circumscribed rectangle of each defect region is calculated, yielding the number and position coordinates of the defect regions.
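A minimal sketch of the connected-domain screening step, assuming OpenCV; the area threshold 50 is illustrative:

```python
import cv2

def defect_regions(binary, area_threshold=50):
    # Contours of the connected regions in the binary image; contours whose area
    # is below the area threshold are filtered out as noise, the rest are kept
    # and described by their minimum axis-aligned bounding rectangle (x, y, w, h).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= area_threshold]
```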
S406, obtaining a defect detection result of the image to be detected by using the number and the position coordinates of the defect areas.
Referring to fig. 7, fig. 7 is a diagram illustrating a defect detection result in an image to be detected according to an exemplary embodiment. In the figure, 71 is a position where a defect exists.
And mapping the obtained number and position coordinates of the defect areas onto an original image to be detected, and drawing rectangular frames of the defect areas at corresponding positions. For images with defects, they are marked NG; for the defect-free image, it is marked as OK, and the defect detection result is obtained.
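A minimal sketch of the result-marking step, continuing the earlier sketches:

```python
import cv2

boxes = defect_regions(binary)
for (x, y, w, h) in boxes:
    # Draw a red rectangle around every detected defect region.
    cv2.rectangle(test_scaled, (x, y), (x + w, y + h), (0, 0, 255), 2)
result_label = "NG" if boxes else "OK"
```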
Based on a general inventive concept, the embodiment of the present application also provides an image registration system, which is used to implement the above-mentioned method embodiment. Referring to fig. 8, fig. 8 is a block diagram illustrating an image registration system according to an exemplary embodiment. As shown in fig. 8, the image registration system 8 includes the following structure:
the detection module 801 is configured to obtain a specified number of sub-regions from an image to be detected, obtain a comparison region corresponding to the sub-regions from a template image, and perform feature point detection on the specified number of sub-regions and the specified number of comparison regions in a multithread parallel manner to obtain a target descriptor set of the sub-regions and a comparison descriptor set of the comparison region;
the transformation matrix calculation module 802 is configured to match a descriptor in the target descriptor set with a descriptor in the reference descriptor set to obtain a matching point coordinate set, and calculate by using the matching point coordinate set to obtain a transformation matrix, where the descriptor is a feature vector of a feature point;
and the registration module 803 is configured to obtain a registered image to be detected by using the transformation matrix.
The specific manner in which the individual modules perform their operations in the system of the above embodiment has been described in detail in the method embodiments and will not be repeated here.
Based on a general inventive concept, the embodiment of the present application further provides an image registration apparatus for implementing the above method embodiment. Referring to fig. 9, fig. 9 is a block diagram illustrating an image registration apparatus according to an exemplary embodiment. As shown in fig. 9, the image registration apparatus 9 includes the following structure:
a processor 91 and a memory 92;
the processor 91 is connected with the memory 92 through a communication bus;
wherein the processor 91 is configured to call and execute a program stored in the memory 92;
and a memory for storing a program for executing at least the image registration method of the above embodiment.
Specific implementation manners of the image registration apparatus provided by the embodiments of the present application may refer to implementation manners of the image registration method of any of the above embodiments, and will not be described herein.
Based on a general inventive concept, the embodiment of the present application further provides an image defect detection system, configured to implement the above method embodiment. Referring to fig. 10, fig. 10 is another block diagram schematic diagram of an image registration system, according to an example embodiment. The system comprises an image acquisition module 11, an image preprocessing module 12, an image registration module 13, an image post-processing module 14 and a defect display module 15.
The image acquisition module 11 is used for acquiring a template image and an image to be detected;
the image preprocessing module 12 is used for sequentially performing image scaling processing, image denoising processing and image contrast adjustment on the template image and the image to be detected to obtain the image to be detected and the template image in a specified format;
the image registration module 13 is used for obtaining a registered image to be detected according to the image to be detected and the template image;
the image post-processing module 14 is used for sequentially carrying out image difference, image segmentation and defect analysis on the registered image to be detected to obtain the number and position coordinates of the defect areas;
and a defect display module 15 for displaying the defect detection result.
The specific manner in which the individual modules perform their operations in the system of the above embodiment has been described in detail in the method embodiments and will not be repeated here.
In order to explain the effect of the technical solution provided in this embodiment, this embodiment provides a verification embodiment.
Common feature point detection algorithms include the scale-invariant feature transform (Scale Invariant Feature Transform, SIFT), the speeded-up robust features algorithm (Speeded-Up Robust Features, SURF) and the oriented FAST and rotated BRIEF algorithm (Oriented FAST and Rotated BRIEF, ORB), among others. The following experiment compares the time these algorithms require for printed-matter defect detection; please refer to Table 1, which compares different algorithms for printed-matter defect detection.
In the verification embodiment, with the image to be detected and the template image scaled to 650×650, the processing time of the image registration method in printed-matter defect detection is significantly reduced. Defect detection with image registration based on the traditional SIFT algorithm takes 180 ms per image; defect detection with image registration based on the improved SIFT algorithm (that is, the image registration method of the application) takes 50 ms per image, which greatly shortens the defect detection time. The image registration method of the application also shows clear advantages over printed-matter defect detection based on SURF and ORB registration. Referring to Table 1, the image defects of different printed products, for example product 1, product 2, product 3, product 4 and product 5, were detected using the method of the application, the SIFT algorithm, the SURF algorithm and the ORB algorithm respectively, giving the defect detection accuracy for each product (accuracy for different types of printed products), the average accuracy (%) and the average detection speed (frames/s, one image to be detected counting as one frame).
TABLE 1
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (8)

1. A method of image registration, comprising:
acquiring a specified number of subareas from an image to be detected, acquiring a comparison area corresponding to the subareas from a template image, and detecting characteristic points of the specified number of subareas and the specified number of comparison areas in a multithreading parallel mode to obtain a target descriptor set of the subareas and a comparison descriptor set of the comparison areas;
matching the descriptors in the target descriptor set with the descriptors in the contrast descriptor set to obtain a matching point coordinate set, and calculating by using the matching point coordinate set to obtain a transformation matrix, wherein the descriptors are feature vectors of feature points;
and obtaining the registered image to be detected by using the transformation matrix.
2. The method of claim 1, wherein said matching the descriptors in the target subset of descriptors with the descriptors in the control subset of descriptors to obtain a set of matching point coordinates, comprising:
acquiring one descriptor in the target descriptor set as a first descriptor, and calculating the distances between the first descriptor and all descriptors in the comparison descriptor set respectively to obtain a plurality of description distances;
sorting the description distances from small to large, selecting descriptors in the comparison descriptor set corresponding to the first description distance as second descriptors, and selecting descriptors in the comparison descriptor set corresponding to the second description distance as third descriptors;
and acquiring a distance ratio of the first description distance to the second description distance, and if the distance ratio is smaller than a preset threshold, determining that the first descriptor and the second descriptor are matched to obtain a matching point coordinate set.
3. The method according to claim 1, wherein the obtaining a specified number of sub-regions from the image to be detected, and obtaining a comparison region corresponding to the sub-regions from the template image, and performing feature point detection on the specified number of sub-regions and the specified number of comparison regions in a multithreaded parallel manner, to obtain a target description subset of the sub-regions, and a comparison description subset of the comparison regions, includes:
acquiring a first sub-area and a second sub-area from an image to be detected, acquiring a first comparison area and a second comparison area from a template image, and detecting characteristic points of the first sub-area and the second sub-area and the first comparison area and the second comparison area in a multithreading parallel mode to obtain a first description subset of the first sub-area, a second description subset of the second sub-area, a third description subset of the first comparison area and a fourth description subset of the second comparison area;
combining the first descriptor set and the second descriptor set to obtain the target descriptor set;
and combining the third description subset and the fourth description subset to obtain the comparison description subset.
4. The method of claim 1, wherein after performing feature point detection on the specified number of the sub-regions and the specified number of the control regions in a multithreaded parallel manner, further comprises:
obtaining a control feature point set of the control region;
and caching the comparison feature point set and the comparison description subset to generate registration data of the template image.
5. The method of claim 1, wherein prior to dividing the image to be measured into a specified number of sub-regions, comprising:
and sequentially performing image scaling processing, image denoising processing and image contrast adjustment on the image to be detected and the template image to obtain the image to be detected and the template image in a specified format.
6. The method according to claim 1, wherein after obtaining the registered image to be measured by using the transformation matrix, further comprises:
obtaining a difference image by using the registered image to be detected and the template image;
threshold segmentation and connected domain analysis are sequentially carried out on the difference image to obtain the number and position coordinates of the defect areas;
and obtaining a defect detection result of the image to be detected by using the number and the position coordinates of the defect areas.
7. An image registration system, comprising:
the detection module is used for acquiring a specified number of subareas from an image to be detected, acquiring comparison areas corresponding to the subareas from a template image, and detecting characteristic points of the specified number of subareas and the specified number of comparison areas in a multithread parallel mode to obtain a target descriptor set of the subareas and a comparison descriptor set of the comparison areas;
the transformation matrix calculation module is used for matching the descriptors in the target descriptor set with the descriptors in the contrast descriptor set to obtain a matching point coordinate set, and calculating by using the matching point coordinate set to obtain a transformation matrix, wherein the descriptors are feature vectors of feature points;
and the registration module is used for obtaining registered images to be detected by utilizing the transformation matrix.
8. An image registration apparatus, comprising:
a processor and a memory;
the processor is connected with the memory through a communication bus;
the processor is used for calling and executing the program stored in the memory;
the memory for storing a program for performing at least one image registration method as claimed in any one of claims 1 to 6.
CN202311087856.2A 2023-08-25 2023-08-25 Image registration method, system and equipment Pending CN117152219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311087856.2A CN117152219A (en) 2023-08-25 2023-08-25 Image registration method, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311087856.2A CN117152219A (en) 2023-08-25 2023-08-25 Image registration method, system and equipment

Publications (1)

Publication Number Publication Date
CN117152219A true CN117152219A (en) 2023-12-01

Family

ID=88907292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311087856.2A Pending CN117152219A (en) 2023-08-25 2023-08-25 Image registration method, system and equipment

Country Status (1)

Country Link
CN (1) CN117152219A (en)

Similar Documents

Publication Publication Date Title
CN111260731B (en) Self-adaptive detection method for checkerboard sub-pixel level corner points
CN108898610B (en) Object contour extraction method based on mask-RCNN
CN109978839B (en) Method for detecting wafer low-texture defects
CN109409374B (en) Joint-based same-batch test paper answer area cutting method
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
CN110136120B (en) Silk-screen printing sample plate size measuring method based on machine vision
KR20130030220A (en) Fast obstacle detection
Yan et al. 3D shape reconstruction from multifocus image fusion using a multidirectional modified Laplacian operator
CN111080661A (en) Image-based line detection method and device and electronic equipment
CN103841298B (en) Video image stabilization method based on color constant and geometry invariant features
CN110111387B (en) Dial plate characteristic-based pointer meter positioning and reading method
CN110472521B (en) Pupil positioning calibration method and system
CN111507908A (en) Image correction processing method, device, storage medium and computer equipment
CN116416268B (en) Method and device for detecting edge position of lithium battery pole piece based on recursion dichotomy
US20230095142A1 (en) Method and apparatus for improving object image
CN113066088A (en) Detection method, detection device and storage medium in industrial detection
CN115512381A (en) Text recognition method, text recognition device, text recognition equipment, storage medium and working machine
Liu et al. Enhancement of contour smoothness by substitution of interpolated sub-pixel points for edge pixels
CN113112396B (en) Method for detecting conductive particles
CN111553927B (en) Checkerboard corner detection method, detection system, computer device and storage medium
Pan et al. An efficient method for skew correction of license plate
CN117152219A (en) Image registration method, system and equipment
CN115908399A (en) Magnetic sheet flaw detection method based on improved visual attention mechanism
CN111091513B (en) Image processing method, device, computer readable storage medium and electronic equipment
CN111768436B (en) Improved image feature block registration method based on fast-RCNN

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination