CN116433733A - Registration method and device between optical image and infrared image of circuit board

Info

Publication number
CN116433733A
Authority
CN
China
Prior art keywords
image
points
feature
entropy
point
Prior art date
Legal status
Pending
Application number
CN202310060353.XA
Other languages
Chinese (zh)
Inventor
黄海鸿
郑心遥
李磊
胡嘉琦
周帮来
刘志峰
Current Assignee
Hefei University of Technology
China National Electric Apparatus Research Institute Co Ltd
Original Assignee
Hefei University of Technology
China National Electric Apparatus Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology, China National Electric Apparatus Research Institute Co Ltd filed Critical Hefei University of Technology
Priority to CN202310060353.XA
Publication of CN116433733A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30141Printed circuit board [PCB]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a registration method and a registration device for a visible light image and an infrared image of a circuit board. The registration method comprises the following steps: computing information entropy over image blocks and, according to a threshold, extracting the high-entropy regions as the source image for subsequent feature point detection, which improves the quality of the feature points while also improving detection efficiency; extracting image feature points with a combined FAST+SIFT algorithm; adopting an improved annular SIFT feature point descriptor; and completing feature point matching and mismatch elimination according to the FLANN matching method and an improved RANSAC algorithm, thereby realizing fine matching of the images. According to the invention, when images are matched, the points that best reflect the image characteristics are selected for matching; while guaranteeing the matching precision, this solves the problems of low efficiency and weak response strength of the traditional SIFT algorithm when extracting feature points, and the registration efficiency and precision are markedly improved compared with the traditional algorithm.

Description

Registration method and device between optical image and infrared image of circuit board
Technical Field
The invention relates to an infrared and optical image registration method based on the combination of image entropy and an improved SIFT algorithm in the field of image processing, and to a registration device adopting the registration method; in particular, it relates to a registration method and a registration device between an optical image and an infrared image of a circuit board.
Background
Circuit boards of many kinds are used in large numbers in electronic products and electrical appliances; their applications grow ever wider and their mechanisms and functions ever more complex, and when a circuit board fails, the traditional contact-based diagnosis requires a great deal of time and effort. Infrared thermal imaging detection is a non-contact detection technology that has been successfully applied in many fields, and fault detection of circuit boards is one of its important uses. When a circuit board operates, each element emits different heat radiation; the infrared image is acquired and then processed, the processed infrared image of the faulty board is compared and analyzed, feature by feature, against the infrared image of a fault-free circuit board, and the fault location and the faulty element are judged by an intelligent algorithm.
Because the resolution of an infrared image is low, much image information is lost in the imaging process, which makes judging the fault location of the circuit board and its components extremely difficult; optical imaging, by contrast, is sharp and its image information complete. Fusing the infrared image with the optical image therefore enables accurate detection of the fault and defect distribution of a product's printed circuit board. However, because the infrared thermal imager and the optical camera differ in resolution, shooting angle and so on, the infrared image and the optical image cannot be fused directly, and a registration algorithm between infrared and visible light images therefore needs to be studied.
Image registration is an important link in the field of image processing: it establishes the correspondence between images of the same scene acquired at two different times. It is a basic problem in computer vision research and a core step in computer vision applications. In recent years, image registration methods based on feature extraction have developed rapidly, and registration based on feature points is the most widely used approach in the field of image registration.
Among the many image registration algorithms, the SIFT method has good invariance under image rotation, scale transformation and affine transformation, and has become the most stable algorithm at present. However, the SIFT method has limitations: it needs to process several times the data volume of the original image, the generation process of its feature descriptor is quite complex, and because the descriptor is high-dimensional, the amount of calculation during feature matching is large.
Disclosure of Invention
Aiming at the technical problems that infrared and visible light images are difficult to register and that the SIFT algorithm is computationally heavy, the invention provides a feature-point-based SIFT registration method: an infrared and optical image registration method based on the combination of image entropy and an improved SIFT algorithm. It optimizes the flow of the SIFT algorithm and improves the speed and accuracy of the SIFT algorithm to a certain extent. In particular, the invention relates to a registration method and a device for registration between an optical image and an infrared image of a circuit board.
The invention is realized by adopting the following technical scheme: a method of registration between a visible light image and an infrared image of a circuit board, the method comprising the steps of:
step one, taking a visible light image and an infrared image of the circuit board as input images;
step two, traversing the two input images by adopting non-overlapping sliding windows, dividing the windows, calculating the information entropy of the window area after division, defining the image local area higher than a given preset information entropy threshold as a high entropy area and the image local area lower than the given information entropy threshold as a low entropy area according to the histogram formed by the acquired information entropy, wherein the high entropy area is used for subsequent algorithm feature extraction to participate in feature point detection, and the low entropy area does not participate in feature point detection;
detecting characteristic points of the high-entropy areas screened out of the infrared image and the high-entropy areas screened out of the visible light image by adopting a SIFT+FAST algorithm, and screening out representative points as respective SIFT characteristic points;
step four, respectively constructing annular descriptors for SIFT feature points detected by the two images, performing PCA dimension reduction processing, and respectively acquiring 64-dimensional feature vector descriptors of the visible light images and 64-dimensional feature vector descriptors of the infrared images;
step five, taking Euclidean distance and cosine similarity as the similarity measurement indexes of the two images, calculating the Euclidean distance and cosine similarity of the feature point feature vectors on the two images, adopting the nearest neighbor/next nearest neighbor FLANN algorithm to perform initial matching between the reference image and the image to be matched, adopting the RANSAC algorithm to remove incorrect matches, and finally realizing the fine matching between the visible light image and the infrared image.
As a further improvement of the above solution, in the second step, the method for screening the high entropy area and the low entropy area by using the information entropy threshold includes the following steps:
firstly, dividing a visible light image and an infrared image by adopting non-overlapping sliding windows, traversing each image by using a plurality of non-overlapping sliding windows, dividing each image according to the size of the window, and calculating the information entropy of each window area;
and secondly, according to a histogram formed by the acquired information entropy, setting a segmentation threshold, namely the information entropy threshold, screening a window area for calculating the information entropy, reserving a window area larger than the set information entropy threshold, extracting the characteristic points of a subsequent SIFT+FAST algorithm, and not detecting the characteristic points of the window area smaller than the information entropy threshold.
As a further improvement of the above scheme, for a two-dimensional image in discrete form, the information entropy is calculated as follows:

P_{i,j} = f(i,j) / (W·h)

H = - ∑_{i=0}^{255} ∑_{j=0}^{255} P_{i,j} log_2 P_{i,j}
wherein W, h is the width and height of the picture respectively, (i, j) is a binary group, i represents the gray value of the center in a certain sliding window, j is the gray average value of the pixels except the center in the window; f (i, j) represents the number of times (i, j) this tuple appears in the whole image, and H is the image two-dimensional gray entropy.
As a further improvement of the above solution, in step three, a detection method for detecting feature points in a high entropy area screened out from each image by using sift+fast algorithm includes the following steps:
firstly, constructing a Gaussian scale space;
the gaussian scale space of an image is defined as a function L (x, y, σ):
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein I (x, y) is an input image, G (x, y, sigma) is a variable-scale Gaussian function, (x, y) is a point coordinate on the image, and sigma is a Gaussian blur coefficient; the adjacent layers in each group are subtracted to obtain a Gaussian differential pyramid DOG, the subsequent feature point extraction is carried out on the DOG pyramid, and the formula of a DOG operator D (x, y, sigma) is as follows:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ)
wherein k is a proportionality coefficient;
secondly, detecting and accurately positioning Gaussian scale space feature points;
Searching all scales and image positions in the Gaussian scale space, extreme points are located on each image layer of every scale: a circle of radius 3 is drawn centered on the point under test, and if at least 12 of the 16 pixel points on its edge have values greater than I_x + T_1 or smaller than I_x - T_1, the point is taken as a key point; the position and scale of the key point are then accurately determined by fitting a three-dimensional quadratic function, where I_x is the pixel value of the point under detection and T_1 is a pixel range threshold;
then, removing the points with low contrast and the points positioned at the edges of the image;
removing the two unstable points by setting a contrast threshold and a Hessian matrix;
finally, calculating the direction of the feature points;
the gradient direction characteristics of the neighborhood pixels of the key points are utilized, so that the rotation invariance of the image is realized; sampling in a plurality of neighborhood windows taking the feature points as the centers, and counting the gradient directions of the neighborhood pixels by using a histogram; the gradient histogram ranges from 0 degrees to 360 degrees, and every 45 degrees is a direction, the histogram is divided into 8 directions, namely 8 gradient direction information exists in each characteristic point; the peak of the histogram represents the main direction of the neighborhood gradient at the feature point, i.e. the direction that is the feature point; meanwhile, a Gaussian function is used for smoothing the histogram, the influence of mutation is reduced, and when another peak value equivalent to 80% of the energy of the main peak value exists in the gradient direction histogram, the direction is regarded as the auxiliary direction of the characteristic point; a feature point may be designated to have multiple directions, a primary direction, and more than one secondary direction for enhanced robustness of the match.
Further, when removing the points with low contrast and the points located at the edges of the image, the extreme points are refined to sub-pixel accuracy by fitting a three-dimensional quadratic function; substituting into the Taylor expansion and keeping only the low-order terms gives:

D(X) = D + (∂D/∂X)^T · X + (1/2) · X^T · (∂^2 D/∂X^2) · X

wherein X = (x, y, σ)^T represents the offset relative to the interpolation center coordinates;
presetting a first contrast threshold, comparing and analyzing the contrast of the extreme points with the first contrast threshold, and taking the extreme points with the contrast larger than the first contrast threshold as feature points to be selected; meanwhile, a second contrast threshold value is preset, the second contrast threshold value is larger than the first contrast threshold value, and extreme points with the contrast larger than the second contrast threshold value are continuously stored as feature points to be selected;
acquiring a Hessian matrix H(x, y) of the feature points to be selected:

H(x,y) = | D_xx(x,y)  D_xy(x,y) |
         | D_xy(x,y)  D_yy(x,y) |

Tr(H(x,y)) = D_xx(x,y) + D_yy(x,y) represents the sum of the eigenvalues of matrix H(x,y), and Det(H(x,y)) = D_xx(x,y) · D_yy(x,y) - (D_xy(x,y))^2 represents the determinant of matrix H(x,y), where the values D_xx(x,y), D_xy(x,y), D_yy(x,y) are obtained by differencing the corresponding positions in the neighborhood of the candidate point; the principal curvatures of D are proportional to the eigenvalues of H(x,y). Let

γ = λ_max / λ_min

represent the ratio of the maximum to the minimum eigenvalue of H(x,y); then

Tr(H(x,y))^2 / Det(H(x,y)) = (γ + 1)^2 / γ

To detect whether the principal-curvature ratio is below a certain threshold T_2, it is only necessary to check whether

Tr(H(x,y))^2 / Det(H(x,y)) > (T_2 + 1)^2 / T_2

If the above formula holds, the feature point is rejected; otherwise, the feature point is retained.
Further, the calculation method for calculating the direction of the feature point includes the steps of:
for the key points detected in the DOG pyramid, collecting gradient and direction distribution characteristics of pixels in a 3 sigma neighborhood window of a Gaussian pyramid image where the key points are positioned; the modulus and direction of the gradient are as follows:
m(x,y) = sqrt( (L(x+1,y) - L(x-1,y))^2 + (L(x,y+1) - L(x,y-1))^2 )

θ(x,y) = tan^(-1)( (L(x,y+1) - L(x,y-1)) / (L(x+1,y) - L(x-1,y)) )
wherein L (x, y) is a scale space value at (x, y) where the key point is located, L (x+1, y) is a scale space value at (x+1, y) where the key point is located, L (x-1, y) is a scale space value at (x-1, y) where the key point is located, L (x, y+1) is a scale space value at (x, y+1) where the key point is located, L (x, y-1) is a scale space value at (x, y-1) where the key point is located, m (x, y) is a gradient modulus value, and θ (x, y) is a gradient direction.
As a further improvement of the above solution, in step four, the method for acquiring the 64-dimensional annular feature vector descriptor of the two images includes the steps of:
for any one feature key point, taking the key point as the center of a circle in the scale space, a circle of radius 13 is drawn; since the gradient-distribution weight of a pixel decreases the farther it lies from the circle center, the region is divided into 8 concentric rings with radii of 2, 3, 4, 5, 6, 8, 10 and 13 pixels, forming 8 sub-regions; the gradients within each sub-region are counted in 8 directions, so that 8 × 8 = 64 data are obtained in total, i.e., a 64-dimensional SIFT feature vector.
As a further improvement of the above solution, in step five, the matching method for performing initial matching by using the FLANN algorithm combining euclidean distance and cosine similarity includes the following steps:
after the SIFT feature vectors of the two images are generated, the Euclidean distance and cosine similarity between the feature point feature vectors of the two images are calculated, and the distance and direction between the feature vectors are used as similarity judgment indexes; the feature points with the minimum distance and with cosine similarity above a given threshold are selected as initial matching points, and a pair of points is judged to be a correct match when the ratio of the Euclidean distance of the nearest neighbor to that of the next nearest neighbor is smaller than the proportion threshold T_3 = 0.77; the wrong matching points are removed, and the matching points in the visible light image and the infrared image are connected by lines, thereby achieving image registration.
The invention also provides a registration device between an optical image and an infrared image of a circuit board, the registration device comprising:
the acquisition module is used for acquiring an infrared image and a visible light image of the circuit board, the visible light image being the optical image;
the entropy region distinguishing module is used for respectively removing low-entropy regions according to the respective image information entropy in the visible optical image and the infrared image, and reserving high-entropy regions for subsequent feature point detection;
The construction module is used for constructing a Gaussian scale space for the high-entropy area and establishing an image Gaussian pyramid and a Gaussian differential pyramid;
the characteristic point screening module is used for acquiring extreme points in different scale spaces in the Gaussian differential pyramid by using a FAST+SIFT combination algorithm, and accurately positioning and screening the characteristic points according to the extreme points;
the removing module is used for screening and removing unstable points by adopting a threshold method and a Hessian matrix method; including points of low contrast and points at the edges of the image;
the characteristic point direction calculation module is used for calculating and determining the characteristic point direction and constructing a key point 64-dimensional annular descriptor;
and the key point matching module is used for carrying out key point matching by using the Euclidean distance and cosine similarity between vectors as measurement indexes and applying a quick approximate nearest neighbor search FLANN, and eliminating mismatching by using a RANSAC random sampling consistency algorithm.
As a further improvement of the above solution, the registration device is further configured to use any of the above registration methods between an optical image and an infrared image of a circuit board.
Compared with the prior art, the invention has the following beneficial effects:
1. By computing block-wise information entropy over image regions and extracting the high-entropy regions as the detection target image according to a threshold, the accuracy of the matching point pairs is improved, and the matching accuracy is higher than that of the traditional SIFT matching point pairs.
2. By using the combined SIFT and FAST method, the problems of low efficiency and weak response strength of the traditional algorithm when extracting feature points are solved, the accuracy of the matching point pairs is improved, and the matching accuracy is higher than that of the traditional SIFT matching point pairs.
3. By adopting the improved annular SIFT feature point descriptor, the overall running speed of the algorithm is improved on the premise of guaranteeing the registration quality.
According to the invention, when images are matched, the points that best reflect the image characteristics are selected for matching; while guaranteeing the matching precision, this solves the problems of low efficiency and weak response strength of the traditional SIFT algorithm when extracting feature points, and the registration efficiency and precision are markedly improved compared with the traditional algorithm. The invention therefore selects visible light and infrared images as experimental data and compares the method against the traditional SIFT algorithm; registration efficiency and precision are markedly improved over the traditional algorithm, and the method has wide application prospects in image fusion, remote sensing image processing, computer vision and power equipment diagnosis.
Drawings
Fig. 1 is a flowchart of a registration method between an optical image and an infrared image of a circuit board provided by the present invention.
Fig. 2 is an entropy histogram of the visible light image of fig. 1.
Fig. 3 is an entropy histogram of the infrared image of fig. 1.
Fig. 4 is a feature point detection diagram in fig. 1.
Fig. 5 is a schematic diagram of the process of creating a 64-dimensional ring descriptor in fig. 1.
Fig. 6 is a registration chart of the conventional SIFT algorithm.
Fig. 7 is a registration chart of the improved SIFT algorithm of Fig. 1.
Fig. 8 is a graph comparing results.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that when a component is referred to as being "mounted on" another component, it can be on the other component or intervening components may also be present. When an element is referred to as being "disposed on" another element, it can be disposed on the other element or intervening elements may also be present. When an element is referred to as being "fixed to" another element, it can be fixed to the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "or/and" as used herein includes any and all combinations of one or more of the associated listed items.
The registration method between the optical image and the infrared image of the circuit board mainly comprises 5 steps.
And step one, taking the optical image and the infrared image of the circuit board as input images to be registered.
Step two, traversing the optical image and the infrared image by adopting non-overlapping sliding windows, dividing the windows, calculating the information entropy of the window area after division, defining an image local area with high information entropy higher than a preset information entropy threshold as a high entropy area, defining an image local area with low information entropy lower than the information entropy threshold as a low entropy area, wherein the high entropy area is used for subsequent algorithm feature extraction to participate in feature point detection, and the low entropy area is not used for feature point detection.
And thirdly, respectively detecting characteristic points of the high-entropy area screened out of the optical image and the high-entropy area screened out of the infrared image by adopting a SIFT+FAST algorithm, and respectively screening representative points as respective SIFT characteristic points.
And fourthly, respectively constructing annular descriptors and performing dimension reduction processing on SIFT feature points of the optical image and SIFT feature points of the infrared image to respectively acquire 64-dimensional feature vector descriptors of the optical image and 64-dimensional feature vector descriptors of the infrared image.
And fifthly, taking Euclidean distance and cosine similarity as similarity measurement indexes of the optical image and the infrared image, calculating Euclidean distance and cosine similarity of feature vectors of feature points on the two images, adopting a nearest neighbor/next neighbor FLANN algorithm to perform initial matching on the optical image and the infrared image, adopting a RANSAC algorithm to remove mismatching in the optical image and the infrared image, and finally realizing fine matching between the optical image and the infrared image.
Of course, referring to fig. 1, seven aspects can be summarized:
1) Acquiring an infrared image and a visible light image of a PCB;
2) Removing a low entropy region according to the size of the image information entropy, and reserving a high entropy region for subsequent feature point detection;
3) Constructing a Gaussian scale space for the image, and constructing an image Gaussian pyramid and a Gaussian differential pyramid;
4) Acquiring extreme points in different scale spaces in the Gaussian differential pyramid by using a FAST+SIFT combination algorithm, and accurately positioning and screening out characteristic points according to the extreme points;
5) Screening and removing unstable points by adopting a threshold method and a Hessian matrix method; including points of low contrast and points at the edges of the image;
6) Calculating and determining the direction of the characteristic points, and constructing a 64-dimensional annular descriptor of the key points;
7) And (3) using the Euclidean distance between vectors and cosine similarity as measurement indexes, performing key point matching by using a quick approximate nearest neighbor search (FLANN), and eliminating mismatching by using a RANSAC random sampling consistency algorithm.
Each step is then analyzed in detail.
Aiming at the first step, acquiring an infrared image and a visible light image of a PCB (printed circuit board);
aiming at the second step, the specific method for screening the high-entropy window area and the low-entropy window area by utilizing the image entropy threshold value is as follows:
first, the reference image and the image to be registered are segmented with non-overlapping sliding windows: the images are traversed by a number of non-overlapping sliding windows (e.g., 5×5), each image is segmented according to the window size, and the information entropy of each small window area is calculated. For a two-dimensional image in discrete form, the information entropy is calculated according to the following formulas:

P_{i,j} = f(i,j) / (W·h)

H = - ∑_{i=0}^{255} ∑_{j=0}^{255} P_{i,j} log_2 P_{i,j}
Wherein W, h is the width and height of the picture respectively, (i, j) is a binary group, i represents the gray value of the center in a certain sliding window, j is the gray average value of the pixels except the center in the window; f (i, j) represents the number of times (i, j) this tuple appears in the whole image, and H is the image two-dimensional gray entropy.
Secondly, according to the histogram (shown in fig. 2 and 3) formed by the acquired information entropy, a segmentation threshold is set, window areas for calculating the information entropy are screened, window areas larger than the set threshold are reserved for subsequent extraction of characteristic points of a SIFT+FAST algorithm, and window areas smaller than the threshold are not subjected to subsequent detection of the characteristic points.
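To make the screening step concrete, the following is a minimal sketch of the block-entropy computation and thresholding in Python with NumPy; the 3×3 neighborhood used for the mean gray value j, the window size and the entropy threshold are illustrative assumptions rather than values fixed by the method.

import numpy as np

def two_dim_entropy(block):
    # Two-dimensional gray entropy of one window: for every pixel, form the
    # pair (i, j) where i is the pixel's gray value and j is the rounded mean
    # gray value of its 8 neighbors (a 3x3 neighborhood is assumed here).
    h, w = block.shape
    padded = np.pad(block.astype(np.float32), 1, mode="edge")
    neigh_sum = sum(padded[dy:dy + h, dx:dx + w]
                    for dy in range(3) for dx in range(3)) - block
    i = block.astype(np.int32).ravel()
    j = np.round(neigh_sum / 8.0).astype(np.int32).ravel()
    counts = np.zeros((256, 256), dtype=np.int64)
    np.add.at(counts, (i, j), 1)                 # frequency f(i, j)
    p = counts[counts > 0] / (w * h)             # P_{i,j}, normalized per window
    return float(-(p * np.log2(p)).sum())

def high_entropy_mask(img, win=32, thr=6.0):
    # Boolean mask, True inside every non-overlapping win x win block whose
    # two-dimensional entropy exceeds thr (win and thr are illustrative).
    mask = np.zeros(img.shape, dtype=bool)
    for y in range(0, img.shape[0] - win + 1, win):
        for x in range(0, img.shape[1] - win + 1, win):
            if two_dim_entropy(img[y:y + win, x:x + win]) > thr:
                mask[y:y + win, x:x + win] = True
    return mask

Feature detection is then restricted to pixels where the mask is True; the low-entropy blocks are skipped entirely.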
For the third step, please refer to fig. 4, the specific method for detecting the feature points of the image high entropy region by adopting the sift+fast algorithm is as follows:
(1) Building the Gaussian scale space: the Gaussian scale space of an image is defined as the function L(x, y, σ) = G(x, y, σ) * I(x, y), where I(x, y) is the input image and G(x, y, σ) is the variable-scale Gaussian function

G(x, y, σ) = (1 / (2πσ^2)) · e^(-(x^2 + y^2) / (2σ^2))

The Gaussian convolution kernel is the only linear kernel that can implement the scale transformation. Here (x, y) are the coordinates of a point on the image and σ is the Gaussian blur coefficient; the size of σ determines the smoothness of the image, with a large scale corresponding to the profile features of the image and a small scale to its detail features. After the image Gaussian pyramid has been created, in order to effectively detect stable key points in the scale space, the adjacent layers in each group are subtracted to obtain the difference-of-Gaussians (DOG) pyramid, and the subsequent feature point extraction is carried out on the DOG pyramid.
(2) Detecting and accurately positioning the Gaussian scale space feature points: searching all scales and image positions in the Gaussian scale space, extreme points are located on each image layer of every scale; a circle of radius 3 is drawn centered on the point under test, and if at least 12 of the 16 pixel points on its edge have values greater than I_x + T_1 or smaller than I_x - T_1, the point is considered a key point, after which the position and scale of the key point are accurately determined by fitting a three-dimensional quadratic function.
(3) Points of low contrast and points at the edges of the image are removed: both unstable points are removed by setting a contrast threshold and a Hessian matrix.
(4) Calculating the direction of the feature points: the gradient direction characteristics of the neighborhood pixels of the key points are utilized, so that rotation invariance of the image is realized; sampling is performed in neighborhood windows (e.g., 4×4) centered on the feature points, and the gradient directions of the neighborhood pixels are counted with a histogram; the gradient histogram ranges from 0 to 360 degrees and is divided into 8 directions (one direction every 45 degrees), i.e., each feature point carries 8 gradient direction values. The peak of the histogram represents the main direction of the neighborhood gradient at the feature point, i.e., the direction of the feature point. Meanwhile, a Gaussian function is used to smooth the histogram and reduce the influence of mutations, and when another peak equivalent to 80% of the energy of the main peak exists in the gradient direction histogram, that direction is regarded as the auxiliary direction of the feature point. A feature point may be designated to have multiple directions, one primary direction and more than one secondary direction, to enhance the robustness of matching.
For step four, please refer to fig. 5, the specific method for obtaining the 64-dimensional annular feature vector descriptors of the reference image and the image to be matched is as follows:
for any one feature key point, taking the key point as the center of a circle in the scale space, a circle of radius 13 is drawn; since the gradient-distribution weight of a pixel decreases the farther it lies from the circle center, the region is divided into 8 concentric rings with radii of 2, 3, 4, 5, 6, 8, 10 and 13 pixels, forming 8 sub-regions; the gradients within each sub-region are counted in 8 directions, so that 8 × 8 = 64 data are obtained in total, i.e., a 64-dimensional SIFT feature vector.
Aiming at the fifth step, the specific method for carrying out initial matching by utilizing a FLANN algorithm combining the Euclidean distance and the cosine similarity is as follows:
after the SIFT feature vectors of the two images are generated, the Euclidean distance and cosine similarity of the feature point feature vectors on the two images are calculated, and the distance and direction between the vectors are used as similarity judgment indexes; the feature points with the smallest distance and with cosine similarity above a given threshold are taken as initial matching points, a pair of points is judged to be a match when the ratio of the Euclidean distance of the nearest neighbor to that of the next nearest neighbor is smaller than the ratio threshold T = 0.77, and the mismatched points are removed; the matching points in the reference image and the image to be registered are then connected by lines to realize image registration. Referring to fig. 6, 7 and 8: fig. 6 is a registration chart of the conventional SIFT algorithm; fig. 7 is a registration chart of the improved SIFT algorithm of fig. 1; fig. 8 is a graph comparing the results.
The registration method between the optical image and the infrared image of the circuit board can be designed into embedded software or non-embedded software when in application, but the registration device between the optical image and the infrared image of the circuit board can be designed independently.
The registration device comprises an acquisition module, an entropy region distinguishing module, a construction module, a characteristic point screening module, a removal module, a characteristic point direction calculation module and a key point matching module.
The acquisition module is used for acquiring an infrared image and a visible light image of the circuit board, taking the visible light image, namely the optical image, as a reference image and taking the infrared image as an image to be matched. The entropy region distinguishing module is used for respectively removing low-entropy regions according to the respective image information entropy of the reference image and the image to be matched, and reserving high-entropy regions for subsequent feature point detection. The construction module is used for constructing a Gaussian scale space for the high-entropy region and establishing an image Gaussian pyramid and a Gaussian differential pyramid. The feature point screening module is used for acquiring extreme points in different scale spaces in the Gaussian differential pyramid by using a FAST+SIFT combination algorithm, and accurately positioning and screening the feature points according to the extreme points. The removing module is used for screening and removing unstable points by adopting a threshold method and a Hessian matrix method; including points of low contrast and points at the edges of the image. The characteristic point direction calculation module is used for calculating and determining the characteristic point direction and constructing a key point 64-dimensional annular descriptor. The key point matching module is used for carrying out key point matching by using the Euclidean distance and cosine similarity between vectors as measurement indexes and applying a quick approximate nearest neighbor search FLANN, and eliminating mismatching by using a RANSAC random sampling consistency algorithm.
The image entropy is an estimate of how "busy" an image is, expressed as the average number of bits in the image gray level set, and also describes the average information content of the image source. The entropy of an image is a statistical form of characteristics, which reflects the quantity of average information in the image, and represents the aggregation characteristics of gray distribution of the image, and the larger the entropy of the image information is, the more characteristic points with high contrast and high quality are indicated, and vice versa.
The high entropy area is reserved as a subsequent feature point detection according to the image information entropy, specifically: firstly, traversing a visible light image and an infrared image of a circuit board by adopting a non-overlapping sliding window, dividing the window, and calculating the information entropy of a window area after division; secondly, setting a threshold according to information entropy of a plurality of local areas of the image, selecting a proper threshold to reserve the local areas of the image with high information entropy according to a histogram formed by the acquired information entropy, removing the image areas with low information entropy, and extracting feature points by adopting an improved SIFT algorithm aiming at the reserved image areas.
The specific flow for constructing the Gaussian scale space and the image Gaussian pyramid is as follows: after the visible light and infrared images are converted to grayscale, each is doubled in size and used as layer 1 of group 1 of its Gaussian pyramid; group 1, layer 1 sits at the bottom of the Gaussian pyramid. The image obtained by Gaussian convolution of the group 1, layer 1 image is taken as layer 2 of the group 1 pyramid, the Gaussian convolution function being:

G(x, y, σ) = (1 / (2πσ^2)) · e^(-(x^2 + y^2) / (2σ^2))

σ is then multiplied by the proportionality coefficient k to obtain the new smoothing factor σ′ = kσ, the group 1, layer 2 image is smoothed with this new factor, and the resulting image is taken as the group 1, layer 3 image; repeating this operation finally yields L layers of images in group 1. For the group 2 images, the third-from-last layer of group 1 is downsampled by a scale factor of 2 and the resulting image is taken as layer 1 of group 2; the group 2, layer 1 image is then smoothed with the smoothing factor σ to obtain the group 2, layer 2 image, and, exactly as above, the L images of group 2 are obtained. The images within the same group have the same size but different smoothing scales, the corresponding smoothing coefficients being: 0, σ, kσ, k^2σ, …, k^(L-2)σ.
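As an illustration of this construction, the following Python/OpenCV sketch builds the Gaussian pyramid and the corresponding DOG layers described here and in the next paragraph; the octave count, the number of layers L, the base σ and the choice of k are illustrative assumptions.

import cv2
import numpy as np

def build_pyramids(gray, octaves=4, L=5, sigma0=1.6):
    # k is assumed to follow the usual geometric progression so that the
    # scale doubles across an octave's useful layers.
    k = 2.0 ** (1.0 / (L - 3))
    # layer 1 of group 1: the grayscale image doubled in size
    img = cv2.resize(gray, None, fx=2, fy=2,
                     interpolation=cv2.INTER_LINEAR).astype(np.float32)
    gauss, dog = [], []
    for _ in range(octaves):
        layers = [img]                       # smoothing coefficient 0
        for s in range(1, L):
            sig = sigma0 * k ** (s - 1)      # sigma, k*sigma, ..., k^(L-2)*sigma
            layers.append(cv2.GaussianBlur(layers[0], (0, 0), sig))
        gauss.append(layers)
        # DOG layers: difference of adjacent layers within the group
        dog.append([layers[s + 1] - layers[s] for s in range(L - 1)])
        # next group: downsample the third-from-last layer by a factor of 2
        img = cv2.resize(layers[-3], None, fx=0.5, fy=0.5,
                         interpolation=cv2.INTER_NEAREST)
    return gauss, dog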
The Gaussian differential pyramid is constructed, and feature point detection and accurate positioning are carried out, specifically: on the basis of the image Gaussian pyramid constructed in the previous step, the adjacent layers in each group are subtracted to obtain the difference-of-Gaussians (DOG) pyramid, and the subsequent SIFT feature point extraction is performed on the DOG pyramid. Layer 1 of group 1 of the DOG pyramid is obtained by subtracting layer 1 of group 1 of the Gaussian pyramid from layer 2 of group 1, and repeating this step forms the Gaussian differential pyramid; the Gaussian differential scale space is then simplified by removing the group 1, layer 1 scale space, and extreme points are detected in the simplified Gaussian differential scale space. To judge whether a certain point K in a certain image layer is a feature point, a circle of radius 3 pixels is drawn centered on the point, which places 16 pixel points on its circumferential arc. Comparing these 16 pixel points with the pixel value of the center point under test, it is checked whether at least 12 contiguous pixel points among the 16 on the circumference have values greater than I_k + t or smaller than I_k - t; if this requirement is satisfied, K is judged to be a feature point. To reduce the feature point detection time, for each point the pixels at positions 1, 5, 9 and 13 (spaced 90 degrees apart, at the top, bottom, left and right) are detected first; only if at least 3 of these 4 points meet the condition does detection continue over the 16-pixel circle, otherwise the point is judged to be a non-feature point and is directly eliminated.
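The circle test just described can be sketched as follows in plain NumPy; the threshold t is illustrative, and the caller is assumed to keep (x, y) at least 3 pixels away from the image border.

import numpy as np

# Offsets (dx, dy) of the 16 pixels on a Bresenham circle of radius 3,
# starting at the top and proceeding clockwise.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2),
          (-1, -3)]

def is_fast_corner(img, x, y, t=20.0):
    # True if at least 12 contiguous circle pixels are all brighter than
    # I_k + t or all darker than I_k - t.
    center = float(img[y, x])
    vals = np.array([float(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    # quick pretest on positions 1, 5, 9, 13 (indices 0, 4, 8, 12)
    pre = vals[[0, 4, 8, 12]]
    if np.sum(pre > center + t) < 3 and np.sum(pre < center - t) < 3:
        return False
    for sign in (1, -1):                        # brighter run, then darker run
        hits = sign * (vals - center) > t
        run = best = 0
        for h in np.concatenate([hits, hits]):  # doubled to handle wrap-around
            run = run + 1 if h else 0
            best = max(best, run)
        if best >= 12:
            return True
    return False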
The specific flow for removing the points with low contrast and the points located at the edges of the image is as follows: the extreme points are refined to sub-pixel accuracy by fitting a three-dimensional quadratic function, substituting into the Taylor expansion and keeping only the low-order terms:

D(X) = D + (∂D/∂X)^T · X + (1/2) · X^T · (∂^2 D/∂X^2) · X

where X = (x, y, σ)^T is the offset relative to the interpolation center.
A first contrast threshold is preset, the contrast of each extreme point is compared against it, and extreme points whose contrast is larger than the first contrast threshold are taken as feature points to be selected; meanwhile, a second contrast threshold, larger than the first, is preset, and extreme points whose contrast is larger than the second contrast threshold continue to be stored as feature points to be selected. Because the DOG operator produces a strong edge response, the less stable edge response points should be removed. The Hessian matrix of each feature point to be selected is acquired:

H = | D_xx  D_xy |
    | D_xy  D_yy |

where the D values can be obtained by taking differences between adjacent pixel points, and the eigenvalues of H are proportional to the principal curvatures of D. The unstable edge response points are removed; both kinds of unstable points are eliminated by setting a contrast threshold and using the Hessian matrix.
In this embodiment, the Hessian matrix H(x, y) of the feature points to be selected is acquired:

H(x,y) = | D_xx(x,y)  D_xy(x,y) |
         | D_xy(x,y)  D_yy(x,y) |

Tr(H(x,y)) = D_xx(x,y) + D_yy(x,y) represents the sum of the eigenvalues of matrix H(x,y), and Det(H(x,y)) = D_xx(x,y) · D_yy(x,y) - (D_xy(x,y))^2 represents the determinant of matrix H(x,y), where the values D_xx(x,y), D_xy(x,y), D_yy(x,y) are obtained by differencing the corresponding positions in the neighborhood of the candidate point; the principal curvatures of D are proportional to the eigenvalues of H(x,y). Let

γ = λ_max / λ_min

represent the ratio of the maximum to the minimum eigenvalue of H(x,y); then

Tr(H(x,y))^2 / Det(H(x,y)) = (γ + 1)^2 / γ

To detect whether the principal-curvature ratio is below a certain threshold T_2, it is only necessary to check whether

Tr(H(x,y))^2 / Det(H(x,y)) > (T_2 + 1)^2 / T_2

If the above formula holds, the feature point is rejected; otherwise, the feature point is retained.
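This edge-response test can be sketched as follows, approximating D_xx, D_yy and D_xy by finite differences on the DOG layer; the value T_2 = 10 is the one commonly used for SIFT and is an assumption here, since the text does not fix it.

import numpy as np

def passes_edge_test(dog_layer, x, y, T2=10.0):
    d = dog_layer.astype(np.float32)
    dxx = d[y, x + 1] + d[y, x - 1] - 2.0 * d[y, x]
    dyy = d[y + 1, x] + d[y - 1, x] - 2.0 * d[y, x]
    dxy = (d[y + 1, x + 1] - d[y + 1, x - 1]
           - d[y - 1, x + 1] + d[y - 1, x - 1]) / 4.0
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                 # curvatures of opposite sign: an edge, reject
        return False
    # keep the point only if Tr(H)^2 / Det(H) stays below (T2 + 1)^2 / T2
    return tr * tr / det < (T2 + 1.0) ** 2 / T2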
The key point direction is calculated, specifically: for the key points detected in the DOG pyramid, gradient and direction distribution characteristics of pixels in a 3 sigma neighborhood window of the Gaussian pyramid image where the key points are located are collected. The modulus and direction of the gradient are as follows:
m(x,y) = sqrt( (L(x+1,y) - L(x-1,y))^2 + (L(x,y+1) - L(x,y-1))^2 )

θ(x,y) = tan^(-1)( (L(x,y+1) - L(x,y-1)) / (L(x+1,y) - L(x-1,y)) )
wherein L (x, y) is a scale space value at (x, y) where the key point is located, L (x+1, y) is a scale space value at (x+1, y) where the key point is located, L (x-1, y) is a scale space value at (x-1, y) where the key point is located, L (x, y+1) is a scale space value at (x, y+1) where the key point is located, L (x, y-1) is a scale space value at (x, y-1) where the key point is located, m (x, y) is a gradient modulus value, and θ (x, y) is a gradient direction.
A direction parameter is designated for each key point by using the gradient direction distribution characteristic of the key point's neighborhood pixels, so that the operator acquires rotation invariance. A gradient histogram statistical method is adopted: taking the key point as the origin, the gradients and directions of the image pixels within the 3σ neighborhood window of the Gaussian pyramid image are counted with a histogram, and the direction of the key point is determined from it. The gradient histogram divides the range of directions from 0 to 360 degrees into 36 bins of 10 degrees each. The peak direction of the histogram represents the main direction of the key point, and the contribution of a pixel to the histogram decreases the farther it lies from the center point; meanwhile, a Gaussian function is used to smooth the histogram and reduce the influence of mutations. When another peak equivalent to 80% of the energy of the main peak exists in the gradient direction histogram, that direction is regarded as the auxiliary direction of the feature point; a feature point may thus be designated to have multiple directions, one primary direction and more than one secondary direction, to enhance the robustness of matching.
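A sketch of this orientation assignment with the 36-bin histogram and the 80% secondary-peak rule follows; the Gaussian weighting width of 1.5σ and the use of bin centers as the returned angles are assumptions.

import numpy as np

def keypoint_orientations(L_img, x, y, sigma):
    # Histogram the gradient directions over the 3*sigma neighborhood window,
    # weighted by gradient magnitude and a Gaussian falloff from the center.
    radius = int(round(3.0 * sigma))
    hist = np.zeros(36)
    h, w = L_img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px, py = x + dx, y + dy
            if not (0 < px < w - 1 and 0 < py < h - 1):
                continue
            gx = float(L_img[py, px + 1]) - float(L_img[py, px - 1])
            gy = float(L_img[py + 1, px]) - float(L_img[py - 1, px])
            m = np.hypot(gx, gy)
            theta = np.degrees(np.arctan2(gy, gx)) % 360.0
            weight = np.exp(-(dx * dx + dy * dy) / (2.0 * (1.5 * sigma) ** 2))
            hist[int(theta // 10) % 36] += weight * m
    peak = hist.max()
    # the main direction plus every auxiliary direction at >= 80% of the peak
    return [b * 10.0 + 5.0 for b in range(36)
            if peak > 0 and hist[b] >= 0.8 * peak]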
A key point descriptor is constructed to form the feature vector. The annular descriptor is used; since a ring has rotational invariance, there is no need to determine the principal direction of the feature point. Taking the key point as the circle center, a circular window of radius 13 is taken as the neighborhood range of the feature point, and the neighborhood is divided into 8 concentric circles, i.e., 8 sub-regions, with radii of 2, 3, 4, 5, 6, 8, 10 and 13 pixels respectively. For all pixel points on each annular sub-region, the pixel gradients are counted in 8 directions (one direction every 45 degrees). In total there are therefore 8 × 8 = 64 values; the feature vector entries are ordered and weighted with a Gaussian window, and normalization is applied to the feature vector in order to reduce the negative influence of illumination changes on the matching result.
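The descriptor construction can be sketched as follows; the ring radii and the 8 orientation bins follow the description above, while the Gaussian weighting width and the L2 normalization details are assumptions.

import numpy as np

RING_RADII = [2, 3, 4, 5, 6, 8, 10, 13]   # outer radius of each of the 8 rings

def ring_descriptor(L_img, x, y):
    desc = np.zeros((8, 8), dtype=np.float32)   # rings x orientation bins
    h, w = L_img.shape
    R = RING_RADII[-1]
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            r = np.hypot(dx, dy)
            px, py = x + dx, y + dy
            if r > R or not (0 < px < w - 1 and 0 < py < h - 1):
                continue
            ring = next(i for i, rad in enumerate(RING_RADII) if r <= rad)
            gx = float(L_img[py, px + 1]) - float(L_img[py, px - 1])
            gy = float(L_img[py + 1, px]) - float(L_img[py - 1, px])
            m = np.hypot(gx, gy)
            theta = np.degrees(np.arctan2(gy, gx)) % 360.0
            # farther pixels contribute less (assumed Gaussian falloff)
            weight = np.exp(-(r * r) / (2.0 * (0.5 * R) ** 2))
            desc[ring, int(theta // 45) % 8] += weight * m
    v = desc.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v     # 64-dim, normalized against illumination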
Key point matching is performed with the FLANN algorithm: after the 64-dimensional SIFT feature vectors of the two images are generated, the Euclidean distance and cosine similarity between the feature point feature vectors on the two images are calculated as similarity measures. For each feature point, the two feature points with the closest Euclidean distance are found in the reference image; they are called the nearest neighbor and the next nearest neighbor. If the distance to the nearest neighbor divided by the distance to the next nearest neighbor is smaller than the preset proportion threshold and the cosine similarity is above the given threshold, the pair of points is considered successfully matched; otherwise the feature point is considered to have failed to match, i.e., it has no matching point. The matching points in the reference image and the image to be registered are then connected by lines to realize the image registration. After the initial matching, some mismatches may remain in the image; to eliminate them, the RANSAC algorithm is adopted to remove the mismatched point pairs and achieve fine matching of the images.
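A sketch of this matching stage with OpenCV's FLANN matcher follows; the 0.77 nearest/next-nearest ratio comes from the text, while the cosine-similarity threshold and the use of a RANSAC homography (rather than some other transform model) are assumptions.

import cv2
import numpy as np

def match_features(kp1, des1, kp2, des2, ratio=0.77, cos_thr=0.9):
    # FLANN with a KD-tree index; descriptors must be float32.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des1.astype(np.float32), des2.astype(np.float32), k=2)
    good = []
    for pair in knn:
        if len(pair) < 2:
            continue
        m, n = pair
        v1, v2 = des1[m.queryIdx], des2[m.trainIdx]
        cos_sim = float(np.dot(v1, v2) /
                        (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))
        # ratio test on Euclidean distance plus the cosine-similarity check
        if m.distance < ratio * n.distance and cos_sim > cos_thr:
            good.append(m)
    if len(good) < 4:
        return good, None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return good, None
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return inliers, H

The inlier matches can then be drawn between the two images with cv2.drawMatches to produce registration charts like those of figs. 6 and 7.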
Compared with the prior art, the invention has the following beneficial effects:
1. By computing block-wise information entropy over image regions and extracting the high-entropy regions as the detection target image according to a threshold, the accuracy of the matching point pairs is improved, and the matching accuracy is higher than that of the traditional SIFT matching point pairs.
2. By using the combined SIFT and FAST method, the problems of low efficiency and weak response strength of the traditional algorithm when extracting feature points are solved, the accuracy of the matching point pairs is improved, and the matching accuracy is higher than that of the traditional SIFT matching point pairs.
3. By adopting the improved annular SIFT feature point descriptor, the overall running speed of the algorithm is improved on the premise of guaranteeing the registration quality.
The invention selects visible light and infrared images as experimental data and compares the method against the traditional SIFT algorithm; compared with the traditional algorithm, the registration efficiency and precision are markedly improved. The method has wide application prospects in image fusion, remote sensing image processing, computer vision and power equipment diagnosis.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. A method of registering between a visible light image and an infrared image of a circuit board, the method comprising the steps of:
step one, taking a visible light image and an infrared image of the circuit board as input images;
step two, traversing the two input images by adopting non-overlapping sliding windows, dividing the windows, calculating the information entropy of the window area after division, defining the image local area higher than a given preset information entropy threshold as a high entropy area and the image local area lower than the given information entropy threshold as a low entropy area according to the histogram formed by the acquired information entropy, wherein the high entropy area is used for subsequent algorithm feature extraction to participate in feature point detection, and the low entropy area does not participate in feature point detection;
detecting characteristic points of the high-entropy areas screened out of the infrared image and the high-entropy areas screened out of the visible light image by adopting a SIFT+FAST algorithm, and screening out representative points as respective SIFT characteristic points;
step four, respectively constructing annular descriptors for SIFT feature points detected by the two images, performing PCA dimension reduction processing, and respectively acquiring 64-dimensional feature vector descriptors of the visible light images and 64-dimensional feature vector descriptors of the infrared images;
step five, taking Euclidean distance and cosine similarity as the similarity measurement indexes of the two images, calculating the Euclidean distance and cosine similarity of the feature point feature vectors on the two images, adopting the nearest neighbor/next nearest neighbor FLANN algorithm to perform initial matching between the reference image and the image to be matched, adopting the RANSAC algorithm to remove incorrect matches, and finally realizing the fine matching between the visible light image and the infrared image.
2. The method of registration between an optical image and an infrared image of a circuit board according to claim 1, wherein, in step two, the method of screening the high-entropy and low-entropy regions using the information-entropy threshold comprises the following steps:
firstly, traversing the visible light image and the infrared image with a plurality of non-overlapping sliding windows, dividing each image according to the window size, and calculating the information entropy of each window region;
secondly, according to the histogram formed by the acquired information entropies, setting a segmentation threshold, namely the information-entropy threshold, and screening the window regions by their computed entropy: window regions above the set information-entropy threshold are retained for the subsequent SIFT+FAST feature-point extraction, while window regions below the threshold undergo no feature-point detection.
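A minimal sketch of this screening step follows, assuming a grayscale uint8 image. The window size and the fixed entropy threshold are illustrative assumptions (the claim derives the threshold from the histogram of window entropies), and the ordinary per-window Shannon entropy is used here as a simple stand-in for the two-dimensional entropy specified in claim 3.

```python
# Sketch of the entropy screening of claim 2: traverse the image with
# non-overlapping windows, keep only windows whose entropy exceeds a
# threshold. Window size and threshold are illustrative assumptions.
import numpy as np

def screen_high_entropy_regions(img, win=32, thresh=4.0):
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            block = img[y:y + win, x:x + win]
            p = np.bincount(block.ravel(), minlength=256) / block.size
            p = p[p > 0]
            if -(p * np.log2(p)).sum() > thresh:   # high-entropy window
                mask[y:y + win, x:x + win] = True  # kept for detection
    return mask
```

The returned boolean mask marks the high-entropy regions in which the subsequent SIFT+FAST detection is allowed to report feature points.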
3. The method of registration between an optical image and an infrared image of a circuit board according to claim 2, characterized in that, for a two-dimensional image in discrete form, the tuple probability P(i,j) and the information entropy H are calculated as:
P(i,j)=f(i,j)/(W·h)
H=−Σ_i Σ_j P(i,j)·log₂ P(i,j)
wherein W and h are respectively the width and height of the picture; (i,j) is a binary tuple in which i represents the gray value at the center of a sliding window and j is the mean gray value of the pixels in the window other than the center; f(i,j) represents the number of times the tuple (i,j) appears in the whole image; and H is the two-dimensional gray entropy of the image.
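These formulas translate almost line for line into code. The sketch below assumes a grayscale uint8 image and a 3×3 sliding window (the claim does not fix the window size), and uses SciPy's uniform_filter, with reflected borders, to obtain the neighborhood means.

```python
# Two-dimensional gray entropy per claim 3: pair each pixel's gray
# value i with the mean gray value j of the other pixels in its
# sliding window, then compute H = -ΣΣ P(i,j) log2 P(i,j).
import numpy as np
from scipy.ndimage import uniform_filter

def two_dim_gray_entropy(img, win=3):
    f64 = img.astype(np.float64)
    n = win * win
    # Window mean including the center, then remove the center pixel's
    # contribution to get the mean of the remaining n - 1 neighbours.
    j = (uniform_filter(f64, size=win) * n - f64) / (n - 1)

    i = img.astype(np.uint8)
    j = np.clip(np.round(j), 0, 255).astype(np.uint8)

    # f(i, j): occurrence count of each tuple over the whole image.
    f = np.zeros((256, 256))
    np.add.at(f, (i.ravel(), j.ravel()), 1)

    p = f / f.sum()                  # P(i,j) = f(i,j) / (W·h)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()   # two-dimensional gray entropy H
```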
4. The registration method between an optical image and an infrared image of a circuit board according to claim 1, wherein, in step three, the method of detecting feature points in the high-entropy regions screened from each image using the SIFT+FAST algorithm comprises the following steps:
firstly, constructing a Gaussian scale space;
the Gaussian scale space of an image is defined as the function L(x,y,σ):
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein I(x,y) is the input image, G(x,y,σ) is a variable-scale Gaussian function, (x,y) are point coordinates on the image, and σ is the Gaussian blur coefficient; adjacent layers within each octave are subtracted to obtain the difference-of-Gaussian (DoG) pyramid, on which the subsequent feature-point extraction is carried out; the formula of the DoG operator D(x,y,σ) is:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ)
Wherein k is a proportionality coefficient;
secondly, detecting and accurately positioning Gaussian scale space feature points;
searching over all scales and image positions in the Gaussian scale space and locating extreme points on every image layer of every scale; a circle of radius 3 is drawn centered on the point under test, and if at least 12 of the 16 pixels on the circle have values greater than I_x + T_1 or smaller than I_x − T_1, the point is regarded as a key point; the position and scale of the key point are then accurately determined by fitting a three-dimensional quadratic function, where I_x is the pixel value of the detected point and T_1 is a pixel range threshold;
then, removing the points with low contrast and the points positioned at the edges of the image;
these two kinds of unstable points are removed by setting a contrast threshold and by means of the Hessian matrix;
finally, calculating the direction of the feature points;
the gradient-direction characteristics of the pixels in the neighborhood of each key point are used, which gives the image rotation invariance; sampling is performed in several neighborhood windows centered on the feature point, and the gradient directions of the neighborhood pixels are counted in a histogram; the gradient histogram covers 0 to 360 degrees, with one bin every 45 degrees, so the histogram has 8 directions, i.e. each feature point carries 8 items of gradient-direction information; the peak of the histogram represents the main direction of the neighborhood gradients at the feature point, i.e. the direction assigned to the feature point; meanwhile, a Gaussian function is used to smooth the histogram and reduce the influence of abrupt changes, and when the gradient-direction histogram contains another peak reaching 80% of the energy of the main peak, that direction is regarded as an auxiliary direction of the feature point; a feature point may thus be assigned multiple directions, one main direction and more than one auxiliary direction, to enhance the robustness of matching.
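A compact sketch of this detection stage is given below: a small Gaussian/DoG stack is built per the formulas above, and the FAST segment test is run on each Gaussian layer, restricted to the high-entropy mask of claim 2. Note that OpenCV's TYPE_9_16 detector tests 9 contiguous of the 16 circle pixels and is used here as the closest library analogue to the 12-of-16 criterion recited above; the scale count and thresholds are illustrative assumptions.

```python
# SIFT+FAST detection sketch (claim 4): Gaussian scale space, DoG
# layers for extremum localization, FAST corner test restricted to
# high-entropy regions. Parameters are illustrative assumptions.
import cv2
import numpy as np

def detect_sift_fast(img, mask, n_scales=5, sigma0=1.6, k=2 ** 0.5, t1=20):
    # L(x,y,σ) = G(x,y,σ) * I(x,y)
    f32 = img.astype(np.float32)
    gauss = [cv2.GaussianBlur(f32, (0, 0), sigma0 * k ** s)
             for s in range(n_scales)]
    # D(x,y,σ) = L(x,y,kσ) − L(x,y,σ); extrema are localized on these.
    dog = [gauss[s + 1] - gauss[s] for s in range(n_scales - 1)]

    # FAST segment test on the radius-3 Bresenham circle (16 pixels);
    # TYPE_9_16 is the closest OpenCV analogue to the 12-of-16 test.
    fast = cv2.FastFeatureDetector_create(
        threshold=t1, nonmaxSuppression=True,
        type=cv2.FastFeatureDetector_TYPE_9_16)

    keypoints = []
    for layer in gauss:
        for kp in fast.detect(cv2.convertScaleAbs(layer), None):
            x, y = int(kp.pt[0]), int(kp.pt[1])
            if mask[y, x]:               # only high-entropy regions
                keypoints.append(kp)
    return keypoints, dog                # dog feeds the claim-5 refinement
```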
5. The method of registration between an optical image and an infrared image of a circuit board according to claim 4, wherein, when removing points of low contrast and points located at image edges, the extreme points are refined to sub-pixel accuracy by fitting a three-dimensional quadratic function: the DoG function is substituted into its Taylor expansion, of which only terms up to second order are kept:
D(x)=D+(∂D/∂x)ᵀ·x+(1/2)·xᵀ·(∂²D/∂x²)·x
wherein the sub-pixel offset is obtained by setting the derivative of the expansion to zero:
x̂=−(∂²D/∂x²)⁻¹·(∂D/∂x)
wherein x̂ represents the offset relative to the interpolation center coordinates (x, y);
presetting a first contrast threshold, comparing the contrast of each extreme point against it, and taking extreme points whose contrast exceeds the first contrast threshold as candidate feature points; meanwhile, presetting a second contrast threshold larger than the first, and continuing to store extreme points whose contrast exceeds the second contrast threshold as candidate feature points;
acquiring the Hessian matrix H(x,y) of each candidate feature point:
H(x,y)=[D_xx(x,y) D_xy(x,y); D_xy(x,y) D_yy(x,y)]
Tr(H(x,y))=D_xx(x,y)+D_yy(x,y) represents the sum of the eigenvalues of the matrix H(x,y), and Det(H(x,y))=D_xx(x,y)·D_yy(x,y)−(D_xy(x,y))² represents its determinant, wherein the values D_xx(x,y), D_xy(x,y) and D_yy(x,y) are obtained by taking differences at the corresponding positions in the neighborhood of the candidate point; the principal curvatures of D are proportional to the eigenvalues of H(x,y); setting
γ=α/β
to represent the ratio of the largest eigenvalue α of H(x,y) to its smallest eigenvalue β, then
Tr(H(x,y))²/Det(H(x,y))=(γ+1)²/γ
so that, to detect whether the principal-curvature ratio is below a certain threshold T_2, it is only necessary to check whether
Tr(H(x,y))²/Det(H(x,y))>(T_2+1)²/T_2
if this inequality holds, the feature point is rejected; otherwise it is retained.
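The two stability filters of this claim reduce to a handful of finite differences on a DoG layer. The sketch below uses a single contrast threshold as a simplification of the two-threshold scheme recited above; the threshold values are illustrative assumptions, with DoG values taken as normalized to [0, 1].

```python
# Stability check per claim 5: contrast filter plus the Hessian edge
# test Tr(H)^2 / Det(H) < (T2+1)^2 / T2. A single contrast threshold
# stands in for the claim's two-threshold scheme; D is a 2-D DoG array.
def is_stable(D, x, y, contrast_thresh=0.03, t2=10.0):
    # Contrast filter: reject weak extrema.
    if abs(D[y, x]) < contrast_thresh:
        return False

    # Hessian entries from finite differences around the point.
    dxx = D[y, x + 1] + D[y, x - 1] - 2.0 * D[y, x]
    dyy = D[y + 1, x] + D[y - 1, x] - 2.0 * D[y, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1]
           - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0

    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:          # curvatures of opposite sign: reject
        return False
    # Edge test: keep only if the principal-curvature ratio is below T2.
    return tr * tr / det < (t2 + 1.0) ** 2 / t2
```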
6. The method of registration between an optical image and an infrared image of a circuit board according to claim 4, wherein the method of calculating the feature point direction comprises the following steps:
for the key points detected in the DoG pyramid, collecting the gradient and direction distribution characteristics of the pixels within a 3σ neighborhood window of the Gaussian pyramid image in which each key point lies; the modulus and direction of the gradient are as follows:
m(x,y)=√((L(x+1,y)−L(x−1,y))²+(L(x,y+1)−L(x,y−1))²)
θ(x,y)=tan⁻¹((L(x,y+1)−L(x,y−1))/(L(x+1,y)−L(x−1,y)))
wherein L(x,y) is the scale-space value at (x,y) of the image in which the key point lies, and L(x+1,y), L(x−1,y), L(x,y+1), L(x,y−1) are the scale-space values at the correspondingly shifted coordinates; m(x,y) is the gradient modulus and θ(x,y) is the gradient direction.
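These two formulas, together with the 8-bin histogram described in claim 4, give the orientation assignment. The sketch below accumulates gradient moduli into 45° bins around a key point; the neighborhood is assumed to lie inside the image, and the Gaussian weighting and histogram smoothing of the full method are omitted for brevity.

```python
# Orientation assignment sketch (claim 6): central-difference gradients
# of the scale image L, accumulated into an 8-bin (45°) histogram; the
# peak gives the main direction, bins at >=80% of it give auxiliaries.
import numpy as np

def orientation_histogram(L, x, y, radius):
    hist = np.zeros(8)
    for v in range(y - radius, y + radius + 1):
        for u in range(x - radius, x + radius + 1):
            dx = L[v, u + 1] - L[v, u - 1]          # L(x+1,y) − L(x−1,y)
            dy = L[v + 1, u] - L[v - 1, u]          # L(x,y+1) − L(x,y−1)
            theta = np.arctan2(dy, dx) % (2 * np.pi)
            hist[int(theta / (np.pi / 4)) % 8] += np.hypot(dx, dy)
    main = int(np.argmax(hist))
    peak = hist[main]
    auxiliary = [b for b in range(8)
                 if b != main and peak > 0 and hist[b] >= 0.8 * peak]
    return hist, main, auxiliary
```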
7. The method of registration between an optical image and an infrared image of a circuit board according to claim 1, wherein, in step four, the method of acquiring the 64-dimensional annular feature-vector descriptors of the two images comprises the following steps:
for any feature key point, a circle of radius 13 is drawn in the scale space centered on the key point; because pixels farther from the center carry smaller gradient-distribution weights, the region is divided into 8 concentric rings with radii of 2, 3, 4, 5, 6, 8, 10 and 13 pixels, forming 8 sub-regions; the gradients within each sub-region are accumulated into 8 direction bins, so there are 8×8 = 64 data in total, i.e. a 64-dimensional SIFT feature vector.
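A sketch of the ring descriptor follows. The orientation bins are taken in absolute image coordinates for brevity (in the full method they would be measured relative to the key point's main direction), and the neighborhood is assumed to lie inside the image; note that the 8×8 ring layout already yields 64 dimensions, with claim 1 additionally reciting a PCA dimension-reduction step.

```python
# Ring descriptor sketch (claim 7): 8 concentric rings of outer radii
# 2,3,4,5,6,8,10,13 around the key point, each holding an 8-bin
# gradient-orientation sub-histogram, flattened to 64 dimensions.
import numpy as np

RING_RADII = (2, 3, 4, 5, 6, 8, 10, 13)

def ring_descriptor(L, x, y):
    desc = np.zeros((8, 8))                       # rings × orientation bins
    for v in range(y - 13, y + 14):
        for u in range(x - 13, x + 14):
            r = np.hypot(u - x, v - y)
            if r > 13:
                continue
            ring = next(i for i, rad in enumerate(RING_RADII) if r <= rad)
            dx = L[v, u + 1] - L[v, u - 1]
            dy = L[v + 1, u] - L[v - 1, u]
            theta = np.arctan2(dy, dx) % (2 * np.pi)
            desc[ring, int(theta / (np.pi / 4)) % 8] += np.hypot(dx, dy)
    vec = desc.ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)    # L2-normalized, 64-d
```

Binning by concentric rings rather than by oriented square sub-blocks keeps the descriptor short and cheap to compare, consistent with the speed advantage claimed for the improved descriptor.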
8. The method of registration between an optical image and an infrared image of a circuit board according to claim 1, wherein, in step five, the matching method for performing initial matching using a FLANN algorithm in which Euclidean distance is combined with cosine similarity comprises the following steps:
after the SIFT feature vectors of the two images are generated, the Euclidean distance and cosine similarity between the feature vectors of the feature points of the two images are calculated, and the distance and direction between feature vectors serve as similarity criteria; feature points with minimum distance and with cosine similarity above a given threshold are selected as initial matching points; when the ratio of the Euclidean distance of the nearest neighbor to that of the second-nearest neighbor is smaller than the proportion threshold T_3 = 0.77, the pair is determined to be a correct match and wrong matches are removed; the matching points in the visible light image and the infrared image are then connected with lines, thereby achieving image registration.
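A sketch of this matching stage is given below: a FLANN k=2 search under Euclidean distance, the 0.77 ratio test, an additional cosine-similarity gate, and RANSAC verification. The cosine threshold is an illustrative assumption, since the claim leaves that value open.

```python
# Matching sketch (claim 8): FLANN nearest/second-nearest search, the
# T3 = 0.77 ratio test, a cosine-similarity gate (threshold assumed),
# and RANSAC removal of wrong matches.
import cv2
import numpy as np

def match_and_verify(kp1, des1, kp2, des2, ratio=0.77, cos_thresh=0.9):
    des1, des2 = np.float32(des1), np.float32(des2)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))

    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        a, b = des1[m.queryIdx], des2[m.trainIdx]
        cos_sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if m.distance < ratio * n.distance and cos_sim > cos_thresh:
            good.append(m)

    # RANSAC: estimate a homography and keep only the inlier matches
    # (assumes at least four matches pass the two gates above).
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, [m for m, ok in zip(good, inliers.ravel()) if ok]
```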
9. A registration device between an optical image and an infrared image of a circuit board, the registration device comprising:
the acquisition module is used for acquiring an infrared image and a visible light image of the circuit board, the visible light image being the optical image;
the entropy region distinguishing module is used for removing the low-entropy regions of the visible light image and of the infrared image according to their respective image information entropies, reserving the high-entropy regions for subsequent feature point detection;
the construction module is used for constructing a Gaussian scale space for the high-entropy area and establishing an image Gaussian pyramid and a Gaussian differential pyramid;
the characteristic point screening module is used for acquiring extreme points in different scale spaces in the Gaussian differential pyramid by using a FAST+SIFT combination algorithm, and accurately positioning and screening the characteristic points according to the extreme points;
the removing module is used for screening and removing unstable points by adopting a threshold method and a Hessian matrix method; including points of low contrast and points at the edges of the image;
the characteristic point direction calculation module is used for calculating and determining the feature point direction and constructing the 64-dimensional annular descriptor of each key point;
and the key point matching module is used for carrying out key point matching using the Euclidean distance and cosine similarity between vectors as measurement indexes, applying the fast approximate nearest-neighbor search FLANN, and eliminating mismatches with the RANSAC (random sample consensus) algorithm.
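For readers mapping the device claim onto software, the modules above correspond naturally to the methods of a single class. The skeleton below is a structural illustration only; the method names are hypothetical, and the bodies correspond to the sketches given under claims 2 to 8.

```python
# Structural mapping of claim 9's modules onto a class; names are
# hypothetical and the bodies correspond to the earlier sketches.
class CircuitBoardRegistrationDevice:
    def acquire(self, visible_path, infrared_path):
        """Acquisition module: load the optical and infrared images."""
        raise NotImplementedError

    def split_entropy_regions(self, img):
        """Entropy region distinguishing module: keep high-entropy windows."""
        raise NotImplementedError

    def build_scale_space(self, img):
        """Construction module: Gaussian pyramid and DoG pyramid."""
        raise NotImplementedError

    def detect_and_filter(self, gauss, dog, mask):
        """Feature point screening and removing modules: FAST+SIFT
        extrema, contrast threshold, Hessian edge rejection."""
        raise NotImplementedError

    def describe(self, L, keypoints):
        """Direction calculation module: orientations and 64-d ring descriptors."""
        raise NotImplementedError

    def match(self, kp1, des1, kp2, des2):
        """Key point matching module: FLANN + cosine gate + RANSAC."""
        raise NotImplementedError
```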
10. The registration device between an optical image and an infrared image of a circuit board according to claim 9, wherein the registration device is further configured to carry out the registration method between an optical image and an infrared image of a circuit board according to any one of claims 1 to 8.
CN202310060353.XA 2023-01-18 2023-01-18 Registration method and device between optical image and infrared image of circuit board Pending CN116433733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310060353.XA CN116433733A (en) 2023-01-18 2023-01-18 Registration method and device between optical image and infrared image of circuit board

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310060353.XA CN116433733A (en) 2023-01-18 2023-01-18 Registration method and device between optical image and infrared image of circuit board

Publications (1)

Publication Number Publication Date
CN116433733A true CN116433733A (en) 2023-07-14

Family

ID=87093205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310060353.XA Pending CN116433733A (en) 2023-01-18 2023-01-18 Registration method and device between optical image and infrared image of circuit board

Country Status (1)

Country Link
CN (1) CN116433733A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703909A (en) * 2023-08-07 2023-09-05 威海海泰电子有限公司 Intelligent detection method for production quality of power adapter
CN116703909B (en) * 2023-08-07 2023-10-27 威海海泰电子有限公司 Intelligent detection method for production quality of power adapter
CN116758086A (en) * 2023-08-21 2023-09-15 山东聚宁机械有限公司 Bulldozer part quality detection method based on image data
CN116758086B (en) * 2023-08-21 2023-10-20 山东聚宁机械有限公司 Bulldozer part quality detection method based on image data
CN117635676A (en) * 2023-11-08 2024-03-01 国网上海市电力公司 Transformer infrared and visible light image registration method
CN117830301A (en) * 2024-03-04 2024-04-05 青岛正大正电力环保设备有限公司 Slag dragging region detection method based on infrared and visible light fusion characteristics
CN117830301B (en) * 2024-03-04 2024-05-14 青岛正大正电力环保设备有限公司 Slag dragging region detection method based on infrared and visible light fusion characteristics

Similar Documents

Publication Publication Date Title
CN116433733A (en) Registration method and device between optical image and infrared image of circuit board
US8233716B2 (en) System and method for finding stable keypoints in a picture image using localized scale space properties
CN111340701B (en) Circuit board image splicing method for screening matching points based on clustering method
CN105957082A (en) Printing quality on-line monitoring method based on area-array camera
CN107967482A (en) Icon-based programming method and device
CN110490913B (en) Image matching method based on feature description operator of corner and single line segment grouping
CN106650580B (en) Goods shelf quick counting method based on image processing
CN108537832B (en) Image registration method and image processing system based on local invariant gray feature
CN112085772A (en) Remote sensing image registration method and device
CN110222661A (en) It is a kind of for motion estimate and the feature extracting method of tracking
CN112288758A (en) Infrared and visible light image registration method for power equipment
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN111127353A (en) High-dynamic image ghost removing method based on block registration and matching
CN104966283A (en) Imaging layered registering method
CN110910497B (en) Method and system for realizing augmented reality map
CN116977316A (en) Full-field detection and quantitative evaluation method for damage defects of complex-shape component
CN117078726A (en) Different spectrum image registration method based on edge extraction
CN116612165A (en) Registration method for large-view-angle difference SAR image
US11645827B2 (en) Detection method and device for assembly body multi-view change based on feature matching
CN111768436B (en) Improved image feature block registration method based on fast-RCNN
CN114964206A (en) Monocular vision odometer target pose detection method
Wang et al. Application of improved SURF algorithm in real scene matching and recognition
Dong et al. Affine template matching based on multi-scale dense structure principal direction
Lin et al. A cost-effective automatic dial meter reader using a lightweight convolutional neural network
Kong et al. Research on Canny Edge Feature Detection Technology of Color Image Based on Vector Properties

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination