CN113723428A - Image feature matching method, device and system and PCB visual detection equipment - Google Patents

Image feature matching method, device and system and PCB visual detection equipment

Info

Publication number
CN113723428A
Authority
CN
China
Prior art keywords
point
matching
target
image
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110955667.7A
Other languages
Chinese (zh)
Inventor
王京辉
贺小林
杨帆
黄伟
徐毛毛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Green Refrigeration Technology Center Co Ltd of Zhuhai
Zhuhai Gree Energy Saving Environmental Protection Refrigeration Technology Research Center Co Ltd
Original Assignee
Gree Green Refrigeration Technology Center Co Ltd of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Green Refrigeration Technology Center Co Ltd of Zhuhai filed Critical Gree Green Refrigeration Technology Center Co Ltd of Zhuhai
Priority to CN202110955667.7A
Publication of CN113723428A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30141 Printed circuit board [PCB]

Abstract

The invention relates to an image feature matching method, device and system and PCB visual detection equipment. The method comprises: acquiring a pre-input image to be matched and a standard image; extracting a target feature point set and a standard feature point set; extracting, for each target feature point and each standard feature point, a feature descriptor of a preset dimension, the preset dimension being less than 128; coarsely matching the target feature points against the standard feature points according to the target and standard feature descriptors, finding a matching point for each target feature point in the standard feature point set, and generating an initial matching point set; and eliminating mismatched points from the initial matching point set to obtain the matching data. Because the extracted feature descriptors have fewer than 128 dimensions and mismatches are eliminated at the end, the scheme reduces computational complexity and matching time while also lowering the mismatching rate, thereby ensuring image matching accuracy.

Description

Image feature matching method, device and system and PCB visual detection equipment
Technical Field
The invention relates to the technical field of image matching, in particular to an image feature matching method, device and system and PCB visual detection equipment.
Background
When a PCB is produced, the circuit traces, silkscreen printing and the like on its surface must be inspected. Manual visual inspection is fatiguing and leads to missed or erroneous detections, so its accuracy and efficiency are low.
In the prior art, the traditional SIFT image matching method is usually used for visual inspection of the PCB surface. However, every feature point extracted by traditional SIFT carries a 128-dimensional feature descriptor, whose computation during matching is complex and expensive, so matching takes a long time.
Therefore, how to reduce the computational complexity of the matching process, improve matching efficiency and reduce the mismatching rate is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an image feature matching method, apparatus and system and a PCB visual inspection device, so as to solve the prior-art problem that the complex, computation-heavy matching process leads to long matching times, while also addressing the fact that simply reducing the descriptor dimension produces large numbers of repeated feature points, and thus mismatches, when a PCB image contains many similar features.
In order to achieve the purpose, the invention adopts the following technical scheme:
An image feature matching method, comprising:
acquiring a pre-input image to be matched and a standard image;
extracting a target feature point set corresponding to the image to be matched and a standard feature point set corresponding to the standard image;
extracting a target feature descriptor of a preset dimension for each target feature point in the target feature point set and a standard feature descriptor of the preset dimension for each standard feature point in the standard feature point set, the preset dimension being less than 128;
coarsely matching the target feature points against the standard feature points according to the target feature descriptor of each target feature point and the standard feature descriptor of each standard feature point, finding the matching point of each target feature point in the standard feature point set, and generating an initial matching point set between the target feature point set and the standard feature point set;
and performing mismatched-point elimination on the initial matching point set to obtain matching data between the image to be matched and the standard image.
Further, in the above image feature matching method, performing mismatched-point elimination on the initial matching point set to obtain the matching data between the image to be matched and the standard image comprises:
taking the initial matching point set as the point set to be processed, and dividing all matching points in the point set to be processed into a preset number of parts to obtain that number of to-be-processed point subsets;
performing mismatched-point elimination on each to-be-processed point subset to obtain the interior points of each to-be-processed point subset;
gathering the interior points of all the to-be-processed point subsets to obtain the current interior point set of the image to be matched;
incrementing the pre-recorded current iteration count by one, and judging whether the updated current iteration count has reached the target iteration count determined by pre-training;
if the current iteration count has reached the target iteration count, computing a fundamental matrix from the current interior point set by the eight-point method as the matching data between the image to be matched and the standard image;
and if the current iteration count has not reached the target iteration count, taking the current interior point set as the point set to be processed and repeating the partitioning and mismatched-point elimination until the current iteration count reaches the target iteration count.
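The partition-and-iterate loop of the steps above can be sketched as a minimal Python skeleton. Here `reject` stands in for the per-subset mismatch elimination detailed below, and the part count and iteration count are assumed inputs rather than values fixed by the patent:

```python
def partition(matches, n_parts):
    """Split the match list into n_parts roughly equal contiguous subsets."""
    size = (len(matches) + n_parts - 1) // n_parts
    return [matches[i:i + size] for i in range(0, len(matches), size)]

def iterative_rejection(matches, n_parts, n_iters, reject):
    """Repeatedly partition the current interior point set and reject
    outliers within each subset until the target iteration count is
    reached; the surviving points form the final interior point set."""
    current = list(matches)
    for _ in range(n_iters):
        subsets = partition(current, n_parts)
        current = [p for s in subsets for p in reject(s)]
    return current
```

Partitioning before rejection keeps each RANSAC-style inner loop small, which is why the method iterates: each pass shrinks the point set that the next pass re-partitions.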
Further, in the above image feature matching method, performing mismatched-point elimination on each to-be-processed point subset to obtain the interior points of each to-be-processed point subset comprises:
randomly extracting a preset number of current matching point pairs from the to-be-processed point subset;
determining a current fundamental matrix from the current matching point pairs by the eight-point method;
determining, according to the current fundamental matrix and a preset error calculation rule, a transfer error value for each group of initial matching point pairs in the to-be-processed point subset that was not extracted;
judging whether the transfer error value of each group of initial matching point pairs is smaller than a preset error threshold;
if the transfer error value of an initial matching point pair is smaller than the preset error threshold, taking that initial matching point pair as an interior point pair;
if the transfer error value of an initial matching point pair is not smaller than the preset error threshold, rejecting that initial matching point pair;
gathering the interior point pairs of the to-be-processed point subset to obtain the current interior point subset of the to-be-processed point subset;
incrementing the pre-recorded extraction count by one, and judging whether the extraction count has reached a preset target extraction count;
if the extraction count has not reached the target extraction count, randomly extracting a preset number of current matching point pairs from the to-be-processed point subset again so as to re-determine the current interior point subset;
and if the extraction count has reached the target extraction count, comparing the numbers of interior points in the current interior point subsets obtained from each extraction, and taking the interior points of the current interior point subset with the largest number as the interior points of the to-be-processed point subset.
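A RANSAC-style inner loop like the one above fits a model from a random sample and keeps the largest consensus set. The patent samples eight pairs to fit a fundamental matrix; for brevity this sketch hypothesizes a 2-D translation from a single sampled pair (a stand-in model, not the patent's) and scores every pair, including the sampled one; the trial count and threshold are assumed values:

```python
import random

def ransac_translation(pairs, n_trials, thr):
    """Keep the largest set of pairs consistent with a translation
    hypothesized from one randomly sampled pair (RANSAC-style)."""
    best = []
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(n_trials):
        p, q = rng.choice(pairs)
        tx, ty = q[0] - p[0], q[1] - p[1]        # hypothesized translation
        inliers = [(a, b) for a, b in pairs
                   if abs(b[0] - (a[0] + tx)) + abs(b[1] - (a[1] + ty)) < thr]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Whenever a genuine inlier is sampled, every other inlier agrees with the hypothesized translation and the consensus set jumps to full size, which is exactly why keeping the largest interior point subset over many extractions rejects mismatches.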
Further, in the above image feature matching method, the step of determining the target iteration count comprises:
performing feature extraction and coarse matching on a pre-input first training image and second training image to obtain a training matching point set between the first training image and the second training image;
performing mismatched-point elimination on the training matching point set to obtain the training interior point set of the training matching point set;
incrementing the pre-recorded training iteration count by one, and computing a fundamental matrix from the training interior point set by the eight-point method as the training output data;
judging whether the training output data conforms to the training matching data between the first training image and the second training image;
if the training output data conforms to the training matching data, taking the updated training iteration count as the target iteration count;
if the training output data does not conform to the training matching data, taking the training interior point set as the training matching point set and performing mismatched-point elimination again until the training output data conforms to the training matching data;
wherein the first training image and the second training image are images of the same object taken from different angles.
Further, in the above image feature matching method, coarsely matching the target feature points against the standard feature points according to the target feature descriptor of each target feature point and the standard feature descriptor of each standard feature point, finding the matching point of each target feature point in the standard feature point set, and generating the initial matching point set between the target feature point set and the standard feature point set comprises:
calculating, according to the target feature descriptor of each target feature point, the standard feature descriptor of each standard feature point and the Euclidean distance calculation rule, the Euclidean distance between each standard feature point and the target feature point, and determining the nearest and second-nearest Euclidean distances of the target feature point;
calculating, for each target feature point, the ratio of its nearest Euclidean distance to its second-nearest Euclidean distance;
judging whether the distance ratio of each target feature point is smaller than a preset threshold;
if the distance ratio of a target feature point is smaller than the preset threshold, taking the standard feature point at the nearest Euclidean distance as the matching point of that target feature point;
and gathering all the target feature points and their matching points to obtain the initial matching point set between the target feature point set and the standard feature point set.
Further, in the above image feature matching method, before the calculating of the Euclidean distance between each standard feature point and the target feature point and the determining of the nearest and second-nearest Euclidean distances of the target feature point, the method further comprises:
creating, from the standard feature point set and the target feature point set, a matching tree for each target feature point in the target feature point set;
correspondingly, the calculating of the Euclidean distance between each standard feature point and the target feature point and the determining of the nearest and second-nearest Euclidean distances of the target feature point comprises:
calculating, based on the matching tree of each target feature point and according to the target feature descriptor of each target feature point, the standard feature descriptor of each standard feature point and the Euclidean distance calculation rule, the Euclidean distance between each standard feature point and the target feature point, and determining the nearest and second-nearest Euclidean distances of the target feature point.
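The "matching tree" can be realized as a k-d tree over the standard descriptors, so the nearest and second-nearest Euclidean distances are found without scanning every standard feature point. The patent does not fix the tree type, so this is an assumed sketch:

```python
import math

def build_tree(points, depth=0):
    """Recursively build a k-d 'matching tree' over descriptor points."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_tree(points[:mid], depth + 1),
            build_tree(points[mid + 1:], depth + 1))

def two_nearest(node, query, best=None):
    """Return the two nearest (distance, point) entries with branch pruning."""
    if best is None:
        best = []
    if node is None:
        return best
    point, axis, left, right = node
    best.append((math.dist(point, query), point))
    best.sort(key=lambda t: t[0])
    del best[2:]                                   # keep only the best two
    diff = query[axis] - point[axis]
    near, far = (left, right) if diff <= 0 else (right, left)
    two_nearest(near, query, best)
    # Descend the far branch only if the splitting plane is closer than
    # the current second-nearest candidate.
    if len(best) < 2 or abs(diff) < best[-1][0]:
        two_nearest(far, query, best)
    return best
```

The pruning test is what gives the tree its speed-up over brute force: whole subtrees are skipped when the splitting plane lies farther away than the current second-nearest distance.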
The present invention also provides an image feature matching apparatus, comprising:
an acquisition module for acquiring a pre-input image to be matched and a standard image;
a first extraction module for extracting a target feature point set corresponding to the image to be matched and a standard feature point set corresponding to the standard image;
a second extraction module for extracting a target feature descriptor of a preset dimension for each target feature point in the target feature point set and a standard feature descriptor of the preset dimension for each standard feature point in the standard feature point set, the preset dimension being less than 128;
a coarse matching module for coarsely matching the target feature points against the standard feature points according to the target feature descriptor of each target feature point and the standard feature descriptor of each standard feature point, finding the matching point of each target feature point in the standard feature point set, and generating an initial matching point set between the target feature point set and the standard feature point set;
and an elimination module for performing mismatched-point elimination on the initial matching point set to obtain matching data between the image to be matched and the standard image.
Further, in the above image feature matching apparatus, the elimination module comprises a partitioning unit, a processing unit, a first gathering unit, an iteration judging unit, a matrix calculation unit and a first determination unit;
the partitioning unit is used for taking the initial matching point set as the point set to be processed and dividing all matching point pairs in the point set to be processed into a preset number of parts to obtain that number of to-be-processed point subsets;
the processing unit is used for performing mismatched-point elimination on each to-be-processed point subset to obtain the interior points of each to-be-processed point subset;
the first gathering unit is used for gathering the interior points of all the to-be-processed point subsets to obtain the current interior point set of the image to be matched;
the iteration judging unit is used for incrementing the pre-recorded current iteration count by one and judging whether the updated current iteration count has reached the target iteration count determined by pre-training;
the matrix calculation unit is used for computing, if the current iteration count has reached the target iteration count, a fundamental matrix from the current interior point set by the eight-point method as the matching data between the image to be matched and the standard image;
and the first determination unit is used for taking, if the current iteration count has not reached the target iteration count, the current interior point set as the point set to be processed and repeating the partitioning and mismatched-point elimination until the current iteration count reaches the target iteration count.
Further, in the above image feature matching apparatus, the coarse matching module comprises a distance calculation unit, a ratio calculation unit, a ratio judging unit, a second determination unit and a second gathering unit;
the distance calculation unit is used for calculating, according to the target feature descriptor of each target feature point, the standard feature descriptor of each standard feature point and the Euclidean distance calculation rule, the Euclidean distance between each standard feature point and the target feature point, and for determining the nearest and second-nearest Euclidean distances of the target feature point;
the ratio calculation unit is used for calculating, for each target feature point, the ratio of its nearest Euclidean distance to its second-nearest Euclidean distance;
the ratio judging unit is used for judging whether the distance ratio of each target feature point is smaller than a preset threshold;
the second determination unit is used for taking, if the distance ratio of a target feature point is smaller than the preset threshold, the standard feature point at the nearest Euclidean distance as the matching point of that target feature point;
and the second gathering unit is used for gathering all the target feature points and their matching points to obtain the initial matching point set between the target feature point set and the standard feature point set.
Further, in the above image feature matching apparatus, the coarse matching module further comprises a creating unit;
the creating unit is used for creating, from the standard feature point set and the target feature point set, a matching tree for each target feature point in the target feature point set;
correspondingly, the distance calculation unit is specifically used for calculating, based on the matching tree of each target feature point and according to the target feature descriptor of each target feature point, the standard feature descriptor of each standard feature point and the Euclidean distance calculation rule, the Euclidean distance between each standard feature point and the target feature point, and for determining the nearest and second-nearest Euclidean distances of the target feature point.
The present invention also provides an image feature matching system, comprising a processor and a memory connected to the processor;
the memory is used for storing a computer program at least operable to execute the above image feature matching method;
and the processor is used for calling and executing the computer program.
The invention also provides a PCB visual detection device, comprising an image collector, a data analyzer and the above image feature matching system;
the image collector and the data analyzer are each connected to the image feature matching system;
the image collector is used for collecting an image of the PCB surface and inputting it into the image feature matching system, so that the image feature matching system outputs the matching data corresponding to the PCB surface image;
and the data analyzer is used for analyzing the matching data output by the image feature matching system and determining the detection result for the PCB.
In summary, an image feature matching method, device and system and PCB visual detection equipment are provided. The method acquires a pre-input image to be matched and a standard image; extracts the target feature point set of the image to be matched and the standard feature point set of the standard image; extracts a feature descriptor of a preset dimension, less than 128, for each target feature point and each standard feature point; coarsely matches the target feature points against the standard feature points according to the descriptors, finds the matching point of each target feature point in the standard feature point set, and generates an initial matching point set; and finally eliminates mismatched points from the initial matching point set to obtain the matching data between the image to be matched and the standard image. Because the extracted descriptors have fewer than 128 dimensions and mismatches are eliminated at the end, the technical scheme of the invention reduces computational complexity and matching time while also lowering the mismatching rate, thereby ensuring image matching accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart provided by one embodiment of an image feature matching method of the present invention;
FIG. 2 is a schematic diagram of the partitioning of the feature descriptors of FIG. 1;
FIG. 3 is a schematic structural diagram provided by an embodiment of the image feature matching apparatus of the present invention;
FIG. 4 is a schematic diagram of a structure provided by one embodiment of the image feature matching system of the present invention;
fig. 5 is a schematic structural diagram provided by an embodiment of the PCB board visual inspection apparatus of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
Fig. 1 is a flowchart provided by an embodiment of the image feature matching method of the present invention. As shown in fig. 1, the image feature matching method of the present embodiment specifically includes the following steps:
s101, obtaining a pre-input image to be matched and a standard image.
This embodiment first acquires the pre-input image to be matched and the standard image. The image to be matched is an image of the PCB surface collected by an image acquisition device, and the standard image is the standard image of the PCB from its design drawing.
S102, extracting a target feature point set corresponding to the image to be matched and a standard feature point set corresponding to the standard image.
Target feature points are extracted from the image to be matched with a suitable extraction technique to form the target feature point set of the image to be matched, and standard feature points are extracted from the standard image to form the standard feature point set of the standard image. SIFT is preferably used as the extraction technique; since SIFT and its specific feature point extraction steps are prior art, they are not described in detail in this embodiment.
S103, extracting a target feature descriptor with a preset dimension of each target feature point in the target feature point set and a standard feature descriptor with a preset dimension of each standard feature point in the standard feature point set.
A target feature descriptor of a preset dimension is extracted from the image to be matched for each target feature point in the target feature point set, and a standard feature descriptor of the preset dimension is extracted from the standard image for each standard feature point in the standard feature point set. The preset dimension is less than 128, which reduces the computational complexity of the subsequent descriptor-based matching and speeds up matching. In this embodiment the preset dimension is preferably 64. Fig. 2 is a schematic diagram of the region division used by the 64-dimensional feature descriptor, which is extracted as follows:
An 8 × 8 square pixel region around the feature point (the inner black-framed square in fig. 2; the feature point is the black dot at its center) is divided 2 × 2 into 4 small square sub-regions (the 4 a-regions in fig. 2). Each a-region contributes 8 directions of gradient information, and accumulating the gradients over these directions yields a 4 × 8 = 32-dimensional descriptor.
To supplement this 32-dimensional feature point descriptor, 4 trapezoidal pixel regions close to the feature point (the 4 b-regions in fig. 2) are selected from the 16 × 16 square pixel region (the outer black-framed square in fig. 2); processing the 8 directions of gradient information of all the b-regions with the same gradient accumulation method yields another 4 × 8 = 32-dimensional descriptor.
Concatenating these two descriptors gives a new 32 + 32 = 64-dimensional descriptor.
The 64-dimensional descriptor adopted in this embodiment is thus formed by combining two descriptors with different geometric shapes and different numbers of pixels. To eliminate the difference between the two parts, they need to be balanced; and to reflect the influence of distance on weight, the gradients are weighted by their distance from the feature point. Compared with the original descriptor, the new descriptor describes the image information more comprehensively and reflects the characteristics of the keypoints more effectively.
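The accumulation and concatenation just described can be sketched as follows. This is a simplified stand-in: real SIFT-style accumulation also interpolates across bins, and the balancing/distance weighting the embodiment mentions is approximated here by a plain L2 normalization of the concatenated vector:

```python
def accumulate_histogram(samples, n_regions=4, n_bins=8):
    """Accumulate gradient magnitudes into per-region orientation histograms.
    samples: iterable of (region_index, orientation_bin, magnitude)."""
    hist = [[0.0] * n_bins for _ in range(n_regions)]
    for region, ob, mag in samples:
        hist[region][ob] += mag
    return [v for region in hist for v in region]   # flatten to 4 * 8 = 32 dims

def combine_descriptor(square_part, trapezoid_part):
    """Concatenate the 32-dim square-region and 32-dim trapezoid-region
    histograms into the 64-dim descriptor, then L2-normalize so the two
    differently shaped parts are balanced."""
    desc = square_part + trapezoid_part             # 32 + 32 = 64 dims
    norm = sum(v * v for v in desc) ** 0.5 or 1.0
    return [v / norm for v in desc]
```

Keeping the two 32-dimensional halves in a single normalized vector is what lets the later Euclidean-distance matching treat the combined descriptor as one 64-dimensional feature.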
S104, according to the target feature descriptors corresponding to the target feature points and the standard feature descriptors corresponding to the standard feature points, rough matching is conducted on the target feature points and the standard feature points, matching points corresponding to the target feature points are matched from the standard feature point set, and an initial matching point set between the target feature point set and the standard feature point set is generated.
In this embodiment, the k-nearest-neighbour (KNN) algorithm may be used to coarsely match the target feature points against the standard feature points according to the target feature descriptor of each target feature point and the standard feature descriptor of each standard feature point, so as to find the matching point of each target feature point in the standard feature point set. Each target feature point and its matching point are combined into a matching point pair, and all matching point pairs are gathered to generate the initial matching point set between the target feature point set and the standard feature point set.
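The coarse matching described above (nearest and second-nearest Euclidean distances plus a ratio test) can be sketched as follows; descriptors are plain tuples, and the 0.7 threshold is an assumed value, since the embodiment only calls it a preset threshold:

```python
import math

def ratio_match(target_desc, standard_desc, ratio=0.7):
    """For each target descriptor, find the nearest and second-nearest
    standard descriptors by Euclidean distance and accept the nearest
    only when d1 / d2 < ratio (Lowe-style ratio test)."""
    matches = {}
    for i, t in enumerate(target_desc):
        dists = sorted((math.dist(t, s), j) for j, s in enumerate(standard_desc))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < ratio:
            matches[i] = j1                 # accepted matching point pair
    return matches
```

The ratio test is what suppresses the ambiguous matches a low-dimensional descriptor produces on a repetitive PCB image: when two standard points are nearly equidistant from a target point, the ratio approaches 1 and the match is discarded.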
And S105, carrying out error matching point elimination processing on the initial matching point set to obtain matching data between the image to be matched and the standard image.
Since the dimensions of the target feature descriptor and the standard feature descriptor in this embodiment are both less than 128, the image mismatching rate may increase. Therefore, after the initial matching point set between the target feature point set and the standard feature point set is determined, the initial matching point set needs to be subjected to mismatching point elimination processing to obtain the matching data between the image to be matched and the standard image; the RANSAC algorithm is preferably used for this processing. In this way, reducing the dimension of the feature descriptors lowers the computational complexity of image matching and improves matching efficiency, while rejecting the mismatching points lowers the mismatching rate and guarantees the accuracy of image matching.
The image feature matching method of this embodiment acquires a pre-input image to be matched and a standard image; extracts a target feature point set corresponding to the image to be matched and a standard feature point set corresponding to the standard image; extracts a target feature descriptor with a preset dimension for each target feature point in the target feature point set and a standard feature descriptor with a preset dimension for each standard feature point in the standard feature point set, where the preset dimension is less than 128; performs rough matching between the target feature points and the standard feature points according to the target feature descriptor corresponding to each target feature point and the standard feature descriptor corresponding to each standard feature point, matches the matching point corresponding to each target feature point from the standard feature point set, and generates an initial matching point set between the target feature point set and the standard feature point set; and performs mismatching point elimination processing on the initial matching point set to obtain matching data between the image to be matched and the standard image. With this technical scheme, the dimension of the extracted feature descriptors is less than 128, which reduces the computational complexity and the matching time; the final elimination of mismatches reduces the mismatching rate while keeping the computational complexity and matching time low, ensuring image matching accuracy.
Further, in the image feature matching method of this embodiment, step S104 specifically includes the following steps:
firstly, according to a target feature descriptor corresponding to each target feature point, a standard feature descriptor corresponding to each standard feature point and an Euclidean distance calculation rule, calculating the Euclidean distance between each standard feature point and the target feature point, and determining the nearest Euclidean distance and the next nearest Euclidean distance corresponding to the target feature point.
That is, for each target feature point, the Euclidean distance between the target feature point and every standard feature point is calculated according to the Euclidean distance calculation rule from the pre-extracted target feature descriptor corresponding to that target feature point and the standard feature descriptor corresponding to each standard feature point; the nearest and next-nearest Euclidean distances are then extracted from these distances, thereby determining the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to each target feature point.
Secondly, calculating the distance ratio of the nearest Euclidean distance and the next nearest Euclidean distance corresponding to each target feature point.
In this embodiment, after the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to each target feature point have been determined, the ratio between them is calculated, thereby determining the distance ratio corresponding to each target feature point.
Thirdly, judging whether the distance ratio corresponding to each target feature point is smaller than a preset threshold value.
In this embodiment, it needs to be determined whether the distance ratio corresponding to each target feature point is smaller than a preset threshold, where the preset threshold is preferably set to 2.
And fourthly, if the distance ratio corresponding to the target characteristic point is smaller than a preset threshold value, taking the standard characteristic point corresponding to the nearest Euclidean distance as a matching point corresponding to the target characteristic point.
If the distance ratio corresponding to the target feature point is judged to be smaller than the preset threshold, a reliable match exists between the target feature point and the standard feature point whose distance to it is the nearest Euclidean distance; therefore, the standard feature point corresponding to the nearest Euclidean distance is taken as the matching point corresponding to the target feature point. In this way, the matching points corresponding to all the target feature points can be determined.
If the distance ratio corresponding to the target feature point is judged to be not smaller than the preset threshold, the standard feature point with the distance to the target feature point being the nearest Euclidean distance is not the matching point of the target feature point.
Fifthly, collecting all the target characteristic points and the matching points corresponding to all the target characteristic points to obtain an initial matching point set between the target characteristic point set and the standard characteristic point set.
After determining the matching points corresponding to all the target feature points through the above steps, each target feature point and the corresponding matching point thereof may be combined into a matching point pair, and all the matching point pairs are collected to generate an initial matching point set between the target feature point set and the standard feature point set.
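The ratio-test coarse matching of steps one to five above can be sketched as follows. This is a minimal illustration only: the function and parameter names are hypothetical, and the classic Lowe ratio of 0.8 is used here in place of the embodiment's preset threshold of 2, which accepts nearly all candidates.

```python
import numpy as np

def coarse_match(target_desc, standard_desc, ratio_thresh=0.8):
    """Ratio-test coarse matching: for each target descriptor, find the
    nearest and next-nearest standard descriptors by Euclidean distance and
    accept the nearest one only when the distance ratio is below ratio_thresh."""
    matches = []
    standard = np.asarray(standard_desc, float)
    for i, d in enumerate(np.asarray(target_desc, float)):
        # Euclidean distance from this target descriptor to every standard one
        dists = np.linalg.norm(standard - d, axis=1)
        order = np.argsort(dists)
        nearest, next_nearest = dists[order[0]], dists[order[1]]
        if nearest < ratio_thresh * next_nearest:
            matches.append((i, int(order[0])))  # (target index, matching point index)
    return matches
```

Each returned tuple is one matching point pair; collecting all of them yields the initial matching point set.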
Further, step S104 in this embodiment further includes the following steps:
and creating a matching tree corresponding to each target feature point in the target feature point set according to the standard feature point set and the target feature point set.
In this embodiment, a matching tree corresponding to each target feature point in the target feature point set can be created from the standard feature point set and the target feature point set using the KD-Tree technique, so that when the nearest and next-nearest Euclidean distances corresponding to a target feature point are determined, a KD-Tree combined with best-bin-first (BBF) search can be used to find them. Building the matching tree speeds up the search during matching, and the search strategy can be switched automatically according to the difference between the standard feature point set and the target feature point set to reduce search time and further improve image matching efficiency.
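As a sketch of this step, SciPy's `cKDTree` can stand in for the KD-Tree with BBF search described above; the embodiment's own tree construction may differ, and this only illustrates retrieving the nearest and next-nearest Euclidean distances in a single query.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_two_distances(target_desc, standard_desc):
    # Build the tree once over the standard descriptors, then query the two
    # smallest Euclidean distances (and indices) for every target descriptor.
    tree = cKDTree(standard_desc)
    dists, indices = tree.query(target_desc, k=2)
    return dists, indices  # each row: nearest and next-nearest
```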
Further, in the image feature matching method of this embodiment, step S105 specifically includes the following steps:
firstly, taking an initial matching point set as a point set to be processed, dividing all matching point pairs in the point set to be processed into preset parts, and obtaining a preset part of point subset to be processed.
In this embodiment, an initial matching point set between the generated target feature point set and the standard feature point set is used as a to-be-processed point set, all matching point pairs in the to-be-processed point set are divided into a preset number of parts, and a set of each matching point pair is used as a to-be-processed point subset, so that the preset number of parts of the to-be-processed point subset is obtained.
And secondly, performing mismatching point elimination processing on each to-be-processed point subset to obtain an inner point corresponding to each to-be-processed point subset.
In this embodiment, the RANSAC algorithm may be used to perform the mismatch point elimination processing on each to-be-processed point subset, so as to obtain an interior point corresponding to each to-be-processed point subset. The method comprises the following specific steps:
(1) and randomly extracting a preset number of current matching point pairs from the point subset to be processed.
A preset number of current matching point pairs are randomly extracted from the subset of points to be processed by using the RANSAC algorithm, wherein the preset number in the embodiment is preferably 4.
(2) And determining the current basic matrix according to the current matching point pair based on an eight-point method.
The current basis matrix is calculated using the eight-point method from the randomly extracted current matching point pairs. The eight-point method is prior art, so a person skilled in the art can calculate the current basis matrix from the preset number of current matching point pairs accordingly; this embodiment does not describe it in detail.
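The eight-point method referenced here can be sketched as follows. This is an unnormalized textbook version, not the embodiment's exact implementation; note that it requires at least eight correspondences, whereas the sampling step above draws four pairs, so the sample size would have to be chosen accordingly.

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Unnormalized eight-point estimate of a basis (fundamental) matrix F
    satisfying x2^T F x1 = 0 for each correspondence. pts1, pts2: (N, 2), N >= 8."""
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    # each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([x2 * x1, x2 * y1, x2,
                         y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector of A, reshaped to 3x3
    U, S, Vt = np.linalg.svd(F)
    S[-1] = 0.0                     # enforce rank 2
    return U @ np.diag(S) @ Vt
```

In practice the coordinates are usually normalized (Hartley's method) before building `A` for numerical stability.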
(3) And determining the conversion error value of each group of initial matching point pairs which are not extracted in the point subset to be processed according to the current basic matrix and a preset error calculation rule.
And after the current basic matrix is calculated, calculating the conversion error value of each group of initial matching point pairs which are not extracted in the point subset to be processed according to the current basic matrix and a preset error calculation rule. The preset error calculation rule is as follows:
E(x1, x2, H) = d(x1, H^(-1) x2)^2 + d(x2, H x1)^2
where x1 and x2 are the initial matching point pair, H is the current basis matrix, and d(·, ·) denotes the Euclidean distance.
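Under the assumption that d(·, ·) is the Euclidean distance after dehomogenization and that H maps points of the first image to the second, the preset error calculation rule can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def transfer_error(x1, x2, H):
    # E(x1, x2, H) = d(x1, H^-1 x2)^2 + d(x2, H x1)^2
    def apply(M, p):
        q = M @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]          # back to inhomogeneous coordinates
    e1 = float(np.sum((np.asarray(x1, float) - apply(np.linalg.inv(H), x2)) ** 2))
    e2 = float(np.sum((np.asarray(x2, float) - apply(H, x1)) ** 2))
    return e1 + e2
```

A matching point pair is kept as an interior point pair when this value is below the preset error threshold.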
(4) And judging whether the conversion error value of each group of initial matching point pairs is smaller than a preset error threshold value.
(5) And if the conversion error value of the initial matching point pair is smaller than the preset error threshold value, taking the initial matching point pair as an inner point pair.
(6) And if the conversion error of the initial matching point pair is not less than the preset error threshold, rejecting the initial matching point pair.
(7) And collecting the interior point pairs corresponding to the point subsets to be processed to obtain the current interior point subsets corresponding to the point subsets to be processed.
After the conversion error value of each group of initial matching point pairs is judged, a plurality of inner point pairs can be obtained, the inner point pairs are inner point pairs corresponding to the current to-be-processed point subset, and the inner point pairs are collected, so that the current inner point subset corresponding to the to-be-processed point subset can be obtained.
(8) And adding one to the pre-recorded extraction times, and judging whether the extraction times reach the preset target extraction times.
In this embodiment, after the conversion error value of each group of initial matching point pairs in the current to-be-processed point subset has been judged, the pre-recorded extraction count is incremented by one, and it is then judged whether the extraction count has reached the preset target extraction count. The target extraction count is the maximum number of random sampling draws of the RANSAC method, set via a passed-in parameter.
(9) And if the extraction frequency does not reach the target extraction frequency, randomly extracting a preset number of current matching point pairs from the to-be-processed point subset again, and thus re-determining the current internal point subset.
If the extraction count has not reached the target extraction count, a preset number of current matching point pairs are randomly extracted from the to-be-processed point subset again, and the current interior point subset is then re-determined according to the newly extracted current matching point pairs.
(10) And if the extraction frequency reaches the target extraction frequency, comparing the quantity of the inner points in the current inner point subset corresponding to each extraction, and taking the inner points in the current inner point subset with the largest quantity of the inner point pairs as the inner points corresponding to the point subset to be processed.
And if the extraction frequency reaches the target extraction frequency, comparing the number of the interior point pairs in the current interior point subset determined by each extraction, and taking the interior point in the current interior point subset with the largest number of the interior point pairs during one extraction as the interior point corresponding to the current point subset to be processed.
And obtaining the interior points corresponding to each to-be-processed point subset by the method for obtaining the interior points corresponding to the to-be-processed point subsets.
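Steps (1) through (10) above amount to a per-subset RANSAC loop, sketched below with the model-fitting and error functions left as parameters. All names here are illustrative, and `sample_size` and `err_thresh` would be set to the embodiment's preferred values.

```python
import random

def inliers_for_subset(pairs, estimate_model, error_fn,
                       sample_size=4, err_thresh=3.0, max_draws=100):
    # Repeatedly: draw a sample (1), fit the current basis matrix (2),
    # score the non-sampled pairs by conversion error (3)-(7), and keep
    # the largest interior-point set seen within max_draws draws (8)-(10).
    best_inliers = []
    for _ in range(max_draws):
        sample = random.sample(pairs, sample_size)
        model = estimate_model(sample)
        rest = [p for p in pairs if p not in sample]
        inliers = [p for p in rest if error_fn(p[0], p[1], model) < err_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

A toy usage is to fit a pure-translation model: pairs whose displacement agrees with the consensus translation survive, while arbitrary outliers are rejected.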
Thirdly, the inner points in all the to-be-processed point subsets are collected to obtain a current inner point set corresponding to the to-be-matched image.
After the interior points corresponding to each to-be-processed point subset are obtained through the steps, all the interior point sets in all the to-be-processed point subsets are collected to obtain the current interior point set corresponding to the to-be-matched image.
And fourthly, adding one to the pre-recorded current iteration times, and judging whether the updated current iteration times reach the target iteration times determined by pre-training.
In this embodiment, after the current inner point set corresponding to the image to be matched is determined, the current iteration number recorded in advance is incremented by one, and then whether the current iteration number reaches the target iteration number determined by the pre-training is judged. In this embodiment, the target iteration number is a target iteration number determined when the first training image and the second training image are used for training according to the execution steps of the image matching method in this embodiment.
And fifthly, if the current iteration number reaches the target iteration number, calculating a basic matrix as matching data between the image to be matched and the standard image according to the current inner point set based on an eight-point method.
And if the current iteration times reach the target iteration times, calculating a basic matrix according to the corresponding inner points of the current inner point set corresponding to the image to be matched by using an eight-point method, and taking the basic matrix as matching data between the image to be matched and the standard image.
And sixthly, if the current iteration number does not reach the target iteration number, taking the current internal point set as a point set to be processed, and continuing to perform distribution and false matching point elimination processing on the point set to be processed until the current iteration number reaches the target iteration number.
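The overall elimination procedure of steps one to six can be sketched as an outer loop around the per-subset processing. This is a schematic only: `run_subset` and `fit_basis_matrix` are placeholders for the per-subset RANSAC elimination and the eight-point fit, and `split_count` stands for the embodiment's preset number of shares.

```python
def eliminate_mismatches(initial_pairs, run_subset, fit_basis_matrix,
                         split_count=4, target_iterations=3):
    pending = list(initial_pairs)
    for _ in range(target_iterations):
        # divide the pending pairs into split_count subsets (round-robin here)
        subsets = [pending[i::split_count] for i in range(split_count)]
        # per-subset mismatching point elimination, then pool the interior points
        pending = [p for s in subsets if s for p in run_subset(s)]
    # after the target iteration count, fit the basis matrix on the survivors
    return fit_basis_matrix(pending)
```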
Further, in the image feature matching method of this embodiment, the determining step of the target iteration number is as follows:
firstly, feature extraction and rough matching are carried out on a first training image and a second training image which are input in advance, and a training matching point set between the first training image and the second training image is obtained.
Before matching the image to be matched with the standard image, the image feature matching method needs to be trained with a first training image and a second training image to determine the target iteration count. Therefore, feature extraction and rough matching are first performed on the pre-input first training image and second training image to obtain a training matching point set between them. The steps of feature extraction and rough matching are the same as steps S102, S103 and S104, and are not repeated in this embodiment. The first training image and the second training image are images of the same object from different angles, where the different views may be obtained after slight translation and rotation.
And secondly, carrying out error matching point elimination processing on the training matching point set to obtain a training inner point set corresponding to the training matching point set.
And carrying out error matching point elimination processing on the training matching point set by using a RANSAC algorithm to obtain a training inner point set corresponding to the training matching point set. The method for determining the training interior point set is the same as the method for determining the current interior point set corresponding to the image to be matched in step S105, and this embodiment is not specifically described again.
And thirdly, adding one to the pre-recorded training iteration times, and calculating a basic matrix as training output data according to a training inner point set based on an eight-point method.
After the training inner point set corresponding to the training matching point set is determined, the embodiment needs to add one to the training iteration number recorded in advance, and calculates a basic matrix according to the inner point pairs in the training inner point set by using an eight-point method, and uses the basic matrix as training output data.
And fourthly, judging whether the training output data accords with training matching data between the first training image and the second training image or not according to the training output data.
And fifthly, if the training output data accords with the training matching data, taking the updated training iteration times as the target iteration times.
If the training output data is judged to conform to the training matching data, sufficiently accurate output data can be obtained after the mismatching point elimination processing has run for this number of training iterations; therefore, the updated training iteration count is taken as the target iteration count.
And sixthly, if the training output data does not accord with the training matching data, taking the training inner point set as the training matching point set to carry out error matching point elimination processing until the training output data accords with the training matching data.
If the training output data is judged not to conform to the training matching data, too many mismatching points remain after the mismatching point elimination processing of the current training iteration count; the obtained training interior point set is then taken as the training matching point set, and mismatching point elimination processing continues on it until the training output data conforms to the training matching data.
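The determination of the target iteration count can be condensed into a small loop, sketched below. This is schematic: `eliminate_once` and `conforms` are placeholders for one round of mismatching point elimination plus basis-matrix fitting, and for the conformity check against the training matching data.

```python
def determine_target_iterations(training_pairs, eliminate_once, conforms, max_rounds=100):
    iterations = 0
    pending = list(training_pairs)
    while iterations < max_rounds:
        pending = eliminate_once(pending)   # mismatching point elimination
        iterations += 1                     # training iteration count plus one
        if conforms(pending):               # output conforms to the matching data
            return iterations
    return iterations
```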
For completeness, the present application further provides an image feature matching apparatus corresponding to the image feature matching method provided by the embodiments of the invention.
Fig. 3 is a schematic structural diagram provided by an embodiment of the image feature matching apparatus of the present invention, and as shown in fig. 3, the image feature matching apparatus of the present embodiment includes: the device comprises an acquisition module 11, a first extraction module 12, a second extraction module 13, a rough matching module 14 and a rejection module 15.
The acquisition module 11 is configured to acquire a pre-input image to be matched and a standard image;
the first extraction module 12 is configured to extract a target feature point set corresponding to an image to be matched and a standard feature point set corresponding to a standard image;
a second extraction module 13, configured to extract a target feature descriptor with a preset dimension of each target feature point in the target feature point set and a standard feature descriptor with a preset dimension of each standard feature point in the standard feature point set; the preset dimension is less than 128 dimensions;
a rough matching module 14, configured to perform rough matching on the target feature points and the standard feature points according to the target feature descriptors corresponding to each target feature point and the standard feature descriptors corresponding to each standard feature point, match a matching point corresponding to each target feature point from the standard feature point set, and generate an initial matching point set between the target feature point set and the standard feature point set;
and the eliminating module 15 is configured to perform false matching point eliminating processing on the initial matching point set to obtain matching data between the image to be matched and the standard image.
The image feature matching device of the embodiment has the advantages that the dimension of the extracted feature descriptors is smaller than 128 dimensions, the calculation complexity and the matching time are reduced, finally, the mismatching is eliminated, the mismatching rate is reduced while the calculation complexity and the matching time are reduced, and the image matching accuracy is guaranteed.
Further, in the image feature matching apparatus of the present embodiment, the eliminating module 15 includes: the device comprises a distribution unit, a processing unit, a first set unit, an iteration judgment unit, a matrix calculation unit and a first determination unit.
The distribution unit is used for taking the initial matching point set as a point set to be processed, dividing all matching point pairs in the point set to be processed into preset parts, and obtaining a preset part of point subset to be processed;
the processing unit is used for carrying out mismatching point elimination processing on each to-be-processed point subset to obtain an inner point corresponding to each to-be-processed point subset;
the first set unit is used for setting the interior points in all the to-be-processed point subsets to obtain a current interior point set corresponding to the to-be-matched image;
the iteration judging unit is used for adding one to the current iteration times recorded in advance and judging whether the updated current iteration times reach the target iteration times determined by the pre-training;
the matrix calculation unit is used for calculating a basic matrix as matching data between the image to be matched and the standard image according to the current inner point set based on an eight-point method if the current iteration times reach the target iteration times;
and the first determining unit is used for taking the current internal point set as a point set to be processed if the current iteration number does not reach the target iteration number, and continuously performing distribution and mismatching point elimination processing on the point set to be processed until the current iteration number reaches the target iteration number.
Further, in the image feature matching apparatus of this embodiment, the processing unit is specifically configured to:
randomly extracting a preset number of current matching point pairs from the point subset to be processed;
determining a current basic matrix according to the current matching point pair based on an eight-point method;
determining a conversion error value of each group of initial matching point pairs which are not extracted in the point subset to be processed according to the current basic matrix and a preset error calculation rule;
judging whether the conversion error value of each group of initial matching point pairs is smaller than a preset error threshold value or not;
if the conversion error value of the initial matching point pair is smaller than a preset error threshold value, taking the initial matching point pair as an inner point pair;
if the conversion error of the initial matching point pair is not smaller than a preset error threshold value, rejecting the initial matching point pair;
the method comprises the steps of collecting interior point pairs corresponding to a point subset to be processed to obtain a current interior point subset corresponding to the point subset to be processed;
adding one to the pre-recorded extraction times, and judging whether the extraction times reach the preset target extraction times or not;
if the extraction frequency does not reach the target extraction frequency, randomly extracting a preset number of current matching point pairs from the to-be-processed point subset again so as to re-determine the current internal point subset;
and if the extraction frequency reaches the target extraction frequency, comparing the quantity of the inner points in the current inner point subset corresponding to each extraction, and taking the inner points in the current inner point subset with the largest quantity of the inner point pairs as the inner points corresponding to the point subset to be processed.
Further, in the image feature matching apparatus of this embodiment, the step of determining the target iteration count in the iteration determining unit includes:
performing feature extraction and rough matching on a first training image and a second training image which are input in advance to obtain a training matching point set between the first training image and the second training image;
carrying out error matching point elimination processing on the training matching point set to obtain a training inner point set corresponding to the training matching point set;
adding one to the pre-recorded training iteration times, and calculating a basic matrix as training output data according to a training inner point set based on an eight-point method;
judging whether the training output data accords with training matching data between the first training image and the second training image or not according to the training output data;
if the training output data accords with the training matching data, taking the updated training iteration times as target iteration times;
if the training output data do not accord with the training matching data, taking the training inner point set as a training matching point set to carry out error matching point elimination processing until the training output data accord with the training matching data;
the first training image and the second training image are images of the same object from different angles.
Further, in the image feature matching apparatus of the present embodiment, the rough matching module 14 includes: a distance calculation unit, a ratio calculation unit, a ratio judgment unit, a second determination unit and a second set unit.
The distance calculation unit is used for calculating the Euclidean distance between each standard characteristic point and the target characteristic point according to the target characteristic descriptor corresponding to each target characteristic point, the standard characteristic descriptor corresponding to each standard characteristic point and the Euclidean distance calculation rule, and determining the nearest Euclidean distance and the next nearest Euclidean distance corresponding to the target characteristic point;
the ratio calculation unit is used for calculating the distance ratio of the nearest Euclidean distance corresponding to each target feature point to the next nearest Euclidean distance;
the ratio judging unit is used for judging whether the distance ratio corresponding to each target characteristic point is smaller than a preset threshold value or not;
a second determining unit, configured to, if the distance ratio corresponding to the target feature point is smaller than a preset threshold, use a standard feature point corresponding to the closest euclidean distance as a matching point corresponding to the target feature point;
and the second set unit is used for collecting all the target feature points and the matching points corresponding to all the target feature points to obtain an initial matching point set between the target feature point set and the standard feature point set.
Further, in the image feature matching apparatus of the present embodiment, the rough matching module 14 further includes: a creating unit.
The creating unit is used for creating a matching tree corresponding to each target feature point in the target feature point set according to the standard feature point set and the target feature point set;
and the distance calculation unit is specifically used for calculating, based on the matching tree corresponding to each target feature point, the Euclidean distance between each standard feature point and the target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point and the Euclidean distance calculation rule, and determining the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a schematic structural diagram provided in an embodiment of the image feature matching system of the present invention, and as shown in fig. 4, the image feature matching system of the present embodiment includes: a processor 21 and a memory 22 connected to the processor 21. The memory 22 is used for storing a computer program for executing at least the image feature matching method described in the above embodiments. The processor 21 is used to invoke and execute computer programs.
The image feature matching system of the embodiment has the advantages that the dimension of the extracted feature descriptors is smaller than 128 dimensions, the calculation complexity and the matching time are reduced, finally, the mismatching is eliminated, the mismatching rate is reduced while the calculation complexity and the matching time are reduced, and the image matching accuracy is guaranteed.
Fig. 5 is a schematic structural diagram provided by an embodiment of the PCB visual inspection apparatus of the present invention, and as shown in fig. 5, the PCB visual inspection apparatus of the present embodiment includes: an image collector, a data analyzer and the image feature matching system described in the above embodiments. The image collector and the data analyzer are respectively connected with the image characteristic matching system.
The image collector is used for collecting the PCB surface image and inputting the PCB surface image into the image feature matching system so that the image feature matching system outputs matching data corresponding to the PCB surface image; and the data analyzer is used for analyzing the matching data output by the image characteristic matching system and determining the detection result of the PCB.
When the PCB visual detection equipment of the embodiment is used for image matching, the dimensionality of the extracted feature descriptors is smaller than 128 dimensionalities, the calculation complexity and the matching time are reduced, finally, mismatching is eliminated, the mismatching rate is reduced while the calculation complexity and the matching time are reduced, and the image matching accuracy is guaranteed.
It should be understood that the same or similar parts of the above embodiments may be cross-referenced, and for content not described in detail in one embodiment, reference may be made to the corresponding description in other embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, "a plurality" means at least two unless otherwise specified.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (12)

1. An image feature matching method, comprising:
acquiring a pre-input image to be matched and a standard image;
extracting a target feature point set corresponding to the image to be matched and a standard feature point set corresponding to the standard image;
extracting a target feature descriptor of a preset dimension for each target feature point in the target feature point set and a standard feature descriptor of the preset dimension for each standard feature point in the standard feature point set, wherein the preset dimension is less than 128 dimensions;
performing rough matching between the target feature points and the standard feature points according to the target feature descriptor corresponding to each target feature point and the standard feature descriptor corresponding to each standard feature point, finding in the standard feature point set a matching point corresponding to each target feature point, and generating an initial matching point set between the target feature point set and the standard feature point set;
and performing mismatching point elimination processing on the initial matching point set to obtain matching data between the image to be matched and the standard image.
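As an illustration only, and not the patented implementation, the claim-1 pipeline can be sketched as follows. Feature extraction is stubbed with random data (a real system would use, e.g., a dimensionality-reduced SIFT-style descriptor to stay under 128 dimensions), coarse matching is a plain nearest-neighbour search, and the names `detect_and_describe` and `match_images` are hypothetical:

```python
import numpy as np

def detect_and_describe(image, dim=64, n=40, seed=0):
    """Hypothetical stub for feature extraction: returns keypoint
    coordinates and descriptors of a preset dimension below 128.
    Random data stands in for a real sub-128-dimensional descriptor."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0, 100, (n, 2)), rng.normal(size=(n, dim))

def match_images(image_to_match, standard_image, dim=64):
    """Claim-1 pipeline sketch: extract the target and standard feature
    point sets and their sub-128-dimensional descriptors, coarse-match
    each target point to its nearest standard point, and return the
    initial matching point set as (target_index, standard_index) pairs.
    Mismatching point elimination (claims 2-3) would follow."""
    _, target_desc = detect_and_describe(image_to_match, dim, seed=1)
    _, standard_desc = detect_and_describe(standard_image, dim, seed=2)
    # Coarse matching: nearest standard descriptor per target descriptor
    # (claim 5 refines this with a nearest/next-nearest ratio test).
    d = np.linalg.norm(target_desc[:, None] - standard_desc[None], axis=2)
    return [(i, int(j)) for i, j in enumerate(d.argmin(axis=1))]
```

The stubbed extractor makes the control flow runnable end to end without committing to a particular detector.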
2. The image feature matching method according to claim 1, wherein performing the mismatching point elimination processing on the initial matching point set to obtain the matching data between the image to be matched and the standard image comprises:
taking the initial matching point set as a point set to be processed, and dividing all matching points in the point set to be processed into a preset number of parts to obtain a preset number of to-be-processed point subsets;
performing mismatching point elimination processing on each to-be-processed point subset to obtain the interior points corresponding to each to-be-processed point subset;
collecting the interior points of all the to-be-processed point subsets to obtain a current interior point set corresponding to the image to be matched;
incrementing a pre-recorded current iteration count by one, and judging whether the updated current iteration count reaches a target iteration count determined by pre-training;
if the current iteration count reaches the target iteration count, calculating a fundamental matrix from the current interior point set based on the eight-point method as the matching data between the image to be matched and the standard image;
and if the current iteration count does not reach the target iteration count, taking the current interior point set as the point set to be processed, and continuing the division and mismatching point elimination processing on the point set to be processed until the current iteration count reaches the target iteration count.
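The iterative partition-and-eliminate loop of claim 2 can be sketched independently of the per-subset filter. In this minimal sketch the per-subset mismatch filter (detailed in claim 3) is passed in as a pluggable callable, `reject_subset` is a hypothetical name, and the final eight-point fundamental-matrix fit is omitted:

```python
import numpy as np

def iterative_elimination(initial_matches, reject_subset, n_parts=4,
                          target_iteration_count=3, seed=0):
    """Claim-2 outer loop sketch: treat the initial matching point set as
    the point set to be processed, split it into a preset number of
    to-be-processed subsets, eliminate mismatches in each subset, pool
    the surviving interior points, and repeat until the trained target
    iteration count is reached. A fundamental matrix would then be fit
    to the returned set by the eight-point method."""
    rng = np.random.default_rng(seed)
    pending = list(initial_matches)
    for _ in range(target_iteration_count):
        order = rng.permutation(len(pending))
        interior = []
        for part in np.array_split(order, n_parts):
            subset = [pending[i] for i in part]
            interior.extend(reject_subset(subset))
        pending = interior        # current interior point set
    return pending
```

Because the filter is injected, the same loop works whether the per-subset step is the RANSAC-style procedure of claim 3 or any other rejection rule.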
3. The image feature matching method according to claim 2, wherein performing the mismatching point elimination processing on each to-be-processed point subset to obtain the interior points corresponding to each to-be-processed point subset comprises:
randomly extracting a preset number of current matching point pairs from the to-be-processed point subset;
determining a current fundamental matrix from the current matching point pairs based on the eight-point method;
determining a conversion error value of each group of initial matching point pairs not extracted from the to-be-processed point subset according to the current fundamental matrix and a preset error calculation rule;
judging whether the conversion error value of each group of initial matching point pairs is smaller than a preset error threshold value;
if the conversion error value of an initial matching point pair is smaller than the preset error threshold value, taking the initial matching point pair as an interior point pair;
if the conversion error value of an initial matching point pair is not smaller than the preset error threshold value, rejecting the initial matching point pair;
collecting the interior point pairs corresponding to the to-be-processed point subset to obtain a current interior point subset corresponding to the to-be-processed point subset;
incrementing a pre-recorded extraction count by one, and judging whether the extraction count reaches a preset target extraction count;
if the extraction count does not reach the target extraction count, randomly extracting a preset number of current matching point pairs from the to-be-processed point subset again so as to re-determine the current interior point subset;
and if the extraction count reaches the target extraction count, comparing the numbers of interior points in the current interior point subsets obtained from the respective extractions, and taking the interior points of the current interior point subset with the largest number of interior points as the interior points corresponding to the to-be-processed point subset.
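Claim 3 is a RANSAC-style scheme: repeatedly sample a minimal set, fit a current fundamental matrix by the eight-point method, score the remaining pairs, and keep the largest interior subset. The sketch below makes two assumptions the claim leaves open: the "preset error calculation rule" is taken to be the algebraic epipolar residual |x2ᵀ F x1|, and Hartley normalization is added for numerical stability:

```python
import numpy as np

def eight_point(pts1, pts2):
    """Fundamental matrix from >= 8 correspondences (normalized eight-point)."""
    def hartley_normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.column_stack([p, np.ones(len(p))]) @ T.T, T

    x1, T1 = hartley_normalize(np.asarray(pts1, float))
    x2, T2 = hartley_normalize(np.asarray(pts2, float))
    # Each pair gives one row of A f = 0, f = vec(F) in row-major order.
    A = np.column_stack([x2[:, :1] * x1, x2[:, 1:2] * x1, x1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)               # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)

def ransac_interior_points(pairs, target_extraction_count=200,
                           err_thresh=1e-3, seed=0):
    """Claim-3 sketch: per extraction, randomly draw 8 current matching
    point pairs, fit a current fundamental matrix, mark pairs whose
    conversion error is below the threshold as interior point pairs,
    and finally keep the largest current interior point subset."""
    pts1 = np.asarray([p for p, _ in pairs], float)
    pts2 = np.asarray([q for _, q in pairs], float)
    h1 = np.column_stack([pts1, np.ones(len(pts1))])
    h2 = np.column_stack([pts2, np.ones(len(pts2))])
    rng = np.random.default_rng(seed)
    best = []
    for _ in range(target_extraction_count):
        idx = rng.choice(len(pairs), size=8, replace=False)
        F = eight_point(pts1[idx], pts2[idx])
        err = np.abs(np.sum(h2 * (h1 @ F.T), axis=1))   # |x2^T F x1|
        interior = [pairs[i] for i in np.flatnonzero(err < err_thresh)]
        if len(interior) > len(best):
            best = interior
    return best
```

The threshold and trial count are illustrative values; a deployed system would tune both (the patent trains the outer iteration count, claim 4, rather than these inner parameters).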
4. The image feature matching method according to claim 2, wherein the step of determining the target iteration count comprises:
performing feature extraction and rough matching on a pre-input first training image and second training image to obtain a training matching point set between the first training image and the second training image;
performing mismatching point elimination processing on the training matching point set to obtain a training interior point set corresponding to the training matching point set;
incrementing a pre-recorded training iteration count by one, and calculating a fundamental matrix from the training interior point set based on the eight-point method as training output data;
judging whether the training output data conforms to training matching data between the first training image and the second training image;
if the training output data conforms to the training matching data, taking the updated training iteration count as the target iteration count;
and if the training output data does not conform to the training matching data, taking the training interior point set as the training matching point set and repeating the mismatching point elimination processing until the training output data conforms to the training matching data;
wherein the first training image and the second training image are images of the same object captured from different angles.
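The iteration-count training of claim 4 reduces to a counting loop. In this hedged sketch, `eliminate` (one round of mismatching point elimination) and `conforms` (comparison of the fitted output against known training matching data) are assumed callables, with the eight-point fit folded into `conforms` for brevity:

```python
def determine_target_iteration_count(train_pairs, eliminate, conforms,
                                     max_rounds=50):
    """Claim-4 sketch: repeat mismatching point elimination on the
    training matching point set, incrementing a training iteration count
    each round, until the output conforms to the known training matching
    data; that count becomes the target iteration count used at runtime."""
    count = 0
    current = list(train_pairs)
    while count < max_rounds:
        current = eliminate(current)   # training interior point set
        count += 1                     # training iteration count + 1
        if conforms(current):
            return count
    raise RuntimeError("training did not converge within max_rounds")
```

The `max_rounds` guard is an added safety net not stated in the claim.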
5. The image feature matching method according to claim 1, wherein performing the rough matching between the target feature points and the standard feature points according to the target feature descriptor corresponding to each target feature point and the standard feature descriptor corresponding to each standard feature point, finding in the standard feature point set the matching point corresponding to each target feature point, and generating the initial matching point set between the target feature point set and the standard feature point set comprises:
calculating the Euclidean distance between each standard feature point and a target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point, and a Euclidean distance calculation rule, and determining the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point;
calculating, for each target feature point, the distance ratio of the nearest Euclidean distance to the next-nearest Euclidean distance;
judging whether the distance ratio corresponding to each target feature point is smaller than a preset threshold value;
if the distance ratio corresponding to a target feature point is smaller than the preset threshold value, taking the standard feature point corresponding to the nearest Euclidean distance as the matching point corresponding to the target feature point;
and collecting all the target feature points and their corresponding matching points to obtain the initial matching point set between the target feature point set and the standard feature point set.
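Claim 5 is the classic nearest/next-nearest distance-ratio test (as popularized by Lowe's SIFT matching). A minimal vectorized sketch, assuming descriptors are rows of NumPy arrays; the threshold value 0.8 is an illustrative choice, not one given in the patent:

```python
import numpy as np

def ratio_test_match(target_desc, standard_desc, ratio_thresh=0.8):
    """For every target descriptor, find the nearest and next-nearest
    standard descriptors by Euclidean distance; accept the nearest as a
    match only if nearest / next-nearest is below the preset threshold."""
    # Pairwise Euclidean distances, shape (n_target, n_standard).
    d = np.linalg.norm(target_desc[:, None, :] - standard_desc[None, :, :],
                       axis=2)
    order = np.argsort(d, axis=1)
    nearest, second = order[:, 0], order[:, 1]
    rows = np.arange(len(target_desc))
    accept = d[rows, nearest] < ratio_thresh * d[rows, second]
    return [(int(i), int(nearest[i])) for i in rows[accept]]
```

When the two closest candidates are nearly equidistant, the ratio is close to 1 and the ambiguous match is dropped, which is exactly the mismatch-suppressing effect the claim relies on.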
6. The image feature matching method according to claim 5, wherein before calculating the Euclidean distance between each standard feature point and a target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point, and the Euclidean distance calculation rule, and determining the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point, the method further comprises:
creating a matching tree corresponding to each target feature point in the target feature point set according to the standard feature point set and the target feature point set;
correspondingly, the calculating of the Euclidean distance between each standard feature point and the target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point, and the Euclidean distance calculation rule, and the determining of the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point, comprise:
calculating, based on the matching tree corresponding to each target feature point, the Euclidean distance between each standard feature point and the target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point, and the Euclidean distance calculation rule, and determining the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point.
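Claim 6 accelerates the nearest/next-nearest search with a "matching tree". A common concrete realization is a k-d tree over the standard descriptors queried for the two closest neighbours; this SciPy-based sketch is an assumed choice (the patent does not name a specific tree structure) and builds one shared tree rather than one tree per target point:

```python
import numpy as np
from scipy.spatial import cKDTree

def tree_ratio_match(target_desc, standard_desc, ratio_thresh=0.8):
    """k-d-tree variant of the nearest/next-nearest ratio test: query the
    tree once per target descriptor for its two closest standard
    descriptors (Euclidean), then apply the distance-ratio criterion."""
    tree = cKDTree(standard_desc)
    dists, idx = tree.query(target_desc, k=2)   # both of shape (n, 2)
    matches = []
    for i, ((d1, d2), (j1, _)) in enumerate(zip(dists, idx)):
        if d1 < ratio_thresh * d2:
            matches.append((i, int(j1)))
    return matches
```

For low-dimensional descriptors the tree query replaces the brute-force scan of claim 5, which is where the sub-128-dimensional descriptor and the tree structure complement each other; note that k-d trees lose their advantage as the dimension grows.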
7. An image feature matching apparatus, characterized by comprising:
the acquisition module is used for acquiring a pre-input image to be matched and a standard image;
the first extraction module is used for extracting a target feature point set corresponding to the image to be matched and a standard feature point set corresponding to the standard image;
the second extraction module is used for extracting a target feature descriptor of a preset dimension for each target feature point in the target feature point set and a standard feature descriptor of the preset dimension for each standard feature point in the standard feature point set, wherein the preset dimension is less than 128 dimensions;
the rough matching module is used for performing rough matching between the target feature points and the standard feature points according to the target feature descriptor corresponding to each target feature point and the standard feature descriptor corresponding to each standard feature point, finding in the standard feature point set a matching point corresponding to each target feature point, and generating an initial matching point set between the target feature point set and the standard feature point set;
and the eliminating module is used for performing mismatching point elimination processing on the initial matching point set to obtain matching data between the image to be matched and the standard image.
8. The image feature matching device according to claim 7, wherein the eliminating module comprises: a distribution unit, a processing unit, a first set unit, an iteration judgment unit, a matrix calculation unit, and a first determination unit;
the distribution unit is used for taking the initial matching point set as a point set to be processed, and dividing all matching point pairs in the point set to be processed into a preset number of parts to obtain a preset number of to-be-processed point subsets;
the processing unit is used for performing mismatching point elimination processing on each to-be-processed point subset to obtain the interior points corresponding to each to-be-processed point subset;
the first set unit is used for collecting the interior points of all the to-be-processed point subsets to obtain a current interior point set corresponding to the image to be matched;
the iteration judgment unit is used for incrementing a pre-recorded current iteration count by one and judging whether the updated current iteration count reaches a target iteration count determined by pre-training;
the matrix calculation unit is used for calculating, if the current iteration count reaches the target iteration count, a fundamental matrix from the current interior point set based on the eight-point method as matching data between the image to be matched and the standard image;
and the first determination unit is used for taking, if the current iteration count does not reach the target iteration count, the current interior point set as the point set to be processed, and continuing the division and mismatching point elimination processing on the point set to be processed until the current iteration count reaches the target iteration count.
9. The image feature matching device according to claim 7, wherein the rough matching module comprises: a distance calculation unit, a ratio calculation unit, a ratio judgment unit, a second determination unit, and a second set unit;
the distance calculation unit is used for calculating the Euclidean distance between each standard feature point and a target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point, and a Euclidean distance calculation rule, and determining the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point;
the ratio calculation unit is used for calculating, for each target feature point, the distance ratio of the nearest Euclidean distance to the next-nearest Euclidean distance;
the ratio judgment unit is used for judging whether the distance ratio corresponding to each target feature point is smaller than a preset threshold value;
the second determination unit is configured to, if the distance ratio corresponding to a target feature point is smaller than the preset threshold value, take the standard feature point corresponding to the nearest Euclidean distance as the matching point corresponding to the target feature point;
and the second set unit is configured to collect all the target feature points and their corresponding matching points to obtain an initial matching point set between the target feature point set and the standard feature point set.
10. The image feature matching device according to claim 9, wherein the rough matching module further comprises: a creating unit;
the creating unit is used for creating a matching tree corresponding to each target feature point in the target feature point set according to the standard feature point set and the target feature point set;
the distance calculation unit is specifically configured to calculate, based on the matching tree corresponding to each target feature point, the Euclidean distance between each standard feature point and the target feature point according to the target feature descriptor corresponding to each target feature point, the standard feature descriptor corresponding to each standard feature point, and the Euclidean distance calculation rule, and to determine the nearest Euclidean distance and the next-nearest Euclidean distance corresponding to the target feature point.
11. An image feature matching system, comprising: the system comprises a processor and a memory connected with the processor;
the memory is configured to store a computer program for performing at least the image feature matching method of any one of claims 1-6;
the processor is used for calling and executing the computer program.
12. A PCB visual inspection device, comprising: an image collector, a data analyzer, and an image feature matching system as claimed in claim 11;
the image collector and the data analyzer are each connected to the image feature matching system;
the image collector is used for collecting a PCB surface image and inputting the PCB surface image into the image feature matching system so that the image feature matching system outputs matching data corresponding to the PCB surface image;
the data analyzer is used for analyzing the matching data output by the image feature matching system and determining the detection result of the PCB.
CN202110955667.7A 2021-08-19 2021-08-19 Image feature matching method, device and system and PCB visual detection equipment Pending CN113723428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110955667.7A CN113723428A (en) 2021-08-19 2021-08-19 Image feature matching method, device and system and PCB visual detection equipment

Publications (1)

Publication Number Publication Date
CN113723428A true CN113723428A (en) 2021-11-30

Family

ID=78676925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110955667.7A Pending CN113723428A (en) 2021-08-19 2021-08-19 Image feature matching method, device and system and PCB visual detection equipment

Country Status (1)

Country Link
CN (1) CN113723428A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722731A (en) * 2012-05-28 2012-10-10 南京航空航天大学 Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm
CN109410255A (en) * 2018-10-17 2019-03-01 中国矿业大学 A kind of method for registering images and device based on improved SIFT and hash algorithm
CN112150520A (en) * 2020-08-18 2020-12-29 徐州华讯科技有限公司 Image registration method based on feature points

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU ZHIHUA et al.: "Collaborative Observation and Assessment of Geological Disasters and Building Damage with UAV and Ground-Based LiDAR", vol. 1, 31 March 2019, Beijing Institute of Technology Press, pages: 37 - 38 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998773A (en) * 2022-08-08 2022-09-02 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system
CN114998773B (en) * 2022-08-08 2023-02-17 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system

Similar Documents

Publication Publication Date Title
CN113269237B (en) Assembly change detection method, device and medium based on attention mechanism
WO2020107717A1 (en) Visual saliency region detection method and apparatus
KR101303124B1 (en) Apparatus and method for object tracking using feature descriptor, and apparatus and method for removing garbage feature
CN109815956B (en) License plate character recognition method based on self-adaptive position segmentation
CN110135438B (en) Improved SURF algorithm based on gradient amplitude precomputation
CN111797744B (en) Multimode remote sensing image matching method based on co-occurrence filtering algorithm
CN108550166B (en) Spatial target image matching method
CN111079739A (en) Multi-scale attention feature detection method
CN110991547A (en) Image significance detection method based on multi-feature optimal fusion
CN109165603B (en) Ship detection method and device
CN110610123A (en) Multi-target vehicle detection method and device, electronic equipment and storage medium
CN115588190A (en) Mature fruit identification and picking point positioning method and device
LU503034B1 (en) Palmprint Recognition Method Based on Fusion Depth Network
CN113450269A (en) Point cloud key point extraction method based on 3D vision
Zhou et al. Fast circle detection using spatial decomposition of Hough transform
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN113723428A (en) Image feature matching method, device and system and PCB visual detection equipment
CN111738079A (en) Banknote denomination recognition method and device
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN112150474A (en) Underwater bubble image feature segmentation and extraction method
Liu et al. An improved image retrieval method based on sift algorithm and saliency map
CN112084365A (en) Real-time image retrieval method of network camera based on OpenCV and CUDA acceleration
CN108256578B (en) Gray level image identification method, device, equipment and readable storage medium
CN115937736A (en) Small target detection method based on attention and context awareness
CN114708420A (en) Visual positioning method and device based on local variance and posterior probability classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination