CN116824183A - Image feature matching method and device based on multiple feature descriptors - Google Patents

Image feature matching method and device based on multiple feature descriptors

Info

Publication number
CN116824183A
CN116824183A (application CN202310841374.5A)
Authority
CN
China
Prior art keywords
descriptor
feature
value
threshold
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310841374.5A
Other languages
Chinese (zh)
Other versions
CN116824183B (en)
Inventor
樊迎博
毛善君
汤璧屾
陈华州
宋春久
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Beijing Longruan Technologies Inc
Original Assignee
Peking University
Beijing Longruan Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University, Beijing Longruan Technologies Inc filed Critical Peking University
Priority to CN202310841374.5A priority Critical patent/CN116824183B/en
Publication of CN116824183A publication Critical patent/CN116824183A/en
Application granted granted Critical
Publication of CN116824183B publication Critical patent/CN116824183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758: Involving statistics of pixels or of feature values, e.g. histogram matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/11: Technique with transformation invariance effect

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image feature matching method and device based on multiple feature descriptors. It relates to the fields of image processing and image feature matching, and aims to construct multiple feature descriptors and achieve accurate image matching by detecting feature points in the images. The multiple feature descriptors are constructed from different permutations and combinations of the symbol, mean and central value descriptors, taking into account the direction information, numerical information and global information of the feature points. A sliding window scans the pixel matrix cropped around each feature point; three feature descriptors are extracted in each window, then combined and spliced to generate a corresponding matrix value distribution histogram and the feature descriptor. Feature point matching is performed on the feature descriptors of different images, and the subset of feature points whose descriptors are closest is selected as the optimal points for image matching, making the matching result more accurate. The method provides technical support for image matching in mine mining environments, scene modeling and industrial production.

Description

Image feature matching method and device based on multiple feature descriptors
Technical Field
The invention relates to the field of image processing and image feature matching, in particular to an image feature matching method and an image feature matching device based on multiple feature descriptors.
Background
The construction of feature descriptors and image matching are key technologies in the field of computer vision, used for identifying, matching and tracking specific features in images. These techniques are widely used in many areas, including computer graphics, robotics, autonomous driving, virtual reality, and the like.
However, pose changes, illumination changes and noise interference during camera shooting easily degrade the quality of the generated feature descriptors and reduce matching precision. Meanwhile, on some embedded or mobile devices, hardware resources are limited and large-scale feature extraction and matching may not be feasible. As a result, the operating efficiency and application scenarios of image matching are greatly limited.
At present, for the problems of feature descriptor construction and image matching, the prior art lacks a method that fully considers the direction information, numerical information and global information of the feature points. Some methods generate feature descriptors by computing gradient histograms around the image feature points; such methods easily miss the global information of the image and mismatch feature points that have the same local gradients but different pixel values. This introduces unstable factors into the application of the algorithm and easily causes large image matching errors.
Other methods avoid the complexity of generating feature descriptors by performing brute-force search matching directly on partial regions around the feature points. However, these methods place high demands on hardware, perform poorly for large-scale feature extraction and matching, and cannot meet the practical requirements of large-scale operation scenarios such as cities or industry.
Disclosure of Invention
In view of the above problems, the present invention proposes an image feature matching method and an image feature matching device based on multiple feature descriptors.
The embodiment of the invention provides an image feature matching method based on multiple feature descriptors, which comprises the following steps:
detecting feature points in each image based on a feature point detection algorithm;
according to the distribution and actual requirements of the feature points, a first threshold, a second threshold and a third threshold are set, wherein the first threshold is used for setting the size of a pixel matrix around the feature points to be intercepted, the second threshold is used for setting the radius of a sliding window, and the third threshold is used for setting the bit width of a feature descriptor;
scanning and calculating the characteristic points by using the first threshold value and the second threshold value to obtain a symbol descriptor, a mean descriptor and a central value descriptor;
based on the symbol descriptor, the mean descriptor and the central value descriptor, combining the third threshold value to obtain a feature descriptor;
and carrying out feature point matching on the feature descriptors of the two or more images, and selecting the feature point with the optimal matching result as a final image matching result according to the comparison result.
Optionally, the feature point detection algorithm is only used for detecting feature points in each image, and the feature point detection algorithms include the FAST, SIFT, SURF and SuperPoint algorithms.
Optionally, scanning and calculating the feature points by using the first threshold value and the second threshold value to obtain a symbol descriptor, a mean descriptor and a central value descriptor, including:
and scanning and calculating a pixel matrix intercepted around the feature points by utilizing a sliding window with the radius of the sliding window to obtain the descriptor, the mean descriptor and the central value descriptor.
Optionally, the first threshold is a patch_size threshold;
the second threshold is a radius threshold;
the third threshold is a bit_width threshold.
Optionally, the values of the first threshold, the second threshold and the third threshold are obtained by calculation or by network self-training from the distribution of the feature points and the actual requirements.
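As a rough illustration, such a data-driven choice of thresholds might be sketched as follows. The nearest-neighbour heuristic, the function name `choose_thresholds` and the specific formulas are assumptions invented for this sketch; the patent leaves the rule open to manual setting, calculation, or network self-training:

```python
import numpy as np

def choose_thresholds(keypoints, bit_width=18):
    """Illustrative heuristic: derive patch_size and radius from the
    spread of the detected feature points.
    keypoints: iterable of (x, y) coordinates."""
    pts = np.asarray(keypoints, dtype=float)
    # Pairwise distances; median distance to the nearest other keypoint.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    spacing = float(np.median(d.min(axis=1)))
    patch_size = max(2, int(spacing // 2))   # half-width of cropped region
    radius = max(1, patch_size // 2)         # sliding-window radius
    return patch_size, radius, bit_width
```

With feature points on a 4-pixel grid this yields the example values used later in the description (patch_size 2, radius 1, bit_width 18).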
Optionally, based on the symbol descriptor, the mean descriptor and the central value descriptor, combining the third threshold value to obtain a feature descriptor, including:
splicing and combining the symbol descriptor, the mean descriptor and the central value descriptor in different manners to directly generate a feature descriptor with the bit width set by the third threshold; or
splicing and combining the symbol descriptor, the mean descriptor and the central value descriptor in different manners to generate a corresponding matrix value distribution histogram, and generating a feature descriptor with the bit width set by the third threshold from the matrix value distribution histogram.
Optionally, the feature descriptors whose bit width is set by the third threshold remain consistent under different rotations, scalings, flips and affine transformations; the symbol descriptor, the mean descriptor and the central value descriptor are respectively calculated as follows:
calculating the absolute value of each peripheral pixel (every pixel except the central pixel) in each sliding window and comparing it with the absolute value of the central pixel, setting the bit to 1 if the peripheral absolute value is larger and to 0 if it is smaller, and arranging the results in sequence to generate the symbol descriptor;
calculating the mean of all pixels in each sliding window and comparing it with the mean of the pixels in the sliding window containing the feature point, setting the bit to 1 if it is larger and to 0 if it is smaller, and arranging the results in sequence to generate the mean descriptor;
comparing the central pixel value of each sliding window with the mean of the pixel matrix intercepted around the feature point and with the mean of the full-image pixel matrix, setting the bit to 1 if it is larger than both means and to 0 if it is smaller, and arranging the results in sequence to generate the central value descriptor.
Optionally, the symbol descriptor, the mean descriptor and the central value descriptor are spliced and combined in different manners, including:
splicing in sequence in the order of the central value descriptor, the symbol descriptor and the mean descriptor to generate the feature descriptor; or
adding the symbol descriptor and the mean descriptor bit by bit, and prepending the central value descriptor as the high-order bits, to generate the feature descriptor.
Optionally, performing feature point matching on feature descriptors of two or more images includes:
performing the feature point matching by L1 norm matching or L2 norm matching; or
performing the feature point matching by calculating the Hamming distance between the feature descriptor of the first image and the feature descriptor of the second image; or
performing the feature point matching by examining adjacent bit pairs, from right to left, of the feature descriptors of the first image and the second image: if a pair is not all zeros it is marked as 1, and the number of resulting 1 bits is counted.
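A minimal sketch of two of the claimed distances, assuming descriptors are stored as bit strings packed into Python integers. The adjacent-bit-pair variant follows one possible reading of the claim (XOR the two descriptors first, then scan bit pairs), which is an interpretive assumption:

```python
def hamming_distance(d1: int, d2: int) -> int:
    """Hamming distance between two descriptors stored as integers."""
    return bin(d1 ^ d2).count("1")

def adjacent_pair_distance(d1: int, d2: int, bit_width: int = 18) -> int:
    """One reading of the third matching variant: XOR the descriptors,
    scan adjacent bit pairs from right to left, and count each pair
    that is not all zeros as a single 1."""
    diff = d1 ^ d2
    count = 0
    for i in range(0, bit_width, 2):
        if (diff >> i) & 0b11 != 0:
            count += 1
    return count
```

The pair-based count is coarser than plain Hamming distance, which may be the point: two differing bits that fall in the same pair cost only one unit.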
The embodiment of the invention provides an image feature matching device based on multiple feature descriptors, which comprises:
a detection module 410, configured to detect a feature point in each image based on a feature point detection algorithm;
the threshold setting module 420 is configured to set a first threshold, a second threshold and a third threshold according to the distribution and the actual requirement of the feature points, where the first threshold is used to set a size of a pixel matrix around the feature points to be intercepted, the second threshold is used to set a sliding window radius, and the third threshold is used to set a bit width of the feature descriptor;
a scanning module 430, configured to scan and calculate the feature points by using the first threshold and the second threshold to obtain a symbol descriptor, a mean descriptor, and a central value descriptor;
a feature descriptor module 440, configured to obtain a feature descriptor based on the symbol descriptor, the mean descriptor, and the central value descriptor, in combination with the third threshold value;
the matching selection module 450 is configured to perform feature point matching on feature descriptors of two or more images, and select a feature point with an optimal matching result as a final image matching result according to the comparison result.
Optionally, the scanning module is specifically configured to:
and scanning and calculating a pixel matrix intercepted around the feature points by utilizing a sliding window with the radius of the sliding window to obtain the descriptor, the mean descriptor and the central value descriptor.
Optionally, the values of the first threshold, the second threshold and the third threshold in the threshold setting module are obtained by calculation or by network self-training from the distribution of the feature points and the actual requirements;
the first threshold is a patch_size threshold;
the second threshold is a radius threshold;
the third threshold is a bit_width threshold.
Optionally, the feature descriptor module is specifically configured to:
splicing and combining the symbol descriptor, the mean descriptor and the central value descriptor in different manners to directly generate a feature descriptor with the bit width set by the third threshold; or
splicing and combining the symbol descriptor, the mean descriptor and the central value descriptor in different manners to generate a corresponding matrix value distribution histogram, and generating a feature descriptor with the bit width set by the third threshold from the matrix value distribution histogram;
wherein, the splicing and combining the symbol descriptor, the mean descriptor and the central value descriptor in different modes comprises:
splicing in sequence in the order of the central value descriptor, the symbol descriptor and the mean descriptor to generate the feature descriptor; or
adding the symbol descriptor and the mean descriptor bit by bit, and prepending the central value descriptor as the high-order bits, to generate the feature descriptor.
Optionally, the feature descriptors whose bit width is set by the third threshold remain consistent under different rotations, scalings, flips and affine transformations; the symbol descriptor, the mean descriptor and the central value descriptor in the scanning module are respectively calculated as follows:
calculating the absolute value of each peripheral pixel (every pixel except the central pixel) in each sliding window and comparing it with the absolute value of the central pixel, setting the bit to 1 if the peripheral absolute value is larger and to 0 if it is smaller, and arranging the results in sequence to generate the symbol descriptor;
calculating the mean of all pixels in each sliding window and comparing it with the mean of the pixels in the sliding window containing the feature point, setting the bit to 1 if it is larger and to 0 if it is smaller, and arranging the results in sequence to generate the mean descriptor;
comparing the central pixel value of each sliding window with the mean of the pixel matrix intercepted around the feature point and with the mean of the full-image pixel matrix, setting the bit to 1 if it is larger than both means and to 0 if it is smaller, and arranging the results in sequence to generate the central value descriptor.
Optionally, the matching selection module is specifically configured to:
performing the feature point matching by L1 norm matching or L2 norm matching; or
performing the feature point matching by calculating the Hamming distance between the feature descriptor of the first image and the feature descriptor of the second image; or
performing the feature point matching by examining adjacent bit pairs, from right to left, of the feature descriptors of the first image and the second image: if a pair is not all zeros it is marked as 1, and the number of resulting 1 bits is counted.
The image feature matching method based on multiple feature descriptors provided by the invention first detects the feature points in each image based on a feature point detection algorithm, and then, according to the distribution of the feature points and the actual requirements, sets three thresholds: the size of the pixel matrix to be intercepted around the feature points, the sliding window radius, and the bit width of the feature descriptor.
Then, the characteristic points are scanned and calculated by utilizing the first two thresholds, so that a symbol descriptor, a mean descriptor and a central value descriptor are obtained; based on the symbol descriptor, the mean descriptor and the central value descriptor, combining a third threshold value to obtain a feature descriptor; and finally, carrying out feature point matching on feature descriptors of two or more images, and selecting a feature point with the optimal matching result as a final image matching result according to the comparison result.
The construction method uses different permutations and combinations of the symbol, mean and central value descriptors as the multiple feature descriptors, fully considering the direction information, numerical information and global information of the feature points, which makes image matching based on these feature descriptors more accurate and effective. The global information of the image is not missed, so feature points with the same local gradients but different pixel values are naturally not mismatched, and image matching is more accurate. Meanwhile, the hardware requirements are low and large-scale feature extraction and matching perform well, satisfying the practical requirements of large-scale operation scenarios such as cities or industry; in particular, the method provides good technical support for image matching in mine mining environments, scene modeling and industrial production.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of an image feature matching method based on multiple feature descriptors according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an exemplary method for constructing multiple feature descriptors in an embodiment of the present invention;
FIG. 3 is a schematic diagram of an exemplary multi-feature descriptor set-up stitching method in accordance with an embodiment of the present invention;
fig. 4 is a block diagram of an image feature matching device based on multiple feature descriptors according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, a flowchart of an image feature matching method based on multiple feature descriptors according to an embodiment of the present invention is shown, where the image feature matching method includes:
step 101: feature points in each image are detected based on a feature point detection algorithm.
Feature points in each image are first detected based on a feature point detection algorithm. If there are multiple images, each image naturally needs to be detected to obtain its feature points. Steps 102 to 104 below are explained and illustrated by taking the processing performed on any one image, after its feature points have been detected, as an example.
In a preferred embodiment, the feature point detection algorithm provided by the invention is only used for detecting feature points in each image, and no other operation is performed. The feature point detection algorithm comprises: FAST, SIFT, SURF and SuperPoint, etc.
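Since the patent only uses the detector as an off-the-shelf black box, a toy stand-in suffices for illustration. The following is none of FAST/SIFT/SURF/SuperPoint; the neighbour-mean test, the threshold and the function name are invented for this sketch:

```python
import numpy as np

def detect_feature_points(img: np.ndarray, thresh: float = 30.0):
    """Toy detector: flag pixels whose intensity differs from the mean
    of their 8 neighbours by more than `thresh`. Returns (x, y) tuples.
    Purely illustrative; real systems would use FAST, SIFT, etc."""
    img = img.astype(float)
    h, w = img.shape
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            neighbour_mean = (win.sum() - img[y, x]) / 8.0
            if abs(img[y, x] - neighbour_mean) > thresh:
                pts.append((x, y))
    return pts
```

Any detector that returns point coordinates can be slotted in here; the following steps depend only on the coordinates, not on how they were found.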
Step 102: according to distribution and actual requirements of feature points, a first threshold, a second threshold and a third threshold are set, wherein the first threshold is used for setting the size of a pixel matrix around the feature points to be intercepted, the second threshold is used for setting the radius of a sliding window, and the third threshold is used for setting the bit width of a feature descriptor.
For any image, after obtaining the characteristic points, three thresholds are set according to the distribution and actual requirements of the characteristic points, namely: the method comprises the steps of a first threshold value, a second threshold value and a third threshold value, wherein the first threshold value is used for setting the size of a pixel matrix around a feature point to be intercepted, the second threshold value is used for setting the radius of a sliding window, and the third threshold value is used for setting the bit width of a feature descriptor. By setting the three thresholds, a foundation is laid for the subsequent application of the direction information, numerical information and global information of the feature points into the construction of feature descriptors and image matching.
In one possible embodiment, the threshold values of the first threshold value, the second threshold value and the third threshold value may be set directly and manually, or may be obtained by calculation or network self-training through distribution of feature points and actual requirements.
In a preferred embodiment, the first threshold may be set as the patch_size threshold, the second threshold as the radius threshold, and the third threshold as the bit_width threshold.
Step 103: and scanning and calculating the characteristic points by using the first threshold value and the second threshold value to obtain a symbol descriptor, a mean descriptor and a central value descriptor.
After the size of the pixel matrix around the feature point to be intercepted, the radius of the sliding window and the bit width of the feature descriptor are set, the first threshold value and the second threshold value can be utilized, namely: and scanning and calculating the feature points by utilizing the size of the pixel matrix around the feature points to be intercepted and the radius of the sliding window to obtain a symbol descriptor, a mean descriptor and a central value descriptor.
A preferred way is: scanning and calculating the pixel matrix intercepted around the feature points by using a sliding window with the sliding window radius, to obtain the symbol descriptor, the mean descriptor and the central value descriptor.
Taking the first threshold as the patch_size threshold, the second threshold as the radius threshold and the third threshold as the bit_width threshold as an example, assume the patch_size threshold is set to 2, the radius threshold to 1 and the bit_width threshold to 18. Referring to the schematic diagram of the construction of the multiple feature descriptor shown in fig. 2, a 5×5 region around the feature point is intercepted according to the patch_size threshold of 2, and within that region the sliding window size is 3×3 according to the radius threshold of 1.
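The interception and scanning just described can be sketched as follows. The function names and the NumPy representation are assumptions, and boundary handling for feature points near the image edge is omitted; note that patch_size and radius act as half-widths, so patch_size 2 gives a 5×5 patch and radius 1 a 3×3 window:

```python
import numpy as np

def crop_patch(img, pt, patch_size=2):
    """Crop the (2*patch_size+1)-square pixel matrix around the feature
    point pt=(x, y); patch_size=2 gives the 5x5 region of fig. 2."""
    x, y = pt
    return img[y - patch_size:y + patch_size + 1,
               x - patch_size:x + patch_size + 1]

def sliding_windows(patch, radius=1):
    """Yield every (2*radius+1)-square window inside the patch; with a
    5x5 patch and radius=1 these are the 9 windows of 3x3 in fig. 2."""
    size = 2 * radius + 1
    h, w = patch.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            yield patch[y:y + size, x:x + size]
```

With these parameters each feature point yields exactly 9 windows, matching the 9 per-window values (S1..S9, M1..M9, C1..C9) described below.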
The feature descriptors whose bit width is set by the bit_width threshold remain consistent under different rotations, scalings, flips and affine transformations. On this basis, the symbol descriptor, the mean descriptor and the central value descriptor are respectively calculated as follows:
and calculating the absolute value of each peripheral point pixel except the central point pixel in each sliding window compared with the absolute value of the central point pixel, setting 1 if the absolute value of the peripheral point pixel is larger than the absolute value of the central point pixel and setting 0 if the absolute value of the peripheral point pixel is smaller than the absolute value of the central point pixel, and sequentially arranging the results to generate the symbol descriptor.
And calculating the average value of all pixels in each sliding window, comparing the average value with the average value of the pixels in the sliding window where the characteristic points are located, setting 1 if the average value of all the pixels is larger than the average value of the pixels in the sliding window where the characteristic points are located, setting 0 if the average value of all the pixels is smaller than the average value of the pixels in the sliding window where the characteristic points are located, and sequentially arranging the results to generate the average value descriptor.
And calculating the central point value of each sliding window compared with the intercepted pixel matrix average value around the characteristic point and the whole image pixel matrix average value, setting 1 if the central point value of each sliding window is larger than the pixel matrix average value around the characteristic point and the whole image pixel matrix average value, setting 0 if the central point value of each sliding window is smaller than the characteristic point and the whole image pixel matrix average value, and sequentially arranging the results to generate a central value descriptor.
In connection with fig. 2, when calculating the symbol descriptor, as shown in the top row in fig. 2, the sliding window includes 9 windows from top left to bottom right, the size of each surrounding point pixel absolute value in each sliding window except for the center point pixel (for example, the white small boxes surrounded by eight shadows in the top left-most drawing) is calculated, compared with the size of the center point pixel absolute value, if the surrounding point pixel absolute value is greater than the center point pixel absolute value, 1 is set, and if the surrounding point pixel absolute value is less than the center point pixel absolute value, S1, S2, …, S8 and S9 are respectively obtained, and the results are sequentially arranged to obtain S, so as to generate the symbol descriptor, and the right-most example in fig. 2 is represented by 9×xxxxxxx.
When calculating the mean descriptor, as shown in the middle row of fig. 2, the sliding windows are divided into 9 windows from top left to bottom right, the average value of all pixels in each sliding window is calculated, the average value is compared with the average value of the pixels in the sliding window where the feature points are located (the shade in the middle row of fig. 2 and the leftmost diagram), if the average value of all the pixels is greater than the average value of the pixels in the sliding window where the feature points are located, the average value is set to 1, and if the average value is less than the average value, the average value is set to 0, M1, M2, …, M8 and M9 are respectively obtained, and the results are sequentially arranged to obtain M, so that the mean descriptor is generated, and the rightmost example in fig. 2 is shown by 9 x xxxxxxxx.
When the central value descriptor is calculated (bottom row of fig. 2), the sliding window comprises 9 windows from top left to bottom right. The central point value of each sliding window (for example, the shaded box in the leftmost drawing of the bottom row) is compared with the mean of the pixel matrix intercepted around the feature point and with the mean of the whole image: 1 is set if the central point value is greater than the mean, and 0 if it is smaller. C1, C2, …, C8 and C9 are thereby obtained and arranged in sequence, generating the central value descriptor; the rightmost example in fig. 2 is represented as 9 × xx.
The symbol descriptor, the mean descriptor and the central value descriptor can be obtained in the manner described above.
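The three binarization rules above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: it assumes a grayscale 9 × 9 patch split into nine non-overlapping 3 × 3 windows (the patent parameterizes the patch size and window radius via the first and second thresholds), and it emits two bits per window for the central value descriptor (one bit per comparison), which is one possible reading of the 9 × xx example.

```python
import numpy as np

def descriptors_for_patch(patch, image_mean):
    """Sketch of the symbol (S), mean (M) and central value (C) descriptors
    for a 9x9 patch split into nine 3x3 sliding windows (an assumed layout)."""
    windows = [patch[r:r+3, c:c+3] for r in (0, 3, 6) for c in (0, 3, 6)]
    patch_mean = patch.mean()
    center_window_mean = windows[4].mean()  # window containing the feature point

    s_bits, m_bits, c_bits = [], [], []
    for w in windows:
        center = w[1, 1]
        # S: each surrounding pixel's absolute value vs. the center pixel's
        for i in range(3):
            for j in range(3):
                if (i, j) != (1, 1):
                    s_bits.append(1 if abs(w[i, j]) > abs(center) else 0)
        # M: window mean vs. mean of the window holding the feature point
        m_bits.append(1 if w.mean() > center_window_mean else 0)
        # C: window center vs. patch mean and vs. whole-image mean (2 bits,
        # one per comparison -- an assumption about the 9 x xx format)
        c_bits.append(1 if center > patch_mean else 0)
        c_bits.append(1 if center > image_mean else 0)
    return s_bits, m_bits, c_bits
```

With these assumed sizes the descriptor lengths are 72 bits for S (8 per window), 9 bits for M and 18 bits for C.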
Step 104: and combining the third threshold value based on the symbol descriptor, the mean descriptor and the central value descriptor to obtain the feature descriptor.
After the symbol descriptor, the mean descriptor and the central value descriptor are obtained, the feature descriptor is derived from them in combination with the third threshold. Specifically:
the symbol descriptor, the mean descriptor and the central value descriptor can be spliced and combined in different ways to directly generate a feature descriptor with the bit width set by the third threshold; or, the symbol descriptor, the mean descriptor and the central value descriptor can be spliced and combined in different ways to generate a corresponding matrix value-distribution histogram, from which a feature descriptor with the bit width set by the third threshold is then generated.
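The histogram route is only named, not specified, in the text. The sketch below is therefore a hypothetical realization: the concatenated descriptor bits are grouped into bytes, a value-distribution histogram with `bit_width` bins is built over them, and each bin is binarized against the mean bin count to yield a descriptor of the third-threshold bit width. The grouping and binarization rules are assumptions.

```python
import numpy as np

def histogram_descriptor(bits, bit_width=32):
    """Hypothetical histogram-based reduction: group the concatenated
    descriptor bits into byte values, histogram them into bit_width bins,
    and binarize each bin count against the mean count."""
    values = [int("".join(map(str, bits[i:i+8])), 2)
              for i in range(0, len(bits) - 7, 8)]
    counts, _ = np.histogram(values, bins=bit_width, range=(0, 256))
    return [1 if c > counts.mean() else 0 for c in counts]
```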
In one possible embodiment, the method for differently stitching and combining the symbol descriptor, the mean descriptor, and the center value descriptor includes:
sequentially splicing in the order of the central value descriptor, the symbol descriptor and the mean descriptor to generate a feature descriptor; or adding the symbol descriptor and the mean descriptor bitwise and prepending the central value descriptor in the high-order bits to generate a feature descriptor.
Referring to the schematic diagram of the multiple-feature-descriptor combination and splicing method shown in fig. 3, the three descriptors are sequentially spliced in the order of the central value descriptor C, the symbol descriptor S and the mean descriptor M to generate a feature descriptor. Assuming that the central value descriptor C is 10 (2 bits), the symbol descriptor S is 10110100 (8 bits) and the mean descriptor M is 01110011 (8 bits), the correspondingly generated multiple feature descriptor CSM is 101011010001110011, 18 bits in total.
Of course, the order of the descriptors can be adjusted according to different actual requirements, or the symbol descriptor and the mean descriptor can be added bitwise with the central value descriptor prepended in the high-order bits to generate the feature descriptor. Fig. 3 also illustrates sequentially splicing the central value descriptor C, the mean descriptor M and the symbol descriptor S to generate the multiple feature descriptor CMS, 100111001110110100; and sequentially splicing the symbol descriptor S, the central value descriptor C and the mean descriptor M to generate the multiple feature descriptor SCM, 101101001001110011.
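The splicing examples above can be reproduced directly on bit strings. The sketch below implements the three orderings CSM, CMS and SCM from fig. 3, plus one reading of the bitwise-addition variant (adding S and M as binary integers and prepending C), which the text does not spell out in detail.

```python
def combine(c, s, m, order="CSM"):
    """Concatenate the central value (C), symbol (S) and mean (M)
    bit strings in the requested order."""
    parts = {"C": c, "S": s, "M": m}
    return "".join(parts[k] for k in order)

def combine_add(c, s, m):
    """Hypothetical bitwise-addition variant: S and M are added as
    binary integers and C is prepended in the high-order bits."""
    return c + bin(int(s, 2) + int(m, 2))[2:]

# the fig. 3 example: C = 10 (2 bits), S = 10110100 (8 bits), M = 01110011 (8 bits)
C, S, M = "10", "10110100", "01110011"
```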
Step 105: and carrying out feature point matching on the feature descriptors of the two or more images, and selecting the feature point with the optimal matching result as a final image matching result according to the comparison result.
After the feature descriptors of each image are obtained according to the steps 102 to 104, feature point matching can be performed on the feature descriptors of two or more images, and then the feature point with the optimal matching result is selected as the final image matching result according to the comparison result.
In one possible embodiment, a method for performing feature point matching on feature descriptors of two or more images includes:
performing feature point matching by means of L1-norm or L2-norm matching; or performing feature point matching by calculating the Hamming distance between the feature descriptor of the first image and the feature descriptor of the second image; or performing feature point matching by scanning the feature descriptors of the first and second images two adjacent bits at a time from right to left, marking a pair as a single 1 if the two bits are not both 0, and counting the number of resulting 1 bits.
In actual matching, the matching method can be selected according to different actual needs. Calculating the Hamming distance of two feature descriptors means counting the number of bit positions at which the two descriptors differ, i.e. the number of 1 bits in their exclusive OR.
As to the matching comparison result, for example: the Hamming distance between each feature descriptor of the 1st image and every feature descriptor of the 2nd image is computed, the pair with the minimum Hamming distance is taken as the optimal descriptor pair for that point, and the subset of pairs with the smallest Hamming distances among all descriptor pairs is selected as the optimal feature point matching pairs for image stitching.
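The matching step can be sketched as follows, assuming the descriptors are equal-length bit strings. The Hamming distance and best-pair selection follow the text directly; `adjacent_pair_distance` is our reading of the third metric (scan the XOR of the two descriptors two bits at a time from the right and count a pair as a single 1 unless both bits are 0).

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(x != y for x, y in zip(a, b))

def adjacent_pair_distance(a, b):
    """Assumed reading of the adjacent-bits metric: XOR the descriptors,
    scan adjacent bit pairs from right to left, and count a pair as one
    '1' unless both of its bits are 0."""
    x = [int(p) ^ int(q) for p, q in zip(a, b)]
    pairs = [x[max(0, i - 2):i] for i in range(len(x), 0, -2)]
    return sum(1 for p in pairs if any(p))

def best_matches(desc1, desc2):
    """For each descriptor of image 1, pick the image-2 descriptor with the
    smallest Hamming distance, then sort pairs by distance so the closest
    pairs can be kept as the optimal matches for stitching."""
    pairs = []
    for i, d1 in enumerate(desc1):
        j, d = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc2)),
                   key=lambda t: t[1])
        pairs.append((i, j, d))
    return sorted(pairs, key=lambda t: t[2])
```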
Based on the image feature matching method based on multiple feature descriptors, the embodiment of the invention further provides an image feature matching device based on multiple feature descriptors, and referring to the device block diagram shown in fig. 4, the image feature matching device includes:
a detection module 410, configured to detect a feature point in each image based on a feature point detection algorithm;
the threshold setting module 420 is configured to set a first threshold, a second threshold and a third threshold according to the distribution and the actual requirement of the feature points, where the first threshold is used to set a size of a pixel matrix around the feature points to be intercepted, the second threshold is used to set a sliding window radius, and the third threshold is used to set a bit width of the feature descriptor;
a scanning module 430, configured to scan and calculate the feature points by using the first threshold and the second threshold to obtain a symbol descriptor, a mean descriptor, and a central value descriptor;
a feature descriptor module 440, configured to obtain a feature descriptor based on the symbol descriptor, the mean descriptor, and the central value descriptor, in combination with the third threshold value;
the matching selection module 450 is configured to perform feature point matching on feature descriptors of two or more images, and select a feature point with an optimal matching result as a final image matching result according to the comparison result.
Optionally, the scanning module 430 is specifically configured to:
and scanning and calculating the pixel matrix intercepted around the feature points by using a sliding window with the sliding window radius to obtain the symbol descriptor, the mean descriptor and the central value descriptor.
Optionally, the threshold values of the first threshold value, the second threshold value and the third threshold value in the threshold setting module 420 are obtained by calculating or performing network self-training through the distribution and the actual requirement of the feature points;
the first threshold is a patch_size threshold;
the second threshold is radius threshold;
the third threshold is a bit_width threshold.
Optionally, the feature descriptor module 440 is specifically configured to:
performing different modes of splicing and combining on the symbol descriptor, the mean descriptor and the central value descriptor to directly generate a feature descriptor with the third threshold value set bit width; or alternatively, the process may be performed,
performing different modes of splicing and combining on the symbol descriptor, the mean descriptor and the central value descriptor to generate a corresponding matrix numerical value distribution histogram, and generating a characteristic descriptor with a third threshold value set bit width according to the matrix numerical value distribution histogram;
wherein, the splicing and combining the symbol descriptor, the mean descriptor and the central value descriptor in different modes comprises:
sequentially splicing according to the sequence of the central value description, the symbol descriptor and the mean descriptor to generate the feature descriptor; or alternatively, the process may be performed,
and adding the symbol descriptor and the mean value description according to bits, and adding the central value descriptor in high order to generate the feature descriptor.
Optionally, the third thresholded bit-wide feature descriptors remain consistent under different rotations, scales, inversions, and affine transformations; the symbol descriptor, the mean descriptor, and the center value descriptor in the scan module 430 are calculated in the following ways:
calculating the absolute value of each peripheral point pixel except the central point pixel in each sliding window, comparing the absolute value of each peripheral point pixel with the absolute value of the central point pixel, setting 1 if the absolute value of each peripheral point pixel is larger than the absolute value of the central point pixel, setting 0 if the absolute value of each peripheral point pixel is smaller than the absolute value of the central point pixel, and sequentially arranging the results to generate the symbol descriptor;
calculating the average value of all pixels in each sliding window, comparing the average value with the average value of the pixels in the sliding window where the characteristic points are located, setting 1 if the average value of all the pixels is larger than the average value of the pixels in the sliding window where the characteristic points are located, setting 0 if the average value of all the pixels is smaller than the average value of the pixels in the sliding window where the characteristic points are located, and sequentially arranging the results to generate the average value descriptor;
and comparing the central point value of each sliding window with the mean of the pixel matrix intercepted around the feature point and with the mean of the whole-image pixel matrix, setting 1 if the central point value is greater than the mean and 0 if it is smaller, and arranging the results in sequence to generate the central value descriptor.
Optionally, the matching selection module 450 is specifically configured to:
performing the characteristic point matching by adopting an L1 norm matching or L2 norm matching mode; or alternatively, the process may be performed,
performing the feature point matching by adopting a mode of calculating the Hamming distance between the feature descriptor of the first image and the feature descriptor of the second image; or alternatively, the process may be performed,
and performing the feature point matching by scanning the feature descriptor of the first image and the feature descriptor of the second image two adjacent bits at a time from right to left, marking a pair as a single 1 if the two bits are not both 0, and counting the number of resulting 1 bits.
In summary, according to the image feature matching method based on multiple feature descriptors provided by the invention, feature points in each image are detected based on a feature point detection algorithm, and three thresholds are set according to the distribution of the feature points and the actual requirements: the size of the pixel matrix to be intercepted around the feature points, the sliding window radius, and the bit width of the feature descriptor.
The feature points are then scanned and calculated using the first two thresholds to obtain the symbol descriptor, the mean descriptor and the central value descriptor; the feature descriptor is obtained from these three descriptors in combination with the third threshold. Finally, feature point matching is performed on the feature descriptors of two or more images, and the feature points with the optimal matching results are selected as the final image matching result according to the comparison results.
The method for constructing multiple feature descriptors uses different arrangements and combinations of the symbol descriptor, the mean descriptor and the central value descriptor as the multiple feature descriptor, fully considering the direction information, the numerical information and the global information of the feature points, so that image matching based on the feature descriptors is more accurate and effective. Because the global information of the image is not missed, feature points with the same local gradient but different values are not mismatched, and image matching is more accurate. Meanwhile, the requirements on hardware are low and large-scale feature extraction and matching perform well, which meets the actual needs of large-scale operation scenes such as cities or industry, and in particular provides good technical support for image matching in mine mining environments, scene modeling and industrial production.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (10)

1. An image feature matching method based on multiple feature descriptors, which is characterized by comprising the following steps:
detecting feature points in each image based on a feature point detection algorithm;
according to the distribution and actual requirements of the feature points, a first threshold, a second threshold and a third threshold are set, wherein the first threshold is used for setting the size of a pixel matrix around the feature points to be intercepted, the second threshold is used for setting the radius of a sliding window, and the third threshold is used for setting the bit width of a feature descriptor;
scanning and calculating the characteristic points by using the first threshold value and the second threshold value to obtain a symbol descriptor, a mean descriptor and a central value descriptor;
based on the symbol descriptor, the mean descriptor and the central value descriptor, combining the third threshold value to obtain a feature descriptor;
and carrying out feature point matching on the feature descriptors of the two or more images, and selecting the feature point with the optimal matching result as a final image matching result according to the comparison result.
2. The image feature matching method according to claim 1, wherein the feature point detection algorithm is used only for detection of feature points in each image, the feature point detection algorithm comprising: FAST, SIFT, SURF and SuperPoint algorithm.
3. The image feature matching method according to claim 1, wherein scanning and calculating the feature points using the first threshold and the second threshold to obtain a symbol descriptor, a mean descriptor, and a center value descriptor, comprises:
and scanning and calculating the pixel matrix intercepted around the feature points by using a sliding window with the sliding window radius to obtain the symbol descriptor, the mean descriptor and the central value descriptor.
4. The image feature matching method of claim 1, wherein the first threshold is a patch_size threshold;
the second threshold is radius threshold;
the third threshold is a bit_width threshold.
5. The image feature matching method according to claim 1, wherein the threshold values of the first threshold value, the second threshold value and the third threshold value are obtained by calculation or network self-training through distribution and actual requirements of the feature points.
6. The image feature matching method of claim 1, wherein deriving feature descriptors based on the symbol descriptors, the mean descriptors, and the center value descriptors, in combination with the third threshold, comprises:
performing different modes of splicing and combining on the symbol descriptor, the mean descriptor and the central value descriptor to directly generate a feature descriptor with the third threshold value set bit width; or alternatively, the process may be performed,
and performing different modes of splicing and combining on the symbol descriptor, the mean descriptor and the central value descriptor to generate a corresponding matrix numerical distribution histogram, and generating a characteristic descriptor with a third threshold value set bit width according to the matrix numerical distribution histogram.
7. The image feature matching method of claim 1, wherein the third thresholded bit-wide feature descriptors remain consistent under different rotations, scales, inversions, and affine transformations; the symbol descriptor, the mean descriptor and the central value descriptor are respectively calculated in the following ways:
calculating the absolute value of each peripheral point pixel except the central point pixel in each sliding window, comparing the absolute value of each peripheral point pixel with the absolute value of the central point pixel, setting 1 if the absolute value of each peripheral point pixel is larger than the absolute value of the central point pixel, setting 0 if the absolute value of each peripheral point pixel is smaller than the absolute value of the central point pixel, and sequentially arranging the results to generate the symbol descriptor;
calculating the average value of all pixels in each sliding window, comparing the average value with the average value of the pixels in the sliding window where the characteristic points are located, setting 1 if the average value of all the pixels is larger than the average value of the pixels in the sliding window where the characteristic points are located, setting 0 if the average value of all the pixels is smaller than the average value of the pixels in the sliding window where the characteristic points are located, and sequentially arranging the results to generate the average value descriptor;
and comparing the central point value of each sliding window with the mean of the pixel matrix intercepted around the feature point and with the mean of the whole-image pixel matrix, setting 1 if the central point value is greater than the mean and 0 if it is smaller, and arranging the results in sequence to generate the central value descriptor.
8. The image feature matching method of claim 6, wherein differently stitching the symbol descriptor, the mean descriptor, and the center value descriptor comprises:
sequentially splicing according to the sequence of the central value description, the symbol descriptor and the mean descriptor to generate the feature descriptor; or alternatively, the process may be performed,
and adding the symbol descriptor and the mean value description according to bits, and adding the central value descriptor in high order to generate the feature descriptor.
9. The image feature matching method according to claim 1, wherein feature point matching is performed on feature descriptors of two or more images, comprising:
performing the characteristic point matching by adopting an L1 norm matching or L2 norm matching mode; or alternatively, the process may be performed,
performing the feature point matching by adopting a mode of calculating the Hamming distance between the feature descriptor of the first image and the feature descriptor of the second image; or alternatively, the process may be performed,
and performing the feature point matching by scanning the feature descriptor of the first image and the feature descriptor of the second image two adjacent bits at a time from right to left, marking a pair as a single 1 if the two bits are not both 0, and counting the number of resulting 1 bits.
10. An image feature matching device based on multiple feature descriptors, the image feature matching device comprising:
the detection module is used for detecting the characteristic points in each image based on a characteristic point detection algorithm;
the threshold setting module is used for setting a first threshold, a second threshold and a third threshold according to the distribution and actual requirements of the feature points, wherein the first threshold is used for setting the size of a pixel matrix around the feature points to be intercepted, the second threshold is used for setting the radius of a sliding window, and the third threshold is used for setting the bit width of a feature descriptor;
the scanning module is used for scanning and calculating the characteristic points by utilizing the first threshold value and the second threshold value to obtain a symbol descriptor, a mean descriptor and a central value descriptor;
the feature descriptor module is used for obtaining a feature descriptor based on the symbol descriptor, the mean descriptor and the central value descriptor and combining the third threshold value;
and the matching selection module is used for carrying out characteristic point matching on the characteristic descriptors of the two or more images, and selecting the characteristic point with the optimal matching result as a final image matching result according to the comparison result.
CN202310841374.5A 2023-07-10 2023-07-10 Image feature matching method and device based on multiple feature descriptors Active CN116824183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310841374.5A CN116824183B (en) 2023-07-10 2023-07-10 Image feature matching method and device based on multiple feature descriptors

Publications (2)

Publication Number Publication Date
CN116824183A true CN116824183A (en) 2023-09-29
CN116824183B CN116824183B (en) 2024-03-12

Family

ID=88140880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310841374.5A Active CN116824183B (en) 2023-07-10 2023-07-10 Image feature matching method and device based on multiple feature descriptors

Country Status (1)

Country Link
CN (1) CN116824183B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007006526A (en) * 2001-05-25 2007-01-11 Ricoh Co Ltd Image processor, low-linear-density dot region detecting unit, image scanner, image forming apparatus and color copier
US20130223730A1 (en) * 2012-02-28 2013-08-29 Electronics And Telecommunications Research Institute Scalable feature descriptor extraction and matching method and system
CN103279939A (en) * 2013-04-27 2013-09-04 北京工业大学 Image stitching processing system
US20140355889A1 (en) * 2013-05-30 2014-12-04 Seiko Epson Corporation Tree-Model-Based Stereo Matching
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
CN104809731A (en) * 2015-05-05 2015-07-29 北京工业大学 Gradient binaryzation based rotation-invariant and scale-invariant scene matching method
CN105590114A (en) * 2015-12-22 2016-05-18 马洪明 Image characteristic quantity generation method
US20160371537A1 (en) * 2015-03-26 2016-12-22 Beijing Kuangshi Technology Co., Ltd. Method, system, and computer program product for recognizing face
CN107945111A (en) * 2017-11-17 2018-04-20 中国矿业大学 A kind of image split-joint method based on SURF feature extraction combination CS LBP descriptors
US20180182092A1 (en) * 2014-05-09 2018-06-28 Given Imaging Ltd. System and method for sequential image analysis of an in vivo image stream
US20190212903A1 (en) * 2016-06-08 2019-07-11 Huawei Technologies Co., Ltd. Processing Method and Terminal
CN110246168A (en) * 2019-06-19 2019-09-17 中国矿业大学 A kind of feature matching method of mobile crusing robot binocular image splicing
CN111257588A (en) * 2020-01-17 2020-06-09 东北石油大学 ORB and RANSAC-based oil phase flow velocity measurement method
CN111340109A (en) * 2020-02-25 2020-06-26 深圳市景阳科技股份有限公司 Image matching method, device, equipment and storage medium
CN112085117A (en) * 2020-09-16 2020-12-15 北京邮电大学 Robot motion monitoring visual information fusion method based on MTLBP-Li-KAZE-R-RANSAC
CN113095385A (en) * 2021-03-31 2021-07-09 安徽工业大学 Multimode image matching method based on global and local feature description
CN114549634A (en) * 2021-12-27 2022-05-27 杭州环峻科技有限公司 Camera pose estimation method and system based on panoramic image
CN114693522A (en) * 2022-03-14 2022-07-01 江苏大学 Full-focus ultrasonic image splicing method
CN115861640A (en) * 2022-10-24 2023-03-28 盐城工学院 Rapid image matching method based on ORB and SURF characteristics
CN116310373A (en) * 2022-10-24 2023-06-23 盐城工学院 Image matching method based on improved SURF features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
尚常军; 丁瑞: "Gesture feature extraction from depth images based on curvature local binary pattern", Journal of Computer Applications, no. 10 *
张茗茗; 周诠: "Visible watermark removal algorithm based on multiple matching", Computer Engineering and Design, no. 01 *

Also Published As

Publication number Publication date
CN116824183B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN108121991B (en) Deep learning ship target detection method based on edge candidate region extraction
JP5775225B2 (en) Text detection using multi-layer connected components with histograms
CN110032998B (en) Method, system, device and storage medium for detecting characters of natural scene picture
CN102713938B (en) Scale space normalization technique for improved feature detection in uniform and non-uniform illumination changes
CN102667810A (en) Face recognition in digital images
CN109343920B (en) Image processing method and device, equipment and storage medium thereof
GB2431793A (en) Image comparison
CN110675425B (en) Video frame identification method, device, equipment and medium
CN108830283B (en) Image feature point matching method
CN112215925A (en) Self-adaptive follow-up tracking multi-camera video splicing method for coal mining machine
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
KR101753360B1 (en) A feature matching method which is robust to the viewpoint change
CN111626145B (en) Simple and effective incomplete form identification and page-crossing splicing method
US20160048728A1 (en) Method and system for optical character recognition that short circuit processing for non-character containing candidate symbol images
Nam et al. Content-aware image resizing detection using deep neural network
US11256949B2 (en) Guided sparse feature matching via coarsely defined dense matches
Lecca et al. Comprehensive evaluation of image enhancement for unsupervised image description and matching
CN111832497B (en) Text detection post-processing method based on geometric features
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
CN116824183B (en) Image feature matching method and device based on multiple feature descriptors
CN115345895B (en) Image segmentation method and device for visual detection, computer equipment and medium
CN114926508B (en) Visual field boundary determining method, device, equipment and storage medium
CN110766003A (en) Detection method of fragment and link scene characters based on convolutional neural network
CN116403010A (en) Medical image matching method based on FAST algorithm
CN113840135A (en) Color cast detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant