CN115222974A - Feature point matching method and device, storage medium and electronic equipment - Google Patents

Feature point matching method and device, storage medium and electronic equipment

Info

Publication number
CN115222974A
Authority
CN
China
Prior art keywords
image
optical flow
matching
registered
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210827776.5A
Other languages
Chinese (zh)
Inventor
杨露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210827776.5A priority Critical patent/CN115222974A/en
Publication of CN115222974A publication Critical patent/CN115222974A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture

Abstract

The present disclosure relates to the field of image processing technologies, and in particular, to a feature point matching method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: acquiring a forward optical flow and a backward optical flow between an image to be registered and a reference image; determining optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow; selecting target pixel points from the candidate pixel points according to the optical flow errors of the candidate pixel points; and performing repeated texture detection on the target pixel points to obtain a detection result, and determining matching feature point pairs between the image to be registered and the reference image according to the detection result. The matching feature point pairs obtained in this way have high matching precision.

Description

Feature point matching method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a feature point matching method and apparatus, a computer-readable storage medium, and an electronic device.
Background
In computer vision applications, it is often necessary to match feature points of different images.
However, the matching accuracy of the feature point matching method in the related art is poor, so that the matching degree of the obtained matched feature point pair is low.
It is noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure and therefore may include information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a feature point matching method, a feature point matching apparatus, a computer-readable medium, and an electronic device, so as to improve the matching precision of the feature point matching method at least to a certain extent, and thereby improve the matching degree of the obtained matched feature point pairs.
According to a first aspect of the present disclosure, there is provided a feature point matching method including: acquiring a forward optical flow and a backward optical flow between an image to be registered and a reference image; determining optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow; selecting a target pixel point from the candidate pixel points according to the optical flow errors of the candidate pixel points; and performing repeated texture detection on the target pixel point to obtain a detection result, and determining a matching feature point pair between the image to be registered and the reference image according to the detection result.
According to a second aspect of the present disclosure, there is provided a feature point matching apparatus including: an optical flow acquisition module, configured to acquire a forward optical flow and a backward optical flow between an image to be registered and a reference image; an error determination module, configured to determine optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow; a pixel selection module, configured to select a target pixel point from the candidate pixel points according to the optical flow errors of the candidate pixel points; and an image matching module, configured to perform repeated texture detection on the target pixel point to obtain a detection result and determine a matching feature point pair between the image to be registered and the reference image according to the detection result.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, performs the method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising: one or more processors; and memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the above-described method.
The feature point matching method provided by one embodiment of the present disclosure acquires a forward optical flow and a backward optical flow between an image to be registered and a reference image; determines optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow; selects target pixel points from the candidate pixel points according to the optical flow errors of the candidate pixel points; and performs repeated texture detection on the target pixel points to obtain a detection result, and determines matching feature point pairs between the image to be registered and the reference image according to the detection result. Compared with the prior art, on one hand, the optical flow errors are determined from the forward optical flow and the backward optical flow between the image to be registered and the reference image, and the target pixel points are obtained according to these errors, so that inaccurate pixel points are eliminated and the matching precision of the feature points is improved; on the other hand, the matching feature point pairs are determined based on the repeated texture detection result, which reduces the influence of repeated textures on the feature point matching.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 schematically illustrates a flow chart of a feature point matching method in an exemplary embodiment of the disclosure;
FIG. 3 schematically illustrates a schematic of a forward optical flow image in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a schematic view of an inverse optical flow image in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart for determining optical flow errors in an exemplary embodiment of the disclosure;
FIG. 6 schematically illustrates another flow chart for determining optical-flow errors in exemplary embodiments of the disclosure;
FIG. 7 schematically illustrates a flow chart for determining a detection result in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a schematic diagram of a first image block and a second image block in an exemplary embodiment of the disclosure;
FIG. 9 schematically illustrates a flow chart of repeated texture detection in an exemplary embodiment of the present disclosure;
FIG. 10 schematically illustrates a flow chart of another method of feature point matching in an exemplary embodiment of the disclosure;
FIG. 11 schematically illustrates a computational schematic of an epipolar error in an exemplary embodiment of the present disclosure;
FIG. 12 schematically illustrates a flow chart of yet another method of feature point matching in an exemplary embodiment of the present disclosure;
fig. 13 is a schematic diagram illustrating a composition of a feature point matching apparatus in an exemplary embodiment of the present disclosure;
fig. 14 schematically illustrates a composition diagram of another feature point matching apparatus in an exemplary embodiment of the present disclosure;
fig. 15 shows a schematic diagram of an electronic device to which an embodiment of the present disclosure may be applied.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Feature point matching schemes commonly used in the related art include descriptor-based, optical-flow-based, block-matching-based, and deep-learning-based feature point matching schemes. Any of these schemes may produce errors, which leads to mismatched feature points. Therefore, a feature point matching scheme should include both a feature point matching step and a feature point screening step, and both steps directly affect the quality of the final matches.
Common mismatch elimination algorithms include RANSAC screening based on a homography matrix, RANSAC screening based on the fundamental matrix, and feature point screening based on GMS (Grid-based Motion Statistics). Homography-based screening is only suitable for scenes whose image content is planar. Fundamental-matrix-based screening relies on the epipolar constraint; when the image texture repeats along the epipolar direction, mismatched points in that direction are difficult to eliminate. GMS-based screening judges whether a point is correctly matched from the proportion of correctly matched points in its surrounding region, so wrongly matched points are easily retained, or whole regions of points are deleted together. None of these schemes can guarantee that mismatched points in repeated texture regions are effectively removed.
Based on the above disadvantages, the present disclosure proposes a feature point matching method, and fig. 1 shows a schematic diagram of a system architecture capable of implementing the feature point matching method of the present disclosure, where the system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a terminal device such as a smart phone, a tablet computer, a desktop computer, and a notebook computer, and the server 120 generally refers to a background system that provides relevant services such as image processing and feature point matching in this exemplary embodiment, and may be a server or a cluster formed by multiple servers. The terminal 110 and the server 120 may form a connection through a wired or wireless communication link for data interaction.
In one embodiment, the above-described feature point matching method may be performed by the terminal 110. For example, after the user captures an image using the terminal 110 or the user selects an image to be registered and a reference image in an album of the terminal 110, the terminal 110 matches feature points in the image to be registered and the reference image and outputs a matched feature point pair.
In one embodiment, the above-described feature point matching method may be performed by the server 120. For example, after the user captures an image using the terminal 110 or selects an image to be registered and a reference image in an album of the terminal 110, the terminal 110 uploads the images to the server 120, the server 120 performs feature point matching on the image to be registered and the reference image, and the matching feature point pairs are returned to the terminal 110.
As can be seen from the above, the execution subject of the feature point matching method in the present exemplary embodiment may be the terminal 110 or the server 120, which is not limited by the present disclosure.
The feature point matching method in the present exemplary embodiment is described below with reference to fig. 2, where fig. 2 shows an exemplary flow of the feature point matching method, which may include:
step S210, acquiring a forward optical flow and a reverse optical flow between the image to be registered and the reference image;
step S220, determining optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow;
step S230, selecting a target pixel point from the candidate pixel points according to the optical flow errors of the candidate pixel points;
step S240, performing repeated texture detection on the target pixel point to obtain a detection result, and determining a matching feature point pair between the image to be registered and the reference image according to the detection result.
Based on the above method, the optical flow errors are determined from the forward optical flow and the backward optical flow between the image to be registered and the reference image, and the target pixel points are obtained according to the optical flow errors, so that inaccurate pixel points are eliminated and the matching precision of the feature points is improved.
Each step in fig. 2 is explained in detail below.
Referring to fig. 2, in step S210, a forward optical flow and a backward optical flow between an image to be registered and a reference image are acquired.
In an example embodiment of the present disclosure, the forward optical flow is the optical flow from the reference image to the image to be registered, and the backward optical flow is the optical flow from the image to be registered to the reference image.
In this exemplary embodiment, before acquiring the forward optical flow and the backward optical flow between the image to be registered and the reference image, the image to be registered and the reference image may be first preprocessed, where the preprocessing may include, but is not limited to, converting the image to be registered and the reference image into a grayscale image, performing resizing, brightness stretching, and the like on the grayscale image, and the preprocessing may also be customized according to a user requirement, which is not specifically limited in this exemplary embodiment.
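As an illustrative sketch only (not part of the claimed method), the preprocessing described above could look roughly as follows in Python with OpenCV; the resize target and the full-range brightness stretch are assumptions of the example.

```python
import cv2
import numpy as np

def preprocess(image, target_size=None):
    # Convert to a grayscale image if the input still has color channels.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    # Optional resizing; target_size is a (width, height) tuple.
    if target_size is not None:
        gray = cv2.resize(gray, target_size, interpolation=cv2.INTER_AREA)
    # Brightness stretching: map the intensity range onto [0, 255] (illustrative choice).
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return gray
```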
In the present exemplary embodiment, the DIS (Dense Inverse Search) algorithm, i.e., a dense inverse optical flow search algorithm, may be used to obtain the forward optical flow and the backward optical flow between the image to be registered and the reference image.
DIS (Dense Inverse Search) is an optical flow calculation method based on an image pyramid. It comprises fast multi-scale inverse search for the optical flow, densification of the sparse optical flow, and fast variational refinement of the dense optical flow. Compared with pixel-level optical flow estimation, DIS performs inverse search at the level of pixel blocks: the gradient is computed once and reused for multiple inverse searches, which reduces the amount of computation. After the block-wise optical flow is computed, the optical flow of each pixel is obtained by weighting the block optical flows according to a certain rule, which densifies the sparse optical flow. Finally, variational optimization is applied to the dense optical flow map based on gradient information and smoothness, making the dense optical flow more reliable. The DIS algorithm computes dense optical flow information between images, and reliable optical flow can be selected from it according to specific rules for feature matching, yielding higher-quality feature point matching pairs.
In this exemplary embodiment, the computation may specifically include: constructing image pyramids for the image to be registered and the reference image, initializing intermediate quantities, and computing a block-wise integral image from the gradient of the current pyramid layer; solving a sparse optical flow field based on inverse search over image blocks; densifying the sparse optical flow field to obtain the dense optical flow of the image and applying variational refinement to the dense optical flow; and scaling the intermediate densified optical flow map to the size of the next pyramid layer to serve as the initial optical flow for the search on that layer. Finally, the optical flow map of the bottommost layer is scaled to the original image size and multiplied by the corresponding magnification ratio to obtain the optical flow between the image to be registered and the reference image.
The forward optical flow and the backward optical flow can be represented as images, as shown in figs. 3 and 4: fig. 3 shows a forward optical flow image and fig. 4 shows a backward optical flow image, where the optical flow of each pixel of a two-dimensional image is a two-dimensional vector. In the optical flow diagrams, region A indicates that the optical flow in the x direction is negative and the optical flow in the y direction is positive; region B indicates that the optical flows in both the x and y directions are positive; region C indicates that the optical flow in the x direction is positive and the optical flow in the y direction is negative; and region D indicates that the optical flows in both the x and y directions are negative.
It should be noted that, for the manner of acquiring optical flow using DIS (Dense Inverse Search), reference may be made to the related art, and details are not repeated here.
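As a hedged illustration, the forward and backward optical flows can be obtained with OpenCV's DIS implementation roughly as follows; gray_ref and gray_reg are assumed to be the preprocessed reference image and image to be registered, and the preset is an illustrative choice.

```python
import cv2

def compute_bidirectional_flow(gray_ref, gray_reg):
    # Forward flow: reference image -> image to be registered.
    # Backward flow: image to be registered -> reference image.
    dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
    forward_flow = dis.calc(gray_ref, gray_reg, None)    # (H, W, 2) float32
    backward_flow = dis.calc(gray_reg, gray_ref, None)   # (H, W, 2) float32
    return forward_flow, backward_flow
```

Both flows are dense, so the optical flow errors of the candidate pixel points in step S220 can be computed directly from them.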
In step S220, an optical flow error of a candidate pixel point in the image to be registered or the reference image is determined according to the forward optical flow and the backward optical flow.
In an example embodiment of the present disclosure, an optical flow error corresponding to a candidate pixel point in a reference image may be calculated according to a forward optical flow and a backward optical flow, at this time, the candidate pixel point may include all pixel points in the reference image, or a pixel point in each region, or may be customized according to a user requirement, which is not specifically limited in this example embodiment. The optical flow errors may be expressed in the form of optical flow error images, and referring to fig. 5, specifically, determining optical flow errors corresponding to candidate pixel points in the reference image may include steps S510 to S530.
In step S510, a first matching point of a candidate pixel point in the reference image in the image to be registered is determined according to the forward optical flow.
In this exemplary embodiment, the first matching points, in the image to be registered, corresponding one-to-one to all candidate pixel points in the reference image may be determined directly according to the forward optical flow. For example, assume the candidate pixel points in the reference image are A1_n; their first matching points in the image to be registered are then B_n, where A1_n and B_n correspond one to one, i.e., A1_1 corresponds to B_1 and A1_3 corresponds to B_3.
In step S520, a second matching point of the first matching point in the reference image is determined according to the inverse optical flow.
After obtaining the first matching points, the second matching points corresponding to the first matching points in the reference image may be calculated using the backward optical flow. For example, the second matching point A2_n in the reference image corresponding to the first matching point B_n is calculated using the backward optical flow, where A2_n and B_n correspond one to one, i.e., A2_1 corresponds to B_1 and A2_3 corresponds to B_3.
In step S530, the optical flow error is determined according to the candidate pixel point in the reference image and the second matching point corresponding to the candidate pixel point.
After the second matching points are determined, the optical flow errors may be determined according to the candidate pixel points in the reference image and their second matching points. Specifically, the distance between the second matching point A2_n and the candidate pixel point A1_n may be calculated as the optical flow error. The distances between all candidate pixel points and their second matching points may be calculated to obtain a plurality of optical flow errors, and the image composed of these optical flow errors may be taken as an optical flow error image; alternatively, only the distances between some candidate pixel points and their second matching points may be calculated to form the optical flow error image, which is not particularly limited in the present exemplary embodiment.
In another example embodiment of the present disclosure, the optical flow errors corresponding to candidate pixel points in the image to be registered may be calculated according to the forward optical flow and the backward optical flow. In this case, the candidate pixel points may include all pixel points in the image to be registered, or pixel points in each region, or may be customized according to user requirements, which is not specifically limited in this example embodiment. The optical flow errors may be expressed in the form of an optical flow error image. Referring to fig. 6, specifically, determining the optical flow errors corresponding to the candidate pixel points in the image to be registered may include steps S610 to S630.
In step S610, a third matching point of the candidate pixel point in the image to be registered in the reference image is determined according to the inverse optical flow.
In this exemplary embodiment, the third matching points, in the reference image, corresponding one-to-one to all candidate pixel points in the image to be registered may be determined directly according to the backward optical flow. For example, assume the candidate pixel points in the image to be registered are B1_n; their third matching points in the reference image are then A_n, where A_n and B1_n correspond one to one, i.e., A_1 corresponds to B1_1 and A_3 corresponds to B1_3.
In step S620, a fourth matching point of the third matching point in the image to be registered is determined according to the forward optical flow.
After obtaining the third matching points, the fourth matching points corresponding to the third matching points in the image to be registered may be calculated using the forward optical flow. For example, the fourth matching point B2_n in the image to be registered corresponding to the third matching point A_n is calculated using the forward optical flow, where B2_n and A_n correspond one to one, i.e., B2_1 corresponds to A_1 and B2_3 corresponds to A_3.
In step S630, the optical flow error is determined according to the candidate pixel point in the image to be registered and the fourth matching point corresponding to the candidate pixel point.
After the fourth matching points are determined, the optical flow errors may be determined according to the candidate pixel points in the image to be registered and their fourth matching points. Specifically, the distance between the fourth matching point B2_n and the candidate pixel point B1_n may be calculated as the optical flow error. The distances between all candidate pixel points and their fourth matching points may be calculated to obtain a plurality of optical flow errors, and the image composed of these optical flow errors may be taken as an optical flow error image; alternatively, only the distances between some candidate pixel points and their fourth matching points may be calculated to form the optical flow error image, which is not particularly limited in the present exemplary embodiment.
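A minimal vectorized sketch of this forward-backward consistency check for candidate pixel points in the reference image is given below, assuming forward_flow and backward_flow are dense (H, W, 2) flow fields such as those produced by DIS; bilinear sampling of the backward flow at the first matching points is an implementation choice of the example.

```python
import cv2
import numpy as np

def forward_backward_error(forward_flow, backward_flow):
    h, w = forward_flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # First matching points B = A1 + forward_flow(A1) in the image to be registered.
    map_x = xs + forward_flow[..., 0]
    map_y = ys + forward_flow[..., 1]
    # Backward flow sampled (bilinearly) at the first matching points B.
    back_at_b = cv2.remap(backward_flow, map_x, map_y, cv2.INTER_LINEAR)
    # Second matching points A2 = B + backward_flow(B), so A2 - A1 = forward_flow(A1) + backward_flow(B).
    diff = forward_flow + back_at_b
    # Optical flow error image: Euclidean distance between A2 and A1 per candidate pixel.
    return np.linalg.norm(diff, axis=2)
```

Swapping the two arguments gives the corresponding error image for candidate pixel points in the image to be registered (steps S610 to S630).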
In step S230, a target pixel point is selected from the candidate pixel points according to the optical flow errors of the candidate pixel points.
In this exemplary embodiment, after determining the optical flow error, a target pixel point may be selected from the candidate pixel points according to the optical flow error, specifically, a preset optical flow threshold may be first set, and then a plurality of target pixel points may be selected from the candidate pixel points according to the preset optical flow threshold.
In this exemplary embodiment, the candidate pixel point with the optical flow error smaller than the preset optical flow threshold may be determined as the target pixel point, specifically, the optical flow error smaller than the preset optical flow threshold may be determined as the target optical flow, and the candidate pixel point corresponding to the target optical flow may be determined as the target pixel point, where a value of the preset optical flow threshold may be self-defined according to a user requirement, and is not specifically limited in this exemplary embodiment.
By computing the optical flow errors from the forward and backward optical flows and determining the target pixel points among the candidate pixel points based on these errors and the preset optical flow threshold, the reliability of the target pixel points is increased, which makes the subsequent feature point matching more accurate.
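For illustration, selecting the target pixel points by thresholding the optical flow error image might look as follows; the threshold value of one pixel is an assumption, since the method leaves it user-defined.

```python
import numpy as np

def select_target_pixels(flow_error, flow_threshold=1.0):
    # Candidate pixels whose forward-backward error is below the preset
    # optical flow threshold are kept as target pixel points.
    ys, xs = np.nonzero(flow_error < flow_threshold)
    return np.stack([xs, ys], axis=1)  # (N, 2) array of (x, y) coordinates
```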
In step S240, repeated texture detection is performed on the target pixel point to obtain a detection result, and a matching feature point pair between the image to be registered and the reference image is determined according to the detection result.
In this example embodiment, after obtaining the target pixel point, repeated texture detection may be performed on the target pixel point, and a detection result may be obtained, where the detection result may include that the target pixel point is a repeated texture pixel point or that the target pixel point is a non-repeated texture pixel point, and when performing repeated texture detection on the target pixel point, steps S710 to S750 may be performed for the target pixel point. Specifically, the method comprises the following steps:
in step S710, a first image block including the target pixel point is obtained;
in an example embodiment of the present disclosure, when obtaining a first image block including the target pixel, an image block with a preset size may be determined as the first image block with the target pixel as a center, where the preset size may be 9 × 9, 7 × 7, 11 × 11, and the like, and may also be customized according to a user requirement, for example, when detecting a large repeated texture, the preset size may be larger, and is not specifically limited in this example embodiment.
In step S720, determining a gradient of the first image block;
in the present exemplary embodiment, after the first image block is determined, the gradient of the first image block, that is, the direction of the edge of the first image block may be determined, and the gradient may be in a horizontal direction, a vertical direction, or a direction inclined at any angle with respect to the horizontal direction, for example, inclined at 45 degrees with respect to the horizontal direction, inclined at 23 degrees with respect to the horizontal direction, and the like, and is not particularly limited in the present exemplary embodiment.
In step S730, determining at least one second image block having the same size as the first image block in the direction of the gradient;
after the gradient of the first image block is determined, at least one second image block having the same size as the first image block may be determined in the direction of the gradient, for example, a second image block having the same size as the first image block may be determined in the positive direction of the gradient, and then a second image block having the same size as the first image block may be determined in the reverse direction of the gradient.
Specifically, as shown in fig. 8, assume that the pixel at coordinate p is a target pixel point. A 9 × 9 first image block centered on p is taken as patch(p); then 9 × 9 second image blocks are taken before and after the pixel p along the gradient direction of p with a step length of 5, denoted patch(p_l) and patch(p_r), respectively. The step size may be half of the first image block size; if the first image block size is odd, the step size may be any integer adjacent to half of that size, or it may be customized according to the usage scenario. The number of second image blocks in one direction may also be more than one; for example, two or more second image blocks may be taken before and after p along the gradient direction with a step size of 1, 2, or 3.
In step S740, calculating a similarity between the first image block and the second image block;
In an example embodiment of the present disclosure, after the first image block and the second image blocks are obtained, the similarity between the first image block and each second image block is calculated. Specifically, the pixel values of patch(p_l), patch(p), and patch(p_r) are arranged from left to right and from top to bottom, and the resulting sequences are denoted L_l, L_0, and L_r, respectively; the similarity between L_0 and L_l is denoted S_0l, and the similarity between L_0 and L_r is denoted S_0r. The similarity may be computed with the following formula:

d(L_1, L_2) = \frac{\sum_k (L_1(k) - \bar{L}_1)(L_2(k) - \bar{L}_2)}{\sqrt{\sum_k (L_1(k) - \bar{L}_1)^2 \cdot \sum_k (L_2(k) - \bar{L}_2)^2}}

where L_1 and L_2 are the pixel value sequences of the two image blocks being compared, i.e., the sequences corresponding to the first image block and the second image block; \bar{L}_1 and \bar{L}_2 are the averages of the pixel values of the first image block and the second image block; and k indexes the pixels of the image blocks, i.e., the ordering number of the pixel points. d(L_1, L_2) is the similarity and lies in the range [-1, 1]; the closer it is to 1, the more similar the two image blocks are. If L_1 and L_2 are equal, i.e., the two image blocks are identical, the numerator equals the denominator and the correlation result equals 1.
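A direct implementation of the similarity d(L_1, L_2) above might look like the following sketch; the treatment of flat (zero-variance) patches is an assumption of the example.

```python
import numpy as np

def patch_similarity(block1, block2):
    # Flatten the two image blocks into pixel-value sequences L1 and L2.
    l1 = np.asarray(block1, dtype=np.float64).ravel()
    l2 = np.asarray(block2, dtype=np.float64).ravel()
    a = l1 - l1.mean()
    b = l2 - l2.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0.0:
        # Assumption: flat patches carry no texture information, treat as dissimilar.
        return 0.0
    return float((a * b).sum() / denom)  # in [-1, 1]; closer to 1 means more similar
```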
It should be noted that, when determining the similarity between the first image block and the second image block, other manners may also be adopted, for example, comparing the grayscale histograms of the image regions or determining whether the two image regions are similar through their gradient histograms. In addition, the similarity of the image regions may also be measured by computing the sum of squared differences (SSD), sum of absolute differences (SAD), mean absolute difference (MAD), mean squared difference (MSD), normalized cross-correlation (NCC), structural similarity (SSIM), and so on between the image regions, which is not particularly limited in the present exemplary embodiment.
In step S750, the detection result is obtained according to the similarity.
In this example embodiment, after the similarities are obtained through calculation, the detection result may be determined from them. Specifically, the multiple similarities corresponding to the same target pixel point are determined, and the detection result of whether the target pixel point is a repeated texture pixel point is obtained by comparing these similarities with a similarity threshold.
In this exemplary embodiment, the maximum of the multiple similarity values corresponding to the same target pixel point may be determined and then compared with the similarity threshold. If the maximum value is greater than the similarity threshold, the target pixel point is determined to be a repeated texture pixel point; if the maximum value is smaller than or equal to the similarity threshold, the target pixel point is determined to be a non-repeated texture pixel point.
After the detection result is obtained, if the target pixel points are pixel points in the reference image, the matching feature points in the image to be registered are determined for those target pixel points in the reference image whose detection result is a non-repeated texture pixel point.
If the target pixel points are pixel points in the image to be registered, the matching feature points in the reference image are determined for those target pixel points in the image to be registered whose detection result is a non-repeated texture pixel point.
These steps remove repeated texture pixel points, which alleviates the matching errors caused by erroneous optical flow arising from the repeated brightness features of repeated texture regions, and thus improves the matching precision of the feature points.
In an example embodiment of the present disclosure, weight information of the target pixel points may also be determined according to the similarity, and the matching feature point pairs between the image to be registered and the reference image are then determined according to the weight information. The higher the similarity, the lower the weight assigned to the corresponding target pixel point.
In this exemplary embodiment, the weight information may be used in feature point matching. For example, when the feature points are used for parameter optimization or model optimization, target pixel points in repeated texture regions are given a lower weight in the optimization, while target pixel points in non-repeated texture regions are given a higher weight, which reduces to a certain extent the influence of mismatches caused by repeated textures and weak textures.
Referring to fig. 9, the repeated texture detection is described in detail. First, step S910 is executed to input a target pixel point; step S920 determines a first image block centered on the target pixel point; step S930 determines the gradient corresponding to the first image block; step S940 obtains second image blocks along the positive and negative directions of the gradient; step S950 calculates the similarities between the two second image blocks and the first image block; step S960 determines the maximum of these similarities; and step S970 judges whether the maximum value is greater than the similarity threshold. If so, step S980 determines the target pixel point to be a repeated texture pixel point; if not, step S990 determines the target pixel point to be a non-repeated texture pixel point.
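Putting the pieces together, a hypothetical per-pixel repeated texture check along the lines of fig. 9 could be sketched as follows; it reuses patch_similarity from the sketch above, and the patch size, step length, similarity threshold and the use of Sobel derivatives for the gradient direction are illustrative assumptions rather than values fixed by this embodiment.

```python
import cv2
import numpy as np

def is_repeated_texture(gray, x, y, patch=9, step=5, sim_threshold=0.9):
    half = patch // 2

    def extract(cx, cy):
        cx, cy = int(round(cx)), int(round(cy))
        if (cx - half < 0 or cy - half < 0 or
                cx + half >= gray.shape[1] or cy + half >= gray.shape[0]):
            return None  # block would fall outside the image
        return gray[cy - half:cy + half + 1, cx - half:cx + half + 1]

    center = extract(x, y)  # first image block patch(p), centered on the target pixel
    if center is None:
        return False

    # Gradient direction at (x, y) from Sobel derivatives (computed image-wide for simplicity).
    gx = float(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)[y, x])
    gy = float(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)[y, x])
    norm = np.hypot(gx, gy)
    if norm < 1e-6:
        return False  # flat neighbourhood: no meaningful gradient direction
    dx, dy = gx / norm, gy / norm

    # Second image blocks patch(p_l) and patch(p_r) taken before and after p along the gradient.
    left = extract(x - dx * step, y - dy * step)
    right = extract(x + dx * step, y + dy * step)
    sims = [patch_similarity(center, blk) for blk in (left, right) if blk is not None]
    # Repeated texture if the most similar neighbouring block exceeds the threshold.
    return bool(sims) and max(sims) > sim_threshold
```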
In an example embodiment of the present disclosure, as illustrated with reference to fig. 10, the feature point matching method of the present disclosure may further include step S250 and step S260.
In step S250, epipolar errors of feature points in the matched feature point pairs are detected.
In this exemplary embodiment, a fundamental matrix between the image to be registered and the reference image may first be determined; the epipolar line, in the image to be registered, corresponding to a pixel point in the reference image is then determined; and finally the distance from the pixel point in the image to be registered corresponding to that pixel point in the reference image to the epipolar line is determined as the epipolar error. Specifically, as shown in fig. 11, the fundamental matrix F between the image to be registered and the reference image is solved; the epipolar line L in the right image corresponding to a feature point p1 in the left image is computed using the F matrix; and finally the distance from the feature point p2 in the right image corresponding to p1 to the epipolar line L is calculated and recorded as the epipolar error of this pair of feature points.
Referring to fig. 11, the epipolar constraint means that, for a point in the left image L, its corresponding matching point in the right image R must lie on a straight line, which is the epipolar line. If P is a point in the world coordinate system, p is its projection in the left image, and p' is its projection in the right image, then P, p, and p' form a plane called the epipolar plane; the straight lines l_1 and l_2 in the left and right images are the epipolar lines, and O_l and O_r are the optical centers of the left and right cameras. Given a point p in the left image, the epipolar line on which its matching point p' in the right image lies can be found. If a matched pair of feature points is a mismatch, this relationship does not hold, and there is a large distance between the epipolar line found through the epipolar constraint and the matching point, i.e., the epipolar error is large. Therefore, points with a large epipolar error can be judged to be mismatched points and rejected directly according to the magnitude of the epipolar error.
In step S260, the matching pairs of feature points are updated according to the epipolar error.
In this exemplary embodiment, after the epipolar errors are obtained, the matched feature point pairs may be updated using them. Specifically, the standard deviation of the epipolar errors corresponding to all the target pixel points may first be determined; an error threshold is then determined from the standard deviation; and finally the matched feature point pairs are updated according to the epipolar errors and the error threshold.
For example, after the standard deviation σ of the epipolar errors is calculated, the points whose epipolar error deviates from the mean by more than 3σ can be removed according to the 3σ rule.
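A rough sketch of this epipolar-error screening with the 3σ rule is shown below, assuming the matched feature points are available as (N, 2) coordinate arrays; using OpenCV's fundamental-matrix estimation and epiline computation is an implementation choice of the example, not a requirement of the method.

```python
import cv2
import numpy as np

def filter_by_epipolar_error(pts_ref, pts_reg):
    pts_ref = np.asarray(pts_ref, dtype=np.float32)
    pts_reg = np.asarray(pts_reg, dtype=np.float32)
    # Fundamental matrix between the reference image and the image to be registered.
    F, _ = cv2.findFundamentalMat(pts_ref, pts_reg, cv2.FM_RANSAC)
    # Epipolar lines in the image to be registered for the reference-image points.
    lines = cv2.computeCorrespondEpilines(pts_ref.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    # Epipolar error: distance from each matched point to its epipolar line.
    err = np.abs(a * pts_reg[:, 0] + b * pts_reg[:, 1] + c) / np.sqrt(a ** 2 + b ** 2)
    # 3-sigma rule: drop pairs whose error deviates from the mean by more than 3 sigma.
    keep = np.abs(err - err.mean()) <= 3.0 * err.std()
    return pts_ref[keep], pts_reg[keep]
```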
In step S250 and step S260, the mismatching points are deleted, so that the accuracy of matching the feature points can be further improved, and the matching degree of the obtained matched feature point pairs can be further improved.
In an exemplary embodiment, the method for deleting mismatched points is not limited to the epipolar error described above; a conventional RANSAC point screening scheme may also be adopted, which may be customized according to user requirements and is not specifically limited in this exemplary embodiment.
Referring to fig. 12, the feature point matching method of the present disclosure is described in detail. Step S1210 may be performed first to preprocess the reference image and the image to be registered; step S1220 then obtains the forward optical flow and the backward optical flow between the reference image and the image to be registered through the DIS module; step S1230 obtains the target pixel points; step S1240 performs repeated texture detection; step S1250 obtains the matching feature point pairs based on the detection result; step S1260 deletes mismatched points; and step S1270 obtains the updated matching feature point pairs.
In summary, in the present exemplary embodiment: first, the optical flow errors are determined from the forward optical flow and the backward optical flow between the image to be registered and the reference image, and the target pixel points are obtained according to the optical flow errors, so that inaccurate pixel points are eliminated and the precision of feature point matching is improved. Second, the matching feature point pairs between the image to be registered and the reference image are determined based on the repeated texture detection result, and target pixel points whose detection result is a repeated texture pixel point are deleted, which reduces the influence of repeated textures on feature point matching, further improves the matching precision, and improves the matching degree of the obtained matching feature point pairs. Finally, after the matched feature point pairs are obtained, the epipolar errors between them are calculated, mismatched pairs are identified based on these errors and deleted, which further improves the matching precision of the feature points and the matching degree of the obtained matching feature point pairs.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
Further, referring to fig. 13, the present exemplary embodiment further provides a feature point matching apparatus 1300, which includes an optical flow obtaining module 1310, an error determining module 1320, a pixel selecting module 1330, and an image matching module 1340. Wherein:
the optical flow acquisition module 1310 may be used to acquire a forward optical flow and a backward optical flow between the image to be registered and the reference image.
The error determination module 1320 may be configured to determine optical flow errors of candidate pixels in the image to be registered or the reference image according to the forward optical flow and the backward optical flow.
In an example embodiment, the forward optical flow is an optical flow from a reference image to an image to be registered, the backward optical flow is an optical flow from the image to be registered to the reference image, and the error determination module 1320 is configured to determine a first matching point of a candidate pixel point in the reference image in the image to be registered according to the forward optical flow; determining a second matching point of the first matching point in the reference image according to the reverse optical flow; and determining the optical flow error according to the candidate pixel points in the reference image and the second matching points corresponding to the candidate pixel points.
In an example embodiment, the error determining module 1320 may be further configured to determine, according to the inverse optical flow, a third matching point of the candidate pixel point in the image to be registered in the reference image; determining a fourth matching point of the third matching point in the image to be registered according to the forward optical flow; and determining the optical flow error according to the candidate pixel points in the image to be registered and the fourth matching points corresponding to the candidate pixel points.
The pixel selection module 1330 can be configured to select a target pixel from the candidate pixels according to the optical flow errors of the candidate pixels.
In an example embodiment, the pixel selection module 1330 may be configured to obtain a preset optical flow threshold; determining a target optical flow in the optical flow errors according to the preset optical flow threshold; and taking the candidate pixel points corresponding to the target optical flow in the reference image as the target pixel points.
The image matching module 1340 may be configured to perform repeated texture detection on the target pixel point to obtain a detection result, and determine a matching feature point pair between the image to be registered and the reference image according to the detection result.
In an example embodiment of the present disclosure, the image matching module 1340 may be configured to perform the following operations on the target pixel point: acquiring a first image block comprising the target pixel point; determining a gradient of the first image block; determining at least one second image block having the same size as the first image block in the direction of the gradient; calculating the similarity between the first image block and the second image block; and obtaining the detection result according to the similarity.
In this example embodiment, when determining at least one second image block having the same size as the first image block in the direction of the gradient, the image matching module 1340 may determine one second image block having the same size as the first image block in the positive direction of the gradient and another second image block having the same size as the first image block in the negative direction of the gradient; and calculating the similarity between the first image block and the second image block.
In this example embodiment, the image matching module 1340 may be configured to obtain a maximum value of the similarities between the first image block and the second image block; and obtaining the detection result according to the maximum value and the similarity threshold value.
In an example embodiment of the present disclosure, the image matching module 1340 may be configured to obtain a detection result of whether the target pixel point is a repeated texture pixel point by comparing the similarity with a similarity threshold.
In an example embodiment of the present disclosure, the image matching module 1340 may be configured to determine weight information of the target pixel point according to the similarity; and determining matching feature point pairs between the image to be registered and the reference image according to the weight information.
In an example embodiment of the present disclosure, the image matching module 1340 may be configured to determine matching feature points, in the image to be registered, of target pixel points whose detection results are non-repetitive texture pixel points in the reference image; or determining the corresponding matching feature point of the target pixel point of which the detection result is the pixel point of the unrepeated texture in the image to be registered in the reference image.
In an example embodiment of the present disclosure, as shown in fig. 14, the feature point matching apparatus may further include an error detection module 1350 and a matching update module 1360, wherein,
the error detection module 1350 may be used to detect epipolar errors of feature points in the matched feature point pairs.
In an example embodiment, the error detection module 1350 may be configured to determine a fundamental matrix between the image to be registered and the reference image; determine the epipolar line, in the image to be registered, corresponding to a pixel point in the reference image; and determine, as the epipolar error, the distance from the pixel point in the image to be registered corresponding to that pixel point in the reference image to the epipolar line.
The match update module 1360 may be used to update the matched pairs of feature points according to the epipolar error.
In an example embodiment of the present disclosure, the match update module 1360 may be configured to determine a standard deviation of epipolar line errors for all target pixel point correspondences; determining an error threshold value according to the standard deviation; and updating the matched feature points according to the epipolar line error and the error threshold.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
Exemplary embodiments of the present disclosure also provide an electronic device for performing the above-described feature point matching method, which may be the above-described terminal 110 or the server 120. In general, the electronic device may include a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the above-described feature point matching method via execution of the executable instructions.
The following takes the mobile terminal 1500 in fig. 15 as an example, and the configuration of the electronic device is exemplarily described. It will be appreciated by those skilled in the art that the configuration in figure 15 can also be applied to fixed type devices, in addition to components specifically intended for mobile purposes.
As shown in fig. 15, the mobile terminal 1500 may specifically include: a processor 1501, a memory 1502, a bus 1503, a mobile communication module 1504, an antenna 1, a wireless communication module 1505, an antenna 2, a display screen 1506, a camera module 1507, an audio module 1508, a power module 1509, and a sensor module 1510.
Processor 1501 may include one or more processing units, such as: the Processor 1501 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc. The feature point matching method in the present exemplary embodiment may be performed by an AP, a GPU, or a DSP, and when the method involves neural network-related processing, may be performed by an NPU.
An encoder may encode (i.e., compress) an image or video, for example, the target image may be encoded into a particular format to reduce the data size for storage or transmission. The decoder may decode (i.e., decompress) the encoded data of the image or video to restore the image or video data, for example, the encoded data of the target image may be read, and the decoder may decode the encoded data to restore the data of the target image, so as to perform the related processing of feature point matching on the data. The mobile terminal 1500 may support one or more encoders and decoders. In this way, the mobile terminal 1500 may process images or video in a variety of encoding formats, such as: image formats such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), BMP (Bitmap), and Video formats such as MPEG (Moving Picture Experts Group) 1, MPEG2, h.263, h.264, and HEVC (High Efficiency Video Coding).
The processor 1501 may be connected to the memory 1502 or other components via the bus 1503.
The memory 1502 may be used to store computer-executable program code, which includes instructions. The processor 1501 executes various functional applications and data processing of the mobile terminal 1500 by executing instructions stored in the memory 1502. The memory 1502 may also store application data, such as files for storing images, videos, and the like.
The communication function of the mobile terminal 1500 may be implemented by the mobile communication module 1504, the antenna 1, the wireless communication module 1505, the antenna 2, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 1504 may provide mobile communication solutions of 2G, 3G, 4G, 5G, etc. applied to the mobile terminal 1500. The wireless communication module 1505 may provide wireless communication solutions such as wireless local area network, Bluetooth, and near field communication applied to the mobile terminal 1500.
The display screen 1506 is used to implement display functions, such as displaying user interfaces, images, videos, and the like. The camera module 1507 is used to perform a photographing function, such as photographing an image, a video, and the like. The audio module 1508 is used for implementing audio functions, such as playing audio, collecting voice, and the like. Power module 1509 is used to implement power management functions such as charging batteries, powering devices, monitoring battery status, etc. The sensor module 1510 may include a depth sensor 15101, a pressure sensor 15102, a gyro sensor 15103, an air pressure sensor 15104, etc. to implement a corresponding sensing function.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure as described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (16)

1. A feature point matching method, comprising:
acquiring a forward optical flow and a backward optical flow between an image to be registered and a reference image;
determining optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow;
selecting a target pixel point from the candidate pixel points according to the optical flow errors of the candidate pixel points;
and performing repeated texture detection on the target pixel point to obtain a detection result, and determining a matching feature point pair between the image to be registered and the reference image according to the detection result.
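As an illustrative aside (not part of the claim language), the first step of claim 1 could be sketched in Python with OpenCV's dense Farneback optical flow; the algorithm choice and its parameters are assumptions, not the disclosed implementation:

```python
import cv2

def bidirectional_flow(reference_gray, to_register_gray):
    """Return (forward, backward) dense optical flows between two grayscale images.

    forward : flow from the reference image to the image to be registered
    backward: flow from the image to be registered to the reference image
    Each flow has shape (H, W, 2) holding a per-pixel (dx, dy) displacement.
    """
    params = dict(pyr_scale=0.5, levels=3, winsize=15,
                  iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    forward = cv2.calcOpticalFlowFarneback(reference_gray, to_register_gray, None, **params)
    backward = cv2.calcOpticalFlowFarneback(to_register_gray, reference_gray, None, **params)
    return forward, backward
```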
2. The method according to claim 1, characterized in that said forward optical flow is the optical flow from the reference image to the image to be registered, and said backward optical flow is the optical flow from the image to be registered to the reference image; the determining optical flow errors of the candidate pixel points in the reference image according to the forward optical flow and the backward optical flow comprises:
determining a first matching point of a candidate pixel point in the reference image in the image to be registered according to the forward optical flow;
determining a second matching point of the first matching point in the reference image according to the backward optical flow;
and determining the optical flow error according to the candidate pixel points in the reference image and the second matching points corresponding to the candidate pixel points.
3. The method according to claim 1, characterized in that said forward optical flow is the optical flow from the reference image to the image to be registered, and said backward optical flow is the optical flow from the image to be registered to the reference image; the determining optical flow errors of the candidate pixel points in the image to be registered according to the forward optical flow and the backward optical flow comprises:
determining a third matching point of a candidate pixel point in the image to be registered in the reference image according to the backward optical flow;
determining a fourth matching point of the third matching point in the image to be registered according to the forward optical flow;
and determining the optical flow error according to the candidate pixel points in the image to be registered and the fourth matching points corresponding to the candidate pixel points.
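For illustration (outside the claim language), the forward-backward consistency error of claims 2 and 3 can be sketched as below; nearest-pixel rounding and border clipping are simplifying assumptions:

```python
import numpy as np

def optical_flow_error(forward, backward):
    """Per-pixel optical flow error for candidate pixels in the reference image.

    forward[y, x] maps reference pixel (x, y) to its first matching point in the
    image to be registered; the backward flow then maps that point back, giving a
    second matching point. The error is the distance between each candidate pixel
    and its second matching point (claim 2); swapping the two flows gives the
    symmetric check of claim 3 for pixels in the image to be registered.
    """
    h, w = forward.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # first matching point, rounded to the pixel grid and clipped to the image
    x1 = np.clip(np.rint(xs + forward[..., 0]).astype(int), 0, w - 1)
    y1 = np.clip(np.rint(ys + forward[..., 1]).astype(int), 0, h - 1)
    # second matching point back in the reference image
    x2 = x1 + backward[y1, x1, 0]
    y2 = y1 + backward[y1, x1, 1]
    return np.hypot(x2 - xs, y2 - ys)
```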
4. The method of claim 1, wherein selecting a target pixel from the candidate pixels according to the optical flow errors of the candidate pixels comprises:
acquiring a preset optical flow threshold;
determining a target optical flow error among the optical flow errors according to the preset optical flow threshold;
and taking the candidate pixel points, in the reference image, corresponding to the target optical flow error as the target pixel points.
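A minimal sketch of the selection in claim 4 follows; the threshold value is an assumed placeholder, not a value taken from the disclosure:

```python
import numpy as np

FLOW_ERROR_THRESHOLD = 1.0  # assumed preset optical flow threshold, in pixels

def select_target_pixels(flow_error, threshold=FLOW_ERROR_THRESHOLD):
    """Return (y, x) coordinates of candidate pixels whose optical flow error
    is below the preset threshold; these are taken as the target pixels."""
    ys, xs = np.nonzero(flow_error < threshold)
    return np.stack([ys, xs], axis=1)
```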
5. The method of claim 1, wherein the performing repeated texture detection on the target pixel point to obtain a detection result comprises:
executing the following operations on the target pixel point:
acquiring a first image block comprising the target pixel point;
determining a gradient of the first image block;
determining at least one second image block having the same size as the first image block in the direction of the gradient;
calculating the similarity between the first image block and the second image block;
and obtaining the detection result according to the similarity.
6. The method of claim 5, wherein said determining at least one second image block having the same size as said first image block along the direction of said gradient comprises:
determining a second image block with the same size as the first image block along the positive direction of the gradient, and determining another second image block with the same size as the first image block along the negative direction of the gradient;
and calculating the similarity between the first image block and each of the second image blocks.
7. The method of claim 6, wherein obtaining the detection result according to the similarity comprises:
acquiring the maximum value of the similarities between the first image block and the second image blocks;
and obtaining the detection result according to the maximum value and the similarity threshold value.
8. The method of claim 5, wherein the obtaining the detection result according to the similarity comprises:
and obtaining a detection result of whether the target pixel point is a repeated texture pixel point or not by comparing the similarity with a similarity threshold value.
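As an illustrative sketch of the repeated texture detection in claims 5 to 8 (not the disclosed implementation): a first block around the target pixel is compared with blocks of the same size taken along the positive and negative directions of the block's gradient, and the maximum similarity is compared with a threshold. The block size, step length, and similarity threshold below are assumptions:

```python
import cv2
import numpy as np

def is_repeated_texture(gray, y, x, half=8, step=16, sim_threshold=0.9):
    """Return True if target pixel (x, y) is detected as a repeated texture pixel."""
    h, w = gray.shape

    def crop(cy, cx):
        # first/second image blocks of identical size, or None near the border
        if half <= cy < h - half and half <= cx < w - half:
            return gray[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(np.float32)
        return None

    first = crop(y, x)
    if first is None:
        return False

    # dominant gradient direction of the first image block
    gx = float(cv2.Sobel(first, cv2.CV_32F, 1, 0).mean())
    gy = float(cv2.Sobel(first, cv2.CV_32F, 0, 1).mean())
    norm = np.hypot(gx, gy)
    if norm < 1e-6:
        return True  # nearly flat block: treated here as unreliable (assumed rule)
    dx, dy = gx / norm, gy / norm

    sims = []
    for sign in (+1, -1):  # second blocks along the positive and negative directions
        second = crop(int(round(y + sign * step * dy)), int(round(x + sign * step * dx)))
        if second is not None:
            ncc = cv2.matchTemplate(second, first, cv2.TM_CCOEFF_NORMED)[0, 0]
            sims.append(float(ncc))

    # detection result: maximum similarity compared with the similarity threshold
    return bool(sims) and max(sims) > sim_threshold
```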
9. The method according to claim 5, wherein the determining matching feature point pairs between the image to be registered and the reference image according to the detection result comprises:
determining the weight information of the target pixel point according to the similarity;
and determining matching feature point pairs between the image to be registered and the reference image according to the weight information.
10. The method of claim 1, wherein the detection result comprises that the target pixel is a repeated texture pixel or a non-repeated texture pixel; the determining the matching feature point pair between the image to be registered and the reference image according to the detection result comprises:
determining, in the image to be registered, the matching feature points corresponding to target pixel points in the reference image whose detection results are non-repetitive texture pixel points; or
determining, in the reference image, the matching feature points corresponding to target pixel points in the image to be registered whose detection results are non-repetitive texture pixel points.
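For illustration (not part of the claim language), claims 9 and 10 can be read as two ways of turning the detection result into matching feature point pairs: weighting target pixels by their texture detection result, or simply discarding those flagged as repeated texture; the weighting rule below is an assumed example:

```python
def build_matches(targets, forward, repeated_mask, keep_repeated=False):
    """Pair target pixels in the reference image with their optical flow matches.

    targets       : (N, 2) array of (y, x) target pixel coordinates
    forward       : dense forward optical flow, shape (H, W, 2)
    repeated_mask : boolean map, True where a pixel was detected as repeated texture
    Returns a list of ((x_ref, y_ref), (x_match, y_match), weight) tuples.
    """
    matches = []
    for y, x in targets:
        repeated = bool(repeated_mask[y, x])
        if repeated and not keep_repeated:
            continue  # claim 10: keep only non-repetitive texture pixels
        weight = 0.5 if repeated else 1.0  # assumed weight derived from the detection
        mx, my = x + forward[y, x, 0], y + forward[y, x, 1]
        matches.append(((float(x), float(y)), (float(mx), float(my)), weight))
    return matches
```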
11. The method of claim 1, wherein after obtaining the matched pairs of feature points, the method further comprises:
detecting epipolar errors of feature points in the matched feature point pairs;
and updating the matched feature point pairs according to the epipolar errors.
12. The method of claim 11, wherein the detecting epipolar errors of feature points in the matched feature point pairs comprises:
determining a fundamental matrix between the image to be registered and the reference image;
determining, in the image to be registered, the epipolar line corresponding to a pixel point in the reference image;
and determining, as the epipolar error, the distance from the pixel point in the image to be registered that corresponds to the pixel point in the reference image to the epipolar line.
13. The method according to claim 11, wherein the updating the matched feature point pairs according to the epipolar error comprises:
determining the standard deviation of the epipolar errors corresponding to all target pixel points;
determining an error threshold according to the standard deviation;
and updating the matched feature point pairs according to the epipolar errors and the error threshold.
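An illustrative sketch of the epipolar check in claims 11 to 13 follows; the RANSAC estimator and the multiple of the standard deviation used as the error threshold are assumptions of this sketch:

```python
import cv2
import numpy as np

def refine_by_epipolar_error(ref_pts, reg_pts, k=3.0):
    """Update matched feature point pairs using the epipolar error.

    ref_pts, reg_pts : (N, 2) float32 arrays of matching points in the reference
    image and in the image to be registered. A fundamental matrix is estimated,
    the epipolar line of each reference point is computed in the image to be
    registered, and the point-to-line distance is the epipolar error. Pairs whose
    error exceeds k times the standard deviation of all errors are removed.
    """
    F, _ = cv2.findFundamentalMat(ref_pts, reg_pts, cv2.FM_RANSAC)
    if F is None:
        return ref_pts, reg_pts
    lines = cv2.computeCorrespondEpilines(ref_pts.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    err = np.abs(a * reg_pts[:, 0] + b * reg_pts[:, 1] + c) / np.hypot(a, b)
    keep = err <= k * err.std()
    return ref_pts[keep], reg_pts[keep]
```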
14. A feature point matching apparatus, characterized by comprising:
the optical flow acquisition module is used for acquiring a forward optical flow and a backward optical flow between the image to be registered and the reference image;
the error determination module is used for determining optical flow errors of candidate pixel points in the image to be registered or the reference image according to the forward optical flow and the backward optical flow;
the pixel selection module is used for selecting a target pixel point from the candidate pixel points according to the optical flow errors of the candidate pixel points;
and the image matching module is used for carrying out repeated texture detection on the target pixel points to obtain a detection result and determining matching characteristic point pairs between the image to be registered and the reference image according to the detection result.
15. A computer-readable storage medium on which a computer program is stored, the program, when being executed by a processor, implementing the feature point matching method according to any one of claims 1 to 11.
16. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the feature point matching method according to any one of claims 1 to 11.
CN202210827776.5A 2022-07-14 2022-07-14 Feature point matching method and device, storage medium and electronic equipment Pending CN115222974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210827776.5A CN115222974A (en) 2022-07-14 2022-07-14 Feature point matching method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210827776.5A CN115222974A (en) 2022-07-14 2022-07-14 Feature point matching method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115222974A true CN115222974A (en) 2022-10-21

Family

ID=83611155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210827776.5A Pending CN115222974A (en) 2022-07-14 2022-07-14 Feature point matching method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115222974A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113661497A (en) * 2020-04-09 2021-11-16 商汤国际私人有限公司 Matching method, matching device, electronic equipment and computer-readable storage medium
CN117470248A (en) * 2023-12-27 2024-01-30 四川三江数智科技有限公司 Indoor positioning method for mobile robot
CN117470248B (en) * 2023-12-27 2024-04-02 四川三江数智科技有限公司 Indoor positioning method for mobile robot

Similar Documents

Publication Publication Date Title
CN111598776B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN108038422B (en) Camera device, face recognition method and computer-readable storage medium
CN115222974A (en) Feature point matching method and device, storage medium and electronic equipment
US8718324B2 (en) Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation
CN112270710B (en) Pose determining method, pose determining device, storage medium and electronic equipment
CN111666960B (en) Image recognition method, device, electronic equipment and readable storage medium
CN111429517A (en) Relocation method, relocation device, storage medium and electronic device
CN111694978B (en) Image similarity detection method and device, storage medium and electronic equipment
CN112270755B (en) Three-dimensional scene construction method and device, storage medium and electronic equipment
WO2022206255A1 (en) Visual positioning method, visual positioning apparatus, storage medium and electronic device
CN111612696B (en) Image stitching method, device, medium and electronic equipment
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
CN112862877A (en) Method and apparatus for training image processing network and image processing
CN112288816A (en) Pose optimization method, pose optimization device, storage medium and electronic equipment
CN114494942A (en) Video classification method and device, storage medium and electronic equipment
CN113409203A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN116524186A (en) Image processing method and device, electronic equipment and storage medium
CN114399648A (en) Behavior recognition method and apparatus, storage medium, and electronic device
CN114973293A (en) Similarity judgment method, key frame extraction method, device, medium and equipment
CN115278189A (en) Image tone mapping method and apparatus, computer readable medium and electronic device
CN114419189A (en) Map construction method and device, electronic equipment and storage medium
CN114418845A (en) Image resolution improving method and device, storage medium and electronic equipment
CN111243046B (en) Image quality detection method, device, electronic equipment and storage medium
CN114139703A (en) Knowledge distillation method and device, storage medium and electronic equipment
CN113658073A (en) Image denoising processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination