CN115272726A - Feature matching method and device, electronic equipment and storage medium - Google Patents

Feature matching method and device, electronic equipment and storage medium

Info

Publication number
CN115272726A
Authority
CN
China
Prior art keywords
feature
feature point
matching
points
candidate
Prior art date
Legal status
Pending
Application number
CN202210822973.8A
Other languages
Chinese (zh)
Inventor
Wang Jianguo (王建国)
Liu Xiang (刘祥)
Current Assignee
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd
Priority to CN202210822973.8A
Publication of CN115272726A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a feature matching method, which comprises the following steps: acquiring a plurality of first feature points in a first image and a plurality of second feature points in a second image; selecting a reference feature point from the first feature points; matching the feature vector of the reference feature point with the feature vectors of the second feature points to determine a matching feature point; determining the difference between the position coordinates of the reference feature point and the position coordinates of the corresponding matching feature point to obtain an offset; determining a matching region in the second image for each candidate feature point according to the offset and the position coordinates of the candidate feature point, where the candidate feature points are the first feature points other than the reference feature point; and matching the feature vectors of the candidate feature points with the feature vectors of the second feature points in the matching regions to determine target feature points. In this way, the second feature points corresponding to the other first feature points can be searched along the same direction, which reduces the time complexity of the feature matching method and is little affected by external factors.

Description

Feature matching method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a feature matching method and apparatus, an electronic device, and a storage medium.
Background
During driving, the mileage of a vehicle often needs to be calculated. For example, a visual odometer can pair the visual feature points in two image frames one by one, so that a vehicle pose transformation matrix is calculated from the offsets of the feature points between the two frames.
In the prior art, there are two main methods for feature point matching. The first is the optical flow method, an inter-frame motion description method based on the gray-scale invariance assumption; however, this assumption places many restrictions on application scenarios and is difficult to satisfy in practice, and changes in external factors such as ambient illumination and camera exposure reduce the matching accuracy. The second is the brute-force matching algorithm, which tries a large number of possible matches to select the best one; this method has high time complexity and consumes considerable computing resources.
Therefore, prior-art algorithms for feature matching between two image frames struggle to achieve low time complexity while maintaining high matching accuracy, which limits the application scenarios of the visual odometer and makes it difficult to meet user requirements.
Disclosure of Invention
To solve the above technical problems, the present application provides a feature matching method, an apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that algorithms for feature matching between two image frames can hardly achieve low time complexity while maintaining high matching accuracy, which limits the application scenarios of the visual odometer and makes it difficult to meet user requirements. The technical scheme of the disclosure is as follows.
In a first aspect, the present application illustrates a method of feature matching, the method comprising:
acquiring a plurality of first feature points in a first image and a plurality of second feature points in a second image, wherein the first feature points and the second feature points each have a feature vector and position coordinates;
selecting a reference feature point from the first feature points;
matching the feature vector of the reference feature point with the feature vectors of the second feature points, and determining a matching feature point that matches the reference feature point;
determining the difference between the position coordinates of the reference feature point and the position coordinates of the corresponding matching feature point to obtain an offset;
determining a matching region of each candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point, wherein the candidate feature points are the first feature points other than the reference feature point;
and matching the feature vectors of the candidate feature points with the feature vectors of the second feature points included in the matching regions, and determining target feature points that match the candidate feature points, so as to realize feature matching of the first image and the second image.
Optionally, the selecting a reference feature point from the first feature points includes:
dividing the first image into a plurality of window areas with preset sizes;
and randomly selecting a preset number of first feature points from each window area as reference feature points.
Optionally, the determining the difference between the position coordinates of the reference feature point and the position coordinates of the corresponding matching feature point to obtain an offset includes:
for each window area, determining the difference between the position coordinates of a target reference feature point and the position coordinates of the corresponding matching feature point to obtain a window offset of the window area, wherein the target reference feature point is a reference feature point included in the window area;
and the determining a matching region of the candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point includes:
for each window area, determining the matching regions in the second image of the candidate feature points in the window area according to the window offset and the position coordinates of those candidate feature points.
Optionally, the determining a matching region of the candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point includes:
determining the sum of the offset and the position coordinates of the candidate feature point as a center coordinate;
and determining the matching region of the candidate feature point in the second image according to the center coordinate and a preset step length, wherein the center of the matching region is the center coordinate and the side length is the preset step length.
According to a second aspect of the embodiments of the present disclosure, there is provided a feature matching apparatus including:
an acquisition unit configured to perform acquisition of a plurality of first feature points in a first image and a plurality of second feature points in a second image, the first feature points and the second feature points each having a feature vector and a position coordinate;
a selection unit configured to perform selection of a reference feature point from the first feature points;
a first matching unit configured to perform matching of the feature vector of the reference feature point and the feature vector of the second feature point, and determine a matching feature point matching the reference feature point;
an offset determination unit configured to perform determining a difference between the position coordinates of the reference feature point and the corresponding position coordinates of the matching feature point, resulting in an offset amount;
a region determining unit configured to determine a matching region of each candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point, wherein the candidate feature points are the first feature points other than the reference feature point;
a second matching unit configured to match the feature vectors of the candidate feature points with the feature vectors of the second feature points included in the matching regions, and determine target feature points that match the candidate feature points, so as to realize feature matching of the first image and the second image.
Optionally, the selecting unit is configured to perform:
dividing the first image into a plurality of window areas with preset sizes;
and randomly selecting a preset number of first feature points from each window area as reference feature points.
Optionally, the offset determining unit is configured to perform:
for each window area, determining the difference between the position coordinates of a target reference feature point and the position coordinates of the corresponding matching feature point to obtain the window offset of the window area, wherein the target reference feature point is a reference feature point included in the window area;
the region determination unit configured to perform:
and for each window area, determining a matching area of the candidate feature points in the window area in the second image according to the window offset and the position coordinates of the candidate feature points in the window area.
Optionally, the second matching unit is specifically configured to perform:
determining the sum of the offset and the position coordinates of the candidate feature point as a center coordinate;
and determining the matching region of the candidate feature point in the second image according to the center coordinate and a preset step length, wherein the center of the matching region is the center coordinate and the side length is the preset step length.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the feature matching method according to any one of the above when executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the feature matching method of any one of the above.
Compared with the prior art, the method has the following advantages:
a plurality of first feature points in a first image and a plurality of second feature points in a second image are acquired, the first feature points and the second feature points each having a feature vector and position coordinates; a reference feature point is selected from the first feature points; the feature vector of the reference feature point is matched with the feature vectors of the second feature points to determine a matching feature point that matches the reference feature point; the difference between the position coordinates of the reference feature point and the position coordinates of the corresponding matching feature point is determined to obtain an offset; a matching region of each candidate feature point in the second image is determined according to the offset and the position coordinates of the candidate feature point, the candidate feature points being the first feature points other than the reference feature point; and the feature vectors of the candidate feature points are matched with the feature vectors of the second feature points in the matching regions to determine target feature points that match the candidate feature points, thereby realizing feature matching of the first image and the second image.
In this way, based on the motion similarity of the feature points within the same image, the first feature points can be considered to undergo similar linear motion relative to their corresponding feature points in the second image. Then, according to the offset between the reference feature point and its matching feature point, the second feature points corresponding to the other first feature points can be searched along the same direction, without traversing every possible match between the first feature points and the second feature points, which greatly reduces the time complexity of the feature matching method.
Drawings
FIG. 1 is a flow chart of the steps of a feature matching method of the present application;
FIG. 2 is a process diagram of a feature matching method of the present application;
FIG. 3 is a block diagram of a feature matching apparatus of the present application;
FIG. 4 is a schematic diagram of an electronic device of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Referring to fig. 1, a flowchart illustrating steps of a feature matching method of the present application is shown, which may specifically include the following steps:
in step S11, a plurality of first feature points in the first image and a plurality of second feature points in the second image are obtained, where the first feature points and the second feature points both have feature vectors and position coordinates.
In some scenarios, feature matching needs to be performed on image frames in order to determine the change in object pose between two frames. For example, during driving, the visual feature points in two image frames can be paired one by one, so that a vehicle pose transformation matrix is calculated from the offsets of the feature points between the two frames.
After the first image and the second image are acquired, the first feature points and the second feature points can be detected by performing feature point identification on the two images. The first image and the second image may be adjacent image frames, or two similar image frames separated by a preset number of frames; this is not specifically limited. A feature point is a pixel that can represent an image or an object in an identical, or at least very similar, invariant form in other similar images containing the same scene or object; that is, a feature point is a representative point in the image, and objects in the image can be analyzed based on the feature points. Specifically, the feature points may be identified using a Haar corner detection algorithm, a Scale-Invariant Feature Transform (SIFT) feature point detection algorithm, a keypoint localization algorithm, or the like, which is not limited in this disclosure.
In the present disclosure, for convenience of description, the image frames to be feature-matched are referred to as the first image and the second image. The first image and the second image generally have the same size and contain the same objects, and each has a plurality of feature points, namely the first feature points and the second feature points. Each feature point has a corresponding feature vector and position coordinates: the feature vector describes the feature information of the point and can be computed with different operators, and the position coordinates indicate the position of the feature point in the image to which it belongs.
The first image may be denoted I1 and the second image I2. The first feature points may be represented as a set P1 = {p1^1, p1^2, ..., p1^N} and the second feature points as a set P2 = {p2^1, p2^2, ..., p2^M}, where N and M respectively denote the numbers of first and second feature points and may be the same or different.
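For illustration, the following is a minimal sketch of this step in Python, assuming OpenCV's ORB detector as the feature extractor (the disclosure equally permits Haar corner detection, SIFT, and other algorithms) and hypothetical image file names:

    # Sketch of step S11: obtain feature points that each carry position
    # coordinates and a feature vector. ORB is an assumption; any detector
    # that yields coordinates plus descriptors fits the method above.
    import cv2
    import numpy as np

    def extract_features(image_path):
        """Return position coordinates (N, 2) and feature vectors (N, D)."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=1000)
        keypoints, descriptors = orb.detectAndCompute(img, None)
        coords = np.float32([kp.pt for kp in keypoints])
        return coords, descriptors

    coords1, desc1 = extract_features("frame_t.png")   # first image I1
    coords2, desc2 = extract_features("frame_t1.png")  # second image I2

Here coords1/coords2 play the role of the position coordinates of the sets P1 and P2, and desc1/desc2 play the role of their feature vectors.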
In step S12, a reference feature point is selected from the first feature points.
In this step, a part of the first feature points may be selected as reference feature points. The selection may be made proportionally from the feature points contained in different regions of the first image; for example, the first image is divided into four regions and one tenth of the feature points in each region are selected as reference feature points, where the four regions may be obtained by even partitioning or by partitioning according to the density of the first feature points so that each region contains a similar number of first feature points. Alternatively, a preset number of first feature points may be selected at random as reference feature points; or the first feature point whose position coordinates are closest to the image center may be selected as the reference feature point; and so on, without specific limitation.
In one implementation, the step of selecting the reference feature points from the first feature points may include: dividing the first image into a plurality of window areas of a preset size; and randomly selecting a preset number of first feature points from each window area as reference feature points. The preset size may be a preset fixed value or may be determined according to the computing resources or the number of first feature points; the preset number may be 1 or may likewise be determined according to the computing resources or the number of first feature points, without specific limitation.
That is, if the preset size of the window area is s and the first image I1 has width W and height H, then I1 can be divided into m x n windows {wi} of size s x s, where m = ⌈W/s⌉ and n = ⌈H/s⌉.
In this way, the first image is divided into a plurality of window areas of the same size. Since pixels within the same window area are close to one another, the motion changes of the first feature points in the same window area with respect to the second image can also be considered similar. Therefore, the change of the reference feature point in the second image is a strong reference for the other first feature points in the same window area, which helps to improve the accuracy of feature matching.
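The division itself is straightforward; the following sketch maps each first feature point to its window, under the ⌈W/s⌉ x ⌈H/s⌉ grid assumed above:

    # Sketch of the window division: assign each first feature point to the
    # (i, j) index of the s-by-s window area that contains it.
    import math

    def assign_to_windows(coords, width, height, s):
        m, n = math.ceil(width / s), math.ceil(height / s)
        windows = {}
        for idx, (x, y) in enumerate(coords):
            key = (min(int(x // s), m - 1), min(int(y // s), n - 1))
            windows.setdefault(key, []).append(idx)
        return windows  # {(i, j): indices of first feature points in w_ij}

A preset number of indices can then be drawn at random from each list to serve as the reference feature points of that window area.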
In step S13, the feature vectors of the reference feature points are matched with the feature vectors of the second feature points, and matching feature points matching the reference feature points are determined.
In this step, the reference feature point is matched with each second feature point in turn by calculating the similarity between their feature vectors, and the second feature point with the highest similarity is taken as the matching feature point that matches the reference feature point. The similarity between feature vectors includes, but is not limited to, any one or more of the Euclidean distance, the Pearson correlation coefficient, and the cosine similarity, without specific limitation.
Specifically, if the reference feature point is denoted p1^a and the matching feature point that matches it is denoted p2^a, then the reference feature point and its matching feature point can be represented as an anchor pair (p1^a, p2^a).
In step S14, the difference between the position coordinates of the reference feature point and the corresponding position coordinates of the matching feature point is determined, and the offset amount is obtained.
As noted above, the position coordinates indicate the position of a feature point in its image and usually comprise an abscissa and an ordinate. In this step, the difference between the abscissa of the reference feature point and the abscissa of the corresponding matching feature point, and the difference between their ordinates, are calculated to obtain the offsets along the two axes.
For example, if the position coordinates of the reference feature point are (x1^a, y1^a) and the position coordinates of its matching feature point are (x2^a, y2^a), the offset can be expressed as (Δx, Δy) = (x2^a - x1^a, y2^a - y1^a).
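Continuing the ORB-based sketch above, steps S13 and S14 can be realized as follows, using the Hamming distance as the descriptor similarity (an assumption tied to binary ORB descriptors; the disclosure equally allows the Euclidean distance, the Pearson correlation coefficient, or the cosine similarity):

    # Sketch of steps S13-S14: match one reference point p1^a against all
    # second feature points by descriptor distance, then derive the offset.
    import cv2
    import numpy as np

    def match_reference(ref_idx, desc1, desc2, coords1, coords2):
        dists = [cv2.norm(desc1[ref_idx], d, cv2.NORM_HAMMING) for d in desc2]
        best = int(np.argmin(dists))               # matching feature point p2^a
        dx, dy = coords2[best] - coords1[ref_idx]  # offset of the anchor pair
        return best, (dx, dy)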
In addition, when there are a plurality of reference feature points, the difference between the position coordinates of each reference feature point and the position coordinates of its corresponding matching feature point may be calculated, and the mean of the calculated differences may be taken as the offset.
In one implementation, when the reference feature points are selected from the first feature points, the first image is divided into a plurality of window areas of a preset size, and a preset number of first feature points are randomly selected from each window area as reference feature points. In that case, an offset is calculated separately for each window area. Specifically, the step of determining the difference between the position coordinates of the reference feature point and the position coordinates of the corresponding matching feature point to obtain the offset may include: for each window area, determining the difference between the position coordinates of a target reference feature point and the position coordinates of the corresponding matching feature point to obtain the window offset of that window area, where the target reference feature point is a reference feature point included in the window area.
That is, the window offsets of different window areas may differ. It can be understood that feature points that are close together undergo similar motion changes in the second image, while feature points that are farther apart may not. Therefore, determining the matching second feature points for first feature points at different positions in the first image based on the window offset of the corresponding window yields more accurate feature matching results than computing a single global offset over the whole first image.
In step S15, a matching region of each candidate feature point in the second image is determined according to the offset and the position coordinates of the candidate feature point, where the candidate feature points are the first feature points other than the reference feature point.
In this step, the offset is added to the position coordinates of the candidate feature point, giving the position the candidate feature point would reach in the second image if it underwent the same offset motion as the reference feature point. Since the motions of the candidate feature point and the reference feature point are similar, this determines the approximate area in which the second feature point matching the candidate feature point is located, that is, the matching region of the candidate feature point in the second image.
In one implementation, the step of determining the matching region of the candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point may include: first, determining the sum of the offset and the position coordinates of the candidate feature point as a center coordinate; and then determining the matching region of the candidate feature point in the second image according to the center coordinate and a preset step length, where the center of the matching region is the center coordinate and the side length is the preset step length.
For example, if the preset step length is 3 and a candidate feature point p1^c has position coordinates (xc, yc), then, continuing the example above with the offset (Δx, Δy), the center coordinate is (xc + Δx, yc + Δy), and a window of size 3 x 3 in the second image I2 centered on (xc + Δx, yc + Δy) serves as the matching region.
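Continuing the sketch, the matching region of step S15 can be realized as a simple range test, with the preset step length as a parameter:

    # Sketch of step S15: a square region of side `step` centered on the
    # candidate coordinate shifted by the offset; returns the indices of
    # the second feature points that fall inside it.
    import numpy as np

    def matching_region_indices(cand_xy, offset, coords2, step=3):
        cx = cand_xy[0] + offset[0]  # center coordinate, x
        cy = cand_xy[1] + offset[1]  # center coordinate, y
        half = step / 2.0
        inside = (np.abs(coords2[:, 0] - cx) <= half) & \
                 (np.abs(coords2[:, 1] - cy) <= half)
        return np.nonzero(inside)[0]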
Alternatively, if there are a plurality of reference feature points and the resulting offsets differ, the mean of the offsets can be calculated and the sum of this mean and the position coordinates of the candidate feature point taken as the center coordinate. Or the maximum and minimum offsets may be determined among the multiple offsets, the coordinates corresponding to the sums of each of these with the position coordinates of the candidate feature point determined, and a corresponding elliptical area determined from the resulting coordinates to serve as the matching region; and so on, without specific limitation.
In this disclosure, if the first image is divided into a plurality of window areas of a preset size, a preset number of first feature points are randomly selected from each window area as reference feature points, and an offset is calculated separately for each window area, then the step of determining the matching region of the candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point may include: for each window area, determining the matching regions in the second image of the candidate feature points in that window area according to the window offset and the position coordinates of those candidate feature points.
That is to say, each window determines its own window offset, and the first feature points in the same window area then determine their matching regions in the second image based on that same window offset. This makes full use of the motion similarity of local regions, reduces the error that motion differences between distant feature points introduce into the feature matching result, and further improves the accuracy of feature matching.
In step S16, the feature vector of the candidate feature point is matched with the feature vector of the second feature point included in the matching region, and the target feature point matched with the candidate feature point is determined, so as to implement feature matching between the first image and the second image.
In this step, the candidate feature point is matched in turn with each second feature point included in the matching region by calculating the similarity between their feature vectors, and the second feature point with the highest similarity is taken as the target feature point that matches the candidate feature point. The similarity between feature vectors includes, but is not limited to, any one or more of the Euclidean distance, the Pearson correlation coefficient, and the cosine similarity, without specific limitation.
In this way, after feature matching is performed on the first image and the second image, a plurality of anchor pairs are obtained, each comprising a first feature point and its corresponding second feature point. The number of anchor pairs may be equal to or smaller than the number of first feature points, depending on the actual situation.
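As a downstream illustration of the visual-odometry use described in the background, the anchor pairs can feed a standard two-view pose estimation. The camera intrinsic matrix K and the function below are assumptions for illustration only and are not part of the method of this disclosure:

    # Sketch: estimate the inter-frame rotation R and unit-scale translation
    # t from the anchor pairs with OpenCV's essential-matrix routines.
    import cv2
    import numpy as np

    def recover_pose(anchor_pairs, coords1, coords2, K):
        pts1 = np.float32([coords1[i] for i, _ in anchor_pairs])
        pts2 = np.float32([coords2[j] for _, j in anchor_pairs])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t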
Fig. 2 is a schematic process diagram of the feature matching method of this embodiment. Specifically, the method comprises the following processes:
First, a first image I1 and a second image I2 may be acquired by a vehicle event recorder or the like, and the first image I1 is divided into m x n windows {wi} of size s, where m = ⌈W/s⌉ and n = ⌈H/s⌉. The first feature points may be represented as a set P1 = {p1^1, ..., p1^N} and the second feature points as a set P2 = {p2^1, ..., p2^M}, where N and M respectively indicate the numbers of first and second feature points.
Then, for each window wi, the following operations are performed: a feature point p1^a located in the window is randomly selected from the set P1 as the reference feature point, and the feature point p2^a in the set P2 that matches it is determined as the matching feature point, yielding the anchor pair (p1^a, p2^a). The window offset (Δx, Δy) = (x2^a - x1^a, y2^a - y1^a) of the window is then calculated, where (x1^a, y1^a) are the position coordinates of p1^a in I1 and (x2^a, y2^a) are the position coordinates of p2^a in I2. Next, the other candidate feature points p1^c in the window are traversed, and the matching region of each in I2 is determined from the window offset (Δx, Δy); for example, this may be the 3 x 3 region of I2 near (xc + Δx, yc + Δy), where (xc, yc) are the position coordinates of p1^c in I1. The second feature point included in the matching region that matches p1^c is found as the corresponding target feature point, thereby obtaining the anchor pairs included in the window.
Then, the above steps are repeated to traverse all the windows and obtain the anchor pairs included in every window, that is, the feature matching result of the first image and the second image.
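Putting the pieces together, the following is an end-to-end sketch of the Fig. 2 process, reusing the helpers sketched in the previous steps (assign_to_windows, match_reference, matching_region_indices) and their assumptions (ORB features, Hamming distance, one reference point per window, a 3 x 3 matching region; the window size s and step length are placeholder values):

    # End-to-end sketch of the windowed feature matching of Fig. 2.
    import random
    import cv2
    import numpy as np

    def match_images(coords1, desc1, coords2, desc2, width, height,
                     s=64, step=3):
        anchor_pairs = []
        for indices in assign_to_windows(coords1, width, height, s).values():
            ref = random.choice(indices)  # reference feature point p1^a
            best, offset = match_reference(ref, desc1, desc2, coords1, coords2)
            anchor_pairs.append((ref, best))
            for cand in indices:          # candidate feature points p1^c
                if cand == ref:
                    continue
                region = matching_region_indices(coords1[cand], offset,
                                                 coords2, step)
                if len(region) == 0:
                    continue              # no second feature point in region
                dists = [cv2.norm(desc1[cand], desc2[j], cv2.NORM_HAMMING)
                         for j in region]
                anchor_pairs.append((cand, int(region[int(np.argmin(dists))])))
        return anchor_pairs

Each candidate is compared only against the few second feature points inside its matching region rather than against all M of them, which is the source of the complexity reduction described above.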
As can be seen from the above, in the technical solution provided by this embodiment of the present disclosure, based on the motion similarity of the feature points within the same image, the first feature points can be considered to undergo similar linear motion relative to their corresponding feature points in the second image. Then, according to the offset between the reference feature point and its matching feature point, the second feature points corresponding to the other first feature points can be searched along the same direction, without traversing every possible match between the first feature points and the second feature points, which greatly reduces the time complexity of the feature matching method. At the same time, the method is little affected by external factors such as ambient illumination and camera exposure, so its application scenarios are wider. In addition, in terms of hardware the method relies only on a monocular camera, whose cost is far lower than that of a binocular camera, an RGB-D camera, or a lidar; and as for the position parameters of the camera, the method depends only on its height relative to the road surface and not on the two horizontal components, so the conditions for implementation are easier to satisfy.
It is noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the present application is not limited by the order of actions described, since some steps may, in accordance with the present application, be performed in other orders or concurrently. Furthermore, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and the actions involved are not necessarily required by the present application.
Referring to fig. 3, a block diagram of a feature matching apparatus according to the present application is shown, and the apparatus may specifically include the following modules:
an acquiring unit 201 configured to perform acquiring a plurality of first feature points in a first image and a plurality of second feature points in a second image, the first feature points and the second feature points each having a feature vector and a position coordinate;
a selecting unit 202 configured to perform selecting a reference feature point from the first feature points;
a first matching unit 203 configured to perform matching of the feature vector of the reference feature point and the feature vector of the second feature point, and determine a matching feature point matching the reference feature point;
an offset determination unit 204 configured to perform determining a difference between the position coordinates of the reference feature point and the corresponding position coordinates of the matching feature point, resulting in an offset;
a region determining unit 205 configured to determine a matching region of each candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point, wherein the candidate feature points are the first feature points other than the reference feature point;
a second matching unit 206 configured to match the feature vectors of the candidate feature points with the feature vectors of the second feature points included in the matching regions, and determine target feature points that match the candidate feature points, so as to realize feature matching of the first image and the second image.
In one implementation, the selecting unit 202 is configured to perform:
dividing the first image into a plurality of window areas with preset sizes;
and randomly selecting a preset number of first feature points from each window area as reference feature points.
In one implementation, the offset determining unit 204 is configured to perform:
for each window area, determining the difference between the position coordinates of a target reference feature point and the position coordinates of the corresponding matching feature point to obtain the window offset of the window area, wherein the target reference feature point is a reference feature point included in the window area;
the region determining unit 205 is configured to perform:
and for each window area, determining a matching area of the candidate feature points in the window area in the second image according to the window offset and the position coordinates of the candidate feature points in the window area.
In an implementation manner, the second matching unit 206 is specifically configured to perform:
determining the sum of the offset and the position coordinates of the candidate feature point as a center coordinate;
and determining the matching region of the candidate feature point in the second image according to the center coordinate and a preset step length, wherein the center of the matching region is the center coordinate and the side length is the preset step length.
As can be seen from the above, in the technical solution provided by this embodiment of the present disclosure, based on the motion similarity of the feature points within the same image, the first feature points can be considered to undergo similar linear motion relative to their corresponding feature points in the second image. Then, according to the offset between the reference feature point and its matching feature point, the second feature points corresponding to the other first feature points can be searched along the same direction, without traversing every possible match between the first feature points and the second feature points, which greatly reduces the time complexity of the feature matching method. At the same time, the method is little affected by external factors such as ambient illumination and camera exposure, so its application scenarios are wider.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
FIG. 4 is a block diagram illustrating an electronic device in accordance with an example embodiment.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, for example a memory comprising instructions, is also provided; the instructions are executable by a processor of an electronic device to perform the above method. Alternatively, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, which, when run on a computer, causes the computer to implement the above-described method of feature matching.
As can be seen from the above, in the technical solution provided by this embodiment of the present disclosure, based on the motion similarity of the feature points within the same image, the first feature points can be considered to undergo similar linear motion relative to their corresponding feature points in the second image. Then, according to the offset between the reference feature point and its matching feature point, the second feature points corresponding to the other first feature points can be searched along the same direction, without traversing every possible match between the first feature points and the second feature points, which greatly reduces the time complexity of the feature matching method. At the same time, the method is little affected by external factors such as ambient illumination and camera exposure, so its application scenarios are wider.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprise", "include", or any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device comprising a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The feature matching method and apparatus, the electronic device, and the storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and its core ideas. Meanwhile, for those of ordinary skill in the art, the specific implementation and the scope of application may vary according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of feature matching, comprising:
acquiring a plurality of first feature points in a first image and a plurality of second feature points in a second image, wherein the first feature points and the second feature points each have a feature vector and position coordinates;
selecting a reference feature point from the first feature points;
matching the feature vector of the reference feature point with the feature vectors of the second feature points, and determining a matching feature point that matches the reference feature point;
determining the difference between the position coordinates of the reference feature point and the position coordinates of the corresponding matching feature point to obtain an offset;
determining a matching region of each candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point, wherein the candidate feature points are the first feature points other than the reference feature point;
and matching the feature vectors of the candidate feature points with the feature vectors of the second feature points included in the matching regions, and determining target feature points that match the candidate feature points, so as to realize feature matching of the first image and the second image.
2. The method of claim 1, wherein the selecting a reference feature point from the first feature points comprises:
dividing the first image into a plurality of window areas with preset sizes;
and randomly selecting a preset number of first feature points from each window area as reference feature points.
3. The method according to claim 2, wherein determining a difference between the position coordinates of the reference feature point and the corresponding position coordinates of the matching feature point to obtain an offset comprises:
for each window area, determining the difference between the position coordinates of a target reference feature point and the position coordinates of the corresponding matching feature point to obtain the window offset of the window area, wherein the target reference feature point is a reference feature point included in the window area;
determining a matching region of the candidate feature point in the second image according to the offset and the position coordinate of the candidate feature point, including:
and for each window area, determining a matching area of the candidate feature points in the window area in the second image according to the window offset and the position coordinates of the candidate feature points in the window area.
4. The method according to claim 1, wherein the determining a matching region of the candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point comprises:
determining the sum of the offset and the position coordinates of the candidate feature point as a center coordinate;
and determining the matching region of the candidate feature point in the second image according to the center coordinate and a preset step length, wherein the center of the matching region is the center coordinate and the side length is the preset step length.
5. A feature matching device, comprising:
an acquisition unit configured to perform acquisition of a plurality of first feature points in a first image and a plurality of second feature points in a second image, the first feature points and the second feature points each having a feature vector and a position coordinate;
a selection unit configured to perform selection of a reference feature point from the first feature points;
a first matching unit configured to perform matching of the feature vector of the reference feature point and the feature vector of the second feature point, and determine a matching feature point matching the reference feature point;
an offset determination unit configured to perform determining a difference between the position coordinates of the reference feature point and the corresponding position coordinates of the matching feature point, resulting in an offset amount;
a region determining unit configured to determine a matching region of each candidate feature point in the second image according to the offset and the position coordinates of the candidate feature point, wherein the candidate feature points are the first feature points other than the reference feature point;
a second matching unit configured to match the feature vectors of the candidate feature points with the feature vectors of the second feature points included in the matching regions, and determine target feature points that match the candidate feature points, so as to realize feature matching of the first image and the second image.
6. The apparatus according to claim 5, wherein the selection unit is configured to perform:
dividing the first image into a plurality of window areas with preset sizes;
and randomly selecting a preset number of first feature points from each window area as reference feature points.
7. The apparatus of claim 6, wherein the offset determining unit is configured to perform:
for each window area, determining the difference between the position coordinates of a target reference feature point and the position coordinates of the corresponding matching feature point to obtain the window offset of the window area, wherein the target reference feature point is a reference feature point included in the window area;
the region determination unit configured to perform:
and for each window area, determining a matching area of the candidate feature points in the window area in the second image according to the window offset and the position coordinates of the candidate feature points in the window area.
8. The apparatus according to claim 5, wherein the second matching unit is specifically configured to perform:
determining the sum of the offset and the position coordinates of the candidate feature point as a center coordinate;
and determining the matching region of the candidate feature point in the second image according to the center coordinate and a preset step length, wherein the center of the matching region is the center coordinate and the side length is the preset step length.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the feature matching method according to any of claims 1 to 4 are implemented when the processor executes the program.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when being executed by a processor, carries out the steps of the feature matching method according to any one of claims 1 to 4.
CN202210822973.8A 2022-07-11 2022-07-11 Feature matching method and device, electronic equipment and storage medium Pending CN115272726A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210822973.8A CN115272726A (en) 2022-07-11 2022-07-11 Feature matching method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210822973.8A CN115272726A (en) 2022-07-11 2022-07-11 Feature matching method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115272726A true CN115272726A (en) 2022-11-01

Family

ID=83765077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210822973.8A Pending CN115272726A (en) 2022-07-11 2022-07-11 Feature matching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115272726A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036490A (en) * 2023-10-10 2023-11-10 长沙能川信息科技有限公司 Method, device, computer equipment and medium for detecting preset bit offset of camera
CN117036490B (en) * 2023-10-10 2024-01-19 长沙能川信息科技有限公司 Method, device, computer equipment and medium for detecting preset bit offset of camera

Similar Documents

Publication Publication Date Title
US20150279021A1 (en) Video object tracking in traffic monitoring
US8150181B2 (en) Method of filtering a video sequence image from spurious motion effects
US20140072217A1 (en) Template matching with histogram of gradient orientations
CN109493367B (en) Method and equipment for tracking target object
EP3114687B1 (en) Method and device for processing a picture
US20140270362A1 (en) Fast edge-based object relocalization and detection using contextual filtering
US9105101B2 (en) Image tracking device and image tracking method thereof
US20120131010A1 (en) Techniques to detect video copies
Benseddik et al. SIFT and SURF Performance evaluation for mobile robot-monocular visual odometry
Taşdemir et al. Content-based video copy detection based on motion vectors estimated using a lower frame rate
Son et al. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments
CN111932582A (en) Target tracking method and device in video image
US10922582B2 (en) Localization of planar objects in images bearing repetitive patterns
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
CN115272726A (en) Feature matching method and device, electronic equipment and storage medium
US11256949B2 (en) Guided sparse feature matching via coarsely defined dense matches
CN112926695B (en) Image recognition method and system based on template matching
Abdelali et al. Fast and robust object tracking via accept–reject color histogram-based method
US9648211B2 (en) Automatic video synchronization via analysis in the spatiotemporal domain
CN116030280A (en) Template matching method, device, storage medium and equipment
CN115249024A (en) Bar code identification method and device, storage medium and computer equipment
Hu et al. Digital video stabilization based on multilayer gray projection
CN108345893B (en) Straight line detection method and device, computer storage medium and terminal
CN112862676A (en) Image splicing method, device and storage medium
Laaroussi et al. Human tracking using joint color-texture features and foreground-weighted histogram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant after: United New Energy Automobile Co.,Ltd.

Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant before: Hezhong New Energy Vehicle Co.,Ltd.
