CN115294358A - Feature point extraction method and device, computer equipment and readable storage medium - Google Patents


Info

Publication number
CN115294358A
Authority
CN
China
Prior art keywords
point
corner
points
map
preset
Prior art date
Legal status
Pending
Application number
CN202210981085.0A
Other languages
Chinese (zh)
Inventor
周震
袁涛
曾纪国
倪亚宇
徐朋飞
Current Assignee
Hunan Goke Microelectronics Co Ltd
Original Assignee
Hunan Goke Microelectronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Goke Microelectronics Co Ltd filed Critical Hunan Goke Microelectronics Co Ltd
Priority to CN202210981085.0A
Publication of CN115294358A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Abstract

The embodiment of the present application discloses a feature point extraction method and apparatus, a computer device, and a readable storage medium. The method comprises the following steps: acquiring a first corner point in the current frame image acquired by an image acquisition device using an optical flow method; calculating the pose of the image acquisition device using the first corner point; projecting a first map point corresponding to the first corner point onto a pixel plane using the pose, calculating the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and taking the first map point whose distance is smaller than a preset distance threshold as an interior point (inlier); judging whether the number of interior points is greater than a preset number threshold and whether the ratio of the number of interior points to the number of first map points is greater than a preset ratio threshold; if so, taking the first corner point as the extraction result; if not, taking the current frame image as a key frame, updating the first corner point by feature point detection, and taking the updated first corner point as the extraction result. With this method and apparatus, feature points do not need to be extracted and matched on every frame, which improves computational efficiency.

Description

Feature point extraction method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a method and an apparatus for extracting feature points, a computer device, and a readable storage medium.
Background
A visual simultaneous localization and mapping (vSLAM) system acquires image information of the environment through an image acquisition device and infers the motion of that device within the environment, thereby localizing it. At the same time, the system models the whole environment and builds a globally consistent map.
However, existing visual simultaneous localization and mapping systems require a large amount of computing resources when extracting feature points, which results in low computational efficiency.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a feature point extraction method, an apparatus, a computer device, and a readable storage medium, which can solve the problem of low computation efficiency when an existing visual synchronous positioning and mapping system performs feature point extraction.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a feature point extraction method, including:
acquiring a first corner in a current frame image acquired by an image acquisition device by using an optical flow method;
calculating the pose of the image acquisition device using the first corner point;
projecting a first map point corresponding to the first corner point onto a pixel plane using the pose, calculating the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and taking the first map point whose distance is smaller than a preset distance threshold as an interior point;
judging whether the number of interior points is greater than a preset number threshold and whether the ratio of the number of interior points to the number of first map points is greater than a preset ratio threshold;
if so, taking the first corner point as the extraction result;
if not, taking the current frame image as a key frame, updating the first corner point by feature point detection, and taking the updated first corner point as the extraction result.
According to a specific embodiment disclosed in the present application, the updating the first corner point by using feature point detection, and taking the updated first corner point as an extraction result includes:
creating an image pyramid by using the current frame image, and extracting all second corner points on the image pyramid;
correcting the first corner point by using the second corner point to obtain a third corner point;
extracting a fourth corner point meeting a preset condition by using all map points in the current frame image;
supplementing the total number of the first corner points up to a preset corner point count according to the numbers of the third corner points and the fourth corner points;
and taking the preset number of first corner points as the extraction result.
According to a specific embodiment disclosed in the present application, the correcting the first corner point by using the second corner point to obtain a third corner point includes:
determining a fifth corner point within a preset range in all the second corner points according to the position information of the first corner point, and calculating a descriptor of the fifth corner point;
and determining, from among the fifth corner points and according to their descriptors, the fifth corner point closest to the first corner point as the third corner point.
According to a specific embodiment disclosed in the present application, the extracting, by using all map points in the current frame image, a fourth corner point meeting a preset condition includes:
based on the pose of the image acquisition device, carrying out spatial transformation on the current frame image to obtain a three-dimensional scene, and taking all map points in the three-dimensional scene as candidate map points;
projecting the candidate map points onto a camera plane, determining sixth corner points within a preset range of the projected positions according to the position information of the candidate map points, and calculating descriptors of the sixth corner points;
and determining, according to the descriptors of the sixth corner points, the sixth corner point closest to the candidate map point as a seventh corner point, which serves as the fourth corner point.
According to a specific embodiment disclosed in the present application, supplementing the total number of the first corner points to the number of preset corner points according to the number of the third corner points and the fourth corner points comprises:
determining the number of corner points to be supplemented according to the preset corner point count of the current frame image and the numbers of the third corner points and the fourth corner points;
removing, from the second corner points, corner points whose distances to the third corner points and the fourth corner points are smaller than a preset distance threshold, to obtain remaining corner points;
and selecting the required number of corner points to be supplemented from the remaining corner points, so as to bring the total number of first corner points up to the preset corner point count.
According to a specific embodiment disclosed in the present application, the image pyramid has 8 layers, with a scale factor of 1.2 between adjacent layers.
According to a specific embodiment disclosed herein, the method further comprises:
and constructing a map by using the extraction result.
In a second aspect, an embodiment of the present application provides a feature point extraction apparatus, including:
the corner point acquisition module is used for acquiring a first corner point in the current frame image acquired by the image acquisition device using an optical flow method;
the pose calculation module is used for calculating the pose of the image acquisition device using the first corner point;
the interior point determining module is used for projecting a first map point corresponding to the first corner point onto a pixel plane using the pose, calculating the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and taking the first map point whose distance is smaller than a preset distance threshold as an interior point;
the judging module is used for judging whether the number of interior points is greater than a preset number threshold and whether the ratio of the number of interior points to the number of first map points is greater than a preset ratio threshold;
the first extraction result determining module is used for, if so, taking the first corner point as the extraction result;
and the second extraction result determining module is used for, if not, taking the current frame image as a key frame, updating the first corner point by feature point detection, and taking the updated first corner point as the extraction result.
In a third aspect, embodiments of the present application provide a computer device, including a processor and a memory, where the memory stores a program or instructions, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
The feature point extraction method provided in the embodiment of the present application acquires a first corner point in the current frame image acquired by an image acquisition device using an optical flow method; calculates the pose of the image acquisition device using the first corner point; projects a first map point corresponding to the first corner point onto a pixel plane using the pose, calculates the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and takes the first map point whose distance is smaller than a preset distance threshold as an interior point; judges whether the number of interior points is greater than a preset number threshold and whether the ratio of the number of interior points to the number of first map points is greater than a preset ratio threshold; if so, takes the first corner point as the extraction result; if not, takes the current frame image as a key frame, updates the first corner point by feature point detection, and takes the updated first corner point as the extraction result. Therefore, feature points do not need to be extracted and matched on every frame, the computational efficiency is improved, and the running speed of the vSLAM system can be further increased.
Drawings
To more clearly illustrate the technical solutions of the present application, the drawings required for use in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of the present application. Like components are numbered similarly in the various figures.
Fig. 1 is a schematic flow chart illustrating a feature point extraction method provided in an embodiment of the present application;
fig. 2 shows a schematic structural diagram of a feature point extraction device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present application, are intended to indicate only the specific features, numbers, steps, operations, elements, components, or combinations thereof that are mentioned, and should not be construed as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present application belong. Terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their contextual meaning in the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined in the various embodiments herein.
In order to solve the technical problem that existing visual simultaneous localization and mapping systems have low computational efficiency when extracting feature points, the present application provides a feature point extraction method. Referring to fig. 1, fig. 1 is a schematic flow chart of a feature point extraction method according to an embodiment of the present application, and the method includes the following steps:
and 110, acquiring a first corner point in the current frame image acquired by the image acquisition device by using an optical flow method.
Specifically, the image acquisition device is usually mounted on electronic equipment such as an unmanned aerial vehicle or a sweeping robot, and acquires image information of the equipment's surroundings in real time to obtain a video stream. The image acquisition device may be any one of a monocular camera, a binocular camera, or an RGB-D camera and may be chosen according to actual requirements; this is not limited in the embodiments of the present application.
It is understood that after the image acquisition device acquires the image, the image may be preprocessed, and the preprocessing may include a filtering operation and/or a distortion removal process. The filtering operation can remove noise in the image and reduce interference; the distortion removal processing can reduce the distortion degree of the image and restore the real scene. The image quality can be improved through preprocessing, and the accuracy of the subsequent processing process is further improved.
In a specific embodiment, the Lucas-Kanade optical flow algorithm may be used to search for and match the first corner points in the current frame image. A corner point is an extreme point, i.e., a point that is particularly salient in some attribute: an isolated point of maximum or minimum intensity, or the end point of a line segment. For an image, corner points generally refer to junction points on object contours. Corner points are important image features and play an important role in image understanding and analysis. They preserve the important features of an image while greatly reducing the amount of data, so the information density is high; this effectively speeds up computation, facilitates reliable image matching, and makes real-time processing possible. For the same scene, corner points usually remain stable even when the viewing angle changes, and because of this stability they are applied in computer vision fields such as three-dimensional scene reconstruction, motion estimation, target tracking, target recognition, and image registration and matching. Meanwhile, because the optical flow method estimates the motion of a moving object by comparing the difference between two consecutive frames, the Lucas-Kanade algorithm needs to know in advance the previous-frame coordinates of the corner points to be acquired. Therefore this step actually starts from the second frame of the video acquired by the image acquisition device, and initial feature points must be computed for the first frame.
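The core of the Lucas-Kanade method can be sketched in a few lines. The example below, in plain Python, is purely illustrative and not the patent's implementation (in practice a library routine such as OpenCV's calcOpticalFlowPyrLK would be used): it builds a synthetic pair of frames in which the scene translates by a known small amount, then recovers that motion for one window by solving the 2x2 Lucas-Kanade normal equations.

```python
import math

def frame0(x, y):
    # synthetic frame-1 intensity (a continuous function, so shifted versions
    # can be sampled exactly without interpolation)
    return math.sin(0.4 * x) + math.cos(0.3 * y) + 0.05 * x * y

DX, DY = 0.6, -0.4  # true motion of the scene between the two frames

def frame1(x, y):
    # frame 2: the content of frame 1 translated by (DX, DY)
    return frame0(x - DX, y - DY)

def lucas_kanade(cx, cy, half=5):
    # accumulate the 2x2 normal equations G * [u, v]^T = -b over an 11x11 window
    gxx = gxy = gyy = bx = by = 0.0
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            ix = (frame0(x + 1, y) - frame0(x - 1, y)) / 2.0  # spatial gradients
            iy = (frame0(x, y + 1) - frame0(x, y - 1)) / 2.0  # (central differences)
            it = frame1(x, y) - frame0(x, y)                  # temporal difference
            gxx += ix * ix; gxy += ix * iy; gyy += iy * iy
            bx += ix * it;  by += iy * it
    # det near zero means an edge or flat region: the aperture problem
    det = gxx * gyy - gxy * gxy
    u = (-bx * gyy + by * gxy) / det
    v = (-by * gxx + bx * gxy) / det
    return u, v

u, v = lucas_kanade(10, 10)  # approximately recovers (DX, DY)
```

Because the chosen window contains varied gradients (a corner-like region), the 2x2 system is well conditioned; on an edge or in a flat region the determinant would approach zero, which is exactly why corner points are the features of choice for optical flow tracking.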
Step 120, calculating the pose of the image acquisition device using the first corner point.
Specifically, after step 110, the first corner points in the current frame image are associated with the corner points in the previous frame image. Because the corner points in the previous frame image already correspond to map points, the correspondence between the first corner points and the map points in the current frame image can be obtained. The pose of the image acquisition device is then obtained from the coordinates of the map points and the coordinates of the first corner points using a PnP (Perspective-n-Point) method. PnP is an algorithm that solves for the extrinsic parameters of an image acquisition device from several pairs of matched 3D (three-dimensional) and 2D points by minimizing the reprojection error, with the intrinsic parameters of the device known or unknown.
Step 130, projecting a first map point corresponding to the first corner point to a pixel plane by using the pose, calculating the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and taking the first map point with the distance smaller than a preset distance threshold value as an inner point.
Specifically, the pose of the image acquisition device obtained in step 120 may be used to project the first map points corresponding to the first corner points in the current frame onto the pixel plane. Then, for each first map point, the theoretical projection position and the actual projection position on the pixel plane are calculated, and the first map points for which the distance between the theoretical and actual projection positions is smaller than a preset distance threshold are taken as interior points. An interior point indicates that the deviation between its theoretical and actual projection positions is small.
It can be understood that the value of the preset distance threshold may be 5.991 pixels, and may be set according to an actual requirement, which is not limited in this embodiment of the present application.
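Step 130 can be sketched as follows. This is a simplified illustration, not the patent's implementation: the intrinsic parameters are made up, and the pose is reduced to a pure translation (a real system would also apply the rotation). The 5.991-pixel threshold is the example value mentioned above.

```python
import math

FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0  # hypothetical pinhole intrinsics
DIST_THRESHOLD = 5.991                        # preset distance threshold, in pixels

def project(point3d, t):
    # theoretical projection of a map point under a pure-translation pose t
    x, y, z = (p - ti for p, ti in zip(point3d, t))
    return (FX * x / z + CX, FY * y / z + CY)

def select_interior_points(map_points, observed_px, t):
    # keep the map points whose theoretical projection lands close to the
    # actually observed (tracked) pixel position
    interior = []
    for p3d, obs in zip(map_points, observed_px):
        px, py = project(p3d, t)
        if math.hypot(px - obs[0], py - obs[1]) < DIST_THRESHOLD:
            interior.append(p3d)
    return interior

map_points = [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 5.0)]
observed = [(320.0, 240.0), (421.0, 240.0), (400.0, 400.0)]  # third has drifted far away
interior = select_interior_points(map_points, observed, (0.0, 0.0, 0.0))
```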
Step 140, determining whether the number of the interior points is greater than a preset number threshold and the ratio of the number of the interior points to the number of the first map points is greater than a preset ratio threshold.
Specifically, the optical flow tracking quality may be evaluated according to the number of interior points. In a specific embodiment, the judgment checks whether the number of interior points is greater than a preset number threshold, and whether the ratio of the number of interior points to the number of first map points is greater than a preset ratio threshold.
It can be understood that the value of the preset number threshold and the preset ratio threshold may be set according to actual requirements, which is not limited in the embodiment of the present application. For example, the preset number threshold may be 50, and the preset ratio threshold may be 50%.
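Using the example values above (threshold 50 and ratio 50%), the judgment of step 140 is a simple two-condition test, sketched here for illustration:

```python
NUM_THRESHOLD = 50     # example preset number threshold
RATIO_THRESHOLD = 0.5  # example preset ratio threshold (50%)

def tracking_is_good(num_interior, num_map_points):
    # both conditions must hold: enough interior points in absolute terms,
    # and a large enough fraction of the projected first map points
    if num_map_points == 0:
        return False
    return (num_interior > NUM_THRESHOLD and
            num_interior / num_map_points > RATIO_THRESHOLD)
```

For example, 100 interior points out of 150 map points passes both tests, while 100 out of 300 fails the ratio test and triggers the key-frame branch of step 160.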
Step 150, if so, taking the first corner point as the extraction result.
Specifically, if the number of interior points is greater than the preset number threshold and the ratio of the number of interior points to the number of first map points is greater than the preset ratio threshold, the optical flow tracking state is good, and the first corner points may be used directly as the extraction result, which is then used for optical flow tracking of the next frame image after the current frame image.
Step 160, if not, taking the current frame image as a key frame, updating the first corner point by feature point detection, and taking the updated first corner point as the extraction result.
Specifically, if the number of interior points is less than or equal to the preset number threshold, and/or the ratio of the number of interior points to the number of first map points is less than or equal to the preset ratio threshold, the optical flow tracking state is poor, and the current frame image is taken as a key frame. Since the current frame image is determined to be a key frame, the first corner points need to be updated, and the updated first corner points are taken as the extraction result.
In an optional implementation manner, the updating the first corner point and taking the updated first corner point as an extraction result includes:
creating an image pyramid by using the current frame image, and extracting all second corner points on the image pyramid;
correcting the first corner point by using the second corner point to obtain a third corner point;
extracting a fourth corner point meeting a preset condition by using all map points in the current frame image;
supplementing the total number of the first corner points up to a preset corner point count according to the numbers of the third corner points and the fourth corner points;
and taking the preset number of first corner points as the extraction result.
Specifically, an image pyramid is a multi-scale representation of an image; it is mainly used in image segmentation and is an effective, conceptually simple structure for interpreting an image at multiple resolutions. In brief, an image pyramid is a set of sub-images of the same image at different resolutions. The bottom of the pyramid is a high-resolution representation of the image to be processed, while the top is a low-resolution approximation; the higher the level, the smaller the image and the lower the resolution. In a specific embodiment, the image pyramid may have 8 layers, with a scale factor of 1.2 between adjacent layers.
All second corner points are then extracted on the image pyramid using a corner point extraction algorithm. In a specific embodiment, the corner extraction algorithm may be FAST (Features from Accelerated Segment Test). It is understood that other corner extraction algorithms, such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features), may also be adopted and may be chosen according to actual requirements, which is not limited in this embodiment of the present application.
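As a small illustration of the pyramid geometry described above (the corner extraction itself would be left to a library routine such as OpenCV's FAST detector), the per-layer image sizes of an 8-layer pyramid with scale factor 1.2 can be computed as follows for a 640x480 input:

```python
N_LAYERS = 8   # number of pyramid layers, as in the text
SCALE = 1.2    # scale factor between adjacent layers

def pyramid_sizes(width, height):
    # layer 0 holds the full-resolution image; every layer above it is
    # smaller by a factor of 1.2 in each dimension
    sizes = []
    for layer in range(N_LAYERS):
        factor = SCALE ** layer
        sizes.append((round(width / factor), round(height / factor)))
    return sizes

sizes = pyramid_sizes(640, 480)  # [(640, 480), (533, 400), ..., down to layer 7]
```

Detecting corners on every layer is what gives the second corner points their scale invariance: a structure that is too large to look like a corner at full resolution becomes corner-like on a coarser layer.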
Considering that corner point locations may become increasingly inaccurate after multiple frames of optical flow tracking, the corner point coordinates need to be corrected. In a specific embodiment, the second corner points are used to correct the first corner points, obtaining third corner points; it is understood that the third corner points are a subset of the first corner points.
Then, fourth corner points meeting a preset condition are extracted using all map points in the current frame image; the total number of corner points is supplemented up to the preset corner point count according to the numbers of the third and fourth corner points; and the resulting preset number of corner points is taken as the extraction result. It is understood that the fourth corner points are a subset of the second corner points.
In an optional embodiment, the correcting the first corner point by using the second corner point to obtain a third corner point includes:
determining a fifth corner point within a preset range in all second corner points according to the position information of the first corner point, and calculating a descriptor of the fifth corner point;
and determining a fifth corner point closest to the first corner point as a third corner point according to the descriptor of the fifth corner point.
Specifically, each first corner point comprises four elements [u, v, desc, layer], where u and v are the coordinates of the first corner point in the current frame image, desc is the descriptor of the first corner point, and layer is the number of the pyramid layer on which the first corner point lies. The fifth corner points are determined among all second corner points within the preset range [u ± s, v ± s, layer ± 1]; that is, a rectangular box is constructed from the coordinates of the first corner point and searched on the first corner point's own layer and the layers directly above and below it. The value of s may be 5 pixels and may be set according to actual requirements, which is not limited in this embodiment of the present application. It is understood that the fifth corner points are a subset of the second corner points, and the third corner point is obtained by filtering the fifth corner points.
The descriptors of the fifth corner points are then calculated. A descriptor characterizes the appearance of a corner point within its surrounding area.
Finally, the fifth corner point closest to the first corner point is taken as the third corner point. It is understood that a fifth corner point is discarded if its calculated descriptor distance to the first corner point is greater than a preset distance threshold. This threshold may be 50 and may be set according to actual requirements, which is not limited in this embodiment of the present application.
In a specific embodiment, the distance is calculated using the Hamming distance. It is understood that other distance metrics may also be used, which is not limited in the embodiments of the present application.
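The correction step can be sketched as follows, assuming binary descriptors stored as Python integers and using the example values above (window half-size s = 5, descriptor distance threshold 50). The [u, v, desc, layer] layout follows the description; everything else here is illustrative:

```python
S = 5                 # search window half-size in pixels (example value from the text)
DESC_THRESHOLD = 50   # maximum allowed descriptor (Hamming) distance (example value)

def hamming(desc_a, desc_b):
    # number of differing bits between two binary descriptors
    return bin(desc_a ^ desc_b).count("1")

def correct_corner(first, second_corners):
    # find the fifth corners: second corners inside [u +/- s, v +/- s, layer +/- 1],
    # then return the one with the smallest descriptor distance (the third corner)
    best, best_dist = None, DESC_THRESHOLD + 1
    for c in second_corners:
        in_window = (abs(c["u"] - first["u"]) <= S and
                     abs(c["v"] - first["v"]) <= S and
                     abs(c["layer"] - first["layer"]) <= 1)
        if not in_window:
            continue
        d = hamming(first["desc"], c["desc"])
        if d < best_dist:
            best, best_dist = c, d
    return best  # None means every candidate was too far: the corner is dropped

first = {"u": 10, "v": 10, "desc": 0, "layer": 2}
candidates = [
    {"u": 12, "v": 11, "desc": 0b111, "layer": 2},         # close in space and descriptor
    {"u": 11, "v": 9, "desc": (1 << 60) - 1, "layer": 2},  # descriptor distance 60 > 50
    {"u": 40, "v": 40, "desc": 0, "layer": 2},             # outside the search window
]
third = correct_corner(first, candidates)
```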
In an optional embodiment, the extracting, by using all map points in the current frame image, a fourth corner point meeting a preset condition includes:
based on the pose of the image acquisition device, carrying out spatial transformation on the current frame image to obtain a three-dimensional scene, and taking all map points in the three-dimensional scene as candidate map points;
projecting the candidate map points onto the camera plane, determining sixth corner points within a preset range of the projected positions according to the position information of the candidate map points, and calculating descriptors of the sixth corner points;
and determining, according to the descriptors of the sixth corner points, the sixth corner point closest to the candidate map point as a seventh corner point, which serves as the fourth corner point.
Specifically, the current frame image is a two-dimensional scene and needs to be converted into a three-dimensional scene, and all map points in the three-dimensional scene are taken as candidate map points. The candidate map points are projected onto the camera plane; the subsequent logic is similar to that described above and is not repeated here. It is understood that the sixth corner points are a subset of the second corner points and are an intermediate result in the calculation of the fourth corner points; the sixth corner points are screened to obtain the seventh corner points, which serve as the fourth corner points.
Map points do not have a layer attribute, but they do have a size attribute representing the actual size of the map point in scale space. If the distance between a map point and the image acquisition device is d, then size/d is the projected size of the map point on the pixel plane, and the map point's layer in the image pyramid can be calculated from this projected size.
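One plausible way to turn the projected size into a pyramid layer index is sketched below. The reference value size_at_layer0 (the projected size at which the point would be observed on layer 0) is an assumption introduced for illustration; the patent does not specify the exact formula.

```python
import math

N_LAYERS = 8
SCALE = 1.2

def predict_layer(size, dist, size_at_layer0):
    # projected size of the map point on the pixel plane (size / distance)
    proj = size / dist
    # each layer shrinks the image by 1.2x, so the layer index is the number of
    # 1.2x steps between the reference projected size and the current one
    layer = round(math.log(size_at_layer0 / proj, SCALE))
    return max(0, min(N_LAYERS - 1, layer))  # clamp into the pyramid
```

For example, a point observed at its reference distance maps to layer 0; moved twice as far away, its projection halves and the predicted layer becomes log base 1.2 of 2, about 3.8, rounded to layer 4.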
In an optional embodiment, the supplementing the total number of the first corner points to a preset number of corner points according to the number of the third corner points and the number of the fourth corner points includes:
determining the number of corner points to be supplemented according to the preset corner point count of the current frame image and the numbers of the third corner points and the fourth corner points;
removing, from the second corner points, corner points whose distances to the third corner points and the fourth corner points are smaller than a preset distance threshold, to obtain remaining corner points;
and selecting the required number of corner points to be supplemented from the remaining corner points, so as to bring the total number of first corner points up to the preset corner point count.
Specifically, taking the preset corner point count as 400 as an example, if there are 100 third corner points and 200 fourth corner points, then 100 corner points need to be supplemented. Corner points whose distances to the third and fourth corner points are smaller than a preset distance threshold are removed from the second corner points to obtain the remaining corner points. That is, corner points too close to the third and fourth corner points are removed, which avoids collecting duplicate corner points and improves corner point accuracy.
The to-be-supplemented number of corner points is then selected from the remaining corner points. In a specific embodiment, a quadtree is used for the selection. It can be understood that selecting corner points with a quadtree makes their spatial distribution more uniform.
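A minimal quadtree-style selection could look like the following (an illustrative sketch only; the patent does not specify the splitting policy, so the greedy split of the densest cell and the response-based tie-breaking are assumptions):

```python
def quadtree_select(corners, n_select, width, height):
    """corners: list of (x, y, response) tuples. Returns up to n_select
    corners, at most one per quadtree leaf, for spatial uniformity."""
    cells = [((0.0, 0.0, float(width), float(height)), list(corners))]
    for _ in range(4 * n_select):            # guard against degenerate splits
        if len(cells) >= n_select:
            break
        # split the cell that currently holds the most corners
        cells.sort(key=lambda c: len(c[1]), reverse=True)
        (x0, y0, x1, y1), pts = cells[0]
        if len(pts) <= 1:                    # densest cell cannot be split
            break
        cells.pop(0)
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        for bx0, by0, bx1, by1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                                   (x0, my, mx, y1), (mx, my, x1, y1)):
            sub = [p for p in pts if bx0 <= p[0] < bx1 and by0 <= p[1] < by1]
            if sub:
                cells.append(((bx0, by0, bx1, by1), sub))
    # keep the strongest corner in each leaf, strongest-first overall
    picked = [max(pts, key=lambda p: p[2]) for _, pts in cells]
    picked.sort(key=lambda p: p[2], reverse=True)
    return picked[:n_select]
```

Because each leaf contributes at most one corner, dense clusters are thinned while sparse image regions keep their corners, which is the uniformity property the embodiment relies on.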
Supplementing the first corner points in this way increases their number, i.e., more corner points are available to be tracked from the current frame image into the next frame image.
In an optional embodiment, the method further comprises:
and constructing a map by using the extraction result.
Specifically, because the accuracy and computational efficiency of the extraction result are improved, the accuracy of the map is improved and a globally consistent map can be maintained.
The feature point extraction method provided in the embodiment of the application acquires a first corner point in a current frame image acquired by an image acquisition device by using an optical flow method; calculates the pose of the image acquisition device by using the first corner point; projects a first map point corresponding to the first corner point onto the pixel plane by using the pose, calculates the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and takes a first map point whose distance is smaller than a preset distance threshold as an inner point; judges whether the number of inner points is greater than a preset number threshold and whether the ratio of the number of inner points to the number of first map points is greater than a preset ratio threshold; if so, takes the first corner points as the extraction result; and if not, takes the current frame image as a key frame, updates the first corner points by feature point detection, and takes the updated first corner points as the extraction result. Feature points therefore do not need to be extracted and matched for every frame, which improves computational efficiency and thus the running speed of the vSLAM system.
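The key-frame decision summarized above can be sketched as follows (a minimal illustration; the function name and threshold values are assumptions, only the two-part test comes from the method):

```python
def is_tracking_adequate(reproj_dists, n_map_points,
                         dist_thresh, count_thresh, ratio_thresh):
    """reproj_dists: distance between theoretical and actual projection for
    each first map point. Returns (adequate, n_inliers): tracking is adequate
    when both the inner-point count and the inner-point ratio exceed their
    thresholds, in which case the first corner points are reused directly;
    otherwise the frame becomes a key frame and feature detection runs."""
    n_inliers = sum(1 for d in reproj_dists if d < dist_thresh)
    adequate = (n_inliers > count_thresh and
                n_inliers / n_map_points > ratio_thresh)
    return adequate, n_inliers
```

Requiring both an absolute count and a ratio guards against the two failure modes separately: very few tracked points overall, and many tracked points of which most reproject badly.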
Corresponding to the above method embodiment, please refer to fig. 2, fig. 2 is a schematic structural diagram of a feature point extraction apparatus provided in the embodiment of the present application, and the feature point extraction apparatus 1000 includes:
the corner acquiring module 1010, configured to acquire a first corner point in a current frame image acquired by the image acquisition device by using an optical flow method;
a pose calculation module 1020, configured to calculate the pose of the image acquisition device by using the first corner point;
an interior point determining module 1030, configured to project, by using the pose, a first map point corresponding to the first corner point onto the pixel plane, calculate the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and take a first map point whose distance is smaller than a preset distance threshold as an interior point;
a determining module 1040, configured to determine whether the number of interior points is greater than a preset number threshold and a ratio of the number of interior points to the number of first map points is greater than a preset ratio threshold;
a first extraction result determining module 1050, configured to, if the judgment is positive, take the first corner point as the extraction result;
a second extraction result determining module 1060, configured to, if not, use the current frame image as a key frame, update the first corner point by using feature point detection, and use the updated first corner point as an extraction result.
The feature point extraction device provided by the embodiment of the application can realize each process of the feature point extraction method in the above method embodiment and achieve the same technical effect; to avoid repetition, details are not repeated here.
Optionally, an embodiment of the present application further provides a computer device, which includes a processor and a memory, where the memory stores a program or instructions that, when executed by the processor, implement each process of the above embodiment of the feature point extraction method and achieve the same technical effect; to avoid repetition, details are not repeated here.
Optionally, an embodiment of the present application further provides a computer-readable storage medium, on which a program or instructions are stored that, when executed by a processor, implement each process of the above embodiment of the feature point extraction method and achieve the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the computer device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A feature point extraction method is characterized by comprising:
acquiring a first corner point in a current frame image acquired by an image acquisition device by using an optical flow method;
calculating the pose of the image acquisition device by using the first corner point;
projecting a first map point corresponding to the first corner point onto a pixel plane by using the pose, calculating the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and taking a first map point whose distance is smaller than a preset distance threshold as an inner point;
judging whether the number of the inner points is greater than a preset number threshold and the ratio of the number of the inner points to the number of the first map points is greater than a preset ratio threshold;
if so, taking the first corner point as an extraction result;
and if not, taking the current frame image as a key frame, updating the first corner point by feature point detection, and taking the updated first corner point as an extraction result.
2. The method according to claim 1, wherein the updating the first corner point by feature point detection and taking the updated first corner point as an extraction result comprises:
creating an image pyramid by using the current frame image, and extracting all second corner points on the image pyramid;
correcting the first corner point by using the second corner point to obtain a third corner point;
extracting a fourth corner point meeting a preset condition by using all map points in the current frame image;
supplementing the total number of the first corner points to a preset number of corner points according to the number of the third corner points and the number of the fourth corner points;
and taking the preset number of first corner points as the extraction result.
3. The method of claim 2, wherein the correcting the first corner point by using the second corner point to obtain a third corner point comprises:
determining a fifth corner point within a preset range in all second corner points according to the position information of the first corner point, and calculating a descriptor of the fifth corner point;
and determining a fifth corner point which is closest to the first corner point in the fifth corner points as a third corner point according to the descriptor of the fifth corner point.
4. The method according to claim 2, wherein the extracting a fourth corner point meeting a preset condition by using all map points in the current frame image comprises:
based on the pose of the image acquisition device, carrying out space conversion on the current frame image to obtain a three-dimensional scene, and taking all map points in the three-dimensional scene as candidate map points;
projecting the candidate map points to a camera plane, determining sixth angular points within a preset range in all the candidate map points according to the position information of the candidate map points, and calculating descriptors of the sixth angular points;
and determining, according to the descriptors of the sixth corner points, a seventh corner point that is closest to the candidate map point among the sixth corner points, as a fourth corner point.
5. The method according to claim 2, wherein the supplementing the total number of the first corner points to a preset number of corner points according to the number of the third corner points and the fourth corner points comprises:
determining the number of corner points to be supplemented according to the preset number of corner points, the number of third corner points, and the number of fourth corner points of the current frame image;
removing, from the second corner points, corner points whose distances to the third corner points and the fourth corner points are smaller than a preset distance threshold, to obtain remaining corner points;
and selecting the to-be-supplemented number of corner points from the remaining corner points, so as to supplement the total number of the first corner points to the preset number of corner points.
6. The method of claim 2, wherein the image pyramid has 8 layers and a scale ratio of 1.2 between adjacent layers.
7. The feature point extraction method according to claim 1, characterized by further comprising:
and constructing a map by using the extraction result.
8. A feature point extraction device characterized by comprising:
the angular point acquisition module is used for acquiring a first angular point in a current frame image acquired by the image acquisition device by using an optical flow method;
the pose calculation module is used for calculating the pose of the image acquisition device by utilizing the first corner point;
the inner point determining module, configured to project, by using the pose, a first map point corresponding to the first corner point onto a pixel plane, calculate the distance between the theoretical projection position and the actual projection position of the first map point on the pixel plane, and take a first map point whose distance is smaller than a preset distance threshold as an inner point;
the judging module is used for judging whether the number of the inner points is larger than a preset number threshold value or not and the ratio of the number of the inner points to the number of the first map points is larger than a preset ratio threshold value;
a first extraction result determining module, configured to, if so, take the first corner point as an extraction result;
and a second extraction result determining module, configured to, if not, take the current frame image as a key frame, update the first corner point by feature point detection, and take the updated first corner point as an extraction result.
9. A computer device characterized by comprising a processor and a memory, said memory having stored thereon a program or instructions which, when executed by said processor, implement the steps of the feature point extraction method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a program or instructions are stored thereon, which program or instructions, when executed by a processor, implement the steps of the feature point extraction method according to any one of claims 1-7.
CN202210981085.0A 2022-08-16 2022-08-16 Feature point extraction method and device, computer equipment and readable storage medium Pending CN115294358A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210981085.0A CN115294358A (en) 2022-08-16 2022-08-16 Feature point extraction method and device, computer equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115294358A true CN115294358A (en) 2022-11-04

Family

ID=83830314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210981085.0A Pending CN115294358A (en) 2022-08-16 2022-08-16 Feature point extraction method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115294358A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination