CN113837202A - Feature point extraction method, image reconstruction method and device - Google Patents

Feature point extraction method, image reconstruction method and device

Info

Publication number
CN113837202A
Authority
CN
China
Prior art keywords
feature
feature points
grid
image
points
Prior art date
Legal status
Pending (the status is an assumption, not a legal conclusion)
Application number
CN202111040342.2A
Other languages
Chinese (zh)
Inventor
叶培楚
曾宪贤
Current Assignee
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202111040342.2A priority Critical patent/CN113837202A/en
Publication of CN113837202A publication Critical patent/CN113837202A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 — Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 — Re-meshing

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a feature point extraction method, an image reconstruction method, and corresponding devices. The feature point extraction method includes: determining the number of a plurality of feature points preliminarily extracted from an input image; when the number of the plurality of feature points is greater than an expected number of feature points, performing first grid division on the input image based on a space division method to obtain K first grids, where K is greater than or equal to the expected number of feature points; determining the number of feature points in each of the K first grids and, when the number of feature points in a first grid is greater than 1, determining a statistical confidence score for each feature point based on the distance between that feature point and the center of the first grid; and screening the feature points of the first grid based on their statistical confidence scores to retain the feature point with the highest statistical confidence score in the first grid. The technical scheme of the application can improve the distribution uniformity of the feature points.

Description

Feature point extraction method, image reconstruction method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method for extracting feature points, a method for reconstructing an image, and an apparatus thereof.
Background
In computer vision, image processing is an extremely important link, and within image processing the extraction of feature points is a key step. Because feature points can represent the content of an image, feature point extraction is widely applied in fields such as moving-target tracking, object recognition, image registration, panoramic image stitching, and three-dimensional reconstruction. Conventional feature point extraction methods are prone to clustering and uneven distribution of feature points, and these problems are especially severe for repeated-texture scenes. The non-uniform distribution of feature points directly affects the accuracy of subsequent image processing results.
Disclosure of Invention
In view of this, embodiments of the present application provide a method for extracting feature points, a method for reconstructing an image, and an apparatus, which can improve distribution uniformity of the feature points.
In a first aspect, an embodiment of the present application provides a method for extracting feature points, including: determining the number of a plurality of feature points preliminarily extracted from an input image; when the number of the plurality of feature points is greater than an expected number of feature points, performing first grid division on the input image based on a space division method to obtain K first grids, where K is greater than or equal to the expected number of feature points; determining the number of feature points in each of the K first grids and, when the number of feature points in a first grid is greater than 1, determining a statistical confidence score for each feature point based on the distance between that feature point and the center of the first grid; and screening the feature points of the first grid based on their statistical confidence scores to retain the feature point with the highest statistical confidence score in the first grid.
In some embodiments of the present application, determining the statistical confidence score of each feature point based on the distance of each feature point in the first mesh from the center of the first mesh includes: and determining the statistical confidence score of each feature point based on the distance between each feature point in the first grid and the center of the first grid and the response value of each feature point.
In some embodiments of the present application, determining a statistical confidence score of each feature point based on a distance of each feature point in the first mesh from a center of the first mesh and a response value of each feature point includes: and determining the statistical confidence score of each feature point by performing weighted summation on the reciprocal of the distance from each feature point in the first grid to the center of the first grid and the response value of each feature point.
In some embodiments of the present application, the first meshing is performed based on a binary, quadtree, or octree partitioning method.
In some embodiments of the present application, the method for extracting feature points of the first aspect further includes: performing second grid division on the input image based on the preset grid size to obtain T second grids, wherein T is an integer greater than 1; and performing preliminary extraction of the feature points on each second grid in the T second grids to obtain a plurality of feature points.
In some embodiments of the present application, the method for extracting feature points of the first aspect further includes: constructing an image pyramid for the input image, wherein the preliminary extraction of feature points for each of the T second grids to obtain a plurality of feature points includes: extracting extreme points of the second grid based on the image pyramid and an initial threshold; when the number of the extreme points is 0, obtaining a first updated threshold based on a first rule and the initial threshold, wherein the first updated threshold is smaller than the initial threshold; and extracting extreme points of the second grid again based on the image pyramid and the first updated threshold, wherein the extreme points extracted from the second grid are used as feature points.
In some embodiments of the present application, the preliminary extraction of feature points for each of the T second grids to obtain a plurality of feature points further includes: when the number of the extreme points is greater than a threshold, obtaining a second updated threshold based on a second rule and the initial threshold, wherein the second updated threshold is greater than the initial threshold; and extracting the extreme points of the second grid again based on the image pyramid and the second updated threshold.
In some embodiments of the present application, the method for extracting feature points of the first aspect further includes: performing scale pyramid pooling on the input image to obtain a plurality of images in different scale spaces; and calculating feature descriptors corresponding to the feature points with the highest statistical confidence scores in the first grid in the images of the plurality of different scale spaces.
In a second aspect, an embodiment of the present application provides an image reconstruction method, including: extracting the feature points in the first image by adopting the feature point extraction method in the first aspect, and calculating a first feature descriptor corresponding to the feature points in the first image; extracting the feature points in the second image by adopting the feature point extraction method in the first aspect, and calculating a second feature descriptor corresponding to the feature points in the second image; performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result; and reconstructing the object to be reconstructed according to the matching result.
In a third aspect, an embodiment of the present application provides an apparatus for extracting feature points, including: a first determination module for determining the number of a plurality of feature points preliminarily extracted for an input image; the first dividing module is used for carrying out first grid division on the plurality of feature points based on a space dividing method to obtain K first grids when the number of the plurality of feature points is larger than the number of expected feature points, wherein K is larger than or equal to the number of the expected feature points; the second determining module is used for determining the number of the feature points of each of the K first grids, and when the number of the feature points of the first grids is greater than 1, the statistical confidence degree score of each feature point is determined based on the distance from each feature point in the first grids to the center of the first grid; and the screening module is used for screening the feature points of the first grid based on the statistical confidence scores of the feature points to obtain the feature point with the highest statistical confidence score in the first grid.
In a fourth aspect, an embodiment of the present application provides an apparatus for reconstructing an image, including: a first extraction module, configured to extract feature points in the first image by using the feature point extraction method described in the first aspect, and calculate a first feature descriptor corresponding to the feature points in the first image; a second extraction module, configured to extract feature points in the second image by using the feature point extraction method described in the first aspect, and calculate a second feature descriptor corresponding to the feature points in the second image; the matching module is used for performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result; and the reconstruction module is used for reconstructing the object to be reconstructed according to the matching result.
In a fifth aspect, embodiments of the present application provide an unmanned device, comprising: a processor; a memory for storing processor executable instructions, wherein the processor is configured to perform the method for extracting feature points according to the first aspect or the method for reconstructing an image according to the second aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, where the storage medium stores a computer program for executing the method for extracting feature points according to the first aspect or executing the method for reconstructing an image according to the second aspect.
The embodiments of the application provide a feature point extraction method, an image reconstruction method, and corresponding devices. Because a feature point with a higher statistical confidence score is closer to the center of its first grid, the feature point extraction method provided by the embodiments of the application can improve the distribution uniformity of all finally obtained feature points, avoid the situation in which no feature points can be extracted from repeated-texture regions of the input image, and effectively prevent the feature points from clustering.
Drawings
Fig. 1 is a schematic system architecture diagram of a feature point extraction system according to an exemplary embodiment of the present application.
Fig. 2 is a schematic flow chart of a feature point extraction method according to an exemplary embodiment of the present application.
Fig. 3 is a schematic flow chart of a feature point extraction method according to another exemplary embodiment of the present application.
Fig. 4 is a schematic flowchart of a feature matching method according to an exemplary embodiment of the present application.
Fig. 5 is a flowchart illustrating an image reconstruction method according to an exemplary embodiment of the present disclosure.
Fig. 6a is a schematic diagram illustrating an extraction result obtained by a conventional feature point extraction method.
Fig. 6b is a schematic diagram illustrating an extraction result obtained by using the feature point extraction method according to an embodiment of the present application.
Fig. 7a is a schematic diagram showing another extraction result obtained by the conventional feature point extraction method.
Fig. 7b is a schematic diagram illustrating another extraction result obtained by using the feature point extraction method according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an extraction apparatus for feature points according to an exemplary embodiment of the present application.
Fig. 9 is a schematic structural diagram of a feature matching apparatus according to an exemplary embodiment of the present application.
Fig. 10 is a schematic structural diagram of an image reconstruction apparatus according to an exemplary embodiment of the present application.
Fig. 11 is a block diagram illustrating an unmanned aerial vehicle for performing a feature point extraction method according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Summary of the application
In farmland surveying and reconstruction, feature points are generally extracted from an input image, a data-association relation between multi-frame images is generated based on the feature points, the relative motion between frames (between images) is then estimated, and finally Bundle Adjustment is used to optimize the camera pose and the three-dimensional scene structure.
Under the condition that the number of expected feature points is M, a common feature point extraction method is to extract feature points of the whole image by adopting a uniform threshold value, then sort all the extracted feature points from large to small according to response values, finally extract the first M feature points from a sorting result, and calculate feature descriptors of the first M feature points as final feature output.
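As a non-authoritative sketch of the conventional approach just described (the function name and data layout are my own assumptions, not from the patent), the top-M selection by response value could look like:

```python
def top_m_by_response(points, responses, M):
    """Conventional extraction described above: sort all extracted feature
    points by response value in descending order and keep the first M."""
    order = sorted(range(len(points)), key=lambda i: responses[i], reverse=True)
    return [points[i] for i in order[:M]]
```

As the following paragraphs explain, this global top-M selection is exactly what lets feature points pile up in high-contrast regions.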
For repeated-texture scenes, such as banana plantations, wheat fields, and corn fields in farmland scenes, feature points are generally difficult to extract with a uniform threshold, because neighborhood information in repeated-texture scenes is very similar and has relatively low distinctiveness. When the threshold condition is strict, no feature points can be extracted; when the threshold condition is relaxed, the extracted feature points are of low quality.
In addition, for a scene composed of repeated textures and roads, most of the extracted feature points are concentrated on the two sides of the roads, and only very few feature points can be extracted from the repeated-texture farmland (see fig. 7a); that is, the feature points cluster.
In summary, when feature points are extracted from a scene containing repeated textures using a conventional feature point extraction method, feature point clustering and uneven distribution are likely to occur.
Exemplary System
Fig. 1 is a system architecture diagram of a feature point extraction system 100 according to an exemplary embodiment of the present application, which illustrates an application scenario of feature point extraction for an image. The feature point extraction system 100 includes a computer device 110 and an image acquisition device 120.
The image capture device 120 may capture a particular scene, such as a field, a road, a building, etc., to obtain one or more images.
The computer device 110 may acquire one or more images from the image capturing device 120 and perform feature point extraction on the acquired images, and a specific extraction method may be described in the following description.
The computer device 110 may be a general-purpose computer or a computer device composed of application-specific integrated circuits, which is not limited in this embodiment. For example, the computer device 110 may be a mobile terminal device such as a tablet computer, or a personal computer (PC) such as a laptop or desktop computer. Those skilled in the art will appreciate that the number of computer devices 110 may be one or more, and their types may be the same or different. Neither the number nor the type of the computer devices 110 is limited in the embodiments of the present application.
The image capture device 120 may be a camera, such as a monocular camera, a binocular camera, or a trinocular camera. In an embodiment, the image capture device 120 may be a camera mounted on a drone.
In some embodiments, the computer device 110 and the image capture device 120 may be separate devices, and the computer device 110 may be communicatively coupled to one or more image capture devices 120. In other embodiments, one or more image capture devices 120 may be integrated into the computer device 110.
It should be noted that the above application scenarios are only presented to facilitate understanding of the spirit and principles of the present application, and the embodiments of the present application are not limited thereto. Rather, embodiments of the present application may be applied to any scenario where it may be applicable.
Exemplary method
Fig. 2 is a schematic flow chart of a feature point extraction method according to an exemplary embodiment of the present application. The method of fig. 2 may be performed by a computer device. As shown in fig. 2, the method for extracting the feature point includes the following steps.
210: the number of a plurality of feature points preliminarily extracted for the input image is determined.
Specifically, the extraction of the feature points may be performed for the entire image to obtain a plurality of feature points. Alternatively, the whole image may be divided into a plurality of regions, and feature points may be extracted for each region, so as to obtain a plurality of feature points corresponding to the whole image.
In one embodiment, a reasonable threshold may be set to obtain a plurality of feature points distributed in different regions of the input image.
The feature points in the embodiments of the present application may be Scale-Invariant Feature Transform (SIFT) feature points, Speeded-Up Robust Features (SURF) feature points, Oriented FAST and Rotated BRIEF (ORB) feature points, or other types of feature points.
220: when the number of the plurality of feature points is larger than the number of the expected feature points, performing first grid division on the input image based on a space division method to obtain K first grids, wherein K is larger than or equal to the number of the expected feature points.
Let the expected number of feature points be M. When the number of extracted feature points is greater than M, M feature points need to be screened out from them. To ensure that the screened M feature points are uniformly distributed over the input image, the input image may be subjected to first grid division and the extracted feature points assigned to the resulting first grids, for example according to the coordinates of the feature points. Each first grid may contain 0, 1, or more feature points.
Specifically, the shape of the grid may be a square, and the size of the grid may be represented by the side length of the square. The square grids with equal side lengths can realize uniform division of a plurality of characteristic points, and improve the distribution uniformity of the subsequently obtained characteristic points. In some embodiments, the shape of the mesh may be a polygon such as a rectangle, a diamond, a triangle, or other regular or irregular shapes, and the specific mesh shape may be set according to actual needs, which is not limited in this application.
In an embodiment, as many feature points as possible may be extracted, so that the number of extracted feature points is greater than the expected number M. This ensures that feature points are extracted in every region of the input image, especially in repeated-texture regions, thereby facilitating the subsequent feature point matching and three-dimensional reconstruction processes.
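As an illustrative sketch of the coordinate-based grid assignment in step 220 (all function and parameter names are my own assumptions, not from the patent), feature points can be mapped to first grids by integer division of their coordinates:

```python
def assign_to_grids(points, grid_w, grid_h, n_cols):
    """Map each feature point (x, y) to a first-grid index derived from
    its coordinates; grids are indexed row-major over an n_cols-wide
    layout of grid_w x grid_h cells. Empty grids simply never appear
    in the returned dict (they may hold 0 feature points)."""
    grids = {}
    for (x, y) in points:
        idx = int(y // grid_h) * n_cols + int(x // grid_w)
        grids.setdefault(idx, []).append((x, y))
    return grids
```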
230: and determining the number of the feature points of each of the K first grids, and determining the statistical confidence score of each feature point based on the distance from each feature point in the first grid to the center of the first grid when the number of the feature points of the first grid is greater than 1.
Specifically, the number of feature points in each first grid may be counted, and when the number of feature points in the first grid is greater than 1, the statistical confidence score S of each feature point in the first grid is determined. The statistical confidence score S may characterize the distance d of the feature point from the first grid center, e.g., a higher statistical confidence score S indicates a closer feature point to the first grid center.
240: and screening the feature points of the first grid based on the statistical confidence scores of the feature points to obtain the feature point with the highest statistical confidence score in the first grid.
The feature points in the first grid are sorted according to their statistical confidence scores S; the feature point with the largest statistical confidence score S is retained, and the other feature points are deleted.
The feature points of the K first grids can be respectively screened according to the statistical confidence score S, so that the feature points closer to the center of the first grid are selected from each first grid, and therefore, all finally obtained feature points can be guaranteed to be uniformly distributed in the input image.
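The per-grid screening of steps 230 and 240 can be sketched as follows (function and variable names are illustrative, not from the patent):

```python
def screen_grids(grids):
    """grids: list of lists of (score, point) tuples, one inner list per
    first grid. Keeps only the feature point with the highest statistical
    confidence score in each grid; empty grids contribute nothing."""
    kept = []
    for points in grids:
        if not points:
            continue  # an empty first grid yields no feature point
        best_score, best_point = max(points, key=lambda sp: sp[0])
        kept.append(best_point)
    return kept
```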
The embodiment of the application provides a feature point extraction method in which the input image is subjected to first grid division based on a space division method to obtain a plurality of first grids, and the feature points in each first grid are screened according to their statistical confidence scores so as to retain the feature point with the highest statistical confidence score in each first grid. Because a feature point with a higher statistical confidence score is closer to the center of its first grid, the feature point extraction method provided by the embodiment of the application can improve the distribution uniformity of all finally obtained feature points, avoid the situation in which no feature points can be extracted from repeated-texture regions of the input image, and effectively prevent the feature points from clustering.
According to an embodiment of the present application, determining a statistical confidence score of each feature point based on a distance from each feature point in the first mesh to a center of the first mesh includes: and determining the statistical confidence score of each feature point based on the distance between each feature point in the first grid and the center of the first grid and the response value of each feature point.
Specifically, the larger the response value, the higher the quality of the corresponding feature point. The response value of the feature point may be a response value of a corner in a feature extraction and detection technique, for example, a Harris response value, a Shi-Tomasi response value, or other response values.
When a plurality of feature points in the first mesh are screened, distances from a center of the first mesh to the plurality of feature points may be different, and response values of the plurality of feature points may also be different. In addition, there is no relation between the distance between the feature point and the center of the first grid and the response value of the feature point, wherein the distance between the feature point and the center of the first grid affects the distribution uniformity of the finally obtained feature point, and the response value of the feature point affects the overall quality of the finally obtained feature point.
Therefore, a tradeoff is required between the distance between the feature point and the first grid center and the response value of the feature point, so as to improve the distribution uniformity of the feature point and improve the overall quality of the feature point as much as possible.
In this embodiment, the statistical confidence score of the feature point is determined based on the distance between the feature point and the center of the first grid and the response value of the feature point, and the factors affecting the distribution uniformity of the feature point and the factors affecting the overall quality of the feature point may be considered comprehensively, so that the feature point which is closer to the center of the first grid and has higher quality may be screened from the first grid, so as to obtain a more robust feature point, and the effect of improving the distribution uniformity and the overall quality of the finally obtained feature point may be achieved.
According to an embodiment of the present application, determining a statistical confidence score of each feature point based on a distance from each feature point in the first mesh to a center of the first mesh and a response value of each feature point includes: and determining the statistical confidence score of each feature point by performing weighted summation on the reciprocal of the distance from each feature point in the first grid to the center of the first grid and the response value of each feature point.
Specifically, the statistical confidence score S may be calculated as S = w1 × (1/d) + w2 × f, where d is the distance from the feature point to the center of the first grid, f is the response value of the feature point, w1 is a first weight value, and w2 is a second weight value. w1 and w2 can be set according to actual needs.
In other embodiments, the statistical confidence score S may be obtained according to other calculation formulas as long as it can comprehensively consider the distance of the feature point from the first grid center and the response value of the feature point.
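A minimal sketch of this weighted-sum score, assuming Euclidean distance and placeholder weights w1 = w2 = 0.5 (the patent does not fix these values):

```python
import math

def confidence_score(point, center, response, w1=0.5, w2=0.5):
    """S = w1 * (1/d) + w2 * f, where d is the distance from the feature
    point to the first-grid center and f is its response value.
    The default weights are arbitrary placeholders, not from the patent."""
    d = math.dist(point, center)  # Euclidean distance (Python 3.8+)
    if d == 0:
        return float('inf')  # a point exactly at the center dominates
    return w1 * (1.0 / d) + w2 * response
```

The reciprocal makes the score grow as the point approaches the grid center, which is the stated selection criterion.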
According to an embodiment of the application, the first mesh partitioning is performed based on a binary tree, quadtree or octree partitioning method.
Specifically, after t divisions the binary tree segmentation method yields 2^t nodes, the quadtree segmentation method yields 4^t nodes, and the octree segmentation method yields 8^t nodes. Regardless of whether the binary tree, quadtree, or octree segmentation method is used, before segmentation there is a single node, in which all of the preliminarily extracted feature points (mentioned in step 210) are located and which corresponds to the entire input image. As the number of divisions increases, multiple nodes are obtained, and these nodes correspond one-to-one to the first grids.
A suitable number of segmentation results (K first meshes) can be obtained at a reasonable segmentation speed based on a quadtree segmentation method, and thus the quadtree segmentation method may be employed in some embodiments.
For example, when first grid division is performed by the quadtree division method, the division stops once the number of nodes obtained, 4^t, is greater than or equal to the expected number of feature points M, yielding K nodes (corresponding to K first grids). If the number of feature points in a first grid is greater than 1, the feature points of that first grid are screened based on the statistical confidence score.
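Under the quadtree scheme described above, the iterative subdivision until the cell count 4^t reaches M can be sketched as follows (the function name and (x, y, w, h) cell representation are illustrative assumptions):

```python
def quadtree_cells(width, height, M):
    """Split the image region by a quadtree until the number of leaf
    cells (4**t after t full levels) is >= the expected feature count M.
    Returns a list of (x, y, w, h) cells; a sketch only."""
    cells = [(0, 0, width, height)]
    while len(cells) < M:
        next_cells = []
        for (x, y, w, h) in cells:  # split every cell into 4 quadrants
            hw, hh = w / 2, h / 2
            next_cells += [(x, y, hw, hh), (x + hw, y, hw, hh),
                           (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
        cells = next_cells
    return cells
```

For M = 5 this stops at 16 cells (1 → 4 → 16), matching the "4^t greater than or equal to M" stopping rule.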
According to an embodiment of the present application, the method for extracting feature points further includes: performing second grid division on the input image based on the preset grid size to obtain T second grids, wherein T is an integer greater than 1; and performing preliminary extraction of the feature points on each second grid in the T second grids to obtain a plurality of feature points.
Specifically, the whole input image may be divided into T second grids through second grid division, feature points are extracted from each second grid respectively to obtain a plurality of feature points, then, the whole input image is subjected to first grid division to obtain K first grids, and the feature points of each first grid are screened based on the statistical confidence score to obtain the feature points with the highest statistical confidence score in the first grids.
The shape of the second grid is similar to that of the first grid, and is not described in detail here. The mesh size may be a side length or an area of the mesh.
In an embodiment, the second grid is square, and the predetermined grid size may refer to a side length W of the square.
For example, if the preset grid size is W, the number of rows of the input image is r, and the number of columns is c, then the number of grids along the row direction is rGrid = r/W, the number of grids along the column direction is cGrid = c/W, the total number of grids is rGrid × cGrid, and the size of each grid is W × W.
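As a minimal sketch of this grid-count arithmetic (the names rGrid/cGrid follow the text; integer division is an assumption for images whose sides are multiples of W):

```python
def second_grid_layout(rows, cols, w):
    """Number of second grids for an image of rows x cols pixels and
    preset grid side length w."""
    r_grid = rows // w          # grids along the row direction
    c_grid = cols // w          # grids along the column direction
    return r_grid, c_grid, r_grid * c_grid
```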
W can be determined according to the computation budget and the uniformity requirement, i.e., it can be preset as needed. For example, the larger W is, the larger each second grid and the fewer second grids there are, so when extracting feature points from one second grid, more feature points need to be extracted from it based on a certain threshold to facilitate subsequent screening. In this process, because the number of second grids is small, the amount of computation can be reduced; however, because each second grid is large, its feature points may be concentrated in one local area while no feature points are extracted elsewhere in the grid, which would make the finally extracted feature points unevenly distributed. Therefore, when setting W it is necessary to balance the amount of computation against the uniformity requirement, so that enough, relatively uniformly distributed feature points are obtained from which the final feature points can be screened by the spatial division method described above.
In an embodiment, the number T of the second grids may be greater than the number M of the expected feature points, so that each second grid may extract at least one feature point, and the number of the initially extracted feature points may be sufficiently large to facilitate subsequently screening out feature points with higher distribution uniformity and higher quality from the large number of feature points.
In this embodiment, the input image is firstly subjected to second mesh division, and each second mesh is respectively subjected to feature point extraction to obtain a plurality of feature points, so that it can be ensured that each second mesh can extract feature points as much as possible, and preparation is made for finally obtaining uniformly distributed feature points. In addition, in this embodiment, the first mesh division is performed on the input image, and the feature points of each first mesh are screened based on the statistical confidence score, so that the distribution uniformity of the finally obtained feature points can be further improved, and the overall quality of the finally obtained feature points can be improved as much as possible.
According to an embodiment of the present application, the method for extracting feature points further includes: constructing an image pyramid for an input image, wherein the preliminary extraction of feature points is performed for each of the T second grids to obtain a plurality of feature points, including: extracting extreme points of the second grid based on the image pyramid and the initial threshold; when the number of the extreme points is 0, obtaining a first updating threshold value based on a first rule and an initial threshold value, wherein the first updating threshold value is smaller than the initial threshold value; and extracting extreme points again from the second grid based on the image pyramid and the first updating threshold, wherein the extreme points extracted from the second grid are used as feature points.
Specifically, the image pyramid may be a difference-of-Gaussian pyramid. The initial threshold, which may be denoted T0, is preset, and extreme points are extracted from each grid according to it. The initial threshold may be used to measure the difference between a pixel and its surrounding pixels on the image, such as a difference in pixel values (grayscale or brightness). Specifically, if the difference between the pixel value of a pixel and the pixel values of its surrounding pixels is greater than the initial threshold, that pixel can be extracted as an extreme point. Of course, in other embodiments, the specific meaning of the initial threshold and the specific manner of extracting the extreme points may be set according to actual requirements, which is not limited in the embodiments of this application.
For any pixel in the second grid, intensity-value (e.g., brightness or pixel-value) statistics are computed over the 26-neighborhood spanning the current difference-of-Gaussian pyramid image I_i, the previous layer I_{i-1}, and the next layer I_{i+1}. If the intensity value of the current pixel is the maximum or the minimum across these three layers, the pixel is taken as a candidate extreme point.
Further, a sub-pixel location is found by performing a sub-pixel calculation on the candidate, for example by interpolating the pixels in the neighborhood of the candidate extreme point. If the difference in intensity values between the sub-pixel point and the surrounding pixels is greater than or equal to the initial threshold, the candidate extreme point corresponding to the sub-pixel point is promoted to a formal extreme point.
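A sketch of the 26-neighborhood candidate test described above, assuming three adjacent difference-of-Gaussian layers stored as NumPy arrays; the function name is illustrative, and ties with neighboring values are counted as extrema here for simplicity:

```python
import numpy as np

def is_local_extremum(dog_prev, dog_cur, dog_next, y, x):
    """Check whether pixel (y, x) of the middle DoG layer is a maximum or a
    minimum over its 26 neighbours (8 in the same layer, 9 above, 9 below)."""
    patch = np.stack([dog_prev[y - 1:y + 2, x - 1:x + 2],
                      dog_cur[y - 1:y + 2, x - 1:x + 2],
                      dog_next[y - 1:y + 2, x - 1:x + 2]])
    center = dog_cur[y, x]
    return bool(center == patch.max() or center == patch.min())
```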
If the number of formal extreme points extracted from a second grid is 0, the current second grid lies in a scene with weak texture or severe texture repetition, such as a farmland scene (a wheat or corn field). In this case the threshold needs to be lowered to relax the extraction conditions, so that extreme points (formal extreme points) can still be extracted from the current second grid and the distribution uniformity of the finally extracted feature points is preserved. That is, a first updated threshold T1 is derived based on the first rule and the initial threshold T0. Specifically, the first rule may be to take half of the current threshold as the new threshold, i.e., T1 = T0/2.
Extreme points are then extracted from the second grid again based on the image pyramid and the threshold T1; if the number of extracted extreme points is still 0, a new threshold T2 is obtained from the first rule and T1, and so on, until the number of extreme points extracted for the current second grid is greater than 0. The extreme points extracted from the second grid may be used as feature points.
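The first-rule loop (halve the threshold until the grid yields at least one extreme point) can be sketched as follows; `extract_fn` stands in for the pyramid-based extraction of one second grid, and the `min_threshold` floor is an added safeguard not specified in the patent:

```python
def extract_with_adaptive_lowering(extract_fn, t0, min_threshold=1e-3):
    """Apply the first rule T_{k+1} = T_k / 2 until extraction succeeds.
    extract_fn(t) returns the extreme points found at threshold t."""
    t = t0
    points = extract_fn(t)
    while len(points) == 0 and t > min_threshold:
        t = t / 2.0          # first rule: new threshold is half the old one
        points = extract_fn(t)
    return points, t
```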
In this embodiment, the first rule may be other content as long as it can be ensured that the first update threshold is smaller than the initial threshold.
By setting the first rule, the threshold value can be appropriately reduced for a scene with weak texture or severe repeated texture, so that the feature points are extracted from the current grid, and the distribution uniformity of the finally extracted feature points is ensured. In addition, extreme points are extracted based on the image pyramid, invariance of a scale space can be guaranteed, and further characteristic points with more robust scale factors are obtained.
In an embodiment, when performing the sub-pixel calculation on the candidate extreme point based on the image pyramid, the main direction may be further counted to facilitate the subsequent calculation of the feature descriptor.
According to an embodiment of the present application, the preliminary extraction of feature points is performed on each of the T second grids to obtain a plurality of feature points, and the method further includes: when the number of the extreme points is larger than the threshold value, obtaining a second updating threshold value based on a second rule and the initial threshold value, wherein the second updating threshold value is larger than the initial threshold value; and extracting the extreme points of the second grid again based on the image pyramid and the second updating threshold.
In this embodiment, the threshold value may be preset according to actual needs, and may be, for example, a value of 5, 10, or the like.
If the number of formal extreme points extracted from a second grid is greater than the threshold value, the second grid lies in a texture-rich scene, such as a city or a farmland with many crop types, and the threshold needs to be raised to keep subsequent feature extraction from consuming too much time. That is, a second updated threshold T1' is derived based on the second rule and the initial threshold T0. Specifically, the second rule may be to take twice the current threshold as the new threshold, i.e., T1' = 2 × T0.
Extreme points are then extracted from the second grid again based on the image pyramid and the threshold T1'; if the number of extracted extreme points is still greater than the threshold value, a new threshold T2' is obtained from the second rule and T1', and so on, until the number of extreme points extracted for the current second grid is less than or equal to the threshold value. The extreme points extracted from the second grid may be used as feature points.
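Symmetrically, the second-rule loop (double the threshold while a grid yields more extreme points than the cap) might look like this; again `extract_fn` is a hypothetical stand-in for the pyramid-based extraction, and the `max_threshold` ceiling is an added safeguard:

```python
def extract_with_adaptive_raising(extract_fn, t0, cap, max_threshold=1e6):
    """Apply the second rule T'_{k+1} = 2 * T'_k while the grid yields more
    than `cap` extreme points. extract_fn(t) returns the points at threshold t."""
    t = t0
    points = extract_fn(t)
    while len(points) > cap and t < max_threshold:
        t = t * 2.0          # second rule: new threshold is twice the old one
        points = extract_fn(t)
    return points, t
```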
In this embodiment, the second rule may be other content as long as it can be ensured that the second update threshold is greater than the initial threshold.
By setting the second rule and the threshold value, the threshold value can be appropriately increased for scenes with abundant textures, and the number of extracted feature points is prevented from being large, so that the operation time can be saved, and the operation efficiency can be improved.
For the feature point extraction of the repeated texture scene, the embodiment of the application adopts a gridding strategy to uniformly extract the feature points of the scene, so that the feature point bunching can be prevented, and each grid adopts self-adaptive dynamic threshold adjustment, so that the feature points can be extracted in each area at the maximum probability, and sufficient preparation is provided for finally obtaining the uniformly distributed feature points.
According to an embodiment of the present application, the method for extracting feature points further includes: performing scale pyramid pooling on the input image to obtain a plurality of images in different scale spaces; and calculating feature descriptors corresponding to the feature points with the highest statistical confidence scores in the first grid in the images of the plurality of different scale spaces.
Specifically, in an agricultural scene, the unmanned aerial vehicle does not always fly at the same altitude; in mountainous regions, orchards, and similar scenes in particular, the altitude change is very pronounced. Relying only on the scale invariance of the extreme point (or feature point) detection stage is not robust enough for three-dimensional reconstruction. Therefore, a multi-scale factor can be introduced when calculating the feature descriptors corresponding to the feature points. To account for multi-scale constraints as much as possible, a scale pyramid pooling operation may be performed on the input image.
For example, if the number of pooling layers is L, the convolution kernel sizes are 3 × 3, 5 × 5, …, (1 + 2L) × (1 + 2L), respectively. The input image is convolved L times with these kernels to obtain images in different scale spaces. For any feature point, the corresponding feature descriptor can then be computed over the L + 1 images (including the original input image), so that feature points more robust to scale factors are obtained.
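A sketch of the pooling step under the assumption of uniform (mean-filter) kernels — the patent fixes only the kernel sizes 3, 5, …, 1 + 2L, not the weights, so averaging is an illustrative choice:

```python
import numpy as np

def scale_pyramid_pool(img, L):
    """Produce L progressively blurred images of the same size as `img`,
    using kernels of side 3, 5, ..., 1 + 2L with edge padding."""
    outs = []
    for level in range(1, L + 1):
        k = 1 + 2 * level                     # kernel side length
        padded = np.pad(img, level, mode='edge')
        blurred = np.empty_like(img, dtype=float)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                # mean over the k x k window centred on (y, x)
                blurred[y, x] = padded[y:y + k, x:x + k].mean()
        outs.append(blurred)
    return outs
```

All output images keep the input size, matching the statement below that only the degree of blur differs.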
In one embodiment, the images obtained by the scale pyramid pooling operation (the images in multiple different scale spaces) have the same size as the input image but different degrees of blur. That is, the L + 1 images are of uniform size but blurred to different degrees. Gradient angles are collected from the L + 1 images, and the counts of the different gradient angles form a vector (for example, a 128-dimensional vector) representing the feature descriptor. Compared with counting gradient angles only in the input image, counting them across the L + 1 images before computing the feature descriptors yields feature points that are more robust to scale factors.
In this embodiment, the different gradient angles may include 30 degrees, 60 degrees, 90 degrees, and so on, and the specific angle values may be set according to actual needs. In an embodiment, counting the different gradient angles may be performed during the extreme point extraction described above, where the main direction may be the gradient angle with the largest count among the different gradient angles.
Fig. 3 is a schematic flow chart of a feature point extraction method according to another exemplary embodiment of the present application. The embodiment of fig. 3 is an example of the embodiment of fig. 2, and the same parts are not described again to avoid repetition. As shown in fig. 3, the method for extracting the feature point includes the following steps.
310: and performing second grid division on the input image based on the preset grid size to obtain T second grids.
T is an integer greater than 1.
320: and constructing an image pyramid aiming at the input image, and extracting extreme points of the second grid based on the image pyramid and the initial threshold.
330: when the number of extreme points is 0, an updated threshold is obtained based on the first rule and the initial threshold.
In this step, the update threshold is smaller than the initial threshold. The first rule and the acquisition process of the update threshold may be referred to the description in the above embodiments.
340: and when the number of the extreme points is larger than the threshold value, obtaining an updated threshold value based on the second rule and the initial threshold value.
In this step, the update threshold is greater than the initial threshold. The second rule and the acquisition process of the update threshold may be referred to the description in the above embodiments.
350: and extracting the extreme points of the second grid again based on the image pyramid and the updating threshold.
360: and determining the number of the feature points by taking the extreme points extracted from each second grid as the feature points.
For a specific process of acquiring the extreme point, reference may be made to the description in the above embodiment, and details are not repeated here to avoid repetition.
In this step, if the total number of feature points obtained from the second mesh is less than or equal to the number of desired feature points, these feature points may be directly used as final feature points.
370: and when the number of the plurality of feature points is larger than the number of the expected feature points, performing first grid division on the input image based on a space division method to obtain K first grids.
K is greater than or equal to the number of desired feature points. For the spatial division method (for example, the quadtree division method), and the specific content of the first mesh division, reference may be made to the description in the foregoing embodiments, and details are not repeated here to avoid repetition.
380: and when the number of the feature points of the first grid is more than 1, determining the statistical confidence score of each feature point in the first grid, and screening the feature points of the first grid based on the statistical confidence score of each feature point to obtain the feature point with the highest statistical confidence score in the first grid.
Specifically, the statistical confidence score of each feature point may be determined by performing weighted summation on the reciprocal of the distance from each feature point in the first mesh to the center of the first mesh and the response value of each feature point.
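A sketch of this weighted-sum scoring; the weights alpha and beta and the eps guard against division by zero are illustrative, since the patent specifies only the weighted-sum form (reciprocal of the distance to the grid center plus the response value):

```python
import math

def best_by_statistical_confidence(points, grid_center, alpha=0.5, beta=0.5, eps=1e-6):
    """Return the point in a first grid with the highest statistical
    confidence score. points: list of ((x, y), response) pairs."""
    def score(pt):
        (x, y), response = pt
        d = math.hypot(x - grid_center[0], y - grid_center[1])
        # weighted sum of reciprocal centre distance and response value
        return alpha * (1.0 / (d + eps)) + beta * response
    return max(points, key=score)
```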
390: and summarizing the characteristic points with the highest statistical confidence scores in each first grid as final characteristic points.
The number of feature points obtained can be M or slightly larger than M, which mainly depends on the number of nodes of the quadtree division. Optionally, when the number of feature points obtained after summarizing is greater than M, all feature points may be sorted by response value and the top M retained as the final feature points.
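The optional top-M truncation by response value might be sketched as follows; the (point, response) pair structure is illustrative:

```python
def top_m_by_response(points, m):
    """Keep the m feature points with the largest response values.
    points: list of (point_data, response) pairs."""
    return sorted(points, key=lambda p: p[1], reverse=True)[:m]
```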
For the extracted final feature points, a feature descriptor corresponding to each feature point can be calculated, and the result is output.
The feature point extraction method provided by the embodiments of this application is a gridded, adaptively dynamically thresholded feature point extraction method for farmland scenes with repeated texture. In addition, during feature point extraction, feature points closer to the center of a (first) grid are more reliable, while those closer to its edge are less reliable. The embodiments of this application screen feature points based on statistical confidence scores, which jointly consider the distribution positions and response values of the feature points within each grid, so that more robust feature points are obtained.
Fig. 4 is a schematic flowchart of a feature matching method according to an exemplary embodiment of the present application. As shown in fig. 4, the feature matching method includes the following.
410: and extracting the feature points in the first image, and calculating a first feature descriptor corresponding to the feature points in the first image.
Specifically, the feature points in the first image may be extracted by the feature point extraction method provided in the above-described embodiment.
420: and extracting the feature points in the second image, and calculating a second feature descriptor corresponding to the feature points in the second image.
Specifically, the feature points in the second image may be extracted by the feature point extraction method provided in the above-described embodiment.
430: and performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result.
Specifically, an approximate nearest-neighbor ratio method can be used to select initial matching pairs, and RANSAC is then executed while estimating the essential matrix F to filter out matching pairs that do not satisfy the geometric (epipolar) constraint, yielding the final matching result.
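A sketch of the nearest-neighbor ratio test used to select the initial matching pairs (the ratio 0.8 and the brute-force search are illustrative; an approximate index such as a k-d tree would normally be used, and the subsequent RANSAC/essential-matrix filtering step is noted in a comment rather than implemented):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """For each descriptor in desc1, accept its nearest neighbour in desc2
    only when the best distance is clearly smaller than the second best.
    The surviving pairs would then be filtered with RANSAC against the
    epipolar constraint, as described in the text."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distances to all of desc2
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```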
The embodiment of the application provides a feature matching method in which an input image is subjected to first grid division based on a spatial division method to obtain a plurality of first grids, and the feature points in each first grid are screened according to their statistical confidence scores so as to retain the feature point with the highest statistical confidence score in each first grid. Because feature points with higher statistical confidence scores lie closer to the center of their first grid, the feature point extraction method used by the embodiment of the application can improve the distribution uniformity of all finally obtained feature points, avoid the situation in which no feature points can be extracted from repeated-texture areas of the input image, and effectively prevent the feature points from bunching together.
Fig. 5 is a flowchart illustrating an image reconstruction method according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the image reconstruction method includes the following steps.
510: and extracting the feature points in the first image, and calculating a first feature descriptor corresponding to the feature points in the first image.
Specifically, the feature points in the first image may be extracted by the feature point extraction method provided in the above-described embodiment.
520: and extracting the feature points in the second image, and calculating a second feature descriptor corresponding to the feature points in the second image.
Specifically, the feature points in the second image may be extracted by the feature point extraction method provided in the above-described embodiment.
530: and performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result.
Specifically, an approximate nearest-neighbor ratio method can be used to select initial matching pairs, and RANSAC is then executed while estimating the essential matrix F to filter out matching pairs that do not satisfy the geometric (epipolar) constraint, yielding the final matching result.
540: and reconstructing the object to be reconstructed according to the matching result.
The object to be reconstructed may be a farmland, a road, a building, etc.
The embodiment of the application provides an image reconstruction method in which an input image is subjected to first grid division based on a spatial division method to obtain a plurality of first grids, and the feature points in each first grid are screened according to their statistical confidence scores so as to retain the feature point with the highest statistical confidence score in each first grid. Because feature points with higher statistical confidence scores lie closer to the center of their first grid, the feature point extraction method used by the embodiment of the application can improve the distribution uniformity of all finally obtained feature points, avoid the situation in which no feature points can be extracted from repeated-texture areas of the input image, and effectively prevent the feature points from bunching together.
The method for extracting the feature points can be used in the field of three-dimensional reconstruction, and can also be used in the fields of moving target tracking, object identification, image registration, panoramic image splicing and the like which need to extract the feature points.
Fig. 6a is a schematic diagram illustrating an extraction result obtained by a conventional feature point extraction method. Fig. 6b is a schematic diagram illustrating an extraction result obtained by using the feature point extraction method according to an embodiment of the present application. Fig. 7a is a schematic diagram showing another extraction result obtained by the conventional feature point extraction method. Fig. 7b is a schematic diagram illustrating another extraction result obtained by using the feature point extraction method according to an embodiment of the present application. Comparing fig. 6a and 6b, and fig. 7a and 7b, it can be seen that by using the feature point extraction method provided by the embodiment of the present application, feature points can be extracted in almost all regions of the whole image, so as to ensure uniform distribution of the feature points, and provide reliable data input for subsequent motion estimation and three-dimensional reconstruction.
Exemplary devices
Fig. 8 is a schematic structural diagram of an apparatus 800 for extracting feature points according to an exemplary embodiment of the present application. As shown in fig. 8, the extraction device 800 includes: a first determination module 810, a first division module 820, a second determination module 830, and a screening module 840.
The first determination module 810 is used to determine the number of the plurality of feature points preliminarily extracted for the input image. The first partitioning module 820 is configured to perform first grid partitioning on the plurality of feature points based on a spatial partitioning method to obtain K first grids when the number of the plurality of feature points is greater than the number of expected feature points, where K is greater than or equal to the number of expected feature points. The second determining module 830 is configured to determine the number of feature points in each of the K first grids, and determine the statistical confidence score of each feature point based on the distance from each feature point in the first grid to the center of the first grid when the number of feature points in the first grid is greater than 1. The screening module 840 is configured to screen the feature points of the first grid based on the statistical confidence scores of the feature points to obtain the feature point with the highest statistical confidence score in the first grid.
The embodiment of the application provides a feature point extraction device that performs first grid division on an input image based on a spatial division method to obtain a plurality of first grids, and screens the feature points in each first grid according to their statistical confidence scores so as to retain the feature point with the highest statistical confidence score in each first grid. Because feature points with higher statistical confidence scores lie closer to the center of their first grid, the feature point extraction provided by the embodiment of the application can improve the distribution uniformity of all finally obtained feature points, avoid the situation in which no feature points can be extracted from repeated-texture areas of the input image, and effectively prevent the feature points from bunching together.
According to an embodiment of the present application, the second determining module 830 is configured to determine the statistical confidence score of each feature point based on a distance between each feature point in the first grid and the center of the first grid and a response value of each feature point.
According to an embodiment of the present application, the second determining module 830 is configured to determine the statistical confidence score of each feature point by performing weighted summation on an inverse of a distance between each feature point in the first grid and the center of the first grid and a response value of each feature point.
According to an embodiment of the application, the first mesh partitioning is performed based on a binary tree, quadtree or octree partitioning method.
According to an embodiment of the present application, the extracting apparatus 800 further includes a second dividing module 850, configured to: performing second grid division on the input image based on the preset grid size to obtain T second grids, wherein T is an integer greater than 1; and performing preliminary extraction of the feature points on each second grid in the T second grids to obtain a plurality of feature points.
The second partitioning module 850 is further configured to construct an image pyramid for the input image according to an embodiment of the present application. The second partitioning module 850 is configured to: extracting extreme points of the second grid based on the image pyramid and the initial threshold; when the number of the extreme points is 0, obtaining a first updating threshold value based on a first rule and an initial threshold value, wherein the first updating threshold value is smaller than the initial threshold value; and extracting extreme points again from the second grid based on the image pyramid and the first updating threshold, wherein the extreme points extracted from the second grid are used as feature points.
According to an embodiment of the present application, the second dividing module 850 is configured to: when the number of the extreme points is larger than the threshold value, obtaining a second updating threshold value based on a second rule and the initial threshold value, wherein the second updating threshold value is larger than the initial threshold value; and extracting the extreme points of the second grid again based on the image pyramid and the second updating threshold.
According to an embodiment of the present application, the extracting apparatus 800 further includes a calculating module 860 configured to: performing scale pyramid pooling on the input image to obtain a plurality of images in different scale spaces; and calculating feature descriptors corresponding to the feature points with the highest statistical confidence scores in the first grid in the images of the plurality of different scale spaces.
It should be understood that, in the above embodiments, the operations and functions of the first determining module 810, the first dividing module 820, the second determining module 830, the screening module 840, the second dividing module 850, and the calculating module 860 may refer to the description of the feature point extracting method provided in the above embodiment of fig. 2 or fig. 3, and are not described herein again to avoid repetition.
Fig. 9 is a schematic structural diagram of a feature matching apparatus 900 according to an exemplary embodiment of the present application. As shown in fig. 9, the feature matching apparatus 900 includes: a first extraction module 910, a second extraction module 920, and a matching module 930.
The first extraction module 910 is configured to extract feature points in the first image by using the feature point extraction method provided in the above embodiment, and calculate a first feature descriptor corresponding to the feature points in the first image. The second extraction module 920 is configured to extract feature points in the second image by using the feature point extraction method provided in the above embodiment, and calculate a second feature descriptor corresponding to the feature points in the second image. The matching module 930 is configured to perform feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result.
The embodiment of the application provides a feature matching device that performs first grid division on an input image based on a spatial division method to obtain a plurality of first grids, and screens the feature points in each first grid according to their statistical confidence scores so as to retain the feature point with the highest statistical confidence score in each first grid. Because feature points with higher statistical confidence scores lie closer to the center of their first grid, the feature point extraction used by the embodiment of the application can improve the distribution uniformity of all finally obtained feature points, avoid the situation in which no feature points can be extracted from repeated-texture areas of the input image, and effectively prevent the feature points from bunching together.
It should be understood that, in the above embodiment, the operations and functions of the first extracting module 910, the second extracting module 920, and the matching module 930 may refer to the description in the feature matching method provided in the above embodiment of fig. 4, and are not described herein again to avoid repetition.
Fig. 10 is a schematic structural diagram illustrating an apparatus 1000 for reconstructing an image according to an exemplary embodiment of the present application. As shown in fig. 10, the image reconstruction apparatus 1000 includes: a first extraction module 1010, a second extraction module 1020, a matching module 1030, and a reconstruction module 1040.
The first extraction module 1010 is configured to extract feature points in the first image by using the feature point extraction method provided in the above embodiment, and calculate a first feature descriptor corresponding to the feature points in the first image. The second extraction module 1020 is configured to extract feature points in the second image by using the feature point extraction method provided in the foregoing embodiment, and calculate a second feature descriptor corresponding to the feature points in the second image. The matching module 1030 is configured to perform feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result. The reconstruction module 1040 is configured to reconstruct the object to be reconstructed according to the matching result.
The embodiment of the application provides an image reconstruction device, which performs first grid division on an input image based on a space division method to obtain a plurality of first grids, and screens the feature points in each first grid according to their statistical confidence scores so as to retain the feature point with the highest statistical confidence score in each first grid. Because feature points with higher statistical confidence scores lie closer to the center of their first grid, the feature point extraction method provided by the embodiment of the application can improve the distribution uniformity of the finally obtained feature points, avoid the situation in which no feature points can be extracted from repetitive-texture regions of the input image, and effectively prevent feature points from clustering together.
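The first grid division above only requires a space division method; claim 4 allows binary tree, quadtree, or octree segmentation. A minimal quadtree-style variant, repeatedly splitting the most crowded grid until at least K grids exist, might look like the following sketch. The split order and the half-open boundary handling are illustrative choices, not mandated by the application.

```python
def quadtree_split(points, bounds, k):
    """Quadtree-style first grid division: split until >= k grids.

    points : iterable of (x, y) tuples
    bounds : (x0, y0, x1, y1) image rectangle
    Returns a list of (bounds, points_in_grid) pairs.
    """
    grids = [(bounds, list(points))]
    while len(grids) < k:
        # Split the grid currently holding the most feature points.
        idx = max(range(len(grids)), key=lambda i: len(grids[i][1]))
        (x0, y0, x1, y1), pts = grids[idx]
        if len(pts) <= 1:
            break  # nothing left worth splitting
        xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)]
        # Reassign the points to the four children (half-open intervals).
        new = [((qx0, qy0, qx1, qy1),
                [p for p in pts if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1])
               for qx0, qy0, qx1, qy1 in quads]
        grids[idx:idx + 1] = new
    return grids
```

After division, the per-grid screening step keeps at most one feature point per resulting grid, which is what spreads the retained points over the whole image.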
It should be understood that, for the operations and functions of the first extracting module 1010, the second extracting module 1020, the matching module 1030, and the reconstructing module 1040 in the foregoing embodiment, reference may be made to the description of the image reconstructing method provided in the foregoing embodiment in fig. 5, and in order to avoid repetition, the description is not repeated here.
Fig. 11 is a block diagram illustrating an unmanned aerial vehicle 1100 for performing a feature point extraction method according to an exemplary embodiment of the present application.
Referring to fig. 11, drone 1100 includes a processing component 1110, which further includes one or more processors, and memory resources, represented by memory 1120, for storing instructions, such as application programs, that are executable by processing component 1110. The application programs stored in memory 1120 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1110 is configured to execute the instructions to perform the above-described feature point extraction method, feature matching method, or image reconstruction method. In an embodiment, the unmanned device may be an unmanned aerial vehicle (drone).
Drone 1100 may also include a power component configured to perform power management of drone 1100, a wired or wireless network interface configured to connect drone 1100 to a network, and an input-output (I/O) interface. Drone 1100 may operate based on an operating system stored in memory 1120, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer readable storage medium, wherein instructions, when executed by a processor of the drone 1100, enable the drone 1100 to perform a feature point extraction method, a feature matching method, or an image reconstruction method. The method for extracting the feature points comprises the following steps: determining the number of a plurality of feature points preliminarily extracted for an input image; when the number of the plurality of feature points is larger than the number of expected feature points, performing first grid division on the input image based on a space division method to obtain K first grids, wherein K is larger than or equal to the number of the expected feature points; determining the number of feature points of each first grid in the K first grids, and determining the statistical confidence score of each feature point based on the distance between each feature point in the first grids and the center of the first grid when the number of the feature points of the first grids is greater than 1; and screening the feature points of the first grid based on the statistical confidence scores of the feature points to obtain the feature point with the highest statistical confidence score in the first grid. The feature matching method comprises the following steps: extracting the feature points in the first image by adopting the feature point extraction method provided by the embodiment, and calculating a first feature descriptor corresponding to the feature points in the first image; extracting the feature points in the second image by adopting the feature point extraction method provided by the embodiment, and calculating a second feature descriptor corresponding to the feature points in the second image; and performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result. 
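The preliminary extraction referred to above is refined elsewhere in the application (claims 6 and 7) with adaptive thresholding: retry with a smaller threshold when a second grid yields no extreme points, and with a larger one when it yields too many. This can be sketched as follows; the `detect` callback, the 0.5×/2× update rules, and `max_points` stand in for the unspecified detector and the first and second rules.

```python
def extract_grid_points(detect, initial_thresh, max_points,
                        lower=lambda t: t * 0.5, raise_=lambda t: t * 2.0):
    """Adaptive-threshold preliminary extraction for one second grid.

    detect(threshold) -> list of extreme points found at that threshold.
    """
    points = detect(initial_thresh)
    if len(points) == 0:
        # First rule: no extreme points, retry with a smaller threshold.
        points = detect(lower(initial_thresh))
    elif len(points) > max_points:
        # Second rule: too many extreme points, retry with a larger threshold.
        points = detect(raise_(initial_thresh))
    return points
```

Running this per second grid keeps weakly textured grids from contributing nothing and strongly textured grids from flooding the candidate set before the per-first-grid screening.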
The image reconstruction method comprises the following steps: extracting the feature points in the first image by adopting the feature point extraction method provided by the embodiment, and calculating a first feature descriptor corresponding to the feature points in the first image; extracting the feature points in the second image by adopting the feature point extraction method provided by the embodiment, and calculating a second feature descriptor corresponding to the feature points in the second image; performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result; and reconstructing the object to be reconstructed according to the matching result.
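The feature matching step above pairs the first and second feature descriptors. One common way to realize it is a nearest-neighbour search with a ratio test, sketched below; the ratio test is an added heuristic for illustration, not something the application requires.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2.

    desc1, desc2 : (N, D) and (M, D) float descriptor arrays.
    Returns a list of (i, j) index pairs.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        j = int(order[0])
        j2 = int(order[1]) if len(order) > 1 else j
        # Accept only if the best match is clearly better than the second best.
        if len(order) == 1 or dists[j] < ratio * dists[j2]:
            matches.append((i, j))
    return matches
```

The resulting index pairs are the matching result handed to the reconstruction step, which triangulates the matched points into the object to be reconstructed.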
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in the description of the present application, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modifications, equivalents and the like that are within the spirit and principle of the present application should be included in the scope of the present application.

Claims (13)

1. A method for extracting feature points is characterized by comprising the following steps:
determining the number of a plurality of feature points preliminarily extracted for an input image;
when the number of the plurality of feature points is larger than the number of expected feature points, performing first grid division on the input image based on a space division method to obtain K first grids, wherein K is larger than or equal to the number of the expected feature points;
determining the number of feature points of each first grid in the K first grids, and determining the statistical confidence score of each feature point based on the distance from each feature point in the first grid to the center of the first grid when the number of feature points of the first grid is greater than 1;
and screening the feature points of the first grid based on the statistical confidence scores of the feature points to obtain the feature point with the highest statistical confidence score in the first grid.
2. The method according to claim 1, wherein the determining the statistical confidence score of each feature point based on the distance from the feature point in the first mesh to the center of the first mesh comprises:
and determining the statistical confidence score of each feature point based on the distance between each feature point in the first grid and the center of the first grid and the response value of each feature point.
3. The method according to claim 2, wherein the determining the statistical confidence score of each feature point based on the distance between each feature point in the first grid and the center of the first grid and the response value of each feature point comprises:
and determining the statistical confidence score of each feature point by performing weighted summation on the reciprocal of the distance from each feature point in the first grid to the center of the first grid and the response value of each feature point.
4. The method of extracting feature points according to claim 1, wherein the first mesh division is performed based on a binary tree, a quadtree, or an octree segmentation method.
5. The method of extracting a feature point according to any one of claims 1 to 4, characterized by further comprising:
performing second grid division on the input image based on a preset grid size to obtain T second grids, wherein T is an integer greater than 1;
and performing preliminary extraction of feature points on each second grid in the T second grids to obtain the plurality of feature points.
6. The method of extracting feature points according to claim 5, further comprising:
constructing an image pyramid for the input image,
wherein the performing preliminary extraction of feature points for each of the T second meshes to obtain the plurality of feature points includes:
extracting extreme points of the second grid based on the image pyramid and an initial threshold;
when the number of the extreme points is 0, obtaining a first updating threshold value based on a first rule and the initial threshold value, wherein the first updating threshold value is smaller than the initial threshold value;
and extracting extreme points from the second grid again based on the image pyramid and the first updating threshold, wherein the extreme points extracted from the second grid are used as feature points.
7. The method according to claim 6, wherein the performing preliminary extraction of feature points for each of the T second meshes to obtain the plurality of feature points further comprises:
when the number of the extreme points is larger than a threshold value, obtaining a second updating threshold value based on a second rule and the initial threshold value, wherein the second updating threshold value is larger than the initial threshold value;
and extracting the extreme points of the second grid again based on the image pyramid and the second updating threshold.
8. The method of extracting a feature point according to any one of claims 1 to 4, characterized by further comprising:
performing scale pyramid pooling operation on the input image to obtain a plurality of images in different scale spaces;
and calculating feature descriptors corresponding to the feature points with the highest statistical confidence scores in the first grid in the images of the plurality of different scale spaces.
9. A method of reconstructing an image, comprising:
extracting the feature points in the first image by adopting the feature point extraction method of any one of claims 1 to 8, and calculating a first feature descriptor corresponding to the feature points in the first image;
extracting the feature points in the second image by adopting the feature point extraction method of any one of claims 1 to 8, and calculating a second feature descriptor corresponding to the feature points in the second image;
performing feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result;
and reconstructing the object to be reconstructed according to the matching result.
10. An extraction device of a feature point, characterized by comprising:
a first determination module for determining the number of a plurality of feature points preliminarily extracted for an input image;
the first dividing module is used for carrying out first grid division on the plurality of feature points based on a space dividing method to obtain K first grids when the number of the plurality of feature points is larger than the number of expected feature points, wherein K is larger than or equal to the number of the expected feature points;
a second determining module, configured to determine the number of feature points in each of the K first grids, and when the number of feature points in the first grid is greater than 1, determine a statistical confidence score of each feature point based on a distance from each feature point in the first grid to a center of the first grid;
and the screening module is used for screening the feature points of the first grid based on the statistical confidence scores of the feature points to obtain the feature point with the highest statistical confidence score in the first grid.
11. An apparatus for reconstructing an image, comprising:
a first extraction module, configured to extract feature points in a first image by using the feature point extraction method according to any one of claims 1 to 8, and calculate a first feature descriptor corresponding to the feature points in the first image;
a second extraction module, configured to extract the feature points in the second image by using the feature point extraction method according to any one of claims 1 to 8, and calculate a second feature descriptor corresponding to the feature points in the second image;
a matching module, configured to perform feature matching on the first image and the second image based on the first feature descriptor and the second feature descriptor to obtain a matching result;
and the reconstruction module is used for reconstructing the object to be reconstructed according to the matching result.
12. An unmanned device, comprising:
a processor;
a memory for storing the processor-executable instructions,
wherein the processor is configured to execute the method for extracting feature points according to any one of claims 1 to 8, or execute the method for reconstructing an image according to claim 9.
13. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the method of extracting a feature point according to any one of claims 1 to 8 or the method of reconstructing an image according to claim 9.
CN202111040342.2A 2021-09-06 2021-09-06 Feature point extraction method, image reconstruction method and device Pending CN113837202A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040342.2A CN113837202A (en) 2021-09-06 2021-09-06 Feature point extraction method, image reconstruction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111040342.2A CN113837202A (en) 2021-09-06 2021-09-06 Feature point extraction method, image reconstruction method and device

Publications (1)

Publication Number Publication Date
CN113837202A true CN113837202A (en) 2021-12-24

Family

ID=78962326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040342.2A Pending CN113837202A (en) 2021-09-06 2021-09-06 Feature point extraction method, image reconstruction method and device

Country Status (1)

Country Link
CN (1) CN113837202A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245260A (en) * 2023-05-12 2023-06-09 国网吉林省电力有限公司信息通信公司 Optimization method for deploying 5G base station based on substation resources
CN116245260B (en) * 2023-05-12 2023-08-08 国网吉林省电力有限公司信息通信公司 Optimization method for deploying 5G base station based on substation resources
CN116863342A (en) * 2023-09-04 2023-10-10 江西啄木蜂科技有限公司 Large-scale remote sensing image-based pine wood nematode dead wood extraction method
CN116863342B (en) * 2023-09-04 2023-11-21 江西啄木蜂科技有限公司 Large-scale remote sensing image-based pine wood nematode dead wood extraction method

Similar Documents

Publication Publication Date Title
US10943145B2 (en) Image processing methods and apparatus, and electronic devices
CN108416307B (en) Method, device and equipment for detecting pavement cracks of aerial images
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN107301402B (en) Method, device, medium and equipment for determining key frame of real scene
CN108764039B (en) Neural network, building extraction method of remote sensing image, medium and computing equipment
CN109685045B (en) Moving target video tracking method and system
CN110176023B (en) Optical flow estimation method based on pyramid structure
CN113837202A (en) Feature point extraction method, image reconstruction method and device
CN110310301B (en) Method and device for detecting target object
US20240078680A1 (en) Image segmentation method, network training method, electronic equipment and storage medium
CN112560619B (en) Multi-focus image fusion-based multi-distance bird accurate identification method
CN109685806B (en) Image significance detection method and device
CN114140683A (en) Aerial image target detection method, equipment and medium
CN113759338B (en) Target detection method and device, electronic equipment and storage medium
CN113159300A (en) Image detection neural network model, training method thereof and image detection method
CN116977674A (en) Image matching method, related device, storage medium and program product
CN116092035A (en) Lane line detection method, lane line detection device, computer equipment and storage medium
CN116310688A (en) Target detection model based on cascade fusion, and construction method, device and application thereof
CN116310899A (en) YOLOv 5-based improved target detection method and device and training method
CN112580442B (en) Behavior identification method based on multi-dimensional pyramid hierarchical model
CN114723883A (en) Three-dimensional scene reconstruction method, device, equipment and storage medium
CN113837201A (en) Feature point extraction method, image reconstruction method and device
Morar et al. GPU accelerated 2D and 3D image processing
CN113048950A (en) Base station antenna inclination angle measuring method and device, storage medium and computer equipment
CN116631319B (en) Screen display compensation method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination