CN109272041B - Feature point selection method and device - Google Patents


Info

Publication number
CN109272041B
CN109272041B (application CN201811105698.8A)
Authority
CN
China
Prior art keywords
feature point
feature points
target
target object
storage area
Prior art date
Legal status
Active
Application number
CN201811105698.8A
Other languages
Chinese (zh)
Other versions
CN109272041A (en)
Inventor
孙炼杰 (Sun Lianjie)
陈建冲 (Chen Jianchong)
高江涛 (Gao Jiangtao)
周毅 (Zhou Yi)
杨旭 (Yang Xu)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201811105698.8A
Publication of CN109272041A
Application granted
Publication of CN109272041B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a feature point selection method and device. The method includes: acquiring at least one acquired image of a target object acquired by an image acquisition unit at M acquisition positions, where the at least one acquired image of the target object acquired at the M acquisition positions is stored in M storage areas and M is a positive integer greater than or equal to 2; extracting feature points of at least one acquired image stored in each of the at least one storage area; obtaining attribute information of the feature points of the acquired images obtained in each storage area; and obtaining target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area.

Description

Feature point selection method and device
Technical Field
The present application relates to a feature point selection technology, and in particular, to a feature point selection method and apparatus.
Background
In the related art, object identification and identification of the pose of an object are both realized by matching the feature points of the object. Feature points are points that can represent the features of an object, so the selection of feature points plays a crucial role in identifying an object and its pose. At present, if too many feature points are selected, the identification time is greatly prolonged and real-time identification suffers; if too few feature points are selected, or the selected feature points are of poor quality, the identification accuracy is affected. How to select feature points so as to satisfy both the real-time requirement and the accuracy requirement of identification has therefore become an urgent technical problem.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide a feature point selection method and apparatus, which can at least improve the identification speed and the identification accuracy.
The technical scheme of the embodiment of the application is realized as follows:
An embodiment of the present application provides a feature point selection method, which includes the following steps:
acquiring at least one acquired image of a target object acquired by an image acquisition unit at M acquisition positions, where the at least one acquired image of the target object acquired at the M acquisition positions is stored in M storage areas, and M is a positive integer greater than or equal to 2;
extracting feature points of at least one acquired image stored in each of the at least one storage area;
obtaining attribute information of the feature points of the acquired images obtained in each storage area;
and obtaining target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area.
In the foregoing solution, obtaining the attribute information of the feature points of the acquired images obtained in each storage area, and obtaining the target feature points of the target object at the corresponding acquisition positions based on that attribute information, includes:
obtaining at least frequency information of each feature point appearing in all acquired images stored in the corresponding storage area; or obtaining at least frequency information of each feature point appearing in all acquired images stored in all storage areas;
and screening out the target feature points based on the frequency information.
In the foregoing solution, obtaining the attribute information of the feature points of the acquired images obtained in each storage area, and obtaining the target feature points of the target object at the corresponding acquisition positions based on that attribute information, includes:
obtaining at least descriptor information of each feature point obtained from different acquired images in the same storage area, where the descriptor information represents an attribute of the feature point;
obtaining similarity information between the descriptor information of feature points in different acquired images in the same storage area;
and obtaining the target feature points based on the similarity information.
In the above scheme, the method further includes:
the feature points of the at least one acquired image stored in the at least one storage area are feature points expressed in two-dimensional coordinates;
after the feature points expressed in two-dimensional coordinates are extracted, the method further includes:
performing coordinate conversion on the feature points to obtain feature points expressed in three-dimensional coordinates;
accordingly, the target feature points of the target object at each of the M acquisition positions are three-dimensional target feature points.
In the above scheme, the method further comprises:
reconstructing and/or identifying the target object at a corresponding acquisition position based on target feature points of the target object at the corresponding acquisition position.
An embodiment of the present application provides a feature point selection device, which includes:
a first acquisition unit, configured to acquire at least one acquired image of a target object acquired by an image acquisition unit at M acquisition positions, where the at least one acquired image of the target object acquired at the M acquisition positions is stored in M storage areas, and M is a positive integer greater than or equal to 2;
a first extraction unit, configured to extract feature points of at least one acquired image stored in each of the at least one storage area;
a second acquisition unit, configured to obtain attribute information of the feature points of the acquired images obtained in each storage area;
a third acquisition unit, configured to obtain target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area.
In the foregoing solution, the third acquisition unit is further configured to:
obtain at least frequency information of each feature point appearing in all acquired images stored in the corresponding storage area; or obtain at least frequency information of each feature point appearing in all acquired images stored in all storage areas;
and screen out the target feature points based on the frequency information.
In the foregoing solution, the third acquisition unit is further configured to:
obtain at least descriptor information of each feature point obtained from different acquired images in the same storage area, where the descriptor information represents an attribute of the feature point;
obtain similarity information between the descriptor information of feature points in different acquired images in the same storage area;
and obtain the target feature points based on the similarity information.
In the above scheme, the feature points of the at least one acquired image stored in the at least one storage area are feature points expressed in two-dimensional coordinates;
the device further includes: a conversion unit, configured to perform coordinate conversion on the feature points to obtain feature points expressed in three-dimensional coordinates;
accordingly, the target feature points of the target object at each of the M acquisition positions are three-dimensional target feature points.
In the above scheme, the apparatus further comprises: a reconstruction and/or identification unit for: reconstructing and/or identifying the target object at a corresponding acquisition position based on target feature points of the target object at the corresponding acquisition position.
The feature point selection method and device provided by the embodiments of the present application include: acquiring at least one acquired image of a target object acquired by an image acquisition unit at M acquisition positions, where the at least one acquired image of the target object acquired at the M acquisition positions is stored in M storage areas, and M is a positive integer greater than or equal to 2; extracting feature points of at least one acquired image stored in each of the at least one storage area; obtaining attribute information of the feature points of the acquired images obtained in each storage area; and obtaining target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area.
Compared with the related art, in which all feature points extracted from the acquired images are used for object identification and/or reconstruction, the number of feature points selected here is moderate, so the identification time of object identification and/or reconstruction is not prolonged and real-time identification and/or reconstruction can be ensured. Meanwhile, the feature points used for object identification and/or reconstruction are high-quality feature points selected based on their attribute information, which further ensures identification and/or reconstruction accuracy.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of a feature point selection method provided in the present application;
FIG. 2 is a schematic illustration of an acquisition location provided herein;
fig. 3 is a schematic flowchart of a second embodiment of a feature point selection method provided in the present application;
fig. 4 is a schematic flowchart of a third embodiment of a feature point selection method provided in the present application;
fig. 5 is a schematic structural diagram of an embodiment of a feature point selecting apparatus provided in the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other arbitrarily provided there is no conflict. The steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
As will be appreciated by those skilled in the art, the focus of the embodiments of the present application is how to select good-quality feature points, so as to facilitate subsequent object reconstruction and/or identification, increase the identification (and/or reconstruction) speed, and improve the identification (and/or reconstruction) accuracy. After the high-quality feature points are selected, object reconstruction and/or object identification is performed with them, or they are used for any other purpose for which feature points are meaningful.
The present application provides a first embodiment of a feature point selection method. As shown in fig. 1, the method includes:
step 101: acquiring at least one acquired image which is acquired by an image acquisition unit at M acquisition positions and aims at a target object, wherein the at least one acquired image which is acquired at the M acquisition positions and aims at the target object is stored in M storage areas, and M is a positive integer which is more than or equal to 2;
the main body for executing the steps 101-104 is a selection device of the feature point.
In step 101, the feature point selecting means reads at least one stored image stored in at least one of the M storage areas. Wherein the captured images stored in the different storage areas are images captured by the image capturing unit from different capturing positions. I.e. the captured images stored in the same storage area are captured by the image capturing unit from the same capture location.
Step 102: extracting feature points of at least one acquired image stored in each of the at least one storage area;
In this step, the feature point selection device extracts feature points of one or more acquired images stored in one or more storage areas. For a given storage area, the feature points of all acquired images stored in that area may be extracted, or only those of some of the images; preferably, feature points are extracted from all acquired images in the same storage area. The number of feature points extracted from different acquired images in the same storage area may be the same or different, preferably the same. Likewise, the number of feature points extracted from an acquired image may be the same or different across storage areas, preferably the same.
Step 103: obtaining attribute information of the feature points of the acquired images obtained in each storage area;
here, the attribute information of the feature point may be regarded as frequency information of occurrence of the feature point, and may also be regarded as key points and descriptor information of the feature point. As will be appreciated by those skilled in the art, in the field of image processing, the feature points of an image are typically made up of two parts: a Keypoint (Keypoint) and a Descriptor (Descriptor). The key point refers to the position of the feature point in the image, and may of course include information such as direction and scale; the descriptor is generally a vector, and can be used to describe information of pixels around the key point, such as color of the pixel, gray level of the pixel, number of pixels, and the like, according to the design specification.
Step 104: obtaining target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area.
Here, unlike the related art, in which all feature points extracted from the acquired images are used for subsequent object identification and/or reconstruction, the embodiment of the present application screens the feature points based on the attribute information of the feature points extracted from each storage area, and uses only the selected subset as target feature points for subsequent object identification and/or reconstruction. Because the target feature points are selected based on attribute information, the number of feature points is moderate, the identification time of object identification and/or reconstruction is not prolonged, and real-time identification and/or reconstruction can be ensured. Meanwhile, the feature points used for object identification and/or reconstruction are high-quality feature points, which further ensures identification and/or reconstruction accuracy.
It can be understood that in step 101 the image acquisition unit is a camera device, such as a still camera or a video camera, in the feature point selection device. The target object may be any object that needs to be reconstructed and/or identified, such as a car, a building, or a person. In the embodiment of the present application, the same target object is not acquired at only one acquisition position; instead, images of the target object are acquired at M (M ≧ 2) acquisition positions, under different acquisition angles. In addition, one storage area is opened up per acquisition position, so there are as many storage areas as acquisition positions, and the images acquired at each acquisition position are stored in the corresponding storage area. In general, the number of images acquired at the same acquisition position may be one, or two or more, preferably two or more. The number of images acquired at different acquisition positions may be the same or different.
It should be understood that an acquisition position in the embodiments of the present application refers to a position within a certain latitude and longitude range. For example, (30° N, 120° E), (32° N, 124° E), and (34° N, 128° E) differ somewhat in latitude and longitude but are considered, in the broad sense, to belong to the same acquisition position. Storing the images acquired at different acquisition positions in the corresponding storage areas can be understood with reference to fig. 2. In the embodiment of the present application, a sphere is constructed with the center coordinate of the target object as its center and with a radius chosen at least large enough that the camera device lies outside the spherical surface; shooting the target object with the camera outside the spherical surface is then equivalent to shooting it from some acquisition position in space. Considering that any object on earth appears rather small from space, the spherical surface is divided into areas according to longitude and latitude information, for example one spherical area per 30 degrees of longitude and latitude. As long as the camera's shooting position falls within a given spherical area, it is considered the same acquisition position in the broad sense, no matter how many distinct positions within that area it shoots from. As shown in fig. 2, with two spherical areas: in spherical area 1, the shooting positions may be positions 1, 2, and 3; in spherical area 2, the shooting positions may be positions 4, 5, and 6. In the embodiment of the present application, shooting positions 1, 2, and 3 are considered one acquisition position, and shooting positions 4, 5, and 6 another. The images shot from positions 1, 2, and 3 are stored in one storage area, and the images shot from positions 4, 5, and 6 in another. It can be seen that the number of acquisition positions equals the number of spherical areas. Dividing acquisition positions this way guarantees a certain difference in shooting angle between images stored in the same storage area, and hence a certain difference between the extracted feature points, which avoids the problem that images stored in the same storage area are nearly identical and the extracted feature points are of little practical use.
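To make the division concrete, the following sketch bins camera shooting positions, given as latitude/longitude on the viewing sphere, into 30-degree cells, one storage area per cell. The cell size and all helper names are illustrative assumptions rather than anything prescribed by the patent.

```python
# Minimal sketch, assuming shooting positions are expressed as (lat, lon)
# on the viewing sphere and that 30-degree cells define acquisition positions.
import math

def acquisition_cell(lat_deg, lon_deg, cell_size_deg=30.0):
    """Map a shooting position to its broad acquisition-position cell."""
    lat_idx = int(math.floor((lat_deg + 90.0) / cell_size_deg))
    lon_idx = int(math.floor((lon_deg + 180.0) / cell_size_deg))
    return (lat_idx, lon_idx)

# (30 N, 120 E), (32 N, 124 E), and (34 N, 128 E) land in the same cell,
# so the images shot there share one storage area.
storage_areas = {}
for lat, lon, image in [(30, 120, "img_1.png"),
                        (32, 124, "img_2.png"),
                        (34, 128, "img_3.png")]:
    storage_areas.setdefault(acquisition_cell(lat, lon), []).append(image)
assert len(storage_areas) == 1
```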
It can be understood that the feature points extracted from the acquired images are all feature points expressed in two-dimensional coordinates, whereas the feature points applied in the subsequent object reconstruction and/or identification process should be three-dimensional feature points. Based on this, in the embodiment of the present application, after the feature points expressed in two-dimensional coordinates are extracted and before the target feature points are obtained, the feature point selection method further includes: performing coordinate conversion on the feature points expressed in two-dimensional coordinates to obtain feature points expressed in three-dimensional coordinates. Accordingly, the target feature points of the target object at each of the M acquisition positions are three-dimensional target feature points.
In an optional embodiment, coordinate conversion is not performed right after the feature points expressed in two-dimensional coordinates are extracted; instead, after the target feature points are obtained, coordinate conversion is performed on the target feature points to obtain target feature points expressed in three-dimensional coordinates. Either way, the feature points used for object reconstruction and/or identification are feature points expressed in three-dimensional coordinates (three-dimensional feature points).
In the above scheme, the conversion of feature points from two-dimensional to three-dimensional coordinates can be performed with the related art, for example triangulation, a Boolean model, or map projection transformation. For the specific implementation process, please refer to the related description, which is not repeated here.
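As one of the techniques just listed, triangulation recovers a 3D point from the same feature observed at two shooting positions. The sketch below uses OpenCV's triangulatePoints as a hypothetical concrete choice; the projection matrices P1 and P2 are assumed to be known from camera calibration, which the patent does not detail.

```python
# Minimal sketch, assuming calibrated cameras: P1 and P2 are the 3x4
# projection matrices of two shooting positions, and pts_cam1/pts_cam2
# are matched 2D feature point lists of equal length.
import cv2
import numpy as np

def to_3d(pts_cam1, pts_cam2, P1, P2):
    """Triangulate matched 2D feature points into 3D coordinates."""
    pts1 = np.asarray(pts_cam1, dtype=np.float64).T        # shape (2, N)
    pts2 = np.asarray(pts_cam2, dtype=np.float64).T
    points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous, (4, N)
    return (points_h[:3] / points_h[3]).T                  # Euclidean, (N, 3)
```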
In an optional embodiment of the present application, after step 104, the method further includes: reconstructing and/or identifying the target object at a corresponding acquisition position based on the target feature points of the target object at that acquisition position. It can be understood that the target feature points are selected, using the attribute information of the feature points, from the images shot at acquisition position A, and are then used for stereo reconstruction and/or identification of the target object at acquisition position A. Because the target feature points are selected according to the attribute information of the feature points, they are high-quality feature points, and they can at least improve the speed and accuracy of stereo reconstruction and/or identification. For the specific implementation of object reconstruction and/or identification, please refer to the related description, which is not repeated here.
The second embodiment of the feature point selection method provided in the present application further describes a specific implementation of obtaining the attribute information of the feature points of the acquired images in each storage area and obtaining the target feature points of the target object at the corresponding acquisition positions based on that attribute information (steps 103 and 104 of the first embodiment).
As shown in fig. 3, the second embodiment includes:
step 301: acquiring at least one acquired image which is acquired by an image acquisition unit at M acquisition positions and aims at a target object, wherein the at least one acquired image which is acquired at the M acquisition positions and aims at the target object is stored in M storage areas, and M is a positive integer which is more than or equal to 2;
step 302: extracting feature points of at least one acquired image stored in each storage area of the at least one storage area;
step 303: at least acquiring frequency information of all collected images stored in the corresponding storage areas of all the characteristic points; or at least acquiring frequency information of all collected images stored in all storage areas by each feature point;
step 304: and screening out the target characteristic points based on the frequency information.
Steps 301 to 304 are executed by the feature point selection device. For the understanding of steps 301 to 302, refer to the earlier description of steps 101 to 102, which is not repeated here.
In this embodiment, the attribute information of a feature point is frequency information of the feature point's occurrence, and the target feature points are screened from the extracted feature points according to the frequency with which they appear in the acquired images. Target feature points screened by frequency are necessarily feature points that appear more often, and such points are often the key feature points for object identification and/or reconstruction; the target feature points screened this way are therefore necessarily high-quality feature points, and they can at least improve the speed and accuracy of the subsequent object identification and/or reconstruction process.
The following example aids understanding of this solution. Suppose 10 acquired images are read from each storage area and a certain number of feature points, say 200, are extracted from each image. Good-quality feature points can be selected from the 200 × 10 = 2000 feature points as follows.
Take feature point A, extracted from the 1st image, as an example, and assume the 1st image was read from storage area 1. The frequency (frequency information) with which feature point A appears in all acquired images stored in storage area 1 is calculated; if it reaches a first predetermined frequency value, feature point A is retained, otherwise it is filtered out. For instance, if 100 acquired images are stored in storage area 1 and feature point A appears in 80 of them, its frequency of appearance in storage area 1 is 80/100 = 80%. Whether 80% reaches the first predetermined frequency value of 60% is then judged; since it does, feature point A is retained and taken as a target feature point.
Alternatively, the frequency with which feature point A appears in all acquired images stored in all storage areas can be calculated and compared against a second predetermined frequency value: if the value is reached, feature point A is retained, otherwise it is filtered out. Assume there are 10 storage areas, each storing 100 acquired images. The frequency of feature point A over the 10 × 100 = 1000 acquired images is calculated; if feature point A appears in only 700 of the 1000 images, its frequency of appearance is 700/1000 = 70%. Whether 70% reaches the second predetermined frequency value of 50% is then judged; since it does, feature point A is retained and taken as a target feature point.
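A minimal sketch of this frequency screen follows, under two stated assumptions: matches of "the same" feature point across images have already been established (for example by descriptor matching), and occurrence counts per feature are available. The function and variable names are illustrative.

```python
# Minimal sketch of the frequency-based screen (assumption: occurrence
# counts per feature point were already obtained by cross-image matching).
def frequency_filter(occurrences, num_images, threshold=0.60):
    """occurrences: {feature_id: number of images containing the feature}.
    Keep features whose appearance frequency reaches the predetermined value."""
    return {fid for fid, count in occurrences.items()
            if count / num_images >= threshold}

# Feature point A appears in 80 of the 100 images of storage area 1:
# 80/100 = 80% >= 60%, so A is retained as a target feature point.
targets = frequency_filter({"A": 80, "B": 40}, num_images=100)
assert targets == {"A"}
```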
In the above scheme, the feature points that appear with higher frequency in the acquired images are selected from the extracted feature points as target feature points; that is, the frequently appearing feature points are fed to the object identification and/or reconstruction model as high-quality feature points, which improves identification and/or reconstruction speed and accuracy.
The third embodiment of the feature point selection method provided in the present application further describes another specific implementation of obtaining the attribute information of the feature points of the acquired images in each storage area and obtaining the target feature points of the target object at the corresponding acquisition positions based on that attribute information (steps 103 and 104 of the first embodiment).
As shown in fig. 4, the third embodiment includes:
step 401: acquiring at least one acquired image which is acquired by an image acquisition unit at M acquisition positions and aims at a target object, wherein the at least one acquired image which is acquired at the M acquisition positions and aims at the target object is stored in M storage areas, and M is a positive integer which is more than or equal to 2;
step 402: extracting feature points of at least one acquired image stored in each storage area of the at least one storage area;
step 403: at least obtaining descriptor information of each feature point obtained from different collected images in the same storage area, wherein the descriptor information is information for representing the attribute of the feature point;
step 404: acquiring similarity information between descriptor information of feature points in different acquired images in the same storage area;
step 405: and obtaining the target characteristic points based on the similarity information.
Steps 401 to 405 are executed by the feature point selection device. For the understanding of steps 401 to 402, refer to the earlier description of steps 101 to 102, which is not repeated here.
In this embodiment, the attribute information of a feature point is its key point and descriptor information. The descriptor describes information about the pixels around the key point, such as pixel color, pixel gray level, and the number of pixels. Different feature points have different surrounding-pixel information, while the same feature point has the same surrounding-pixel information; since the surrounding-pixel information can thus also characterize a feature point, the descriptor information of the feature points is used as the basis for screening out high-quality feature points.
The following example aids understanding of this solution. Suppose 10 acquired images are read from each storage area and 200 feature points are extracted from each image. The descriptor information of each of the 200 × 10 = 2000 feature points obtained from storage area 1 is read, giving 2000 descriptors, and the similarity between every pair of descriptors is calculated. If the similarity between one feature point's descriptor and more than a certain proportion, for example more than 10%, of the other 1999 descriptors reaches a third predetermined threshold, for example 55%, the feature point corresponding to that descriptor can be considered to appear frequently in the acquired images stored in storage area 1; it is then an important feature point for reconstruction and/or identification and may be taken as a target feature point.
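A minimal sketch of this descriptor-similarity screen follows. The patent does not fix a similarity measure, so cosine similarity over float descriptors is assumed here; the 55% threshold and the 10% support ratio are the example values above.

```python
# Minimal sketch of the similarity-based screen (assumption: cosine
# similarity over an (N, D) float descriptor matrix; the patent does not
# prescribe a particular similarity measure).
import numpy as np

def similarity_filter(descriptors, sim_threshold=0.55, support_ratio=0.10):
    """Keep a feature point if its descriptor is similar enough to more
    than `support_ratio` of the other descriptors in the storage area."""
    d = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    sim = d @ d.T                     # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)        # ignore self-similarity
    support = (sim >= sim_threshold).sum(axis=1)
    needed = support_ratio * (len(descriptors) - 1)
    return np.where(support > needed)[0]   # row indices of target points
```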
In this scheme, high-quality feature points are screened out according to the descriptor information of the feature points, which can at least increase the speed and accuracy of reconstruction and/or identification.
In the fourth embodiment of the feature point selection method provided by the present application, the specific implementation of obtaining the target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area (step 104 of the first embodiment) may also combine the second and third embodiments: the target feature points are screened out by jointly considering the frequency with which the feature points appear in the acquired images and the similarity between their descriptor information. Compared with the related art, in which all feature points extracted from the acquired images are used for subsequent object identification and/or reconstruction, using high-quality feature points can at least improve the speed and accuracy of identification and/or reconstruction.
The specific process of screening out target feature points by both the frequency of appearance and the descriptor similarity should be understood as the combination of the second and third embodiments and is not repeated here; a sketch of one possible combination follows. The first predetermined frequency value, the second predetermined frequency value, and the third predetermined threshold may be the same or different, as the case may be.
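Building on the two sketches above, and assuming (since the patent leaves the combination open) that both screens must pass, i.e. their results are intersected:

```python
# Minimal sketch of the combined screen (assumption: intersection of the
# frequency screen and the similarity screen; the patent only says the two
# criteria are combined, not how). Reuses frequency_filter and
# similarity_filter from the earlier sketches.
def combined_filter(occurrences, num_images, descriptors, row_to_id):
    """row_to_id maps descriptor matrix rows to feature ids."""
    by_frequency = frequency_filter(occurrences, num_images)
    by_similarity = {row_to_id[i] for i in similarity_filter(descriptors)}
    return by_frequency & by_similarity
```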
The present application provides a feature point selection device. As shown in fig. 5, the device includes: a first acquisition unit 500, a first extraction unit 501, a second acquisition unit 502, and a third acquisition unit 503, wherein:
the first acquisition unit 500 is configured to acquire at least one acquired image of the target object acquired by the image acquisition unit at M acquisition positions, where the at least one acquired image of the target object acquired at the M acquisition positions is stored in M storage areas, and M is a positive integer greater than or equal to 2;
the first extraction unit 501 is configured to extract feature points of at least one acquired image stored in each of the at least one storage area;
the second acquisition unit 502 is configured to obtain attribute information of the feature points of the acquired images obtained in each storage area;
the third acquisition unit 503 is configured to obtain target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area.
The third acquisition unit 503 is further configured to:
obtain at least frequency information of each feature point appearing in all acquired images stored in the corresponding storage area; or obtain at least frequency information of each feature point appearing in all acquired images stored in all storage areas;
and screen out the target feature points based on the frequency information.
The third acquisition unit 503 is further configured to:
obtain at least descriptor information of each feature point obtained from different acquired images in the same storage area, where the descriptor information represents an attribute of the feature point;
obtain similarity information between the descriptor information of feature points in different acquired images in the same storage area;
and obtain the target feature points based on the similarity information.
In the above scheme, the feature points of the at least one acquired image stored in the at least one storage area are feature points expressed in two-dimensional coordinates;
the device further includes: a conversion unit, configured to perform coordinate conversion on the feature points to obtain feature points expressed in three-dimensional coordinates;
accordingly, the target feature points of the target object at each of the M acquisition positions are three-dimensional target feature points.
In the above scheme, the device further includes a reconstruction and/or identification unit, configured to reconstruct and/or identify the target object at a corresponding acquisition position based on the target feature points of the target object at that acquisition position.
It should be noted that, because the feature point selection device of the embodiment of the present application solves its problem on a principle similar to that of the feature point selection method, the implementation process and principle of the device can be understood by referring to those of the method, and the repeated details are not described again.
An embodiment of the present application further provides a storage medium including a stored program which, when run, executes at least the feature point selection method shown in fig. 1, fig. 3, and/or fig. 4.
An embodiment of the present application further provides a processor configured to run a program which, when run, executes at least the feature point selection method shown in fig. 1, fig. 3, and/or fig. 4.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method for selecting feature points is characterized by comprising the following steps:
acquiring at least one acquired image of a target object acquired by an image acquisition unit at each of M acquisition positions, wherein the at least one acquired image of the target object acquired at each acquisition position is stored in a corresponding storage area, and M is a positive integer greater than or equal to 2; each acquisition position is an area position, and when the image acquisition unit acquires images of the target object at an acquisition position, the acquisition covers acquisition angles corresponding to at least two shooting positions;
extracting feature points of at least one acquired image stored in each of the at least one storage area;
obtaining attribute information of the feature points of the acquired images obtained in each storage area;
obtaining target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area;
reconstructing and/or identifying the target object at a corresponding acquisition position based on target feature points of the target object at the corresponding acquisition position.
2. The method according to claim 1, wherein obtaining the attribute information of the feature points of the acquired images obtained in each storage area, and obtaining the target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area, comprises:
at least obtaining frequency information of each feature point appearing in all acquired images stored in the corresponding storage area; or at least obtaining frequency information of the feature points appearing in all acquired images stored in all storage areas;
and screening out the target feature points based on the frequency information.
3. The method according to claim 1, wherein obtaining the attribute information of the feature points of the acquired images obtained in each storage area, and obtaining the target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area, comprises:
at least obtaining descriptor information of each feature point obtained from different acquired images in the same storage area, wherein the descriptor information represents an attribute of the feature point;
obtaining similarity information between the descriptor information of feature points in different acquired images in the same storage area;
and obtaining the target feature points based on the similarity information.
4. The method of claim 1, further comprising:
the feature points of the at least one acquired image stored in the at least one storage area are feature points expressed in two-dimensional coordinates;
after extracting the feature points expressed in two-dimensional coordinates, the method further comprises:
performing coordinate conversion on the feature points to obtain feature points expressed in three-dimensional coordinates;
accordingly, the target feature points of the target object at each of the M acquisition positions are three-dimensional target feature points.
5. A feature point selection apparatus, characterized in that the apparatus comprises:
a first acquisition unit, configured to acquire at least one acquired image of a target object acquired by an image acquisition unit at each of M acquisition positions, wherein the at least one acquired image of the target object acquired at each acquisition position is stored in a corresponding storage area, and M is a positive integer greater than or equal to 2; each acquisition position is an area position, and when the image acquisition unit acquires images of the target object at an acquisition position, the acquisition covers acquisition angles corresponding to at least two shooting positions;
a first extraction unit, configured to extract feature points of at least one acquired image stored in each of the at least one storage area;
a second acquisition unit, configured to obtain attribute information of the feature points of the acquired images obtained in each storage area;
a third obtaining unit, configured to obtain target feature points of the target object at the corresponding acquisition positions based on the attribute information of the feature points of the acquired images obtained in each storage area;
a reconstruction and/or identification unit, configured to reconstruct and/or identify the target object at a corresponding acquisition position based on target feature points of the target object at the corresponding acquisition position.
6. The apparatus of claim 5, wherein the third obtaining unit is further configured to:
at least obtain frequency information of each feature point appearing in all acquired images stored in the corresponding storage area; or at least obtain frequency information of each feature point appearing in all acquired images stored in all storage areas;
and screen out the target feature points based on the frequency information.
7. The apparatus of claim 5, wherein the third obtaining unit is further configured to:
at least obtain descriptor information of each feature point obtained from different acquired images in the same storage area, wherein the descriptor information represents an attribute of the feature point;
obtain similarity information between the descriptor information of feature points in different acquired images in the same storage area;
and obtain the target feature points based on the similarity information.
8. The apparatus according to claim 5, wherein the feature points of the at least one captured image stored in the at least one storage area are feature points expressed in two-dimensional coordinates;
the device further comprises: the conversion unit is used for carrying out coordinate conversion on the characteristic points to obtain the characteristic points expressed by three-dimensional coordinates;
accordingly, the target feature points of the target object at each of the M acquisition positions are three-dimensional target feature points.
CN201811105698.8A 2018-09-21 2018-09-21 Feature point selection method and device Active CN109272041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811105698.8A CN109272041B (en) 2018-09-21 2018-09-21 Feature point selection method and device


Publications (2)

Publication Number Publication Date
CN109272041A CN109272041A (en) 2019-01-25
CN109272041B true CN109272041B (en) 2021-10-22

Family

ID=65198000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811105698.8A Active CN109272041B (en) 2018-09-21 2018-09-21 Feature point selection method and device

Country Status (1)

Country Link
CN (1) CN109272041B (en)


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5588812B2 (en) * 2010-09-30 2014-09-10 日立オートモティブシステムズ株式会社 Image processing apparatus and imaging apparatus using the same
US9036031B2 (en) * 2010-12-23 2015-05-19 Samsung Electronics Co., Ltd. Digital image stabilization method with adaptive filtering
KR20140112909A (en) * 2013-03-14 2014-09-24 삼성전자주식회사 Electronic device and method for generating panorama image
CN104616348A (en) * 2015-01-15 2015-05-13 东华大学 Method for reconstructing fabric appearance based on multi-view stereo vision
CN104809724A (en) * 2015-04-21 2015-07-29 电子科技大学 Automatic precise registration method for multiband remote sensing images
CN206931119U (en) * 2016-10-21 2018-01-26 微景天下(北京)科技有限公司 Image mosaic system
CN108090497B (en) * 2017-12-28 2020-07-07 Oppo广东移动通信有限公司 Video classification method and device, storage medium and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336963A (en) * 2013-07-08 2013-10-02 天脉聚源(北京)传媒科技有限公司 Method and device for image feature extraction
CN105654547A (en) * 2015-12-23 2016-06-08 中国科学院自动化研究所 Three-dimensional reconstruction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Matching and positioning algorithm based on feature point similarity; Zhen Weisong et al.; Journal of Wuhan Institute of Technology (《武汉工程大学学报》); 2015-04-30; Vol. 33, No. 4; Abstract *

Also Published As

Publication number Publication date
CN109272041A (en) 2019-01-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant