CN111814792B - Feature point extraction and matching method based on RGB-D image


Info

Publication number
CN111814792B
Authority
CN
China
Prior art keywords
point
points
feature
attribute
rgb
Prior art date
Legal status
Active
Application number
CN202010923532.8A
Other languages
Chinese (zh)
Other versions
CN111814792A (en)
Inventor
李月华 (Li Yuehua)
谢天 (Xie Tian)
李小倩 (Li Xiaoqian)
朱世强 (Zhu Shiqiang)
Current Assignee
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202010923532.8A
Publication of CN111814792A
Application granted
Publication of CN111814792B
Status: Active

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing > G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40 Extraction of image or video features > G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning > G06V10/74 Image or video pattern matching; Proximity measures in feature spaces > G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries > G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a feature point extraction and matching method based on RGB-D images. The method first calibrates an RGB-D camera to obtain its internal and external parameters, then collects a picture and corrects it according to those parameters. Feature points are extracted from the corrected RGB image by a local feature extraction method, and the depth of each feature point is read from the depth map. A region of interest around each feature point is computed in the depth map from the feature point's depth, the pixels in the region of interest are converted into a three-dimensional point cloud, and the n points closest to the feature point are selected to form its neighboring point cloud. Finally, the covariance matrix of the neighboring point cloud is computed and decomposed by singular value decomposition to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small; the spatial geometric attribute of each feature point is judged from the magnitude relations of these eigenvalues, and feature matching is performed only between feature points of the same attribute. The method is simple in principle and achieves high image-matching accuracy.

Description

Feature point extraction and matching method based on RGB-D image
Technical Field
The invention relates to the field of image feature extraction, in particular to a feature point extraction and matching method based on RGB-D images.
Background
Many feature extraction and matching methods exist for 2-dimensional RGB images, such as SIFT, SURF, FAST, BRIEF, and ORB, and these features are widely used in practical algorithms. However, in scenes whose features are highly similar, such as the Gobi, deserts, or extraterrestrial terrain (e.g. the Moon or Mars), feature extraction is prone to inaccuracy and mismatching.
Disclosure of Invention
To address the defects of the prior art, the invention provides a feature point extraction and matching method based on RGB-D images. The specific technical scheme is as follows:
A feature point extraction and matching method based on RGB-D images comprises the following steps:
S1: calibrating the RGB-D camera to obtain its internal and external parameters;
S2: collecting a picture and correcting it according to the internal and external parameters obtained in S1 to obtain the corrected RGB image;
S3: extracting feature points from the corrected RGB image by a local feature extraction method, and acquiring the depth d_depth of each feature point from the depth map;
S4: according to the depth d_depth of the feature point, calculating a region of interest near the feature point in the depth map, converting the pixels in the region of interest into a three-dimensional point cloud, and selecting the n points closest to the feature point to obtain the neighboring three-dimensional point cloud;
S5: calculating the covariance matrix of the neighboring three-dimensional point cloud and performing singular value decomposition on it to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small. When the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10, and λ1/λ3 > 10 (two comparable large eigenvalues and one small one, i.e. a locally planar neighborhood; for example, eigenvalues 4.0, 3.2, 0.2 give ratios 1.25, 16, and 20), the attribute of the feature point is set to surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10, and λ2/λ3 < 1.5 (one dominant eigenvalue, i.e. a locally linear neighborhood), the attribute is set to line point; when all three ratios satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5, and λ2/λ3 < 1.5 (three comparable eigenvalues, i.e. an isotropic neighborhood), the attribute is set to cluster point;
S6: performing feature matching between feature points belonging to the same attribute.
Further, the step S4 is implemented by the following sub-steps:
S4.1: determining an r × r region of interest centered on the feature point, and filtering out pixels whose depth value is smaller than 1 m or larger than 7 m; where r is the side length of the region of interest, expressed in pixels, and is obtained by computing k/d_depth and rounding up; k is a proportionality coefficient that controls the size of the region of interest;
s4.2: will feelTwo-dimensional image coordinates of region of interest pixels (u i,v i) Point cloud converted into world coordinate system (x i,y i, z i) I =1,2 · · wherein the three-dimensional points corresponding to the feature points are represented by (a) ((b))x 0,y 0, z 0);
S4.3: comparing all the point clouds: (x i,y i, z i) Three-dimensional points corresponding to feature points: (x 0,y 0, z 0) And selecting n three-dimensional points closest to the three-dimensional points corresponding to the feature points to form an adjacent three-dimensional point cloud set { (x 1,y 1, z 1), (x 2,y 2, z 2),···,(x n,y n, z n) }, i.e. a great faceX i}={(x i,y i, z i)}, i=1,2···n。
Further, the step S5 is implemented by the following sub-steps:
S5.1: calculating the mean of the neighboring point set according to

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \bar{X} = (\bar{x}, \bar{y}, \bar{z})$$
S5.2: calculating the covariance matrix Σ of the point set formed by the feature point's neighboring points:

$$\Sigma = \frac{1}{n}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)\left(X_i - \bar{X}\right)^{T}$$
S5.3: performing singular value decomposition (SVD) on the covariance matrix:

$$\Sigma = R \,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\, R^{T}$$
where R is an orthogonal matrix and λ1, λ2, λ3 are the three eigenvalues arranged from large to small. When the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10, and λ1/λ3 > 10, the attribute of the feature point is set to surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10, and λ2/λ3 < 1.5, the attribute is set to line point; when all three ratios satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5, and λ2/λ3 < 1.5, the attribute is set to cluster point.
Further, the side length r of the region of interest satisfies 4 ≤ r ≤ 10.
The invention has the following beneficial effects:
the feature point extraction method is simple in principle, can improve the accuracy of image matching of RGB-D equipment under the condition of similar texture, and can be applied to various image feature extraction containing RGB information and depth information.
Drawings
FIG. 1 is a flow chart of a feature point extraction and matching method based on RGB-D images according to the present invention;
FIG. 2 is a diagram of the effect of extraction and matching based on ORB features;
FIG. 3 is a diagram illustrating the effect of the method of the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The feature point extraction and matching method based on RGB-D images is designed specifically for RGB-D camera equipment, i.e. motion-sensing devices such as the Kinect and Xtion that can simultaneously acquire image information and object depth information. Before starting, several checkerboard pictures are shot from different viewing angles with the camera to be calibrated, and the internal and external parameters of the RGB camera and the depth camera are then calculated separately with the GML Calibration Toolbox.
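As a concrete illustration of steps S1 and S2, the following Python sketch (OpenCV + NumPy, used for all examples in this description) undistorts a captured frame. The intrinsic matrix K, the distortion coefficients dist, and the file name are illustrative placeholders standing in for the values actually produced by the checkerboard calibration; they are not values from the patent.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients, standing in for the
# values exported by the checkerboard calibration (S1).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
dist = np.array([0.05, -0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

rgb_raw = cv2.imread("rgb_frame.png")  # hypothetical captured RGB picture
rgb = cv2.undistort(rgb_raw, K, dist)  # corrected RGB image (S2)
```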
As shown in FIG. 1, the feature point extraction and matching method based on RGB-D images of the present invention comprises the following steps:
S1: calibrating the RGB-D camera to obtain its internal and external parameters;
S2: collecting a picture and correcting it according to the internal and external parameters obtained in S1 to obtain the corrected RGB image;
S3: extracting feature points from the corrected RGB image by a local feature extraction method, and acquiring the depth d_depth of each feature point from the depth map;
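A minimal sketch of S3, continuing the example above. The patent only requires some local feature extraction method; ORB is chosen here because it is the baseline of FIG. 2. The depth map is assumed to be registered to the RGB image and stored as 16-bit millimeters, as is typical for Kinect-class sensors; both assumptions are this sketch's, not the patent's.

```python
orb = cv2.ORB_create(nfeatures=1000)
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# Depth map registered to the RGB image, assumed uint16 in millimeters.
depth_map = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)

def feature_depth(kp):
    """d_depth of a keypoint, in meters."""
    u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
    return depth_map[v, u] / 1000.0
```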
S4: according to the depth d_depth of the feature point, calculating a region of interest near the feature point in the depth map, converting the pixels in the region of interest into a three-dimensional point cloud, and selecting the n points closest to the feature point to obtain the neighboring three-dimensional point cloud;
S4.1: determining an r × r region of interest centered on the feature point, and filtering out pixels whose depth value is smaller than 1 m or larger than 7 m;
where r is the side length of the region of interest, expressed in pixels, and is obtained by computing k/d_depth and rounding up; k is a proportionality coefficient that controls the size of the region of interest. The farther the three-dimensional point represented by the feature point is from the camera, the smaller the selected area, which ensures that the three-dimensional points corresponding to the selected region-of-interest pixels lie near the three-dimensional point represented by the feature point. The value of r preferably ranges from 4 to 10.
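A sketch of S4.1 under the same assumptions; k = 30 is an illustrative value for the proportionality coefficient, which the patent leaves unspecified.

```python
import math

def roi_depth_pixels(depth_map, kp, d_depth, k=30.0):
    """r x r window around the keypoint (r = ceil(k / d_depth), clamped to
    the preferred range 4..10), keeping only pixels with depth in [1 m, 7 m]."""
    r = min(max(math.ceil(k / d_depth), 4), 10)
    half = r // 2
    u0, v0 = int(round(kp.pt[0])), int(round(kp.pt[1]))
    h, w = depth_map.shape[:2]
    pixels = []
    for v in range(max(0, v0 - half), min(h, v0 + half + 1)):
        for u in range(max(0, u0 - half), min(w, u0 + half + 1)):
            z = depth_map[v, u] / 1000.0
            if 1.0 <= z <= 7.0:  # discard depths below 1 m or above 7 m
                pixels.append((u, v, z))
    return pixels
```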
S4.2: two-dimensional image coordinates of region-of-interest pixels (u i,v i) Point cloud converted into world coordinate system (x i,y i, z i) I =1,2 · · wherein the three-dimensional points corresponding to the feature points are represented by (a) ((b))x 0,y 0, z 0);
S4.3: comparing all the point clouds: (x i,y i, z i) And characteristic pointCorresponding three-dimensional points (x 0,y 0, z 0) And selecting the 15 three-dimensional points closest to the three-dimensional points corresponding to the feature points to form a neighboring three-dimensional point cloud set { (x 1,y 1, z 1), (x 2,y 2, z 2),···,(x n,y n, z n) }, i.e. a great faceX i}={(x i,y i, z i)}, i=1,2···15;
S5: calculating the covariance matrix of the neighboring three-dimensional point cloud and performing singular value decomposition to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small, which are classified as follows:
S5.1: calculating the mean of the neighboring point set according to

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \bar{X} = (\bar{x}, \bar{y}, \bar{z})$$
S5.2: calculating the covariance matrix Σ of the point set formed by the feature point's neighboring points:

$$\Sigma = \frac{1}{n}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)\left(X_i - \bar{X}\right)^{T}$$
S5.3: performing singular value decomposition (SVD) on the covariance matrix:

$$\Sigma = R \,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\, R^{T}$$
where R is an orthogonal matrix and λ1, λ2, λ3 are the three eigenvalues arranged from large to small. When the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10, and λ1/λ3 > 10, the attribute of the feature point is set to surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10, and λ2/λ3 < 1.5, the attribute is set to line point; when all three ratios satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5, and λ2/λ3 < 1.5, the attribute is set to cluster point.
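Steps S5.1 to S5.3 and the attribute test fit in one function. Note that the three threshold rules do not exhaust all eigenvalue combinations; returning None for the remaining cases is a choice of this sketch, not something the patent specifies.

```python
def classify_feature(neighbors):
    """neighbors: (n, 3) array {X_i}. Returns 'surface', 'line', 'cluster',
    or None when no rule fires."""
    mean = neighbors.mean(axis=0)               # S5.1: mean point
    diff = neighbors - mean
    sigma = diff.T @ diff / len(neighbors)      # S5.2: covariance matrix
    # S5.3: for the symmetric covariance matrix the SVD coincides with the
    # eigendecomposition sigma = R diag(l1, l2, l3) R^T; numpy returns the
    # singular values in descending order.
    _, (l1, l2, l3), _ = np.linalg.svd(sigma)
    l2 = max(l2, 1e-12)                         # guard degenerate point sets
    l3 = max(l3, 1e-12)
    if l1 / l2 < 1.5 and l2 / l3 > 10 and l1 / l3 > 10:
        return "surface"
    if l1 / l2 > 10 and l1 / l3 > 10 and l2 / l3 < 1.5:
        return "line"
    if l1 / l2 < 1.5 and l1 / l3 < 1.5 and l2 / l3 < 1.5:
        return "cluster"
    return None
```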
S6: performing feature matching between feature points belonging to the same attribute.
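S6 can be sketched as ordinary descriptor matching restricted to equal attributes. Here descriptors1/descriptors2 and attrs1/attrs2 are assumed to come from running two frames through S3 to S5, with attrs mapping each keypoint index to the attribute returned by classify_feature; these names are this sketch's, not the patent's.

```python
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
raw_matches = matcher.match(descriptors1, descriptors2)

# Keep only matches whose endpoints carry the same (non-None) attribute.
matches = [m for m in raw_matches
           if attrs1[m.queryIdx] is not None
           and attrs1[m.queryIdx] == attrs2[m.trainIdx]]
```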
FIG. 2 shows the result of ORB-based feature extraction and matching, and FIG. 3 the result of the proposed method. As can be seen from FIG. 2, the conventional ORB feature extraction method is prone to mismatching in environments where visual features are similar. The proposed method fuses the spatial information of the features, so that only feature points with the same spatial geometric attribute can be registered, which improves the accuracy of feature point matching, as shown in FIG. 3.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the invention and is not intended to limit it. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art may still modify the described embodiments or substitute equivalents for some of their features. All modifications, equivalents, and the like that come within the spirit and principle of the invention are intended to be included within its scope.

Claims (3)

1. A feature point extraction and matching method based on RGB-D images is characterized by comprising the following steps:
S1: calibrating the RGB-D camera to obtain its internal and external parameters;
S2: collecting a picture and correcting it according to the internal and external parameters obtained in S1 to obtain the corrected RGB image;
S3: extracting feature points from the corrected RGB image by a local feature extraction method, and acquiring the depth d_depth of each feature point from the depth map;
S4: according to the depth d_depth of the feature point, calculating a region of interest near the feature point in the depth map, converting the pixels in the region of interest into a three-dimensional point cloud, and selecting the n points closest to the feature point to obtain the neighboring three-dimensional point cloud; S4 is realized by the following sub-steps:
S4.1: determining an r × r region of interest centered on the feature point, and filtering out pixels whose depth value is smaller than 1 m or larger than 7 m; where r is the side length of the region of interest, expressed in pixels, and is obtained by computing k/d_depth and rounding up; k is a proportionality coefficient that controls the size of the region of interest;
S4.2: converting the two-dimensional image coordinates (u_i, v_i) of the region-of-interest pixels into points (x_i, y_i, z_i) in the world coordinate system, i = 1, 2, ···, where the three-dimensional point corresponding to the feature point is denoted (x_0, y_0, z_0);
S4.3: comparing all the point clouds: (x i,y i, z i) Three-dimensional points corresponding to feature points: (x 0,y 0, z 0) And selecting n three-dimensional points closest to the three-dimensional points corresponding to the feature points to form an adjacent three-dimensional point cloud set { (x 1,y 1, z 1), (x 2,y 2, z 2),···,(x n,y n, z n) }, i.e. a great faceX i}={(x i,y i, z i)}, i=1,2···n;
S5: calculating the covariance matrix of the neighboring three-dimensional point cloud and performing singular value decomposition on it to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10, and λ1/λ3 > 10, the attribute of the feature point is set to surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10, and λ2/λ3 < 1.5, the attribute is set to line point; when all three ratios satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5, and λ2/λ3 < 1.5, the attribute is set to cluster point;
S6: performing feature matching between feature points belonging to the same attribute.
2. The feature point extraction and matching method based on RGB-D images according to claim 1, wherein S5 is implemented by the following sub-steps:
S5.1: calculating the mean of the neighboring point set according to

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \bar{X} = (\bar{x}, \bar{y}, \bar{z})$$
S5.2: calculating the covariance matrix Σ of the point set formed by the feature point's neighboring points:

$$\Sigma = \frac{1}{n}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)\left(X_i - \bar{X}\right)^{T}$$
S5.3: performing singular value decomposition (SVD) on the covariance matrix:

$$\Sigma = R \,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\, R^{T}$$
where R is an orthogonal matrix and λ1, λ2, λ3 are the three eigenvalues arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10, and λ1/λ3 > 10, the attribute of the feature point is set to surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10, and λ2/λ3 < 1.5, the attribute is set to line point; when all three ratios satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5, and λ2/λ3 < 1.5, the attribute is set to cluster point.
3. The feature point extraction and matching method based on RGB-D images according to claim 1, wherein the side length r of the region of interest satisfies 4 ≤ r ≤ 10.
CN202010923532.8A 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image Active CN111814792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010923532.8A CN111814792B (en) 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image


Publications (2)

Publication Number Publication Date
CN111814792A (en) 2020-10-23
CN111814792B (en) 2020-12-29 (grant)

Family

ID=72859930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010923532.8A Active CN111814792B (en) 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image

Country Status (1)

Country Link
CN (1) CN111814792B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489215B (en) * 2020-11-27 2022-09-16 之江实验室 Single-drawing-batch three-dimensional road parametric modeling method with road surface marks
CN114720993B (en) * 2022-03-30 2024-08-20 上海木蚁机器人科技有限公司 Robot positioning method, apparatus, electronic device and storage medium
CN116170601B (en) * 2023-04-25 2023-07-11 之江实验室 Image compression method based on four-column vector block singular value decomposition
CN117114782B (en) * 2023-10-24 2024-05-28 佛山电力设计院有限公司 Construction engineering cost analysis method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101490055B1 (en) * 2013-10-30 2015-02-06 한국과학기술원 Method for localization of mobile robot and mapping, and apparatuses operating the same
CN111311679A (en) * 2020-01-31 2020-06-19 武汉大学 Free floating target pose estimation method based on depth camera


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Indoor Localization Technology of Mobile Robots Based on Passive Beacons; Li Yuehua; China Doctoral Dissertations Full-text Database, Information Science and Technology Series; 2019-01-15; pp. 91-112 *

Also Published As

Publication number Publication date
CN111814792A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN111814792B (en) Feature point extraction and matching method based on RGB-D image
CN110443836B (en) Point cloud data automatic registration method and device based on plane features
CN108010036B (en) Object symmetry axis detection method based on RGB-D camera
CN108596961B (en) Point cloud registration method based on three-dimensional convolutional neural network
CN106228507B Depth image processing method based on light field
CN103810744B Backfilling points in a point cloud
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
CN112200203B (en) Matching method of weak correlation speckle images in oblique field of view
CN110796691B (en) Heterogeneous image registration method based on shape context and HOG characteristics
CN104574347A (en) On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
CN109003307B (en) Underwater binocular vision measurement-based fishing mesh size design method
Cherian et al. Accurate 3D ground plane estimation from a single image
CN108401565B (en) Remote sensing image registration method based on improved KAZE features and Pseudo-RANSAC algorithms
CN114396875A (en) Rectangular parcel volume measurement method based on vertical shooting of depth camera
CN114998773B (en) Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system
CN108462866A 3D stereo image color calibration method based on matching and optimization
US11475629B2 (en) Method for 3D reconstruction of an object
KR102547333B1 (en) Depth Image based Real-time ground detection method
CN113642397A (en) Object length measuring method based on mobile phone video
CN111369435B (en) Color image depth up-sampling method and system based on self-adaptive stable model
CN111179271B (en) Object angle information labeling method based on retrieval matching and electronic equipment
Shen et al. A 3D modeling method of indoor objects using Kinect sensor
CN108416815B (en) Method and apparatus for measuring atmospheric light value and computer readable storage medium
CN108805896B (en) Distance image segmentation method applied to urban environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant