CN111814792A - Feature point extraction and matching method based on RGB-D image - Google Patents

Feature point extraction and matching method based on RGB-D image Download PDF

Info

Publication number
CN111814792A
CN111814792A (application CN202010923532.8A)
Authority
CN
China
Prior art keywords
point
feature
rgb
attribute
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010923532.8A
Other languages
Chinese (zh)
Other versions
CN111814792B (en)
Inventor
李月华
谢天
李小倩
朱世强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202010923532.8A priority Critical patent/CN111814792B/en
Publication of CN111814792A publication Critical patent/CN111814792A/en
Application granted granted Critical
Publication of CN111814792B publication Critical patent/CN111814792B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a feature point extraction and matching method based on RGB-D images. The method first calibrates an RGB-D camera to obtain its internal and external parameters, then collects a picture and corrects it according to those parameters; feature points are extracted on the corrected RGB image by a local feature extraction method, and the depth information of each feature point is obtained from the depth map. A region of interest near the feature point in the depth map is computed according to the feature point's depth, the pixels in the region of interest are converted into a three-dimensional point cloud, and the n points closest to the feature point are selected to obtain the neighboring three-dimensional point cloud. Finally, the covariance matrix of the neighboring three-dimensional point cloud is computed and singular value decomposition is performed to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small; the spatial geometric attribute of each feature point is judged from the relative magnitudes of the eigenvalues, and feature matching is performed only between feature points of the same attribute. The method is simple in principle and achieves high image-matching accuracy.

Description

Feature point extraction and matching method based on RGB-D image
Technical Field
The invention relates to the field of image feature extraction, in particular to a feature point extraction and matching method based on RGB-D images.
Background
There are many feature extraction and matching methods for 2-dimensional RGB images, such as SIFT, SURF, FAST, BRIEF, and ORB, and these features have been widely used in practical algorithms. However, in scenes with highly similar features, such as the Gobi, deserts, or extraterrestrial surfaces (e.g., the Moon or Mars), feature extraction is prone to inaccuracy and mismatching.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a feature point extraction and matching method based on an RGB-D image, and the specific technical scheme is as follows:
a feature point extracting and matching method based on RGB-D images specifically comprises the following steps:
s1: calibrating the RGB-D camera to obtain internal and external parameters of the RGB-D camera;
s2: collecting a picture, and correcting the picture according to the internal and external parameters obtained in S1 to obtain a corrected RGB image;
s3: extracting feature points on the corrected RGB image by a local feature extraction method, and acquiring the depth information d_depth of each feature point from the depth map;
s4: according to the depth information d_depth of a feature point, calculating a region of interest near the feature point in the depth map, converting the pixels in the region of interest into a three-dimensional point cloud, and selecting the n points closest to the feature point to obtain the neighboring three-dimensional point cloud;
s5: calculating the covariance matrix of the neighboring three-dimensional point cloud and performing singular value decomposition to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10 and λ1/λ3 > 10, the point attribute is set as a surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10 and λ2/λ3 < 1.5, the point attribute is set as a line point; when they satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5 and λ2/λ3 < 1.5, the point attribute is set as a cluster point;
s6: and carrying out feature matching on the feature points belonging to the same attribute.
Further, the step S4 is implemented by the following sub-steps:
s4.1: determining an r × r region of interest centered on the feature point, and filtering out pixel points with depth values smaller than 1 m or larger than 7 m; wherein r is the side length of the region of interest, expressed in number of pixels, obtained by calculating k/d_depth and rounding up; k is a proportionality coefficient that controls the size of the region of interest;
s4.2: converting the two-dimensional image coordinates (u_i, v_i) of the region-of-interest pixels into points (x_i, y_i, z_i), i = 1, 2, ···, of a point cloud in the world coordinate system, wherein the three-dimensional point corresponding to the feature point is denoted (x_0, y_0, z_0);
s4.3: comparing all point-cloud points (x_i, y_i, z_i) with the three-dimensional point (x_0, y_0, z_0) corresponding to the feature point, and selecting the n three-dimensional points closest to it to form the neighboring three-dimensional point cloud set {(x_1, y_1, z_1), (x_2, y_2, z_2), ···, (x_n, y_n, z_n)}, i.e. {X_i} = {(x_i, y_i, z_i)}, i = 1, 2, ···, n.
Further, the step S5 is implemented by the following sub-steps:
s5.1: calculating the mean of the neighboring point cloud:
X̄ = (1/n) Σ_{i=1}^{n} X_i, where X_i = (x_i, y_i, z_i)^T
s5.2: calculating the covariance matrix Σ of the point set formed by the feature point and its nearby points:
Σ = (1/n) Σ_{i=1}^{n} (X_i − X̄)(X_i − X̄)^T
s5.3: performing singular value decomposition (SVD) of the covariance matrix:
Σ = R · diag(λ1, λ2, λ3) · R^T
wherein R is an orthogonal matrix and λ1, λ2, λ3 are the three eigenvalues arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10 and λ1/λ3 > 10, the point attribute is set as a surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10 and λ2/λ3 < 1.5, the point attribute is set as a line point; when they satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5 and λ2/λ3 < 1.5, the point attribute is set as a cluster point.
Furthermore, the side length r of the region of interest satisfies 4 ≤ r ≤ 10.
The invention has the following beneficial effects:
the feature point extraction method is simple in principle, can improve the accuracy of image matching of RGB-D equipment under the condition of similar texture, and can be applied to various image feature extraction containing RGB information and depth information.
Drawings
FIG. 1 is a flow chart of a feature point extraction and matching method based on RGB-D images according to the present invention;
FIG. 2 is a diagram of the effect of ORB feature based extraction and matching;
FIG. 3 is a diagram illustrating the effect of the method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The feature point extraction and matching method based on the RGB-D image is specially designed for RGB-D camera equipment, wherein the RGB-D camera equipment mainly refers to motion sensing equipment which can simultaneously acquire image information and object depth information, such as Kinect and Xtion. Before starting, a camera to be calibrated is used for shooting a plurality of chessboard pictures under different visual angles, and then internal parameters and external parameters of the RGB camera and the depth camera are respectively calculated by using a GML Calibration Toolbox.
As shown in fig. 1, the method for extracting and matching feature points based on RGB-D images of the present invention includes the following steps:
s1: calibrating the RGB-D camera to obtain internal and external parameters of the RGB-D camera;
s2: collecting a picture, and correcting the picture according to the internal and external parameters obtained in S1 to obtain a corrected RGB image;
s3: extracting feature points on the corrected RGB image by a local feature extraction method, and acquiring the depth information d_depth of each feature point from the depth map;
s4: according to the depth information d_depth of a feature point, calculating a region of interest near the feature point in the depth map, converting the pixels in the region of interest into a three-dimensional point cloud, and selecting the n points closest to the feature point to obtain the neighboring three-dimensional point cloud;
s4.1: determining an r × r region of interest centered on the feature point, and filtering out pixel points with depth values smaller than 1 m or larger than 7 m;
wherein r is the side length of the region of interest, expressed in number of pixels, obtained by calculating k/d_depth and rounding up; k is a proportionality coefficient that controls the size of the region of interest. The farther the three-dimensional point represented by the feature point is from the camera, the smaller the selected region, which ensures that the three-dimensional points corresponding to the pixels of the selected region of interest lie near the three-dimensional point represented by the feature point. The value range of r is preferably 4–10.
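As an illustrative sketch (not part of the patent text), the ROI sizing and depth filtering of step S4.1 can be written as follows; the default scale factor k = 10 and the clamping of r to the preferred range 4–10 are assumptions made for this example:

```python
import math

def roi_side_length(d_depth, k=10.0, r_min=4, r_max=10):
    """Side length r (in pixels) of the square region of interest.

    r = ceil(k / d_depth): the farther the feature point is from the
    camera, the smaller the selected region.  The value of k and the
    clamping to [r_min, r_max] are assumptions for this sketch.
    """
    r = math.ceil(k / d_depth)
    return max(r_min, min(r, r_max))

def filter_depths(depths, near=1.0, far=7.0):
    """Discard depth values below 1 m or above 7 m, as in step S4.1."""
    return [d for d in depths if near <= d <= far]
```

For example, a feature point 2 m from the camera with k = 10 yields r = 5, i.e. a 5 × 5 region of interest.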
S4.2: converting the two-dimensional image coordinates (u_i, v_i) of the region-of-interest pixels into points (x_i, y_i, z_i), i = 1, 2, ···, of a point cloud in the world coordinate system, wherein the three-dimensional point corresponding to the feature point is denoted (x_0, y_0, z_0);
S4.3: comparing all point-cloud points (x_i, y_i, z_i) with the three-dimensional point (x_0, y_0, z_0) corresponding to the feature point, and selecting the 15 three-dimensional points closest to it (n = 15 in this embodiment) to form the neighboring three-dimensional point cloud set {(x_1, y_1, z_1), (x_2, y_2, z_2), ···, (x_15, y_15, z_15)}, i.e. {X_i} = {(x_i, y_i, z_i)}, i = 1, 2, ···, 15;
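Steps S4.2–S4.3 can be sketched with the standard pinhole back-projection; this is a hedged illustration, and the intrinsic parameters fx, fy, cx, cy stand in for the values obtained in the calibration step S1:

```python
import numpy as np

def backproject(us, vs, depths, fx, fy, cx, cy):
    """Convert 2D pixel coordinates (u_i, v_i) with depth z_i into 3D
    points (x_i, y_i, z_i) via the pinhole camera model.  The intrinsics
    fx, fy, cx, cy are placeholders for the calibrated values."""
    z = np.asarray(depths, dtype=float)
    x = (np.asarray(us) - cx) * z / fx
    y = (np.asarray(vs) - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def nearest_n(points, p0, n=15):
    """Select the n points closest to the feature point's 3D position p0,
    forming the neighboring point cloud set {X_i} (n = 15 here)."""
    d = np.linalg.norm(points - p0, axis=1)
    return points[np.argsort(d)[:n]]
```

A pixel 20 columns right of the principal point at 2 m depth, with fx = 500, maps to x = 20 · 2 / 500 = 0.08 m under this model.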
S5: calculating the covariance matrix of the neighboring three-dimensional point cloud, performing singular value decomposition to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small, and classifying the feature point as follows:
S5.1: calculating the mean of the neighboring point cloud:
X̄ = (1/n) Σ_{i=1}^{n} X_i, where X_i = (x_i, y_i, z_i)^T
S5.2: calculating the covariance matrix Σ of the point set formed by the feature point and its nearby points:
Σ = (1/n) Σ_{i=1}^{n} (X_i − X̄)(X_i − X̄)^T
S5.3: performing singular value decomposition (SVD) of the covariance matrix:
Σ = R · diag(λ1, λ2, λ3) · R^T
wherein R is an orthogonal matrix and λ1, λ2, λ3 are the three eigenvalues arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10 and λ1/λ3 > 10, the point attribute is set as a surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10 and λ2/λ3 < 1.5, the point attribute is set as a line point; when they satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5 and λ2/λ3 < 1.5, the point attribute is set as a cluster point.
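The covariance computation, SVD, and eigenvalue-ratio classification of step S5 can be sketched as follows; the small epsilon guarding against division by zero is an implementation assumption, not part of the patent text:

```python
import numpy as np

def classify_feature(neighbors):
    """Classify a feature point from its neighboring 3D point cloud
    (an (n, 3) array) using the eigenvalue-ratio thresholds of step S5.

    Computes the mean point, the covariance matrix, and its SVD
    Sigma = R diag(l1, l2, l3) R^T with l1 >= l2 >= l3."""
    p_bar = neighbors.mean(axis=0)              # S5.1: mean point
    diff = neighbors - p_bar
    sigma = diff.T @ diff / len(neighbors)      # S5.2: covariance matrix
    _, (l1, l2, l3), _ = np.linalg.svd(sigma)   # S5.3: singular values, descending
    eps = 1e-12                                 # assumption: avoid division by zero
    if l1 / (l2 + eps) < 1.5 and l2 / (l3 + eps) > 10 and l1 / (l3 + eps) > 10:
        return "surface"   # two dominant directions: locally planar
    if l1 / (l2 + eps) > 10 and l1 / (l3 + eps) > 10 and l2 / (l3 + eps) < 1.5:
        return "line"      # one dominant direction: locally linear
    if l1 / (l2 + eps) < 1.5 and l1 / (l3 + eps) < 1.5 and l2 / (l3 + eps) < 1.5:
        return "cluster"   # isotropic spread
    return "unclassified"
```

A planar patch of neighbors yields two comparable eigenvalues and one near-zero eigenvalue, so the ratios λ1/λ2 < 1.5 and λ2/λ3 > 10 single it out as a surface point.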
S6: and carrying out feature matching on the feature points belonging to the same attribute.
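Step S6 restricts matching to feature points of the same attribute. The patent does not specify the descriptor or matcher, so the brute-force L1-distance matching and the threshold below are assumptions; only the same-attribute gating comes from the text:

```python
import numpy as np

def match_same_attribute(desc1, attrs1, desc2, attrs2, max_dist=64):
    """Brute-force descriptor matching restricted to feature points
    sharing the same spatial attribute (surface / line / cluster).

    desc1, desc2: lists of descriptor vectors; attrs1, attrs2: the
    attribute labels from step S5.  The L1 distance and max_dist
    threshold are illustrative assumptions."""
    matches = []
    for i, (d1, a1) in enumerate(zip(desc1, attrs1)):
        best_j, best_d = -1, max_dist
        for j, (d2, a2) in enumerate(zip(desc2, attrs2)):
            if a1 != a2:        # attribute gate: skip differing geometry
                continue
            dist = np.abs(np.asarray(d1) - np.asarray(d2)).sum()
            if dist < best_d:
                best_j, best_d = j, dist
        if best_j >= 0:
            matches.append((i, best_j))
    return matches
```

Because a surface point can never be paired with a line or cluster point, visually similar but geometrically different features are rejected before descriptor distance is even compared.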
Fig. 2 shows the result of ORB-based feature extraction and matching, and fig. 3 shows the result of the method of the present invention. As can be seen from fig. 2, with the conventional ORB feature extraction method, mismatching is likely to occur in environments with similar visual features. The method provided by the present invention fuses the spatial information of the features, so that only feature points with the same spatial geometric attribute are registered, thereby improving the accuracy of feature point matching, as shown in fig. 3.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (4)

1. A feature point extraction and matching method based on RGB-D images is characterized by comprising the following steps:
s1: calibrating the RGB-D camera to obtain internal and external parameters of the RGB-D camera;
s2: collecting a picture, and correcting the picture according to the internal and external parameters obtained in S1 to obtain a corrected RGB image;
s3: extracting feature points on the corrected RGB image by a local feature extraction method, and acquiring the depth information d_depth of each feature point from the depth map;
s4: according to the depth information d_depth of a feature point, calculating a region of interest near the feature point in the depth map, converting the pixels in the region of interest into a three-dimensional point cloud, and selecting the n points closest to the feature point to obtain the neighboring three-dimensional point cloud;
s5: calculating the covariance matrix of the neighboring three-dimensional point cloud and performing singular value decomposition to obtain three eigenvalues λ1, λ2, λ3 arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10 and λ1/λ3 > 10, the point attribute is set as a surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10 and λ2/λ3 < 1.5, the point attribute is set as a line point; when they satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5 and λ2/λ3 < 1.5, the point attribute is set as a cluster point;
s6: and carrying out feature matching on the feature points belonging to the same attribute.
2. The RGB-D image-based feature point extracting and matching method according to claim 1, wherein the S4 is implemented by the following sub-steps:
s4.1: determining a region of interest of r × r size centered on the feature point, and filtering out pixel points with depth values smaller than 1 m or larger than 7 m; wherein r is the side length of the region of interest, expressed in number of pixels, obtained by calculating k/d_depth and rounding up; k is a proportionality coefficient that controls the size of the region of interest;
s4.2: converting the two-dimensional image coordinates (u_i, v_i) of the region-of-interest pixels into points (x_i, y_i, z_i), i = 1, 2, ···, of a point cloud in the world coordinate system, wherein the three-dimensional point corresponding to the feature point is denoted (x_0, y_0, z_0);
s4.3: comparing all point-cloud points (x_i, y_i, z_i) with the three-dimensional point (x_0, y_0, z_0) corresponding to the feature point, and selecting the n three-dimensional points closest to it to form the neighboring three-dimensional point cloud set {(x_1, y_1, z_1), (x_2, y_2, z_2), ···, (x_n, y_n, z_n)}, i.e. {X_i} = {(x_i, y_i, z_i)}, i = 1, 2, ···, n.
3. The RGB-D image-based feature point extracting and matching method according to claim 1, wherein the S5 is implemented by the following sub-steps:
s5.1: calculating the mean of the neighboring point cloud:
X̄ = (1/n) Σ_{i=1}^{n} X_i, where X_i = (x_i, y_i, z_i)^T
s5.2: calculating the covariance matrix Σ of the point set formed by the feature point and its nearby points:
Σ = (1/n) Σ_{i=1}^{n} (X_i − X̄)(X_i − X̄)^T
s5.3: performing singular value decomposition (SVD) of the covariance matrix:
Σ = R · diag(λ1, λ2, λ3) · R^T
wherein R is an orthogonal matrix and λ1, λ2, λ3 are the three eigenvalues arranged from large to small; when the eigenvalues satisfy λ1/λ2 < 1.5, λ2/λ3 > 10 and λ1/λ3 > 10, the point attribute is set as a surface point; when they satisfy λ1/λ2 > 10, λ1/λ3 > 10 and λ2/λ3 < 1.5, the point attribute is set as a line point; when they satisfy λ1/λ2 < 1.5, λ1/λ3 < 1.5 and λ2/λ3 < 1.5, the point attribute is set as a cluster point.
4. The method for extracting and matching feature points based on RGB-D images as claimed in claim 2, wherein the side length r of the region of interest is in a range of 4-10.
CN202010923532.8A 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image Active CN111814792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010923532.8A CN111814792B (en) 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010923532.8A CN111814792B (en) 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image

Publications (2)

Publication Number Publication Date
CN111814792A true CN111814792A (en) 2020-10-23
CN111814792B CN111814792B (en) 2020-12-29

Family

ID=72859930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010923532.8A Active CN111814792B (en) 2020-09-04 2020-09-04 Feature point extraction and matching method based on RGB-D image

Country Status (1)

Country Link
CN (1) CN111814792B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489215A (en) * 2020-11-27 2021-03-12 之江实验室 Single-drawing-batch three-dimensional road parametric modeling method with road surface marks
CN114720993A (en) * 2022-03-30 2022-07-08 上海木蚁机器人科技有限公司 Robot positioning method, robot positioning device, electronic device, and storage medium
CN116170601A (en) * 2023-04-25 2023-05-26 之江实验室 Image compression method based on four-column vector block singular value decomposition
CN117114782A (en) * 2023-10-24 2023-11-24 佛山电力设计院有限公司 Construction engineering cost analysis method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101490055B1 (en) * 2013-10-30 2015-02-06 한국과학기술원 Method for localization of mobile robot and mapping, and apparatuses operating the same
CN111311679A (en) * 2020-01-31 2020-06-19 武汉大学 Free floating target pose estimation method based on depth camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101490055B1 (en) * 2013-10-30 2015-02-06 한국과학기술원 Method for localization of mobile robot and mapping, and apparatuses operating the same
CN111311679A (en) * 2020-01-31 2020-06-19 武汉大学 Free floating target pose estimation method based on depth camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李月华 (Li Yuehua): "Research on indoor localization technology for mobile robots based on passive beacons", China Doctoral Dissertations Full-text Database, Information Science and Technology series *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489215A (en) * 2020-11-27 2021-03-12 之江实验室 Single-drawing-batch three-dimensional road parametric modeling method with road surface marks
CN112489215B (en) * 2020-11-27 2022-09-16 之江实验室 Single-drawing-batch three-dimensional road parametric modeling method with road surface marks
CN114720993A (en) * 2022-03-30 2022-07-08 上海木蚁机器人科技有限公司 Robot positioning method, robot positioning device, electronic device, and storage medium
CN116170601A (en) * 2023-04-25 2023-05-26 之江实验室 Image compression method based on four-column vector block singular value decomposition
CN116170601B (en) * 2023-04-25 2023-07-11 之江实验室 Image compression method based on four-column vector block singular value decomposition
CN117114782A (en) * 2023-10-24 2023-11-24 佛山电力设计院有限公司 Construction engineering cost analysis method
CN117114782B (en) * 2023-10-24 2024-05-28 佛山电力设计院有限公司 Construction engineering cost analysis method

Also Published As

Publication number Publication date
CN111814792B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN111814792B (en) Feature point extraction and matching method based on RGB-D image
CN110443836B (en) Point cloud data automatic registration method and device based on plane features
CN108010036B (en) Object symmetry axis detection method based on RGB-D camera
CN108596961B (en) Point cloud registration method based on three-dimensional convolutional neural network
CN109961506A (en) A kind of fusion improves the local scene three-dimensional reconstruction method of Census figure
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN112200203B (en) Matching method of weak correlation speckle images in oblique field of view
CN108305277B (en) Heterogeneous image matching method based on straight line segments
CN110796691B (en) Heterogeneous image registration method based on shape context and HOG characteristics
Cherian et al. Accurate 3D ground plane estimation from a single image
CN109003307B (en) Underwater binocular vision measurement-based fishing mesh size design method
CN108960267A (en) System and method for model adjustment
CN110793441B (en) High-precision object geometric dimension measuring method and device
CN114396875B (en) Rectangular package volume measurement method based on vertical shooting of depth camera
JP2015121524A (en) Image processing apparatus, control method thereof, imaging apparatus, and program
US11475629B2 (en) Method for 3D reconstruction of an object
CN113642397A (en) Object length measuring method based on mobile phone video
CN111369435B (en) Color image depth up-sampling method and system based on self-adaptive stable model
CN111179271B (en) Object angle information labeling method based on retrieval matching and electronic equipment
CN116958434A (en) Multi-view three-dimensional reconstruction method, measurement method and system
CN108491826B (en) Automatic extraction method of remote sensing image building
Novacheva Building roof reconstruction from LiDAR data and aerial images through plane extraction and colour edge detection
KR102547333B1 (en) Depth Image based Real-time ground detection method
Haque et al. Robust feature-preserving denoising of 3D point clouds

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant