CN113739786A - Indoor environment sensing method, device and equipment for quadruped robot - Google Patents

Indoor environment sensing method, device and equipment for quadruped robot

Info

Publication number
CN113739786A
Authority
CN
China
Prior art keywords
point
feature
plane
feature matching
matching
Prior art date
Legal status
Pending
Application number
CN202110873329.9A
Other languages
Chinese (zh)
Inventor
朱孟周
朱睿
孙志明
黄强
陈久林
梁伟
黄浩声
姚楠
李虎成
黄哲忱
Current Assignee
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority to CN202110873329.9A
Publication of CN113739786A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/383 Indoor data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3837 Data obtained from a single source
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an indoor environment sensing method, device, equipment and storage medium for a quadruped robot, comprising the following steps: collecting an RGB image and a depth image at each collection point in an indoor environment with an RGB-D (RGB plus depth) camera installed on the quadruped robot; extracting and matching point features based on the RGB images to obtain a point feature matching set of two adjacent frames of images; extracting and matching plane features based on the depth map to obtain a plane feature full matching set of two adjacent frames of point clouds; based on the geometric feature constraint between the point feature matching set and the plane feature full matching set and the geometric feature constraints between plane features, screening the plane feature full matching set twice to obtain a target plane feature matching set; and performing motion solving according to the target plane feature matching set, and splicing the two adjacent frames of point clouds by using the obtained transformation matrix to construct an indoor environment map. The method, device, equipment and storage medium provided by the invention ensure the accuracy of the SLAM algorithm when feature points are lacking.

Description

Indoor environment sensing method, device and equipment for quadruped robot
Technical Field
The invention relates to the technical field of mobile robot vision, in particular to an indoor environment sensing method, device and equipment for a quadruped robot and a computer readable storage medium.
Background
With the continuous development of machinery, materials, computers and control science, robotics has advanced substantially in recent years and is playing an increasingly important role in home entertainment, industrial automation, military operations, and search and rescue work in hazardous environments. Compared with fixed-base robots, mobile robots have gained unprecedented importance. At present, mobile robots are roughly classified into the following types: wheeled robots, tracked robots, spherical robots and legged robots. The first three are generally applied to relatively simple, flat road environments and perform poorly in complex rugged terrain. Legged robots are mostly applied to dangerous work such as search and rescue, where the environment is usually complex; they can not only avoid obstacles through flexible posture adjustment but also step across discontinuous terrain. Therefore, research on legged robots has long been a hot spot in the field of mobile robot research.
The quadruped robot has unique advantages in its mode of locomotion and can adapt to more complex working scenes. At the same time, this means that more factors need to be considered to realize the movement of the quadruped robot in a complex environment. Even an artificial indoor environment with a relatively simple structure is not all flat ground and regular obstacles; obstacles of different shapes and ground areas that are difficult to traverse exist, and the terrain at a foothold planned by a periodic gait may be unfavorable to the stability of the robot. Therefore, selecting footholds in suitable areas according to environment information and planning the motion of the robot according to the terrain at the footholds can effectively improve the motion stability of the quadruped robot. In order for the quadruped robot to select a path, bypass obstacles and move autonomously in an unfamiliar environment, the robot needs to rely on its onboard sensors to sense the external environment and localize itself within it. The sensors of the quadruped robot include internal sensors and external sensors. Internal sensors such as inertial sensors and joint angle sensors cannot achieve accurate positioning of the robot because of accumulated errors, so the key sensors for positioning and mapping of the quadruped robot are generally external sensors such as a camera or a lidar. Considering the high price of three-dimensional lidar, research on vision-based robot positioning and mapping algorithms is significant.
Simultaneous localization and mapping (SLAM for short) can be described as the process in which, without any prior environment information and with its own pose uncertain, a robot continuously estimates its pose and senses the external environment during motion based on its internal and external sensors, and builds an environment map step by step. In conventional feature-based SLAM, point features are easily affected by light and shadow, line features are easily affected by endpoint detection, and plane features, although highly robust, are few in number. In a simple indoor environment with weak texture, ORB feature points are scarce, a large error exists between the trajectory estimated by existing SLAM algorithms based on visual feature points and the real camera trajectory, and the algorithm accuracy is low.
In summary, it can be seen that, in the case of lack of feature points in an indoor environment, how to improve the accuracy of estimating a trajectory and constructing a map by the SLAM algorithm is a problem to be solved at present.
Disclosure of Invention
The invention aims to provide an indoor environment sensing method, device, equipment and computer readable storage medium for a quadruped robot, so as to solve the problem that, in the prior art, SLAM algorithms based on visual feature points have low accuracy in simple indoor environments.
In order to solve the technical problem, the invention provides an indoor environment sensing method for a quadruped robot, which comprises the following steps: in the operating process of the quadruped robot, collecting RGB images and depth images of each collecting point in the indoor environment to be sensed by using an RGB-D camera; extracting ORB feature points from the RGB images, and matching the ORB feature points of two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images; converting the depth image into point cloud data, extracting plane features according to the point cloud data, and acquiring a plane feature full-matching set of two adjacent frames of point clouds through full association; screening the plane feature matching pairs in the plane feature full matching set based on the geometric feature constraint between the point feature matching set and the plane feature full matching set to obtain a plane feature matching candidate set; performing secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set; solving the motion transformation between the two adjacent frames of point clouds according to the target plane feature matching set to obtain a transformation matrix between the two adjacent frames of point clouds; and splicing the two adjacent frames of point clouds according to a transformation matrix between the two adjacent frames of point clouds so as to estimate the running track of the quadruped robot and construct a three-dimensional point cloud map of the indoor environment to be perceived.
Preferably, the extracting ORB feature points from the RGB images and matching ORB feature points of two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images includes:
constructing an image pyramid of the RGB image;
detecting FAST corner points on each layer of the image pyramid, calculating the Harris response values of the FAST corner points, and selecting the N isolated corner points with the largest Harris response values as the feature points of the RGB image;
calculating the direction of each feature point by using the gray centroid method, so that the circular neighborhood of each feature point can be rotated to the same direction;
calculating BRIEF descriptors of the feature points to obtain ORB feature points of the RGB image;
calculating the Hamming distance between the BRIEF descriptor of each ORB feature point in the first frame of RGB image and the BRIEF descriptors of all ORB feature points in the second frame of RGB image, and matching the two ORB feature points with the minimum Hamming distance to obtain all point feature matching pairs of the two adjacent frames of images;
and acquiring the minimum Hamming distance between all the point feature matching pairs, taking the Hamming distance between the point feature matching pairs smaller than twice of the minimum Hamming distance as a judgment basis, and deleting the mismatching in all the point feature matching pairs to obtain the point feature matching sets of the two adjacent frames of images.
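The claim-style steps above map closely onto OpenCV's ORB implementation. The following is a minimal sketch under that assumption; the feature count, the Harris scoring option and the matcher configuration are illustrative choices rather than values taken from the patent.

```python
# Sketch of ORB extraction plus Hamming-distance matching with the 2x-minimum
# distance mismatch filter; parameters are illustrative assumptions.
import cv2

def match_point_features(rgb_a, rgb_b, n_features=500):
    gray_a = cv2.cvtColor(rgb_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(rgb_b, cv2.COLOR_BGR2GRAY)

    # ORB = image pyramid + oriented FAST keypoints (Harris-ranked) + rotated BRIEF
    orb = cv2.ORB_create(nfeatures=n_features, scoreType=cv2.ORB_HARRIS_SCORE)
    kp_a, desc_a = orb.detectAndCompute(gray_a, None)
    kp_b, desc_b = orb.detectAndCompute(gray_b, None)

    # Brute-force matching on the Hamming distance between BRIEF descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    if not matches:
        return kp_a, kp_b, []

    # Keep only matches whose distance is below twice the minimum distance
    min_dist = min(m.distance for m in matches)
    good = [m for m in matches if m.distance < 2 * max(min_dist, 1e-6)]
    return kp_a, kp_b, sorted(good, key=lambda m: m.distance)
```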
Preferably, the extracting planar features from the point cloud data comprises:
deleting outliers in the point cloud data by adopting a filtering algorithm;
and extracting plane features from the filtered point cloud data by using a RANSAC algorithm.
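A simplified sketch of these two sub-steps with numpy is shown below; the neighbour count, the kept percentage, the RANSAC iteration count and the distance threshold are illustrative assumptions rather than the patent's parameters.

```python
# Statistical outlier filtering followed by one RANSAC plane extraction; the
# brute-force neighbour search is for clarity only.
import numpy as np

def remove_outliers(points, k=8, keep_ratio=0.95):
    # Mean distance of each point to its k nearest neighbours
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    mean_knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = mean_knn <= np.quantile(mean_knn, keep_ratio)
    return points[keep]

def ransac_plane(points, iters=200, dist_thresh=0.02):
    best_inliers, best_model = None, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:
            continue                       # degenerate sample, try again
        n = n / np.linalg.norm(n)
        d = -n.dot(p1)                     # plane model: n . x + d = 0
        dist = np.abs(points @ n + d)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```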
Preferably, the screening the plane feature matching pairs in the plane feature full matching set based on the geometric feature constraint between the point feature matching set and the plane feature full matching set to obtain a plane feature matching candidate set includes:
sorting the point feature matching pairs in the point feature matching set according to the Hamming distance, and selecting the first m groups of point feature matching pairs;
according to the geometric feature constraint D_ij between the ith group of point feature matching pairs and the jth group of plane feature matching pairs, obtaining the score S(D_ij) of the jth group of plane feature matching pairs; wherein i = 1, 2, …, m, j = 1, 2, …, k, and k is the total number of plane feature matching pairs in the plane feature full matching set;
calculating the total score of the jth group of plane feature matching pairs S_j = Σ_{i=1}^{m} S(D_ij);
and judging whether the total score S_j is greater than a set threshold;
if the total score S_j is greater than the set threshold, extracting the jth group of plane feature matching pairs into the plane feature matching candidate set.
Preferably, the obtaining the score S(D_ij) of the jth group of plane feature matching pairs according to the geometric feature constraint D_ij between the ith group of point feature matching pairs and the jth group of plane feature matching pairs includes:
obtaining the ith group of point feature matching pairs F_i = (F_Ai, F_Bi), wherein F_Ai is the ORB feature point in the first frame RGB image and F_Bi is the ORB feature point in the second frame RGB image;
obtaining the jth group of plane feature matching pairs P_j = (PA_j, PB_j), wherein PA_j is a plane feature extracted from the first frame of point cloud data and PB_j is a plane feature extracted from the second frame of point cloud data;
calculating the dispersion of F_Ai to PA_j, D_Aij = |n_Aj · p_Ai + d_Aj|, to obtain a first dispersion; wherein p_Ai is the three-dimensional coordinate of F_Ai, n_Aj is the unit normal vector of PA_j, and |d_Aj| is the distance of PA_j from the origin of the spatial coordinate system;
calculating the dispersion of F_Bi to PB_j, D_Bij = |n_Bj · p_Bi + d_Bj|, to obtain a second dispersion; wherein p_Bi is the three-dimensional coordinate of F_Bi, n_Bj is the unit normal vector of PB_j, and |d_Bj| is the distance of PB_j from the origin of the spatial coordinate system;
calculating the absolute value of the difference between the first dispersion and the second dispersion, D_ij = |D_Aij - D_Bij|, so as to determine the score S(D_ij) of the jth group of plane feature matching pairs according to the absolute value D_ij.
Preferably, the geometric feature constraint between the plane features is that, for any two plane feature matching pairs (PA_x, PB_x) and (PA_y, PB_y) in the plane feature matching candidate set, the included angle between the normal vector n_Ax of plane PA_x and the normal vector n_Ay of plane PA_y is equal to the included angle between the normal vector n_Bx of plane PB_x and the normal vector n_By of plane PB_y.
Preferably, after the splicing of the two adjacent frames of point clouds according to the transformation matrix between the two adjacent frames of point clouds so as to estimate the running track of the quadruped robot and construct the three-dimensional point cloud map of the indoor environment to be perceived, the method further includes:
constructing a point-edge pose graph according to a transformation matrix between the camera pose corresponding to each frame of image acquired by the RGB-D camera and the point clouds of the two adjacent frames;
extracting key frames according to the point edge pose graph, performing loop detection to form edges among the key frames, and generating an optimized pose graph;
and optimizing the running track of the quadruped robot and the three-dimensional point cloud map of the indoor environment to be perceived according to the optimized pose map.
The invention also provides an indoor environment sensing device facing the quadruped robot, which comprises:
the acquisition module is used for acquiring RGB images and depth images of each acquisition point in the indoor environment to be sensed by using an RGB-D camera during the running of the quadruped robot;
the point feature matching module is used for extracting ORB feature points from the RGB images and matching the ORB feature points of the two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images;
the plane feature matching module is used for converting the depth image into point cloud data, extracting plane features according to the point cloud data and acquiring a plane feature full-matching set of two adjacent frames of point clouds through full association;
the first screening module is used for screening the plane feature matching pairs in the plane feature full matching set based on the geometric feature constraint between the point feature matching set and the plane feature full matching set to obtain a plane feature matching candidate set;
the second screening module is used for carrying out secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set;
the motion solving module is used for solving the motion transformation between the two adjacent frames of point clouds according to the target plane feature matching set to obtain a transformation matrix between the two adjacent frames of point clouds;
and the mapping module is used for splicing the two adjacent frames of point clouds according to a transformation matrix between the two adjacent frames of point clouds so as to estimate the running track of the quadruped robot and construct a three-dimensional point cloud map of the indoor environment to be perceived.
The invention also provides an indoor environment sensing device facing the quadruped robot, which comprises:
a memory for storing a computer program; and a processor for implementing the steps of the indoor environment sensing method for the quadruped robot when executing the computer program.
The invention also provides a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the steps of the indoor environment sensing method for the quadruped robot.
The indoor environment sensing method for the quadruped robot, provided by the invention, aims at the requirement of the quadruped robot for sensing the indoor environment, and the RGB images and the depth images of all the acquisition points in the indoor environment to be sensed are acquired through the RGB-D camera installed on the quadruped robot. And extracting and matching point features based on the RGB image, converting the depth map into point cloud data, and extracting and matching plane features based on the point cloud data. Screening plane feature matching pairs in a plane feature full matching set according to geometric feature constraint between the point feature matching pairs of two adjacent frames of images and the plane feature matching pairs of two adjacent frames of point clouds; and secondly, carrying out secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set. And solving the motion transformation between two adjacent frames of point clouds according to the target plane feature matching set to obtain a transformation matrix between the two adjacent frames of point clouds, thereby realizing the estimation of the running track of the quadruped robot and the construction of a three-dimensional point cloud map of the indoor environment to be perceived. The method provided by the invention constructs the RGB-D visual odometer based on the point characteristics and the plane characteristics aiming at the simple indoor environment with weak texture, ensures the precision of the SLAM algorithm under the condition of lacking characteristic points, ensures the tracking performance and simultaneously improves the robustness of the algorithm, and the deviation between the estimated track and the real track of the camera is smaller.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a flowchart of a first embodiment of an indoor environment sensing method for a quadruped robot according to the present invention;
FIG. 2 is a flowchart of a specific operation of performing point feature extraction and matching based on RGB images;
FIG. 3 is a flowchart of the detailed operation of performing planar feature extraction and matching based on depth images;
FIG. 4 is a flowchart of the detailed operation of screening planar feature matching pairs based on geometric feature constraints between point feature matching pairs and planar feature matching pairs;
FIG. 5 is a flowchart of the detailed operation of the screening of the matching pairs of planar features based on geometric feature constraints between the planar features;
FIG. 6 is a flowchart of a second embodiment of the indoor environment sensing method for a quadruped robot according to the present invention;
fig. 7 is a block diagram of an indoor environment sensing device for a quadruped robot according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide an indoor environment sensing method, device, equipment and computer readable storage medium for the quadruped robot, and under a simple indoor environment with weak texture, the deviation between the estimated track of the SLAM algorithm and the real track of the camera is small, so that the accuracy and the robustness of the algorithm are improved.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for sensing indoor environment of a quadruped robot according to a first embodiment of the present invention; the specific operation steps are as follows:
step S11: in the operating process of the quadruped robot, collecting RGB images and depth images of each collecting point in the indoor environment to be sensed by using an RGB-D camera;
in order to meet the requirement of the quadruped robot for sensing the indoor environment, images need to be continuously acquired by an RGB-D camera (RGB + Depth) installed on the quadruped robot, so that the problems of positioning and image building are solved by utilizing continuous images acquired by the RGB-D camera.
Step S12: extracting ORB feature points from the RGB images, and matching the ORB feature points of two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images;
step S13: converting the depth image into point cloud data, extracting plane features according to the point cloud data, and acquiring a plane feature full-matching set of two adjacent frames of point clouds through full association;
step S14: screening the plane feature matching pairs in the plane feature full matching set based on the geometric feature constraint between the point feature matching set and the plane feature full matching set to obtain a plane feature matching candidate set;
step S15: performing secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set;
the geometric feature constraint between the planar features is to match Pairs (PA) for any two pairs of planar features in the planar feature matching candidate setx,PBx)、(PAy,PBy) Plane PAxNormal vector of (1)
Figure BDA0003189462360000071
And plane PAyNormal vector of (1)
Figure BDA0003189462360000072
Are all equal to the plane PBxNormal vector of (1)
Figure BDA0003189462360000073
And plane PByNormal vector of (1)
Figure BDA0003189462360000081
The included angle of (a).
Step S16: solving the motion transformation between the two adjacent frames of point clouds according to the target plane feature matching set to obtain a transformation matrix between the two adjacent frames of point clouds;
the motion solution of the visual odometer is to solve the relationship between the poses of two acquisition points for two frames of RGB-D data continuously acquired in motion, the motion of the quadruped robot and the RGB-D camera is rigid motion, the pose is transformed into Euclidean transformation which can be described as a rotation motion and a translation motion, and therefore, the transformation matrix T consisting of a rotation matrix R and a translation vector T can be adopted for representing:
T = [ R t ; 0 1 ]
Let the feature points in the original frame and the target frame be {P_i} and {P'_i}; a feature point P_i in the original frame is transformed by P'_i = R·P_i + t to obtain the corresponding feature point P'_i in the target frame.
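The description above only states that a transformation matrix consisting of R and t is solved from the matched plane features; the exact solver is not spelled out here. The sketch below shows one common way to recover R and t from matched plane parameters (unit normal n and offset d of the plane n·x + d = 0), given as an assumption for illustration rather than as the patent's method.

```python
# Under the transform x_B = R x_A + t, matched planes satisfy n_B = R n_A and
# d_B = d_A - n_B . t, which yields R via SVD and t via linear least squares.
import numpy as np

def solve_motion_from_planes(planes_a, planes_b):
    """planes_a / planes_b: lists of (unit_normal, d) matched pairwise."""
    NA = np.array([n for n, _ in planes_a])          # k x 3 normals, frame A
    NB = np.array([n for n, _ in planes_b])          # k x 3 normals, frame B
    # Rotation: Kabsch/SVD alignment of the two normal sets
    U, _, Vt = np.linalg.svd(NB.T @ NA)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    # Translation: least squares on n_B . t = d_A - d_B
    # (needs at least 3 matched planes with non-parallel normals)
    dA = np.array([d for _, d in planes_a])
    dB = np.array([d for _, d in planes_b])
    t, *_ = np.linalg.lstsq(NB, dA - dB, rcond=None)
    return R, t
```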
Step S17: and splicing the two adjacent frames of point clouds according to a transformation matrix between the two adjacent frames of point clouds so as to estimate the running track of the quadruped robot and construct a three-dimensional point cloud map of the indoor environment to be perceived.
Referring to fig. 2, step S12 specifically includes:
step S121: constructing an image pyramid of the RGB image;
step S122: detecting FAST angular points on each layer of images of the image pyramid, calculating Harris response values of the FAST angular points, and selecting N isolated angular points with the maximum Harris response values as feature points of the RGB images;
extracting the FAST corner mainly considers the gray values of pixel points around the characteristic point, firstly, the gray values of the candidate points and the pixel points around the candidate points are calculated, the candidate points and the gray values around the candidate points are compared, and if the number of the pixel points with larger difference of the gray values around the candidate points exceeds a threshold value, the candidate points are extracted as the FAST corner.
In the invention, firstly, an upper limit of an extracted corner point is specified, Harris response values of FAST corner points are calculated and compared, the corner points and side lines are distinguished, and the isolated corner points of the first N maximum response values are reserved as a corner point set.
Step S123: calculating the direction of each characteristic point by using a gray scale centroid method so as to rotate the circumference of each characteristic point to the same direction;
step S124: calculating BRIEF descriptors of the feature points to obtain ORB feature points of the RGB image;
the ORB feature points use an improved BRIEF descriptor. The BRIEF randomly determines two pixel points p and q near the key point, and compares the gray values of p and q, wherein if p is larger, 1 is selected, and if q is larger, 0 is selected. The ORB feature points adopt 128-dimensional BRIEF descriptors to randomly select 128 pairs of p and q pixel points, and because the ORB calculates the direction of the key points in the feature point extraction stage, the key points can be rotated to the same direction in the circumference, and then the rotated BRIEF descriptors are calculated, so that the descriptors of the ORB have better rotation invariance.
Step S125: calculating the Hamming distance between the BRIEF descriptor of each ORB feature point in the first frame of RGB image and the BRIEF descriptors of all ORB feature points in the second frame of RGB image, and matching the two ORB feature points with the minimum Hamming distance to obtain all point feature matching pairs of the two adjacent frames of images;
point feature matching based on Hamming distance (Hamming) between BRIEF description descriptors of ORB feature points: the hamming distance between descriptors is obtained by the xor operation between descriptors plus the operation of counting the number of "1" s in binary coding. Firstly, calculating the Hamming distance between a descriptor of each feature point in a first frame RGB image and all feature point descriptors in a second frame RGB image, sequencing and selecting the feature point with the minimum Hamming distance for matching to obtain a group of full matching results.
Step S126: and acquiring the minimum Hamming distance between all the point feature matching pairs, taking the Hamming distance between the point feature matching pairs smaller than twice of the minimum Hamming distance as a judgment basis, and deleting the mismatching in all the point feature matching pairs to obtain the point feature matching sets of the two adjacent frames of images.
The full matching result of the point features contains a large number of mismatches. Taking a Hamming distance of less than twice the minimum distance as the criterion, the full matching result can be preliminarily screened: point feature matching pairs whose Hamming distance is greater than or equal to twice the minimum distance are deleted, realizing the screening of mismatches.
According to the method provided by the embodiment of the invention, the extracted point features are ORB feature points extracted from the RGB image; ORB consists of an Oriented FAST key point and a BRIEF descriptor, and has the advantages of fast extraction and rotation invariance. In the invention, the scale problem is handled by constructing an image pyramid and detecting corner points on each layer of the pyramid, and the direction of each key point is described by the gray centroid method, which improves the robustness of the description across different images.
Referring to fig. 3, step S13 specifically includes:
step S131: converting the depth image into point cloud data;
the invention extracts planar features from the depth image. The extraction of the plane features is to extract a series of planes from the three-dimensional point set, and the RGB-D camera cannot directly acquire the three-dimensional point set, so that a depth image obtained by the RGB-D camera needs to be converted into point cloud data. According to the imaging relationship of the pinhole camera, the pixel coordinates [ u, v, d ] (d is depth data) in the depth image can be converted into the space coordinates [ x, y, z ], and the conversion formula is as follows:
z = d / s
x = (u - c_x) · z / f_x
y = (v - c_y) · z / f_y
wherein f_x and f_y are the focal lengths of the RGB-D camera along the x and y axes respectively, c_x and c_y are the coordinates of the optical center of the RGB-D camera, and s is the scaling factor of the depth image.
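A small numpy sketch of this pixel-to-point conversion is given below; the intrinsic parameters in the usage line are placeholder values for illustration only.

```python
# Depth image -> point cloud using the pinhole model above.
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, s=1000.0):
    """depth: HxW array of raw depth values d; returns an Nx3 array [x, y, z]."""
    v, u = np.indices(depth.shape)                 # pixel rows (v) and columns (u)
    z = depth.astype(np.float64) / s               # z = d / s
    x = (u - cx) * z / fx                          # x = (u - cx) * z / fx
    y = (v - cy) * z / fy                          # y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                      # drop pixels with no depth

# e.g. cloud = depth_to_pointcloud(depth_img, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```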
Step S132: filtering the point cloud data, and deleting outliers in the point cloud data;
and presetting a percentage threshold, and performing primary processing on the point cloud data by using a filtering algorithm to remove the partial point cloud with the maximum point cloud average distance. And calculating the average distance between each point cloud and the a points closest to the point cloud, wherein the points with extremely large distances can be regarded as outliers and deleted from the point cloud data.
Step S133: recording the total number of the filtered point clouds, and randomly selecting a part of point cloud fitting planes in the filtered point clouds by adopting a RANSAC algorithm;
plane features are extracted from filtered point clouds by using a RANSAC algorithm, and the process of fitting a plane by using the RANSAC algorithm is essentially to search a group of model parameters A in point cloud dataP;BP;CP;DPSuch that the parameter determined by the set of parameters is APx+BPy+CPz+DPThe number of points on the plane represented by 0 exceeds the set threshold; wherein (A)P;BP;CP) A unit normal vector, | D, representing a planePAnd | is the distance of the plane from the origin of the space coordinate system.
In each iteration, the RANSAC algorithm randomly selects part of the points to fit a plane, and the model with the largest number of inliers is taken as the final output.
Step S134: storing a residual point cloud iterative fitting plane until a plane containing a first preset number of inner points cannot be fitted in the residual point cloud or the residual point cloud number is smaller than a second preset value, and finishing the extraction of an initial segmentation plane;
step S135: performing neighborhood plane fitting estimation on the interior points of the current initial segmentation plane to obtain a normal vector of each interior point in the current initial segmentation plane;
step S136: deleting the interior points of which the included angle between the normal vector of the interior points in the current initial segmentation plane and the normal vector of the current initial segmentation plane is larger than a preset angle, and judging whether the number of the remaining interior points is larger than or equal to a preset percentage of the number of the original interior points;
step S137: if not, deleting the current initial segmentation plane;
step S138: if so, re-fitting the plane by using the normal vector mean of the residual interior points to generate the plane characteristic of the point cloud data;
and recording the total number P of the filtered point clouds, fitting a plane to the filtered point clouds, storing the residual point clouds, and continuously fitting until the plane containing the number A of inner points cannot be fitted in the point clouds or the number of the residual point clouds is smaller than a threshold value B, and finishing the extraction of the plane features. In order to ensure sufficient operation speed and to contain a certain error, the threshold of the RANSAC algorithm cannot be set too small.
At this time, the plane features of the preliminary segmentation only represent that certain points are included in the region with the distance from the plane being less than the threshold, and curved surfaces and wrong segmentation exist. Therefore, neighborhood plane fitting can be carried out on the segmented interior points to estimate each point cloud normal vector, interior points with an included angle larger than 10 degrees with the segmented plane normal vector are deleted, if the number of the remaining interior points is less than 70 percent of the number of the original interior points, the group of planes are abandoned, and if the number of the remaining interior points is enough, the normal vector mean values of the remaining points are used for re-fitting the planes.
It should be noted that the first preset number, the second preset value, the preset angle and the preset percentage in the embodiment of the present invention may be set according to practical applications, and are not limited herein.
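The refinement just described (per-point normal estimation, the 10-degree angle check and the 70 percent retention rule) can be sketched as follows; the neighbour count k is an illustrative assumption, and the brute-force neighbour search is for clarity only.

```python
# Keep a RANSAC plane only if enough inliers have locally consistent normals.
import numpy as np

def refine_plane(inliers, plane_normal, k=10, max_angle_deg=10.0, keep_ratio=0.7):
    kept = []
    for i, p in enumerate(inliers):
        # k nearest neighbours of p among the inliers
        idx = np.argsort(np.linalg.norm(inliers - p, axis=1))[1:k + 1]
        q = inliers[idx] - inliers[idx].mean(axis=0)
        # smallest singular vector of the neighbourhood = local normal estimate
        _, _, Vt = np.linalg.svd(q, full_matrices=False)
        n_local = Vt[-1]
        cos_a = abs(np.dot(n_local, plane_normal))
        if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= max_angle_deg:
            kept.append(i)
    if len(kept) < keep_ratio * len(inliers):
        return None                                # discard this plane
    return inliers[kept]                           # re-fit using the remaining inliers
```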
Step S139: and acquiring a full matching set of plane features of two adjacent frames of point clouds through full association.
The quantity of the plane features extracted from the point cloud based on the RANSAC algorithm is not too large, so that a feature full-matching set can be obtained through full association, and a potential plane feature matching candidate set is screened out through geometric rigid constraint between point and surface features.
Based on the use scene of the invention, a large plane with more internal points is expected to be extracted as a matching reference, and the extraction speed is accelerated.
Referring to fig. 4, step S14 specifically includes:
step S141: sorting the point feature matching pairs in the point feature matching set according to the Hamming distance, and selecting the first m groups of point feature matching pairs
F_i = (F_Ai, F_Bi), i = 1, 2, …, m;
wherein F_i is the ith group of point feature matching pairs of the two adjacent frames of images, F_Ai is the ORB feature point in the first frame RGB image, and F_Bi is the ORB feature point in the second frame RGB image;
step S142: obtaining the jth group of plane feature matching pairs P in the plane feature full matching setj=(PAj,PBj),j=1,2,…,k;
wherein PA_j is the plane feature extracted from the first frame of point cloud data, PB_j is the plane feature extracted from the second frame of point cloud data, and k is the total number of plane feature matching pairs in the plane feature full matching set;
step S143: computing
the dispersion of F_Ai to PA_j to obtain a first dispersion;
F_Ai is substituted into the plane model of PA_j, and the first dispersion is calculated as follows:
D_Aij = |n_Aj · p_Ai + d_Aj|
wherein p_Ai is the three-dimensional coordinate of F_Ai, n_Aj is the unit normal vector of PA_j, and |d_Aj| is the distance of PA_j from the origin of the spatial coordinate system;
step S144: computing
the dispersion of F_Bi to PB_j to obtain a second dispersion;
the second dispersion is calculated as follows:
D_Bij = |n_Bj · p_Bi + d_Bj|
wherein p_Bi is the three-dimensional coordinate of F_Bi, n_Bj is the unit normal vector of PB_j, and |d_Bj| is the distance of PB_j from the origin of the spatial coordinate system;
step S145: calculating an absolute value D of a difference between the first dispersion and the second dispersionijAccording to said absolute value DijObtaining the score S (D) of the jth group of plane feature matching pairsij);
In the present invention, the absolute value D of the difference between the first dispersion and the second dispersion is determinedijAs a geometric feature constraint between the point feature matching pair and the plane feature matching pair, the calculation formula is as follows:
D_ij = |D_Aij - D_Bij|
the geometric feature constraint DijThe score correspondence with the matching pairs of planar features is shown in table 1:
TABLE 1 Plane feature matching score table
Score S(D_ij)    Range of D_ij
0                D_ij > 0.05 m
1                0.01 m < D_ij ≤ 0.05 m
2                0.005 m < D_ij ≤ 0.01 m
3                D_ij ≤ 0.005 m
Step S146: calculating the total score of the jth group of plane feature matching pairs
S_j = Σ_{i=1}^{m} S(D_ij);
Step S147: judging whether the total score S_j is greater than a set threshold; if the total score S_j is greater than the set threshold, extracting the jth group of plane feature matching pairs into the plane feature matching candidate set.
The higher the total score of the plane feature matching pair is, the higher the matching degree of the plane feature is, and the plane feature matching pair with the total score larger than a set threshold h in the plane feature full matching set is selected and extracted to a plane feature matching candidate set { (PA)j,PBj)|Sj≥h}。
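Putting steps S141 to S147 together, a compact sketch of the first screening stage might look like this; the score bands follow Table 1, and the threshold h and the top-m point matches are inputs as described above.

```python
# First screening: score each plane matching pair against the m best point
# matching pairs via the dispersion difference D_ij, keep pairs with total > h.
import numpy as np

def point_plane_dispersion(p, plane):
    n, d = plane                                   # plane: n . x + d = 0, |n| = 1
    return abs(np.dot(n, p) + d)

def score(d_ij):
    if d_ij <= 0.005:  return 3
    if d_ij <= 0.01:   return 2
    if d_ij <= 0.05:   return 1
    return 0

def first_screening(point_pairs, plane_pairs, h):
    """point_pairs: [(p_A, p_B)] 3-D points; plane_pairs: [((n_A, d_A), (n_B, d_B))]."""
    candidates = []
    for pa, pb in plane_pairs:
        total = 0
        for fa, fb in point_pairs:                 # the m best point matches
            d_ij = abs(point_plane_dispersion(fa, pa) - point_plane_dispersion(fb, pb))
            total += score(d_ij)
        if total > h:
            candidates.append(((pa, pb), total))
    return candidates
```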
Referring to fig. 5, step S15 specifically includes:
step S151: according to the total score of each plane feature matching pair in the plane feature matching candidate set, the plane feature matching pairs in the plane feature matching candidate set are ranked to generate a candidate set
U = {P_1, P_2, …, P_e}, sorted so that S_n ≥ S_{n+1};
wherein e is the number of plane feature matching pairs in the plane feature matching candidate set, n = 1, 2, …, e-1, and S_n, S_{n+1} are the total scores of the plane feature matching pairs P_n and P_{n+1} respectively; all plane feature matching pairs in the candidate set U are registered as elements of the candidate set U.
Step S152: extracting the first element P_1 of the candidate set U into the set V = {P_1}, and deleting the first element from the candidate set U;
step S153: traversing the elements in the candidate set U, and deleting the elements containing the same plane characteristics as the elements in the set V and the elements not meeting the plane-plane rigidity constraint;
planar feature matching Pair (PA)x,PBx)、(PAy,PBy) The face-to-face stiffness constraints in between are:
|∠(n_Ax, n_Ay) - ∠(n_Bx, n_By)| < θ_0
wherein ∠(·,·) denotes the included angle between two normal vectors and θ_0 is a first angle threshold.
Step S154: circularly executing the step S152 to the step S153 until the candidate set U is an empty set;
step S155: traversing the elements in the set V, judging whether each element in the set V and the rest elements are parallel plane matching pairs, if so, deleting the current element to obtain a processed set V, and setting the processed set V as the target plane feature matching set.
If two plane feature matching pairs (PA_x, PB_x) and (PA_y, PB_y) satisfy:
∠(n_Ax, n_Ay) < θ_1 and ∠(n_Bx, n_By) < θ_1
the two pairs are defined as a parallel plane matching pair, wherein θ_1 is a second angle threshold.
In the embodiment, an RGB (red, green, blue) -D camera is used for extracting an RGB (red, green, blue) map and a depth map of the current position to generate a point cloud, and point features and plane features in the point cloud are extracted by considering the view field features of the height level of the quadruped robot and are matched with the point features and the plane extracted from the next frame of point cloud. And screening the fully-matched plane feature matching pairs twice according to the geometric feature constraint between the point feature matching pairs and the plane feature matching pairs and the geometric feature constraint between the plane features to obtain a target plane feature matching set. And estimating the relative pose between two frames of point clouds according to the target plane feature matching set, and continuously splicing the new frame and the old frame according to the relative pose.
The embodiment of the invention provides a plane feature extraction method based on the point cloud, a plane feature matching pair screening method based on the geometric feature constraints between point feature matching pairs and plane feature matching pairs, and a plane feature matching pair screening method based on the geometric feature constraints among plane features; the motion solving between two frames of point clouds under different numbers of point-plane matches is studied, and an RGB-D visual odometer based on point and plane features is established.
Since the RGB-D visual odometer used at the front end of the SLAM in the above embodiment splices point clouds pairwise, accumulated error exists, and the resulting map is not accurate enough when the robot works for a long time to construct a map of the whole indoor environment. Considering that the visual odometer is constructed from point and plane features, the back end of the SLAM can optimize the trajectory and the map by extracting key frames, constructing a pose graph and designing loop detection.
Referring to fig. 6, fig. 6 is a flowchart illustrating a second embodiment of an indoor environment sensing method for a quadruped robot according to the present invention; the specific operation steps are as follows:
step S61: in the moving process of the quadruped robot, an Intel D435i camera is used for collecting RGB images and depth images of each collection point in the indoor environment to be sensed;
step S62: extracting FAST characteristic points based on the RGB image, and extracting FAST-12 operators by using OpenCV to obtain point characteristics of the RGB image;
in this embodiment, using OpenCV to extract the FAST-12 operator, the grayscale threshold may be set to 20%; it should be noted that, in other embodiments of the present invention, the grayscale threshold may be set according to actual situations, and is not limited herein.
Step S63: matching point features of two adjacent frames of images, and deleting mismatching in a point feature full matching result based on a Hamming distance between point feature matching pairs to obtain a point feature matching set after the two adjacent frames of images are screened;
step S64: converting the depth image into point cloud data, extracting plane features according to the point cloud data, and acquiring a plane feature full-matching set of two adjacent frames of point clouds through full association;
step S65: based on the geometric feature constraint between the point feature matching set and the plane feature full matching set and the geometric feature constraint between plane features, performing twice screening on the plane feature matching pairs in the plane feature full matching set to obtain a target plane feature matching set;
step S66: splicing the two adjacent frames of point clouds according to a transformation matrix solved by the target plane feature matching set;
step S67: constructing a point-edge pose graph according to a transformation matrix between the camera pose corresponding to each frame of image acquired by the RGB-D camera and the point clouds of the two adjacent frames;
the pose graph is a graph formed by camera poses and consists of nodes V and edges E from the perspective of graph theory, and is represented by the following formula:
G={V,E}
wherein the nodes are the camera poses corresponding to each frame of RGB-D image, denoted V_1, V_2, …, V_n; an edge is the relative motion transformation matrix obtained by feature matching and motion solving between two adjacent frames of images, e.g. the edge between node V_i and node V_j is E_ij = T_ij. Because the motion transformations between poses contain errors, pose graph optimization is essentially a least squares problem:
E = Σ_{E_ij} ||e_ij||^2
wherein e_ij is the error between the measured relative transformation T_ij and the relative transformation computed from the current estimates of the pose nodes V_i and V_j.
Setting the objective function as E, the gradient of E with respect to V is solved, and the pose V is adjusted so that E converges to a minimum value:
V ← V - η · ∂E/∂V
wherein η is a learning factor, representing the step length.
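As a purely illustrative toy example of this update rule (an assumption made for clarity: poses are simplified to planar (x, y, yaw) vectors and a numeric gradient replaces an analytic one):

```python
# Gradient-descent pose graph optimization on simplified planar poses.
import numpy as np

def relative(vi, vj):
    # relative pose of vj expressed in vi's frame (planar approximation)
    dx, dy, dth = vj - vi
    c, s = np.cos(vi[2]), np.sin(vi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, dth])

def objective(poses, edges):
    # edges: list of (i, j, t_ij) with t_ij the measured relative pose
    return sum(np.sum((relative(poses[i], poses[j]) - t_ij) ** 2)
               for i, j, t_ij in edges)

def optimize(poses, edges, eta=0.01, iters=500, eps=1e-6):
    poses = poses.copy()
    for _ in range(iters):
        grad = np.zeros_like(poses)
        for k in range(poses.size):                # numeric gradient dE/dV
            d = np.zeros(poses.size)
            d[k] = eps
            grad.flat[k] = (objective(poses + d.reshape(poses.shape), edges)
                            - objective(poses, edges)) / eps
        poses -= eta * grad                        # V <- V - eta * dE/dV
        poses[0] = 0.0                             # anchor the first pose
    return poses
```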
Step S68: extracting key frames according to the point edge pose graph, performing loop detection to form edges among the key frames, and generating an optimized pose graph;
the visual odometer is characterized in that all frames are spliced into a map, the calculated amount is too large, and the visual odometer belongs to a chain structure and cannot be applied to pose map optimization. Therefore, in this embodiment, the number of frames to be spliced is reduced by extracting the key frames, and loop detection is constructed as an edge between the key frames.
Further, the key frame extraction comprises the following specific steps:
(1) constructing an initial key frame empty set F = { }, and extracting the first frame image f_0 into the set F;
(2) for a newly extracted frame image f_k, calculating the point-plane matching value T_k between the last frame image in the set F and f_k, and judging f_k according to T_k: if the point-plane matching number of f_k is less than a preset value, discarding f_k;
(3) if the point-plane matching number of f_k is greater than or equal to the preset value, setting thresholds T_max and T_min, where T_max is the maximum point-plane matching value and T_min is the minimum point-plane matching value, and judging whether f_k satisfies T_min < T_k < T_max; if so, putting f_k at the end of the set F as a new key frame.
The extraction of key frames ensures that image frames with matching errors, too little motion or calculation errors are filtered out. Each time a new key frame is added, loop detection is performed once to construct edges between key frames. The loop detection process is as follows (a combined sketch of key frame admission and loop-edge proposal is given after the list):
(1) short-distance loops: the newly added key frame f_l is matched against the last x frames in the set F, and an edge is added for each successful match;
(2) random loops: the newly added key frame f_l is matched against y randomly selected frames in the set F, and an edge is added for each successful match;
(3) similar loops: the newly added key frame f_l is matched against the z frames near a frame f_m that matched in the random loop, and an edge is added for each successful match.
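A hedged sketch of the key-frame admission rule and the three loop probes is given below; x, y, z, the thresholds and the match() routine (returning a point-plane match count between two frames) are placeholders standing in for the procedures described above.

```python
# Key-frame admission and loop-closure edge proposal.
import random

def admit_keyframe(keyframes, f_k, match, t_min, t_max, min_matches):
    t_k = match(keyframes[-1], f_k)                # match against the last key frame
    if t_k < min_matches:
        return False                               # matching too poor: discard frame
    if t_min < t_k < t_max:
        keyframes.append(f_k)                      # neither too little nor too much motion
        return True
    return False

def propose_loop_edges(keyframes, f_l, match, x=5, y=5, z=2, thresh=50):
    edges = set()
    # (1) short-distance loops: the last x key frames
    for i in range(max(0, len(keyframes) - x), len(keyframes)):
        if match(keyframes[i], f_l) > thresh:
            edges.add(i)
    # (2) random loops: y randomly chosen key frames
    hits = []
    for i in random.sample(range(len(keyframes)), min(y, len(keyframes))):
        if match(keyframes[i], f_l) > thresh:
            edges.add(i)
            hits.append(i)
    # (3) similar loops: the z key frames around each random hit
    for m in hits:
        for i in range(max(0, m - z), min(len(keyframes), m + z + 1)):
            if match(keyframes[i], f_l) > thresh:
                edges.add(i)
    return sorted(edges)                           # each index gets an edge to f_l
```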
Step S69: and optimizing the running track of the quadruped robot and the three-dimensional point cloud map of the indoor environment to be perceived according to the optimized pose map.
Through the key frame screening and loop detection processes, the pairwise-matching visual odometer is converted into a graph structure with key frames as nodes, so the camera trajectory and the map can be optimized through pose graph optimization. The optimized pose graph is taken as the optimal running track of the quadruped robot, and the point clouds corresponding to each frame of image are spliced into a globally consistent three-dimensional point cloud map by using the motion transformation relations among the frames.
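A minimal sketch of the final splicing step, assuming each frame's optimized pose is available as a 4x4 world-from-frame transformation matrix:

```python
# Splice per-frame point clouds into one global cloud using per-frame poses.
import numpy as np

def splice_point_clouds(clouds, poses):
    """clouds: list of Nx3 arrays; poses: list of 4x4 matrices T_world_frame."""
    merged = []
    for pts, T in zip(clouds, poses):
        hom = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
        merged.append((hom @ T.T)[:, :3])                # transform into the world frame
    return np.vstack(merged)
```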
The indoor environment sensing method for the quadruped robot mainly comprises a visual front end based on point features and plane features and an optimization back end based on a pose graph. Targeting the weak-texture characteristic of simple indoor environments, the point-and-plane visual front end ensures the accuracy of the SLAM algorithm when the indoor environment lacks feature points, and the pose-graph-based optimization back end combined with loop detection reduces the accumulated error of the visual odometer; finally, an RGB-D SLAM algorithm based on point and plane features is established.
To test the accuracy of the method provided in this embodiment, tests were conducted on the RGB-D dataset provided by the Computer Vision Group of the Technical University of Munich (TUM), Germany. The dataset is accompanied by the real trajectory (ground truth) of the camera pose, which can be used to verify the accuracy of the algorithm by comparing the trajectory estimated by SLAM with the real trajectory. Two types of accuracy indicators are defined for the TUM dataset: Absolute Trajectory Error (ATE) and Relative Pose Error (RPE). Two sequences, the desk sequence Fr1/xyz and the indoor environment sequence Fr2/pioneer, were selected from the TUM dataset for testing. Experimental results show that, by fusing plane features in an indoor environment with few ORB feature points, the method provided by the invention improves the accuracy of the SLAM algorithm, and the trajectory estimated by the algorithm is closer to the real camera trajectory.
Referring to fig. 7, fig. 7 is a block diagram of an indoor environment sensing device for a quadruped robot according to an embodiment of the present invention; the specific device may include:
the acquisition module 100 is used for acquiring RGB images and depth images of each acquisition point in the indoor environment to be sensed by using an RGB-D camera during the running of the quadruped robot;
a point feature matching module 200, configured to extract ORB feature points from the RGB images, and match the ORB feature points of two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images;
The plane feature matching module 300 is configured to convert the depth image into point cloud data, extract plane features according to the point cloud data, and obtain a full matching set of plane features of two adjacent frames of point clouds through full association;
a first screening module 400, configured to screen a plane feature matching pair in the full plane feature matching set based on a geometric feature constraint between the point feature matching set and the full plane feature matching set, so as to obtain a candidate plane feature matching set;
the second screening module 500 is configured to perform secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set;
the motion solving module 600 is configured to solve motion transformation between the two adjacent frames of point clouds according to the target plane feature matching set, so as to obtain a transformation matrix between the two adjacent frames of point clouds;
and the mapping module 700 is configured to splice the two adjacent frames of point clouds according to a transformation matrix between the two adjacent frames of point clouds so as to estimate a running track of the quadruped robot and construct a three-dimensional point cloud map of the indoor environment to be perceived.
The indoor environment sensing apparatus for a quadruped robot of this embodiment is used to implement the aforementioned indoor environment sensing method for a quadruped robot, and therefore specific embodiments of the indoor environment sensing apparatus for a quadruped robot can be found in the foregoing embodiments of the indoor environment sensing method for a quadruped robot. For example, the acquisition module 100, the point feature matching module 200, the plane feature matching module 300, the first screening module 400, the second screening module 500, the motion solving module 600 and the mapping module 700 are respectively used to implement steps S11, S12, S13, S14, S15, S16 and S17 of the aforementioned indoor environment sensing method for a quadruped robot, so specific embodiments thereof can refer to the descriptions of the corresponding embodiments of each part, and are not repeated here.
The specific embodiment of the present invention further provides an indoor environment sensing device for a quadruped robot, including: a memory for storing a computer program; and a processor for implementing the steps of the indoor environment sensing method for the quadruped robot when executing the computer program.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above-mentioned indoor environment sensing method for a quadruped robot.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The indoor environment sensing method, device, equipment and computer readable storage medium for the quadruped robot provided by the invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. An indoor environment sensing method for a quadruped robot is characterized by comprising the following steps:
in the operating process of the quadruped robot, collecting RGB images and depth images of each collecting point in the indoor environment to be sensed by using an RGB-D camera;
extracting ORB feature points from the RGB images, and matching the ORB feature points of two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images;
converting the depth image into point cloud data, extracting plane features according to the point cloud data, and acquiring a plane feature full-matching set of two adjacent frames of point clouds through full association;
screening the plane feature matching pairs in the plane feature full matching set based on the geometric feature constraint between the point feature matching set and the plane feature full matching set to obtain a plane feature matching candidate set;
performing secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set;
solving the motion transformation between the two adjacent frames of point clouds according to the target plane feature matching set to obtain a transformation matrix between the two adjacent frames of point clouds;
and splicing the two adjacent frames of point clouds according to a transformation matrix between the two adjacent frames of point clouds so as to estimate the running track of the quadruped robot and construct a three-dimensional point cloud map of the indoor environment to be perceived.
2. The method as claimed in claim 1, wherein the extracting ORB feature points in the RGB image and matching ORB feature points of two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images comprises:
constructing an image pyramid of the RGB image;
detecting FAST corner points on each layer of the image pyramid, calculating Harris response values of the FAST corner points, and selecting N isolated corner points with the largest Harris response values as feature points of the RGB image;
calculating the orientation of each feature point by using the gray-scale centroid method, so that the circular patch around each feature point can be rotated to the same reference direction;
calculating BRIEF descriptors of the feature points to obtain ORB feature points of the RGB image;
calculating the Hamming distance between the BRIEF descriptor of each ORB feature point in the first frame of RGB image and the BRIEF descriptors of all ORB feature points in the second frame of RGB image, and matching the two ORB feature points with the minimum Hamming distance to obtain all point feature matching pairs of the two adjacent frames of images;
and acquiring the minimum Hamming distance among all the point feature matching pairs, taking twice the minimum Hamming distance as a threshold, and deleting mismatches whose Hamming distance is not smaller than the threshold from all the point feature matching pairs, so as to obtain the point feature matching set of the two adjacent frames of images.
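By way of a non-limiting illustration of claim 2, the following Python sketch relies on OpenCV's ORB implementation, which internally builds the image pyramid, detects oriented FAST corners with Harris ranking and computes rotated BRIEF descriptors; the image paths, the feature budget and the distance floor of 30 are illustrative assumptions only:

import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical first frame
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)   # hypothetical second frame

orb = cv2.ORB_create(nfeatures=500)          # pyramid + oriented FAST + BRIEF in one object
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching on the Hamming distance between BRIEF descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Keep only matches whose distance is below twice the minimum distance
# (with a small floor so that the threshold is never close to zero).
min_dist = min(m.distance for m in matches)
good = [m for m in matches if m.distance < max(2 * min_dist, 30.0)]
point_matches = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]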
3. The method of claim 1, wherein the extracting planar features from the point cloud data comprises:
deleting outliers in the point cloud data by adopting a filtering algorithm;
and extracting plane features from the filtered point cloud data by using a RANSAC algorithm.
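As a non-limiting illustration of claim 3, the patent does not prescribe a particular library; in the Python sketch below, Open3D's statistical outlier removal stands in for the filtering algorithm and its segment_plane call stands in for the RANSAC plane extraction, with all thresholds chosen for illustration only:

import numpy as np
import open3d as o3d

def extract_planes(points_xyz, max_planes=4, min_inliers=500):
    # points_xyz: Nx3 array converted from a depth image.
    # Returns a list of (unit_normal, distance_to_origin, inlier_cloud) tuples.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # delete outliers

    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        # RANSAC plane fit: model = [a, b, c, d] with a*x + b*y + c*z + d = 0.
        model, inliers = rest.segment_plane(distance_threshold=0.02,
                                            ransac_n=3, num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        n = np.asarray(model[:3])
        norm = np.linalg.norm(n)
        planes.append((n / norm, model[3] / norm, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)   # peel the plane off and repeat
    return planes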
4. The method of claim 1, wherein the screening of the planar feature matching pairs in the full planar feature matching set based on the geometric feature constraint between the point feature matching set and the full planar feature matching set to obtain a candidate planar feature matching set comprises:
sorting the point feature matching pairs in the point feature matching set according to the Hamming distance, and selecting the first m groups of point feature matching pairs;
according to a geometric feature constraint Dij between the ith group of point feature matching pairs and the jth group of plane feature matching pairs, obtaining a score S(Dij) of the jth group of plane feature matching pairs, wherein i = 1, 2, …, m, j = 1, 2, …, k, and k is the total number of plane feature matching pairs in the plane feature full matching set;
calculating the total score of the jth group of plane feature matching pairs, Sj = S(D1j) + S(D2j) + … + S(Dmj);
judging whether the total score Sj is greater than a preset threshold value;
and if the total score Sj is greater than the preset threshold value, extracting the jth group of plane feature matching pairs into the plane feature matching candidate set.
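A non-limiting sketch of the first screening of claim 4 follows; the per-pair score S(Dij) is assumed here to be a simple 0/1 indicator of whether the constraint Dij falls below a tolerance, which is only one possible scoring choice, and all parameter values are illustrative:

def first_screening(point_pairs, plane_pairs, dispersion_fn,
                    m=20, d_tol=0.05, score_threshold=10):
    # point_pairs: point feature matches sorted by Hamming distance (best first).
    # plane_pairs: the plane feature full matching set.
    # dispersion_fn: callable returning Dij for one point pair and one plane pair.
    candidates = []
    top_points = point_pairs[:m]                    # select the first m point matches
    for plane_pair in plane_pairs:                  # j = 1, ..., k
        s_total = 0
        for point_pair in top_points:               # i = 1, ..., m
            d_ij = dispersion_fn(point_pair, plane_pair)
            s_total += 1 if d_ij < d_tol else 0     # S(Dij) as a 0/1 vote
        if s_total > score_threshold:               # keep high-scoring plane matches
            candidates.append(plane_pair)
    return candidates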
5. The method of claim 4, wherein the obtaining, according to the geometric feature constraint Dij between the ith group of point feature matching pairs and the jth group of plane feature matching pairs, the score S(Dij) of the jth group of plane feature matching pairs comprises:
obtaining the ith group of point feature matching pairs pi = (pAi, pBi), wherein pAi is the ORB feature point in the first frame RGB image, and pBi is the ORB feature point in the second frame RGB image;
obtaining the jth group of plane feature matching pairs Pj = (PAj, PBj), wherein PAj is the plane feature extracted from the first frame of point cloud data, and PBj is the plane feature extracted from the second frame of point cloud data;
calculating the dispersion of pAi with respect to PAj to obtain a first dispersion dAij, wherein the first dispersion is calculated from the three-dimensional coordinates of pAi, the unit normal vector nAj of PAj, and the distance dAj from PAj to the origin of the spatial coordinate system;
calculating the dispersion of pBi with respect to PBj to obtain a second dispersion dBij, wherein the second dispersion is calculated from the three-dimensional coordinates of pBi, the unit normal vector nBj of PBj, and the distance dBj from PBj to the origin of the spatial coordinate system;
and calculating the absolute value Dij = |dAij − dBij| of the difference between the first dispersion and the second dispersion, so as to determine the score S(Dij) of the jth group of plane feature matching pairs according to the absolute value Dij.
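A non-limiting sketch of the constraint Dij of claim 5 follows, under the assumption that the dispersion of a point with respect to a plane is the unsigned point-to-plane distance |n·p + d| for a plane with unit normal n and signed distance d to the origin; the 3D coordinates of the matched ORB points are assumed to have been obtained by back-projecting the pixels with their depth values:

import numpy as np

def point_plane_dispersion(p, plane):
    n, d = plane                        # unit normal (3-vector) and distance to origin
    return abs(float(np.dot(n, p)) + d)

def constraint_D(point_pair, plane_pair):
    # point_pair: (pA, pB) 3D coordinates of one matched ORB point in frames A and B.
    # plane_pair: ((nA, dA), (nB, dB)) one matched plane in frames A and B.
    pA, pB = point_pair
    planeA, planeB = plane_pair
    dA = point_plane_dispersion(np.asarray(pA), planeA)   # first dispersion
    dB = point_plane_dispersion(np.asarray(pB), planeB)   # second dispersion
    return abs(dA - dB)   # Dij: small when the point/plane pairs are geometrically consistent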
6. The method of claim 1, wherein the geometric feature constraint between plane features is that, for any two plane feature matching pairs (PAx, PBx) and (PAy, PBy) in the plane feature matching candidate set, the included angle between the normal vector nAx of the plane PAx and the normal vector nAy of the plane PAy is equal to the included angle between the normal vector nBx of the plane PBx and the normal vector nBy of the plane PBy.
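A non-limiting sketch of the angle-consistency constraint of claim 6 follows; the tolerance that replaces strict equality of the two included angles is an illustrative assumption intended to absorb sensor noise:

import numpy as np

def angle_between(n1, n2):
    # Angle between two unit normal vectors, clipped for numerical safety.
    return float(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

def pair_is_consistent(plane_pair_x, plane_pair_y, tol_rad=np.deg2rad(5.0)):
    # plane_pair_x = ((nAx, dAx), (nBx, dBx)), plane_pair_y = ((nAy, dAy), (nBy, dBy)).
    (nAx, _), (nBx, _) = plane_pair_x
    (nAy, _), (nBy, _) = plane_pair_y
    return abs(angle_between(nAx, nAy) - angle_between(nBx, nBy)) < tol_rad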
7. The method of claim 1, wherein the stitching the two adjacent frames of point clouds according to the transformation matrix between the two adjacent frames of point clouds to estimate the trajectory of the quadruped robot, and the constructing the three-dimensional point cloud map of the indoor environment to be perceived comprises:
constructing a point-edge pose graph, wherein the camera pose corresponding to each frame of image acquired by the RGB-D camera is taken as a vertex (point), and the transformation matrix between the two adjacent frames of point clouds is taken as an edge;
extracting key frames according to the point-edge pose graph, performing loop detection to form edges among the key frames, and generating an optimized pose graph;
and optimizing the running track of the quadruped robot and the three-dimensional point cloud map of the indoor environment to be perceived according to the optimized pose map.
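As a non-limiting illustration of claim 7, the sketch below only shows how the frame-to-frame transformation matrices are composed into absolute camera poses, i.e. the vertices of the pose graph; key-frame extraction, loop detection and graph optimization are deliberately omitted and would in practice be delegated to a graph-optimization back end:

import numpy as np

def accumulate_trajectory(relative_transforms):
    # relative_transforms: list of 4x4 matrices, the k-th mapping frame k+1 into frame k.
    # Returns the absolute pose of every frame expressed in the frame-0 coordinate system.
    poses = [np.eye(4)]
    for T in relative_transforms:
        poses.append(poses[-1] @ T)    # pose_{k+1} = pose_k · T_k
    return poses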
8. An indoor environment sensing apparatus for a quadruped robot, comprising:
an acquisition module, configured to acquire, by using an RGB-D camera, RGB images and depth images of each acquisition point in the indoor environment to be sensed during operation of the quadruped robot;
the point feature matching module is used for extracting ORB feature points from the RGB images and matching the ORB feature points of the two adjacent frames of images to obtain a point feature matching set of the two adjacent frames of images;
the plane feature matching module is used for converting the depth image into point cloud data, extracting plane features according to the point cloud data and acquiring a plane feature full-matching set of two adjacent frames of point clouds through full association;
the first screening module is used for screening the plane feature matching pairs in the plane feature full matching set based on the geometric feature constraint between the point feature matching set and the plane feature full matching set to obtain a plane feature matching candidate set;
the second screening module is used for carrying out secondary screening on the plane feature matching pairs in the plane feature matching candidate set according to geometric feature constraints among the plane features to obtain a target plane feature matching set;
the motion solving module is used for solving the motion transformation between the two adjacent frames of point clouds according to the target plane feature matching set to obtain a transformation matrix between the two adjacent frames of point clouds;
and the mapping module is used for splicing the two adjacent frames of point clouds according to a transformation matrix between the two adjacent frames of point clouds so as to estimate the running track of the quadruped robot and construct a three-dimensional point cloud map of the indoor environment to be perceived.
9. An indoor environment sensing apparatus for a quadruped robot, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method for indoor environment perception for a quadruped robot according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the indoor environment sensing method for the quadruped robot according to any one of claims 1 to 7.
CN202110873329.9A 2021-07-30 2021-07-30 Indoor environment sensing method, device and equipment for quadruped robot Pending CN113739786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110873329.9A CN113739786A (en) 2021-07-30 2021-07-30 Indoor environment sensing method, device and equipment for quadruped robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110873329.9A CN113739786A (en) 2021-07-30 2021-07-30 Indoor environment sensing method, device and equipment for quadruped robot

Publications (1)

Publication Number Publication Date
CN113739786A true CN113739786A (en) 2021-12-03

Family

ID=78729603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110873329.9A Pending CN113739786A (en) 2021-07-30 2021-07-30 Indoor environment sensing method, device and equipment for quadruped robot

Country Status (1)

Country Link
CN (1) CN113739786A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
CN107590827A (en) * 2017-09-15 2018-01-16 重庆邮电大学 A kind of indoor mobile robot vision SLAM methods based on Kinect
CN110243390A (en) * 2019-07-10 2019-09-17 北京华捷艾米科技有限公司 The determination method, apparatus and odometer of pose
CN112258618A (en) * 2020-11-04 2021-01-22 中国科学院空天信息创新研究院 Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map
CN112435262A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Dynamic environment information detection method based on semantic segmentation network and multi-view geometry
CN112752028A (en) * 2021-01-06 2021-05-04 南方科技大学 Pose determination method, device and equipment of mobile platform and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王祺鑫 (Wang Qixin): "Research on environment perception and static gait planning of a quadruped robot based on an RGB-D camera", CNKI Outstanding Master's Theses Full-text Database, 15 January 2021 (2021-01-15), pages 9-27 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114485608A (en) * 2021-12-13 2022-05-13 武汉中海庭数据技术有限公司 Local point cloud rapid registration method for high-precision map making
CN114485608B (en) * 2021-12-13 2023-10-10 武汉中海庭数据技术有限公司 Local point cloud rapid registration method for high-precision map making
CN114078151A (en) * 2022-01-19 2022-02-22 季华实验室 Point cloud fusion method and device, electronic equipment and storage medium
CN114078151B (en) * 2022-01-19 2022-04-22 季华实验室 Point cloud fusion method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination