WO2022174603A1 - Pose prediction method, pose prediction apparatus and robot - Google Patents
Pose prediction method, pose prediction apparatus and robot
- Publication number
- WO2022174603A1 WO2022174603A1 PCT/CN2021/124611 CN2021124611W WO2022174603A1 WO 2022174603 A1 WO2022174603 A1 WO 2022174603A1 CN 2021124611 W CN2021124611 W CN 2021124611W WO 2022174603 A1 WO2022174603 A1 WO 2022174603A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- line
- feature
- pair
- robot
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Definitions
- the present application belongs to the technical field of vision algorithms, and in particular, relates to a pose prediction method, a pose prediction device, a robot, and a computer-readable storage medium.
- the present application provides a pose prediction method, a pose prediction device, a robot and a computer-readable storage medium, which can realize more accurate and robust robot pose prediction.
- In a first aspect, the present application provides a pose prediction method. The pose prediction method is applied to a robot provided with a binocular camera, where the binocular camera includes a first camera and a second camera, and the pose prediction method includes:
- finding at least one pair of matching feature points in a first image and a second image, where each pair of feature points includes a first feature point and a second feature point, the first image is obtained based on the image collected by the first camera at the current moment, the second image is obtained based on the image collected by the second camera at the current moment, the first feature point is in the first image, and the second feature point is in the second image;
- finding at least one pair of matching line features in the first image and the second image, where each pair of line features includes a first line feature and a second line feature, the first line feature is in the first image, and the second line feature is in the second image;
- obtaining the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot.
- In a second aspect, the present application provides a pose prediction apparatus. The pose prediction apparatus is applied to a robot provided with a binocular camera, the binocular camera includes a first camera and a second camera, and the pose prediction apparatus includes:
- a first search unit, used to find at least one pair of matching feature points in a first image and a second image, where each pair of feature points includes a first feature point and a second feature point, the first image is obtained based on the image collected by the first camera at the current moment, the second image is obtained based on the image collected by the second camera at the current moment, the first feature point is in the first image, and the second feature point is in the second image;
- a second search unit, used to find at least one pair of matching line features in the first image and the second image, where each pair of line features includes a first line feature and a second line feature, the first line feature is in the first image, and the second line feature is in the second image;
- a prediction unit configured to obtain the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot.
- In a third aspect, the present application provides a robot. The robot includes a memory, a processor, a binocular camera, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the method of the first aspect are implemented.
- In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method of the first aspect are implemented.
- In a fifth aspect, the present application provides a computer program product, where the computer program product includes a computer program, and when the computer program is executed by one or more processors, the steps of the method of the first aspect are implemented.
- Compared with the prior art, the beneficial effects of the present application are as follows: for a robot provided with a first camera and a second camera (that is, a binocular camera), at least one pair of matching feature points is found in a first image and a second image, where each pair of feature points includes a first feature point and a second feature point, the first image is obtained based on the image collected by the first camera at the current moment, the second image is obtained based on the image collected by the second camera at the current moment, the first feature point is in the first image, and the second feature point is in the second image; at least one pair of matching line features is also found in the first image and the second image, where each pair of line features includes a first line feature and a second line feature, the first line feature is in the first image, and the second line feature is in the second image; finally, the predicted pose of the robot can be obtained based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot.
- The above process combines feature points and line features to predict the pose of the robot. Through this combination, the geometric structure information of the environment in which the robot is located can be obtained, so that the robot can achieve accurate and robust pose prediction even in challenging weak-texture and low-visible-light scenes. It can be understood that, for the beneficial effects of the second aspect to the fifth aspect, reference may be made to the relevant description of the first aspect, which is not repeated here.
- FIG. 1 is a schematic diagram of an implementation flowchart of a pose prediction method provided by an embodiment of the present application
- FIG. 2 is an exemplary diagram of two line segments before splicing provided by an embodiment of the present application
- FIG. 3 is an exemplary diagram of a line segment obtained after splicing provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of a three-dimensional line feature reprojection residual provided by an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of a pose prediction apparatus provided by an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of a robot provided by an embodiment of the present application.
- the pose prediction method is applied to a robot provided with binocular cameras.
- one of the binocular cameras is referred to as the first camera, and the other camera is referred to as the second camera.
- the first camera may be a left-eye camera, and the second camera may be a right-eye camera.
- the pose prediction method includes:
- Step 101: in the first image and the second image, find at least one pair of matching feature points.
- the original images collected by the first camera and the second camera are first converted into grayscale images by the robot, and a series of preprocessing is performed on the grayscale images to improve subsequent data processing efficiency.
- the preprocessing operation may be a Gaussian blur operation to reduce the noise of the grayscale image, make the grayscale image smoother, and reduce the level of detail of the grayscale image.
- The preprocessed grayscale images are then used in each step of the embodiments of the present application. That is, the first image refers to the grayscale image obtained by preprocessing the original image collected by the first camera at the current moment, and the second image refers to the grayscale image obtained by preprocessing the original image collected by the second camera at the current moment.
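- As a minimal sketch of this preprocessing (the 5x5 Gaussian kernel is an assumed value, not one given in the application), the conversion could look like the following, using OpenCV:

```python
import cv2

def preprocess(raw_frame):
    """Convert a raw camera frame to a smoothed grayscale image.

    Sketch only: the 5x5 kernel size is an assumption; the application only
    states that a Gaussian blur is applied to reduce noise and detail.
    """
    gray = cv2.cvtColor(raw_frame, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # Gaussian blur for smoothing
    return gray
```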
- For a pair of feature points, that is, a feature point pair, the pair consists of two feature points, a first feature point and a second feature point, where the first feature point is in the first image and the second feature point is in the second image. That is, a feature point in the first image and the feature point in the second image that matches it constitute a pair of feature points.
- In some embodiments, the robot may first extract at least one feature point from the first image, and then search the second image by optical flow matching for the feature point matching each feature point of the first image, so as to obtain at least one pair of feature points.
- Note that one feature point in the second image can match at most one feature point in the first image, and likewise one feature point in the first image can match at most one feature point in the second image; that is, no feature point is repeated across different feature point pairs.
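- A hedged sketch of such stereo matching by optical flow is shown below; the window size, pyramid depth, forward-backward threshold, and the rounding-based duplicate check are illustrative assumptions rather than details taken from the application.

```python
import cv2
import numpy as np

def match_stereo_points(first_img, second_img, first_pts):
    """Match feature points of the first image into the second image by
    pyramidal LK optical flow, keeping at most one match per point.
    """
    pts = np.float32(first_pts).reshape(-1, 1, 2)
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(first_img, second_img, pts, None,
                                          winSize=(21, 21), maxLevel=3)
    # forward-backward check to discard unreliable correspondences (assumption)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(second_img, first_img, nxt, None,
                                            winSize=(21, 21), maxLevel=3)
    fb_err = np.linalg.norm(pts - back, axis=2).reshape(-1)
    ok = (st.reshape(-1) == 1) & (st2.reshape(-1) == 1) & (fb_err < 1.0)

    pairs, used = [], set()
    for i in np.flatnonzero(ok):
        key = (int(round(nxt[i, 0, 0])), int(round(nxt[i, 0, 1])))
        if key in used:   # one second-image point matches at most one first-image point
            continue
        used.add(key)
        pairs.append((tuple(pts[i, 0]), tuple(nxt[i, 0])))
    return pairs
```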
- In order to achieve fast extraction of feature points, the robot will first determine whether the currently obtained first image is the initial first image, that is, whether the first image is obtained based on the first frame of image collected after the first camera is activated.
- If the first image is the initial first image, the feature points of the first image can be extracted based on a preset first feature point extraction method, where the first feature point extraction method is related to corner points; if the first image is not the initial first image, the feature points of the first image can be extracted based on a preset second feature point extraction method, where the second feature point extraction method is related to optical flow.
- the first feature point extraction method and the second feature point extraction method are briefly introduced below:
- The first feature point extraction method refers to: performing FAST (Features from Accelerated Segment Test) corner detection on the first image and taking the extracted corner points as feature points.
- Considering that the number of extracted corner points is usually large and may even reach the thousands, a threshold N on the number of feature points is set here; that is, N corner points can be selected from the extracted corner points as feature points according to the response value of each corner point.
- Considering that too few feature points may make the predicted pose inaccurate while too many feature points may increase the computational load of the robot's system, N can be set to any integer in the interval [70, 100].
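- A minimal sketch of this first extraction method is given below; the FAST threshold of 20 is an assumed value, while keeping the N strongest responses follows the description above.

```python
import cv2

def extract_fast_corners(gray, n_keep=100):
    """FAST corner detection, keeping the N corners with the strongest
    response values as feature points (N in [70, 100] per the text above).
    """
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    keypoints = fast.detect(gray, None)
    keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)[:n_keep]
    return [k.pt for k in keypoints]
```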
- the second feature point extraction method refers to: performing optical flow tracking on the feature points obtained from the previous frame of the first image to obtain M feature points of the first image.
- The previous frame of the first image mentioned here refers to the first image obtained by preprocessing the original image acquired by the first camera at the previous moment, that is, the previous first image. For example, if the first image A1 is obtained based on the first camera at time i and the first image A2 is obtained based on the first camera at time i+1, then the first image A1 can be considered the previous frame of the first image A2.
- Considering that when the robot is moving it is difficult for all N feature points of the first image A1 to appear in the first image A2, the number of feature points obtained in the first image A2 after optical flow tracking of the N feature points of the first image A1 is M, with M ≤ N.
- In some embodiments, M may turn out to be much smaller than N; in that case it is considered that the robot may have moved by a relatively large amount, and the currently obtained first image can be determined to be a key frame.
- In this case, the robot will again perform FAST corner detection on the first image to supplement it with new feature points, so that the number of feature points in the first image reaches N. For example, assuming that time i is the initial moment and N is 100, the robot will extract 100 corner points from the first image A1 as its feature points; assuming further that optical flow tracking of the 100 feature points of the first image A1 yields only 40 feature points in the first image A2, then since 40 < 50 the first image A2 is judged to be a key frame; the robot will therefore also perform corner extraction on the first image A2 and add 60 corner points, which, together with the 40 feature points obtained by optical flow tracking, serve as the final feature points of the first image A2.
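- The second extraction method and the key-frame replenishment described above could be sketched as follows; the ratio used to decide that "M is much smaller than N" matches the 40 < 50 example but is otherwise an assumption, as is the FAST threshold.

```python
import cv2
import numpy as np

def track_and_replenish(prev_img, cur_img, prev_pts, n_target=100, key_ratio=0.5):
    """Track the previous frame's feature points into the current first image
    by optical flow; if far fewer than N survive, treat the frame as a key
    frame and top the set back up to N with fresh FAST corners.
    """
    pts = np.float32(prev_pts).reshape(-1, 1, 2)
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_img, cur_img, pts, None)
    tracked = [tuple(p[0]) for p, s in zip(nxt, st.reshape(-1)) if s == 1]

    if len(tracked) < key_ratio * n_target:             # e.g. 40 < 50 -> key frame
        fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
        corners = sorted(fast.detect(cur_img, None),
                         key=lambda k: k.response, reverse=True)
        for k in corners:
            if len(tracked) >= n_target:
                break
            tracked.append(k.pt)                         # replenish up to N feature points
    return tracked
```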
- Step 102: in the first image and the second image, find at least one pair of matching line features.
- the robot also finds at least a pair of matching line features in the first image and the second image, wherein each pair of line features includes a first line feature and a second line feature, The first line feature is within the first image and the second line feature is within the second image. That is, a line feature in the first image and a line feature matching the line feature in the second image constitute a pair of line features.
- Exemplarily, the robot may first perform line feature extraction operations on the first image and the second image respectively, to obtain at least one line feature extracted from the first image (denoted the third line feature) and at least one line feature extracted from the second image (denoted the fourth line feature); then, the robot can perform a matching operation between each third line feature and each fourth line feature, and obtain at least one pair of line features based on the matched third line features and fourth line features. Similar to step 101, no line feature is repeated across different line feature pairs.
- For any image, the line feature extraction operation can be implemented as follows: first, based on a preset straight line extraction algorithm, extract the line segments in the image, where the straight line extraction algorithm may be the LSD (Line Segment Detector) algorithm or another algorithm, which is not limited here.
- To avoid erroneous line feature extraction and to improve the efficiency of subsequent line feature processing, only line segments longer than a preset length are considered here; for example, only line segments whose pixel length exceeds 30 pixels may be considered.
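- A hedged sketch of this extraction and length filtering with OpenCV's LSD detector follows; whether cv2.createLineSegmentDetector is available depends on the OpenCV build, and the 30-pixel threshold is taken from the example above.

```python
import cv2
import numpy as np

def extract_long_segments(gray, min_len=30.0):
    """Extract line segments with the LSD detector and keep only segments
    longer than a preset pixel length.
    """
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    if lines is None:
        return []
    kept = []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        if np.hypot(x2 - x1, y2 - y1) > min_len:   # discard short, likely spurious segments
            kept.append(((float(x1), float(y1)), (float(x2), float(y2))))
    return kept
```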
- Considering that, under the influence of image noise or local blur, a long line segment may be extracted as multiple short line segments during the extraction process, the robot will also first detect whether two adjacent line segments lie on the same straight line, and splice adjacent line segments that lie on the same straight line, so as to ensure the continuity of line segment tracking.
- The two adjacent line segments refer to: the end point of one line segment being within a preset distance of the start point of the other line segment. It should be noted that the end point and start point of a line segment are determined by the straight line extraction algorithm, which is not elaborated here. After the above process is completed, the line segments in the resulting image are the line features of that image.
- Exemplarily, the robot can determine whether two adjacent line segments lie on the same straight line as follows: denote the two adjacent line segments as a first line segment and a second line segment, and compute the normal vector of the first line segment and the normal vector of the second line segment respectively; if the error between the normal vector of the first line segment and the normal vector of the second line segment is within a preset error range, it is determined that the first line segment and the second line segment lie on the same straight line.
- Referring to Figure 2 and Figure 3, Figure 2 gives an example of two adjacent line segments lying on the same straight line, and Figure 3 gives an example of the new line segment obtained after splicing the two adjacent line segments shown in Figure 2. It can be seen that the two adjacent line segments on the same straight line (line segment 1 and line segment 2) are spliced into a new line segment 3, and this new line segment 3 can serve as a line feature of the image.
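- A minimal sketch of this adjacency and collinearity check (with splicing) is shown below; the gap and normal-vector error thresholds are assumed values.

```python
import numpy as np

def try_splice(seg_a, seg_b, max_gap=5.0, max_normal_err=0.05):
    """Splice two segments into one if the end point of the first lies within
    a preset distance of the start point of the second and their normal
    vectors agree within a preset error, i.e. they lie on the same line.
    Segments are ((x1, y1), (x2, y2)) with start/end as given by the extractor.
    """
    def unit_normal(seg):
        (x1, y1), (x2, y2) = seg
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        d /= np.linalg.norm(d)
        return np.array([-d[1], d[0]])            # unit normal of the segment direction

    end_a = np.asarray(seg_a[1], dtype=float)
    start_b = np.asarray(seg_b[0], dtype=float)
    adjacent = np.linalg.norm(end_a - start_b) <= max_gap

    n_a, n_b = unit_normal(seg_a), unit_normal(seg_b)
    # compare normals up to sign: a small difference means the segments are collinear
    same_line = min(np.linalg.norm(n_a - n_b), np.linalg.norm(n_a + n_b)) <= max_normal_err
    if adjacent and same_line:
        return (seg_a[0], seg_b[1])               # spliced segment: start of A to end of B
    return None
```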
- In some embodiments, the robot can describe the extracted line features with an LBD (Line Band Descriptor) descriptor and judge whether two line features match based on their LBD descriptors.
- a mismatch may occur when the robot judges whether the two line features match.
- In the embodiments of the present application, two line features that may be mismatched can be eliminated in the following way: the midpoints and slopes of the two line features to be checked are obtained, and these are used to judge whether the two line features are mismatched.
- For each pair of matched third and fourth line features, whether the third line feature and the fourth line feature are mismatched can be judged based on the midpoint of the third line feature, the midpoint of the fourth line feature, the slope of the third line feature, and the slope of the fourth line feature. Specifically: obtain the coordinates of the midpoint of the third line feature in the first image, denoted the first coordinates; obtain the coordinates of the midpoint of the fourth line feature in the second image, denoted the second coordinates; check whether the distance between the first coordinates and the second coordinates is within a preset error distance; calculate the inclination angle of the third line feature relative to the x-axis of the image coordinate system from its slope, denoted the first angle; calculate the inclination angle of the fourth line feature relative to the x-axis of the image coordinate system from its slope, denoted the second angle; and check whether the angle difference between the first angle and the second angle is within a preset error angle. Only when the distance between the first coordinates and the second coordinates is within the preset error distance and the angle difference between the first angle and the second angle is within the preset error angle are the third line feature and the fourth line feature considered to truly match, in which case they can be determined to be a pair of line features.
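- A sketch of this mismatch rejection is given below; the distance and angle thresholds are assumed values, and the same check can be reused for the temporal screening described next.

```python
import numpy as np

def is_mismatch(line_a, line_b, max_mid_dist=20.0, max_angle_diff_deg=5.0):
    """Return True if a putative line-feature match should be rejected because
    the midpoints or the inclination angles (derived from the slopes) disagree.
    Lines are ((x1, y1), (x2, y2)).
    """
    def midpoint(seg):
        (x1, y1), (x2, y2) = seg
        return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

    def incline_deg(seg):
        (x1, y1), (x2, y2) = seg
        return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0  # angle w.r.t. image x-axis

    mid_ok = np.linalg.norm(midpoint(line_a) - midpoint(line_b)) <= max_mid_dist
    d_angle = abs(incline_deg(line_a) - incline_deg(line_b))
    d_angle = min(d_angle, 180.0 - d_angle)       # wrap around 180 degrees
    angle_ok = d_angle <= max_angle_diff_deg
    return not (mid_ok and angle_ok)
```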
- Similarly, the line features extracted from the first image can also be screened once in the above manner, specifically: based on optical flow tracking, the matching relationship between the line features obtained from the previous frame of the first image and the line features obtained from the first image is determined.
- For ease of description, in a given matching relationship, the line feature obtained from the previous frame of the first image is denoted the fifth line feature and the line feature obtained from the first image is denoted the sixth line feature. Since the robot usually does not move by a large amount within a short period of time, for each pair of matched fifth and sixth line features, whether the fifth line feature and the sixth line feature are mismatched can be judged based on the midpoint of the fifth line feature, the midpoint of the sixth line feature, the slope of the fifth line feature, and the slope of the sixth line feature. Specifically: obtain the coordinates of the midpoint of the fifth line feature in the previous frame of the first image, denoted the third coordinates; obtain the coordinates of the midpoint of the sixth line feature in the first image, denoted the fourth coordinates; check whether the distance between the third coordinates and the fourth coordinates is within a preset error distance; calculate the inclination angle of the fifth line feature relative to the x-axis of the image coordinate system from its slope, denoted the third angle; calculate the inclination angle of the sixth line feature relative to the x-axis of the image coordinate system from its slope, denoted the fourth angle; and check whether the angle difference between the third angle and the fourth angle is within a preset error angle. Only when the distance between the third coordinates and the fourth coordinates is within the preset error distance and the angle difference between the third angle and the fourth angle is within the preset error angle is it determined that the fifth line feature and the sixth line feature truly match, in which case the sixth line feature can be retained; otherwise, it can be determined that the fifth line feature and the sixth line feature do not match, and the sixth line feature can be removed from the first image, so as to screen the line features extracted from the first image.
- In some embodiments, considering that not all line features in the first image can find a matching line feature in the second image, the line features in the first image can be divided into two categories: the first category is line features that can be matched in the second image, and the second category is line features that cannot be matched in the second image.
- For the first category, the line feature matching a given line feature can be obtained directly from the second image to form a line feature pair.
- For the second category, the line feature matching a given line feature can be obtained from the previous frame of the first image, obtained by the first camera, to form a line feature pair.
- In this way, each line feature retained after screening of the first image can form a corresponding pair of line features.
- For ease of description, the line features of the first category may be denoted binocular line features, and the line features of the second category may be denoted non-binocular line features. When non-binocular line features exist, the robot can find at least one pair of matching line features in the first image and the previous frame of the first image based on the non-binocular line features, where each pair of line features formed in this way includes a non-binocular first line feature and a non-binocular second line feature, the non-binocular first line feature is in the first image, and the non-binocular second line feature is in the previous frame of the first image.
- Thus, the line feature pairs finally obtained by the robot may include not only at least one pair of line features obtained based on the binocular line features, the first image, and the second image, but also at least one pair of line features obtained based on the non-binocular line features, the first image, and the previous frame of the first image.
- Step 103: obtain the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot.
- In the embodiments of the present application, the robot is preset with an objective function, and the objective function can be constrained by the at least one pair of feature points determined in step 101, the at least one pair of line features determined in step 102, and the inertial data output by the inertial measurement unit (IMU, Inertial Measurement Unit) of the robot. Specifically, the objective function is built from the following quantities:
- B represents the set of inertial data output by the IMU
- C represents the set composed of at least one pair of feature points obtained in step 101
- L represents the set composed of at least one pair of line features obtained in step 102
- X represents the estimated value of the robot's system state, which includes the pose of the robot as well as the poses of the feature points and line features in space
- z represents the observed value of the robot's system state
- r represents the difference between the observed value of the system state and the estimated value of the system state, that is, the system state difference.
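- The expression of the objective function is not reproduced in this text. A hedged reconstruction from the definitions above, in the form commonly used for such tightly coupled point-line-inertial formulations (not necessarily the exact form of the application), is:

$$\min_{X}\ \sum_{b\in B}\big\|r_{B}(z_{b},X)\big\|^{2}+\sum_{c\in C}\big\|r_{C}(z_{c},X)\big\|^{2}+\sum_{l\in L}\big\|r_{L}(z_{l},X)\big\|^{2}$$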
- the set L may include line feature pairs obtained based on non-binocular line features, the first image, and an image of the previous frame of the first image.
- For example, suppose the robot extracted 10 line features in the first image; among these 10 line features, one was eliminated because no matching line feature could be found in the previous frame image, leaving 9 line features; that is, these 9 line features can all find matching line features in the previous frame image, and they are denoted line features 1, 2, 3, 4, 5, 6, 7, 8, and 9.
- By matching them against the line features extracted from the second image, it is found that line features 1, 3, and 4 cannot be matched to any line feature in the second image; line features 1, 3, and 4 will therefore each form a pair with their matching line features in the previous frame image, giving 3 pairs of line features, while line features 2, 5, 6, 7, 8, and 9 each form a pair with their matching line features in the second image, giving 6 pairs of line features. At this point, the construction of 9 pairs of line features is complete, and these 9 pairs of line features constitute the set L.
- As can be seen from the above, r specifically includes three parts. The first part is the residual between the integrated value of the IMU and the true value.
- The second part is the residual between the coordinates (i.e., pixel positions) obtained after a three-dimensional feature point is reprojected back into the two-dimensional image coordinate system and the actually observed coordinates of that feature point in the first image.
- the three-dimensional feature points are obtained by binocular triangulation of a pair of feature points.
- The third part is similar in principle to the second part and is the reprojection residual of the three-dimensional line features.
- Referring to FIG. 4, the three-dimensional line feature reprojection residual refers to the perpendicular distance between the straight line obtained after the three-dimensional line feature is projected from the world coordinate system onto the normalized planes of the first camera and the second camera, and the start point and end point of the original line feature of the first image in that normalized plane.
- the three-dimensional line feature is obtained by triangulating a pair of line features.
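- A minimal sketch of this line reprojection residual, under the assumption that the projected line is represented by two points on it in the normalized plane, could be:

```python
import numpy as np

def line_reprojection_residual(proj_p, proj_q, obs_start, obs_end):
    """Perpendicular distances from the observed line feature's start and end
    points to the line obtained by projecting the 3D line feature onto the
    camera's normalized plane (here represented by two points proj_p, proj_q).
    """
    p = np.asarray(proj_p, dtype=float)
    q = np.asarray(proj_q, dtype=float)
    d = q - p
    n = np.array([-d[1], d[0]])
    n /= np.linalg.norm(n)                                  # unit normal of the projected line
    r_start = np.dot(np.asarray(obs_start, float) - p, n)   # signed distance of the start point
    r_end = np.dot(np.asarray(obs_end, float) - p, n)       # signed distance of the end point
    return np.array([r_start, r_end])
```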
- Adding these three residuals together constitutes the above objective function. The system state estimate X, which includes the pose of the robot, is optimized in a tightly coupled manner by the least squares method so that the value of the entire objective function is minimized; the X at that point is the optimal solution, and the optimal X contains the predicted pose value of the current robot.
- In some embodiments, considering that the three-dimensional feature points and line features are also optimization variables of the objective function, the two are optimized together when the objective function is iteratively solved; that is, the pose of the robot, the three-dimensional feature points, and the three-dimensional line features are iteratively adjusted together so that the objective function is minimized.
- the X of the optimal solution not only contains the predicted value of the pose of the current robot, but also contains the optimized three-dimensional feature points and line features.
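- As a rough illustration of such a tightly coupled least-squares solve (the residual callables, the state parameterization, and the use of scipy are placeholders, not the formulation actually used in the application), one might stack the three kinds of residuals over a shared state vector:

```python
import numpy as np
from scipy.optimize import least_squares

def solve_state(x0, imu_residual, point_residuals, line_residuals):
    """Jointly optimize the shared state vector X (robot pose plus 3D feature
    point and line parameters) over the stacked IMU, point reprojection and
    line reprojection residuals.
    """
    def stacked(x):
        res = [np.atleast_1d(imu_residual(x))]
        res += [np.atleast_1d(r(x)) for r in point_residuals]
        res += [np.atleast_1d(r(x)) for r in line_residuals]
        return np.concatenate(res)

    sol = least_squares(stacked, x0)   # minimizes the sum of squared residuals
    return sol.x                       # the optimal X contains the predicted pose
```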
- The robot can thus save the positions in space of the optimized feature points and line features together with their feature descriptions to form a bag of words, which is saved as a map for loop-closure detection and correction; at the same time, the map can also be used for relocalization during the next navigation and positioning.
- It can be seen from the above that the pose of the robot is predicted by combining feature points and line features, and the geometric structure information of the environment in which the robot is located can be obtained through this combination, so that accurate and robust pose prediction can be achieved even in challenging weak-texture and low-visible-light scenes.
- Corresponding to the pose prediction method proposed above, an embodiment of the present application provides a pose prediction apparatus. The pose prediction apparatus is applied to a robot provided with a binocular camera, and the binocular camera includes a first camera and a second camera.
- the pose prediction apparatus 500 in the embodiment of the present application includes:
- the first search unit 501 is configured to search for at least one pair of matching feature points in the first image and the second image, wherein each pair of feature points includes a first feature point and a second feature point.
- the first image is obtained based on the image collected by the first camera at the current moment
- the second image is obtained based on the image collected by the second camera at the current moment
- the first feature point is in the first image, and the second feature point is in the second image;
- the second search unit 502 is configured to search for at least a pair of matching line features in the first image and the second image, wherein each pair of line features includes a first line feature and a second line feature, The first line feature is in the first image, and the second line feature is in the second image;
- the prediction unit 503 is configured to obtain the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot.
- the above-mentioned first search unit 501 includes:
- a feature point extraction subunit for extracting at least one feature point from the first image
- the feature point matching subunit is used for searching the second image respectively for the feature points matching the respective feature points in the first image, so as to obtain at least one pair of feature points.
- the above-mentioned feature point extraction subunit includes:
- a detection subunit configured to detect whether the above-mentioned first image is obtained based on the first frame of image collected after the above-mentioned first camera is activated
- the first feature point extraction subunit is configured to extract feature points based on a preset first feature point extraction method if the first image is obtained based on the first frame image, where the first feature point extraction method is related to corner points;
- the second feature point extraction subunit is configured to extract feature points based on a preset second feature point extraction method if the first image is not obtained based on the first frame image, where the second feature point extraction method is related to optical flow.
- the above-mentioned second search unit 502 includes:
- a line feature extraction subunit, used to perform line feature extraction operations on the first image and the second image respectively to obtain third line features and fourth line features, where the third line features are the line features extracted from the first image and the fourth line features are the line features extracted from the second image;
- the line feature matching subunit is used for matching each third line feature with each fourth line feature respectively;
- the line feature pair obtaining subunit is used to obtain at least one pair of line features based on the matched third line feature and the fourth line feature.
- the above-mentioned line feature extraction subunit includes:
- a line segment extraction subunit configured to extract line segments exceeding a preset length in the above image based on a preset straight line extraction algorithm for any image in the above-mentioned first image and the above-mentioned second image;
- the line segment splicing subunit is used for splicing two adjacent line segments on the same straight line, wherein the above two adjacent line segments refer to: the end point of one line segment is within a preset distance of the start point of another line segment.
- the above-mentioned line feature pair acquisition subunit includes:
- a judgment subunit for each pair of matching third line features and fourth line features, based on the midpoint of the third line feature, the midpoint of the fourth line feature, the slope of the third line feature, and the fourth line feature The slope of the line feature is used to determine whether the third line feature and the fourth line feature are mismatched;
- the determining subunit is configured to determine the third line feature and the fourth line feature as a pair of line features if the third line feature and the fourth line feature are not mismatched.
- the above prediction unit 503 includes:
- the objective function optimization subunit is used to optimize the objective function based on a preset optimization method, where the objective function uses the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot as constraints;
- the predicted pose obtaining subunit is used to obtain the predicted pose of the robot based on the optimized objective function.
- It can be seen from the above that the pose of the robot is predicted by combining feature points and line features, and the geometric structure information of the environment in which the robot is located can be obtained through this combination, so that accurate and robust pose prediction can be achieved even in challenging weak-texture and low-visible-light scenes.
- An embodiment of the present application further provides a robot; please refer to FIG. 6.
- the robot 6 in the embodiment of the present application includes: a memory 601 , one or more processors 602 (only one is shown in FIG. 6 ), a binocular camera 603 and a computer program stored on the memory 601 and executable on the processor.
- the binocular camera 603 includes a first camera and a second camera;
- the memory 601 is used to store software programs and units, and the processor 602 executes various functional applications and data processing by running the software programs and units stored in the memory 601, to obtain the resources corresponding to the above preset events.
- the processor 602 implements the following steps by running the above-mentioned computer program stored in the memory 601:
- finding at least one pair of matching feature points in a first image and a second image, where each pair of feature points includes a first feature point and a second feature point, the first image is obtained based on the image collected by the first camera at the current moment, the second image is obtained based on the image collected by the second camera at the current moment, the first feature point is in the first image, and the second feature point is in the second image;
- finding at least one pair of matching line features in the first image and the second image, where each pair of line features includes a first line feature and a second line feature, the first line feature is in the first image, and the second line feature is in the second image;
- obtaining the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot.
- Assuming the above is a first possible implementation, in a second possible implementation provided on the basis of the first possible implementation, the finding of at least one pair of matching feature points in the first image and the second image includes:
- extracting at least one feature point from the first image;
- searching the second image for the feature points matching the respective feature points in the first image, so as to obtain at least one pair of feature points.
- In a third possible implementation provided on the basis of the second possible implementation, the extracting of at least one feature point from the first image includes:
- detecting whether the first image is obtained based on the first frame of image collected after the first camera is activated;
- if the first image is obtained based on the first frame image, extracting feature points based on a preset first feature point extraction method, where the first feature point extraction method is related to corner points;
- if the first image is not obtained based on the first frame image, extracting feature points based on a preset second feature point extraction method, where the second feature point extraction method is related to optical flow.
- In a fourth possible implementation provided on the basis of the first possible implementation, the finding of at least one pair of matching line features in the first image and the second image includes:
- performing line feature extraction operations on the first image and the second image respectively to obtain third line features and fourth line features, where the third line features are the line features extracted from the first image and the fourth line features are the line features extracted from the second image;
- performing a matching operation between each third line feature and each fourth line feature;
- obtaining at least one pair of line features based on the matched third line features and fourth line features.
- In a fifth possible implementation provided on the basis of the fourth possible implementation, for either of the first image and the second image, the line feature extraction operation includes:
- extracting, based on a preset straight line extraction algorithm, line segments in the image that exceed a preset length;
- splicing two adjacent line segments that lie on the same straight line, where the two adjacent line segments refer to: the end point of one line segment being within a preset distance of the start point of the other line segment.
- In a sixth possible implementation provided on the basis of the fourth possible implementation, the obtaining of at least one pair of line features based on the matched third line features and fourth line features includes:
- for each pair of matched third line feature and fourth line feature, judging whether the third line feature and the fourth line feature are mismatched based on the midpoint of the third line feature, the midpoint of the fourth line feature, the slope of the third line feature, and the slope of the fourth line feature;
- if the third line feature and the fourth line feature are not mismatched, determining the third line feature and the fourth line feature to be a pair of line features.
- In a seventh possible implementation provided on the basis of the first possible implementation, the obtaining of the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot includes:
- optimizing an objective function based on a preset optimization method, where the objective function uses the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot as constraints;
- obtaining the predicted pose of the robot based on the optimized objective function.
- the processor 602 may be a central processing unit (Central Processing Unit, CPU), and the processor may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP) , Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
- a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- Memory 601 may include read-only memory and random access memory, and provides instructions and data to processor 602 . Part or all of memory 601 may also include non-volatile random access memory. For example, the memory 601 may also store information of device categories.
- It can be seen from the above that the pose of the robot is predicted by combining feature points and line features, and the geometric structure information of the environment in which the robot is located can be obtained through this combination, so that accurate and robust pose prediction can be achieved even in challenging weak-texture and low-visible-light scenes.
- the disclosed apparatus and method may be implemented in other manners.
- the system embodiments described above are only illustrative.
- The division of the above-mentioned modules or units is only a logical function division; in actual implementation there may be other division methods, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
- the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
- the units described above as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- If the above-mentioned integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
- Based on this understanding, the present application can implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of each of the foregoing method embodiments can be implemented.
- the above-mentioned computer program includes computer program code
- the above-mentioned computer program code may be in the form of source code, object code form, executable file or some intermediate form.
- The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer-readable memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
Abstract
The present application discloses a pose prediction method, a pose prediction apparatus, a robot, and a computer-readable storage medium. The method is applied to a robot provided with a binocular camera, the binocular camera including a first camera and a second camera, and the method includes: finding at least one pair of matching feature points in a first image collected by the first camera at the current moment and a second image collected by the second camera at the current moment; finding at least one pair of matching line features in the first image and the second image; and obtaining a predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and inertial data output by an inertial measurement unit of the robot. The solution of the present application enables more accurate and robust prediction of the robot's pose.
Description
The present application claims priority to Chinese patent application No. 202110194534.2, filed with the China Patent Office on 21 February 2021, the entire contents of which are incorporated herein by reference.
The present application belongs to the technical field of vision algorithms, and in particular relates to a pose prediction method, a pose prediction apparatus, a robot, and a computer-readable storage medium.
In the process of robot localization and mapping, an accurate pose prediction of the robot needs to be obtained. In the prior art, this often relies on visual sensors, using either a feature-point method based on feature point matching or a direct method based on photometric consistency to construct constraints and predict the robot's pose. However, in some indoor scenes, such as low-texture or dark-light environments, it is difficult to extract effective feature points with the feature-point method, and the usability of the photometrically consistent direct method also drops considerably. That is, in weak-texture and dark-light scenes, the pose of the robot may not be predicted accurately.
本申请提供了一种位姿预测方法、位姿预测装置、机器人及计算机可读存储介质,可实现更加精确及鲁棒的机器人的位姿预测。
第一方面,本申请提供了一种位姿预测方法,上述位姿预测方法应用于设置有双目摄像头的机器人,上述双目摄像头包括第一摄像头及第二摄像头,上述位姿预测方法包括:
在第一图像及第二图像中,查找出相匹配的至少一对特征点,其中,每一对特征点包含一个第一特征点及一个第二特征点,上述第一图像基于上述第一摄像头当前时刻所采集的图像而得,上述第二图像基于上述第二摄像头当前时刻所采集的图像而得,上述第一特征点在上述第一图像内,上述第二特征点在上述第二图像内;
在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征,其中,每一对线特征包含一条第一线特征及一条第二线特征,上述第一线特征在上述第一图像内,上述第二线特征在上述第二图像内;
基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,获得上述机器人的预测位姿。
第二方面,本申请提供了一种位姿预测装置,上述位姿预测装置应用于设置有双目摄像头的机器人,上述双目摄像头包括第一摄像头及第二摄像头,上述位姿预测装置包括:
第一查找单元,用于在第一图像及第二图像中,查找出相匹配的至少一对特征点,其中,每一对特征点包含一个第一特征点及一个第二特征点,上述第一图像基于上述第一摄像头当前时刻所采集的图像而得,上述第二图像基于上述第二摄像头当前时刻所采集的图像而得,上述第一特征点在上述第一图像内,上述第二特征点在上述第二图像内;
第二查找单元,用于在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征,其中,每一对线特征包含一条第一线特征及一条第二线特征,上述第一线特征在上述第一图像内,上述第二线特征在上述第二图像内;
预测单元,用于基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,获得上述机器人的预测位姿。
第三方面,本申请提供了一种机器人,上述机器人包括存储器、处理器、双目摄像头以及存储在上述存储器中并可在上述处理器上运行的计算机程序,上述处理器执行上述计算机程序时实现如上述第一方面的方法的步骤。
第四方面,本申请提供了一种计算机可读存储介质,上述计算机可读存储介质存储有 计算机程序,上述计算机程序被处理器执行时实现如上述第一方面的方法的步骤。
第五方面,本申请提供了一种计算机程序产品,上述计算机程序产品包括计算机程序,上述计算机程序被一个或多个处理器执行时实现如上述第一方面的方法的步骤。
本申请与现有技术相比存在的有益效果是:对于设置有第一摄像头及第二摄像头(也即双目摄像头)的机器人来说,在第一图像及第二图像中,查找出相匹配的至少一对特征点,其中,每一对特征点包含一个第一特征点及一个第二特征点,上述第一图像基于第一摄像头当前时刻所采集的图像而得,上述第二图像基于上述第二摄像头当前时刻所采集的图像而得,上述第一特征点在上述第一图像内,上述第二特征点在上述第二图像内,同时还会在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征,其中,每一对线特征包含一条第一线特征及一条第二线特征,上述第一线特征在上述第一图像内,上述第二线特征在上述第二图像内,最后可基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,获得上述机器人的预测位姿。上述过程结合了特征点和线特征来对机器人的位姿进行预测,可通过特征点和线特征的结合来获得机器人所处环境内的几何结构信息,使得机器人能够在有挑战性的弱纹理和低可见光照场景下也能够实现精确及鲁棒的位姿预测。可以理解的是,上述第二方面至第五方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的位姿预测方法的实现流程示意图;
图2是本申请实施例提供的拼接前的两条线段的示例图;
图3是本申请实施例提供的拼接后所得线段的示例图;
图4是本申请实施例提供的三维的线特征重投影残差的示意图;
图5是本申请实施例提供的位姿预测装置的结构示意图;
图6是本申请实施例提供的机器人的结构示意图。
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
为了说明本申请所提出的技术方案,下面通过具体实施例来进行说明。
下面对本申请实施例提供的一种位姿预测方法进行描述。该位姿预测方法应用于设置有双目摄像头的机器人,为便于说明,记双目摄像头中的其中一个摄像头为第一摄像头,另一个摄像头为第二摄像头。仅作为示例,该第一摄像头可以是左目摄像头,该第二摄像头可以是右目摄像头。请参阅图1,该位姿预测方法包括:
步骤与101,在第一图像及第二图像中,查找出相匹配的至少一对特征点。
在本申请实施例中,第一摄像头及第二摄像头所采集的原始图像会先被机器人转换为灰度图像,并对该灰度图像进行一系列预处理,以提升后续的数据处理效率。举例来说,预处理操作可以是高斯模糊操作,以减少灰度图像的噪声,使得灰度图像更加平滑,降低灰度图像的细节层次。预处理后的灰度图像即可投入本申请实施例的各个步骤中进行使用。也即,第一图像指的是当前时刻基于该第一摄像头所采集的原始图像进行预处理操作后所得的灰度图像,而第二图像则指的是当前时刻基于该第二摄像头所采集的原始图像进行预 处理操作后所得的灰度图像。
对于一对特征点,也即一特征点对来说,该特征点对由第一特征点及第二特征点这两个特征点所构成,其中,第一特征点在第一图像内,第二特征点在第二图像内。也即,第一图像中的一个特征点,和第二图像中的与该特征点相匹配的特征点构成了一对特征点。
在一些实施例中,机器人可先从第一图像中提取出至少一个特征点,然后再在第二图像中通过光流匹配的方式查找出与第一图像的每个特征点相匹配的特征点,以获得至少一对特征点。需要注意的是,第二图像中的一个特征点至多仅能与第一图像中的一个特征点相匹配,同样,第一图像中的一个特征点也至多仅能与第二图像中的一个特征点相匹配;也即,不同的特征点对中,不会有特征点发生重复。为实现特征点的快速提取,机器人将首先判断当前所获得的第一图像是否为初始的第一图像,也即,该第一图像是否基于第一摄像头启动后所采集到的第一帧图像而得;若该第一图像是初始的第一图像,则可基于预设的第一特征点提取方式提取该第一图像的特征点,其中,该第一特征点提取方式与角点相关;若第一图像不是初始的第一图像,则可基于预设的第二特征点提取方式提取该第一图像的特征点,其中,该第二特征点提取方式与光流相关。下面对该第一特征点提取方式及第二特征点提取方式进行简单介绍:
第一特征点提取方式指的是:对第一图像进行FAST(Features from Accelerated Segment Test)角点检测,将提取到的角点作为特征点。考虑到通常情况下,所提取到的角点的数量较多,甚至可能达到上千个,因而,此处设定有一特征点数量阈值N,也即,可从提取到的角点中基于各角点的响应值而选定N个角点作为特征点。考虑到特征点太少可能会导致所预测的位姿不准确,特征点太多可能会增加机器人的系统运算量,因而可设定N为[70,100]这一区间中的任一整数。
第二特征点提取方式指的是:对第一图像的前一帧图像所获得的特征点进行光流跟踪,获得该第一图像的M个特征点。需要注意的是,这里所说的第一图像的前一帧图像,指的是基于第一摄像头在前一时刻所获得的原始图像进行预处理而得的第一图像,也即前一帧第一图像。举例来说,i时刻基于第一摄像头获得了第一图像A1,i+1时刻基于第一摄像头获得了第一图像A2,那么即可认为第二图像A1就是第一图像A2的前一帧图像。考虑到机器人处于移动状态时,第一图像A1的N个特征点难以都在第二图像A2中出现,因而,在对第一图像A1的N个特征点进行光流跟踪后所得到的第二图像A2中的特征点的数量为M个,且M≤N。
在一些实施例中,可能出现M远小于N的情况下,例如
此时认为机器人可能发生了较大幅度的移动,可判断当前所获得的第一图像为关键帧。在这种情况下,机器人将再次以对该第一图像进行FAST角点检测的方式,为该第一图像补充新的特征点,使得该第一图像的特征点达到N个。举例来说,假定i时刻为初始时刻,N为100,则机器人将从第一图像A1中提取出100个角点作为其特征点;又假定通过对第一图像A1中的100个特征点的光流跟踪,在第一图像A2中仅获得了40个特征点,由于40<50,因而,判断第一图像A2为关键帧;机器人会对第一图像A2也进行角点提取,补入60个角点,与之前通过光流跟踪所获得的40个特征点一起,共同作为第一图像A2最终的特征点。
步骤102,在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征。
在本申请实施例中,机器人还会在第一图像及第二图像中,查找出相匹配的至少一对线特征,其中,每一对线特征包含一条第一线特征及一条第二线特征,该第一线特征在第一图像内,该第二线特征在第二图像内。也即,第一图像中的一条线特征,和第二图像中的与该线特征相匹配的线特征构成了一对线特征。示例性地,机器人可先分别在第一图像及第二图像进行线特征提取操作,获得第一图像中所提取到的至少一条线特征(记作第三线特征),以及第二图像中所提取到的至少一条线特征(记作第四线特征);然后,机器 人可将每条第三线特征分别与各条第四线特征行匹配操作,并基于相匹配的第三线特征及第四线特征来获得至少一对线特征。与步骤101类似,不同的线特征对中,不会有线特征发生重复。
针对任一图像,该线特征提取操作可通过如下方式实现:先基于预设的直线提取算法,提取出图像中的线段,其中,该直线提取算法可以是LSD(Line Segment Detector)算法,也可以是其它算法,此处不作限定。为了避免线特征的误提取,提升后续的线特征处理效率,此处仅考虑长于预设长度的线段,例如,可仅考虑像素长度超过30个像素的线段。又考虑到在图像噪声或局部模糊的影响下,一条长线段可能在提取过程中被提取为了多条短线段,因而,机器人还会先检测是否有相邻的两条线段处于同一直线,并将处于同一直线的相邻的两条线段进行拼接,以保障线段跟踪的连续性。其中,相邻的两条线段指的是:一条线段的终点在另一条线段的起点的预设距离内。需要注意的是,线段的终点和起点是由直线提取算法而判断的,此处不作赘述。在上述过程完成后,所获得的图像中的线段即为该图像的线特征。
示例性地,机器人可通过如下方式判断相邻的两条线段是否处于同一直线:将相邻的两条线段分别记作第一线段及第二线段,并分别计算第一线段的法向量及第二线段的法向量;若第一线段的法向量与第二线段的法向量的误差在预设误差范围内,则确定该第一线段及该第二线段处于同一直线。请参阅图2及图3,图2给出了处于同一直线的相邻的两条线段的示例,图3给出了对图2所示出的处于同一直线的相邻的两条线段进行拼接后所获得的新的线段的示例。可见,处于同一直线的相邻的两条线段(线段1及线段2),被拼接为了新的线段3,该新的线段3即可作为图像的一条线特征。
在一些实施例中,机器人可通过LBD(Line Band Discriptor)描述子来描述所提取出的线特征,并基于LBD描述子来对两条线特征是否相匹配进行判断。
在一些实施例中,机器人在对两条线特征是否相匹配进行判断时,可能出现误匹配的情况。本申请实施例可通过如下方式来剔除可能出现误匹配的两条线特征:分别获取待检测的两条线特征的中点及斜率,以此来判断这两条线特征是否误匹配。
则针对每一对相匹配的第三线特征及第四线特征来说,可基于该第三线特征的中点、该第四线特征的中点、该第三线特征的斜率及该第四线特征的斜率,来判断该第三线特征与该第四线特征是否为误匹配,具体为:获取该第三线特征的中点在第一图像中的坐标,记作第一坐标;获取该第四线特征的中点在第二图像中的坐标,记作第二坐标;检测该第一坐标与该第二坐标的距离是否在预设的误差距离内;基于该第三线特征的斜率计算该第三线特征相对于图像坐标系的x轴的倾斜角度,记作第一角度;基于该第四线特征的斜率计算该第四线特征相对于图像坐标系的x轴的倾斜角度,记作第二角度;检测该第一角度与该第二角度的角度差值是否在预设的误差角度内;只有该第一坐标与该第二坐标的距离在预设的误差距离内,并且,该第一角度与该第二角度的角度差值在预设的误差角度内时,才认为该第三线特征及该第四线特征确实是相匹配的,可确定为一对线特征。
类似地,还可以通过上述方式来对第一图像中所提取出的线特征进行一次筛除,具体为:基于光流跟踪的方式,确定第一图像的前一帧图像所获得的线特征与该第一图像所获得的线特征的匹配关系。为便于说明,在一个匹配关系中,将第一图像的前一帧图像所获得的线特征记作第五线特征,将该第一图像所获得的线特征记作第六线特征;由于机器人通常不会在短时间内出现大幅度的移动,因而,针对每一对相匹配的第五线特征及第六线特征来说,可基于该第五线特征的中点、该第六线特征的中点、该第五线特征的斜率及该第六线特征的斜率,来判断该第五线特征与该第六线特征是否为误匹配,具体为:获取该第五线特征的中点在该第一图像的前一帧图像中的坐标,记作第三坐标;获取该第六线特征的中点在该第一图像中的坐标,记作第四坐标;检测该第三坐标与该第四坐标的距离是否在预设的误差距离内;基于第五线特征的斜率计算第五线特征相对于图像坐标系的x轴的倾斜角度,记作第三角度;基于第六线特征的斜率计算第六线特征相对于图像坐标系的 x轴的倾斜角度,记作第四角度;检测该第三角度与该第四角度的角度差值是否在预设的误差角度内;只有该第三坐标与该第四坐标的距离在预设的误差距离内,并且,该第三角度与该第四角度的角度差值在预设的误差角度内时,才确定该第五线特征及该第六线特征确实是相匹配的,此时,可保留该第六线特征;反之,可确定该第五线特征及该第六线特征不相匹配,则可将该第六线特征从该第一图像中剔除,以实现对第一图像中所提取到的线特征的筛选。
在一些实施例中,考虑到不是所有第一图像中的线特征均可在第二图像中找到相匹配的线特征,也即,可能有一些第一图像中的线特征无法在第二图像中找到相匹配的线特征,因而,对于第一图像中的线特征来说,可被划分为两类:第一类是能够在第二图像有所匹配的线特征,第二类是不能在第二图像中有所匹配的线特征。对于第一类,可直接基于第二图像来获取与自身相匹配的线特征,以获得线特征对。对于第二类,可基于第一摄像头所获得的该第一图像的前一帧图像来获取与自身相匹配的线特征,以获得线特征对。通过上述方式,可使得第一图像筛选后所保留的各个线特征均能够形成对应的一对线特征。为便于说明,本申请实施例可将第一类的线特征记作双目线特征,将第二类的线特征记作非双目线特征,则在存在非双目线特征的情况下,机器人可在第一图像及第一图像的前一帧图像中,基于非双目线特征查找出相匹配的至少一对线特征,其中,以这种方式所形成的每一对线特征包含一条非双目第一线特征及一条非双目第二线特征,该非双目第一线特征在该第一图像内,该非双目第二线特征在该第一图像的前一帧图像内。这样一来,机器人最终所获得的线特征对不仅可能包括基于双目线特征、第一图像及第二图像所获得的至少一对线特征,还可能包括基于非双目线特征、第一图像及第一图像的前一帧图像所获得的至少一对线特征。
步骤103,基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,获得上述机器人的预测位姿。
在本申请实施例中,机器人预先设定有一目标函数,该目标函数可基于步骤101所确定的至少一对特征点,步骤102所确定的至少一对线特征及机器人的惯性测量单元(Inertial Measurement Unit,IMU)所输出的惯性数据来进行约束,该目标函数具体为:
其中,B表示IMU所输出的惯性数据的集合;C表示由步骤101所获得的至少一对特征点构成的集合;L表示由步骤102所获得的至少一对线特征构成的集合;X表示机器人的系统状态估计值,其包含了机器人的位姿,以及特征点和线特征在空间中的位姿;z表示机器人的系统状态观测值;r表示系统状态观测值与系统状态估计值的差值,即系统状态差值。
需要注意的是,集合L可能包含有基于非双目线特征、第一图像及第一图像的前一帧图像所获得的线特征对。举例来说,机器人在第一图像中提取出了10条线特征;这10条线特征中,有1条线特征因无法在前一帧图像中找到相匹配的线特征而被剔除,仅剩余9条线特征;也即,这9条线特征均可在前一帧图像中找到相匹配的线特征,记作线特征1、2、3、4、5、6、7、8及9;通过与第二图像中所提取出的线特征进行匹配,发现线特征1、3及4无法与第二图像中的任一线特征匹配上,则线特征1、3及4将分别与前一帧图像中相匹配的线特征一起,构成3对线特征;线特征2、5、6、7、8及9将分别与第二图像中相匹配的线特征一起,构成6对线特征;至此,完成9对线特征的构建,这9对线特征即可构成集合L。
由上式可知,r具体包括三部分:
第一部分为IMU的积分值与真实值的残差。
第二部分为三维的特征点重投影回二维的图像坐标系后所得的坐标(也即像素位置)与真实观测到的该特征点在第一图像中的坐标的残差。其中,三维的特征点由一对特征点进行双目三角化而获得。
第三部分与第二部分的原理相似,为三维的线特征重投影残差。请参阅图4,该三维的线特征重投影残差指的是:三维的线特征由世界坐标系投影到第一摄像头及第二摄像头的归一化平面后所得的直线,与该归一化平面中原来的第一图像的线特征的起点和终点的垂直距离。其中,三维的线特征由一对线特征进行三角化而获得。
将这3个残差加在一起,构成上述目标函数。通过最小二乘方法紧耦合优化包含机器人的位姿的系统状态估计值X,使得整个目标函数的值最小,此时的X即为最优解,该最优解的X包含有当前机器人的位姿预测值。
在一些实施例中,考虑到三维的特征点及线特征也是目标函数的优化变量,二者在对目标函数进行迭代求解时一同被优化;也即,机器人的位姿、三维的特征点及三维的线特征在一同被迭代调整,使得目标函数最小。这样一来,最优解的X除了包含有当前机器人的位姿预测值之外,还包含有优化后的三维的特征点及线特征。机器人由此可将优化后的特征点和线特征在空间中的位置及其特征描述进行保存,构成一个词袋,保存为地图,用于回环检测及校正;同时,该地图也可以用于下一次导航定位时,做重定位使用。
由上可见,通过本申请实施例,结合了特征点和线特征来对机器人的位姿进行预测,可通过特征点和线特征的结合来获得机器人所处环境内的几何结构信息,使得机器人能够在有挑战性的弱纹理和低可见光照场景下也能够实现精确及鲁棒的位姿预测。
对应于前文所提出的位姿预测方法,本申请实施例提供了一种位姿预测装置,上述位姿预测装置应用于设置有双目摄像头的机器人,上述双目摄像头包括第一摄像头及第二摄像头。请参阅图5,本申请实施例中的位姿预测装置500包括:
第一查找单元501,用于在第一图像及第二图像中,查找出相匹配的至少一对特征点,其中,每一对特征点包含一个第一特征点及一个第二特征点,上述第一图像基于上述第一摄像头当前时刻所采集的图像而得,上述第二图像基于上述第二摄像头当前时刻所采集的图像而得,上述第一特征点在上述第一图像内,上述第二特征点在上述第二图像内;
第二查找单元502,用于在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征,其中,每一对线特征包含一条第一线特征及一条第二线特征,上述第一线特征在上述第一图像内,上述第二线特征在上述第二图像内;
预测单元503,用于基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,获得上述机器人的预测位姿。
可选地,上述第一查找单元501,包括:
特征点提取子单元,用于从第一图像中提取出至少一个特征点;
特征点匹配子单元,用于在第二图像中,分别查找与上述第一图像中的各个特征点相匹配的特征点,以获得至少一对特征点。
可选地,上述特征点提取子单元,包括:
检测子单元,用于检测上述第一图像是否基于上述第一摄像头启动后所采集到的第一帧图像而得;
第一特征点提取子单元,用于若上述第一图像基于上述第一帧图像而得,则基于预设的第一特征点提取方式提取特征点,其中,上述第一特征点提取方式与角点相关;
第二特征点提取子单元,用于若上述第一图像不基于上述第一帧图像而得,则基于预设的第二特征点提取方式提取特征点,其中,上述第二特征点提取方式与光流相关。
可选地,上述第二查找单元502,包括:
线特征提取子单元,用于分别在上述第一图像及上述第二图像进行线特征提取操作,获得第三线特征以及第四线特征,其中,上述第三线特征为上述第一图像中所提取到的线特征,上述第四线特征为上述第二图像中所提取到的线特征;
线特征匹配子单元,用于将每条第三线特征分别与各条第四线特征行匹配操作;
线特征对获取子单元,用于基于相匹配的第三线特征及第四线特征,获得至少一对线特征。
可选地,上述线特征提取子单元,包括:
线段提取子单元,用于针对上述第一图像及上述第二图像中的任一图像,基于预设的直线提取算法,提取上述图像中超过预设长度的线段;
线段拼接子单元,用于将处于同一直线的相邻的两条线段进行拼接,其中,上述相邻的两条线段指的是:一条线段的终点在另一条线段的起点的预设距离内。
可选地,上述线特征对获取子单元,包括:
判断子单元,用于针对每一对相匹配的第三线特征及第四线特征,基于上述第三线特征的中点、上述第四线特征的中点、上述第三线特征的斜率及上述第四线特征的斜率,判断上述第三线特征与上述第四线特征是否为误匹配;
确定子单元,用于若上述第三线特征及上述第四线特征不为误匹配,则将上述第三线特征及上述第四线特征确定为一对线特征。
可选地,上述预测单元503,包括:
目标函数优化子单元,用于基于预设的优化方法对目标函数进行优化,其中,上述目标函数采用上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据作为约束条件;
预测位姿获取子单元,用于基于优化后的目标函数,获得上述机器人的预测位姿。
由上可见,通过本申请实施例,结合了特征点和线特征来对机器人的位姿进行预测,可通过特征点和线特征的结合来获得机器人所处环境内的几何结构信息,使得机器人能够在有挑战性的弱纹理和低可见光照场景下也能够实现精确及鲁棒的位姿预测。
本申请实施例还提供了一种机器人,请参阅图6,本申请实施例中的机器人6包括:存储器601,一个或多个处理器602(图6中仅示出一个)、双目摄像头603及存储在存储器601上并可在处理器上运行的计算机程序。其中,双目摄像头603包括第一摄像头及第二摄像头;存储器601用于存储软件程序以及单元,处理器602通过运行存储在存储器601的软件程序以及单元,从而执行各种功能应用以及数据处理,以获取上述预设事件对应的资源。具体地,处理器602通过运行存储在存储器601的上述计算机程序时实现以下步骤:
在第一图像及第二图像中,查找出相匹配的至少一对特征点,其中,每一对特征点包含一个第一特征点及一个第二特征点,上述第一图像基于上述第一摄像头当前时刻所采集的图像而得,上述第二图像基于上述第二摄像头当前时刻所采集的图像而得,上述第一特征点在上述第一图像内,上述第二特征点在上述第二图像内;
在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征,其中,每一对线特征包含一条第一线特征及一条第二线特征,上述第一线特征在上述第一图像内,上述第二线特征在上述第二图像内;
基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,获得上述机器人的预测位姿。
假设上述为第一种可能的实施方式,则在第一种可能的实施方式作为基础而提供的第二种可能的实施方式中,上述在第一图像及第二图像中,查找出相匹配的至少一对特征点,包括:
从第一图像中提取出至少一个特征点;
在第二图像中,分别查找与上述第一图像中的各个特征点相匹配的特征点,以获得至少一对特征点。
在上述第二种可能的实施方式作为基础而提供的第三种可能的实施方式中,上述从第一图像中提取出至少一个特征点,包括:
检测上述第一图像是否基于上述第一摄像头启动后所采集到的第一帧图像而得;
若上述第一图像基于上述第一帧图像而得,则基于预设的第一特征点提取方式提取特征点,其中,上述第一特征点提取方式与角点相关;
若上述第一图像不基于上述第一帧图像而得,则基于预设的第二特征点提取方式提取特征点,其中,上述第二特征点提取方式与光流相关。
在上述第一种可能的实施方式作为基础而提供的第四种可能的实施方式中,上述在上述第一图像及上述第二图像中,查找出相匹配的至少一对线特征,包括:
分别在上述第一图像及上述第二图像进行线特征提取操作,获得第三线特征以及第四线特征,其中,上述第三线特征为上述第一图像中所提取到的线特征,上述第四线特征为上述第二图像中所提取到的线特征;
将每条第三线特征分别与各条第四线特征行匹配操作;
基于相匹配的第三线特征及第四线特征,获得至少一对线特征。
在上述第四种可能的实施方式作为基础而提供的第五种可能的实施方式中,针对上述第一图像及上述第二图像中的任一图像,上述线特征提取操作包括:
基于预设的直线提取算法,提取上述图像中超过预设长度的线段;
将处于同一直线的相邻的两条线段进行拼接,其中,上述相邻的两条线段指的是:一条线段的终点在另一条线段的起点的预设距离内。
在上述第四种可能的实施方式作为基础而提供的第六种可能的实施方式中,上述基于相匹配的第三线特征及第四线特征,获得至少一对线特征,包括:
针对每一对相匹配的第三线特征及第四线特征,基于上述第三线特征的中点、上述第四线特征的中点、上述第三线特征的斜率及上述第四线特征的斜率,判断上述第三线特征与上述第四线特征是否为误匹配;
若上述第三线特征及上述第四线特征不为误匹配,则将上述第三线特征及上述第四线特征确定为一对线特征。
在上述第一种可能的实施方式作为基础而提供的第七种可能的实施方式中,上述基于上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据,预测得到上述机器人的位姿,包括:
基于预设的优化方法对目标函数进行优化,其中,上述目标函数采用上述至少一对线特征、上述至少一对特征点及上述机器人的惯性测量单元所输出的惯性数据作为约束条件;
基于优化后的目标函数,获得上述机器人的预测位姿。
应当理解,在本申请实施例中,所称处理器602可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
存储器601可以包括只读存储器和随机存取存储器,并向处理器602提供指令和数据。存储器601的一部分或全部还可以包括非易失性随机存取存储器。例如,存储器601还可以存储设备类别的信息。
由上可见,通过本申请实施例,结合了特征点和线特征来对机器人的位姿进行预测,可通过特征点和线特征的结合来获得机器人所处环境内的几何结构信息,使得机器人能够在有挑战性的弱纹理和低可见光照场景下也能够实现精确及鲁棒的位姿预测。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将上述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成 的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者外部设备软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的系统实施例仅仅是示意性的,例如,上述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关联的硬件来完成,上述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,上述计算机程序包括计算机程序代码,上述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。上述计算机可读存储介质可以包括:能够携带上述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机可读存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,上述计算机可读存储介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读存储介质不包括是电载波信号和电信信号。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。
Claims (10)
- 1. A pose prediction method, characterized in that the pose prediction method is applied to a robot provided with a binocular camera, the binocular camera includes a first camera and a second camera, and the pose prediction method includes: finding, in a first image and a second image, at least one pair of matching feature points, where each pair of feature points includes a first feature point and a second feature point, the first image is obtained based on the image collected by the first camera at the current moment, the second image is obtained based on the image collected by the second camera at the current moment, the first feature point is in the first image, and the second feature point is in the second image; finding, in the first image and the second image, at least one pair of matching line features, where each pair of line features includes a first line feature and a second line feature, the first line feature is in the first image, and the second line feature is in the second image; and obtaining a predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and inertial data output by an inertial measurement unit of the robot.
- 2. The pose prediction method according to claim 1, characterized in that the finding, in the first image and the second image, of at least one pair of matching feature points includes: extracting at least one feature point from the first image; and searching the second image for the feature points matching the respective feature points in the first image, so as to obtain at least one pair of feature points.
- 3. The pose prediction method according to claim 2, characterized in that the extracting of at least one feature point from the first image includes: detecting whether the first image is obtained based on the first frame of image collected after the first camera is activated; if the first image is obtained based on the first frame of image, extracting feature points based on a preset first feature point extraction method, where the first feature point extraction method is related to corner points; and if the first image is not obtained based on the first frame of image, extracting feature points based on a preset second feature point extraction method, where the second feature point extraction method is related to optical flow.
- 4. The pose prediction method according to claim 1, characterized in that the finding, in the first image and the second image, of at least one pair of matching line features includes: performing line feature extraction operations on the first image and the second image respectively to obtain third line features and fourth line features, where the third line features are the line features extracted from the first image and the fourth line features are the line features extracted from the second image; performing a matching operation between each third line feature and each fourth line feature; and obtaining at least one pair of line features based on the matched third line features and fourth line features.
- 5. The pose prediction method according to claim 4, characterized in that, for either of the first image and the second image, the line feature extraction operation includes: extracting, based on a preset straight line extraction algorithm, line segments in the image that exceed a preset length; and splicing two adjacent line segments that lie on the same straight line, where the two adjacent line segments refer to: the end point of one line segment being within a preset distance of the start point of the other line segment.
- 6. The pose prediction method according to claim 4, characterized in that the obtaining of at least one pair of line features based on the matched third line features and fourth line features includes: for each pair of matched third line feature and fourth line feature, judging whether the third line feature and the fourth line feature are mismatched based on the midpoint of the third line feature, the midpoint of the fourth line feature, the slope of the third line feature, and the slope of the fourth line feature; and if the third line feature and the fourth line feature are not mismatched, determining the third line feature and the fourth line feature to be a pair of line features.
- 7. The pose prediction method according to claim 1, characterized in that the obtaining of the predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot includes: optimizing an objective function based on a preset optimization method, where the objective function uses the at least one pair of line features, the at least one pair of feature points, and the inertial data output by the inertial measurement unit of the robot as constraints; and obtaining the predicted pose of the robot based on the optimized objective function.
- 8. A pose prediction apparatus, characterized in that the pose prediction apparatus is applied to a robot provided with a binocular camera, the binocular camera includes a first camera and a second camera, and the pose prediction apparatus includes: a first search unit, used to find, in a first image and a second image, at least one pair of matching feature points, where each pair of feature points includes a first feature point and a second feature point, the first image is obtained based on the image collected by the first camera at the current moment, the second image is obtained based on the image collected by the second camera at the current moment, the first feature point is in the first image, and the second feature point is in the second image; a second search unit, used to find, in the first image and the second image, at least one pair of matching line features, where each pair of line features includes a first line feature and a second line feature, the first line feature is in the first image, and the second line feature is in the second image; and a prediction unit, used to obtain a predicted pose of the robot based on the at least one pair of line features, the at least one pair of feature points, and inertial data output by an inertial measurement unit of the robot.
- 9. A robot, including a memory, a processor, a binocular camera, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.
- 10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110194534.2A (CN112950709B) | 2021-02-21 | 2021-02-21 | Pose prediction method, pose prediction apparatus and robot
CN202110194534.2 | | |
Publications (1)
Publication Number | Publication Date
---|---
WO2022174603A1 (zh) | 2022-08-25
Family
ID=76244975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/124611 WO2022174603A1 (zh) | 2021-02-21 | 2021-10-19 | 一种位姿预测方法、位姿预测装置及机器人 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112950709B (zh) |
WO (1) | WO2022174603A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112950709B (zh) * | 2021-02-21 | 2023-10-24 | 深圳市优必选科技股份有限公司 | Pose prediction method, pose prediction apparatus and robot
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109558879A (zh) * | 2017-09-22 | 2019-04-02 | 华为技术有限公司 | Visual SLAM method and apparatus based on point and line features
US20190204084A1 (en) * | 2017-09-29 | 2019-07-04 | Goertek Inc. | Binocular vision localization method, device and system
CN110763251A (zh) * | 2019-10-18 | 2020-02-07 | 华东交通大学 | Method and system for optimizing visual-inertial odometry
CN112115980A (zh) * | 2020-08-25 | 2020-12-22 | 西北工业大学 | Binocular visual odometry design method based on optical flow tracking and point-line feature matching
CN112950709A (zh) * | 2021-02-21 | 2021-06-11 | 深圳市优必选科技股份有限公司 | Pose prediction method, pose prediction apparatus and robot
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109166149B (zh) * | 2018-08-13 | 2021-04-02 | 武汉大学 | Localization and 3D wireframe structure reconstruction method and system fusing a binocular camera and an IMU
CN109579840A (zh) * | 2018-10-25 | 2019-04-05 | 中国科学院上海微系统与信息技术研究所 | Tightly coupled binocular visual-inertial SLAM method fusing point and line features
CN110060277A (zh) * | 2019-04-30 | 2019-07-26 | 哈尔滨理工大学 | Visual SLAM method with multi-feature fusion
CN111160298B (zh) * | 2019-12-31 | 2023-12-01 | 深圳市优必选科技股份有限公司 | Robot and pose estimation method and apparatus thereof
- 2021-02-21 CN CN202110194534.2A patent/CN112950709B/zh active Active
- 2021-10-19 WO PCT/CN2021/124611 patent/WO2022174603A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112950709A (zh) | 2021-06-11 |
CN112950709B (zh) | 2023-10-24 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21926317; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21926317; Country of ref document: EP; Kind code of ref document: A1