CN112734852B  Robot mapping method and device and computing equipment  Google Patents
 Publication number
CN112734852B (application CN202110348706.7A)
 Authority
 CN
 China
 Prior art keywords
 pose
 preset
 robot
 image
 frame
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Active
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING OR COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/70—Determining position or orientation of objects or cameras
 G06T7/73—Determining position or orientation of objects or cameras using feature-based methods

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
 G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
 G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
 G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
 G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
 G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
Abstract
The application discloses a robot mapping method and device and a computing device. The robot can perform bundle optimization on continuous multi-frame image data, inertial measurement unit (IMU) data and wheel speed odometer data collected within a preset time period, and obtain an optimized pose, and a variance value of the optimized pose, of a preset label in the camera coordinate system of the robot in the current frame image; perform extended Kalman filter (EKF) state recursion on the IMU data and wheel speed odometer data corresponding to the current frame image, and determine the pose of the robot and a variance value of the pose; and then perform an EKF measurement update on the optimized pose, the variance value of the optimized pose, the pose of the robot and the variance value of the pose of the robot, and acquire the target pose, and the variance value of the target pose, of the preset label in the world coordinate system in the continuous multi-frame images collected within the preset time period. The robot mapping method performs mapping through the tightly coupled fusion of bundle optimization and the EKF, and can improve mapping precision.
Description
Technical Field
The application relates to the technical field of computer vision, in particular to a robot mapping method and device and a computing device.
Background
With the demand for greater manufacturing flexibility, shortened product cycles, rising labor costs, and higher safety requirements, mobile robots are developing rapidly to meet ever-increasing needs. Today, automated guided vehicles (AGVs) are widely used in automated warehouses, factory material transport systems, logistics distribution systems, flexible assembly systems, and other intelligent transportation sites. The core technology of AGV navigation is simultaneous localization and mapping (SLAM): through SLAM an environment map and the real-time position of the vehicle can be obtained, so that the vehicle can move to its destination efficiently and intelligently without human intervention.
Laser SLAM and visual SLAM are the two approaches mainly used at present. The camera used for visual SLAM costs much less than a laser sensor, and, compared with two-dimensional laser SLAM, visual SLAM can obtain the 6-degree-of-freedom pose of the target object to be positioned. As a result, the relatively inexpensive visual SLAM has developed rapidly.
However, visual SLAM alone suffers from accumulated error, and the precision of visual SLAM based on visual feature points can degrade greatly. To improve the robustness of robot positioning, two-dimensional code labels can be added for auxiliary positioning when the robot is deployed. Related techniques require the global pose and variance of each two-dimensional code label to be obtained in advance, but these cannot be obtained accurately through surveying and mapping alone. In addition, when the observation of a single two-dimensional code label yields unstable pose estimates, for example under image blur, pose ambiguity or noise, a traditional loosely coupled algorithm cannot accurately calculate the pose of the two-dimensional code. Moreover, in multi-sensor fusion, the precision of the extrinsic parameters and the time offsets of the sensors greatly influence the SLAM result.
Disclosure of Invention
The application provides a robot mapping method, a robot mapping device and computing equipment, which are used for improving reliability and accuracy of visual mapping.
In a first aspect, embodiments of the present application provide a robot mapping method, which may be performed by a robot, or by a server in communication with the robot (the robot collects data and transmits the data to the server); the present application is not limited in this respect. Taking the execution subject as a robot as an example, the robot may perform bundle optimization on continuous multi-frame image data, inertial measurement unit (IMU) data and wheel speed odometer data acquired within a preset time period to obtain an optimized pose, and a variance value of the optimized pose, of a preset label in the camera coordinate system of the robot in the current frame image; perform extended Kalman filter (EKF) state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determine the pose of the robot and a variance value of the pose; and perform an EKF measurement update on the optimized pose, the variance value of the optimized pose, the pose of the robot and the variance value of the pose of the robot, and acquire the target pose, and the variance value of the target pose, of the preset label in the world coordinate system in the continuous multi-frame images collected within the preset time period.
In the embodiment of the application, when the robot builds the map, it refers to IMU data and wheel speed odometer data while using continuous multi-frame images, which ensures the accuracy of the preset label pose during mapping. In addition, bundle optimization and EKF state prediction are fused during mapping, so that the mapping accuracy of the robot is higher, and the target pose of the preset label in the world coordinate system and the variance value of the target pose can be calculated more quickly.
In one possible implementation manner, the robot performs online optimization of the extrinsic parameters and time offsets between the sensors during the EKF measurement update, and the optimized extrinsic parameters and time offsets are used for bundle optimization; the extrinsic parameters comprise: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
In the embodiment of the application, the extrinsic parameters and time offsets are included in the bundle optimization, so that the optimized pose of the preset label in the camera coordinate system of the robot, acquired from the current frame image, is more reliable, and the positioning accuracy of robot mapping is higher.
In a possible implementation manner, the robot can acquire continuous N frames of images collected within a preset time period through its camera, and screen the N frames of images to determine M frames of images that include preset labels, where N and M are integers and 2 ≤ M ≤ N. The robot performs bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of those corner points in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, and determines the optimized pose of the preset labels in the camera coordinate system, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix.
In the embodiment of the application, the robot performs bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of those corner points in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, so that the optimized pose of the preset labels in the camera coordinate system, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix can be determined. This improves the mapping accuracy of the robot.
In one possible implementation, the continuous multi-frame images in the sliding window for bundle optimization are determined as follows:
if the number of image frames in the sliding window is less than or equal to the sliding window threshold, bundle optimization is performed on all the image frames; if the number of image frames in the sliding window is larger than the sliding window threshold, bundle optimization is performed on all the image frames and then a preset frame image is screened out, and the constraints of the rejected image on the sliding window are retained through the Schur complement method.
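The Schur complement step above can be sketched in a few lines: to drop a frame's states from the normal equations H x = g without losing the information they carry, the dropped block is folded into the kept block. A minimal numpy sketch (the variable partition and dimensions are illustrative, not the patent's actual state layout):

```python
import numpy as np

def marginalize(H, g, keep, drop):
    """Marginalize the 'drop' variables of the normal equations H x = g
    via the Schur complement, retaining their constraint on 'keep'."""
    Hkk = H[np.ix_(keep, keep)]
    Hkd = H[np.ix_(keep, drop)]
    Hdk = H[np.ix_(drop, keep)]
    Hdd = H[np.ix_(drop, drop)]
    Hdd_inv = np.linalg.inv(Hdd)
    # Reduced system: (Hkk - Hkd Hdd^-1 Hdk) x_k = g_k - Hkd Hdd^-1 g_d
    H_marg = Hkk - Hkd @ Hdd_inv @ Hdk
    g_marg = g[keep] - Hkd @ Hdd_inv @ g[drop]
    return H_marg, g_marg
```

Solving the reduced system yields exactly the same estimate for the kept variables as solving the full system, which is why the rejected frame's constraint is preserved.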
In the embodiment of the application, the image frames are screened in the sliding window, so that the positioning accuracy of the robot mapping is higher.
In one possible implementation, the screening of the preset frame images for bundle optimization is determined by one or more of the following ways:
acquiring an identification list of all preset labels in a current frame image;
rejecting images in the multi-frame images that do not include a preset label; if the identifier of the preset label detected in the Nth frame image differs from the identifier of the preset label in the Mth frame image, rejecting the Nth frame image; if, within the preset time period, the identifier of the preset label in every frame image is the same as that in the Mth frame image, scoring the images other than the Mth frame image by information degree and eliminating the frame image with the lowest score; wherein M and N are integers, and M is the index of the last frame image acquired within the preset time period.
In the embodiment of the application, bundle optimization is carried out on the screened preset frame images, so that the optimized pose of the preset label is more reliable.
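The screening rules above can be sketched as a single selection routine. This is an illustrative reconstruction: the window layout and the information-degree scoring function `score_fn` are assumptions, since the text does not define the score.

```python
def select_frame_to_drop(window, score_fn):
    """Pick one frame to remove from the sliding window following the
    screening rules. 'window' is a list of dicts with 'tag_ids' (the set
    of detected preset-label IDs); the last entry is the current frame M.
    'score_fn' rates a frame's informativeness (hypothetical heuristic)."""
    current = window[-1]
    # Rule 1: drop a frame that observed no preset labels.
    for i, f in enumerate(window[:-1]):
        if not f["tag_ids"]:
            return i
    # Rule 2: drop a frame whose label IDs differ from the current frame's.
    for i, f in enumerate(window[:-1]):
        if f["tag_ids"] != current["tag_ids"]:
            return i
    # Rule 3: all frames see the same labels, so drop the least
    # informative frame other than the current one.
    scores = [score_fn(f) for f in window[:-1]]
    return scores.index(min(scores))
```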
In one possible implementation, the projection error and the Jacobian matrix of the preset label may be determined as follows:
determining the initial pose of the preset label in the camera coordinate system of the robot from the two-dimensional coordinate information of the corner points of the preset label in the M frames of images through the Perspective-n-Point (PnP) algorithm; determining the three-dimensional coordinate information of each corner point of the preset label in the world coordinate system according to the initial pose of the preset label in the camera coordinate system of the robot and the initial pose of the robot; projecting the three-dimensional coordinate information of the corner points of the preset label into the coordinate system of the camera to obtain the projected two-dimensional coordinate information of the corner points; calculating the difference between the projected two-dimensional coordinate information of the corner points and their two-dimensional coordinate information before projection to determine the projection error of the preset label; and determining the Jacobian matrix of the projection error of the preset label with respect to the optimized state quantities in the sliding window.
In the embodiment of the application, the initial pose of the ith preset label in the Mth frame image in the camera coordinate system is determined through the Perspective-n-Point (PnP) algorithm from the three-dimensional coordinates of the corner points of the ith preset label in the preset label coordinate system and the two-dimensional pixel coordinates of that label in the image frame M in which it is first observed; the global three-dimensional coordinate information of the corner points of the preset label is then determined according to the initial pose of the ith preset label in the Mth frame image and the global pose of the camera at the Mth frame image, after which the projection error of the preset label is determined, ensuring the positioning accuracy of robot mapping.
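The projection-error construction above amounts to projecting the label's 3D corner points through a pinhole model and differencing against the observed pixels. A minimal numpy sketch, assuming a standard intrinsic matrix K and a world-to-camera pose (R, t); the PnP solve that produces (R, t) is not shown here:

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3D points in the world frame into pixel coordinates with a
    pinhole camera of intrinsics K and world-to-camera pose (R, t)."""
    pc = (R @ pts3d.T + t.reshape(3, 1)).T       # points in the camera frame
    uv = (K @ (pc / pc[:, 2:3]).T).T[:, :2]      # perspective division
    return uv

def tag_reprojection_error(K, R, t, corners3d_world, corners2d_obs):
    """Per-corner residual between observed and projected label corners;
    stacking these residuals gives the projection error used in bundle
    optimization."""
    return corners2d_obs - project(K, R, t, corners3d_world)
```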
In one possible implementation manner, the variance value of the target pose is related to the corner observation uncertainty of each preset label in the sliding window, where the observation uncertainty of a corner point of any preset label in any image frame is a function of the following parameters:
the two-dimensional coordinate information of the corner points of the preset label, the linear velocity, angular velocity and acceleration of the image acquisition device at the image frame in which the preset label is located, and the pixel area of the preset label in the image frame.
In the embodiment of the application, the variance value of the target pose of the preset label has a functional relationship with various parameters, so that the variance value of the target pose obtained by the application is more consistent with an actual scene.
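One plausible instantiation of such a function can be sketched as follows. This is purely a hedged heuristic: the text only states which parameters the uncertainty depends on, so the functional form, the weights, and the use of distance from the image center are all assumptions.

```python
import numpy as np

def corner_observation_sigma(corner_uv, center_uv, lin_vel, ang_vel, accel,
                             pixel_area, base_sigma=1.0, k_motion=0.5,
                             k_area=100.0, k_ecc=1e-3):
    """Heuristic corner-observation standard deviation (pixels).
    Faster camera motion suggests more blur (larger sigma); a larger label
    image suggests sharper corners (smaller sigma); corners far from the
    image center get a mild distortion penalty. Weights are illustrative."""
    motion = (np.linalg.norm(lin_vel) + np.linalg.norm(ang_vel)
              + np.linalg.norm(accel))
    ecc = np.linalg.norm(np.asarray(corner_uv, float)
                         - np.asarray(center_uv, float))
    return (base_sigma * (1.0 + k_motion * motion)
            * (1.0 + k_area / pixel_area) * (1.0 + k_ecc * ecc))
```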
In a possible implementation manner, the robot can add the prior information of the preset label as a constraint item to the bundle optimization and the measurement update; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in a world coordinate system.
In the embodiment of the application, the robot can also adjust the pose of the preset label in the world coordinate system based on the prior information, so that the robot is higher in mapping accuracy and more reliable in mapping result.
In a possible implementation manner, the robot may perform the EKF measurement state update with the projection constraint of the preset label, the prior relative pose constraint of the preset label, the prior global pose constraint of the preset label, the measurement constraint of the wheel speed odometer, and the time offset constraint, to obtain an updated EKF state.
In the embodiment of the application, the accuracy of robot mapping can be improved by updating with the projection constraint of the preset label, the pose constraints of the preset label, the time offset, and the like.
In one possible implementation, the EKF state includes: the target pose of the preset label in the world coordinate system in the continuous multi-frame images collected within the preset time period, the extrinsic parameters between the sensors, the time offsets between the sensors, the pose and speed of the robot at the current frame, and the zero offset (bias) of the IMU.
In a second aspect, an embodiment of the present application further provides a robot mapping apparatus, including: a bundle optimization unit, a state prediction unit and a measurement update unit.
The bundle optimization unit is used for performing bundle optimization on continuous multi-frame image data, inertial measurement unit (IMU) data and wheel speed odometer data collected within a preset time period, and acquiring the optimized pose, and the variance value of the optimized pose, of a preset label in the camera coordinate system of the robot in the current frame image; the state prediction unit is used for performing extended Kalman filter (EKF) state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image and determining the pose of the robot and a variance value of the pose; and the measurement update unit is used for performing an EKF measurement update on the optimized pose of the preset label, the variance value of the optimized pose, the pose of the robot and the variance value of the pose of the robot, and acquiring the target pose, and the variance value of the target pose, of the preset label in the world coordinate system in the continuous multi-frame images acquired within the preset time period.
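The dataflow between the three units can be sketched as a toy pipeline. The numeric routines below are deliberately simplified stand-ins (an averaging "bundle optimization", an additive prediction, and an identity-measurement EKF update); they illustrate only how the units hand data to each other, not the patent's actual solvers.

```python
import numpy as np

class RobotMapper:
    """Toy sketch of the three-unit apparatus: bundle optimization unit,
    state prediction unit, measurement update unit."""

    def bundle_optimize(self, tag_obs):
        """Stand-in bundle optimization unit: fuse per-frame label pose
        estimates (N x 3 array) into an 'optimized' pose and variance."""
        tag_obs = np.asarray(tag_obs, dtype=float)
        return tag_obs.mean(axis=0), tag_obs.var(axis=0, ddof=1)

    def predict(self, pose, P, u, Q):
        """Stand-in state prediction unit: additive propagation with
        process noise Q (the real unit runs EKF recursion on IMU and
        wheel odometer data)."""
        return pose + u, P + Q

    def update(self, pose, P, z, R):
        """Stand-in measurement update unit: EKF update with an identity
        measurement model, fusing the optimized pose z of variance R."""
        K = P @ np.linalg.inv(P + R)
        pose_new = pose + K @ (z - pose)
        P_new = (np.eye(len(pose)) - K) @ P
        return pose_new, P_new
```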
In a possible implementation manner, the measurement update unit may further perform online optimization of the extrinsic parameters and time offsets between the sensors during the EKF measurement update, and use the optimized extrinsic parameters and time offsets for bundle optimization; the extrinsic parameters comprise: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
In a possible implementation manner, the bundle optimization unit is specifically configured to: acquire continuous N frames of images collected within a preset time period through the camera of the robot, screen the N frames of images, and determine M frames of images that include preset labels, where N and M are integers and 2 ≤ M ≤ N; and perform bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of those corner points in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, determining the optimized pose of the preset labels in the camera coordinate system, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix.
In one possible implementation, the continuous multi-frame images in the sliding window for bundle optimization are determined as follows:
if the number of image frames in the sliding window is less than or equal to the sliding window threshold, bundle optimization is performed on all the image frames; and if the number of image frames in the sliding window is greater than the sliding window threshold, bundle optimization is performed on all the image frames, preset frame images are screened out, and the constraints of the rejected images on the sliding window are retained through the Schur complement method.
In one possible implementation, the screening of the preset frame images for bundle optimization is determined by one or more of the following ways:
acquiring an identification list of all preset labels in a current frame image;
rejecting images in the multi-frame images that do not include a preset label; if the identifier of the preset label detected in the Nth frame image differs from the identifier of the preset label in the Mth frame image, rejecting the Nth frame image; if, within the preset time period, the identifier of the preset label in every frame image is the same as that in the Mth frame image, scoring the images other than the Mth frame image by information degree and eliminating the frame image with the lowest score; wherein M and N are integers, and M is the index of the last frame image acquired within the preset time period.
In one possible implementation, the projection error and the jacobian matrix of the preset label may be determined as follows:
determining the initial pose of the preset label in the camera coordinate system of the robot from the two-dimensional coordinate information of the corner points of the preset label in the M frames of images through the Perspective-n-Point (PnP) algorithm; determining the three-dimensional coordinate information of each corner point of the preset label in the world coordinate system according to the initial pose of the preset label in the camera coordinate system of the robot and the initial pose of the robot; projecting the three-dimensional coordinate information of the corner points of the preset label into the coordinate system of the camera to obtain the projected two-dimensional coordinate information of the corner points; calculating the difference between the projected two-dimensional coordinate information of the corner points and their two-dimensional coordinate information before projection to determine the projection error of the preset label; and determining the Jacobian matrix of the projection error of the preset label with respect to the optimized state quantities in the sliding window.
In one possible implementation manner, the variance value of the target pose is related to the corner observation uncertainty of each preset label in the sliding window, where the observation uncertainty of a corner point of any preset label in any image frame is a function of the following parameters:
the two-dimensional coordinate information of the corner points of the preset label, the linear velocity, angular velocity and acceleration of the image acquisition device at the image frame in which the preset label is located, and the pixel area of the preset label in the image frame.
In a possible implementation manner, the bundle optimizing unit is specifically configured to: adding the prior information of a preset label as a constraint item into bundle optimization and measurement updating; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in a world coordinate system.
In a possible implementation manner, the robot may perform the EKF measurement state update with the projection constraint of the preset label, the prior relative pose constraint of the preset label, the prior global pose constraint of the preset label, the measurement constraint of the wheel speed odometer, and the time offset constraint, to obtain an updated EKF state.
In one possible implementation, the EKF state includes: the target pose of the preset label in the world coordinate system in the continuous multi-frame images collected within the preset time period, the extrinsic parameters between the sensors, the time offsets between the sensors, the pose and speed of the robot at the current frame, and the zero offset (bias) of the IMU.
In a third aspect, an embodiment of the present application further provides a computing device, including: a memory and a processor; the memory is used for storing program instructions; and the processor is used for calling the program instructions stored in the memory and performing robot mapping according to any method of the first aspect in accordance with the obtained program instructions.
In a fourth aspect, the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions configured to cause a computer to execute any robot mapping method in the present application.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a robot mapping method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a robot mapping method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a robot mapping method according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a Schur complement method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a robot mapping device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
It should be noted that the terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Next, an application scenario to which the present application may be applied is described with reference to fig. 1; fig. 1 includes a robot, a preset positioning label, and a server. In actual application, whether a server is needed depends on the computing capability of the robot: if the computing capability of the robot is high, the robot can complete complex calculations autonomously and no server-assisted computation is needed; if the computing capability of the robot is low, the robot can send the data to be calculated to the server, and data processing efficiency can be improved through calculation on the server. When the application scenario includes the server, the robot is in communication connection with the server; the robot and the server can also jointly process the data to be calculated, with the server assisting the robot in mapping.
It should be noted that the preset labels are attached to the spatial environment to be mapped by the robot, such as factory ceilings and office roofs, and remain in substantially constant positions. A preset label in this application may be any pattern that can carry identifying information, such as a two-dimensional code or a bar code; the present application is not particularly limited in this respect. The application takes a two-dimensional code as an example of a preset label. The two-dimensional code is a binary square marker composed of a wide black border and an internal binary matrix, and the arrangement of the internal matrix determines the identifier (ID) of the two-dimensional code. The black border facilitates rapid detection in the image, and the binary encoding allows the ID to be verified and error detection and correction techniques to be applied.
In the application, the robot collects continuous multi-frame images through the camera, the images include the two-dimensional codes, the poses of the two-dimensional codes in the world coordinate system are determined through bundle optimization with IMU data and wheel speed odometer data together with the EKF measurement update, and the map is built based on the poses of the two-dimensional codes in the world coordinate system.
In practical applications, robot mapping may be performed by the robot in the application scenario shown in fig. 1, by the server, or by the robot and the server in cooperation; the application does not limit the specific execution subject here. Taking the execution subject as the robot as an example, as shown in fig. 2, the robot can refer to the following steps when creating the map:
Step 201: the robot performs bundle optimization on continuous multi-frame image data, IMU data and wheel speed odometer data collected within a preset time period, and obtains the optimized pose, and the variance value of the optimized pose, of a preset label in the camera coordinate system of the robot in the current frame image.
In practical implementation, the camera of the robot may be a monocular, binocular or multi-view camera, and the present application is not particularly limited. The robot takes the most recently acquired target images that share the same timestamp as the current frame image information. It should be noted that when mapping with the process shown in fig. 2, the pose of the preset label in the world coordinate system is unknown; a two-dimensional code detection program is therefore called to detect, in turn, the local area in which the two-dimensional code of each target image of the current frame is located, and the IDs and two-dimensional coordinates of all two-dimensional codes detected in the image are returned.
The two-dimensional code detection process comprises the following steps:
(1) Detecting candidate areas of the two-dimensional code. This step mainly finds, in the whole image area, contours that may belong to a two-dimensional code (e.g. squares). The image is segmented using an adaptive threshold, outline contours are then extracted from the thresholded image, and those that are not convex polygons, as well as those that are not square, are discarded. In this process, contours that are too small or too large, convex polygons that are too close to each other, and the like can also be eliminated by filtering operations.
(2) Analyze the candidate areas to determine the type of the two-dimensional code. After the robot detects a two-dimensional code candidate area, it determines whether a two-dimensional code of the specified type exists in the candidate area by analyzing the area's internal coding; a projective transformation is then applied to the image to obtain the standard form (a front view), and white and black positions are separated from the standard image by thresholding. The two-dimensional code is divided into cells according to its size and border size, the numbers of black and white pixels falling in each cell are counted to decide whether the cell is a black bit or a white bit, and finally whether the two-dimensional code belongs to the type of the designated dictionary is determined from the flag bits.
(3) Output the relevant data information of the two-dimensional codes, matching the two-dimensional code type detected in the previous step against the specified dictionary to determine the ID of each two-dimensional code. For each two-dimensional code, the two-dimensional coordinates of its four corner points are determined from the outer frame area of the code.
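The cell-counting and dictionary-matching logic of steps (2) and (3) can be sketched as follows. This is a minimal pure-Python illustration; the 2x2-bit dictionary, the function names, and the majority-vote rule are assumptions made for brevity, not the patent's implementation (real dictionaries use larger grids with error correction):

```python
def cell_bits(binary_img, grid):
    """Majority-vote each grid cell of a thresholded marker image to one bit."""
    h = len(binary_img) // grid
    w = len(binary_img[0]) // grid
    bits = []
    for r in range(grid):
        row = []
        for c in range(grid):
            ones = sum(binary_img[r * h + i][c * w + j]
                       for i in range(h) for j in range(w))
            row.append(1 if ones * 2 >= h * w else 0)
        bits.append(row)
    return bits

def rot90(bits):
    """Rotate a square bit grid 90 degrees clockwise."""
    n = len(bits)
    return [[bits[n - 1 - c][r] for c in range(n)] for r in range(n)]

def match_id(bits, dictionary):
    """Return (marker_id, rotation) if the bit grid matches a dictionary
    entry under one of the four rotations, else None."""
    for rot in range(4):
        for marker_id, entry in dictionary.items():
            if bits == entry:
                return marker_id, rot
        bits = rot90(bits)
    return None
```

Trying all four rotations is what lets the detector both identify the marker and fix the output order of the four corner points.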
Step 202, the robot performs EKF (Extended Kalman Filter) state recursion on the IMU (Inertial Measurement Unit) data and wheel-speed odometer data corresponding to the current frame image, and determines the pose of the robot and the variance of that pose.
It should be noted that step 202 and step 201 are not restricted to a particular order in actual execution: step 202 may be executed before step 201, or the two may be executed simultaneously; the present application is not specifically limited here.
Step 203, the robot performs an EKF measurement-state update using the optimized pose of the preset tag, the variance of the optimized pose, the pose of the robot, and the variance of the robot pose, and obtains the pose of the preset tag in the world coordinate system and the target pose variance over the consecutive multi-frame images acquired within the preset time period.
The target pose variance can be used to characterize the reliability of the pose, in the world coordinate system, of a preset tag in the current frame image; the target pose variance is related to the corner-observation uncertainty of each preset tag in the sliding window. The corner-observation uncertainty of any preset tag in any image frame is a function of the following parameters: the two-dimensional coordinate information of the corner points of the preset tag; the linear velocity, angular velocity, and acceleration of the image acquisition device for the image frame in which the preset tag is located; and the pixel area of the preset tag in the image frame.
In the embodiment of the present application, when building the map, the robot uses consecutive multi-frame images while also referring to IMU data and wheel-speed odometer data, which helps ensure mapping accuracy. In addition, bundle optimization and EKF state prediction are fused during mapping, so that the positioning accuracy of robot mapping is higher, and the pose of the preset tag in the world coordinate system and the target variance can be computed more quickly.
In one embodiment, during the EKF measurement-state update the robot can perform online optimization of the extrinsic parameters and time offsets among the sensors, and the optimized extrinsic parameters and time offsets are used for bundle optimization. The extrinsic parameters comprise: the relative pose of the IMU and the camera, and the relative pose of the wheel-speed odometer and the camera. In the present application, the robot optimizes the extrinsic parameters during mapping, which helps ensure mapping accuracy.
The following describes the scheme of the present application with reference to the execution flowchart shown in fig. 3, specifically as follows:
1.1 Preprocessing of sensor data
In an optional embodiment, the robot may acquire, through its multiple cameras (the camera module in fig. 3), N consecutive frames of images within a preset time period, and filter the N frames to determine M frames of images containing a preset tag, where N and M are integers and 2 ≤ M ≤ N. Bundle optimization is then performed over the position information of the corner points of the preset tags in the M frames of images, the projection information of those corner points in the coordinate system of the robot's camera, the IMU data, and the wheel-speed odometer data, to determine the optimized pose of the preset tag in the camera coordinate system, the variance of the optimized pose, the projection error of the preset tag, the Jacobian matrix, and the covariance matrix. The specific steps are as follows:
A. Preprocessing of two-dimensional codes in images
The robot may first acquire the latest frame image of each camera and use the per-camera images sharing the same time stamp as the current frame image information. A two-dimensional code detection program is then called to sequentially detect the local area where the two-dimensional code of each target image of the current frame is located, and to return the IDs and two-dimensional coordinates of all detected two-dimensional codes. All two-dimensional code observation information of each target image in the current frame is then output, which may comprise: the ID of the two-dimensional code; the ID or time stamp of the image frame in which it is located; the ID of the camera observing it; and the two-dimensional coordinates of its four corner points, which may be pixel (u, v) values or normalized coordinates (the four corner points are output in a specified order, e.g. clockwise starting from the upper-left corner of the code). If a two-dimensional code is observed for the first time, its pose relative to the camera can be computed by PnP, and the initial value of its global pose can then be computed based on the global pose of the current frame.
B. IMU pre-integration
Obtain IMU data: store the IMU data between two consecutive image frames in a cache list.
Time-stamp alignment: align the integration start and end times of the IMU data with the image frame time by interpolation, based on the time offset between the IMU and the camera.
Compute the pre-integration: using the midpoint method, iteratively integrate to obtain the state increments of position, velocity, and angle between the two image frames.
Compute the Jacobian and variance matrices: compute the Jacobian matrix and variance matrix of the state quantities between the two image frames for the subsequent bundle optimization process.
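The midpoint-rule pre-integration step above can be sketched in a planar (2-D) form for clarity; this is a simplified illustration under the assumption that biases have already been subtracted, not the patent's full 3-D implementation, and the function name and data layout are made up for the example:

```python
import math

def preintegrate(samples, dt):
    """Midpoint-rule IMU pre-integration between two image frames
    (planar sketch: each sample is ((ax, ay), w) with body-frame
    acceleration and yaw rate, biases assumed already removed).
    Returns the increments (dp, dv, dtheta) of position, velocity
    and angle, expressed in the body frame at the interval start."""
    px = py = vx = vy = th = 0.0
    for k in range(len(samples) - 1):
        (a0, w0), (a1, w1) = samples[k], samples[k + 1]
        w_mid = 0.5 * (w0 + w1)
        th_new = th + w_mid * dt
        # rotate the two body-frame accelerations into the start frame
        ax0 = a0[0] * math.cos(th) - a0[1] * math.sin(th)
        ay0 = a0[0] * math.sin(th) + a0[1] * math.cos(th)
        ax1 = a1[0] * math.cos(th_new) - a1[1] * math.sin(th_new)
        ay1 = a1[0] * math.sin(th_new) + a1[1] * math.cos(th_new)
        axm, aym = 0.5 * (ax0 + ax1), 0.5 * (ay0 + ay1)
        px += vx * dt + 0.5 * axm * dt * dt
        py += vy * dt + 0.5 * aym * dt * dt
        vx += axm * dt
        vy += aym * dt
        th = th_new
    return (px, py), (vx, vy), th
```

Because the increments are expressed relative to the first frame, they do not need to be recomputed when the absolute pose of that frame changes during optimization, which is the point of pre-integration.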
C. Wheel-speed odometer pre-integration
Obtain wheel-speed odometer data: store the wheel-speed odometer data between two consecutive image frames in a cache list.
Time-stamp alignment: align the integration start and end times of the wheel-speed odometer data with the image frame time by interpolation, based on the time offset between the wheel-speed odometer and the camera.
Compute the pre-integration: using the midpoint method, iteratively integrate to obtain the state increments of position and angle between the two image frames.
Compute the Jacobian and variance matrices: compute the Jacobian matrix and variance matrix of the state quantities between the two image frames for the subsequent bundle optimization process.
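The odometer pre-integration differs from the IMU case in that velocity is measured directly, so only position and angle increments accumulate. A planar midpoint-rule sketch (an illustration under assumed data layout, not the patent's exact implementation):

```python
import math

def odom_preintegrate(samples, dt):
    """Midpoint-rule wheel-speed-odometer pre-integration (planar
    sketch: each sample is (v, w) with body-frame forward velocity and
    yaw rate). Returns the increments (dx, dy, dtheta) expressed in
    the body frame at the interval start."""
    x = y = th = 0.0
    for k in range(len(samples) - 1):
        (v0, w0), (v1, w1) = samples[k], samples[k + 1]
        w_mid = 0.5 * (w0 + w1)
        th_new = th + w_mid * dt
        # midpoint of the velocity rotated into the start frame
        vx = 0.5 * (v0 * math.cos(th) + v1 * math.cos(th_new))
        vy = 0.5 * (v0 * math.sin(th) + v1 * math.sin(th_new))
        x += vx * dt
        y += vy * dt
        th = th_new
    return x, y, th
```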
1.2 Bundle optimization
The main purpose of bundle optimization is to optimize the system state of the current window according to the reprojection errors of the two-dimensional code corner points observed over multiple frames and multiple cameras, the observation uncertainties of those reprojection errors, the IMU pre-integration, and the wheel-speed odometer pre-integration, so as to obtain a more robust and accurate observation of the relative pose between each two-dimensional code and the current robot coordinate system.
1.2.1 Optimization state of the system
The state X^S to be optimized in the current window comprises: the inertial states of the n+1 frames within the window (including position, attitude, velocity, accelerometer bias, and gyroscope bias), the extrinsic parameters between each camera and the IMU, the extrinsic parameters between the wheel-speed odometer and the IMU, and the global poses of all two-dimensional codes that have been observed:

X^S = [x_0, x_1, …, x_n, m_0, …, m_l, T_C^0, …, T_C^c, T_O]

where x_k = [p_k^W, q_k^W, v_k^W, b_a, b_g] represents the inertial state of the k-th frame; (p_k^W, q_k^W) represents the IMU global pose (i.e., the pose in the world coordinate system) of the k-th frame, p_k^W and q_k^W being the IMU global position and attitude of the k-th frame respectively; v_k^W represents the IMU global velocity of the k-th frame; b_a represents the accelerometer bias; b_g represents the gyroscope bias; m_i = [p_{m_i}^W, q_{m_i}^W] represents the global pose of the i-th two-dimensional code, p_{m_i}^W and q_{m_i}^W being its global position and attitude respectively; T_C^j = [p_C^j, q_C^j] represents the extrinsic parameters between the j-th camera and the IMU, p_C^j being the translation component and q_C^j the rotation component; and T_O = [p_O, q_O] represents the extrinsic parameters between the wheel-speed odometer and the IMU, p_O being the translation component and q_O the rotation component.
1.2.2 Optimization constraint construction of the system
The sliding-window bundle optimization process can be constructed as a nonlinear least-squares problem:

min_X { ||r_p − H_p X||² + Σ_{i∈T} Σ_k ||r_tag(z_{i,k}, X)||²_{P_{i,k}} + Σ_{i,j} ||r_rel(z_{i,j}, X)||²_{P_{i,j}} + Σ_i ||r_glob(z_i, X)||²_{P_i} + Σ_{k∈B} ||r_imu(ẑ_{k,k+1}, X)||²_{P_{k,k+1}} + Σ_{k∈E} ||r_odo(ẑ'_{k,k+1}, X)||²_{Q_{k,k+1}} }

where X represents the state of the sliding-window optimization; ||r_p − H_p X||² represents the marginalized prior-information constraint; r_tag represents the reprojection constraint of the preset tags; T represents the list of observed preset tags; z_{i,k} represents the observation data of the corner points of the i-th preset tag in the k-th image frame of the sliding window; P_{i,k} represents the uncertainty variance matrix of the corner points of the i-th preset tag in the k-th image frame of the sliding window; r_rel represents the relative pose constraint of the preset tags; z_{i,j} represents the relative pose observation between the i-th and j-th preset tags; P_{i,j} represents the relative pose uncertainty variance matrix of the i-th and j-th preset tags; r_glob represents the global pose constraint of the preset tags; z_i represents the global pose observation of the i-th preset tag; P_i represents the global pose uncertainty variance matrix of the i-th preset tag; r_imu represents the pre-integration constraint of the IMU data; B represents the list of image frames subject to IMU pre-integration constraints; P_{k,k+1} represents the covariance matrix corresponding to the pre-integrated state increment of the IMU data; ẑ_{k,k+1} represents the pre-integration result of the IMU data; r_odo represents the pre-integration residual of the wheel-speed odometer data; E represents the list of image frames subject to wheel-speed odometer pre-integration constraints; Q_{k,k+1} represents the covariance matrix corresponding to the wheel-speed odometer pre-integration; and ẑ'_{k,k+1} represents the pre-integration result of the wheel-speed odometer data. Furthermore, to account for wheel-speed odometer slippage, the wheel-speed odometer constraint may be wrapped in a robust kernel function ρ.
Here, the prior-information constraint represents the constraint of the marginalized sliding-window information on the current sliding-window state; the reprojection constraint of the two-dimensional codes represents the projection error constraint, from global three-dimensional points to two-dimensional points, of all corner points of every two-dimensional code observed by each frame in the sliding window; the wheel-speed odometer measurement constraint represents the measurement constraint of the wheel-speed odometer pre-integration between two adjacent frames in the sliding window; and the IMU measurement constraint represents the measurement constraint of the IMU pre-integration between two adjacent frames in the sliding window. The relative pose constraint of the two-dimensional codes represents the constraint, on the state X^S to be optimized in the window, of the known relative poses of the two-dimensional codes in the sliding window together with their uncertainty variances (e.g. the known relative pose of the i-th and j-th two-dimensional codes and its uncertainty variance); this constraint term is optional. Similarly, the global pose constraint of the two-dimensional codes represents the constraint, on the state X^S to be optimized in the window, of the known global poses of the two-dimensional codes in the sliding window and their uncertainties.
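The robust kernel ρ mentioned for the slippage-prone wheel-speed odometer constraint can be illustrated with the Huber kernel, one common choice; the patent does not fix a particular kernel, and the parameter delta below is an assumed tuning value:

```python
import math

def huber(s, delta=1.0):
    """Huber robust kernel rho(s) applied to a squared residual s,
    returning (rho(s), rho'(s)). The cost is quadratic below delta^2
    and linear above it, so a large odometry residual caused by wheel
    slippage is down-weighted instead of dominating the least-squares
    objective. rho'(s) is the weight used in IRLS-style solvers."""
    if s <= delta * delta:
        return s, 1.0
    r = math.sqrt(s)
    return 2.0 * delta * r - delta * delta, delta / r
```

An inlier residual keeps weight 1.0, while an outlier with squared norm 4.0 is scaled down by half, limiting its influence on the window state.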
1.2.3 Construction of the reprojection constraint of the two-dimensional code
The two-dimensional code reprojection constraint is constructed by traversing all image frames in the sliding window; for every corner point on each two-dimensional code observed by each image frame, the corner's three-dimensional global coordinate is obtained from the to-be-solved global pose state of that two-dimensional code, projected onto the two-dimensional plane of the image frame, and then subtracted from the observed two-dimensional coordinate of the corner to obtain the residual constraint.
In one embodiment, the projection error and Jacobian matrix of the preset tag can be determined as follows:
Through a PnP algorithm, determine the initial pose of the i-th preset tag in the robot's camera coordinate system in the M-th frame image according to the three-dimensional coordinates, in the preset-tag coordinate system, of the corner points of the i-th preset tag and the two-dimensional pixel coordinates of the tag in the image frame M in which it is first observed; determine the global three-dimensional coordinate information of the corner points of the preset tag from that initial pose and the global pose of the camera in the M-th frame image; project the three-dimensional coordinate information of the corner points into the camera coordinate system to obtain the projected two-dimensional coordinate information of the corner points; compute the difference between the projected two-dimensional coordinate information and the observed two-dimensional coordinate information of the corner points to determine the projection error of the preset tag in the M-th frame image; and determine the Jacobian matrix of the preset-tag projection error with respect to the optimized state quantities in the sliding window.
It should be noted that, in the present application, the construction process of the reprojection constraint of the two-dimensional code mainly includes the following steps:
(1) Two-dimensional code pose initialization
Because two-dimensional codes are posted arbitrarily in the mapping stage, their global poses cannot be acquired in advance. When a two-dimensional code is observed for the first time, its global pose must be initialized: from the first observed corner coordinates of the code, the relative pose (R_cm, p_cm) of the two-dimensional code with respect to the camera is obtained by the PnP method, and the global pose (R_c^W, p_c^W) of the camera is then composed with it to obtain the initial value of the global pose of the two-dimensional code:

p_m^W = R_c^W p_cm + p_c^W,  R_m^W = R_c^W R_cm

where p_m^W represents the initial value of the global position of the two-dimensional code and R_m^W represents the initial value of its global rotation.
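The pose composition used here can be sketched in a planar (SE(2)) analogue for clarity; this simplifies the SE(3) case of the patent, and the pose tuples and function name are illustrative:

```python
import math

def compose(pose_wc, pose_cm):
    """Initialize a tag's global pose by composing the camera's global
    pose T_wc with the PnP-derived camera-to-tag pose T_cm. Planar
    SE(2) sketch: poses are (x, y, theta). Returns T_wm = T_wc * T_cm."""
    xc, yc, tc = pose_wc
    xm, ym, tm = pose_cm
    return (xc + xm * math.cos(tc) - ym * math.sin(tc),
            yc + xm * math.sin(tc) + ym * math.cos(tc),
            tc + tm)
```

For example, a camera at (1, 0) facing +y that sees a tag 2 m straight ahead yields a tag initial position of (1, 2) in the world frame.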
(2) Global coordinates of each corner point of the two-dimensional code
A. Acquire the local three-dimensional coordinates of the corner points. From the physical size of each two-dimensional code, the local three-dimensional coordinate P^m of each corner point in the two-dimensional code coordinate system is known.
B. Compute the global three-dimensional coordinates of the corner points. From the global pose (R_m^W, p_m^W) of each two-dimensional code and the local three-dimensional coordinates of its corner points, the global three-dimensional coordinates of all corner points of the code can be computed. When a two-dimensional code is observed for the first time, its global pose takes the initial value obtained by the initialization in the previous step; if the two-dimensional code is not being observed for the first time, its global pose takes the value from the state after the previous bundle optimization. The global three-dimensional coordinates are computed as:

P^W = R_m^W P^m + p_m^W

where P^W represents the global three-dimensional coordinate of a corner point, p_m^W represents the global position of the two-dimensional code, and R_m^W represents its global rotation.
(3) Calculation of the two-dimensional code reprojection error
Based on the global pose of the image frame observing the two-dimensional code, project the three-dimensional coordinates of each corner point obtained in the previous step onto the two-dimensional plane of that image frame, and compute the difference from the observed two-dimensional coordinate of the corner point to obtain the residual constraint. That is:

convert the three-dimensional global coordinates of all corner points of each two-dimensional code into three-dimensional coordinates in the camera coordinate system of the frame:

P^C = (R_c^W)ᵀ (P^W − p_c^W)

and compute the two-dimensional reprojection error of each corner point:

r = π(P^C) − z

where π(·) denotes the mapping function that converts a three-dimensional point in the camera coordinate system to normalized coordinates, and z denotes the observed two-dimensional coordinate of the corner point.
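A minimal sketch of this residual computation (pure Python, pinhole model with normalized coordinates; the function name and argument layout are illustrative):

```python
def reproj_residual(R_wc, p_wc, P_w, z):
    """Reprojection residual of one tag corner: transform the global
    3-D corner P_w into the camera frame using the camera's global pose
    (R_wc as a 3x3 nested-list rotation, p_wc as position), map it to
    normalized image coordinates, and subtract the observation z."""
    # P_c = R_wc^T (P_w - p_wc)
    d = [P_w[i] - p_wc[i] for i in range(3)]
    P_c = [sum(R_wc[i][j] * d[i] for i in range(3)) for j in range(3)]
    # normalized coordinates (x/z, y/z) minus the observed coordinates
    return [P_c[0] / P_c[2] - z[0], P_c[1] / P_c[2] - z[1]]
```

A corner exactly on the predicted line of sight yields a zero residual; during optimization, nonzero residuals over all corners, frames, and cameras drive the window state toward consistency.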
(4) Computation of the Jacobian matrices of the two-dimensional code reprojection constraint
The Jacobian matrices of the two-dimensional code reprojection error with respect to the state quantities to be optimized in the sliding window are required: with respect to each keyframe pose, with respect to the extrinsic parameters, and with respect to the global pose of each two-dimensional code. Unlike the localization stage (in which the global poses of the two-dimensional codes are known), this step must additionally solve the Jacobian with respect to the global pose of the two-dimensional code. Writing the corner point in the camera frame as P^C = (R_c^b)ᵀ((R_b^W)ᵀ(P^W − p_b^W) − p_c^b), with (R_b^W, p_b^W) the body (keyframe) pose and (R_c^b, p_c^b) the camera-IMU extrinsic parameters, each block follows the chain rule ∂r/∂x = J_π · ∂P^C/∂x, where J_π = ∂r/∂P^C is the derivative of the error with respect to the three-dimensional point in the camera coordinate system: for P^C = (x, y, z), J_π = [[1/z, 0, −x/z²], [0, 1/z, −y/z²]].

A. The Jacobian of the reprojection error with respect to the position component of the image frame pose is

∂r/∂p_b^W = J_π · ∂P^C/∂p_b^W, with ∂P^C/∂p_b^W = −(R_c^b)ᵀ(R_b^W)ᵀ;

the Jacobian with respect to the attitude component of the keyframe pose is

∂r/∂δθ_b = J_π · ∂P^C/∂δθ_b, with ∂P^C/∂δθ_b = (R_c^b)ᵀ[(R_b^W)ᵀ(P^W − p_b^W)]^∧.

B. The Jacobian with respect to the position component of the extrinsic parameters to be optimized is

∂r/∂p_c^b = J_π · ∂P^C/∂p_c^b, with ∂P^C/∂p_c^b = −(R_c^b)ᵀ;

the Jacobian with respect to the attitude component of the extrinsic parameters is

∂r/∂δθ_c = J_π · ∂P^C/∂δθ_c, with ∂P^C/∂δθ_c = [P^C]^∧.

C. The Jacobian with respect to the global pose of the two-dimensional code is obtained through the corner coordinate P^W, with ∂P^C/∂P^W = (R_c^b)ᵀ(R_b^W)ᵀ:

∂r/∂p_m^W = J_π · ∂P^C/∂P^W · ∂P^W/∂p_m^W, with ∂P^W/∂p_m^W = I;

∂r/∂δθ_m = J_π · ∂P^C/∂P^W · ∂P^W/∂δθ_m, with ∂P^W/∂δθ_m = −R_m^W[P^m]^∧,

where [·]^∧ denotes the skew-symmetric matrix of a vector and δθ denotes a right-multiplied rotation perturbation.
(5) Calculation of the observation uncertainty variance of each corner point in the two-dimensional code
To quantify the uncertainty of a single observation of each corner point of a single two-dimensional code, an uncertainty variance function of the two-dimensional code is configured according to a large number of actual test conclusions:

σ² = f(u, v, linear_vel, angular_vel, acc, S)

where u and v represent the two-dimensional coordinates of a corner point of the two-dimensional code in the image, linear_vel represents the linear velocity of the camera for the image frame in which the two-dimensional code is located, angular_vel represents the rotational angular velocity of that camera, acc represents the acceleration of that camera, and S represents the pixel area of the two-dimensional code in the image. According to the actual scene and the sensor configuration, the above formula can be simplified using u_max, the maximum abscissa in image pixel coordinates, and v_max, the maximum ordinate in image pixel coordinates.
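A hypothetical instance of such a variance function is sketched below. The patent specifies only which parameters the function depends on, so the functional form and every coefficient here are illustrative assumptions: variance grows toward the image border and with camera motion, and shrinks as the tag's pixel area grows.

```python
def corner_variance(u, v, linear_vel, angular_vel, acc, S,
                    u_max=640.0, v_max=480.0):
    """Hypothetical corner-observation uncertainty: a border term that
    grows as (u, v) approaches the image edges, a motion term linear in
    the camera's linear velocity, angular velocity and acceleration,
    all divided by the tag's pixel area S. Coefficients are assumed."""
    border = (2.0 * abs(u / u_max - 0.5)) ** 2 \
           + (2.0 * abs(v / v_max - 0.5)) ** 2
    motion = 0.1 * linear_vel + 0.2 * angular_vel + 0.05 * acc
    return (1.0 + border + motion) / S
```

Under this form, a static, centered, large tag yields the smallest variance, matching the qualitative behavior the patent describes.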
(6) Two-dimensional code constraint validity judgment strategy
To avoid a reduction in the accuracy or stability of the sliding-window optimization caused by false detection of a two-dimensional code or erroneous detection of corner points within one, a constraint validity judgment strategy is set: whether a two-dimensional code's constraint is added to the sliding-window optimization function is decided by judging the stability with which that code is continuously observed in the sliding window. For example, the number of valid observations of a two-dimensional code across the frames in the sliding window is counted, and if it is greater than N, the observation constraint of that code in the sliding window is considered valid. The validity of a single observation is judged from the single-observation uncertainty variance of each corner point; for example, if the uncertainty variance of a single observation of a preset tag corner point is smaller than a threshold value, the observation is considered valid.
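The two-level validity check just described can be sketched as follows; the threshold values and the data layout are illustrative assumptions:

```python
def tag_constraint_valid(variances, var_thresh=0.05, min_valid=3):
    """Validity check for one tag's constraint in the sliding window:
    each per-frame corner-observation variance below var_thresh counts
    as one valid observation, and the tag's constraint enters the
    optimization only if the valid count exceeds min_valid."""
    valid = sum(1 for v in variances if v < var_thresh)
    return valid > min_valid
```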
1.2.4 Design of the marginalization strategy
In one embodiment, the consecutive multi-frame images in the bundle-optimized sliding window are determined as follows: if the number of image frames in the sliding window is less than or equal to the sliding-window threshold, bundle optimization is performed on all image frames; if the number of image frames in the sliding window is greater than the sliding-window threshold, all image frames are bundle-optimized and then screened to obtain the preset frame images, and the constraint of each eliminated image on the sliding window is retained through the Schur complement method. For example, if the sliding-window threshold is 32 and the number of image frames in the sliding window is 18, bundle optimization is performed on all 18 image frames; if the number of image frames in the sliding window is 33, bundle optimization is performed on the 33 frames, and after optimization image frames are removed through a marginalization strategy based on preset-tag observation information. For the removed image frames, a marginalized prior constraint is constructed through the Schur complement method and used in bundle optimization. Marginalization decomposes the joint probability distribution into a marginal distribution and a conditional distribution, so that older states in the bundle-optimization window can be marginalized out of the sliding window while their information is kept; sparsity of the corresponding information matrix is guaranteed, a balance between computing resources and computing accuracy is achieved, and the marginalization function can be realized by performing the Schur complement operation on the matrix. To illustrate the Schur complement method clearly, fig. 4 (a) schematically shows four consecutive frames, Xp1, Xp2, Xp3, and Xp4, where the preset tags Xm1, Xm2, and Xm3 can be observed in frame Xp1. When the Xp1 frame is removed during bundle optimization, the Schur complement method, as shown in fig. 4 (b), still retains the association among the preset tags Xm1, Xm2, and Xm3.
In one embodiment, the screening strategy is determined by one or more of the following:
it should be noted that the robot needs to acquire the identifier lists of all the preset tags in the current frame image, and then remove the images that do not include the preset tags from the multiframe image.
Mode 1: if the identifier of the preset tag detected in the N-th frame image differs from the identifiers of the preset tags in the M-th frame image, the N-th frame image is removed. Here M and N are integers, and M is the index of the last frame image acquired within the preset time period.
Mode 2: if the identifier of the preset tag in every frame image within the preset time period is the same as the identifier of the preset tag in the M-th frame image, the images other than the M-th frame image are scored by information degree, and the frame image with the lowest score is removed. Here M is an integer, and M is the index of the last frame image acquired within the preset time period.
Specifically, if after the image frames in the sliding window have been screened by Mode 1 their number is still greater than the sliding-window threshold, the image frames may be scored by information degree and removed from the lowest score upward until the sliding-window threshold is met.
It should be noted that the above screening operation on image frames in the sliding window can be understood as a marginalization operation: marginalization maintains a sliding window with a fixed number of image frames so as to control the amount of computation, and, in the form of a marginalization constraint, the constraint of the marginalized data on the states in the sliding window is retained via the Schur complement.
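The effect of the Schur complement can be checked numerically. Using scalar blocks for clarity (real sliding-window code uses matrix blocks), eliminating a state b from the information system leaves a prior on the remaining state a that yields the same estimate for a as solving the full system:

```python
def marginalize_scalar(Haa, Hab, Hbb, ba, bb):
    """Schur-complement marginalization of state b from the 2x2
    information system [[Haa, Hab], [Hab, Hbb]] x = [ba, bb].
    Returns the prior (H', b') on a that retains b's constraint."""
    k = Hab / Hbb
    return Haa - k * Hab, ba - k * bb
```

Solving the full 2x2 system [[4, 1], [1, 2]] x = [3, 1] gives a = 5/7; the marginalized prior (3.5, 2.5) reproduces exactly that estimate, which is why the eliminated frame's influence on the window survives.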
Different from sliding-window marginalization strategies based on feature-point tracking, a marginalization strategy based on two-dimensional code observation information is provided here. The strategy marginalizes the image frames in the current bundle-optimization window mainly according to the ID list {IDs} of all two-dimensional codes observed by each camera in the latest frame.
If the number of image frames in the window is less than or equal to the threshold N, all image frames in the window are retained. If the number of image frames in the window is greater than the threshold N, all image frames in the window are traversed, and image-frame marginalization is performed according to, but not limited to, the following three cases:
case 1: if a certain image frame does not detect the twodimensional code, the image frame is discarded. And the IMU for that frame and the preintegration constraint of the wheel speed odometer are retained to the next image frame.
Case 2: if two-dimensional codes are detected in an image frame but none of them belongs to the {IDs} list of the latest frame, that frame is discarded; the IMU and wheel-speed odometer pre-integration constraints of that frame are carried over to the next image frame, and the frame's constraint relationships with the other frames in the window are retained through the Schur complement operation.
Case 3: if every image frame in the window detects two-dimensional codes and some or all of the detected codes belong to the {IDs} list, the latest image frame is kept, the other image frames are scored by information degree, and the frame with the lowest information degree is discarded; the IMU and wheel-speed odometer pre-integration constraints of that frame are carried over to the next image frame, and the frame's constraint relationships with the other frames in the window are retained through the Schur complement operation. The information degree of an image frame is directly proportional to the number of two-dimensional codes it observes that belong to the {IDs} list, and inversely proportional to the observation variances of those two-dimensional codes.
1.2.5 Two-dimensional code observation output based on the sliding-window optimization state
Based on the state after the sliding-window optimization, all two-dimensional codes observed by each camera of the latest frame are traversed to obtain the pose of each two-dimensional code relative to each camera of the latest frame, namely:

T_m^c = (T_C)⁻¹ (T_b^W)⁻¹ T_m^W

where T_m^c represents the relative pose of the i-th two-dimensional code with respect to the camera coordinate system of the latest frame, computed based on the state after the sliding-window optimization; T_C represents the extrinsic parameters after the sliding-window optimization; T_b^W represents the latest-frame pose after the sliding-window optimization; and T_m^W represents the global pose of the i-th two-dimensional code after the sliding-window optimization.
2. State prediction
An EKF-based multi-sensor fusion framework is adopted, and time/space calibration errors can be automatically compensated.
2.1 EKF optimized state
The EKF state may include: the target poses, in the world coordinate system, of the preset tags in the consecutive multi-frame images acquired within the preset time period; the extrinsic parameters among the sensors; the time offsets among the sensors; the pose and velocity of the robot at the current frame; and the zero biases of the IMU. For example, the EKF system state vector X^E comprises the current inertial navigation state, the global poses of the two-dimensional codes, the extrinsic parameters, and the time offsets:

X^E = [x, m_0, …, m_l, T_C^0, …, T_C^c, T_O, t_C, t_O]

where x = [p^W, q^W, v^W, b_a, b_g] represents the inertial state of the current frame; (p^W, q^W) represents the IMU global pose of the current frame, p^W and q^W being the IMU global position and attitude of the current frame respectively; v^W represents the IMU global velocity of the current frame; b_a represents the accelerometer bias; b_g represents the gyroscope bias; m_i = [p_{m_i}^W, q_{m_i}^W] represents the global pose of the i-th two-dimensional code, p_{m_i}^W and q_{m_i}^W being its global position and attitude respectively; T_C^j = [p_C^j, q_C^j] represents the extrinsic parameters between the j-th camera and the IMU, p_C^j being the translation component and q_C^j the rotation component; T_O = [p_O, q_O] represents the extrinsic parameters between the wheel-speed odometer and the IMU, p_O being the translation component and q_O the rotation component; t_C represents the time offset between the camera and the IMU; and t_O represents the time offset between the wheel-speed odometer and the IMU. The initial values of the extrinsic parameters T_C^j and T_O are taken from the optimized result of the bundle optimization, and the initial values of the global poses of the two-dimensional codes are likewise taken from the optimized result of the bundle optimization.
2.2 State recurrence formula
When motion measurement information is received, the EKF performs state recursion. Depending on the sensor configuration and the operating scenario, three state-recursion modes can be adopted:
A. Three-dimensional recursion of the IMU
The recursion process is as follows: when a frame of IMU measurement information is received, the state and covariance are propagated from time k-1 to time k according to the IMU dynamic model, using the collected accelerometer linear acceleration and gyroscope angular velocity, as follows:
where X̂_k denotes the predicted state value at time k; X_{k-1} denotes the state value at time k-1; a_m denotes the accelerometer measurement of the IMU; ω_m denotes the gyroscope measurement of the IMU; P_k denotes the state covariance at time k; P_{k-1} denotes the state covariance at time k-1; Φ is the first-order Jacobian matrix with respect to the system state; and Q is the covariance of the IMU measurement noise.
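A minimal sketch of such an IMU prediction step, assuming a standard strapdown model with a simplified first-order Jacobian (the patent's exact Φ matrix and state ordering are not reproduced; all names here are illustrative):

```python
import numpy as np

def imu_predict(p, v, q_R, ba, bg, acc_m, gyr_m, dt, P, Q,
                g=np.array([0., 0., -9.81])):
    """One EKF prediction step from k-1 to k using IMU measurements.
    q_R is the 3x3 body-to-world rotation; P is the covariance of the
    error state [dp, dv, dtheta]; Q is the IMU noise covariance.
    Gravity g assumes a z-up world frame."""
    acc = acc_m - ba                          # bias-corrected accelerometer
    gyr = gyr_m - bg                          # bias-corrected gyroscope
    # nominal-state propagation (first order)
    p_new = p + v * dt + 0.5 * (q_R @ acc + g) * dt**2
    v_new = v + (q_R @ acc + g) * dt
    # attitude update via first-order exponential map of the rotation vector
    th = gyr * dt
    K = np.array([[0., -th[2], th[1]],
                  [th[2], 0., -th[0]],
                  [-th[1], th[0], 0.]])
    q_R_new = q_R @ (np.eye(3) + K)
    # covariance propagation P_k = Phi P_{k-1} Phi^T + Q with a simple
    # illustrative Jacobian (position-velocity coupling only)
    Phi = np.eye(9)
    Phi[0:3, 3:6] = np.eye(3) * dt
    P_new = Phi @ P @ Phi.T + Q
    return p_new, v_new, q_R_new, P_new
```

For a stationary robot whose accelerometer reads exactly +g, the predicted position and velocity stay at zero while the covariance grows by Q, as expected of a prediction step.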
B. Three-dimensional recursion of the wheel-speed odometer
The recursion process is as follows: when a frame of wheel-speed odometer measurement information is received, the state and covariance are propagated from time k-1 to time k according to the dynamic model of the wheel-speed odometer, using the collected linear velocity and angular velocity, as follows:
where X̂_k denotes the predicted state value at time k; X_{k-1} denotes the state value at time k-1; v_m denotes the linear-velocity measurement of the wheel-speed odometer; ω_m denotes the angular-velocity measurement of the wheel-speed odometer; P_k denotes the state covariance at time k; P_{k-1} denotes the state covariance at time k-1; Φ is the first-order Jacobian matrix with respect to the system state; and Q is the covariance of the wheel-speed odometer measurement noise.
C. Hybrid recursion of the wheel-speed odometer and the IMU
The recursion process is as follows: when a frame of wheel-speed odometer and IMU measurement information is received, the state and covariance are propagated from time k-1 to time k according to the dynamic models of the wheel-speed odometer and the gyroscope, using the collected linear velocity of the wheel-speed odometer and the angular velocity of the IMU gyroscope, as follows:
where X̂_k denotes the predicted state value at time k; X_{k-1} denotes the state value at time k-1; v_m denotes the linear-velocity measurement of the wheel-speed odometer; ω_m denotes the gyroscope measurement of the IMU; P_k denotes the state covariance at time k; P_{k-1} denotes the state covariance at time k-1; Φ is the first-order Jacobian matrix with respect to the system state; and Q is the covariance of the wheel-speed odometer and gyroscope measurement noise.
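The hybrid mode can be sketched as a planar dead-reckoning step that takes linear velocity from the odometer and the bias-corrected yaw rate from the gyroscope; the planar simplification and all variable names are assumptions for illustration:

```python
import numpy as np

def hybrid_predict(x, y, yaw, v_odo, w_gyr, bg_z, dt, P, Q):
    """Planar prediction from k-1 to k fusing the odometer's linear velocity
    with the IMU gyroscope's (bias-corrected) yaw rate, as in mode C."""
    w = w_gyr - bg_z                       # bias-corrected yaw rate
    x_new = x + v_odo * np.cos(yaw) * dt
    y_new = y + v_odo * np.sin(yaw) * dt
    yaw_new = yaw + w * dt
    # first-order Jacobian of [x, y, yaw] with respect to itself
    F = np.array([[1., 0., -v_odo * np.sin(yaw) * dt],
                  [0., 1.,  v_odo * np.cos(yaw) * dt],
                  [0., 0.,  1.]])
    P_new = F @ P @ F.T + Q               # covariance propagation
    return x_new, y_new, yaw_new, P_new
```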
Three, state update
The robot can update the EKF state using the projection constraints of the preset labels, the prior relative-pose constraints of the preset labels, the prior global-pose constraints of the preset labels, the measurement constraints of the wheel-speed odometer, and the time-offset constraints, to obtain an updated EKF state. The updated extrinsic parameters and inter-sensor time offsets are transmitted to the bundle-optimization stage in real time: the updated extrinsic parameters serve as initial values for the extrinsic-parameter optimization in bundle optimization, and the updated time offsets are used for time alignment and interpolation of the measurement constraints in bundle optimization. The specific content of the state-update stage is as follows:
3.1 general nonlinear measurement function
To be compatible with measurement inputs from multi-modal sensors, the present application employs a generic nonlinear measurement function, namely:
where z_k denotes the measurement data of the k-th frame; h_k denotes the nonlinear measurement function of the k-th frame; X_k denotes the system state at the k-th frame; and n_k denotes the measurement noise at the k-th frame.
The method linearizes the nonlinear measurement function and then performs the update within the EKF filtering framework. Before each update, it is checked whether the measurement passes a 95% chi-squared test, thereby preventing erroneous measurement inputs from corrupting the system state estimate.
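A sketch of such a chi-squared gate on the innovation's Mahalanobis distance, using standard 95% quantile values (the threshold table values are standard statistics, not taken from the patent):

```python
import numpy as np

def chi2_gate(residual, S, threshold):
    """Mahalanobis gating: accept the measurement only if the normalized
    innovation r^T S^{-1} r is below the chi-squared threshold for the
    residual's dimension (95% quantile)."""
    d2 = float(residual @ np.linalg.solve(S, residual))
    return d2 < threshold

# 95% chi-squared quantiles by degrees of freedom (standard table values)
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815, 6: 12.592}
```

A measurement whose innovation is far outside the predicted covariance fails the gate and is skipped, which is how erroneous inputs are kept out of the state estimate.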
3.2 Reprojection-constraint measurement update of the two-dimensional codes
All the two-dimensional codes observed in each camera view of the latest frame are traversed to obtain the reprojection pose-constraint state equation of each two-dimensional code relative to each view of the latest frame, namely:
where the left-hand quantity represents the relative pose of the i-th two-dimensional code with respect to the latest-frame image coordinate system, calculated from the current EKF state.
After the above state equation is linearized, the following error state equation is obtained:
where H represents the first-order Jacobian matrix, the error-state term represents the error of the state, and the residual represents the relative-pose error of the i-th two-dimensional code with respect to the latest-frame image coordinate system.
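After linearization, each of the constraint updates in this section reduces to the standard EKF update equations; the following is a minimal generic sketch, not the patent's exact formulation:

```python
import numpy as np

def ekf_update(x, P, r, H, R):
    """Generic linearized EKF update: r is the measurement residual
    (e.g. the tag reprojection error), H its first-order Jacobian with
    respect to the error state, R the measurement noise covariance."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ r                    # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x_new, P_new
```

For a scalar state with unit prior and unit measurement noise, a residual of 2 moves the estimate halfway (gain 0.5) and halves the covariance, matching the textbook behavior.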
3.3 Prior relative constraint update of the position and pose of the two-dimensional codes
In one embodiment, the robot can incorporate prior two-dimensional-code constraint information into the optimization; the prior constraint information is the relative pose of any two preset labels, or the pose of any single preset label in the world coordinate system. Namely:
The robot observes all the two-dimensional codes in each camera view of the latest frame. If the i-th and j-th two-dimensional codes have a known prior relative constraint on position and attitude (for example, the prior information states that the mean of their relative attitude is 5 degrees with a variance of 3 degrees, and the mean of their relative position is 5 meters with a variance of 0.3 meters), a prior relative constraint on the position and attitude of the two-dimensional codes is constructed in the measurement-update model, with the following state equation:
where the left-hand quantity represents the relative pose of the i-th two-dimensional code with respect to the j-th two-dimensional code, calculated from the current EKF state.
After the above state equation is linearized, the following error state equation is obtained:
where H represents the first-order Jacobian matrix, the error-state term represents the error of the state, and the residual represents the relative-pose error of the i-th two-dimensional code relative to the j-th two-dimensional code.
3.4 Prior global constraint update of the position and pose of the two-dimensional codes
All the two-dimensional codes are observed in each camera view of the latest frame. If the i-th two-dimensional code has a known prior global constraint on position and attitude (for example, its attitude in the world coordinate system has a mean of 5 degrees and a variance of 3 degrees, and/or its global position has a mean of 100 meters and a variance of 1 meter), a prior global constraint on the position and attitude of the i-th two-dimensional code is constructed in the measurement-update model; the state equation and error-state equation of this prior global constraint are similar to those in 3.3.
3.5 Measurement-constraint update of the wheel-speed odometer
When the wheel-speed odometer measurements are not used for recursion in the prediction model, the wheel-speed odometer pre-integration result can be used as a measurement constraint during the state update.
3.6 Online calibration of the time offset
Because the camera's actual acquisition clock differs from the reference clock, the state at the image timestamp can be approximately modeled as the state-recursion estimate of the current-frame IMU or wheel-speed odometer superposed with an error term, where the time spanned by the error term approximately equals the time offset. Taking the time offset between the IMU and the camera as an example, the state equation is as follows:
where the first term is the recursion estimate of the current-frame IMU state, and the second term is the estimation error caused by the time offset.
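A minimal illustration of this time-offset model, under the assumption of constant-velocity extrapolation over the estimated offset `td` (the function and its name are illustrative, not the patent's formulation):

```python
import numpy as np

def state_at_image_time(p_imu, v_imu, td):
    """Approximate the state at the camera timestamp as the IMU-propagated
    state plus an error term spanning the estimated time offset td,
    modeled here as constant-velocity extrapolation."""
    return p_imu + v_imu * td
```

With a 50 ms offset and 1 m/s forward velocity, the image-time position differs from the IMU-time position by 5 cm, which is the error term the online calibration estimates away.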
Four, two-dimensional code map output
After the EKF state is updated, the optimized global pose of each two-dimensional code and the covariance of that global pose can be obtained. The global pose of each two-dimensional code can then serve as a known quantity during robot localization, and its covariance can serve as the reliability, or uncertainty, of that two-dimensional code's pose during localization.
For acquiring observations of the preset labels, tightly coupled sliding-window bundle optimization is applied to the multi-frame, multi-view observations; after robust, high-precision observations of the preset labels are obtained, loosely coupled EKF optimization is used to build the map of the preset labels. The inter-sensor extrinsic parameters and time offsets updated by the EKF can be fed back to the bundle optimization in real time. The method thus adopts a combined tight- and loose-coupling fusion scheme that accounts for both the observation precision and stability of the preset labels and the computational cost.
The application supports arbitrary placement of the preset labels, which greatly simplifies the field-deployment requirements of the label-pasting process and facilitates large-scale adoption of vision-based mobile robots. To avoid the pose-estimation instability caused by image blur, pose ambiguity, noise, and the like when observing a single preset label, the method uses multi-frame, multi-view observations of the preset-label information as reprojection-error constraints, adds nonlinear bundle optimization, and feeds the optimized preset-label poses into the EKF as observations. To exploit the prior information of the label-pasting process as much as possible, the relative or global orientation and distance between preset labels can be used as optional prior constraints to improve the mapping accuracy of the preset labels.
In addition, in multi-sensor fusion mapping, the extrinsic-parameter accuracy of each sensor strongly influences the mapping result, so online optimization of each sensor's extrinsic parameters is added to the EKF optimization process. To address clock asynchrony among the sensors, online estimation of the inter-sensor time offsets is also added. Furthermore, in the multi-frame, multi-view bundle optimization of the preset labels, a marginalization measure based on the preset-label observation information is provided.
Based on the same concept, an embodiment of the present application provides a robot mapping apparatus, as shown in fig. 5, including: a bundle optimization unit 51, a state prediction unit 52, and a measurement update unit 53.
The bundle optimization unit 51 is configured to perform bundle optimization on the continuous multi-frame image data, inertial measurement unit (IMU) data, and wheel-speed odometer data acquired within a preset time period, and to acquire, for the current frame image, an optimized pose of a preset label in the camera coordinate system of the robot and a variance value of the optimized pose. The state prediction unit 52 is configured to perform extended Kalman filter (EKF) state recursion on the IMU data and the wheel-speed odometer data corresponding to the current frame image, and to determine the pose of the robot and a variance value of that pose. The measurement update unit 53 is configured to perform an EKF measurement state update on the optimized pose of the preset label, the variance value of the optimized pose, the pose of the robot, and the variance value of the robot pose, and to acquire the target pose, in the world coordinate system, of the preset labels in the continuous multi-frame images acquired within the preset time period, together with the variance value of the target pose.
In the embodiment of the application, when the robot builds the map, it uses continuous multi-frame images while also referring to the IMU data and the wheel-speed odometer data, which ensures mapping accuracy. In addition, bundle optimization and EKF state prediction are fused during mapping, so that the localization accuracy of the robot's mapping is higher, and the target pose of the preset labels in the world coordinate system, and its variance value, can be computed more quickly.
In a possible implementation, the measurement update unit 53 may further perform online optimization of the inter-sensor extrinsic parameters and time offsets during the EKF measurement state update, and use the optimized extrinsic parameters and time offsets in the bundle optimization; the extrinsic parameters include the relative pose of the IMU and the camera, and the relative pose of the wheel-speed odometer and the camera.
In the embodiment of the application, adding the extrinsic parameters and time offsets to the optimization makes the acquired optimized pose of the preset label in the robot's camera coordinate system more reliable, so that the localization accuracy of the robot's mapping is higher.
In a possible implementation, the bundle optimization unit 51 is specifically configured to: acquire N continuous frames of images collected within a preset time period through a camera of the robot, screen the N frames, and determine M frames of images that include preset labels, where N and M are integers and 2 ≤ M ≤ N; and perform bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of those corner points in the coordinate system of the robot's camera, the IMU data, and the wheel-speed odometer data, to determine the optimized pose of the preset labels in the camera coordinate system, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix, and the covariance matrix.
In the embodiment of the application, the robot performs bundle optimization on the corner-point position information of the preset labels in the M frames of images, the projection information of those corner points in the camera coordinate system, the IMU data, and the wheel-speed odometer data, so that the optimized pose, its variance value, the projection error of the preset labels, the Jacobian matrix, and the covariance matrix can all be determined. This improves the mapping accuracy of the robot.
In one possible implementation, the continuous multi-frame images in the bundle-optimization sliding window are determined as follows:
if the number of image frames in the sliding window is less than or equal to the sliding-window threshold, bundle optimization is performed on all the image frames; if the number of image frames in the sliding window is greater than the sliding-window threshold, bundle optimization is performed on all the image frames, preset frame images are screened out, and the constraints of the rejected images on the sliding window are retained through the Schur complement method.
In the embodiment of the application, the image frames are screened in the sliding window, so that the positioning accuracy of the robot mapping is higher.
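The Schur complement step referred to above can be sketched on a Gauss-Newton system H dx = b, where a dropped block's information is folded into the kept block; the index sets and names are illustrative:

```python
import numpy as np

def schur_marginalize(H, b, keep, drop):
    """Marginalize the `drop` block of a Gauss-Newton system H dx = b,
    retaining its constraint on the `keep` block via the Schur complement:
    H' = H_kk - H_kd H_dd^{-1} H_dk,  b' = b_k - H_kd H_dd^{-1} b_d."""
    Hkk = H[np.ix_(keep, keep)]
    Hkd = H[np.ix_(keep, drop)]
    Hdk = H[np.ix_(drop, keep)]
    Hdd_inv = np.linalg.inv(H[np.ix_(drop, drop)])
    H_marg = Hkk - Hkd @ Hdd_inv @ Hdk   # reduced information matrix
    b_marg = b[keep] - Hkd @ Hdd_inv @ b[drop]
    return H_marg, b_marg
```

The rejected frame's variables disappear from the system, but the reduced H' and b' still encode what that frame's observations said about the remaining window states.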
In one possible implementation, the screening of preset frame images for bundle optimization is determined by one or more of the following:
acquiring an identification list of all preset labels in the current frame image;
rejecting, among the multiple frames, any image that does not include a preset label; if the identifiers of the preset labels detected in the N-th frame image differ from those of the preset labels in the M-th frame image, rejecting the N-th frame image; if the identifiers of the preset labels in any frame image within the preset time period are the same as those in the M-th frame image, scoring the images other than the M-th frame by information degree and rejecting the frame with the lowest score; where M and N are integers, and M is the index of the last frame acquired within the preset time period.
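The screening rules above can be sketched as follows; the frame representation (a dict holding a set of label identifiers and an information-degree score) is an assumption introduced for illustration:

```python
def select_frame_to_drop(frames, latest_frame):
    """Pick a frame to reject from the sliding window, applying the rules
    in order: (1) drop a frame observing no preset label; (2) drop a frame
    sharing no label identifier with the latest frame; (3) otherwise drop
    the lowest information-degree score (never the latest frame)."""
    latest_ids = latest_frame["tag_ids"]
    for f in frames:                          # rule 1: no preset label observed
        if not f["tag_ids"]:
            return f
    for f in frames:                          # rule 2: no shared identifiers
        if f["tag_ids"].isdisjoint(latest_ids):
            return f
    candidates = [f for f in frames if f is not latest_frame]
    return min(candidates, key=lambda f: f["score"])  # rule 3: lowest score
```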
In the embodiment of the application, bundle optimization is performed on the screened preset frame images, so that the optimized pose of the preset label is more reliable.
In one possible implementation, the projection error and the Jacobian matrix of the preset label may be determined as follows:
the two-dimensional coordinates of the corner points of the preset label in the M frames of images are processed by a perspective-n-point (PnP) algorithm to determine the initial pose of the preset label in the robot's camera coordinate system; the three-dimensional coordinates of each corner point of the preset label in the world coordinate system are determined from the initial pose of the preset label in the camera coordinate system and the initial pose of the robot; the three-dimensional corner coordinates are projected into the coordinate system of the camera to obtain the projected two-dimensional corner coordinates; the difference between the projected two-dimensional corner coordinates and the two-dimensional corner coordinates before projection is computed to determine the projection error of the preset label; and the Jacobian matrix of the projection error with respect to the optimized state quantities in the sliding window is determined.
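A minimal sketch of the reprojection-error computation for the label corners, assuming a pinhole camera model with known intrinsics fx, fy, cx, cy (the PnP initialization itself is omitted):

```python
import numpy as np

def tag_reprojection_error(pts_world, obs_px, R_cw, t_cw, fx, fy, cx, cy):
    """Reprojection error of a label's corners: project the corners'
    world coordinates into the camera frame and subtract the observed
    pixel coordinates."""
    errs = []
    for Pw, uv in zip(pts_world, obs_px):
        Pc = R_cw @ Pw + t_cw              # world -> camera transform
        u = fx * Pc[0] / Pc[2] + cx        # pinhole projection
        v = fy * Pc[1] / Pc[2] + cy
        errs.append([u - uv[0], v - uv[1]])
    return np.asarray(errs)
```

This residual, stacked over the four corners of every label in every view, is exactly what the sliding-window bundle optimization minimizes; its Jacobian with respect to the window states gives the matrix discussed above.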
In the embodiment of the application, the initial pose, in the camera coordinate system, of the i-th preset label in the M-th frame image is determined through a perspective-n-point (PnP) algorithm from the three-dimensional coordinates of the corner points of the i-th preset label in the preset-label coordinate system and the two-dimensional pixel coordinates of that label in the image frame M in which it is first observed; the global three-dimensional coordinates of the label's corner points are then determined from the initial pose of the i-th preset label in the M-th frame image and the global pose of the camera at the M-th frame, after which the projection error of the preset label is determined, ensuring the localization accuracy of the robot's mapping.
In one possible implementation, the variance value of the target pose is related to the corner-observation uncertainty of each preset label in the sliding window, where the observation uncertainty of a corner of any preset label in any image frame is a function of the following parameters:
the two-dimensional coordinates of the corner points of the preset label; the linear velocity, angular velocity, and acceleration of the image-acquisition device for the image frame containing the preset label; and the pixel area of the preset label in that image frame.
In the embodiment of the application, the variance value of the target pose of the preset label has a functional relationship with various parameters, so that the variance value of the target pose obtained by the application is more consistent with an actual scene.
In a possible implementation, the bundle optimization unit 51 is specifically configured to: add the prior information of the preset labels as constraint terms to the bundle optimization and the measurement update; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any single preset label in the world coordinate system.
In the embodiment of the application, the robot can also adjust the pose of the preset label in the world coordinate system based on the prior information, so that the robot is higher in mapping accuracy and more reliable in mapping data.
In a possible implementation manner, the robot may update the state of the EKF measurement for the projection constraint of the preset tag, the prior relative pose constraint of the preset tag, the prior global pose constraint of the preset tag, the measurement constraint of the wheel speed odometer, and the time offset constraint, to obtain an updated EKF state.
In the embodiment of the application, the accuracy of robot mapping can be improved by updating the projection constraint of the preset tag, the pose constraint of the preset tag, the time offset and the like.
In one possible implementation, the EKF status includes: and the target pose of the preset label in the world coordinate system, the external parameters among the sensors, the time offset among the sensors, the pose and the speed of the current frame robot and the zero offset of the IMU are preset in the continuous multiframe images collected in the preset time period.
After the robot mapping method and apparatus in the exemplary embodiment of the present application are introduced, a computing device in another exemplary embodiment of the present application is introduced next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, a computing device according to the present application may include at least one processor and at least one memory. The memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the robot mapping method according to the various exemplary embodiments of the present application described above in this specification. For example, the processor may perform steps 201-203 as shown in fig. 2.
The computing device 130 according to this embodiment of the present application is described below with reference to fig. 6. The computing device 130 shown in fig. 6 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present application. As shown in fig. 6, the computing device 130 is embodied in the form of a general purpose smart terminal. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures. The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323. Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
In some possible embodiments, aspects of the robot mapping method provided herein may also be implemented in the form of a program product comprising a computer program which, when the program product is run on a computer device, causes the computer device to perform the steps of the robot mapping method according to the various exemplary embodiments of the present application described above in this specification. For example, the processor may perform steps 201-203 as shown in fig. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a nonexhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a readonly memory (ROM), an erasable programmable readonly memory (EPROM or flash memory), an optical fiber, a portable compact disc readonly memory (CDROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for robot mapping according to the embodiments of the present application may employ a portable compact disc read only memory (CDROM) and include a computer program, and may be run on a smart terminal. The program product of the present application is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that although several units or subunits of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computerreadable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computerreadable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (13)
1. A robot mapping method, comprising:
performing bundle optimization on continuous multi-frame image data, inertial measurement unit (IMU) data, and wheel-speed odometer data collected within a preset time period to obtain, in a current frame image, an optimized pose of a preset label in a camera coordinate system of the robot and a variance value of the optimized pose;
performing extended Kalman filter (EKF) state recursion on the IMU data and the wheel-speed odometer data corresponding to the current frame image, and determining a pose of the robot and a variance value of the pose of the robot;
and performing EKF measurement state updating on the optimized pose of the preset label, the variance value of the optimized pose of the preset label, the pose of the robot and the variance value of the pose of the robot, and acquiring the target pose and the variance value of the target pose of the preset label in a world coordinate system in continuous multiframe images collected in a preset time period.
2. The method of claim 1, further comprising:
performing online optimization on external parameters and time offsets among the sensors in the EKF measurement state updating process, and using the optimized external parameters and time offsets for bundle optimization; the external parameter comprises: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
3. The method of claim 1, wherein performing bundle optimization on the continuous multi-frame images, inertial measurement unit (IMU) data, and wheel-speed odometer data acquired within the preset time period to obtain the optimized pose, and the variance value of the optimized pose, of each preset label in the camera coordinate system of the robot in the current frame image comprises:
acquiring continuous N frames of images acquired within a preset time period through a camera of the robot, screening the N frames of images, and determining M frames of images including preset labels; n, M is an integer; m is greater than or equal to 2 and less than or equal to N;
and performing bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in a coordinate system where a camera of the robot is located, the IMU data and the wheel speed odometry data to determine the optimized pose of the preset labels in the coordinate system where the camera of the robot is located, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix.
4. The method of claim 3, wherein the consecutive frames of images in the bundle-optimization sliding window are determined as follows:
if the number of image frames in the sliding window is less than or equal to the sliding-window threshold, performing bundle optimization on all the image frames;
and if the number of image frames in the sliding window is greater than the sliding-window threshold, performing bundle optimization on all the image frames, screening out preset frame images, and retaining the constraints of the rejected images on the sliding window through the Schur complement method.
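The Schur complement step in claim 4 marginalizes the removed frames out of the optimization while keeping the information they contributed. A minimal sketch on an information-form system H x = b, assuming the m states to be dropped are ordered first (the ordering and variable names are illustrative, not from the patent):

```python
import numpy as np

def marginalize(H, b, m):
    """Remove the first m state entries from the information system
    H x = b via the Schur complement; the remaining states keep the
    constraint contributed by the removed ones."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    Hmm_inv = np.linalg.inv(Hmm)
    H_marg = Hrr - Hrm @ Hmm_inv @ Hmr   # Schur complement of Hmm
    b_marg = b[m:] - Hrm @ Hmm_inv @ b[:m]
    return H_marg, b_marg
```

The marginalized system is smaller but yields the same solution for the retained states as solving the full system would, which is why the rejected frames can be dropped from the window without discarding their measurements.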
5. The method of claim 4, wherein the preset frame images to be screened out are determined by one or more of the following methods:
acquiring an identification list of all preset labels in the current frame image;
rejecting, among the multiple frames of images, any image that does not include a preset label;
if the identifier of the preset label detected in the Nth frame image differs from the identifier of the preset label in the Mth frame image, rejecting the Nth frame image;
if the identifier of the preset label in any frame of image within the preset time period is the same as the identifier of the preset label in the Mth frame image, scoring the images other than the Mth frame image by information content, and rejecting the frame image with the lowest score;
wherein M and N are integers, and M is the index of the last frame of image acquired within the preset time period.
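The culling rules of claim 5 — drop frames that saw no label, drop frames whose label IDs do not match the latest frame's, and otherwise evict the least informative frame — might be sketched like this. The dict layout and the `score` field standing in for the unspecified information-content metric are assumptions:

```python
def cull_frames(frames, latest_ids, max_frames):
    """frames: list of dicts with 'tag_ids' (set of detected label IDs)
    and 'score' (information-content score). latest_ids: label IDs seen
    in the last (Mth) frame. Returns the frames kept in the window."""
    # 1) reject frames that detected no preset label at all
    kept = [f for f in frames if f['tag_ids']]
    # 2) reject frames sharing no label ID with the latest frame
    kept = [f for f in kept if f['tag_ids'] & latest_ids]
    # 3) while over the sliding-window budget, evict the lowest score
    while len(kept) > max_frames:
        kept.remove(min(kept, key=lambda f: f['score']))
    return kept
```

Keeping only frames that co-observe the current labels ensures every retained frame actually constrains the poses being optimized.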
6. The method of claim 3, wherein the projection error and the Jacobian matrix of the preset labels are determined as follows:
computing, by the Perspective-n-Point (PnP) algorithm, from the two-dimensional coordinate information of the corner points of the preset label in the M frames of images, the initial pose of the preset label in the camera coordinate system of the robot;
determining the three-dimensional coordinate information of each corner point of the preset label in the world coordinate system according to the initial pose of the preset label in the camera coordinate system of the robot and the initial pose of the robot;
projecting the three-dimensional coordinate information of the corner points of the preset label into the coordinate system of the camera, to obtain the projected two-dimensional coordinate information of the corner points of the preset label;
calculating the difference between the projected two-dimensional coordinate information of the corner points of the preset label and the two-dimensional coordinate information of the corner points before projection, to determine the projection error of the preset label;
and determining the Jacobian matrix of the projection error of the preset label with respect to the optimized state quantities in the sliding window.
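The projection-error steps of claim 6 reduce to projecting the label's 3-D corners through the camera model and differencing against the detected 2-D corners. A pinhole sketch follows; the intrinsics `K` and pose `T` are illustrative, and the Jacobian shown is numeric and over the camera translation only, rather than the full analytic Jacobian over all sliding-window state quantities:

```python
import numpy as np

def project(K, T, X):
    """Project 3-D points X (N,3) to pixels given intrinsics K (3,3)
    and a world-to-camera pose T (4,4)."""
    Xc = (T[:3, :3] @ X.T + T[:3, 3:4]).T      # transform into camera frame
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]              # perspective division

def reprojection_error(K, T, X, uv_obs):
    """Stacked 2-D corner residuals: projected minus observed."""
    return (project(K, T, X) - uv_obs).ravel()

def numeric_jacobian_t(K, T, X, uv_obs, eps=1e-6):
    """Finite-difference Jacobian of the residual w.r.t. camera translation."""
    J = np.zeros((X.shape[0] * 2, 3))
    e0 = reprojection_error(K, T, X, uv_obs)
    for i in range(3):
        Tp = T.copy()
        Tp[i, 3] += eps
        J[:, i] = (reprojection_error(K, Tp, X, uv_obs) - e0) / eps
    return J
```

With four corners per label this yields an 8-dimensional residual per frame, which is what the bundle optimization minimizes over the window.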
7. The method according to any one of claims 1-6, wherein the variance value of the target pose is related to the observation uncertainty of the corner points of each preset label in the sliding window; and the observation uncertainty of a corner point of any preset label in any image frame is a function of the following parameters:
the two-dimensional coordinate information of the corner points of the preset label, the linear velocity, angular velocity and acceleration of the image acquisition device for the image frame in which the preset label is located, and the pixel area of the preset label in that image frame.
8. The method according to any one of claims 1-6, further comprising:
adding the prior information of a preset label as a constraint term into the bundle optimization and the measurement update; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in the world coordinate system.
9. The method of claim 8, further comprising:
and performing EKF measurement state updating on the projection constraint of the preset label, the prior relative pose constraint of the preset label, the prior global pose constraint of the preset label, the measurement constraint of the wheel speed odometer, and the time offset constraint, to obtain an updated EKF state.
10. The method of claim 9, wherein the EKF state comprises: the target pose of the preset label in the world coordinate system over the consecutive multi-frame images collected within the preset time period, the extrinsic parameters among the sensors, the time offsets among the sensors, the pose and velocity of the robot at the current frame, and the zero bias of the IMU.
11. A robot mapping apparatus, comprising:
a bundle optimization unit, configured to perform bundle optimization on consecutive multi-frame image data, inertial measurement unit (IMU) data and wheel speed odometer data collected within a preset time period, to obtain an optimized pose of a preset label in a camera coordinate system of the robot in the current frame image and a variance value of the optimized pose;
a state prediction unit, configured to perform extended Kalman filter (EKF) state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determine the pose of the robot and a variance value of the pose of the robot;
and a measurement update unit, configured to perform an EKF measurement state update on the optimized pose of the preset label, the variance value of the optimized pose of the preset label, the pose of the robot and the variance value of the pose of the robot, to obtain the target pose of the preset label in a world coordinate system, and the variance value of the target pose, over the consecutive multi-frame images collected within the preset time period.
12. A computing device, comprising: a memory and a processor;
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 10 in accordance with the obtained program.
13. A computer storage medium storing computer-executable instructions for performing the method of any one of claims 1-10.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN202110348706.7A CN112734852B (en)  2021-03-31  2021-03-31  Robot mapping method and device and computing equipment 
Publications (2)
Publication Number  Publication Date 

CN112734852A CN112734852A (en)  20210430 
CN112734852B true CN112734852B (en)  20210629 
Family
ID=75596233
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN202110348706.7A Active CN112734852B (en)  2021-03-31  2021-03-31 
Country Status (1)
Country  Link 

CN (1)  CN112734852B (en) 
Families Citing this family (12)
Publication number  Priority date  Publication date  Assignee  Title 

CN111401445B (en) *  2020-03-16  2023-03-10  腾讯科技（深圳）有限公司  Training method of image recognition model, and image recognition method and device 
CN113052855B (en) *  2021-02-26  2021-11-02  苏州迈思捷智能科技有限公司  Semantic SLAM method based on visual-IMU-wheel speed meter fusion 
CN113390408A (en) *  2021-06-30  2021-09-14  深圳市优必选科技股份有限公司  Robot positioning method and device, robot and storage medium 
CN113379915B (en) *  2021-07-05  2022-12-23  广东工业大学  Driving scene construction method based on point cloud fusion 
CN113658260A (en) *  2021-07-12  2021-11-16  南方科技大学  Robot pose calculation method and system, robot and storage medium 
US20230072471A1 (en) *  2021-08-04  2023-03-09  Tata Consultancy Services Limited  Method and system of dynamic localization of a telepresence robot based on live markers 
CN113838129B (en) *  2021-08-12  2024-03-15  高德软件有限公司  Method, device and system for obtaining pose information 
CN113792564B (en) *  2021-09-29  2023-11-10  北京航空航天大学  Indoor positioning method based on invisible projection two-dimensional code 
CN114018284B (en) *  2021-10-13  2024-01-23  上海师范大学  Wheel speed odometer correction method based on vision 
CN114523471B (en) *  2022-01-07  2023-04-25  中国人民解放军海军军医大学第一附属医院  Error detection method based on association identification and robot system 
CN117629204A (en) *  2022-08-16  2024-03-01  北京三快在线科技有限公司  Positioning method and device 
CN115388902B (en) *  2022-10-28  2023-03-24  苏州工业园区测绘地理信息有限公司  Indoor positioning method and system, AR indoor positioning navigation method and system 
Family Cites Families (5)
Publication number  Priority date  Publication date  Assignee  Title 

US7925049B2 (en) *  2006-08-15  2011-04-12  SRI International  Stereo-based visual odometry method and system 
CN107145578B (en) *  2017-05-08  2020-04-10  深圳地平线机器人科技有限公司  Map construction method, device, equipment and system 
CN108180913A (en) *  2018-01-03  2018-06-19  深圳勇艺达机器人有限公司  A QR-code positioning system based on 3D cameras 
CN110163909A (en) *  2018-02-12  2019-08-23  北京三星通信技术研究有限公司  Method, apparatus and storage medium for obtaining device pose 
CN110702107A (en) *  2019-10-22  2020-01-17  北京维盛泰科科技有限公司  Monocular vision-inertial combined positioning and navigation method 

2021
 2021-03-31 CN CN202110348706.7A patent/CN112734852B/en active Active
Also Published As
Publication number  Publication date 

CN112734852A (en)  2021-04-30 
Similar Documents
Publication  Publication Date  Title 

CN112734852B (en)  Robot mapping method and device and computing equipment  
Cvišić et al.  SOFT‐SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles  
CN109084732B (en)  Positioning and navigation method, device and processing equipment  
CN112634451B (en)  Outdoor large-scene three-dimensional mapping method integrating multiple sensors  
WO2018142900A1 (en)  Information processing device, data management device, data management system, method, and program  
CN111210477B (en)  Method and system for positioning moving object  
CN110587597B (en)  SLAM closed loop detection method and detection system based on laser radar  
CN110726406A (en)  Improved nonlinear optimization monocular inertial navigation SLAM method  
CN112197770A (en)  Robot positioning method and positioning device thereof  
Ji et al.  RGB-D SLAM using vanishing point and door plate information in corridor environment  
CN114758504B (en)  Online vehicle overspeed early warning method and system based on filtering correction  
CN111623773B (en)  Target positioning method and device based on fisheye vision and inertial measurement  
CN111998862A (en)  Dense binocular SLAM method based on BNN  
CN112541423A (en)  Synchronous positioning and map construction method and system  
CN117218350A (en)  SLAM implementation method and system based on solid-state radar  
CN115077519A (en)  Positioning and mapping method and device based on template matching and laser inertial navigation loose coupling  
CN112731503A (en)  Pose estimation method and system based on front-end tight coupling  
CN116958452A (en)  Threedimensional reconstruction method and system  
Andersson et al.  Simultaneous localization and mapping for vehicles using ORB-SLAM2  
Ye et al.  Robust and efficient vehicles motion estimation with low-cost multi-camera and odometer-gyroscope  
CN111862146A (en)  Target object positioning method and device  
CN114862953A (en)  Mobile robot repositioning method and device based on visual features and 3D laser  
CN115147344A (en)  Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance  
CN114119885A (en)  Image feature point matching method, device and system and map construction method and system  
Xia et al.  YOLO-Based Semantic Segmentation for Dynamic Removal in Visual-Inertial SLAM 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
SE01  Entry into force of request for substantive examination  
GR01  Patent grant 