CN112734852A - Robot mapping method and device and computing equipment - Google Patents

Robot mapping method and device and computing equipment

Info

Publication number
CN112734852A
CN112734852A (application CN202110348706.7A)
Authority
CN
China
Prior art keywords
pose
preset
robot
image
frame
Prior art date
Legal status
Granted
Application number
CN202110348706.7A
Other languages
Chinese (zh)
Other versions
CN112734852B (en)
Inventor
韩松杉
刘星
王世汉
赵家阳
韩亮
张弥
Current Assignee
Zhejiang Sineva Intelligent Technology Co ltd
Original Assignee
Zhejiang Sineva Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Sineva Intelligent Technology Co ltd filed Critical Zhejiang Sineva Intelligent Technology Co ltd
Priority to CN202110348706.7A priority Critical patent/CN112734852B/en
Publication of CN112734852A publication Critical patent/CN112734852A/en
Application granted granted Critical
Publication of CN112734852B publication Critical patent/CN112734852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers

Abstract

The application discloses a robot mapping method and apparatus and a computing device. The robot can perform bundle optimization on continuous multi-frame image data, inertial measurement unit (IMU) data and wheel speed odometer data collected within a preset time period, and obtain the optimized pose, and the variance value of the optimized pose, of a preset label in the current frame image in the camera coordinate system of the robot; perform extended Kalman filter (EKF) state recursion on the IMU data and wheel speed odometer data corresponding to the current frame image, and determine the pose of the robot and the variance value of the pose of the robot; and then perform iterative updating of the EKF measurement on the optimized pose, the variance value of the optimized pose, the pose of the robot and the variance value of the pose of the robot, to obtain the target pose, and the variance value of the target pose, of the preset labels in the world coordinate system in the continuous multi-frame images collected within the preset time period. The robot mapping method performs mapping through bundle optimization and tightly coupled EKF fusion, and can improve mapping precision.

Description

Robot mapping method and device and computing equipment
Technical Field
The application relates to the technical field of computer vision, and in particular to a robot mapping method and device and a computing device.
Background
With the demand for greater manufacturing flexibility, shorter product cycles, rising labor costs and higher safety requirements, mobile robots are developing rapidly to meet ever-increasing needs. Today, automated guided vehicles (AGVs) are widely used in automated warehouses, factory material transport systems, logistics distribution systems, flexible assembly systems and other intelligent transportation sites. The core technology of AGV navigation is simultaneous localization and mapping (SLAM): SLAM provides an environment map and the real-time position of the guided vehicle, so that the vehicle can move to its destination efficiently and intelligently without human intervention.
Laser SLAM and visual SLAM are the main approaches at present. The camera used by visual SLAM costs far less than a laser sensor, and, compared with two-dimensional laser SLAM, visual SLAM can obtain the 6-degree-of-freedom pose of the target to be positioned. As a result, the relatively inexpensive visual SLAM has developed rapidly.
However, relying on visual SLAM alone leads to accumulated error, and the precision of visual SLAM based on visual feature points degrades greatly in some scenes. To improve the robustness of robot positioning, two-dimensional code labels can be added for auxiliary positioning when the robot is deployed. The related art requires the global pose and variance of the two-dimensional code labels to be obtained in advance, but the global pose and variance of a label cannot be obtained accurately by surveying and mapping or similar means. In addition, for unstable pose estimation of a single observed two-dimensional code label caused by image blur, pose ambiguity or noise, a conventional loosely coupled algorithm cannot calculate the pose of the two-dimensional code accurately. Moreover, in the multi-sensor fusion process, the accuracy of the external parameters and the time offsets of the sensors greatly affect the SLAM result.
Disclosure of Invention
The application provides a robot mapping method, a robot mapping device and computing equipment, which are used for improving reliability and accuracy of visual mapping.
In a first aspect, embodiments of the present application provide a robot mapping method, which may be performed by a robot, or by a server in communication with the robot (the robot collects data and transmits the data to the server); the present application is not limited in this respect. Taking the robot as the execution subject as an example, the robot may perform bundle optimization on continuous multi-frame image data, inertial measurement unit (IMU) data and wheel speed odometer data acquired within a preset time period, to obtain the optimized pose, and the variance value of the optimized pose, of a preset label in the current frame image in the camera coordinate system of the robot; perform extended Kalman filter (EKF) state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determine the pose of the robot and the variance value of the pose of the robot; and perform iterative updating of the EKF measurement on the optimized pose, the variance value of the optimized pose, the pose of the robot and the variance value of the pose of the robot, to acquire the target pose, and the variance value of the target pose, of the preset label in the world coordinate system in the continuous multi-frame images collected within the preset time period.
In the embodiment of the application, when building the map, the robot uses continuous multi-frame images while also referring to IMU data and wheel speed odometer data, which ensures the accuracy of the pose of the preset label during mapping. In addition, bundle optimization and EKF state prediction are fused during mapping, so that the mapping accuracy of the robot is higher, and the target pose of the preset label in the world coordinate system and its variance value can be calculated more quickly.
In one possible implementation, the robot performs online optimization of the external parameters and time offsets between the sensors during the iterative updating of the EKF measurement, and the optimized external parameters and time offsets are used for bundle optimization; the external parameters comprise: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
In the embodiment of the application, the external parameters and time offsets are added to the bundle optimization, so that the optimized pose, in the camera coordinate system of the robot, of the preset label in the acquired current frame image is more reliable, and the positioning accuracy of the robot mapping is higher.
In a possible implementation manner, the robot can acquire continuous N frames of images collected within a preset time period through a camera of the robot, and screen the N frames of images to determine M frames of images that include preset labels, where N and M are integers and M is greater than or equal to 2 and less than or equal to N. Bundle optimization is performed on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, to determine the optimized pose of the preset labels in the coordinate system of the robot's camera, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix.
In the embodiment of the application, the robot performs bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, so that the optimized pose of the preset labels in the coordinate system of the robot's camera, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix can be determined. In this way, the mapping accuracy of the robot can be improved.
In one possible implementation, the continuous multiframe images in the bundle-optimized sliding window are determined by:
if the number of image frames in the sliding window is less than or equal to the sliding window threshold, bundle optimization is performed on all the image frames; if the number of image frames in the sliding window is greater than the sliding window threshold, bundle optimization is performed on all the image frames and then a preset frame image is screened out, and the constraint of the image to be removed on the sliding window is retained through the Schur complement method.
In the embodiment of the application, the image frames are screened in the sliding window, so that the positioning accuracy of the robot mapping is higher.
In one possible implementation, the screening of the preset frame images for bundle optimization is determined by one or more of the following ways:
acquiring an identification list of all preset labels in a current frame image;
rejecting images that do not include a preset label among the multiple frames of images; if the identifier of a preset label detected in the N-th frame image is different from the identifiers of the preset labels in the M-th frame image, rejecting the N-th frame image; if the identifiers of the preset labels in every frame image within the preset time period are the same as the identifiers of the preset labels in the M-th frame image, scoring the images other than the M-th frame image according to their information degree and removing the frame image with the lowest score; where M and N are integers, and M is the index of the last frame image acquired within the preset time period.
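For illustration only, the screening rules above can be sketched in Python as follows; the container Frame, its fields tag_ids and info_score, and the threshold WINDOW_SIZE are hypothetical names introduced here, not part of the claimed method.

    from dataclasses import dataclass
    from typing import List, Optional, Set

    WINDOW_SIZE = 10  # hypothetical sliding-window threshold

    @dataclass
    class Frame:
        tag_ids: Set[int]    # identifiers of the preset labels detected in this frame
        info_score: float    # information-degree score of this frame

    def select_frame_to_remove(window: List[Frame]) -> Optional[int]:
        """Return the index of the frame to screen out, or None if the window is not full."""
        if len(window) <= WINDOW_SIZE:
            return None                          # window not full: keep all frames
        latest = window[-1]                      # the M-th (most recent) frame
        for idx, frame in enumerate(window[:-1]):
            if not frame.tag_ids:                # no preset label observed: reject this frame
                return idx
            if frame.tag_ids != latest.tag_ids:  # label IDs differ from the latest frame: reject
                return idx
        # every frame sees the same labels as the latest frame:
        # reject the frame (other than the latest one) with the lowest information score
        candidates = list(enumerate(window[:-1]))
        return min(candidates, key=lambda pair: pair[1].info_score)[0]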
In the embodiment of the application, bundle optimization is performed on the screened preset frame images, so that the optimized pose of the preset label is more reliable.
In one possible implementation, the projection error and the Jacobian matrix of the preset label may be determined as follows:
the two-dimensional coordinate information of the corner points of the preset label in the M frames of images is processed by the perspective-n-point (PnP) algorithm to determine the initial pose of the preset label in the camera coordinate system of the robot; the three-dimensional coordinate information of each corner point of the preset label in the world coordinate system is determined according to the initial pose of the preset label in the camera coordinate system of the robot and the initial pose of the robot; the three-dimensional coordinate information of the corner points of the preset label is projected into the coordinate system of the camera to obtain the projected two-dimensional coordinate information of the corner points; the difference between the projected two-dimensional coordinate information of the corner points and the two-dimensional coordinate information of the corner points before projection is calculated to determine the projection error of the preset label; and the Jacobian matrix of the projection error of the preset label with respect to the optimized state quantities in the sliding window is determined.
In the embodiment of the application, the initial pose, in the camera coordinate system, of the i-th preset label in the M-th frame image is determined through the PnP algorithm according to the three-dimensional coordinates of the corner points of the i-th preset label in the preset label coordinate system and the two-dimensional pixel coordinates of the preset label in the image frame M in which the i-th preset label is first observed; the global three-dimensional coordinate information of the corner points of the preset label is determined according to the initial pose of the i-th preset label in the M-th frame image and the global pose of the camera in the M-th frame image; the projection error of the preset label is then determined, which ensures the positioning accuracy of the robot mapping.
In one possible implementation mode, the variance value of the target pose is related to the angular point observation uncertainty of each preset label of the sliding window; wherein, the observation uncertainty of the corner point of any preset label in any image frame has a functional relation with the following parameters:
the method comprises the steps of presetting two-dimensional coordinate information of corner points of a label, presetting linear speed, angular speed and acceleration of an image acquisition device of an image frame where the label is located, and presetting pixel area of the label in the image frame.
In the embodiment of the application, the variance value of the target pose of the preset label has a functional relationship with various parameters, so that the variance value of the target pose obtained by the application is more consistent with an actual scene.
In a possible implementation manner, the robot can add the prior information of the preset label as a constraint item to the bundle optimization and the measurement update; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in a world coordinate system.
In the embodiment of the application, the robot can also adjust the pose of the preset label in the world coordinate system based on the prior information, so that the robot is higher in mapping accuracy and more reliable in mapping result.
In a possible implementation manner, the robot may perform iterative update of EKF measurement on a projection constraint of a preset tag, a prior relative pose constraint of the preset tag, a prior global pose constraint of the preset tag, a measurement constraint of a wheel speed odometer, and a time offset constraint, to obtain an updated EKF state.
In the embodiment of the application, the accuracy of robot mapping can be improved by updating the projection constraint of the preset tag, the pose constraint of the preset tag, the time offset and the like.
In one possible implementation, the EKF state includes: the target pose, in the world coordinate system, of the preset label in the continuous multi-frame images collected within the preset time period, the external parameters between the sensors, the time offsets between the sensors, the pose and velocity of the robot at the current frame, and the zero offsets (biases) of the IMU.
In a second aspect, an embodiment of the present application further provides a robot mapping apparatus, including: a bundle optimization unit, a state prediction unit and a measurement update unit.
The bundle optimization unit is used for performing bundle optimization on the continuous multi-frame image data, the inertial measurement unit (IMU) data and the wheel speed odometer data collected within a preset time period, and acquiring the optimized pose, and the variance value of the optimized pose, of a preset label in the current frame image in the camera coordinate system of the robot; the state prediction unit is used for performing extended Kalman filter (EKF) state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determining the pose of the robot and the variance value of the pose of the robot; and the measurement updating unit is used for performing iterative updating of the EKF measurement on the optimized pose of the preset label, the variance value of the optimized pose of the preset label, the pose of the robot and the variance value of the pose of the robot, and acquiring the target pose, and the variance value of the target pose, of the preset label in the world coordinate system in the continuous multi-frame images acquired within the preset time period.
In a possible implementation manner, the measurement updating unit may further perform online optimization of the external parameters and time offsets between the sensors during the iterative updating of the EKF measurement, and use the optimized external parameters and time offsets for bundle optimization; the external parameters comprise: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
In a possible implementation manner, the bundle optimization unit is specifically configured to: acquire continuous N frames of images collected within a preset time period through a camera of the robot, and screen the N frames of images to determine M frames of images that include preset labels, where N and M are integers and M is greater than or equal to 2 and less than or equal to N; and perform bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, to determine the optimized pose of the preset labels in the coordinate system of the robot's camera, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix.
In one possible implementation, the continuous multiframe images in the bundle-optimized sliding window are determined by:
if the number of image frames in the sliding window is less than or equal to the sliding window threshold, bundle optimization is performed on all the image frames; and if the number of image frames in the sliding window is greater than the sliding window threshold, bundle optimization is performed on all the image frames, a preset frame image is screened out, and the constraint of the removed images on the sliding window is retained through the Schur complement method.
In one possible implementation, the screening of the preset frame images for bundle optimization is determined by one or more of the following ways:
acquiring an identification list of all preset labels in a current frame image;
rejecting images that do not include a preset label among the multiple frames of images; if the identifier of a preset label detected in the N-th frame image is different from the identifiers of the preset labels in the M-th frame image, rejecting the N-th frame image; if the identifiers of the preset labels in every frame image within the preset time period are the same as the identifiers of the preset labels in the M-th frame image, scoring the images other than the M-th frame image according to their information degree and removing the frame image with the lowest score; where M and N are integers, and M is the index of the last frame image acquired within the preset time period.
In one possible implementation, the projection error and the Jacobian matrix of the preset label may be determined as follows:
the two-dimensional coordinate information of the corner points of the preset label in the M frames of images is processed by the perspective-n-point (PnP) algorithm to determine the initial pose of the preset label in the camera coordinate system of the robot; the three-dimensional coordinate information of each corner point of the preset label in the world coordinate system is determined according to the initial pose of the preset label in the camera coordinate system of the robot and the initial pose of the robot; the three-dimensional coordinate information of the corner points of the preset label is projected into the coordinate system of the camera to obtain the projected two-dimensional coordinate information of the corner points; the difference between the projected two-dimensional coordinate information of the corner points and the two-dimensional coordinate information of the corner points before projection is calculated to determine the projection error of the preset label; and the Jacobian matrix of the projection error of the preset label with respect to the optimized state quantities in the sliding window is determined.
In one possible implementation mode, the variance value of the target pose is related to the angular point observation uncertainty of each preset label of the sliding window; wherein the observation uncertainty of the corner of any preset label in any image frame has a functional relation with the following parameters:
the method comprises the steps of presetting two-dimensional coordinate information of corner points of a label, presetting linear speed, angular speed and acceleration of an image acquisition device of an image frame where the label is located, and presetting pixel area of the label in the image frame.
In a possible implementation manner, the bundle optimizing unit is specifically configured to: adding the prior information of a preset label as a constraint item into bundle optimization and measurement updating; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in a world coordinate system.
In a possible implementation manner, the robot may perform iterative update of EKF measurement on a projection constraint of a preset tag, a prior relative pose constraint of the preset tag, a prior global pose constraint of the preset tag, a measurement constraint of a wheel speed odometer, and a time offset constraint, to obtain an updated EKF state.
In one possible implementation, the EKF state includes: the target pose, in the world coordinate system, of the preset label in the continuous multi-frame images collected within the preset time period, the external parameters between the sensors, the time offsets between the sensors, the pose and velocity of the robot at the current frame, and the zero offsets (biases) of the IMU.
In a third aspect, an embodiment of the present application further provides a computing device, including a memory and a processor; the memory is used for storing program instructions; and the processor is used for calling the program instructions stored in the memory and executing the robot mapping method according to any one of the first aspect in accordance with the obtained program instructions.
In a fourth aspect, the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are configured to cause a computer to execute any robot mapping method in the present application.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a robot mapping method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a robot mapping method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a robot mapping method according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of the Schur complement method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a robot mapping device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
It should be noted that the terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Next, an application scenario to which the present application may be applied is described with reference to fig. 1; fig. 1 includes a robot, preset positioning labels and a server. In actual application, whether a server is needed depends on the computing capability configured for the robot: if the robot's computing capability is high, the robot can complete complex calculations autonomously and does not need server-assisted calculation; if the robot's computing capability is low, the robot can send the data to be calculated to the server, and calculation by the server can improve data processing efficiency. When the application scenario includes the server, the robot is in communication connection with the server; the robot and the server may also jointly process the data to be calculated, with the server assisting the robot in mapping.
It should be noted that the preset labels are attached to the spatial environment to be mapped by the robot, such as factory ceilings or office roofs, and essentially remain in a fixed position. The preset label in this application may be any pattern that can carry identifying information, such as a two-dimensional code or a bar code; the present application is not limited in this respect. This application takes a two-dimensional code as the example of a preset label. The two-dimensional code is a binary square marker composed of a wide black border and an internal binary matrix, and the arrangement of the internal matrix determines the identifier (ID) of the two-dimensional code. The black border facilitates rapid detection in the image, and the binary encoding allows the ID to be verified and error detection and correction techniques to be applied.
In the application, the robot collects continuous multi-frame images through the camera, the images include the two-dimensional codes, the poses of the two-dimensional codes in the world coordinate system are determined through bundle optimization of the image, IMU and wheel speed odometer data and the measurement update of the EKF, and the map is built based on the poses of the two-dimensional codes in the world coordinate system.
In practical applications, the robot mapping may be performed by the robot in the application scenario shown in fig. 1, may also be performed by the server, and may also be performed by the cooperation between the robot and the server, and the application is not limited to the specific implementation subject herein. Here, only the execution subject is taken as an example of the robot, and as shown in fig. 2, the robot can be executed by referring to the following steps when creating the drawing:
In step 201, the robot performs bundle optimization on the continuous multi-frame image data, IMU data and wheel speed odometer data collected within a preset time period, and obtains the optimized pose, and the variance value of the optimized pose, of the preset label in the current frame image in the camera coordinate system of the robot.
In practical implementation, the camera of the robot may be a monocular, binocular or multi-view camera, and the present application is not limited in this respect. The robot takes the target images of the latest frame that share the same timestamp as the current frame image information. It should be noted that when mapping is performed with the robot mapping process shown in fig. 2, the pose of the preset label in the world coordinate system is unknown; a two-dimensional code detection program is therefore called to detect, in turn, the local area in which the two-dimensional code is located in each target image of the current frame, and the IDs and two-dimensional coordinates of all two-dimensional codes detected in the images are returned.
The two-dimensional code detection process comprises the following steps:
(1) Detection of candidate areas of the two-dimensional codes. This step mainly finds, in the whole image area, the contours that may belong to a two-dimensional code (e.g. squares). The image is segmented using adaptive thresholding, contours are then extracted from the thresholded image, and those that are not convex polygons or are not square are discarded. In this process, contours that are too small or too large, convex polygons that are too close to one another, and the like can also be eliminated by a filtering operation.
(2) Analysis of the candidate areas to determine the type of two-dimensional code. After the robot detects a two-dimensional code candidate area, it determines whether a two-dimensional code of the specified type is present by analyzing the internal encoding of the candidate area: a projective transformation is applied to the image to obtain a canonical form (a front view), and the white and black positions are separated from the canonical image by thresholding. The two-dimensional code is divided into cells according to its size and border size, the numbers of black and white pixels falling in each cell are counted to decide whether the cell is a black bit or a white bit, and finally whether the two-dimensional code belongs to the specified dictionary type is determined according to the flag bits.
(3) Output of the data related to the two-dimensional codes. The two-dimensional code type detected in the previous step is matched against the specified dictionary to determine the ID of each two-dimensional code, and, for each two-dimensional code, the two-dimensional coordinates of its four corner points are determined from the outer frame area of the two-dimensional code.
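The detection pipeline described in (1) to (3) corresponds to what the ArUco module of OpenCV implements (adaptive thresholding, contour filtering, perspective rectification, bit classification and dictionary matching). The patent does not name a specific library, so the following Python sketch, written against the pre-4.7 OpenCV ArUco interface and with a hypothetical dictionary choice, is illustrative only.

    import cv2

    # Hypothetical configuration: the dictionary type is an assumption for illustration.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    params = cv2.aruco.DetectorParameters_create()

    def detect_tags(gray_image):
        """Return (ids, corners): tag IDs and the 2D pixel coordinates of their four corners."""
        corners, ids, _rejected = cv2.aruco.detectMarkers(gray_image, dictionary, parameters=params)
        if ids is None:
            return [], []
        # each corners[i] has shape (1, 4, 2): the four corner points in a fixed order
        return ids.flatten().tolist(), [c.reshape(4, 2) for c in corners]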
In step 202, the robot performs extended Kalman filter (EKF) state recursion on the inertial measurement unit (IMU) data and the wheel speed odometer data corresponding to the current frame image, and determines the pose of the robot and the variance value of the pose of the robot.
It should be noted that steps 201 and 202 are not limited to a particular order in actual execution: step 202 may be executed before step 201, or the two steps may be executed simultaneously; the present application is not specifically limited in this respect.
In step 203, the robot performs iterative updating of the EKF measurement on the optimized pose, the variance value of the optimized pose, the pose of the robot and the variance value of the pose of the robot, and obtains the target pose, and the variance value of the target pose, of the preset label in the world coordinate system in the continuous multi-frame images collected within the preset time period.
The variance value of the target pose can be used for representing the pose reliability of a preset label in the current frame image in a world coordinate system; the variance value of the target pose is related to the angular point observation uncertainty of each preset label of the sliding window; wherein, the observation uncertainty of the corner point of any preset label in any image frame has a functional relation with the following parameters: the method comprises the steps of presetting two-dimensional coordinate information of corner points of a label, presetting linear speed, angular speed and acceleration of an image acquisition device of an image frame where the label is located, and presetting pixel area of the label in the image frame.
In the embodiment of the application, when the robot builds the map, it uses continuous multi-frame images while also referring to IMU data and wheel speed odometer data, which ensures mapping accuracy. In addition, bundle optimization and EKF state prediction are fused during mapping, so that the positioning accuracy of the robot mapping is higher, and the target pose of the preset label in the world coordinate system and the variance value of the target pose can be calculated more quickly.
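Steps 202 and 203 follow the usual predict/update structure of an extended Kalman filter with an iterated measurement update. The sketch below shows only that generic structure, with placeholder state, Jacobian and measurement functions; it is not the patent's concrete state layout or its equations.

    import numpy as np

    def ekf_propagate(P, F, Q):
        """EKF covariance propagation; the state itself is propagated by the
        IMU / wheel speed odometer recursion of step 202."""
        return F @ P @ F.T + Q

    def ekf_iterated_update(x_pred, P_pred, z, h, H_of, R, n_iters=3):
        """Iterated EKF measurement update: relinearize h() around the refreshed estimate.
        x_pred, P_pred: predicted state and covariance; z: measurement (e.g. the
        bundle-optimized label pose); h: measurement function; H_of: its Jacobian."""
        x_i = x_pred.copy()
        for _ in range(n_iters):
            H = H_of(x_i)                                   # Jacobian at the current estimate
            K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
            x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
        P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred      # covariance after the last iteration
        return x_i, P_upd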
In one embodiment, the robot can perform online optimization of the external parameters and time offsets between the sensors during the iterative updating of the EKF measurement, and the optimized external parameters and time offsets are used for bundle optimization. The external parameters comprise: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera. In the application, the robot optimizes the external parameters during mapping, which helps ensure mapping accuracy.
The following describes the scheme of the present application with reference to the execution flowchart shown in fig. 3, specifically as follows:
1.1 Preprocessing of sensor data
In an optional embodiment, the robot may acquire, through the cameras of the robot (i.e. the camera module in fig. 3), continuous N frames of images collected within a preset time period, and screen the N frames of images to determine M frames of images that include preset labels, where N and M are integers and M is greater than or equal to 2 and less than or equal to N. Bundle optimization is performed on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in the coordinate system of the robot's camera, the IMU data and the wheel speed odometer data, to determine the optimized pose of the preset labels in the coordinate system of the robot's camera, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix. The specific steps are as follows:
A. Preprocessing of the two-dimensional codes in the images
The robot may first acquire the image of each camera for the latest frame, and take the images with the same timestamp as the current frame image information. A two-dimensional code detection program is then called to detect, in turn, the local area in which the two-dimensional code is located in each target image of the current frame, and the IDs and two-dimensional coordinates of all two-dimensional codes detected in the images are returned. All two-dimensional code observation information of each target image in the current frame is then output; the observation information may include: the ID of the two-dimensional code, the ID or timestamp of the image frame in which the two-dimensional code is located, the ID of the camera that observed the two-dimensional code, and the two-dimensional coordinates of the four corner points in the image, which may be pixel uv values or normalized coordinates (the four corner points are output in a specified order, for example clockwise starting from the upper-left corner of the two-dimensional code). If the two-dimensional code is observed for the first time, the relative pose of the two-dimensional code and the camera can be calculated by PnP, and the initial value of the global pose of the two-dimensional code can then be calculated based on the global pose of the current frame.
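For illustration, one observation record output by this preprocessing step can be held in a simple container such as the following; the field names are hypothetical.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class TagObservation:
        tag_id: int              # ID of the two-dimensional code
        frame_timestamp: float   # ID or timestamp of the image frame containing the code
        camera_id: int           # ID of the camera that observed the code
        corners_2d: np.ndarray   # (4, 2) pixel uv values or normalized coordinates,
                                 # in a fixed order (e.g. clockwise from the upper-left corner)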
B. IMU pre-integration
Obtaining IMU data: the IMU data between two consecutive image frames is stored in a cache list.
Timestamp alignment: based on the time offset between the IMU and the camera, the integration start and end times of the IMU data are aligned with the image frame times by interpolation.
Calculating the pre-integration: using the median method, the state increments of position, velocity and angle between the two image frames are obtained by iterative integration.
Calculating the Jacobian and variance matrices: the Jacobian matrix and variance matrix of the state quantities between the two image frames are calculated for the subsequent bundle optimization process.
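A minimal sketch of the median-rule (midpoint) integration described above is given below, assuming the IMU samples (t, acc, gyro) have already been interpolated to the image-frame boundaries; gravity compensation, bias random walk and the Jacobian/variance propagation are omitted, and the quaternion helpers are illustrative only.

    import numpy as np

    def quat_mul(q1, q2):
        """Hamilton product of two quaternions [w, x, y, z]."""
        w1, x1, y1, z1 = q1
        w2, x2, y2, z2 = q2
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def small_angle_quat(dtheta):
        """Quaternion for a small rotation vector dtheta (first-order approximation)."""
        q = np.concatenate(([1.0], 0.5 * dtheta))
        return q / np.linalg.norm(q)

    def quat_to_rot(q):
        """Rotation matrix of a unit quaternion [w, x, y, z]."""
        w, x, y, z = q
        return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                         [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                         [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

    def preintegrate_imu(samples, ba, bg):
        """Median-rule IMU pre-integration between two image frames.
        samples: list of (t, acc, gyro); ba, bg: accelerometer and gyroscope biases.
        Returns the position, velocity and rotation increments in the first body frame."""
        alpha, beta, q = np.zeros(3), np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0])
        for (t0, a0, w0), (t1, a1, w1) in zip(samples[:-1], samples[1:]):
            dt = t1 - t0
            w_mid = 0.5 * (w0 + w1) - bg                    # midpoint angular rate
            q_next = quat_mul(q, small_angle_quat(w_mid * dt))
            a_mid = 0.5 * (quat_to_rot(q) @ (a0 - ba) + quat_to_rot(q_next) @ (a1 - ba))
            alpha += beta * dt + 0.5 * a_mid * dt * dt      # position increment
            beta += a_mid * dt                              # velocity increment
            q = q_next
        return alpha, beta, q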
C. Wheel speed odometer pre-integration
Acquiring wheel speed odometer data: the wheel speed odometer data between two consecutive image frames is stored in a cache list.
Timestamp alignment: based on the time offset between the wheel speed odometer and the camera, the integration start and end times of the wheel speed odometer data are aligned with the image frame times by interpolation.
Calculating the pre-integration: using the median method, the state increments of position and angle between the two image frames are obtained by iterative integration.
Calculating the Jacobian and variance matrices: the Jacobian matrix and variance matrix of the state quantities between the two image frames are calculated for the subsequent bundle optimization process.
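Analogously, a planar wheel speed odometer pre-integration with the median rule can be sketched as follows; the sample format (t, v, w) and the planar (x, y, yaw) parameterization are assumptions made for illustration.

    import numpy as np

    def preintegrate_odometer(samples):
        """Median-rule pre-integration of wheel speed odometer data between two image frames.
        samples: list of (t, v, w) with linear velocity v (m/s) and yaw rate w (rad/s).
        Returns the planar position increment and the yaw increment."""
        dp, dyaw = np.zeros(2), 0.0
        for (t0, v0, w0), (t1, v1, w1) in zip(samples[:-1], samples[1:]):
            dt = t1 - t0
            v_mid = 0.5 * (v0 + v1)              # midpoint linear velocity
            w_mid = 0.5 * (w0 + w1)              # midpoint yaw rate
            yaw_mid = dyaw + 0.5 * w_mid * dt    # heading at the middle of the interval
            dp += v_mid * dt * np.array([np.cos(yaw_mid), np.sin(yaw_mid)])
            dyaw += w_mid * dt
        return dp, dyaw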
1.2 Bundle optimization
The main purpose of bundle optimization is to optimize the current window system state according to the reprojection errors of the two-dimensional code corner points observed across multiple frames and multiple cameras, the observation uncertainties of these reprojection errors, the IMU pre-integration and the wheel speed odometer pre-integration, so as to obtain a more robust and accurate observation of the relative pose between the two-dimensional code and the current robot coordinate system.
1.2.1 Optimization state of the system
The state $\mathcal{X}_S$ to be optimized in the current window comprises: the inertial states (including position, orientation, velocity, accelerometer bias and gyroscope bias) of the n+1 frames within the window, the external parameters of each camera with respect to the IMU, the external parameters of the wheel speed odometer with respect to the IMU, and the global poses of all two-dimensional codes that have been observed:

$$\mathcal{X}_S = \left[\, x_0,\, x_1,\, \cdots,\, x_n,\; T^{b}_{c_0},\, \cdots,\, T^{b}_{c_m},\; T^{b}_{o},\; T^{w}_{t_0},\, \cdots,\, T^{w}_{t_l} \,\right]$$

where

$$x_k = \left[\, p^{w}_{b_k},\, q^{w}_{b_k},\, v^{w}_{b_k},\, b_a,\, b_g \,\right]$$

represents the inertial state of the k-th frame: $T^{w}_{b_k} = \left(p^{w}_{b_k},\, q^{w}_{b_k}\right)$ is the IMU global pose of the k-th frame (i.e. its pose in the world coordinate system), with $p^{w}_{b_k}$ and $q^{w}_{b_k}$ the global position and attitude of the IMU of the k-th frame; $v^{w}_{b_k}$ is the IMU global velocity of the k-th frame; $b_a$ is the accelerometer bias; and $b_g$ is the gyroscope bias. $T^{w}_{t_i} = \left(p^{w}_{t_i},\, q^{w}_{t_i}\right)$ is the global pose of the i-th two-dimensional code, with $p^{w}_{t_i}$ and $q^{w}_{t_i}$ its global position and attitude. $T^{b}_{c_j} = \left(p^{b}_{c_j},\, q^{b}_{c_j}\right)$ is the external parameter between the j-th camera and the IMU, with $p^{b}_{c_j}$ its position component and $q^{b}_{c_j}$ its angular component. $T^{b}_{o} = \left(p^{b}_{o},\, q^{b}_{o}\right)$ is the external parameter between the wheel speed odometer and the IMU, with $p^{b}_{o}$ its position component and $q^{b}_{o}$ its angular component.
1.2.2 Optimization constraint construction of the system
The sliding window bundle optimization can be constructed as the following nonlinear least squares problem:

$$
\min_{\mathcal{X}}\Bigg\{
\left\| r_p - H_p\,\mathcal{X} \right\|^2
+ \sum_{i \in T}\sum_{k}\left\| r_T\!\left(z^{c_k}_{t_i},\,\mathcal{X}\right)\right\|^2_{P^{c_k}_{t_i}}
+ \sum_{i,j}\left\| r_R\!\left(z^{t_j}_{t_i},\,\mathcal{X}\right)\right\|^2_{P^{t_j}_{t_i}}
+ \sum_{i}\left\| r_G\!\left(z^{w}_{t_i},\,\mathcal{X}\right)\right\|^2_{P^{w}_{t_i}}
+ \sum_{k \in B}\left\| r_B\!\left(\hat{z}^{b_{k+1}}_{b_k},\,\mathcal{X}\right)\right\|^2_{P^{b_{k+1}}_{b_k}}
+ \sum_{k \in E}\rho\!\left(\left\| r_E\!\left(\hat{z}^{o_{k+1}}_{o_k},\,\mathcal{X}\right)\right\|^2_{P^{o_{k+1}}_{o_k}}\right)
\Bigg\}
$$

where $\mathcal{X}$ represents the state of the sliding window optimization; $\left\| r_p - H_p\,\mathcal{X} \right\|^2$ is the marginalized prior information constraint; $r_T\!\left(z^{c_k}_{t_i},\,\mathcal{X}\right)$ is the reprojection constraint of the preset labels, $T$ is the list of observed preset labels, $z^{c_k}_{t_i}$ is the observation of the corner points of the i-th preset label in the k-th image frame of the sliding window, and $P^{c_k}_{t_i}$ is the corresponding uncertainty variance matrix; $r_R\!\left(z^{t_j}_{t_i},\,\mathcal{X}\right)$ is the relative pose constraint of the preset labels, $z^{t_j}_{t_i}$ is the relative pose observation between the i-th and j-th preset labels, and $P^{t_j}_{t_i}$ is their relative pose uncertainty variance matrix; $r_G\!\left(z^{w}_{t_i},\,\mathcal{X}\right)$ is the global pose constraint of the preset labels, $z^{w}_{t_i}$ is the global pose observation of the i-th preset label, and $P^{w}_{t_i}$ is its global pose uncertainty variance matrix; $r_B$ is the pre-integration constraint of the IMU data, $B$ is the list of image frames with IMU pre-integration constraints, $\hat{z}^{b_{k+1}}_{b_k}$ is the IMU pre-integration result, and $P^{b_{k+1}}_{b_k}$ is the covariance matrix corresponding to the pre-integration state increment of the IMU data; $r_E$ is the pre-integration residual of the wheel speed odometer data, $E$ is the list of image frames with wheel speed odometer pre-integration constraints, $\hat{z}^{o_{k+1}}_{o_k}$ is the pre-integration result of the wheel speed odometer data, and $P^{o_{k+1}}_{o_k}$ is the covariance matrix corresponding to the wheel speed odometer pre-integration. Further, in consideration of possible wheel slip, the constraint for the wheel speed odometer may adopt a robust kernel function $\rho$.

Here, the prior information constraint represents the constraint of the marginalized sliding-window information on the current sliding window state; the reprojection constraint of the two-dimensional codes represents the projection error constraint, for every frame in the sliding window, between the global three-dimensional points of all corner points of each observed two-dimensional code and their two-dimensional observations; the wheel speed odometer measurement constraint represents the measurement constraint of the wheel speed odometer pre-integration between two adjacent frames in the sliding window; and the IMU measurement constraint represents the measurement constraint of the IMU pre-integration between two adjacent frames in the sliding window. The relative pose constraint of the two-dimensional codes represents the constraint imposed on the state $\mathcal{X}_S$ to be optimized in the window by the known relative poses of the two-dimensional codes in the sliding window and their uncertainties, e.g. the known relative pose $z^{t_j}_{t_i}$ of the i-th and j-th two-dimensional codes with relative pose uncertainty variance $P^{t_j}_{t_i}$; this constraint term is optional. Similarly, the global pose constraint of the two-dimensional codes represents the constraint imposed on the state $\mathcal{X}_S$ to be optimized in the window by the known global poses $z^{w}_{t_i}$ of the two-dimensional codes in the sliding window and their uncertainties $P^{w}_{t_i}$.
1.2.3 Construction of the reprojection constraint of the two-dimensional codes
The reprojection constraint of the two-dimensional codes is constructed by traversing all image frames in the sliding window: for all corner points of each two-dimensional code observed by each image frame, the three-dimensional global coordinate of each corner point is obtained from the global pose (a state to be solved) of the corresponding two-dimensional code, projected onto the two-dimensional plane of the image frame, and the observed two-dimensional coordinate of the corner point is subtracted to obtain the residual constraint.
In one embodiment, the projection error and Jacobian matrix of the preset label can be determined as follows:
the initial pose, in the camera coordinate system of the robot, of the i-th preset label in the M-th frame image is determined through the PnP algorithm according to the three-dimensional coordinates of the corner points of the i-th preset label in the preset label coordinate system and the two-dimensional pixel coordinates of the preset label in the image frame M in which the i-th preset label is first observed; the global three-dimensional coordinate information of the corner points of the preset label is determined according to the initial pose of the i-th preset label in the camera coordinate system in the M-th frame image and the global pose of the camera in the M-th frame image; the three-dimensional coordinate information of the corner points of the preset label is projected into the coordinate system of the camera to obtain the projected two-dimensional coordinate information of the corner points; the difference between the projected two-dimensional coordinate information and the two-dimensional coordinate information before projection is calculated to determine the projection error of the preset label in the M-th frame image; and the Jacobian matrix of the projection error of the preset label with respect to the optimized state quantities in the sliding window is determined.
It should be noted that, in the present application, the construction process of the reprojection constraint of the two-dimensional code mainly includes the following steps:
(1) Two-dimensional code pose initialization
Because the two-dimensional codes are posted arbitrarily during the mapping stage, their global poses cannot be acquired in advance. When a two-dimensional code is observed for the first time, its global pose needs to be initialized: from the corner coordinates of this first observation, the relative pose $T^{c}_{t} = \left(p^{c}_{t},\, q^{c}_{t}\right)$ of the two-dimensional code with respect to the camera is obtained by the PnP method, and the global pose of the camera is then composed with it to obtain the initial value of the global pose of the two-dimensional code:

$$p^{w}_{t} = R^{w}_{c}\, p^{c}_{t} + p^{w}_{c}, \qquad q^{w}_{t} = q^{w}_{c} \otimes q^{c}_{t}$$

where $p^{w}_{t}$ is the initial value of the global position of the two-dimensional code, $q^{w}_{t}$ is the initial value of its global rotation, and $\left(p^{w}_{c},\, q^{w}_{c}\right)$ (with rotation matrix $R^{w}_{c}$) is the global pose of the camera for the current frame.
(2) Global coordinates of each corner point of the two-dimensional code
A. Acquire the local three-dimensional coordinates of the corner points. According to the physical size of each two-dimensional code, the local three-dimensional coordinates $P_t$ of each corner point of the two-dimensional code in the two-dimensional code coordinate system are known.
B. Calculate the global three-dimensional coordinates of the corner points. From the global pose $T^{w}_{t}$ of each two-dimensional code and the local three-dimensional coordinates of its corner points, the global three-dimensional coordinates of all corner points on the two-dimensional code can be calculated. When a two-dimensional code is observed for the first time, the global pose $T^{w}_{t}$ takes the initial value of the global pose of the two-dimensional code obtained in the initialization step above; if the two-dimensional code is not observed for the first time, $T^{w}_{t}$ takes the global pose of the two-dimensional code in the state after the previous bundle optimization. The calculation formula of the global three-dimensional coordinates is:

$$P_W = R^{w}_{t}\, P_t + p^{w}_{t}$$

where $P_W$ is the global three-dimensional coordinate of the corner point, $p^{w}_{t}$ is the global position of the two-dimensional code (its initial value on first observation), and $R^{w}_{t}$ is the rotation matrix corresponding to the global rotation $q^{w}_{t}$ of the two-dimensional code (its initial value on first observation).
(3) Calculation of the two-dimensional code reprojection error
Based on the global pose of the image frame that observes the two-dimensional code, the three-dimensional coordinates of each corner point of the two-dimensional code obtained in the previous step are projected onto the two-dimensional plane of the image frame, and the difference from the observed two-dimensional coordinates of the corner point is calculated to obtain the residual constraint. Namely:
Convert the three-dimensional global coordinates of each corner point of each two-dimensional code into three-dimensional coordinates in the camera coordinate system of the frame:

$$P_C = \left(R^{b}_{c}\right)^{T}\left(\left(R^{w}_{b_k}\right)^{T}\left(P_W - p^{w}_{b_k}\right) - p^{b}_{c}\right)$$

Calculate the two-dimensional reprojection error of each corner point:

$$e = \pi\!\left(P_C\right) - \hat{z}$$

where $\pi(\cdot)$ is the mapping function that converts a three-dimensional point in the camera coordinate system to normalized coordinates, and $\hat{z}$ is the observed two-dimensional coordinate of the corner point.
(4) Computation of two-dimensional code reprojection constrained Jacobian (Jacobian) matrices
In order to obtain a Jacobian matrix of the two-dimensional code re-projection error pair sliding window for optimizing state quantity (the frame position, the external parameter and the two-dimensional code global position) the Jacobian matrix of the two-dimensional code re-projection error pair sliding window for each key frame position, the Jacobian matrix of the two-dimensional code re-projection error pair external parameter and the Jacobian matrix of the two-dimensional code re-projection error pair two-dimensional code global position are included. Different from the positioning stage (knowing the global pose of the two-dimensional code), the step needs to additionally solve the Jacobian matrix of the global pose of the two-dimensional code. According to the chain derivation method:
A. jacobian matrix of position components of two-dimensional code reprojection errors to image frame poses
Figure 723077DEST_PATH_IMAGE053
Comprises the following steps:
Figure 916292DEST_PATH_IMAGE054
wherein the content of the first and second substances,
Figure 962745DEST_PATH_IMAGE055
representing 3D points on the error-pair camera coordinate system
Figure 289297DEST_PATH_IMAGE056
A derivative of (a);
Figure 943263DEST_PATH_IMAGE057
represents
Figure 873042DEST_PATH_IMAGE058
Derivative of the image frame position disturbance:
Figure 473919DEST_PATH_IMAGE059
jacobian matrix of pose components of two-dimensional code reprojection errors to the keyframe pose
Figure 923486DEST_PATH_IMAGE060
Comprises the following steps:
Figure 466462DEST_PATH_IMAGE061
wherein the content of the first and second substances,
Figure 303444DEST_PATH_IMAGE062
represents
Figure 19596DEST_PATH_IMAGE063
Derivative to the image frame pose disturbance:
Figure 261353DEST_PATH_IMAGE064
B. The Jacobian matrix of the two-dimensional code reprojection error with respect to the position component of the extrinsic parameters to be optimized is:
[equation image omitted]
where one factor denotes the derivative of P_C with respect to a perturbation of the extrinsic position to be optimized. The Jacobian matrix of the reprojection error with respect to the attitude component of the extrinsic parameters to be optimized is:
[equation image omitted]
where the corresponding factor denotes the derivative of P_C with respect to a perturbation of the extrinsic attitude:
[equation image omitted]
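The chain rule above can be made concrete with a small numerical check. The sketch below uses the same assumed parametrization as the previous sketch, composes the derivative of the projection with respect to the camera-frame point with the derivative of that point with respect to the frame position, and verifies the result against finite differences; the convention is illustrative and need not match the omitted equations exactly.

```python
import numpy as np

def project(R_WB, p_WB, R_BC, p_BC, P_W):
    """Normalized-coordinate projection of a world point (same model as above)."""
    P_C = R_BC.T @ (R_WB.T @ (P_W - p_WB) - p_BC)
    return P_C[:2] / P_C[2], P_C

def jacobian_wrt_frame_position(R_WB, p_WB, R_BC, p_BC, P_W):
    """Analytic Jacobian of the projection w.r.t. the frame position p_WB."""
    _, P_C = project(R_WB, p_WB, R_BC, p_BC, P_W)
    X, Y, Z = P_C
    J_proj = np.array([[1 / Z, 0, -X / Z**2],      # d(X/Z)/dP_C
                       [0, 1 / Z, -Y / Z**2]])     # d(Y/Z)/dP_C
    dPC_dp = -R_BC.T @ R_WB.T                      # dP_C/dp_WB from the chain rule
    return J_proj @ dPC_dp

# Finite-difference verification of the analytic Jacobian
R_WB, R_BC = np.eye(3), np.eye(3)
p_WB, p_BC = np.array([0.1, 0.2, 0.0]), np.zeros(3)
P_W = np.array([1.0, 0.5, 2.0])
J = jacobian_wrt_frame_position(R_WB, p_WB, R_BC, p_BC, P_W)
eps = 1e-6
J_num = np.zeros((2, 3))
for k in range(3):
    d = np.zeros(3)
    d[k] = eps
    J_num[:, k] = (project(R_WB, p_WB + d, R_BC, p_BC, P_W)[0]
                   - project(R_WB, p_WB, R_BC, p_BC, P_W)[0]) / eps
assert np.allclose(J, J_num, atol=1e-5)
```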
C. The Jacobian matrix of the two-dimensional code reprojection error with respect to the global pose of the two-dimensional code has two blocks. The Jacobian with respect to the position component is:
[equation image omitted]
where one factor denotes the derivative of the error with respect to the two-dimensional code corner coordinates P_W, and the other denotes the derivative of P_W with respect to the position component.
The Jacobian with respect to the attitude component is:
[equation image omitted]
where the corresponding factor denotes the derivative of the two-dimensional code corner coordinates P_W with respect to the attitude component.
(5) Calculation of observation uncertainty variance of each angular point in two-dimensional code
To quantify the uncertainty of a single observation of each corner point in a single two-dimensional code, an uncertainty variance function of the two-dimensional code is configured according to a large number of practical test results, namely:
[equation image omitted]
In the above formula, u and v denote the two-dimensional pixel coordinates of a corner point of the two-dimensional code in the image, linear_vel denotes the linear velocity of the camera for the image frame in which the two-dimensional code is observed, angular_vel denotes the rotational angular velocity of that camera, acc denotes the acceleration of that camera, and S denotes the pixel area of the two-dimensional code in the image.
Depending on the actual scene and the sensor configuration, the above formula can be simplified to:
[equation image omitted]
where u _ max represents the maximum value of the abscissa in the image pixel coordinates, and v _ max represents the maximum value of the ordinate in the image pixel coordinates.
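The exact variance function is given by the omitted equations; the following Python sketch only illustrates one plausible form that reflects the stated dependencies (larger variance toward the image border and under fast motion, smaller variance for a larger pixel area), with hypothetical gain values.

```python
def corner_observation_variance(u, v, linear_vel, angular_vel, acc, S,
                                u_max=640, v_max=480,
                                k0=1.0, k1=2.0, k2=1.0, k3=0.5, k4=0.1):
    """Illustrative uncertainty variance for one corner observation.

    The true function in the patent is configured from empirical tests; this
    sketch only encodes the stated qualitative dependencies. The gains k0..k4
    and the default image size are hypothetical.
    """
    # Normalized distance of the corner from the image center (0 at center, ~1 at the border)
    du = abs(u - u_max / 2) / (u_max / 2)
    dv = abs(v - v_max / 2) / (v_max / 2)
    border_term = du**2 + dv**2
    # Faster motion (linear, angular, acceleration) increases the variance
    motion_term = k2 * linear_vel**2 + k3 * angular_vel**2 + k4 * abs(acc)
    # Larger pixel area S of the code reduces the variance
    return (k0 + k1 * border_term + motion_term) / max(S, 1.0)

# Example: a corner near the image border on a fast-moving frame gets a larger variance
print(corner_observation_variance(620, 20, linear_vel=1.5, angular_vel=0.8, acc=0.5, S=900.0))
```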
(6) Two-dimensional code constraint validity judgment strategy
To prevent false detection of a two-dimensional code, or erroneous corner detection inside a code, from degrading the accuracy or stability of the sliding window optimization, a constraint validity judgment strategy is applied: whether the constraints of a given two-dimensional code are added to the sliding window optimization function is decided from the stability with which that code is continuously observed in the sliding window. For example, the number of valid observations of a code over the frames in the sliding window is counted, and the observation constraints of that code are considered valid only if this count exceeds N. Whether a single observation is valid is judged from the single-observation uncertainty variance of each corner point; for example, an observation of a preset label corner is considered valid if its uncertainty variance is smaller than a threshold.
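A minimal sketch of this validity judgment, with hypothetical threshold values, might look as follows.

```python
def tag_constraint_is_valid(per_frame_variances, var_threshold=2.0, min_valid_obs=3):
    """Decide whether a code's observations in the sliding window may constrain
    the optimization (illustrative sketch; the thresholds are hypothetical).

    per_frame_variances: uncertainty variance of the code observation in each
    frame of the window where the code was detected.
    """
    # A single observation counts as valid only if its variance is small enough
    valid_count = sum(1 for var in per_frame_variances if var < var_threshold)
    # The code's constraints are used only if it was validly observed often enough
    return valid_count >= min_valid_obs

# Example: four detections, three of which are reliable enough
print(tag_constraint_is_valid([0.4, 0.9, 5.2, 1.1]))  # True
```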
1.2.4 design of marginalization strategy
In one embodiment, the continuous multi-frame images in the bundle-optimized sliding window are determined as follows. If the number of image frames in the sliding window is less than or equal to the sliding window threshold, bundle optimization is performed on all image frames. If the number of image frames in the sliding window is greater than the sliding window threshold, bundle optimization is performed on all image frames, preset frame images are then screened out, and the constraints of the removed images on the sliding window are retained through the Schur complement method. For example, if the sliding window threshold is 32 and there are 18 image frames in the window, bundle optimization is performed on all 18 frames; if there are 33 image frames, bundle optimization is performed on the 33 frames, and after optimization image frames are removed through a marginalization strategy based on preset label observation information. For each removed image frame, a marginalization prior constraint is constructed through the Schur complement method and used in the bundle optimization. Marginalization decomposes the joint probability distribution into a marginal distribution and a conditional distribution, so that older states in the bundle optimization window can be marginalized out of the sliding window while their information is retained, the sparsity of the corresponding information matrix is preserved, and a balance between computing resources and computing accuracy is achieved; the marginalization itself is realized by a Schur complement operation on the matrix. The Schur complement method is illustrated schematically in fig. 4: fig. 4 (a) shows four consecutive image frames Xp1, Xp2, Xp3 and Xp4, where the preset labels Xm1, Xm2 and Xm3 are observed in frame Xp1. When the Xp1 frame is removed during bundle optimization, the Schur complement method, shown in fig. 4 (b), still retains the associations among the preset labels Xm1, Xm2 and Xm3.
In one embodiment, the screening strategy is determined by one or more of the following:
it should be noted that the robot needs to acquire the identifier lists of all the preset tags in the current frame image, and then remove the images that do not include the preset tags from the multi-frame image.
Mode 1: and if the identifier of the preset label detected in the Nth frame image is different from the identifier of the preset label in the Mth frame image, rejecting the Nth frame image. Wherein M, N is an integer, and M is an index of the last frame image acquired within a preset time period.
Mode 2: if the identification of the preset label in any frame of image is the same as the identification of the preset label in the Mth frame of image in the preset time period, the images except the Mth frame of image are scored according to the information degree, and the frame image with the lowest score is removed. Wherein M, N is an integer, and M is an index of the last frame image acquired within a preset time period.
Specifically, if the number of image frames in the sliding window is still greater than the sliding window threshold after the image frames in the sliding window are screened by the method 1, the image frames may be scored according to the information degree, and the image frames may be removed from low to high until the sliding window threshold is met.
It should be noted that the above screening of image frames in the sliding window can be understood as a marginalization operation; through marginalization, a sliding window with a fixed number of image frames is maintained to control the amount of computation, and the constraints of the marginalized data on the states in the sliding window are retained, in the form of marginalization constraints, through the Schur complement.
Different from sliding window marginalization strategies based on feature point tracking, a marginalization strategy based on two-dimensional code observation information is provided here. The strategy marginalizes the image frames in the current bundle optimization window according to the ID list {IDs} of all two-dimensional codes observed in each eye of the latest frame.
If the number of image frames in the window is less than or equal to the threshold N, all image frames in the window are retained. If the number of image frames in the window is greater than the threshold N, all image frames in the window are traversed and marginalization is carried out according to, but not limited to, the following three cases:
Case 1: if no two-dimensional code is detected in an image frame, that frame is discarded, and the IMU and wheel speed odometer pre-integration constraints of that frame are carried over to the next image frame.
Case 2: if two-dimensional codes are detected in an image frame but none of them belongs to the {IDs} list of the latest frame, that frame is discarded; the IMU and wheel speed odometer pre-integration constraints of that frame are carried over to the next image frame, and the constraints of that frame on the other frames in the window are retained through the Schur complement operation.
Case 3: if every image frame in the window detects two-dimensional codes, and some or all of the detected codes belong to the {IDs} list, the latest image frame is kept, the other frames are scored by information degree, and the frame with the lowest information degree is discarded; the IMU and wheel speed odometer pre-integration constraints of that frame are carried over to the next image frame, and its constraints on the other frames in the window are retained through the Schur complement operation. The information degree of an image frame is proportional to the number of two-dimensional codes it observes that belong to the {IDs} list, and inversely proportional to the observation variance of those codes.
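The three cases can be summarized in a small selection routine. The sketch below is illustrative only; the field names and the scoring input are hypothetical, and the Schur complement bookkeeping for the discarded frame is omitted.

```python
def choose_frame_to_marginalize(frames, newest_ids, window_threshold):
    """Pick the frame to drop from the sliding window (illustrative sketch).

    frames           : list of dicts, each with keys
                       'tag_ids'    - IDs of the two-dimensional codes detected in that frame
                       'info_score' - information degree (more observed IDs from newest_ids
                                      and lower observation variance -> higher score)
    newest_ids       : {IDs} observed in every eye of the newest frame
    window_threshold : maximum number of frames kept in the window
    Returns the index of the frame to marginalize, or None if the window is small enough.
    """
    if len(frames) <= window_threshold:
        return None                                    # keep everything
    candidates = frames[:-1]                           # never drop the newest frame
    for i, f in enumerate(candidates):
        if not f['tag_ids']:                           # case 1: no code detected
            return i
    for i, f in enumerate(candidates):
        if not (set(f['tag_ids']) & set(newest_ids)):  # case 2: codes not in {IDs}
            return i
    # case 3: every frame sees codes from {IDs}; drop the least informative one
    return min(range(len(candidates)), key=lambda i: candidates[i]['info_score'])
```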
1.2.5 two-dimensional code observation output based on sliding window optimization state
Based on the state after sliding window optimization, all the two-dimensional codes observed in each eye of the latest frame are traversed to obtain the relative pose of each two-dimensional code with respect to each eye of the latest frame, namely:
[equation image omitted]
where the quantities in the omitted equation denote, respectively: the relative pose of the i-th two-dimensional code with respect to the coordinate system of the latest frame image, computed from the state after sliding window optimization; the extrinsic parameters after sliding window optimization; the frame pose after sliding window optimization; and the global pose of the i-th two-dimensional code after sliding window optimization.
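In terms of homogeneous transforms, this output step can be sketched as follows; the composition order shown is one common convention and is an assumption of the sketch rather than a statement of the omitted equations.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def tag_pose_in_latest_camera(T_W_B, T_B_C, T_W_tag):
    """Relative pose of a two-dimensional code w.r.t. the latest camera frame.

    T_W_B   : optimized pose of the body/IMU frame of the latest image in the world
    T_B_C   : optimized camera extrinsics (camera expressed in the body frame)
    T_W_tag : optimized global pose of the two-dimensional code
    All inputs are 4x4 homogeneous transforms.
    """
    T_W_C = T_W_B @ T_B_C                  # camera pose in the world frame
    return np.linalg.inv(T_W_C) @ T_W_tag  # code pose expressed in the camera frame
```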
Two, state prediction
An EKF-based multi-sensor fusion framework is adopted, in which temporal and spatial (extrinsic) calibration errors can be compensated automatically.
2.1 EKF optimized State
The EKF state may include: the target pose of each preset label in the world coordinate system for the continuous multi-frame images collected within the preset time period, the external parameters between the sensors, the time offsets between the sensors, the pose and velocity of the robot at the current frame, and the zero offset of the IMU. For example, the system state vector of the EKF
[equation image omitted]
comprises the current inertial navigation state
[equation image omitted]
together with the global poses of the two-dimensional codes, the external parameters and the time offsets, namely:
[equation image omitted]
where the quantities in the omitted equations denote, respectively: the inertial state of the current frame; the IMU global pose of the current frame, consisting of the IMU global position and attitude of the current frame; the IMU global velocity of the current frame; the accelerometer bias; the gyroscope bias; the global pose of the i-th two-dimensional code, consisting of its global position and attitude; the external parameters between each camera and the IMU, with their position and rotation components; the external parameters between the wheel speed odometer and the IMU, with their position and rotation components; the time offset between the camera and the IMU; and the time offset between the wheel speed odometer and the IMU. The initial values of the external parameters are taken from the result of the bundle optimization, and the initial value of the global pose of each two-dimensional code is likewise taken from the result of the bundle optimization.
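As a rough illustration of the state layout described above, a container such as the following could be used; all member names are hypothetical, and the pose representations (quaternion, 4x4 transforms) are assumptions of the sketch.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class EkfState:
    """Illustrative layout of the EKF state described above (names are hypothetical)."""
    p_WI: np.ndarray = field(default_factory=lambda: np.zeros(3))               # IMU global position
    q_WI: np.ndarray = field(default_factory=lambda: np.array([0, 0, 0, 1.0]))  # IMU global attitude (quaternion)
    v_WI: np.ndarray = field(default_factory=lambda: np.zeros(3))               # IMU global velocity
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))                # accelerometer bias
    b_g: np.ndarray = field(default_factory=lambda: np.zeros(3))                # gyroscope bias
    tag_poses: dict = field(default_factory=dict)                               # code id -> global pose (4x4), initialized from bundle optimization
    T_I_C: np.ndarray = field(default_factory=lambda: np.eye(4))                # camera-IMU extrinsics, initialized from bundle optimization
    T_I_O: np.ndarray = field(default_factory=lambda: np.eye(4))                # wheel-odometer-IMU extrinsics
    td_cam: float = 0.0                                                         # camera-IMU time offset
    td_odom: float = 0.0                                                        # odometer-IMU time offset
```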
2.2 State recurrence formula
When motion measurement information is received, the EKF propagates the state. Depending on the sensor configuration and the operating scene, three state recursion modes can be adopted:
A. three-dimensional recursion of IMU
The recursion process is as follows: when a frame of IMU measurement information is received, the collected accelerometer linear acceleration and gyroscope angular velocity are used, according to the dynamic model of the IMU, to propagate the state and covariance from time k-1 to time k as follows:
[equation image omitted]
where the quantities in the omitted equations denote, respectively: the predicted value of the state X_E at time k; the state value at time k-1; the accelerometer measurement of the IMU; the gyroscope measurement of the IMU; the covariance of the state at time k; the covariance of the state at time k-1; the first-order Jacobian matrix with respect to the system state; and the covariance of the IMU measurement noise.
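A simplified sketch of one such IMU prediction step is given below; it assumes a world frame with the z axis pointing up, a first-order rotation update, and that the state-transition Jacobian F and process noise covariance Q are computed elsewhere, so it should be read as an outline rather than the exact discretization in the omitted equations.

```python
import numpy as np

def propagate_imu(p, v, R, b_a, b_g, acc_m, gyro_m, P, F, Q, dt,
                  g=np.array([0.0, 0.0, -9.81])):
    """One IMU prediction step of an EKF (illustrative, simplified discretization).

    p, v, R       : position, velocity and rotation matrix of the IMU in the world frame
    b_a, b_g      : accelerometer / gyroscope biases
    acc_m, gyro_m : raw IMU measurements
    P, F, Q       : state covariance, state-transition Jacobian, process noise covariance
    """
    w = gyro_m - b_g                      # bias-corrected angular rate
    a = R @ (acc_m - b_a) + g             # bias-corrected acceleration in the world frame
    # First-order approximation of the rotation increment exp(skew(w) * dt)
    wx = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    R_next = R @ (np.eye(3) + wx * dt)
    p_next = p + v * dt + 0.5 * a * dt**2
    v_next = v + a * dt
    P_next = F @ P @ F.T + Q              # covariance propagation
    return p_next, v_next, R_next, P_next
```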
B. Three-dimensional recursion of wheel speed odometer
The recursion process is as follows: when a frame of wheel speed odometer measurement information is received, the collected linear velocity and angular velocity are used, according to the dynamic model of the wheel speed odometer, to propagate the state and covariance from time k-1 to time k as follows:
[equation image omitted]
where the quantities in the omitted equations denote, respectively: the predicted value of the state X_E at time k; the state value at time k-1; the linear velocity measurement of the wheel speed odometer; the angular velocity measurement of the wheel speed odometer; the covariance of the state at time k; the covariance of the state at time k-1; the first-order Jacobian matrix with respect to the system state; and the covariance of the wheel speed odometer measurement noise.
C. Hybrid recursion of wheel speed odometer and IMU
The recursion process is as follows: when a frame of wheel speed odometer and IMU measurement information is received, the collected wheel speed odometer linear velocity and IMU gyroscope angular velocity are used, according to the dynamic models of the wheel speed odometer and the gyroscope, to propagate the state and covariance from time k-1 to time k as follows:
[equation image omitted]
where the quantities in the omitted equations denote, respectively: the predicted value of the state X_E at time k; the state value at time k-1; the linear velocity measurement of the wheel speed odometer; the gyroscope measurement of the IMU; the covariance of the state at time k; the covariance of the state at time k-1; the first-order Jacobian matrix with respect to the system state; and the covariance of the wheel speed odometer and gyroscope measurement noise.
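For the hybrid mode, a planar sketch that combines the odometer linear velocity with the gyroscope yaw rate could look as follows; the planar, differential-drive style model is an assumption made for brevity, not the dynamic model in the omitted equations.

```python
import math

def propagate_wheel_gyro(x, y, yaw, v_odom, w_gyro, dt):
    """Hybrid prediction step: wheel-odometer linear velocity + IMU gyro yaw rate.

    A planar model is assumed for illustration; the pose advances along the
    current heading at the odometer speed while the heading follows the gyro.
    """
    x_next = x + v_odom * math.cos(yaw) * dt
    y_next = y + v_odom * math.sin(yaw) * dt
    yaw_next = yaw + w_gyro * dt
    return x_next, y_next, yaw_next

# The covariance propagates as P_next = F @ P @ F.T + Q, with F the first-order
# Jacobian of the model above and Q the combined odometer/gyro noise covariance.
```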
Three, state update
The robot can iteratively update the EKF state using the projection constraint of the preset label, the prior relative pose constraint of the preset label, the prior global pose constraint of the preset label, the measurement constraint of the wheel speed odometer, and the time offset constraint, to obtain the updated EKF state. The updated external parameters and time offsets of the sensors are fed back to the bundle optimization stage in real time: the updated external parameters serve as initial values for the external parameter optimization in bundle optimization, and the updated time offsets are used for time alignment and interpolation of the measurement constraints in bundle optimization. The details of the state update stage are as follows:
3.1 general nonlinear measurement function
To be compatible with multi-modal sensor measurements, the present application adopts a generic nonlinear measurement function, namely:
[equation image omitted]
where the quantities in the omitted equation denote, respectively: the measurement data of the k-th frame; the nonlinear measurement function of the k-th frame; the system state at the k-th frame; and the measurement noise at the k-th frame.
The nonlinear measurement function is linearized and the update is then carried out in the EKF filtering framework. Before each update, it is checked whether the measurement passes a 95% chi-squared test, so that erroneous measurement inputs do not corrupt the system state estimate.
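A generic EKF update with such a chi-squared gate can be sketched as follows (using scipy for the chi-squared quantile); the 95% confidence level follows the text, while the state and measurement layouts are left abstract.

```python
import numpy as np
from scipy.stats import chi2

def ekf_update(x, P, residual, H, R_meas, confidence=0.95):
    """Generic EKF measurement update with a chi-squared gating test (sketch).

    x, P     : state vector and covariance
    residual : z - h(x), the linearized measurement residual
    H        : first-order Jacobian of the measurement function
    R_meas   : measurement noise covariance
    Returns the updated (x, P); the measurement is rejected if it fails the gate.
    """
    S = H @ P @ H.T + R_meas                      # innovation covariance
    maha = residual.T @ np.linalg.solve(S, residual)
    if maha > chi2.ppf(confidence, df=residual.size):
        return x, P                               # gated out: skip this update
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ residual
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new
```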
3.2 Reprojection constraint measurement update of the two-dimensional codes
All the two-dimensional codes observed in each eye of the latest frame are traversed to obtain the state equation of the reprojection pose constraint of each two-dimensional code with respect to each eye of the latest frame, namely:
[equation image omitted]
where the omitted symbol denotes the relative pose of the i-th two-dimensional code, computed from the current EKF state, with respect to the latest frame image coordinate system.
Linearizing the above state equation gives the following error state equation:
[equation image omitted]
where H denotes the first-order Jacobian matrix with respect to the state error, and the residual denotes the relative pose error of the i-th two-dimensional code with respect to the coordinate system of the latest frame image.
3.3 Prior relative constraint update of the position and pose of the two-dimensional code
In one embodiment, the robot can add the prior two-dimensional code constraint information to the bundle optimization; the prior two-dimensional code constraint information is the relative pose of any two preset labels, or the pose of any one preset label in the world coordinate system. Namely:
among all the two-dimensional codes observed in each eye of the latest frame, if the i-th and j-th two-dimensional codes have a relative constraint with a known prior position and attitude (for example, the prior information is that the mean of the relative attitude between the i-th and j-th codes is 5 degrees with a variance of 3 degrees, and the mean of the relative position is 5 meters with a variance of 0.3 meter), a prior relative constraint on the position and attitude of the two-dimensional codes is constructed in the measurement update model, with the state constraint equation:
[equation image omitted]
where the omitted symbol denotes the relative pose of the i-th two-dimensional code with respect to the j-th two-dimensional code, computed from the current EKF state.
Linearizing the above state equation gives the following error state equation:
[equation image omitted]
where H denotes the first-order Jacobian matrix with respect to the state error, and the residual denotes the relative pose error of the i-th two-dimensional code with respect to the j-th two-dimensional code.
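A sketch of the residual for such a prior relative-pose constraint is shown below; the 6-DoF error parametrization (translation plus small-angle rotation) is one common choice and is an assumption of the sketch, not the patent's exact formulation.

```python
import numpy as np

def relative_pose_prior_residual(T_W_i, T_W_j, T_ij_prior):
    """Residual of the prior relative-pose constraint between codes i and j (sketch).

    T_W_i, T_W_j : current global pose estimates (4x4) of the i-th and j-th codes
    T_ij_prior   : prior relative pose of code j expressed in the frame of code i
    """
    T_ij_est = np.linalg.inv(T_W_i) @ T_W_j        # estimated relative pose
    dT = np.linalg.inv(T_ij_prior) @ T_ij_est      # deviation from the prior
    r_trans = dT[:3, 3]
    # Small-angle approximation of the rotational deviation (vee of the skew part)
    r_rot = 0.5 * np.array([dT[2, 1] - dT[1, 2],
                            dT[0, 2] - dT[2, 0],
                            dT[1, 0] - dT[0, 1]])
    return np.concatenate([r_trans, r_rot])
```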
3.4 Prior Global constraint update of position and pose of two-dimensional code
All the two-dimensional codes observed in each eye of the latest frame are traversed; if the i-th two-dimensional code has a global constraint with a known prior position and attitude (for example, the mean of its attitude in the world coordinate system is 5 degrees with a variance of 3 degrees, and/or the mean of its global position is 100 meters with a variance of 1 meter), a prior global constraint on the position and attitude of the i-th two-dimensional code is constructed in the measurement update model; its state equation and error state equation are similar to those in 3.3.
3.5 measurement constraint update of wheel speed odometer
When the wheel speed odometer measurements are not used for recursion in the prediction model, the wheel speed odometer pre-integration result can instead be used as a measurement constraint during the state update.
3.6 time offset on-line calibration
Because the camera acquisition clock deviates from the true clock, the state at the image timestamp can be approximately modeled as the state recursion estimate of the current frame from the IMU or wheel speed odometer plus an error term, and the time corresponding to this error term is approximately the time offset. Taking the time offset between the IMU and the camera as an example, the state equation is:
[equation image omitted]
where the quantities in the omitted equation denote, respectively: the recursion estimate of the current frame IMU state; and the estimation error caused by the time offset.
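To first order, the effect modeled here is simply a velocity-times-offset correction, as in the following minimal sketch.

```python
import numpy as np

def state_at_image_time(p_imu, v_imu, t_d):
    """First-order model of the time-offset effect (illustrative sketch).

    p_imu, v_imu : position and velocity propagated to the nominal camera timestamp
    t_d          : camera-IMU time offset being estimated online
    To first order, the position at the true image timestamp differs by v * t_d,
    which is the error term whose time corresponds to the time offset.
    """
    return p_imu + v_imu * t_d

# Example: a 10 ms offset at 1.2 m/s shifts the position estimate by 12 mm
print(state_at_image_time(np.zeros(3), np.array([1.2, 0.0, 0.0]), 0.01))
```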
Four, two-dimensional code map output
After the EKF state is updated, the optimized global pose of each two-dimensional code and the covariance of the global pose can be obtained. The global pose of each two-dimensional code can be used as a known quantity in the robot positioning process, and the covariance of the global pose of each two-dimensional code can be used as the reliability or uncertainty of the two-dimensional code pose in the robot positioning process.
For acquiring observations of the preset labels, tightly coupled sliding window bundle optimization is applied to the multi-frame, multi-view observations; once robust and accurate observations of the preset labels are obtained, the map of the preset labels is obtained with a loosely coupled EKF optimization. The external parameters and time offsets between the sensors after each EKF update can be fed back to the bundle optimization in real time. The method therefore combines tight and loose coupling, taking into account both the accuracy and stability of the preset label observations and the computational cost.
The application supports arbitrary placement of the preset labels, which greatly simplifies the deployment requirements of the label pasting process and facilitates large-scale adoption of vision-based mobile robots. To avoid unstable pose estimation caused by image blur, pose ambiguity and noise when only a single preset label observation is used, the method uses multi-frame, multi-view observations of the preset label information as reprojection error constraints in a nonlinear bundle optimization, and the optimized preset label poses are then used as observations added to the EKF. To exploit the prior information of the label pasting process as much as possible, the relative or global orientations and distances between preset labels can be used as optional prior constraints to improve the mapping accuracy of the preset labels.
In addition, since the accuracy of the sensor extrinsic parameters strongly affects the mapping result in a multi-sensor fusion mapping process, online optimization of the extrinsic parameters of each sensor is added to the EKF optimization. To handle clock asynchrony among the sensors, online estimation of the time offsets between the sensors is also added. Furthermore, for the bundle optimization of the multi-frame, multi-eye preset labels, a marginalization strategy based on preset label observation information is provided.
Based on the same concept, an embodiment of the present application provides a robot mapping apparatus, as shown in fig. 5, including: a bundle optimization unit 51, a state prediction unit 52, and a measurement update unit 53.
The cluster optimization unit 51 is configured to perform cluster optimization on continuous multi-frame image data, inertial measurement unit IMU data and wheel speed odometer data acquired within a preset time period, and acquire an optimized pose of a preset label in a camera coordinate system of the robot and a variance value of the optimized pose in a current frame image; the state prediction unit 52 is configured to perform extended kalman EKF state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determine a pose of the robot and a variance value of the pose of the robot; and the measurement updating unit 53 is configured to perform iterative updating of EKF measurement on the optimized pose of the preset tag, the variance value of the optimized pose of the preset tag, the pose of the robot, and the variance value of the pose of the robot, and acquire a target pose and a variance value of the target pose of the preset tag in the world coordinate system in continuous multi-frame images acquired within a preset time period.
In the embodiment of the application, when building the map the robot uses continuous multi-frame images together with IMU data and wheel speed odometer data, which helps ensure the accuracy of the map. In addition, bundle optimization and EKF state prediction are fused during mapping, so that the positioning accuracy of the robot mapping is higher and the target pose of the preset label in the world coordinate system, together with its variance value, can be calculated more quickly.
In a possible implementation manner, the measurement updating unit 53 may further perform online optimization on the external parameters and time offsets between the sensors during the iterative update process of the measurement of the EKF, and use the optimized external parameters and time offsets for bundle optimization; the external parameter comprises: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
In the embodiment of the application, external parameters and time offset are added during cluster optimization, so that the optimization pose of the preset label in the acquired current frame image in the camera coordinate system of the robot is more reliable, and the positioning accuracy of the robot map building is higher.
In a possible implementation manner, the bundle optimizing unit 51 is specifically configured to: acquiring continuous N frames of images acquired within a preset time period through a camera of the robot, screening the N frames of images, and determining M frames of images including preset labels; n, M is an integer; m is greater than or equal to 2 and less than or equal to N; the method comprises the steps of performing bundle optimization on position information of corner points of preset labels in M frames of images, projection information of the corner points of the preset labels in a coordinate system where a camera of the robot is located, IMU data and wheel speed odometer data, and determining an optimized pose of the preset labels in the coordinate system where the camera of the robot is located, a variance value of the optimized pose, a projection error of the preset labels, a Jacobian matrix and a covariance matrix.
In the embodiment of the application, the robot performs cluster optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in the coordinate system where the camera of the robot is located, IMU data and wheel speed odometry data, so that the optimized pose, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix of the preset labels in the coordinate system where the camera of the robot is located can be determined. By the method, the drawing establishing accuracy of the robot can be improved.
In one possible implementation, the continuous multiframe images in the bundle-optimized sliding window are determined by:
If the number of image frames in the sliding window is less than or equal to the sliding window threshold, bundle optimization is performed on all image frames; if the number of image frames in the sliding window is greater than the sliding window threshold, bundle optimization is performed on all image frames, preset frame images are screened out, and the constraints of the removed images on the sliding window are retained through the Schur complement method.
In the embodiment of the application, the image frames are screened in the sliding window, so that the positioning accuracy of the robot mapping is higher.
In one possible implementation, the screening of the preset frame images for bundle optimization is determined by one or more of the following ways:
acquiring an identification list of all preset labels in a current frame image;
rejecting images which do not comprise preset labels in the multi-frame images; if the identifier of the preset label detected in the Nth frame image is different from the identifier of the preset label in the Mth frame image, rejecting the Nth frame image; if the identification of the preset label in any frame of image is the same as the identification of the preset label in the Mth frame of image in the preset time period, scoring the images except the Mth frame of image according to the information degree, and eliminating the frame image with the lowest score; wherein M, N is an integer, and M is the number of the last frame images acquired within a preset time period.
In the embodiment of the application, the technical optimization is carried out by screening the preset frame images, so that the optimized pose of the preset label is more reliable.
In one possible implementation, the projection error and the jacobian matrix of the preset label may be determined as follows:
calculating two-dimensional coordinate information of the corner points of the preset label in the M frames of images by a perspective N-point projection PNP algorithm, and determining the initial pose of the preset label in a camera coordinate system of the robot; determining three-dimensional coordinate information of each corner point of the preset label in a world coordinate system according to the initial pose of the preset label in a camera coordinate system of the robot and the initial pose of the robot; projecting the three-dimensional coordinate information of the corner point of the preset label to a coordinate system where the camera is located to obtain two-dimensional coordinate information of the corner point of the preset label after projection; calculating a difference value between the two-dimensional coordinate information of the corner point of the projected preset label and the two-dimensional coordinate information of the corner point of the preset label before projection to determine a projection error of the preset label; and determining a Jacobian matrix of the projection error of the preset label relative to the optimized state quantity in the sliding window.
In the embodiment of the application, an initial pose of an ith preset label in an Mth frame image in a camera coordinate system is determined according to a three-dimensional coordinate of an angular point in the ith preset label in a preset label coordinate system and a two-dimensional pixel coordinate of the preset label in an image frame M of the ith preset label observed for the first time through a perspective-N-point (PNP) algorithm; according to the initial pose of the ith preset label in the M frame image and the global pose of the camera in the M frame image, determining global three-dimensional coordinate information of the corner point of the preset label, and then determining the projection error of the preset label to ensure the positioning accuracy of the robot mapping.
In one possible implementation mode, the variance value of the target pose is related to the angular point observation uncertainty of each preset label of the sliding window; wherein the observation uncertainty of the corner of any preset label in any image frame has a functional relation with the following parameters:
the method comprises the steps of presetting two-dimensional coordinate information of corner points of a label, presetting linear speed, angular speed and acceleration of an image acquisition device of an image frame where the label is located, and presetting pixel area of the label in the image frame.
In the embodiment of the application, the variance value of the target pose of the preset label has a functional relationship with various parameters, so that the variance value of the target pose obtained by the application is more consistent with an actual scene.
In a possible implementation manner, the bundle optimizing unit 51 is specifically configured to: adding the prior information of a preset label as a constraint item into bundle optimization and measurement updating; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in a world coordinate system.
In the embodiment of the application, the robot can also adjust the pose of the preset label in the world coordinate system based on the prior information, so that the robot is higher in mapping accuracy and more reliable in mapping data.
In a possible implementation manner, the robot may perform iterative update of EKF measurement on a projection constraint of a preset tag, a prior relative pose constraint of the preset tag, a prior global pose constraint of the preset tag, a measurement constraint of a wheel speed odometer, and a time offset constraint, to obtain an updated EKF state.
In the embodiment of the application, the accuracy of robot mapping can be improved by updating the projection constraint of the preset tag, the pose constraint of the preset tag, the time offset and the like.
In one possible implementation, the EKF status includes: and the target pose of the preset label in the world coordinate system, the external parameters among the sensors, the time offset among the sensors, the pose and the speed of the current frame robot and the zero offset of the IMU are preset in the continuous multi-frame images collected in the preset time period.
After the robot mapping method and apparatus in the exemplary embodiment of the present application are introduced, a computing device in another exemplary embodiment of the present application is introduced next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module" or "system."
In some possible implementations, a computing device according to the present application may include at least one processor, and at least one memory. Wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the robot mapping method according to various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform steps 201-203 as shown in fig. 2.
The computing device 130 according to this embodiment of the present application is described below with reference to fig. 6. The computing device 130 shown in fig. 6 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present application. As shown in fig. 6, the computing device 130 is embodied in the form of a general purpose smart terminal. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures. The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323. Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.) and/or any device (e.g., router, modem, etc.) that enables computing device 130 to communicate with one or more other intelligent terminals. Such communication may occur via input/output (I/O) interfaces 135. Also, the intelligent terminal 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 136. As shown, network adapter 136 communicates with other modules for computing device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, the aspects of the robot mapping method provided herein may also be implemented in the form of a program product comprising a computer program for causing a computer device to perform the steps of the robot mapping method according to the various exemplary embodiments of the present application described above in this specification, when the program product is run on the computer device. For example, the processor may perform steps 201-203 as shown in fig. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for robot mapping according to the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include a computer program, and may be run on a smart terminal. The program product of the present application is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (13)

1. A robot mapping method, comprising:
performing cluster optimization on continuous multi-frame image data, inertial measurement unit IMU data and wheel speed odometer data collected in a preset time period to obtain an optimized pose of a preset label in a camera coordinate system of the robot in a current frame image and a variance value of the optimized pose;
performing extended Kalman EKF state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determining a pose of the robot and a variance value of the pose of the robot;
and performing iterative updating of EKF measurement on the optimized pose of the preset label, the variance value of the optimized pose of the preset label, the pose of the robot and the variance value of the pose of the robot, and acquiring the target pose and the variance value of the target pose of the preset label in a world coordinate system in continuous multi-frame images acquired within a preset time period.
2. The method of claim 1, further comprising:
performing online optimization on external parameters and time offsets among the sensors in the iterative updating process of EKF measurement, and using the optimized external parameters and time offsets for bundle optimization; the external parameter comprises: the relative pose of the IMU and the camera, and the relative pose of the wheel speed odometer and the camera.
3. The method of claim 1, wherein performing cluster optimization on continuous multi-frame images, Inertial Measurement Unit (IMU) data and wheel speed odometer data acquired within a preset time period to obtain an optimized pose and a variance value of the optimized pose of each preset label in a camera coordinate system of the robot in a current frame image comprises:
acquiring continuous N frames of images acquired within a preset time period through a camera of the robot, screening the N frames of images, and determining M frames of images including preset labels; n, M is an integer; m is greater than or equal to 2 and less than or equal to N;
and performing bundle optimization on the position information of the corner points of the preset labels in the M frames of images, the projection information of the corner points of the preset labels in a coordinate system where a camera of the robot is located, the IMU data and the wheel speed odometry data to determine the optimized pose of the preset labels in the coordinate system where the camera of the robot is located, the variance value of the optimized pose, the projection error of the preset labels, the Jacobian matrix and the covariance matrix.
4. The method of claim 3, wherein the consecutive frames of images in the bundle-optimized sliding window are determined by:
if the frame number of the image frames in the sliding window is less than or equal to the sliding window threshold value, performing cluster optimization on all the image frames;
and if the frame number of the image frames in the sliding window is greater than the sliding window threshold value, performing bundle optimization on all the image frames, screening out preset frame images, and keeping the constraint of the rejected images on the sliding window through a Schur complement method.
5. The method of claim 4, wherein the screening out of the preset frame images is determined by one or more of the following methods:
acquiring an identification list of all preset labels in a current frame image;
rejecting images which do not comprise the preset label in the multi-frame images;
if the identifier of the preset label detected in the Nth frame image is different from the identifier of the preset label in the Mth frame image, rejecting the Nth frame image;
if the identification of the preset label in any frame of image is the same as the identification of the preset label in the Mth frame of image in the preset time period, scoring the images except the Mth frame of image according to the information degree, and eliminating the frame image with the lowest score;
wherein M, N is an integer, and M is the number of the last frame of image acquired in the preset time period.
6. The method of claim 3, wherein the projection error and the Jacobian matrix of the preset labels are determined by:
calculating two-dimensional coordinate information of the corner points of the preset label in the M frames of images by a perspective N-point projection PNP algorithm, and determining the initial pose of the preset label in a camera coordinate system of the robot;
determining three-dimensional coordinate information of each corner point of the preset label in a world coordinate system according to the initial pose of the preset label in a camera coordinate system of the robot and the initial pose of the robot;
projecting the three-dimensional coordinate information of the corner point of the preset label to a coordinate system where the camera is located to obtain two-dimensional coordinate information of the corner point of the preset label after projection;
calculating a difference value between the two-dimensional coordinate information of the corner point of the projected preset label and the two-dimensional coordinate information of the corner point of the preset label before projection to determine a projection error of the preset label;
and determining a Jacobian matrix of the projection error of the preset label relative to the optimized state quantity in the sliding window.
7. The method according to any one of claims 1-6, characterized in that the variance value of the target pose is related to the uncertainty of the observation of the corner points of each preset label of the sliding window; wherein, the observation uncertainty of the corner point of any preset label in any image frame has a functional relation with the following parameters:
the image acquisition device comprises two-dimensional coordinate information of corner points of the preset labels, linear speed, angular speed and acceleration of the image acquisition device of the image frame where the preset labels are located, and pixel areas of the preset labels in the image frame.
8. The method according to any one of claims 1-6, further comprising:
adding the prior information of a preset label as a constraint item into bundle optimization and measurement updating; the prior information of the preset labels is the relative pose and uncertainty variance of any two preset labels, or the global pose and uncertainty variance of any one preset label in a world coordinate system.
9. The method of claim 8, further comprising:
and carrying out iterative updating of EKF measurement on the projection constraint of the preset label, the prior relative pose constraint of the preset label, the prior global pose constraint of the preset label, the measurement constraint of the wheel speed odometer and the time offset constraint to obtain an updated EKF state.
10. The method of claim 9, wherein the EKF status comprises: and the target pose of the preset label in the world coordinate system, the external parameters among the sensors, the time offset among the sensors, the pose and the speed of the current frame robot and the zero offset of the IMU are preset in the continuous multi-frame images collected in the preset time period.
11. A robot mapping apparatus, comprising:
the cluster optimization unit is used for performing cluster optimization on continuous multi-frame image data, inertial measurement unit IMU data and wheel speed odometer data which are collected in a preset time period, and acquiring an optimized pose of a preset label in a camera coordinate system of the robot and a variance value of the optimized pose in a current frame image;
the state prediction unit is used for performing extended Kalman EKF state recursion on the IMU data and the wheel speed odometer data corresponding to the current frame image, and determining a pose of the robot and a variance value of the pose of the robot;
and the measurement updating unit is used for performing EKF measurement iteration updating on the optimized pose of the preset label, the variance value of the optimized pose of the preset label, the pose of the robot and the variance value of the pose of the robot, and acquiring the target pose and the variance value of the target pose of the preset label in a world coordinate system in continuous multi-frame images acquired within a preset time period.
12. A computing device, comprising: a memory and a processor;
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 10 in accordance with the obtained program.
13. A computer storage medium storing computer-executable instructions for performing the method of any one of claims 1-10.
CN202110348706.7A 2021-03-31 2021-03-31 Robot mapping method and device and computing equipment Active CN112734852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110348706.7A CN112734852B (en) 2021-03-31 2021-03-31 Robot mapping method and device and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110348706.7A CN112734852B (en) 2021-03-31 2021-03-31 Robot mapping method and device and computing equipment

Publications (2)

Publication Number Publication Date
CN112734852A true CN112734852A (en) 2021-04-30
CN112734852B CN112734852B (en) 2021-06-29

Family

ID=75596233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110348706.7A Active CN112734852B (en) 2021-03-31 2021-03-31 Robot mapping method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN112734852B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080144925A1 (en) * 2006-08-15 2008-06-19 Zhiwei Zhu Stereo-Based Visual Odometry Method and System
CN107145578A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Map constructing method, device, equipment and system
CN108180913A (en) * 2018-01-03 2018-06-19 深圳勇艺达机器人有限公司 A kind of Quick Response Code alignment system based on 3D cameras
CN110163909A (en) * 2018-02-12 2019-08-23 北京三星通信技术研究有限公司 For obtaining the method, apparatus and storage medium of equipment pose
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401445A (en) * 2020-03-16 2020-07-10 腾讯科技(深圳)有限公司 Training method of image recognition model, and image recognition method and device
CN113052855A (en) * 2021-02-26 2021-06-29 苏州迈思捷智能科技有限公司 Semantic SLAM method based on visual-IMU-wheel speed meter fusion
CN113052855B (en) * 2021-02-26 2021-11-02 苏州迈思捷智能科技有限公司 Semantic SLAM method based on visual-IMU-wheel speed meter fusion
CN113390408A (en) * 2021-06-30 2021-09-14 深圳市优必选科技股份有限公司 Robot positioning method and device, robot and storage medium
CN113379915A (en) * 2021-07-05 2021-09-10 广东工业大学 Driving scene construction method based on point cloud fusion
CN113658260A (en) * 2021-07-12 2021-11-16 南方科技大学 Robot pose calculation method and system, robot and storage medium
EP4130924A1 (en) * 2021-08-04 2023-02-08 Tata Consultancy Services Limited Method and system of dynamic localization of a telepresence robot based on live markers
CN113838129A (en) * 2021-08-12 2021-12-24 高德软件有限公司 Method, device and system for obtaining pose information
CN113838129B (en) * 2021-08-12 2024-03-15 高德软件有限公司 Method, device and system for obtaining pose information
CN113792564A (en) * 2021-09-29 2021-12-14 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code
CN113792564B (en) * 2021-09-29 2023-11-10 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code
CN114018284A (en) * 2021-10-13 2022-02-08 上海师范大学 Wheel speed odometer correction method based on vision
CN114018284B (en) * 2021-10-13 2024-01-23 上海师范大学 Wheel speed odometer correction method based on vision
CN114523471B (en) * 2022-01-07 2023-04-25 中国人民解放军海军军医大学第一附属医院 Error detection method based on association identification and robot system
CN114523471A (en) * 2022-01-07 2022-05-24 中国人民解放军海军军医大学第一附属医院 Error detection method based on associated identification and robot system
WO2024037295A1 (en) * 2022-08-16 2024-02-22 北京三快在线科技有限公司 Positioning
CN115388902A (en) * 2022-10-28 2022-11-25 苏州工业园区测绘地理信息有限公司 Indoor positioning method and system, AR indoor positioning navigation method and system

Also Published As

Publication number Publication date
CN112734852B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN112734852B (en) Robot mapping method and device and computing equipment
Cvišić et al. SOFT‐SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN113168717B (en) Point cloud matching method and device, navigation method and equipment, positioning method and laser radar
CN111210477B (en) Method and system for positioning moving object
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
JP2018124787A (en) Information processing device, data managing device, data managing system, method, and program
Chien et al. Visual odometry driven online calibration for monocular lidar-camera systems
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
Ji et al. RGB-D SLAM using vanishing point and door plate information in corridor environment
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
CN114485640A (en) Monocular vision inertia synchronous positioning and mapping method and system based on point-line characteristics
CN112541423A (en) Synchronous positioning and map construction method and system
CN117218350A (en) SLAM implementation method and system based on solid-state radar
CN115077519A (en) Positioning and mapping method and device based on template matching and laser inertial navigation loose coupling
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN116958452A (en) Three-dimensional reconstruction method and system
Wei et al. Novel robust simultaneous localization and mapping for long-term autonomous robots
Andersson et al. Simultaneous localization and mapping for vehicles using ORB-SLAM2
CN116977628A (en) SLAM method and system applied to dynamic environment and based on multi-mode semantic framework
Ye et al. Robust and efficient vehicles motion estimation with low-cost multi-camera and odometer-gyroscope
CN111862146A (en) Target object positioning method and device
CN114862953A (en) Mobile robot repositioning method and device based on visual features and 3D laser

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant