WO2024001649A1 - Robot positioning method and device, and computer-readable storage medium - Google Patents
Robot positioning method and device, and computer-readable storage medium
- Publication number
- WO2024001649A1 WO2024001649A1 PCT/CN2023/097297 CN2023097297W WO2024001649A1 WO 2024001649 A1 WO2024001649 A1 WO 2024001649A1 CN 2023097297 W CN2023097297 W CN 2023097297W WO 2024001649 A1 WO2024001649 A1 WO 2024001649A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- pose
- data
- laser
- point cloud
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
Definitions
- This application relates to the field of artificial intelligence, and in particular to robot positioning methods, devices and computer-readable storage media.
- In the related art, the robot positioning method is a QR-code-based positioning method. Specifically, a number of navigation QR codes are attached throughout the robot's workplace; each QR code encodes the robot's position information at that location, so as long as the robot can observe a navigation QR code, it can localize itself by decoding the information contained in the code.
- The above QR-code-based positioning method is therefore premised on the robot being able to observe a navigation QR code.
- In practical scenarios, however, the navigation QR codes in the robot's workplace may be spaced far apart, or may not be installable for various reasons; for example, the spacing between navigation QR codes may exceed 1 meter. Once the robot runs for a long time or over a long distance without observing a navigation QR code, positioning becomes inaccurate or the cumulative error keeps growing, eventually causing the robot to drift off course or even become unable to continue performing its task.
- this application provides a robot positioning method, device and computer-readable storage medium, which can improve the accuracy of robot positioning.
- the first aspect of this application provides a robot positioning method, including:
- acquiring laser point cloud data through a laser sensor mounted on the robot;
- acquiring pose data of the robot through a motion sensor mounted on the robot;
- calculating a prior pose of the robot based on a navigation QR code and the pose data and based on an extended Kalman filter;
- matching the laser point cloud data with a robot map based on the prior pose of the robot to obtain a first pose of the robot;
- fusing the first pose of the robot with a second pose currently output by the extended Kalman filter to obtain the final pose of the robot.
- a second aspect of this application provides a robot positioning device, including:
- the first acquisition module is used to acquire laser point cloud data through the laser sensor mounted on the robot;
- the second acquisition module is used to acquire the pose data of the robot through the motion sensor mounted on the robot;
- a calculation module configured to calculate the prior pose of the robot based on the navigation QR code and the pose data and based on the extended Kalman filter;
- a matching module for matching the laser point cloud data with the robot map based on the prior pose of the robot to obtain the first pose of the robot;
- a fusion module configured to fuse the first pose of the robot with the second pose currently output by the extended Kalman filter to obtain the final pose of the robot.
- the third aspect of this application provides an electronic device, including:
- a processor; and
- a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method described above.
- a fourth aspect of the present application provides a computer-readable storage medium on which executable code is stored.
- when the executable code is executed by a processor of the electronic device, the processor is caused to perform the method described above.
- the technical solution of this application calculates the prior pose of the robot based on the pose data obtained from the navigation QR code and the motion sensor and based on the extended Kalman filter; based on the prior pose, the laser point cloud data is matched with the robot map to obtain the first pose of the robot; finally, the first pose of the robot is fused with the second pose currently output by the extended Kalman filter to obtain the final pose of the robot.
- unlike existing techniques that rely entirely on navigation QR codes to obtain the robot's pose, the technical solution of this application takes navigation QR codes as a basis and fuses pose data obtained by multiple sensors, so that even when the navigation QR codes are sparsely arranged, the robot's pose data can still be obtained and the robot can be positioned more accurately.
- Figure 1 is a schematic flowchart of a robot positioning method provided by an embodiment of the present application
- Figure 2 is a schematic diagram of aligning multi-sensor data on time stamps provided by an embodiment of the present application
- Figure 3 is a schematic structural diagram of a robot positioning device provided by an embodiment of the present application.
- Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- although the terms first, second, third, etc. may be used in this application to describe various information, such information should not be limited by these terms; these terms are only used to distinguish information of the same type from one another.
- for example, without departing from the scope of this application, first information may also be called second information, and similarly, second information may also be called first information; therefore, features defined as "first" and "second" may explicitly or implicitly include one or more of such features.
- in the description of this application, "plurality" means two or more, unless otherwise expressly and specifically limited.
- in the related art, the robot positioning method is a QR-code-based positioning method.
- specifically, a number of navigation QR codes are attached throughout the robot's workplace; each QR code encodes the robot's position information at that location, so as long as the robot can observe a navigation QR code, it can localize itself by decoding the information contained in the code.
- the above QR-code-based positioning method is premised on the robot being able to observe a navigation QR code.
- in practical scenarios, however, the navigation QR codes in the robot's workplace may be spaced far apart, or may not be installable for various reasons.
- for example, when the spacing between navigation QR codes exceeds 1 meter, once the robot runs for a long time or over a long distance without observing a navigation QR code, positioning becomes inaccurate or the cumulative error keeps growing, eventually causing the robot to drift off course or even become unable to continue performing its task.
- embodiments of the present application provide a robot positioning method, which can improve the accuracy of robot positioning.
- FIG. 1 is a schematic flowchart of a robot positioning method according to an embodiment of the present application, which mainly includes steps S101 to S105, described as follows:
- Step S101: Obtain laser point cloud data through the laser sensor mounted on the robot.
- Laser point cloud data is the information carried by the points returned from a target surface after the laser beam emitted by the laser sensor hits the target during scanning. Because the number of points returned from the target surface is generally large (typically on the order of tens or hundreds of thousands), the points resemble a cloud, hence the name laser point cloud.
- the information carried by the laser point cloud, such as three-dimensional coordinates, target texture, reflection intensity and number of echoes, constitutes the laser point cloud data.
- the laser sensor mounted on the robot may be a 2D laser sensor such as lidar or millimeter wave radar.
- in order to perceive the environment around the robot over a larger range and avoid obstacles, the laser sensor mounted on the robot may be at least one 2D laser sensor deployed at each of several positions on the robot (they may be deployed at the front, rear, left, right or diagonal positions of the robot; this application does not limit the specific deployment positions). There may be an installation position error when the at least two 2D laser sensors are installed, or, after the robot has been running for a period of time, the installation positions of the originally registered at least two 2D laser sensors may have developed errors. Considering that the scanning range of a single 2D laser sensor is 270°, when the data of the installed 2D laser sensors are spliced together, there is inevitably a region of data overlap.
- when the at least two 2D laser sensors have the above errors, the data acquired by the 2D laser sensors cannot be matched exactly in the overlap region. Therefore, in one embodiment of the present application, when the laser sensor mounted on the robot consists of 2D laser sensors deployed on the robot, before acquiring the laser point cloud data through the laser sensors, the 2D laser sensors are calibrated offline to obtain the installation position error of each 2D laser sensor. After the installation position errors are obtained, when the data of the installed 2D laser sensors need to be spliced, these installation position errors can be used as compensation data, so that the data acquired by the 2D laser sensors can be matched exactly where their scans overlap.
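- As a small illustration of this compensation idea, the following sketch (not taken from the patent text; the function names, the (x, y, yaw) mounting-pose layout and the additive error model are assumptions of the example) splices scans from several 2D laser sensors after correcting each sensor's nominal mounting pose with its calibrated installation error:

```python
import numpy as np

def to_robot_frame(points, mount_pose):
    """Transform a sensor's 2-D points into the robot frame using its
    (x, y, yaw) mounting pose."""
    c, s = np.cos(mount_pose[2]), np.sin(mount_pose[2])
    rotation = np.array([[c, -s], [s, c]])
    return points @ rotation.T + mount_pose[:2]

def splice_scans(scans, nominal_mounts, mount_errors):
    """Concatenate scans from several 2D laser sensors, compensating each
    sensor's nominal mounting pose with its calibrated installation error."""
    merged = []
    for pts, mount, err in zip(scans, nominal_mounts, mount_errors):
        corrected = mount + err          # installation error used as compensation data
        merged.append(to_robot_frame(pts, corrected))
    return np.vstack(merged)

# Illustrative use: two sensors, one point each, nominal mounts at front and rear.
scans = [np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]])]
mounts = [np.array([0.3, 0.0, 0.0]), np.array([-0.3, 0.0, np.pi])]
errors = [np.zeros(3), np.array([0.01, 0.0, 0.002])]
print(splice_scans(scans, mounts, errors))
```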
- in the above embodiment, offline calibration of the at least two 2D laser sensors may be performed by having the robot move in a specific test site with good conditions (for example, a sunny day, good lighting, and navigation QR codes arranged densely enough that the robot can observe a navigation QR code most of the time), relying only on the at least two 2D laser sensors, without involving other sensors, and performing a coordinate transformation between the robot pose obtained by map matching and the pose obtained from the navigation QR codes. From the resulting transformation matrix, the actual installation positions of the at least two 2D laser sensors can be calculated, and thus their installation position errors can be obtained.
- although the at least two 2D laser sensors can be calibrated offline in this way to obtain their installation position errors, offline calibration places strict requirements on the test site.
- in another embodiment of the present application, before acquiring laser point cloud data through the laser sensors mounted on the robot, the at least two 2D laser sensors can instead be calibrated online to obtain their installation position errors.
- so-called online calibration means that while the robot is running in any site, it can rely on the various sensors deployed on it to calibrate the at least two 2D laser sensors during operation.
- compared with offline calibration, the advantage of online calibration is that there are few restrictions on the site, so it can be performed at any time; in addition, since the installation positions of the originally registered 2D laser sensors may develop errors after the robot has been running for a while, online calibration can resolve such long-term drift in real time.
- online calibration of the 2D laser sensors to obtain their installation position errors may be performed as follows: perform feature point matching on the image data acquired by a visual device to obtain the reprojection error corresponding to the feature points; perform pose correction and registration on two frames of point clouds in the laser point cloud data acquired by any one of the 2D laser sensors, and calculate the relative pose between the two point cloud frames; calculate the pose deviation between the two point cloud frames from the pose data obtained by the motion sensor; and, within a set time period, iteratively optimize based on the reprojection error corresponding to the feature points, the relative pose between the two point cloud frames and the pose deviation between the two point cloud frames, to obtain the actual installation position of each 2D laser sensor. After the actual installation positions are obtained, the installation position errors are obtained by subtraction.
- in the above embodiment, the visual device may be a visual sensor such as a monocular camera, a binocular camera or a depth camera, and the motion sensor may be a wheel odometer, an inertial measurement unit (IMU), or the like.
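- The following is a minimal sketch of the iterative-optimization idea behind such online calibration, under strong simplifying assumptions: it estimates a planar laser-to-robot installation offset from the relative poses between point cloud frames (from registration) and the corresponding motion-sensor relative poses, using a hand-eye style residual; the visual reprojection-error term described above is omitted, and the use of scipy, the SE(2) helpers and the sample data are assumptions of this example, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def se2(x, y, th):
    return np.array([x, y, th])

def se2_compose(a, b):
    # Compose two SE(2) poses a ∘ b.
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([a[0] + c * b[0] - s * b[1],
                     a[1] + s * b[0] + c * b[1],
                     a[2] + b[2]])

def se2_inverse(a):
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([-c * a[0] - s * a[1],
                      s * a[0] - c * a[1],
                     -a[2]])

def residuals(extrinsic, scan_motions, odom_motions):
    # Hand-eye style residuals: the sensor motion mapped through the
    # extrinsic should equal the motion-sensor (odometry) motion.
    res = []
    for ds, do in zip(scan_motions, odom_motions):
        pred = se2_compose(se2_compose(extrinsic, ds), se2_inverse(extrinsic))
        diff = se2_compose(se2_inverse(do), pred)
        diff[2] = (diff[2] + np.pi) % (2 * np.pi) - np.pi
        res.extend(diff)
    return np.array(res)

# scan_motions: relative poses between registered point cloud frames;
# odom_motions: corresponding relative poses from the motion sensor (illustrative values only).
scan_motions = [se2(0.10, 0.00, 0.02), se2(0.12, -0.01, 0.03)]
odom_motions = [se2(0.10, 0.02, 0.02), se2(0.12, 0.01, 0.03)]
estimate = least_squares(residuals, x0=np.zeros(3), args=(scan_motions, odom_motions))
print(estimate.x)  # estimated installation offset (dx, dy, dtheta) of the laser sensor
```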
- Step S102: Obtain the pose data of the robot through the motion sensor mounted on the robot.
- in this embodiment of the present application, the motion sensor mounted on the robot may be a sensor such as the aforementioned wheel odometer or inertial measurement unit (IMU), and the pose data of the robot obtained by the motion sensor includes information such as the robot's three-dimensional coordinates, acceleration, velocity and orientation.
- it should be noted that a sensor itself takes a certain time dt from data acquisition to output, so the true sensor data collection time should be T - dt, where T is the actual UTC time of the frame output, as given by the sensor supplier. Moreover, because different sensors have inconsistent sampling frequencies even after hardware synchronization, in multi-sensor fusion positioning the data obtained by the sensors are inevitably not synchronized in their timestamps.
- in view of this, after acquiring the laser point cloud data through the laser sensor mounted on the robot and acquiring the robot's pose data through the motion sensor mounted on the robot, the laser point cloud data and the robot's pose data are aligned in time.
- specifically, considering that a linear interpolation algorithm is simple and computationally cheap, the above alignment can be performed by aligning the timestamps of the laser point cloud data and the robot's pose data through linear interpolation.
- taking the motion sensors of the above embodiment as including an IMU and a wheel odometer as an example, as shown in Figure 2, assume the pose data of the robot acquired by the IMU at time t_i is D_ti. Ideally the wheel odometer would also obtain the robot's pose data at time t_i; however, due to inconsistent sampling frequencies and other reasons, the wheel odometer can only obtain the robot's pose data D'_li at time t'_li, which means the data between the sensors are not aligned in their timestamps. The laser point cloud data collected by the laser sensor is likewise not aligned in timestamp with the data collected by the IMU and the wheel odometer; that is, while the IMU obtains the robot pose data D_ti at time t_i, the laser sensor can only obtain the laser point cloud data D'_xi at time t'_xi. For these situations, a data alignment scheme is needed.
- in one embodiment of the present application, aligning the timestamps of the laser point cloud data and the robot's pose data with a linear interpolation algorithm may be done as follows: the robot pose data obtained by the IMU at the timestamps adjacent to (before and after) the current frame of laser point cloud data is used to interpolate the robot pose data, so that the interpolated robot pose data is aligned with the current laser point cloud frame collected by the laser sensor; and the robot pose data obtained by the wheel odometer at the timestamps adjacent to (before and after) the current frame of laser point cloud data is used to interpolate the robot pose data in the same way.
- still taking Figure 2 as an example, using the robot pose data obtained by the IMU at the adjacent timestamps, i.e., the pose data at time t_i-1 and the pose data D_ti at time t_i, the robot pose data is interpolated to obtain the robot's interpolated pose data at time t'_xi; as can be seen from Figure 2, after this interpolation the interpolated pose data is aligned with the current laser point cloud frame D'_xi collected by the laser sensor. Similarly, using the robot pose data obtained by the wheel odometer at the adjacent timestamps, i.e., the pose data D'_li at time t'_li and D'_li+1 at time t'_li+1, the robot pose data is interpolated to obtain the robot's interpolated pose data at time t'_xi, which is likewise aligned with the current laser point cloud frame D'_xi collected by the laser sensor.
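- As an illustration of this alignment step, the following is a minimal sketch (not taken from the patent text) of linearly interpolating a motion-sensor pose to a laser frame timestamp; the function names and the (x, y, yaw) pose layout are assumptions of the example.

```python
import numpy as np

def wrap_angle(a):
    # Wrap an angle to (-pi, pi] so yaw interpolation stays continuous.
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def interpolate_pose(t_prev, pose_prev, t_next, pose_next, t_query):
    """Linearly interpolate an (x, y, yaw) pose to time t_query.

    pose_prev / pose_next are the motion-sensor poses at the timestamps
    adjacent to (before and after) the laser frame timestamp t_query.
    """
    if not (t_prev <= t_query <= t_next):
        raise ValueError("t_query must lie between the two sensor timestamps")
    alpha = (t_query - t_prev) / (t_next - t_prev)
    x = (1 - alpha) * pose_prev[0] + alpha * pose_next[0]
    y = (1 - alpha) * pose_prev[1] + alpha * pose_next[1]
    dyaw = wrap_angle(pose_next[2] - pose_prev[2])
    yaw = wrap_angle(pose_prev[2] + alpha * dyaw)
    return np.array([x, y, yaw])

# Example: IMU poses at t_i-1 = 0.00 s and t_i = 0.10 s, laser frame at t'_xi = 0.04 s.
pose_at_laser_time = interpolate_pose(
    0.00, np.array([1.00, 2.00, 0.10]),
    0.10, np.array([1.05, 2.02, 0.14]),
    0.04)
print(pose_at_laser_time)  # pose aligned with the current laser frame
```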
- Step S103: Calculate the prior pose of the robot based on the navigation QR code and kinematic data and based on the extended Kalman filter.
- in this embodiment of the present application, the navigation QR code carries the position information of the robot when the robot moves to the place where the navigation QR code is attached, together with the pose information of the robot obtained by the IMU mounted on the robot; this pose information has high confidence, so it can be used as the observation for the extended Kalman filter when calculating the robot's pose information, while the data obtained by the wheel odometer mounted on the robot can be used as the prior value for the extended Kalman filter when calculating the robot's pose information.
- as an algorithm that uses a linear system state equation, Kalman filtering can optimally estimate the system state from the system's input and output observation data; in other words, the Kalman filtering process processes the observation data input to the system and obtains the estimate of the true signal with the smallest error.
- the extended Kalman filter (EKF) performs a first-order linearization truncation of the Taylor expansion of a nonlinear function and ignores the remaining higher-order terms, thereby converting a nonlinear problem into a linear one. Therefore, in this embodiment of the present application, after obtaining the position information of the robot carried by the navigation QR code and the pose data of the robot obtained by the motion sensor (where the position information of the robot when it moves to the place where the navigation QR code is attached, together with the robot's pose information obtained by the on-board IMU, serves as the observation for the extended Kalman filter, and the data obtained by the on-board wheel odometer serves as the prior value), the prior pose of the robot can be calculated based on the navigation QR code and the kinematic data and based on the extended Kalman filter.
- specifically, calculating the robot's prior pose may proceed as follows: obtain the target pose information of the robot at time k-1 (i.e., the moment before time k); determine an estimate of the robot's pose information at time k based on the robot's pose data obtained by the motion sensor (including the robot's pose information obtained by the on-board IMU and the robot's pose information obtained by the on-board wheel odometer) and the robot's target pose information at time k-1; obtain an estimate of the robot's covariance matrix at time k; determine the Kalman gain from the estimate of the covariance matrix at time k; and determine the robot's target pose information at time k, which serves as the robot's prior pose, from the position information of the robot when it moved to the place where the navigation QR code is attached, the estimate of the robot's pose information at time k, and the Kalman gain.
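- A minimal sketch of this prediction/update cycle is given below, assuming a planar (x, y, yaw) state, an odometry increment as the prediction input and the QR-code position as the observation; the matrices, noise terms and function names are illustrative assumptions for the example, not the patent's implementation.

```python
import numpy as np

def wrap_angle(a):
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_prior_pose(pose_k1, P_k1, odom_delta, qr_position, Q, R):
    """One EKF cycle producing the robot's pose at time k from the pose at k-1.

    pose_k1     : target pose (x, y, yaw) at time k-1
    P_k1        : covariance estimate at time k-1
    odom_delta  : wheel-odometry increment (dx, dy, dyaw) in the robot frame
    qr_position : observed (x, y) position carried by the navigation QR code
    Q, R        : process and observation noise covariances (assumed known)
    """
    x, y, yaw = pose_k1
    dx, dy, dyaw = odom_delta

    # Prediction: propagate the pose with the odometry increment (prior value).
    c, s = np.cos(yaw), np.sin(yaw)
    pose_pred = np.array([x + c * dx - s * dy,
                          y + s * dx + c * dy,
                          wrap_angle(yaw + dyaw)])
    F = np.array([[1.0, 0.0, -s * dx - c * dy],
                  [0.0, 1.0,  c * dx - s * dy],
                  [0.0, 0.0, 1.0]])
    P_pred = F @ P_k1 @ F.T + Q

    # Update: the QR-code position acts as the observation of (x, y).
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    innovation = qr_position - pose_pred[:2]
    pose_k = pose_pred + K @ innovation
    pose_k[2] = wrap_angle(pose_k[2])
    P_k = (np.eye(3) - K @ H) @ P_pred
    return pose_k, P_k                       # prior pose and covariance for the next cycle
```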
- Step S104: Based on the prior pose of the robot, match the laser point cloud data with the robot map to obtain the first pose of the robot.
- in the environment where the robot moves, besides immovable objects such as walls and shelves, there are also moving targets such as people, other robots or other objects. These moving targets interfere with the robot's positioning, or act as noise during data matching. Therefore, before matching the laser point cloud data with the robot map based on the robot's prior pose, the laser point cloud data corresponding to moving targets is removed from the laser point cloud data through a data association algorithm.
- the data association algorithm may be the Joint Compatibility Branch and Bound (JCBB) algorithm, the Individual Compatibility Nearest Neighbor (ICNN) algorithm, or improvements of either.
- for example, considering the trade-off between association accuracy and computational efficiency of the JCBB and ICNN algorithms, embodiments of the present application may adopt a hybrid adaptive data association scheme based on association criteria: under the constraints of the basic association criteria, the data association results produced by the ICNN algorithm are checked; if a data association result is judged to be wrong, the JCBB algorithm is used to re-associate the data, thereby improving association accuracy and efficiency.
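- As a highly simplified illustration of the idea (not the patent's scheme), the sketch below performs ICNN-style individual nearest-neighbor association with a Mahalanobis gate and then applies a crude joint-consistency check as a stand-in for the JCBB criterion; the thresholds, function names and the decision rule are assumptions of this example, and a real JCBB search over joint hypotheses is considerably more involved.

```python
import numpy as np

def icnn_associate(observations, landmarks, cov, gate=9.21):
    """Associate each 2-D observation with its nearest landmark, accepting
    the pair only if the Mahalanobis distance passes the gate
    (9.21 is roughly the chi-square 99% bound for 2 degrees of freedom)."""
    inv_cov = np.linalg.inv(cov)
    pairs = []
    for i, z in enumerate(observations):
        d2 = np.array([(z - lm) @ inv_cov @ (z - lm) for lm in landmarks])
        j = int(np.argmin(d2))
        if d2[j] < gate:
            pairs.append((i, j, float(d2[j])))
    return pairs

def jointly_consistent(pairs, max_mean_d2=4.0):
    """Crude joint check standing in for the JCBB criterion: if the mean
    gated distance is too large, the individual associations are deemed
    unreliable and should be re-associated with a joint (JCBB-style) search."""
    if not pairs:
        return True
    return float(np.mean([d2 for _, _, d2 in pairs])) < max_mean_d2

# Illustrative use: observations not passing the joint check would be re-associated.
obs = np.array([[1.0, 0.1], [2.1, 0.0]])
lms = np.array([[1.0, 0.0], [2.0, 0.0], [5.0, 5.0]])
pairs = icnn_associate(obs, lms, cov=0.05 * np.eye(2))
print(pairs, jointly_consistent(pairs))
```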
- in the above embodiment, the robot map may be a two-dimensional grid map. A two-dimensional grid map is also called a two-dimensional grid probability map or a two-dimensional occupancy grid map; this kind of map divides the plane into grid cells, and each cell is assigned an occupancy probability.
- the occupancy probability refers to the probability, on the two-dimensional occupancy grid map, that a cell is occupied by an obstacle (occupied state), not occupied by an obstacle (free state), or somewhere between the two states; the occupied state is represented by 1, the free state by 0, and states in between by a value between 0 and 1.
- the occupancy probability thus indicates how likely a cell is to be occupied by an obstacle: the greater a cell's occupancy probability, the more likely the cell is occupied by an obstacle, and vice versa.
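- As a small illustration of this representation (not taken from the patent text; the grid size, resolution and helper function are assumptions of the example), the sketch below builds a tiny occupancy grid in which each cell holds a value between 0 (free) and 1 (occupied):

```python
import numpy as np

# A 10 m x 10 m map at 0.05 m resolution; every cell starts at 0.5,
# i.e., between the free state (0) and the occupied state (1).
resolution = 0.05
grid = np.full((200, 200), 0.5)

def mark_obstacle(grid, x, y, origin=(0.0, 0.0), resolution=0.05):
    """Set the cell containing world point (x, y) to the occupied state (1)."""
    col = int((x - origin[0]) / resolution)
    row = int((y - origin[1]) / resolution)
    grid[row, col] = 1.0

mark_obstacle(grid, 3.0, 4.0)
print(grid[int(4.0 / resolution), int(3.0 / resolution)])  # -> 1.0, occupied
```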
- considering that the computation cycle of the extended Kalman filter is short, while the cycle of matching the laser point cloud data with the robot map is longer, the laser point cloud data can be matched with the robot map based on the robot's prior pose to obtain the robot's first pose.
- corresponding to the robot map being a two-dimensional grid map, matching the laser point cloud data with the robot map based on the robot's prior pose to obtain the robot's first pose may specifically be: determining several candidate poses in a pose search space from the robot's prior pose; using each of the candidate poses, projecting the laser point cloud data onto the two-dimensional grid map and calculating the matching score of each candidate pose on the two-dimensional grid map; and determining the candidate pose with the highest matching score on the two-dimensional grid map as the robot's first pose.
- the candidate poses in the pose search space are determined from the robot's prior pose; specifically, a Ceres solver searches for several candidate poses of the robot in the vicinity of the prior pose. When calculating the matching score of each candidate pose on the two-dimensional grid map, the robot's pose can be taken as the state to construct a nonlinear least squares problem, with the error E1 as the error constraint of the nonlinear least squares problem, where E1 is the difference between the robot's estimated pose and the global pose observation; the robot's pose is iteratively optimized until the error E1 is minimal. When E1 is minimal, one of the candidate poses has the highest matching score on the two-dimensional grid map, and that candidate pose is determined as the robot's first pose.
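- The following sketch illustrates the candidate-pose scoring step described above; it is not the patent's implementation (which uses a Ceres-based search), and instead substitutes a simple window search over candidates around the prior pose, with the grid layout, window sizes and function names as assumptions of the example.

```python
import numpy as np

def score_pose(grid, resolution, origin, scan_xy, pose):
    """Sum the occupancy values hit by the scan when projected with `pose`.

    grid       : 2-D array of occupancy values in [0, 1]
    resolution : metres per cell; origin : world coordinates of cell (0, 0)
    scan_xy    : Nx2 laser points in the robot frame
    pose       : candidate (x, y, yaw)
    """
    c, s = np.cos(pose[2]), np.sin(pose[2])
    world = scan_xy @ np.array([[c, s], [-s, c]]) + pose[:2]   # rotate then translate
    cells = np.floor((world - origin) / resolution).astype(int)
    h, w = grid.shape
    valid = (cells[:, 0] >= 0) & (cells[:, 0] < w) & (cells[:, 1] >= 0) & (cells[:, 1] < h)
    return grid[cells[valid, 1], cells[valid, 0]].sum()

def match_scan(grid, resolution, origin, scan_xy, prior_pose,
               xy_window=0.2, yaw_window=0.1, steps=5):
    """Search a small window of candidate poses around the prior pose and
    return the candidate with the highest matching score (the first pose)."""
    best_pose, best_score = prior_pose, -np.inf
    for dx in np.linspace(-xy_window, xy_window, steps):
        for dy in np.linspace(-xy_window, xy_window, steps):
            for dyaw in np.linspace(-yaw_window, yaw_window, steps):
                cand = prior_pose + np.array([dx, dy, dyaw])
                sc = score_pose(grid, resolution, origin, scan_xy, cand)
                if sc > best_score:
                    best_pose, best_score = cand, sc
    return best_pose, best_score
```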
- Step S105: Fuse the first pose of the robot with the second pose currently output by the extended Kalman filter to obtain the final pose of the robot.
- on the one hand, motion sensors such as the wheel odometer accumulate error, which needs to be corrected by other sensors, and matching the laser point cloud data against the robot map also carries the error of the two-dimensional grid map, which needs to be eliminated through optimized matching and an accurate prior; on the other hand, the robot's first pose obtained by matching the laser point cloud data with the robot map, the second pose currently output by the extended Kalman filter, and the robot's prior pose calculated from the pose data obtained by the navigation QR code and the motion sensor via the extended Kalman filter differ from one another. For robot states obtained through different methods, a more credible and accurate value needs to be obtained through data fusion.
- therefore, in this embodiment of the present application, the robot's first pose can be fused with the second pose currently output by the extended Kalman filter to obtain the robot's final pose.
- specifically, the difference between the robot's first pose and the robot's prior pose can be calculated to obtain a pose deviation; the sum of this pose deviation and the second pose currently output by the extended Kalman filter then gives the robot's final pose.
- here, the second pose currently output by the extended Kalman filter is the robot pose data calculated by the extended Kalman filter using the robot pose data obtained from the navigation QR code and the motion sensor as its input.
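- A minimal sketch of this fusion rule (deviation between the first pose and the prior pose, added to the second pose) is shown below; the (x, y, yaw) layout and the yaw wrapping are assumptions of the example.

```python
import numpy as np

def wrap_angle(a):
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def fuse_poses(first_pose, prior_pose, second_pose):
    """Fuse the map-matching result with the EKF output as described above.

    first_pose  : pose from matching the laser point cloud against the map
    prior_pose  : EKF prior pose used for the matching
    second_pose : pose currently output by the extended Kalman filter
    All poses are (x, y, yaw).
    """
    deviation = first_pose - prior_pose            # pose deviation
    deviation[2] = wrap_angle(deviation[2])
    final_pose = second_pose + deviation           # deviation + second pose
    final_pose[2] = wrap_angle(final_pose[2])
    return final_pose

# Illustrative values only
final = fuse_poses(np.array([1.02, 2.01, 0.12]),
                   np.array([1.00, 2.00, 0.10]),
                   np.array([1.01, 2.02, 0.11]))
print(final)
```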
- as can be seen from the robot positioning method illustrated in Figure 1, the technical solution of this application calculates the robot's prior pose from the kinematic data obtained by the navigation QR code and the motion sensors and based on the extended Kalman filter, then matches the laser point cloud data with the robot map based on the prior pose to obtain the robot's first pose, and finally fuses the first pose with the second pose currently output by the extended Kalman filter to obtain the robot's final pose. Unlike existing techniques that rely entirely on navigation QR codes to obtain the robot's pose, the technical solution of this application takes navigation QR codes as a basis and fuses pose data obtained by multiple sensors, so that even when the navigation QR codes are sparsely arranged, the robot's pose data can still be obtained and the robot can be positioned more accurately.
- FIG. 3 is a schematic structural diagram of a robot positioning device according to an embodiment of the present application. For ease of explanation, only parts related to the embodiments of the present application are shown.
- the robot positioning device illustrated in Figure 3 mainly includes a first acquisition module 301, a second acquisition module 302, a calculation module 303, a matching module 304 and a fusion module 305, where:
- the first acquisition module 301 is used to acquire laser point cloud data through the laser sensor mounted on the robot;
- the second acquisition module 302 is used to acquire the pose data of the robot through the motion sensor mounted on the robot;
- the calculation module 303 is used to calculate the prior pose of the robot based on the pose data of the robot obtained from the navigation QR code and the motion sensor and based on the extended Kalman filter;
- the matching module 304 is used to match the laser point cloud data with the robot map based on the prior pose of the robot to obtain the first pose of the robot;
- the fusion module 305 is used to fuse the first pose of the robot with the second pose currently output by the extended Kalman filter to obtain the final pose of the robot.
- as can be seen from the robot positioning device illustrated in Figure 3, the technical solution of this application calculates the robot's prior pose from the kinematic data obtained by the navigation QR code and the motion sensor and based on the extended Kalman filter, then matches the laser point cloud data with the robot map based on the prior pose to obtain the robot's first pose, and finally fuses the first pose with the second pose currently output by the extended Kalman filter to obtain the robot's final pose.
- unlike existing techniques that rely entirely on navigation QR codes to obtain the robot's pose, the technical solution of this application takes navigation QR codes as a basis and fuses pose data obtained by multiple sensors, so that even when the navigation QR codes are sparsely arranged, the robot's pose data can still be obtained and the robot can be positioned more accurately.
- FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- electronic device 400 includes memory 410 and processor 420 .
- the processor 420 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc.
- the memory 410 may include various types of storage units, such as system memory, read-only memory (ROM) and a persistent storage device. The ROM may store static data or instructions required by the processor 420 or other modules of the computer. The persistent storage device may be a readable and writable storage device, i.e., a non-volatile storage device that does not lose the stored instructions and data even when the computer is powered off. In some embodiments, a mass storage device (such as a magnetic or optical disk, or flash memory) is used as the persistent storage device; in some other embodiments, the persistent storage device may be a removable storage device (such as a floppy disk or an optical drive).
- the system memory may be a readable and writable storage device or a volatile readable and writable storage device, such as dynamic random access memory, and may store some or all of the instructions and data the processor needs at runtime.
- in addition, the memory 410 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), magnetic disks and/or optical disks.
- in some embodiments, the memory 410 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density disc, a flash memory card (such as an SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disk, and so on.
- computer-readable storage media do not include carrier waves or transient electronic signals transmitted wirelessly or over wires.
- the memory 410 stores executable code which, when processed by the processor 420, can cause the processor 420 to perform part or all of the methods described above.
- in addition, the method according to the present application may also be implemented as a computer program or computer program product, which includes computer program code instructions for executing part or all of the steps of the above method of the present application.
- alternatively, the present application may also be implemented as a computer-readable storage medium (or a non-transitory machine-readable storage medium, or a machine-readable storage medium) on which executable code (or a computer program, or computer instruction code) is stored; when the executable code (or computer program, or computer instruction code) is executed by a processor of an electronic device (or a server, etc.), the processor is caused to perform part or all of the steps of the above method according to the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Navigation (AREA)
Abstract
A robot positioning method and device, an electronic device and a computer-readable storage medium. The method includes: acquiring laser point cloud data through a laser sensor mounted on a robot (S101); acquiring pose data of the robot through a motion sensor mounted on the robot (S102); calculating a prior pose of the robot according to a navigation QR code and the pose data of the robot obtained by the motion sensor and based on an extended Kalman filter (S103); matching the laser point cloud data with a robot map based on the prior pose of the robot, to obtain a first pose of the robot (S104); and fusing the first pose of the robot with a second pose currently output by the extended Kalman filter, to obtain the final pose of the robot (S105). The method can improve the accuracy of robot positioning.
Description
This application relates to the field of artificial intelligence, and in particular to a robot positioning method and device, and a computer-readable storage medium.
With the development of artificial intelligence (AI) technology, robots endowed with AI have been widely used in fields such as catering, healthcare and warehousing. When applying these AI robots, a key point is being able to position the robot accurately, because only with accurate positioning can the robot reach its destination smoothly and complete its assigned tasks. In the related art, the robot positioning method is a QR-code-based positioning method: a number of navigation QR codes are attached throughout the robot's workplace, and these QR codes contain the robot's position information at that location, so as long as the robot can observe a navigation QR code, it can localize itself by decoding the information contained in the code.
Obviously, the above QR-code-based positioning method is premised on the robot being able to observe a navigation QR code. In practical application scenarios, however, the navigation QR codes in the robot's workplace may be spaced far apart, or may not be installable for various reasons; for example, when the spacing between navigation QR codes exceeds 1 meter, once the robot runs for a long time or over a long distance without observing a navigation QR code, positioning becomes inaccurate or the cumulative error keeps growing, eventually causing the robot to drift off course or even become unable to continue performing its tasks.
Summary of the Invention
To solve or partially solve the problems existing in the related art, this application provides a robot positioning method and device, and a computer-readable storage medium, which can improve the accuracy of robot positioning.
A first aspect of this application provides a robot positioning method, including:
acquiring laser point cloud data through a laser sensor mounted on a robot;
acquiring pose data of the robot through a motion sensor mounted on the robot;
calculating a prior pose of the robot according to a navigation QR code and the pose data and based on an extended Kalman filter;
matching the laser point cloud data with a robot map based on the prior pose of the robot, to obtain a first pose of the robot;
fusing the first pose of the robot with a second pose currently output by the extended Kalman filter, to obtain the final pose of the robot.
A second aspect of this application provides a robot positioning device, including:
a first acquisition module, configured to acquire laser point cloud data through a laser sensor mounted on a robot;
a second acquisition module, configured to acquire pose data of the robot through a motion sensor mounted on the robot;
a calculation module, configured to calculate a prior pose of the robot according to a navigation QR code and the pose data and based on an extended Kalman filter;
a matching module, configured to match the laser point cloud data with a robot map based on the prior pose of the robot, to obtain a first pose of the robot;
a fusion module, configured to fuse the first pose of the robot with a second pose currently output by the extended Kalman filter, to obtain the final pose of the robot.
A third aspect of this application provides an electronic device, including:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method described above.
A fourth aspect of this application provides a computer-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the method described above.
As can be seen from the technical solution provided by this application, after the prior pose of the robot is calculated according to the pose data obtained from the navigation QR code and the motion sensor and based on the extended Kalman filter, the laser point cloud data is matched with the robot map based on the prior pose to obtain the robot's first pose, and finally the first pose is fused with the second pose currently output by the extended Kalman filter to obtain the robot's final pose. Unlike the existing technology, which relies entirely on navigation QR codes to obtain the robot's pose, the technical solution of this application takes navigation QR codes as a basis and fuses pose data obtained by multiple sensors, so that even when the navigation QR codes are sparsely arranged, the robot's pose data can still be obtained and the robot can be positioned relatively accurately.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit this application.
The above and other objects, features and advantages of this application will become more apparent by describing exemplary embodiments of this application in more detail with reference to the accompanying drawings, in which the same reference numerals generally represent the same components.
Figure 1 is a schematic flowchart of a robot positioning method provided by an embodiment of this application;
Figure 2 is a schematic diagram of aligning multi-sensor data on timestamps provided by an embodiment of this application;
Figure 3 is a schematic structural diagram of a robot positioning device provided by an embodiment of this application;
Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Embodiments of this application will be described in more detail below with reference to the accompanying drawings. Although the drawings show embodiments of this application, it should be understood that this application can be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that this application will be more thorough and complete, and so that the scope of this application can be fully conveyed to those skilled in the art.
The terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit this application. The singular forms "a", "the" and "said" used in this application and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in this application to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this application, first information may also be called second information, and similarly, second information may also be called first information. Accordingly, features defined as "first" and "second" may explicitly or implicitly include one or more of such features. In the description of this application, "a plurality of" means two or more,
unless otherwise expressly and specifically limited.
When applying AI robots, a key point is being able to position the robot accurately, because only with accurate positioning can the robot reach its destination smoothly and complete its assigned tasks. In the related art, the robot positioning method is a QR-code-based positioning method: a number of navigation QR codes are attached throughout the robot's workplace, and these QR codes contain the robot's position information at that location, so as long as the robot can observe a navigation QR code, it can localize itself by decoding the information contained in the code. Obviously, this QR-code-based positioning method is premised on the robot being able to observe a navigation QR code. In practical application scenarios, however, the navigation QR codes in the robot's workplace may be spaced far apart, or may not be installable for various reasons; for example, when the spacing between navigation QR codes exceeds 1 meter, once the robot runs for a long time or over a long distance without observing a navigation QR code, positioning becomes inaccurate or the cumulative error keeps growing, eventually causing the robot to drift off course or even become unable to continue performing its tasks.
In view of the above problems, embodiments of this application provide a robot positioning method that can improve the accuracy of robot positioning.
The technical solutions of the embodiments of this application are described in detail below with reference to the accompanying drawings.
Referring to Figure 1, which is a schematic flowchart of a robot positioning method shown in an embodiment of this application, the method mainly includes steps S101 to S105, described as follows:
Step S101: Acquire laser point cloud data through a laser sensor mounted on the robot.
Laser point cloud data is the information carried by the points returned from a target surface after the laser beam emitted by the laser sensor hits the target during scanning. Because the number of points returned from the target surface is generally large (typically on the order of tens or hundreds of thousands), the points resemble a cloud, hence the name laser point cloud; the information carried by the laser point cloud, such as three-dimensional coordinates, target texture, reflection intensity and number of echoes, constitutes the laser point cloud data. In the embodiments of this application, the laser sensor mounted on the robot may be a 2D laser sensor such as a lidar or a millimeter-wave radar.
In order to perceive the environment around the robot over a larger range and avoid obstacles, in one embodiment of this application the laser sensor mounted on the robot may be at least one 2D laser sensor deployed at each of several positions on the robot (they may be deployed at the front, rear, left, right or diagonal positions of the robot; this application does not limit the specific deployment positions). There may be an installation position error when these at least two 2D laser sensors are installed, or, after the robot has been running for a period of time, the installation positions of the originally registered at least two 2D laser sensors may have developed errors. Considering that the scanning range of a single 2D laser sensor is 270°, when the data of the installed 2D laser sensors are spliced, there is inevitably a region of data overlap, and when the at least two 2D laser sensors have the above errors, their data cannot be matched exactly in the overlap region. Therefore, in one embodiment of this application, when the laser sensor mounted on the robot consists of 2D laser sensors deployed on the robot, before the laser point cloud data is acquired through the laser sensors, the 2D laser sensors are calibrated offline to obtain the installation position error of each 2D laser sensor. After the installation position errors are obtained, when the data of the installed 2D laser sensors need to be spliced, these installation position errors can be used as compensation data, so that the data acquired by the 2D laser sensors can be matched exactly where their scans overlap.
In the above embodiment, offline calibration of the at least two 2D laser sensors may be performed by having the robot move in a specific test site with good conditions (for example, a sunny day, good lighting, and navigation QR codes arranged densely enough that the robot can observe a navigation QR code most of the time), relying only on the at least two 2D laser sensors without involving other sensors, and performing a coordinate transformation between the robot pose obtained by map matching and the pose obtained from the navigation QR codes; from the resulting transformation matrix, the actual installation positions of the at least two 2D laser sensors can be calculated, and thus their installation position errors obtained. As can be seen from the above embodiment, although the at least two 2D laser sensors can be calibrated offline to obtain their installation position errors, offline calibration places strict requirements on the test site. In another embodiment of this application, before the laser point cloud data is acquired through the laser sensors mounted on the robot, the at least two 2D laser sensors can be calibrated online to obtain their installation position errors. So-called online calibration means that while the robot is running in any site, it can rely on the various sensors deployed on it to calibrate the at least two 2D laser sensors during operation. Compared with offline calibration, the advantage of online calibration is that there are few restrictions on the site, so it can be performed at any time; in addition, as mentioned above, the installation positions of the originally registered at least two 2D laser sensors may develop errors after the robot has been running for a while, so online calibration can resolve, in real time, the 2D laser sensor installation position errors caused by long-term operation of the robot.
As an embodiment of this application, online calibration of the 2D laser sensors to obtain their installation position errors may be: performing feature point matching on the image data acquired by a visual device to obtain the reprojection error corresponding to the feature points; performing pose correction and registration on two frames of point clouds in the laser point cloud data acquired by any one of the 2D laser sensors, and calculating the relative pose between the two point cloud frames; calculating the pose deviation between the two point cloud frames from the pose data obtained by the motion sensor; and, within a set time period, iteratively optimizing based on the reprojection error corresponding to the feature points, the relative pose between the two point cloud frames and the pose deviation between the two point cloud frames, to obtain the actual installation position of each 2D laser sensor. After the actual installation positions of the 2D laser sensors are obtained, the installation position errors of the 2D laser sensors are obtained by subtraction. In the above embodiment, the visual device may be a visual sensor such as a monocular camera, a binocular camera or a depth camera, and the motion sensor may be a wheel odometer, an inertial measurement unit (IMU), or the like.
Step S102: Acquire pose data of the robot through a motion sensor mounted on the robot.
In the embodiments of this application, the motion sensor mounted on the robot may be a sensor such as the aforementioned wheel odometer or inertial measurement unit (IMU), and the pose data of the robot obtained by the motion sensor includes information such as the robot's three-dimensional coordinates, acceleration, velocity and orientation.
It should be noted that a sensor itself takes a certain time dt from data acquisition to output, so the true sensor data collection time should be T - dt, where T is the actual UTC time of this frame output as given by the sensor supplier; moreover, because different sensors have inconsistent sampling frequencies even after hardware synchronization, in multi-sensor fusion positioning the data obtained by the sensors is inevitably not synchronized in its timestamps. In view of this, in the embodiments of this application, after the laser point cloud data is acquired through the laser sensor mounted on the robot and the robot's pose data is acquired through the motion sensor mounted on the robot, the laser point cloud data and the robot's pose data are aligned in time. Specifically, considering that a linear interpolation algorithm is simple and computationally cheap, the alignment can be performed by aligning the timestamps of the laser point cloud data and the robot's pose data through linear interpolation. Taking the motion sensors of the above embodiment as including an IMU and a wheel odometer as an example, as shown in Figure 2, assume the pose data of the robot acquired by the IMU at time t_i is D_ti; ideally, the wheel odometer would also obtain the robot's pose data at time t_i, but due to inconsistent sampling frequencies and other reasons, the wheel odometer can only obtain the robot's pose data D'_li at time t'_li, which means the data between the sensors is not aligned in its timestamps. The laser point cloud data collected by the laser sensor is likewise not aligned in timestamp with the data collected by the IMU and the wheel odometer; that is, while the IMU obtains the robot pose data D_ti at time t_i, the laser sensor can only obtain the laser point cloud data D'_xi at time t'_xi. For these situations, a data alignment scheme is needed.
In one embodiment of this application, aligning the timestamps of the laser point cloud data and the robot's pose data with a linear interpolation algorithm may be: using the robot pose data obtained by the IMU at the timestamps adjacent to (before and after) the current frame of laser point cloud data to interpolate the robot pose data, so that the interpolated robot pose data is aligned with the current laser point cloud frame collected by the laser sensor; and using the robot pose data obtained by the wheel odometer at the timestamps adjacent to (before and after) the current frame of laser point cloud data to interpolate the robot pose data, so that the interpolated robot pose data is aligned with the current laser point cloud frame collected by the laser sensor. Still taking Figure 2 as an example, using the robot pose data obtained by the IMU at the adjacent timestamps, i.e., the pose data at time t_i-1 and the pose data D_ti at time t_i, the robot pose data is interpolated to obtain the robot's interpolated pose data at time t'_xi; as can be seen from Figure 2, after this interpolation the robot's interpolated pose data is aligned with the current laser point cloud frame D'_xi collected by the laser sensor. Similarly, using the robot pose data obtained by the wheel odometer at the adjacent timestamps, i.e., the pose data D'_li at time t'_li and D'_li+1 at time t'_li+1, the robot pose data is interpolated to obtain the robot's interpolated pose data at time t'_xi, which, as can be seen from Figure 2, is likewise aligned with the current laser point cloud frame D'_xi collected by the laser sensor.
Step S103: Calculate the prior pose of the robot according to the navigation QR code and the kinematic data and based on an extended Kalman filter.
In the embodiments of this application, the navigation QR code contains the position information of the robot when the robot moves to the place where the navigation QR code is attached, together with the robot's pose information obtained by the on-board IMU; this pose information has high confidence, so it can serve as the observation for the extended Kalman filter when calculating the robot's pose information, while the data obtained by the on-board wheel odometer can serve as the prior value for the extended Kalman filter when calculating the robot's pose information. As an algorithm that uses a linear system state equation, Kalman filtering can optimally estimate the system state from the system's input and output observation data; in other words, the Kalman filtering process processes the observation data input to the system and obtains the estimate of the true signal with the smallest error. The extended Kalman filter (EKF) performs a first-order linearization truncation of the Taylor expansion of a nonlinear function and ignores the remaining higher-order terms, thereby converting a nonlinear problem into a linear one. Therefore, in the embodiments of this application, after obtaining the position information of the robot carried by the navigation QR code and the robot's pose data obtained by the motion sensor (where the position information of the robot when it moves to the place where the navigation QR code is attached, together with the robot's pose information obtained by the on-board IMU, serves as the observation for the extended Kalman filter, and the data obtained by the on-board wheel odometer serves as the prior value), the prior pose of the robot can be calculated according to the navigation QR code and the kinematic data and based on the extended Kalman filter. Specifically, calculating the robot's prior pose may be: obtaining the target pose information of the robot at time k-1 (i.e., the moment before time k); determining an estimate of the robot's pose information at time k based on the robot's pose data obtained by the motion sensor (including the robot's pose information obtained by the on-board IMU and the robot's pose information obtained by the on-board wheel odometer) and the robot's target pose information at time k-1; obtaining an estimate of the robot's covariance matrix at time k; determining the Kalman gain from the estimate of the covariance matrix at time k; and determining, from the position information of the robot when it moved to the place where the navigation QR code is attached, the estimate of the robot's pose information at time k and the Kalman gain, the robot's target pose information at time k as the robot's prior pose.
Step S104: Based on the prior pose of the robot, match the laser point cloud data with the robot map to obtain the first pose of the robot.
In the environment where the robot moves, besides immovable objects such as walls and shelves, there are also moving targets such as moving people, robots or other objects. These moving targets interfere with the robot's positioning, or act as noise during data matching. Therefore, in the above embodiment, before matching the laser point cloud data with the robot map based on the robot's prior pose, the laser point cloud data corresponding to moving targets is removed from the laser point cloud data through a data association algorithm. The data association algorithm may be the Joint Compatibility Branch and Bound (JCBB) algorithm, the Individual Compatibility Nearest Neighbor (ICNN) algorithm, or improvements of either. For example, considering the conflict between association accuracy and computational efficiency of the JCBB and ICNN algorithms in the data association process, embodiments of this application may design a hybrid adaptive data association scheme based on association criteria: under the constraints of the basic association criteria, the data association results produced by the ICNN algorithm are checked for correctness; if a data association result is judged to be wrong, the JCBB algorithm is used to re-associate the data, thereby improving association accuracy and efficiency.
In the above embodiment, the robot map may be a two-dimensional grid map. A two-dimensional grid map is also called a two-dimensional grid probability map or a two-dimensional occupancy grid map; this kind of map divides the plane into grid cells, and each cell is assigned an occupancy probability. The occupancy probability refers to the probability, on the two-dimensional occupancy grid map, that a cell is occupied by an obstacle (occupied state), not occupied by an obstacle (free state), or somewhere between the two states; the occupied state is represented by 1, the free state by 0, and states in between by values between 0 and 1. Clearly, the occupancy probability indicates how likely a cell is to be occupied by an obstacle: the greater a cell's occupancy probability, the more likely the cell is occupied by an obstacle, and vice versa. Considering that the computation cycle of the extended Kalman filter is short while the cycle of matching the laser point cloud data with the robot map is longer, the laser point cloud data can be matched with the robot map based on the robot's prior pose to obtain the robot's first pose. Corresponding to the robot map being a two-dimensional grid map, matching the laser point cloud data with the robot map based on the robot's prior pose to obtain the robot's first pose may specifically be: determining several candidate poses in a pose search space from the robot's prior pose; using each of the candidate poses, projecting the laser point cloud data onto the two-dimensional grid map and calculating the matching score of each candidate pose on the two-dimensional grid map; and determining the candidate pose with the highest matching score on the two-dimensional grid map as the robot's first pose. Determining the candidate poses in the pose search space from the robot's prior pose is specifically done by searching for several candidate poses of the robot in the vicinity of the prior pose with a Ceres solver; when calculating the matching score of each candidate pose on the two-dimensional grid map, the robot's pose can be taken as the state to construct a nonlinear least squares problem, with the error E1 as the error constraint of the nonlinear least squares problem, and the robot's pose is iteratively optimized until the error E1 is minimal, where E1 is the difference between the robot's estimated pose and the global pose observation. When E1 is minimal, one of the candidate poses has the highest matching score on the two-dimensional grid map, and that candidate pose is determined as the robot's first pose.
Step S105: Fuse the first pose of the robot with the second pose currently output by the extended Kalman filter to obtain the final pose of the robot.
On the one hand, motion sensors such as the wheel odometer accumulate error, which needs to be corrected by other sensors, and matching the laser point cloud data against the robot map also carries the error of the two-dimensional grid map, which needs to be eliminated through optimized matching and accurate prior values; on the other hand, the robot's first pose obtained by matching the laser point cloud data with the robot map, the second pose currently output by the extended Kalman filter, and the robot's prior pose calculated from the pose data obtained by the navigation QR code and the motion sensor and based on the extended Kalman filter differ from one another. For robot states obtained through different methods, a more credible and accurate value needs to be obtained through data fusion. Therefore, in the embodiments of this application, the robot's first pose can be fused with the second pose currently output by the extended Kalman filter to obtain the robot's final pose. Specifically, the difference between the robot's first pose and the robot's prior pose can be calculated to obtain a pose deviation, and the sum of this pose deviation and the second pose currently output by the extended Kalman filter gives the robot's final pose. Here, the second pose currently output by the extended Kalman filter is the robot pose data calculated by the extended Kalman filter using, as its input, the robot pose data obtained from the navigation QR code and the motion sensor.
As can be seen from the robot positioning method illustrated in Figure 1 above, the technical solution of this application calculates the robot's prior pose from the kinematic data obtained by the navigation QR code and the motion sensors and based on the extended Kalman filter, then matches the laser point cloud data with the robot map based on the robot's prior pose to obtain the robot's first pose, and finally fuses the first pose with the second pose currently output by the extended Kalman filter to obtain the robot's final pose. Unlike the existing technology, which relies entirely on navigation QR codes to obtain the robot's pose, the technical solution of this application takes navigation QR codes as a basis and fuses pose data obtained by multiple sensors, so that even when the navigation QR codes are sparsely arranged, the robot's pose data can still be obtained and the robot can be positioned relatively accurately.
Referring to Figure 3, which is a schematic structural diagram of a robot positioning device shown in an embodiment of this application; for ease of explanation, only the parts related to the embodiments of this application are shown. The robot positioning device illustrated in Figure 3 mainly includes a first acquisition module 301, a second acquisition module 302, a calculation module 303, a matching module 304 and a fusion module 305, where:
the first acquisition module 301 is configured to acquire laser point cloud data through a laser sensor mounted on the robot;
the second acquisition module 302 is configured to acquire pose data of the robot through a motion sensor mounted on the robot;
the calculation module 303 is configured to calculate the prior pose of the robot according to the robot's pose data obtained from the navigation QR code and the motion sensor and based on the extended Kalman filter;
the matching module 304 is configured to match the laser point cloud data with the robot map based on the prior pose of the robot, to obtain the first pose of the robot;
the fusion module 305 is configured to fuse the first pose of the robot with the second pose currently output by the extended Kalman filter, to obtain the final pose of the robot.
Regarding the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
As can be seen from the robot positioning device illustrated in Figure 3 above, the technical solution of this application calculates the robot's prior pose from the kinematic data obtained by the navigation QR code and the motion sensor and based on the extended Kalman filter, then matches the laser point cloud data with the robot map based on the prior pose to obtain the robot's first pose, and finally fuses the first pose with the second pose currently output by the extended Kalman filter to obtain the robot's final pose. Unlike the existing technology, which relies entirely on navigation QR codes to obtain the robot's pose, the technical solution of this application takes navigation QR codes as a basis and fuses pose data obtained by multiple sensors, so that even when the navigation QR codes are sparsely arranged, the robot's pose data can still be obtained and the robot can be positioned relatively accurately.
Figure 4 is a schematic structural diagram of an electronic device shown in an embodiment of this application.
Referring to Figure 4, the electronic device 400 includes a memory 410 and a processor 420.
The processor 420 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc.
The memory 410 may include various types of storage units, such as system memory, read-only memory (ROM) and a persistent storage device. The ROM may store static data or instructions required by the processor 420 or other modules of the computer. The persistent storage device may be a readable and writable storage device, i.e., a non-volatile storage device that does not lose the stored instructions and data even when the computer is powered off. In some embodiments, a mass storage device (such as a magnetic or optical disk, or flash memory) is used as the persistent storage device; in some other embodiments, the persistent storage device may be a removable storage device (such as a floppy disk or an optical drive). The system memory may be a readable and writable storage device or a volatile readable and writable storage device, such as dynamic random access memory; the system memory may store some or all of the instructions and data the processor needs at runtime. In addition, the memory 410 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory); magnetic disks and/or optical disks may also be used. In some embodiments, the memory 410 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density disc, a flash memory card (such as an SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disk, and so on. Computer-readable storage media do not include carrier waves or transient electronic signals transmitted wirelessly or over wires.
The memory 410 stores executable code which, when processed by the processor 420, can cause the processor 420 to perform part or all of the methods described above.
In addition, the method according to this application may also be implemented as a computer program or computer program product, which includes computer program code instructions for executing part or all of the steps of the above method of this application.
Alternatively, this application may also be implemented as a computer-readable storage medium (or a non-transitory machine-readable storage medium, or a machine-readable storage medium) on which executable code (or a computer program, or computer instruction code) is stored; when the executable code (or computer program, or computer instruction code) is executed by a processor of an electronic device (or a server, etc.), the processor is caused to perform part or all of the steps of the above method according to this application.
The embodiments of this application have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The choice of terms used herein is intended to best explain the principles of the embodiments, their practical applications or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (11)
- A robot positioning method, characterized in that the method includes: acquiring laser point cloud data through a laser sensor mounted on a robot; acquiring pose data of the robot through a motion sensor mounted on the robot; calculating a prior pose of the robot according to a navigation QR code and the pose data and based on an extended Kalman filter; matching the laser point cloud data with a robot map based on the prior pose of the robot, to obtain a first pose of the robot; and fusing the first pose of the robot with a second pose currently output by the extended Kalman filter, to obtain the final pose of the robot.
- The robot positioning method according to claim 1, characterized in that the laser sensor includes at least one 2D laser sensor deployed on the robot, and the method further includes: before acquiring the laser point cloud data through the laser sensor mounted on the robot, calibrating each of the at least one 2D laser sensor offline or online to obtain the installation position error of each of the at least one 2D laser sensor.
- The robot positioning method according to claim 2, characterized in that the online calibration of each of the at least one 2D laser sensor to obtain the installation position error of each of the at least one 2D laser sensor includes: performing feature point matching on image data acquired by a visual device, to obtain a reprojection error corresponding to the feature points; performing pose correction and registration on two frames of point clouds in the laser point cloud data acquired by any one of the 2D laser sensors, and calculating the relative pose between the two point cloud frames; calculating a pose deviation between the two point cloud frames from the pose data obtained by the motion sensor; and, within a set time period, performing iterative optimization based on the reprojection error, the relative pose and the pose deviation, to obtain the actual installation position of the 2D laser sensor so as to obtain the installation position error of the 2D laser sensor.
- The robot positioning method according to claim 1, characterized in that the method further includes: after acquiring the laser point cloud data through the laser sensor mounted on the robot and acquiring the pose data of the robot through the motion sensor mounted on the robot, aligning the laser point cloud data and the pose data of the robot in time.
- The robot positioning method according to claim 4, characterized in that the aligning of the laser point cloud data and the pose data of the robot in time includes: aligning the timestamps of the laser point cloud data and the pose data of the robot through a linear interpolation algorithm.
- The robot positioning method according to claim 1, characterized in that the method further includes: before matching the laser point cloud data with the robot map based on the prior pose of the robot, removing laser point cloud data corresponding to moving targets from the laser point cloud data through a data association algorithm.
- The robot positioning method according to claim 1, characterized in that the robot map is a two-dimensional grid map, and the matching of the laser point cloud data with the robot map based on the prior pose of the robot to obtain the first pose of the robot includes: determining several candidate poses in a pose search space from the prior pose of the robot; using each of the several candidate poses, projecting the laser point cloud data onto the two-dimensional grid map and calculating the matching score of each candidate pose on the two-dimensional grid map; and determining, as the first pose of the robot, the candidate pose with the highest matching score on the two-dimensional grid map among the several candidate poses.
- The robot positioning method according to claim 1, characterized in that the fusing of the first pose of the robot with the second pose currently output by the extended Kalman filter to obtain the final pose of the robot includes: calculating the difference between the first pose of the robot and the prior pose of the robot, to obtain a pose deviation; and taking the sum of the pose deviation and the second pose currently output by the extended Kalman filter, to obtain the final pose of the robot.
- A robot positioning device, characterized in that the device includes: a first acquisition module, configured to acquire laser point cloud data through a laser sensor mounted on a robot; a second acquisition module, configured to acquire pose data of the robot through a motion sensor mounted on the robot; a calculation module, configured to calculate a prior pose of the robot according to a navigation QR code and the pose data and based on an extended Kalman filter; a matching module, configured to match the laser point cloud data with a robot map based on the prior pose of the robot, to obtain a first pose of the robot; and a fusion module, configured to fuse the first pose of the robot with a second pose currently output by the extended Kalman filter, to obtain the final pose of the robot.
- An electronic device, characterized by including: a processor; and a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method according to any one of claims 1 to 8.
- A computer-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the method according to any one of claims 1 to 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210746832.2 | 2022-06-29 | ||
CN202210746832.2A CN117367419A (zh) | 2022-06-29 | 2022-06-29 | 机器人定位方法、装置和计算可读存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024001649A1 true WO2024001649A1 (zh) | 2024-01-04 |
Family
ID=89382782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/097297 WO2024001649A1 (zh) | 2022-06-29 | 2023-05-31 | 机器人定位方法、装置和计算可读存储介质 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN117367419A (zh) |
TW (1) | TW202411613A (zh) |
WO (1) | WO2024001649A1 (zh) |
-
2022
- 2022-06-29 CN CN202210746832.2A patent/CN117367419A/zh active Pending
-
2023
- 2023-05-31 WO PCT/CN2023/097297 patent/WO2024001649A1/zh unknown
- 2023-06-14 TW TW112122106A patent/TW202411613A/zh unknown
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113804184A (zh) * | 2020-06-15 | 2021-12-17 | 上海知步邦智能科技有限公司 | 基于多传感器的地面机器人定位方法 |
WO2022105024A1 (zh) * | 2020-11-17 | 2022-05-27 | 深圳市优必选科技股份有限公司 | 机器人位姿的确定方法、装置、机器人及存储介质 |
CN112764053A (zh) * | 2020-12-29 | 2021-05-07 | 深圳市普渡科技有限公司 | 一种融合定位方法、装置、设备和计算机可读存储介质 |
CN112598757A (zh) * | 2021-03-03 | 2021-04-02 | 之江实验室 | 一种多传感器时间空间标定方法及装置 |
CN112964291A (zh) * | 2021-04-02 | 2021-06-15 | 清华大学 | 一种传感器标定的方法、装置、计算机存储介质及终端 |
CN113091736A (zh) * | 2021-04-02 | 2021-07-09 | 京东数科海益信息科技有限公司 | 机器人定位方法、装置、机器人及存储介质 |
CN113538699A (zh) * | 2021-06-21 | 2021-10-22 | 广西综合交通大数据研究院 | 基于三维点云的定位方法、装置、设备及存储介质 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220044436A1 (en) * | 2019-04-25 | 2022-02-10 | Beijing Didi Infinity Technology And Development Co., Ltd. | Pose data processing method and system |
CN117824667A (zh) * | 2024-03-06 | 2024-04-05 | 成都睿芯行科技有限公司 | 一种基于二维码和激光的融合定位方法及介质 |
CN117824667B (zh) * | 2024-03-06 | 2024-05-10 | 成都睿芯行科技有限公司 | 一种基于二维码和激光的融合定位方法及介质 |
Also Published As
Publication number | Publication date |
---|---|
CN117367419A (zh) | 2024-01-09 |
TW202411613A (zh) | 2024-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2024001649A1 (zh) | 机器人定位方法、装置和计算可读存储介质 | |
Li et al. | Multi-sensor fusion for navigation and mapping in autonomous vehicles: Accurate localization in urban environments | |
WO2022127532A1 (zh) | 一种激光雷达与imu的外参标定方法、装置及设备 | |
CN112230242B (zh) | 位姿估计系统和方法 | |
US20200124421A1 (en) | Method and apparatus for estimating position | |
CN110873883B (zh) | 融合激光雷达和imu的定位方法、介质、终端和装置 | |
CN105760811B (zh) | 全局地图闭环匹配方法及装置 | |
CN111551186B (zh) | 一种车辆实时定位方法、系统及车辆 | |
US8855911B2 (en) | Systems and methods for navigation using cross correlation on evidence grids | |
CN106052691B (zh) | 激光测距移动制图中闭合环误差纠正方法 | |
JP2022510418A (ja) | 時間同期処理方法、電子機器及び記憶媒体 | |
CN111427061A (zh) | 一种机器人建图方法、装置,机器人及存储介质 | |
CN113933818A (zh) | 激光雷达外参的标定的方法、设备、存储介质及程序产品 | |
WO2017008454A1 (zh) | 一种机器人的定位方法 | |
CN104715469A (zh) | 一种数据处理方法及电子设备 | |
CN110412596A (zh) | 一种基于图像信息和激光点云的机器人定位方法 | |
US20220398825A1 (en) | Method for generating 3d reference points in a map of a scene | |
Lee et al. | LiDAR odometry survey: recent advancements and remaining challenges | |
CN115200572B (zh) | 三维点云地图构建方法、装置、电子设备及存储介质 | |
CN116452763A (zh) | 一种基于误差卡尔曼滤波和因子图的三维点云地图构建方法 | |
CN114442133A (zh) | 无人机定位方法、装置、设备及存储介质 | |
CN112965076B (zh) | 一种用于机器人的多雷达定位系统及方法 | |
CN117824667A (zh) | 一种基于二维码和激光的融合定位方法及介质 | |
CN116465393A (zh) | 基于面阵激光传感器的同步定位和建图方法及装置 | |
CN116222541A (zh) | 利用因子图的智能多源组合导航方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23829834 Country of ref document: EP Kind code of ref document: A1 |