CN110849392A - Robot mileage counting data correction method and robot - Google Patents

Robot mileage counting data correction method and robot

Info

Publication number
CN110849392A
Authority
CN
China
Prior art keywords
robot
data
covariance matrix
moving distance
correcting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911118990.8A
Other languages
Chinese (zh)
Inventor
蔡龙生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Has A Robot Co Ltd
Shanghai Yogo Robot Co Ltd
Original Assignee
Shanghai Has A Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Has A Robot Co Ltd
Priority to CN201911118990.8A
Publication of CN110849392A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass; initial alignment, calibration or starting-up of inertial devices
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Manufacturing & Machinery (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot and a method for correcting its odometry data. The method comprises: initializing a system state quantity and a system covariance matrix of the robot; calculating the moving distances of the robot's left and right wheels, performing state prediction and system covariance prediction according to a noiseless system state model in combination with the initialized system state quantity and system covariance matrix, and outputting the predicted system state quantity and system covariance matrix of the robot; and reading inertial measurement unit (IMU) data, judging whether the absolute value of the difference between the current IMU data and the IMU data at the previous moment is within a first set threshold, and, if not, outputting the predicted system state quantity and system covariance matrix as the fused information. The odometry data correction method offers high real-time performance and high calculation accuracy.

Description

Robot mileage counting data correction method and robot
Technical Field
The invention relates to the technical field of robot trajectory calculation, and in particular to a robot and a method for correcting the odometry data of the robot.
Background
In recent years, as robotics research has matured and hardware conditions have gradually improved, the real-time calculation of robot trajectories has faced higher requirements. Guaranteeing pose accuracy while preserving real-time performance is a fundamental problem of robot trajectory estimation.
In the prior art, wheel-odometry trajectory calculation produces obviously wrong results when the wheels slip, and the calculated relative deflection angle is inaccurate when the robot turns. In addition, the data are easily corrupted by null shift (zero drift) and temperature drift, so the calculated trajectory accumulates a very large error. Improvements are therefore urgently required.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a method for correcting the odometry data of a robot, and a robot, to solve the prior-art problems that the calculation result is obviously wrong when the wheels of the robot slip, that the calculated relative deflection angle is inaccurate when the robot turns, and that null shift and temperature drift easily corrupt the data, giving the calculated trajectory a very large accumulated error.
In order to achieve the above and other related objects, the present invention provides a method for calibrating odometry data of a robot, the method comprising:
initializing a system state quantity and a system covariance matrix of the robot;
calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot, performing state prediction and system covariance prediction according to a system state noiseless model by combining the initialized system state quantity and system covariance matrix of the robot, and outputting the system state quantity and the system covariance matrix of the robot after state prediction;
reading the data of the inertia measurement unit, judging whether the absolute value of the difference value between the current data of the inertia measurement unit and the data of the inertia measurement unit at the previous moment is within a first set threshold value, if so, updating the system state and the system covariance matrix of the robot for the first time, otherwise, outputting the system state quantity and the system covariance matrix of the robot after state prediction as fused information, and continuously performing the operation of calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot.
In an embodiment of the present invention, the method for correcting odometry data of the robot further includes: reading optical flow data, judging whether the absolute value of the difference value between the current optical flow data and the optical flow data at the previous moment is within a second set threshold value, if so, updating the system state and the system covariance matrix of the robot for the second time, otherwise, outputting the system state and the system covariance matrix of the robot after the first updating as fused information, and continuously performing the operation of calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot.
In an embodiment of the present invention, the method for correcting odometry data of the robot further includes: and outputting the system state and the system covariance matrix of the robot after the second update, and continuously performing the operation of calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot.
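The threshold-gated, sequential update flow described in the steps above can be sketched as a single fusion iteration. This is a minimal Python sketch, not the patent's implementation: the function names (`fuse_step`, `predict`, `update_imu`, `update_flow`) and the exact argument shapes are illustrative assumptions.

```python
import numpy as np

def fuse_step(x, P, u, imu, imu_prev, flow, flow_prev,
              predict, update_imu, update_flow,
              imu_threshold, flow_threshold):
    """One iteration of the gated, sequential fusion loop.

    predict / update_imu / update_flow stand in for the Kalman
    prediction and per-sensor update routines; each update is gated
    on the absolute jump between consecutive sensor readings.
    """
    # Predict state and covariance from the wheel travel u = [Sl, Sr].
    x, P = predict(x, P, u)

    # Accept the IMU reading only if it did not jump beyond the first threshold;
    # otherwise the prediction itself is output as the fused information.
    if np.all(np.abs(imu - imu_prev) <= imu_threshold):
        x, P = update_imu(x, P, imu)   # first update
    else:
        return x, P

    # Same gating for the optical flow reading against the second threshold.
    if np.all(np.abs(flow - flow_prev) <= flow_threshold):
        x, P = update_flow(x, P, flow)  # second update
    return x, P
```

A caller would invoke `fuse_step` once per odometry period and loop back to the wheel-distance calculation, matching the "continuously performing" wording above.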
In an embodiment of the present invention, the inertial measurement unit data is read, and the output of the inertial measurement unit data is used as the first observation quantity and the first observation matrix.
In an embodiment of the present invention, the optical flow data is read, and the optical flow data is output as a second observation quantity and a second observation matrix.
In one embodiment of the invention, the inertial measurement unit data and the optical flow data are subjected to joint correction of robot odometry data by a multi-sensor fusion model.
In an embodiment of the invention, the first observed quantity is [pitch, roll, yaw]^T, wherein pitch represents the pitch angle of the robot in three-dimensional space, roll represents the roll angle of the robot in three-dimensional space, and yaw represents the yaw angle of the robot in three-dimensional space; the first observation matrix is the 3x6 matrix that selects the three angle components from the six-dimensional system state [x, y, z, pitch, roll, yaw]^T:
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]
In an embodiment of the invention, the second observed quantity is [x, y, z, yaw]^T, wherein x represents the abscissa of the robot in three-dimensional space, y represents the ordinate of the robot in three-dimensional space, z represents the vertical coordinate of the robot in three-dimensional space, and yaw represents the yaw angle of the robot in three-dimensional space; the second observation matrix is the 4x6 matrix that selects x, y, z and yaw from the six-dimensional system state:
[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]
[0 0 0 0 0 1]
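Since both observed quantities are subsets of the state vector, both observation matrices are simple selection matrices. The following numpy sketch assumes the state ordering [x, y, z, pitch, roll, yaw]; the names `C_IMU` and `C_FLOW` are illustrative, not from the patent.

```python
import numpy as np

# Assumed state ordering: [x, y, z, pitch, roll, yaw]
STATE_DIM = 6

# First observation (IMU): [pitch, roll, yaw] -> state indices 3, 4, 5.
C_IMU = np.zeros((3, STATE_DIM))
C_IMU[0, 3] = C_IMU[1, 4] = C_IMU[2, 5] = 1.0

# Second observation (optical flow): [x, y, z, yaw] -> state indices 0, 1, 2, 5.
C_FLOW = np.zeros((4, STATE_DIM))
for row, idx in enumerate((0, 1, 2, 5)):
    C_FLOW[row, idx] = 1.0

# Applying an observation matrix to a state vector picks out the
# observed components.
state = np.array([1.0, 2.0, 3.0, 0.1, 0.2, 0.3])
```

For example, `C_IMU @ state` yields the three angles and `C_FLOW @ state` yields the position plus yaw.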
In an embodiment of the invention, the moving distance of the left wheel of the robot is calculated according to code disc data of the left wheel of the robot, and the moving distance of the right wheel of the robot is calculated according to code disc data of the right wheel of the robot.
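The per-wheel moving distance can be recovered from the code disc (encoder) tick counts. A minimal sketch, assuming a known tick count per revolution and a known wheel radius; both parameter names are illustrative assumptions, not specified in the patent.

```python
import math

def wheel_distance(delta_ticks: int, ticks_per_rev: int,
                   wheel_radius_m: float) -> float:
    """Distance travelled by one wheel for a given encoder tick increment.

    One full revolution (ticks_per_rev ticks) advances the wheel by its
    circumference 2*pi*r, and the travel scales linearly with the ticks.
    """
    return 2.0 * math.pi * wheel_radius_m * delta_ticks / ticks_per_rev
```

For example, with 1024 ticks per revolution and a 5 cm wheel, 512 ticks correspond to half a circumference.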
In order to achieve the above object, the present invention further provides a robot, wherein the robot runs program instructions to implement the above method for correcting the odometry data of a robot.
As described above, the method for correcting odometry data of a robot and the robot according to the present invention have the following advantageous effects:
the method for correcting the odometry data of the robot comprises the following steps: and reading data of the inertial measurement unit as a first observation quantity and a first observation matrix, performing a state updating process if the data meets certain conditions, reading optical flow data as a second observation quantity and a second observation matrix, and performing the state updating process if the data meets certain conditions. The mileage counting data correction method has the characteristics of high real-time performance and high calculation precision, and in addition, when the robot turns, the calculated relative deflection angle is accurate and is not influenced by zero drift and temperature drift, the data of the robot is easily influenced, and the robot is not influenced by skidding when calculating relative translation. The method can efficiently correct the odometer data in real time, and further obtain better state estimation.
The inertial measurement unit data and the optical flow data are read through the multi-sensor fusion model, and the multi-sensor fusion model provided by the invention can efficiently correct the odometer data in real time, so that better state estimation is obtained, and the multi-sensor fusion model has a very strong application value.
Drawings
Fig. 1 is a flowchart illustrating a method for correcting odometry data of a robot according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a method for correcting odometry data of a robot according to another embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method for correcting odometry data of a robot according to yet another embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method for correcting odometry data of a robot according to yet another embodiment of the present disclosure.
Fig. 5 is a structural diagram of a robot according to an embodiment of the present application.
Description of the element reference numerals
1 robot case
2 right wheel
3 left wheel
Detailed Description
The embodiments of the present invention are described below by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details of this specification may be modified or changed in various respects without departing from the spirit and scope of the present invention. It should be noted that, in the absence of conflict, the features of the following embodiments and examples may be combined with each other.
It should be noted that the drawings provided with the following embodiments only illustrate the basic idea of the present invention schematically: the drawings show only the components related to the invention rather than the actual number, shape and size of the components. In actual implementation the type, quantity and proportion of the components may vary freely, and the component layout may be more complicated.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for correcting odometry data of a robot according to an embodiment of the present disclosure. The method comprises the following steps. S1: initialize the system state quantity and the system covariance matrix of the robot. S2: calculate the moving distance of the left wheel 3 and the moving distance of the right wheel 2 of the robot, and perform state prediction and covariance matrix prediction according to the noiseless system state model in combination with the initialized system state quantity and system covariance matrix, so as to output the predicted system state quantity and system covariance matrix of the robot. Specifically, but without limitation, the moving distance of the left wheel 3 is calculated from the code disc data of the left wheel 3, and the moving distance of the right wheel 2 is calculated from the code disc data of the right wheel 2. S3: read the inertial measurement unit data (IMU data). S4: judge whether the absolute value of the difference between the current IMU data and the IMU data at the previous moment is within a first set threshold. If it is, perform step S5; otherwise, perform step S6. S5: update the system state and the system covariance matrix of the robot for the first time.
S6: output the predicted system state quantity and system covariance matrix of the robot as the fused information, and return to step S2 to again calculate the moving distance of the left wheel 3 and the moving distance of the right wheel 2 and perform state prediction and system covariance prediction according to the noiseless system state model. Specifically, in step S4 the absolute value of the difference between the current inertial measurement unit data and the inertial measurement unit data at the previous moment is obtained by calculation, and it is then judged whether this absolute value is within the first set threshold. The inertial measurement unit data and the optical flow data are read through a multi-sensor fusion model. The multi-sensor fusion model is as follows:
X_k = A * X_{k-1} + B * U_k + ξ
Y_k = C_i * X_k + σ_i
wherein X_k represents the state quantity, Y_k represents the observed quantity, U_k represents the control quantity, B represents the control coefficient, A represents the state transition matrix, C_i denotes an observation matrix, ξ denotes state noise following a normal distribution N(0, Q), and σ_i represents observation noise following a normal distribution N(0, R). The system state quantity and its system covariance matrix are updated with the IMU data; the updated state quantity and covariance matrix then serve as the predicted state quantity and predicted covariance matrix for the optical-flow update process, which outputs the optimal system state quantity and optimal system covariance matrix; these are in turn used in the prediction process of the next iteration. In the multi-sensor fusion model of the present invention, the system state quantity is [x, y, z, pitch, roll, yaw]^T, comprising the coordinates [x y z]^T of the robot in three-dimensional space and its angles [pitch roll yaw]^T, where pitch represents the pitch angle, roll the roll angle, and yaw the yaw angle of the robot in three-dimensional space. The system input is the encoder-measured travel of the left and right wheels, [S_l, S_r]^T, where S_l denotes the distance moved by the left wheel 3 and S_r the distance moved by the right wheel 2. To obtain a model that is as simple as possible for an autonomous mobile robot on an indoor floor, the state quantity at time t is taken as X(t) = [x(t) y(t) yaw(t)]^T, and the following noiseless system state model is established from the geometrical relationship between the consecutive states X(t) and X(t+1):
x(t+1) = x(t) + ((S_l + S_r)/2) * cos(yaw(t))
y(t+1) = y(t) + ((S_l + S_r)/2) * sin(yaw(t))
yaw(t+1) = yaw(t) + (S_r - S_l)/D
wherein D represents the distance between the two driving wheels. The observed quantities in the multi-sensor fusion model comprise the IMU data and the optical flow data. The first observed quantity, output from the IMU data, is [pitch, roll, yaw]^T, and the first observation matrix is the 3x6 matrix that selects the three angle components from the six-dimensional state:
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]
The second observed quantity, output from the optical flow data, is [x, y, z, yaw]^T, wherein x represents the abscissa of the robot in three-dimensional space, y the ordinate, z the vertical coordinate, and yaw the yaw angle of the robot in three-dimensional space. The second observation matrix is the 4x6 matrix that selects x, y, z and yaw from the state; each observation matrix plays the role of C_i in the multi-sensor fusion model. The two observed quantities together constitute the observation vector Y(t).
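The noiseless state propagation described above can be written directly in code. This sketch assumes the standard first-order differential-drive model (advance by the mean wheel travel along the current heading, turn by the travel difference divided by the wheel separation D); the exact discretization shown in the patent's figure is not reproduced in the text, so this form is an assumption.

```python
import math

def predict_state(x: float, y: float, yaw: float,
                  s_l: float, s_r: float, D: float):
    """Noiseless differential-drive state propagation.

    s_l, s_r: left/right wheel travel over one step; D: distance
    between the two driving wheels.
    """
    d = 0.5 * (s_l + s_r)          # forward travel of the axle midpoint
    x_next = x + d * math.cos(yaw)
    y_next = y + d * math.sin(yaw)
    yaw_next = yaw + (s_r - s_l) / D
    return x_next, y_next, yaw_next
```

Equal wheel travel yields a straight-line step; equal and opposite travel yields a pure rotation in place.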
Referring to fig. 2, fig. 3 and fig. 4, which are flowcharts illustrating the method for correcting odometry data of a robot according to further embodiments of the present disclosure, the odometry data correction method further comprises the following steps. S7: read the optical flow data. S8: judge whether the absolute value of the difference between the current optical flow data and the optical flow data at the previous moment is within a second set threshold; if it is, perform step S9, otherwise perform step S10. S9: update the system state and the system covariance matrix of the robot for the second time. S10: output the system state and system covariance matrix of the robot after the first update as the fused information, and return to step S2 to again calculate the moving distance of the left wheel 3 and the moving distance of the right wheel 2 and perform state prediction and system covariance matrix prediction according to the noiseless system state model, so as to output the predicted system state quantity and system covariance matrix of the robot.
Specifically, in step S8 the absolute value of the difference between the current optical flow data and the optical flow data at the previous moment is obtained by calculation, and it is then judged whether this absolute value is within the second set threshold. The method further comprises step S11: output the system state and system covariance matrix of the robot after the second update, and return to step S2 to again calculate the moving distance of the left wheel 3 and the moving distance of the right wheel 2 and perform state prediction and system covariance matrix prediction according to the noiseless system state model, so as to output the predicted system state quantity and system covariance matrix of the robot. Specifically, the first set threshold and the second set threshold may, without limitation, comprise a translation threshold of 0.1 to 0.3 meter and an angle threshold of 20 to 40 degrees; for example, the translation threshold may be 0.18 meter and the angle threshold 30 degrees. Taking a single sensor as an example, the steps of obtaining the optimal state quantity by fusion are as follows. Step one: initialize the state quantity X(0|0) and the covariance matrix P(0|0), which represent the quantities at time 0.
Step two: perform state prediction according to the input quantity U(t) at the current time t, i.e., estimate the state X(t+1|t) at the next moment from the optimal state X(t|t) at the current time t: X(t+1|t) = A * X(t|t) + B * U(t). Step three: obtain the predicted covariance matrix according to the error-propagation principle, i.e., calculate the estimated state covariance matrix P(t+1|t) at the next moment from the covariance matrix Q(t) of the state equation at time t and the covariance matrix P(t|t) of the observed state: P(t+1|t) = A * P(t|t) * A^T + Q(t). Step four: calculate the Kalman gain from the observation matrix C and the observation covariance matrix R(t): Gain = P(t+1|t) * C^T * (C * P(t+1|t) * C^T + R(t))^(-1). Step five: calculate the optimal state estimate from the observed quantity Y(t) at time t: X(t+1|t+1) = X(t+1|t) + Gain * (Y(t) - C * X(t+1|t)). Step six: update the covariance matrix of the estimated state: P(t+1|t+1) = (I - Gain * C) * P(t+1|t), where I denotes the identity matrix with the same dimension as Gain * C. The multi-sensor fusion model can be applied to autonomous mobile robots on the ground.
Referring to fig. 5, fig. 5 is a structural diagram of a robot according to an embodiment of the present disclosure. The invention provides a robot that runs program instructions to implement the above odometry data correction method. The robot comprises a robot housing 1, a right wheel 2 and a left wheel 3. In the initial stage of the fusion process, the robot should remain stationary for a period of time so that sensor data can be collected; the mean of these data is computed and used as the reference for eliminating drift. After sensor data are read, their validity should be judged, using as criteria whether the data lie within the noise distribution range of the sensor and the degree of jump between adjacent data. The invention does not require the output times of the sensors to be synchronized; that is, the multi-sensor fusion model of the invention is a step-by-step (sequential) fusion model, and even if the data of one sensor fail, the fusion model for the remaining sensors can still be computed. The state noise and observation noise in the invention are assumed to be proportional to the elapsed time, so the covariances of the state model and the observation model in the fusion model change dynamically.
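The stationary drift-reference estimation and the adjacent-data validity check described above might be sketched as follows; the function names, the window contents, and the exact acceptance criteria are illustrative assumptions.

```python
import numpy as np

def estimate_bias(stationary_samples) -> np.ndarray:
    """Mean of sensor readings collected while the robot is stationary,
    used as the zero reference for eliminating drift."""
    return np.asarray(stationary_samples, dtype=float).mean(axis=0)

def is_valid(sample, previous, noise_lo, noise_hi, max_jump) -> bool:
    """Accept a reading only if it lies inside the sensor's noise
    distribution range and does not jump too far from its predecessor."""
    sample = np.asarray(sample, dtype=float)
    in_range = np.all((sample >= noise_lo) & (sample <= noise_hi))
    small_jump = np.all(np.abs(sample - np.asarray(previous, dtype=float))
                        <= max_jump)
    return bool(in_range and small_jump)
```

Subtracting the estimated bias from every subsequent reading, and discarding readings that fail `is_valid`, matches the initialization and gating behavior described in the paragraph above.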
In summary, the method for correcting odometry data of a robot according to the present invention reads inertial measurement unit data as a first observation quantity and first observation matrix and performs a state update when the data meet set conditions, then reads optical flow data as a second observation quantity and second observation matrix and performs a further state update when those data meet set conditions. The odometry data correction method offers high real-time performance and high calculation accuracy; the relative deflection angle calculated when the robot turns is accurate and is not corrupted by null shift or temperature drift, and the calculated relative translation is not affected by wheel slip. The method can efficiently correct the odometry data in real time and thereby obtain a better state estimate.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (10)

1. A method for correcting odometry data of a robot, the method comprising:
initializing a system state quantity and a system covariance matrix of the robot;
calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot, performing state prediction and system covariance prediction according to a system state noiseless model by combining the initialized system state quantity and system covariance matrix of the robot, and outputting the system state quantity and the system covariance matrix of the robot after state prediction;
reading the data of the inertia measurement unit, judging whether the absolute value of the difference value between the current data of the inertia measurement unit and the data of the inertia measurement unit at the previous moment is within a first set threshold value, if so, updating the system state and the system covariance matrix of the robot for the first time, otherwise, outputting the system state quantity and the system covariance matrix of the robot after state prediction as fused information, and continuously performing the operation of calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot.
2. The method for correcting odometry data of a robot according to claim 1, further comprising: reading optical flow data, judging whether the absolute value of the difference value between the current optical flow data and the optical flow data at the previous moment is within a second set threshold value, if so, updating the system state and the system covariance matrix of the robot for the second time, otherwise, outputting the system state and the system covariance matrix of the robot after the first updating as fused information, and continuously performing the operation of calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot.
3. The method for correcting odometry data of a robot according to claim 2, further comprising: and outputting the system state and the system covariance matrix of the robot after the second update, and continuously performing the operation of calculating the moving distance of the left wheel and the moving distance of the right wheel of the robot.
4. The method for correcting odometry data of a robot according to claim 1, wherein: and reading the data of the inertial measurement unit, and taking the output of the data of the inertial measurement unit as a first observed quantity and a first observation matrix.
5. The method for correcting odometry data of a robot according to claim 2, wherein: and reading the optical flow data, and outputting the optical flow data as a second observation quantity and a second observation matrix.
6. The method for correcting odometry data of a robot according to claim 2, wherein: and performing joint correction of robot odometry data on the inertial measurement unit data and the optical flow data through a multi-sensor fusion model.
7. The method for correcting odometry data of a robot according to claim 4, wherein: the first observed quantity is [pitch, roll, yaw]^T, wherein pitch represents a pitch angle of the robot in a three-dimensional space, roll represents a roll angle of the robot in the three-dimensional space, and yaw represents a yaw angle of the robot in the three-dimensional space; and the first observation matrix is the 3x6 matrix that selects the three angle components from the six-dimensional system state:
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]
8. The method for correcting odometry data of a robot according to claim 5, wherein: the second observed quantity is [x, y, z, yaw]^T, wherein x represents an abscissa of the robot in the three-dimensional space, y represents an ordinate of the robot in the three-dimensional space, z represents a vertical coordinate of the robot in the three-dimensional space, and yaw represents a yaw angle of the robot in the three-dimensional space; and the second observation matrix is the 4x6 matrix that selects x, y, z and yaw from the six-dimensional system state:
[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]
[0 0 0 0 0 1]
9. The method for correcting odometry data of a robot according to claim 1, wherein: and calculating the moving distance of the left wheel of the robot according to the code disc data of the left wheel of the robot, and calculating the moving distance of the right wheel of the robot according to the code disc data of the right wheel of the robot.
10. A robot, characterized in that: the robot runs program instructions to implement the method for correcting odometry data of a robot according to any one of claims 1 to 9.
CN201911118990.8A 2019-11-15 2019-11-15 Robot mileage counting data correction method and robot Pending CN110849392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911118990.8A CN110849392A (en) 2019-11-15 2019-11-15 Robot mileage counting data correction method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911118990.8A CN110849392A (en) 2019-11-15 2019-11-15 Robot mileage counting data correction method and robot

Publications (1)

Publication Number Publication Date
CN110849392A true CN110849392A (en) 2020-02-28

Family

ID=69600250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911118990.8A Pending CN110849392A (en) 2019-11-15 2019-11-15 Robot mileage counting data correction method and robot

Country Status (1)

Country Link
CN (1) CN110849392A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112697153A (en) * 2020-12-31 2021-04-23 广东美的白色家电技术创新中心有限公司 Positioning method of autonomous mobile device, electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120290146A1 (en) * 2010-07-15 2012-11-15 Dedes George C GPS/IMU/Video/Radar absolute/relative positioning communication/computation sensor platform for automotive safety applications
CN106017454A (en) * 2016-06-16 2016-10-12 东南大学 Pedestrian navigation device and method based on novel multi-sensor fusion technology
CN106708037A (en) * 2016-12-05 2017-05-24 北京贝虎机器人技术有限公司 Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
CN107478214A (en) * 2017-07-24 2017-12-15 杨华军 A kind of indoor orientation method and system based on Multi-sensor Fusion
CN108036792A (en) * 2017-12-11 2018-05-15 苏州中德睿博智能科技有限公司 A kind of data fusion method of odometer for mobile robot and measurement pose
CN109141411A (en) * 2018-07-27 2019-01-04 顺丰科技有限公司 Localization method, positioning device, mobile robot and storage medium


Similar Documents

Publication Publication Date Title
CN110706279B (en) Global position and pose estimation method based on information fusion of global map and multiple sensors
CN110030994B (en) Monocular-based robust visual inertia tight coupling positioning method
CN111795686B (en) Mobile robot positioning and mapping method
CN111136660B (en) Robot pose positioning method and system
CN109579824B (en) Self-adaptive Monte Carnot positioning method integrating two-dimensional code information
CN110986988B (en) Track calculation method, medium, terminal and device integrating multi-sensor data
CN113074739A (en) UWB/INS fusion positioning method based on dynamic robust volume Kalman
CN113091738B (en) Mobile robot map construction method based on visual inertial navigation fusion and related equipment
CN105136145A (en) Kalman filtering based quadrotor unmanned aerial vehicle attitude data fusion method
CN109827571A (en) A kind of dual acceleration meter calibration method under the conditions of no turntable
CN106772524A (en) A kind of agricultural robot integrated navigation information fusion method based on order filtering
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN109507706B (en) GPS signal loss prediction positioning method
CN103983278A (en) Method for measuring factors influencing precision of satellite attitude determination system
CN113324542A (en) Positioning method, device, equipment and storage medium
CN111649747A (en) IMU-based adaptive EKF attitude measurement improvement method
CN113340324B (en) Visual inertia self-calibration method based on depth certainty strategy gradient
CN113189541B (en) Positioning method, device and equipment
CN110849392A (en) Robot mileage counting data correction method and robot
CN110186483B (en) Method for improving drop point precision of inertia guidance spacecraft
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar
CN117387604A (en) Positioning and mapping method and system based on 4D millimeter wave radar and IMU fusion
CN114858166B (en) IMU attitude resolving method based on maximum correlation entropy Kalman filter
CN112987054B (en) Method and device for calibrating SINS/DVL combined navigation system error

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200228