CN111337018B - Positioning method and device, intelligent robot and computer readable storage medium - Google Patents

Positioning method and device, intelligent robot and computer readable storage medium Download PDF

Info

Publication number
CN111337018B
CN111337018B (application CN202010433361.0A)
Authority
CN
China
Prior art keywords
sensor
pose
detection data
intelligent robot
data
Prior art date
Legal status
Active
Application number
CN202010433361.0A
Other languages
Chinese (zh)
Other versions
CN111337018A (en)
Inventor
赵敏
宋乐
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202010433361.0A
Publication of CN111337018A
Application granted
Publication of CN111337018B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a positioning method applied to an intelligent robot. A plurality of sensors for detecting the pose of the intelligent robot are mounted on the robot, the sensors including a laser sensor. The positioning method includes: acquiring a plurality of detection data obtained by the plurality of sensors when detecting the pose, the laser sensor being used to detect laser data; calculating an adaptive confidence of the laser data in at least one degree of freedom from the laser data; calculating, from the adaptive confidence, an adaptive weight for the pose constraint of at least one piece of detection data in a graph optimization model; constructing the graph optimization model and an objective function of the graph optimization model from the at least one piece of detection data and the adaptive weight; and calculating an optimized pose of the intelligent robot from the objective function. The application also discloses a positioning device, an intelligent robot, and a computer-readable storage medium.

Description

Positioning method and device, intelligent robot and computer readable storage medium
Technical Field
The present application relates to the field of robotics, and more particularly, to a positioning method and apparatus, an intelligent robot, and a computer-readable storage medium.
Background
Intelligent-robot positioning algorithms generally adopt a filter framework: a motion equation and an observation equation are constructed, and odometer data are combined with laser matching to estimate the pose. However, when the robot's working environment changes drastically or the laser returns are severely degraded, the observation equation carries a large error, which corrupts the pose estimate. Moreover, when the system has additional sensor data available, a filter-based algorithm cannot deeply fuse them. The robot therefore cannot operate in complex environments, and its localization easily drifts or is lost.
Disclosure of Invention
In view of the above, the present application is directed to solving, at least to some extent, one of the problems in the related art. To that end, the embodiments of the present application provide a positioning method and device, an intelligent robot, and a computer-readable storage medium.
The positioning method of the embodiment of the application is applied to an intelligent robot on which a plurality of sensors for detecting the pose of the intelligent robot are mounted, the sensors including a laser sensor. The positioning method comprises: acquiring a plurality of detection data obtained by the plurality of sensors when detecting the pose, the laser sensor being used to detect laser data; calculating an adaptive confidence of the laser data in at least one degree of freedom from the laser data; calculating, from the adaptive confidence, an adaptive weight for the pose constraint of at least one piece of the detection data in a graph optimization model; constructing the graph optimization model and an objective function of the graph optimization model from the at least one piece of detection data and the adaptive weight; and calculating an optimized pose of the intelligent robot from the objective function.
According to the positioning method, a plurality of detection data obtained by the plurality of sensors when detecting the pose are first acquired. The adaptive confidence of the laser data in at least one degree of freedom is then calculated, and the adaptive weight of each detection data's pose constraint is derived from that confidence. The graph optimization model and its objective function are constructed from the at least one piece of detection data and the adaptive weight, and finally the optimized pose of the intelligent robot is calculated from the objective function. By introducing the detection data of multiple sensors, the pose estimation error caused by the failure of a single sensor's data is effectively overcome, the positioning robustness of the intelligent robot is improved, and the robot can run stably under drastic environmental change or the failure of one sensor's data. Calculating the confidence also makes it possible to assess the accuracy and reliability of the laser data, so the adaptive weights obtained for the detection data are more accurate. Computing an adaptive weight for each piece of detection data balances the errors of the individual measurements, overcomes the large error of any single sensor, and makes the resulting optimized pose of the intelligent robot more accurate.
In some embodiments, the confidence includes a confidence in the x direction, a confidence in the y direction, and a confidence in the yaw angle, where the confidence in the x direction characterizes the uncertainty in the x direction, the confidence in the y direction characterizes the uncertainty in the y direction, and the confidence in the yaw angle characterizes the uncertainty in the yaw angle.
In this embodiment, by calculating the confidence degrees in the x direction, the y direction, and the yaw angle, the adaptive weight corresponding to each detection data calculated in the subsequent step will be more accurate.
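The patent does not give a formula for the adaptive confidence; one plausible sketch derives it from the covariance of the laser scan match, treating a tighter marginal distribution in a degree of freedom as higher confidence (the function name and the inverse-standard-deviation mapping are assumptions, not the patent's method):

```python
import numpy as np

def adaptive_confidence(match_covariance):
    """Per-DOF confidence of a laser scan match.

    Hypothetical formulation: take the marginal standard deviation of the
    3x3 match covariance over (x, y, yaw) and use its inverse, so tighter
    matches score higher confidence.
    """
    cov = np.asarray(match_covariance, dtype=float)
    sigmas = np.sqrt(np.diag(cov))      # marginal std dev in x, y, yaw
    conf = 1.0 / (sigmas + 1e-9)        # guard against a zero-variance DOF
    return {"x": conf[0], "y": conf[1], "yaw": conf[2]}

# A match that is tight in x and yaw but loose in y (e.g. a long corridor
# where lateral position is poorly constrained by the scan)
conf = adaptive_confidence(np.diag([0.01**2, 0.5**2, 0.02**2]))
```

With these numbers the y confidence comes out far lower than the x and yaw confidences, matching the intuition that the corridor constrains the robot poorly in the lateral direction.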
In some embodiments, the sensor further includes a mileage sensor, a GPS sensor, a UWB sensor, a vision sensor, a WIFI sensor, a bluetooth sensor, a 5G communication sensor, and the calculating an adaptive weight of pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence includes: calculating an adaptive weight of pose constraints of detection data of one or more of the mileage sensor, the GPS sensor, the UWB sensor, the vision sensor, the WIFI sensor, the Bluetooth sensor, and the 5G communication sensor in the graph optimization model according to the confidence in the x direction, the confidence in the y direction, and the confidence in the yaw angle.
In this embodiment, the pose constraint adaptive weight of the plurality of detection data in the graph optimization model is obtained according to the confidence in the x direction, the confidence in the y direction, and the confidence in the yaw angle, the pose constraint adaptive weight of the plurality of detection data in the graph optimization model is more accurate, and the obtained graph optimization model can better reflect the real pose.
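A minimal sketch of this weighting step, under the assumption that the non-laser constraints are up-weighted in exactly the degrees of freedom where the laser confidence is low (the patent does not specify the mapping; the sensor names and the 1/(1+c) form are illustrative):

```python
def adaptive_weights(laser_conf,
                     sensors=("odometry", "gps", "uwb", "vision",
                              "wifi", "bluetooth", "5g")):
    """Per-sensor, per-DOF pose-constraint weights derived from the laser
    confidence. Assumed scheme: where the laser match is uncertain (low
    confidence), the other sensors' constraints receive a larger weight
    so they compensate; where the laser is reliable, they are damped."""
    return {name: {dof: 1.0 / (1.0 + c) for dof, c in laser_conf.items()}
            for name in sensors}

# Laser reliable in x and yaw, unreliable in y (featureless corridor)
weights = adaptive_weights({"x": 100.0, "y": 2.0, "yaw": 50.0})
```

Under this scheme the GPS/UWB-style constraints carry more weight in y than in x, so the absolute sensors fill in the direction the laser cannot constrain.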
In some embodiments, the constructing the graph optimization model and the objective function of the graph optimization model according to at least one of the detection data and the adaptive weight includes calculating a residual error between at least one of the detection poses and a pose to be optimized of the intelligent robot; acquiring an information matrix of detection data corresponding to the self-adaptive weight according to the self-adaptive weight; and constructing the target function according to the information matrix and the residual error.
In this embodiment, a residual error between at least one detection pose and a pose to be optimized of the intelligent robot is calculated, an information matrix of detection data corresponding to the adaptive weight is obtained according to the adaptive weight of the detection data, and an objective function is constructed through the information matrix and the residual error, so that the calculation of the information matrix is convenient.
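The objective-function construction above can be sketched as follows, assuming the information matrix is simply a diagonal matrix of each constraint's per-DOF adaptive weights (an assumed construction; the patent does not spell out the exact mapping from weight to information matrix):

```python
import numpy as np

def objective(poses, constraints):
    """Objective of the graph optimization model: the weighted sum of
    squared residuals between each detection pose and the pose to be
    optimized, i.e. sum of r^T * Omega * r over all constraints."""
    total = 0.0
    for c in constraints:
        # residual between the detected pose and the pose being optimized
        r = np.asarray(c["measured"], float) - np.asarray(poses[c["node"]], float)
        omega = np.diag(c["weights"])      # information matrix from the weights
        total += float(r @ omega @ r)
    return total

# One node at the origin, one GPS-like absolute constraint with
# per-DOF weights (2, 1, 1): the 1 m error in x costs 1^2 * 2 = 2
cost = objective({0: (0.0, 0.0, 0.0)},
                 [{"node": 0, "measured": (1.0, 0.0, 0.0),
                   "weights": (2.0, 1.0, 1.0)}])
```

A solver would then adjust the node poses to minimize this total, which is what "calculating the optimized pose according to the objective function" amounts to.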
In some embodiments, the constructing the graph optimization model and the objective function of the graph optimization model according to at least one of the detection data and the adaptive weight includes: constructing the graph optimization model including the trajectory of the intelligent robot, the graph optimization model having detection poses matched with the laser data as nodes; and generating relative pose constraints and/or absolute pose constraints between the detection data and the nodes based on at least one of the adaptive weights.
In this embodiment, the detection pose matched with the laser sensor is used as a node of the graph optimization model, and relative constraint and/or absolute constraint between the detection data and the node of the graph optimization model are generated, so that the graph optimization model includes a constraint relation given by the detection data of the multiple sensors, errors in the graph optimization model are smaller, and the obtained pose of the intelligent robot is more accurate.
In some embodiments, the sensors further comprise a mileage (odometry) sensor, a GPS sensor, a UWB sensor, a vision sensor, a WIFI sensor, a Bluetooth sensor, and a 5G communication sensor, and generating the relative pose constraints and/or absolute pose constraints between the detection data and the nodes based on at least one of the adaptive weights comprises one or more of: generating relative pose constraints between nodes from the detection data of the odometry sensor; generating relative pose constraints obtained by matching the detection data of the laser sensor against the map; and generating absolute pose constraints between the nodes and the detection data of the GPS sensor, the UWB sensor, the vision sensor, the WIFI sensor, the Bluetooth sensor, or the 5G communication sensor.
In this embodiment, the sensor further includes a mileage sensor, a GPS sensor, a UWB sensor, a visual sensor, a WIFI sensor, a bluetooth sensor, and a 5G communication sensor, and by generating at least a constraint relationship between at least one of the mileage sensor, the laser sensor, the GPS sensor, the UWB sensor, the visual sensor, the WIFI sensor, the bluetooth sensor, and the 5G communication sensor and a node of the graph optimization model, the graph optimization model is made to more closely fit a real trajectory of the intelligent robot, so that an objective function of the constructed graph optimization model is made more accurate, so that a pose of the intelligent robot obtained by subsequent calculation is made more accurate, and meanwhile, a pose calculation error caused by data failure of a single sensor is effectively overcome.
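The graph structure described in the preceding embodiments can be sketched as a small container type; the field names and tuple layouts are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    """Minimal pose-graph sketch: nodes are the detection poses matched
    with the laser data; edges carry either relative pose constraints
    (odometry increments, laser-map matching) or absolute pose
    constraints (GPS, UWB, vision, WIFI, Bluetooth, 5G)."""
    nodes: dict = field(default_factory=dict)      # node_id -> (x, y, yaw)
    relative: list = field(default_factory=list)   # (i, j, delta_pose, weight)
    absolute: list = field(default_factory=list)   # (i, measured_pose, weight)

    def add_node(self, nid, pose):
        self.nodes[nid] = pose

    def add_relative(self, i, j, delta, weight):
        self.relative.append((i, j, delta, weight))

    def add_absolute(self, i, measured, weight):
        self.absolute.append((i, measured, weight))

g = PoseGraph()
g.add_node(0, (0.0, 0.0, 0.0))
g.add_node(1, (1.0, 0.0, 0.0))
g.add_relative(0, 1, (1.0, 0.0, 0.0), 1.0)     # odometry-style edge
g.add_absolute(1, (1.02, -0.01, 0.0), 0.5)     # GPS-style edge
```

The optimizer then treats every edge as one term of the objective function, so a failed sensor merely contributes a down-weighted edge instead of breaking the whole estimate.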
In some embodiments, the sensors further include a mileage sensor, and before calculating the adaptive weight of the pose constraint of at least one of the detection data in the graph optimization model according to the adaptive confidence, the localization method further includes: acquiring a detection data difference value of the mileage sensor at the current moment and the previous moment; acquiring an initial pose of the intelligent robot at the current moment according to the detection data difference and the pose of the intelligent robot at the previous moment; and acquiring a corrected pose according to the initial pose and the laser data.
In this embodiment, acquiring the difference between the mileage sensor's detection data at the current and previous moments effectively avoids the odometer's long-term accumulated error. Deriving the initial pose of the intelligent robot at the current moment from that difference and then correcting it with the laser data makes the confidence computed from the laser data more accurate, which in turn makes the adaptive weights computed from that confidence more accurate and improves the robustness of the system.
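The odometry-based prediction step can be sketched as standard SE(2) composition: the increment between two consecutive odometry readings is expressed in the body frame and applied to the previous optimized pose (variable names and the planar (x, y, yaw) convention are assumptions):

```python
import math

def predict_initial_pose(prev_pose, odom_prev, odom_now):
    """Predict the robot's initial pose at the current moment from the
    odometry increment since the previous moment, so only the short-term
    odometer delta is used and long-term drift does not accumulate."""
    # increment expressed in the body frame of the previous odometry pose
    dxw = odom_now[0] - odom_prev[0]
    dyw = odom_now[1] - odom_prev[1]
    c0, s0 = math.cos(-odom_prev[2]), math.sin(-odom_prev[2])
    dxb = c0 * dxw - s0 * dyw
    dyb = s0 * dxw + c0 * dyw
    dyaw = odom_now[2] - odom_prev[2]
    # compose the body-frame increment onto the previous optimized pose
    x, y, yaw = prev_pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * dxb - s * dyb, y + s * dxb + c * dyb, yaw + dyaw)

# robot drove 1 m straight ahead between the two odometry readings
p0 = predict_initial_pose((0.0, 0.0, 0.0), (5.0, 3.0, 0.0), (6.0, 3.0, 0.0))
# -> (1.0, 0.0, 0.0): the stale absolute odometry value (5, 3) never leaks in
```

The predicted pose would then be refined by matching the current laser scan against the map to obtain the corrected pose mentioned above.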
In some embodiments, the calculating an adaptive weight of a pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence level includes: judging whether the difference value between the timestamp of the detection data and the timestamp of the optimization pose of the intelligent robot is smaller than a preset difference value threshold value or not; and if so, calculating the self-adaptive weight of the pose constraint of the detection data in the graph optimization model.
In this embodiment, the difference between each detection data's timestamp and the timestamp of the intelligent robot's optimized pose is compared against a preset threshold, and the adaptive weight of the detection data is computed only when the difference is below that threshold. This effectively aligns the detection data with the robot's pose on the time axis and improves the accuracy of the computed pose.
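The timestamp gate can be sketched in a few lines; the 50 ms default threshold is an assumption, since the patent only requires some preset difference value:

```python
def constraint_is_aligned(detection_stamp, pose_stamp, threshold=0.05):
    """Gate applied before computing a constraint's adaptive weight: the
    detection contributes only if its timestamp lies within a preset
    threshold (seconds) of the pose being optimized, so measurements
    taken at a different time cannot distort the estimate."""
    return abs(detection_stamp - pose_stamp) < threshold
```

In a running system this check would be evaluated per sensor message, dropping stragglers such as a GPS fix that arrives well after the pose node it nominally describes.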
The positioning device is applied to an intelligent robot, a plurality of sensors for detecting the detection pose of the intelligent robot are mounted on the intelligent robot, each sensor comprises a laser sensor, the positioning device comprises an acquisition module, a first calculation module, a second calculation module, a construction module and a third calculation module, the acquisition module is used for acquiring a plurality of detection data obtained by detecting the detection pose by the plurality of sensors, and the laser sensors are used for detecting laser data; the first calculation module is used for calculating the self-adaptive confidence coefficient of the laser data in at least one degree of freedom according to the laser data; the second calculation module is used for calculating the self-adaptive weight of the pose constraint of at least one detection data in the graph optimization model according to the self-adaptive confidence coefficient; the construction module is used for constructing the graph optimization model and an objective function of the graph optimization model according to at least one detection data and the adaptive weight; and the third calculation module is used for calculating the optimized pose of the intelligent robot according to the objective function.
In the positioning device according to the embodiment of the application, a plurality of detection data obtained by the plurality of sensors are acquired, the adaptive confidence of the laser data in at least one degree of freedom is calculated, the adaptive weight of each detection data's pose constraint is derived from that confidence, the graph optimization model and its objective function are constructed from the detection data and the adaptive weight, and the pose of the intelligent robot is finally calculated from the objective function. By introducing the detection data of multiple sensors, the pose estimation error caused by the failure of a single sensor's data is effectively overcome, the positioning robustness of the intelligent robot is improved, and the robot can run stably under drastic environmental change or the failure of one sensor's data. Calculating the confidence also makes it possible to assess the accuracy and reliability of the laser data, so the adaptive weights obtained for the detection data are more accurate. Computing an adaptive weight for each piece of detection data balances the errors of the individual measurements, overcomes the large error of any single sensor, and makes the resulting optimized pose of the intelligent robot more accurate.
In some embodiments, the confidence includes a confidence in the x direction, a confidence in the y direction, and a confidence in the yaw angle, where the confidence in the x direction characterizes the uncertainty in the x direction, the confidence in the y direction characterizes the uncertainty in the y direction, and the confidence in the yaw angle characterizes the uncertainty in the yaw angle.
In this embodiment, by calculating the confidence degrees in the x direction, the y direction, and the yaw angle, the adaptive weight corresponding to each detection data calculated in the subsequent step will be more accurate.
In some embodiments, the sensors further include a mileage sensor, a GPS sensor, a UWB sensor, a vision sensor, a WIFI sensor, a bluetooth sensor, a 5G communication sensor, and the second computing module is further configured to: calculating an adaptive weight of pose constraints of detection data of one or more of the mileage sensor, the GPS sensor, the UWB sensor, the vision sensor, the WIFI sensor, the Bluetooth sensor, and the 5G communication sensor in the graph optimization model according to the confidence in the x direction, the confidence in the y direction, and the confidence in the yaw angle.
In this embodiment, the pose constraint adaptive weight of the plurality of detection data in the graph optimization model is obtained according to the confidence in the x direction, the confidence in the y direction, and the confidence in the yaw angle, the pose constraint adaptive weight of the plurality of detection data in the graph optimization model is more accurate, and the obtained graph optimization model can better reflect the real pose.
In some embodiments, the construction module is further configured to calculate a residual error between at least one of the detection poses and the pose to be optimized of the intelligent robot; acquiring an information matrix of detection data corresponding to the self-adaptive weight according to the self-adaptive weight; and constructing the target function according to the information matrix and the residual error.
In this embodiment, a residual error between at least one detection pose and a pose to be optimized of the intelligent robot is calculated, an information matrix of detection data corresponding to the adaptive weight is obtained according to the adaptive weight of the detection data, and an objective function is constructed through the information matrix and the residual error, so that the calculation of the information matrix is convenient.
In some embodiments, the construction module is further configured to construct the graph optimization model including a trajectory of the intelligent robot, the graph optimization model having detection poses matched with the laser data as nodes; and generating relative pose constraints and/or absolute pose constraints between the detection data and the nodes based on at least one of the adaptive weights.
In this embodiment, the detection pose matched with the laser sensor is used as a node of the graph optimization model, and relative constraint and/or absolute constraint between the detection data and the node of the graph optimization model are generated, so that the graph optimization model includes a constraint relation given by the detection data of the multiple sensors, errors in the graph optimization model are smaller, and the obtained pose of the intelligent robot is more accurate.
In certain embodiments, the sensors further comprise a mileage (odometry) sensor, a GPS sensor, a UWB sensor, a vision sensor, a WIFI sensor, a Bluetooth sensor, and a 5G communication sensor, and the construction module is further configured to generate one or more of: relative pose constraints between nodes from the detection data of the odometry sensor; relative pose constraints obtained by matching the detection data of the laser sensor against the map; and absolute pose constraints between the nodes and the detection data of the GPS sensor, the UWB sensor, the vision sensor, the WIFI sensor, the Bluetooth sensor, or the 5G communication sensor.
In this embodiment, the sensor further includes a mileage sensor, a GPS sensor, a UWB sensor, a visual sensor, a WIFI sensor, a bluetooth sensor, and a 5G communication sensor, and by generating at least a constraint relationship between at least one of the mileage sensor, the laser sensor, the GPS sensor, the UWB sensor, the visual sensor, the WIFI sensor, the bluetooth sensor, and the 5G communication sensor and a node of the graph optimization model, the graph optimization model is made to more closely fit a real trajectory of the intelligent robot, so that an objective function of the constructed graph optimization model is made more accurate, so that a pose of the intelligent robot obtained by subsequent calculation is made more accurate, and meanwhile, a pose calculation error caused by data failure of a single sensor is effectively overcome.
In some embodiments, the sensors further include a mileage sensor, and the first calculation module is further configured to obtain a difference value between detection data of the mileage sensor at a current time and a previous time before calculating an adaptive weight of a pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence; acquiring an initial pose of the intelligent robot at the current moment according to the detection data difference and the pose of the intelligent robot at the previous moment; and acquiring a corrected pose according to the initial pose and the laser data.
In this embodiment, acquiring the difference between the mileage sensor's detection data at the current and previous moments effectively avoids the odometer's long-term accumulated error. Deriving the initial pose of the intelligent robot at the current moment from that difference and then correcting it with the laser data makes the confidence computed from the laser data more accurate, which in turn makes the adaptive weights computed from that confidence more accurate and improves the robustness of the system.
In some embodiments, the second calculation module is further configured to determine whether a difference between the timestamp of the detection data and the timestamp of the optimized pose of the intelligent robot is smaller than a preset difference threshold; and if so, calculating the self-adaptive weight of the pose constraint of the detection data in the graph optimization model.
In this embodiment, the difference between each detection data's timestamp and the timestamp of the intelligent robot's optimized pose is compared against a preset threshold, and the adaptive weight of the detection data is computed only when the difference is below that threshold. This effectively aligns the detection data with the robot's pose on the time axis and improves the accuracy of the computed pose.
The intelligent robot of the embodiment of the application comprises one or more processors and a memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the positioning method of any of the above embodiments.
In the intelligent robot according to the embodiment of the application, a plurality of detection data obtained by detection of a plurality of sensors are obtained, then the adaptive weight corresponding to each detection data is calculated according to at least one detection data, an objective function of a graph optimization model is constructed according to at least one detection data and the adaptive weight, and finally the optimization pose of the intelligent robot is calculated according to the objective function. Therefore, by introducing the detection data of the plurality of sensors, the pose estimation error caused by data failure of a single sensor is effectively overcome, the positioning robustness of the intelligent robot is improved, and the intelligent robot can stably operate under the condition of severe environment change or data failure of one sensor. Meanwhile, the accuracy and reliability of the laser data can be analyzed by calculating the confidence coefficient, so that the adaptive weights corresponding to the obtained detection data are more accurate, the error of each detection data can be balanced by calculating the adaptive weight corresponding to at least one detection data, the problem that the data error of a single sensor is larger is solved, and the acquired optimal pose of the intelligent robot is more accurate.
The non-transitory computer-readable storage medium of the embodiment of the application contains computer-executable instructions that, when executed by one or more processors, cause the processors to perform the positioning method of any of the above embodiments: acquiring a plurality of detection data detected by the plurality of sensors; calculating the adaptive weight corresponding to each detection data according to at least one detection data; constructing an objective function of a graph optimization model according to the at least one detection data and the adaptive weight; and calculating the optimized pose of the intelligent robot according to the objective function.
In the computer-readable storage medium of the embodiment of the application, a plurality of detection data obtained by detection of a plurality of sensors are firstly obtained, then adaptive weights corresponding to the detection data are respectively calculated according to at least one detection data, an objective function of a graph optimization model is constructed according to at least one detection data and the adaptive weights, and finally an optimization pose of the intelligent robot is calculated according to the objective function. Therefore, by introducing the detection data of the plurality of sensors, the pose estimation error caused by data failure of a single sensor is effectively overcome, the positioning robustness of the intelligent robot is improved, and the intelligent robot can stably operate under the condition of severe environment change or data failure of one sensor. Meanwhile, the accuracy and reliability of the laser data can be analyzed by calculating the confidence coefficient, so that the adaptive weights corresponding to the obtained detection data are more accurate, the error of each detection data can be balanced by calculating the adaptive weight corresponding to at least one detection data, the problem that the data error of a single sensor is larger is solved, and the acquired optimal pose of the intelligent robot is more accurate.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a positioning method according to some embodiments of the present application;
FIG. 2 is a block schematic diagram of an intelligent robot according to certain embodiments of the present application;
FIG. 3 is a block schematic diagram of a positioning device according to certain embodiments of the present application;
FIG. 4 is a schematic diagram of a scenario of a positioning method according to some embodiments of the present application;
FIG. 5 is a framework diagram of an intelligent robot graph optimization model in accordance with certain embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of a positioning method according to some embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of a positioning method according to some embodiments of the present application;
FIG. 8 is a schematic flow chart diagram of a positioning method according to some embodiments of the present application;
FIG. 9 is a schematic flow chart diagram of a positioning method according to some embodiments of the present application;
FIG. 10 is a schematic flow chart diagram of a positioning method according to some embodiments of the present application;
FIG. 11 is a schematic diagram of a connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
Description of the main element symbols:
the intelligent robot comprises an intelligent robot 100, a processor 10, a memory 20, a communication interface 30, a sensor 40, a laser sensor 41, a mileage sensor 42, a GPS sensor 43, a UWB sensor 44, a vision sensor 45, a positioning device 200, an acquisition module 210, a first calculation module 220, a second calculation module 230, a construction module 240, and a third calculation module 250.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout.
In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
In this application, unless expressly stated or limited otherwise, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through intervening media. Also, a first feature "on," "above," and "over" a second feature may mean that the first feature is directly above or obliquely above the second feature, or that only the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
Referring to fig. 1 to 4, the positioning method according to the embodiment of the present application is applied to an intelligent robot 100. A plurality of sensors 40 for detecting detection poses of the intelligent robot 100 are installed on the intelligent robot 100, and the sensors 40 include a laser sensor 41. The positioning method includes the following steps:
S010: acquiring a plurality of detection data detected by the plurality of sensors 40, the laser sensor 41 being configured to detect laser data;
S020: calculating an adaptive confidence of the laser data in at least one degree of freedom according to the laser data;
S030: calculating an adaptive weight of a pose constraint of at least one detection data in a graph optimization model according to the adaptive confidence;
S040: constructing the graph optimization model and an objective function of the graph optimization model according to the at least one detection data and the adaptive weight; and
S050: calculating an optimized pose of the intelligent robot 100 according to the objective function.
The intelligent robot 100 of the present embodiment includes one or more processors 10, a memory 20, and one or more programs, where the one or more programs are stored in the memory 20 and executed by the one or more processors 10, and the programs include instructions for executing the positioning method of the present embodiment. When the processor 10 executes the programs, the processor 10 may be configured to perform step S010, step S020, step S030, step S040, and step S050, that is, the processor 10 may be configured to: acquire a plurality of detection data detected by the plurality of sensors 40 at the time of detecting the detection pose; calculate the adaptive confidence of the laser data in at least one degree of freedom according to the laser data; calculate the adaptive weight of the pose constraint of at least one detection data in the graph optimization model according to the adaptive confidence; construct the graph optimization model and the objective function of the graph optimization model according to the at least one detection data and the adaptive weight; and calculate the optimized pose of the intelligent robot 100 according to the objective function.
The positioning device 200 of the embodiment of the present application includes an obtaining module 210, a first calculating module 220, a second calculating module 230, a constructing module 240, and a third calculating module 250, where the obtaining module 210, the first calculating module 220, the second calculating module 230, the constructing module 240, and the third calculating module 250 can be respectively used to implement step S010, step S020, step S030, step S040, and step S050. That is, the acquisition module 210 is configured to acquire a plurality of detection data detected by the plurality of sensors 40 at the time of detecting the detection pose; the first calculating module 220 is configured to calculate an adaptive confidence of the laser data in at least one degree of freedom according to the laser data; the second calculation module 230 is configured to calculate an adaptive weight of a pose constraint of the at least one detection data in the graph optimization model according to the adaptive confidence; the construction module 240 is configured to construct an objective function of the graph optimization model according to the at least one detection data and the adaptive weight; and a third calculation module 250 for calculating the optimized pose of the intelligent robot 100 according to the objective function.
In the positioning method, the positioning apparatus 200, and the intelligent robot 100 according to the embodiment of the present application, a plurality of detection data obtained by the plurality of sensors 40 detecting the detection pose are first acquired; the adaptive weight corresponding to each detection data is then calculated according to at least one detection data; an objective function of a graph optimization model is constructed according to the at least one detection data and the adaptive weight; and finally the optimized pose of the intelligent robot 100 is calculated according to the objective function. By introducing the detection data of the plurality of sensors 40, the pose estimation error caused by data failure of a single sensor 40 is effectively overcome, the positioning robustness of the intelligent robot 100 is improved, and the intelligent robot 100 can operate stably under severe environmental change or failure of the data of one sensor 40. Meanwhile, the accuracy and reliability of the laser data can be analyzed by calculating the confidence, so that the obtained adaptive weights corresponding to the detection data are more accurate; calculating the adaptive weight corresponding to at least one detection data balances the error of each detection data and reasonably configures the adaptive weight of each detection data, alleviating the problem of a large data error from the single sensor 40. The graph optimization model is then constructed according to the adaptive weights and the detection data, so that the optimized pose of the intelligent robot 100 calculated through the objective function of the graph optimization model is more accurate.
The intelligent robot 100 may be an industrial robot, an agricultural robot, a home robot, a service robot, a cleaning robot, etc., which is not limited herein. Further, the cleaning robot may be an intelligent robot 100 such as a sweeper, a scrubber, a vacuum cleaner, etc. The intelligent robot 100 may also include elements such as a communication interface 30, a cleaning implement, and the like. The intelligent robot 100 may be used to clean surfaces such as floors, floor tiles, pavements, or cement grounds.
Specifically, in step S010, a plurality of detection data detected by the plurality of sensors 40 at the time of detecting the detection pose are acquired. Referring to fig. 4, the plurality of sensors 40 may be a laser sensor 41, a mileage sensor 42, a GPS sensor 43, a UWB sensor 44, a visual sensor 45, a WIFI sensor, a bluetooth sensor, a 5G communication sensor, a radio frequency sensor, an ultrasonic sensor, an infrared sensor, and the like, which is not limited herein. In the driving process of the intelligent robot 100, in order to obtain the corresponding detection pose, the plurality of sensors 40 may detect working environment data around the intelligent robot 100 and data of the intelligent robot 100 itself in real time, for example, the laser sensor 41 may detect laser data, the laser data may specifically be point cloud data around the intelligent robot 100, the mileage sensor 42 may detect data such as a driving distance and a driving speed of the intelligent robot 100, the GPS sensor 43 may detect positioning of the intelligent robot 100 in real time, the visual sensor 45 may detect environment data around the intelligent robot 100, and the UWB sensor 44 may detect positioning data of the intelligent robot 100.
Further, the corresponding detection pose can be obtained from the detection data of each sensor; it can be understood that each sensor can detect one detection pose of the intelligent robot 100. For example, the detection pose of the laser sensor 41 can be obtained by matching the point cloud data detected by the laser sensor 41 with a map; the detection pose of the mileage sensor 42 can be calculated from the driving distance and driving speed it detects; the detection pose of the GPS sensor 43 can be acquired from the detection data of the GPS sensor 43; the detection pose of the UWB sensor 44 can be calculated from the detection data of the UWB sensor 44; and the detection pose of the visual sensor 45 can be obtained through calculation and analysis of the detection data of the visual sensor 45. By acquiring the plurality of detection data detected by the plurality of sensors 40, the real-time situation of the intelligent robot 100 can be known, and the detection pose corresponding to the intelligent robot 100 can be obtained through the detection data of different sensors 40, thereby reducing the detection error of any single sensor 40.
In step S020, an adaptive confidence of the laser data in at least one degree of freedom is calculated from the laser data. The laser data includes data of a plurality of degrees of freedom, and since the work environment of the intelligent robot 100 is complicated, the reliability of the data of different degrees of freedom is different in different work environments, and thus, the confidence of the data of different degrees of freedom is different in different work environments. The confidence of the data with different degrees of freedom can be obtained according to the relative size of the data with multiple degrees of freedom.
For example, the laser data includes data in the x direction, the y direction, and the yaw angle, and the confidence levels include a confidence level in the x direction, a confidence level in the y direction, and a confidence level in the yaw angle; the confidence level in the x direction characterizes the uncertainty in the x direction, the confidence level in the y direction characterizes the uncertainty in the y direction, and the confidence level in the yaw angle characterizes the uncertainty in the yaw angle. The confidence levels in the x direction, the y direction, and the yaw angle can be calculated according to the data in the x direction, the y direction, and the yaw angle. It can be understood that when the data of the multiple degrees of freedom change, the confidence of the multiple degrees of freedom also changes accordingly; therefore, the confidence of the embodiment of the present application may also be referred to as an adaptive confidence, which changes with the data of the multiple degrees of freedom.
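To make the idea concrete, one illustrative heuristic (an assumption for explanation, not the patent's actual formula) estimates per-degree-of-freedom confidence of a 2-D scan from surface normals: in a long corridor whose walls run along x, nearly all normals point along y, so scan matching constrains y well but x poorly. Yaw confidence is omitted for brevity.

```python
import math

def scan_confidence(points):
    """Estimate scan-matching confidence in x and y (hypothetical heuristic).

    Surface normals are approximated from consecutive scan points; the mean
    squared normal component along an axis measures how well translation
    along that axis is constrained. Values lie in [0, 1] and sum to 1.
    """
    nx2 = ny2 = 0.0
    pairs = list(zip(points, points[1:]))
    for (x0, y0), (x1, y1) in pairs:
        tx, ty = x1 - x0, y1 - y0            # surface tangent
        norm = math.hypot(tx, ty) or 1.0
        nx, ny = -ty / norm, tx / norm       # unit surface normal
        nx2 += nx * nx
        ny2 += ny * ny
    return {"x": nx2 / len(pairs), "y": ny2 / len(pairs)}

# A wall parallel to the x axis: translation along x is unconstrained
corridor = [(-2.0 + 0.1 * i, 1.0) for i in range(41)]
conf = scan_confidence(corridor)   # conf["y"] near 1, conf["x"] near 0
```

Here the confidence adapts automatically as the point cloud changes, matching the "adaptive confidence" behavior described above.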
In step S030, an adaptive weight of pose constraint of the at least one detection data in the graph optimization model is calculated according to the adaptive confidence. It can be understood that, in a given scene, the importance or reliability of the data detected by the different sensors 40 differs, so the proportion of each sensor's data in the subsequent calculation is not the same; the adaptive weight of the pose constraint of the at least one detection data in the graph optimization model therefore needs to be calculated according to the adaptive confidence, so that the subsequently calculated optimized pose of the intelligent robot 100 is more accurate. For example, when the wheels of the intelligent robot 100 are worn, the measurement of the mileage sensor 42 is prone to a large error; when the intelligent robot 100 operates among high-rise buildings, the GPS sensor 43 is prone to multipath effects, so that no fixed solution exists; and when the intelligent robot 100 operates in the long-corridor environment M, the laser sensor 41 is likely to encounter scenes such as matching failure (as shown in fig. 4). These scenes may introduce wrong constraints, or give the wrong constraints too large a weight, so that the calculated pose of the intelligent robot 100 is inaccurate and a large error occurs. Therefore, the adaptive weights corresponding to the detection data need to be adjusted in real time to reduce the influence of erroneous constraints on the graph optimization model and its objective function. Further, adjusting the adaptive weight corresponding to each detection data in real time first requires calculating the confidence, and then calculating the adaptive weights corresponding to the plurality of detection data according to the confidence. It is understood that when the confidence changes, the adaptive weight of the detection data changes accordingly.
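One simple way to realize this weighting (an assumed scheme for illustration only, not the patent's formula): keep the laser constraint's weight equal to its confidence in each degree of freedom and split the complement evenly among the other sensors, so a corridor-induced drop in x confidence automatically raises the influence of odometry, GPS, and UWB in x.

```python
def constraint_weights(laser_conf, other_sensors=("odom", "gps", "uwb")):
    """Per-degree-of-freedom adaptive weights (hypothetical scheme).

    The laser constraint keeps a weight equal to its confidence; the
    remaining (1 - confidence) is shared evenly by the other sensors.
    """
    weights = {"laser": dict(laser_conf)}
    share = 1.0 / len(other_sensors)
    for name in other_sensors:
        weights[name] = {dof: (1.0 - c) * share for dof, c in laser_conf.items()}
    return weights

# Corridor case: laser barely trusted in x, strongly trusted in y
w = constraint_weights({"x": 0.1, "y": 0.9})
```

With this scheme each other sensor picks up weight 0.3 in x but only about 0.033 in y, mirroring the real-time rebalancing described above.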
For example, the sensors 40 further include a mileage sensor 42, a GPS sensor 43, a UWB sensor 44, a vision sensor 45, a WIFI sensor (not shown in the drawing), a bluetooth sensor (not shown in the drawing), and a 5G communication sensor (not shown in the drawing). When calculating the adaptive weight, the adaptive weight, in the graph optimization model, of the pose constraint of the detection data of one or more of the mileage sensor 42, the GPS sensor 43, the UWB sensor 44, the vision sensor 45, the WIFI sensor, the bluetooth sensor, and the 5G communication sensor may be calculated according to the confidence in the x direction, the confidence in the y direction, and the confidence in the yaw angle.
In step S040, a graph optimization model and an objective function of the graph optimization model are constructed according to at least one detection data and the adaptive weight. The adaptive weight corresponding to each detection data has been calculated in step S030; the graph optimization model and its objective function are then constructed according to the at least one detection data and the adaptive weight corresponding to each detection data. The graph optimization model thus includes the constraints of a plurality of detection data, and the objective function includes the detection data and adaptive weights of the plurality of sensors 40, which avoids the situation in which an error in the detection data of one sensor 40 makes the finally calculated pose of the intelligent robot 100 inaccurate, and improves the robustness of the system.
Further, in step S050, the optimized pose of the intelligent robot 100 is calculated according to the objective function. The graph optimization model and its objective function were constructed in step S040, and the objective function includes the detection data and adaptive weights of the plurality of sensors 40; the optimized pose of the intelligent robot 100 can be obtained by minimizing the objective function. Specifically, the objective function can be optimized by an optimizer so that its value is minimal; the pose with the minimum objective function value is taken as the optimized pose of the intelligent robot 100 and as the pose of the current positioning of the intelligent robot 100. The positioning of the intelligent robot 100 is thus more accurate, and the intelligent robot 100 can navigate, build a map, or avoid obstacles more accurately with the optimized pose.
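As a toy illustration of why minimizing such a weighted objective is robust (one dimension, absolute constraints only — not the patent's multi-degree-of-freedom objective), consider F(p) = Σ w_i (p − z_i)². Its minimizer is the weighted mean, so a sensor given a small adaptive weight barely shifts the optimized pose even when its measurement is wildly wrong.

```python
def fuse_poses(measurements):
    """Minimize F(p) = sum(w * (p - z)**2) over 1-D measurements (w, z).

    Setting dF/dp = 0 gives the weighted mean, the closed-form
    'optimized pose' for this toy objective.
    """
    total_w = sum(w for w, _ in measurements)
    return sum(w * z for w, z in measurements) / total_w

# A failed sensor (z = 100) with adaptive weight 0.01 barely moves the result
pose = fuse_poses([(1.0, 2.0), (1.0, 2.2), (0.01, 100.0)])
```

The full method replaces this closed form with an iterative optimizer over the whole graph, but the down-weighting effect is the same.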
Referring to fig. 5, fig. 5 is a framework schematic diagram of the intelligent robot graph optimization model. In fig. 5, the white large circles (i.e., W in the figure) are poses of the intelligent robot 100: pose n is the pose of the intelligent robot 100 at the current time, and poses n-29 to n-1 are poses of the intelligent robot 100 at previous times. A sliding window method is used to maintain a graph optimization model containing 30 pose nodes of the intelligent robot 100; after each optimization is finished, the poses of the 30 nodes are adjusted, and pose n is output as the current optimized pose of the robot. Of course, a graph optimization model with another number of pose nodes of the intelligent robot 100 may also be maintained.
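The 30-node sliding window can be sketched as follows (an illustrative container only; `maxlen` evicts the oldest node exactly as the window slides):

```python
from collections import deque

class SlidingWindowGraph:
    """Maintains the most recent N pose nodes (N = 30 in the embodiment).

    Adding node n when the window is full drops node n-30; the newest
    node is returned as the current optimized pose after each optimization.
    """

    def __init__(self, size=30):
        self.nodes = deque(maxlen=size)

    def add_and_output(self, pose):
        self.nodes.append(pose)
        return self.nodes[-1]          # pose n: the current optimized pose

window = SlidingWindowGraph()
for step in range(35):
    current = window.add_and_output(step)
# window now holds poses 5..34; `current` is 34
```

The per-node pose adjustment performed by the optimizer is omitted here; only the window bookkeeping is shown.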
Referring to fig. 3 and 6, in some embodiments, step S040 includes the following steps:
S041: constructing a graph optimization model including the track of the intelligent robot 100, wherein the graph optimization model takes the detection pose matched with the laser data as a node; and
S042: based on the at least one adaptive weight, generating a relative pose constraint and/or an absolute pose constraint between the detection data and the node.
In some embodiments, the construction module 240 is configured to construct a graph optimization model including the trajectory of the intelligent robot 100, the graph optimization model having detection poses matched with the laser data as nodes; and generate relative pose constraints and/or absolute pose constraints between the detection data and the nodes based on the at least one adaptive weight. That is, the construction module 240 may be used to implement step S041 and step S042.
In some embodiments, the processor 10 is further configured to construct a graph optimization model including the trajectory of the intelligent robot 100, the graph optimization model having as nodes the detection poses matched with the laser data; and generating relative pose constraints and/or absolute pose constraints between the detection data and the nodes based on the at least one adaptive weight. That is, the processor 10 is configured to implement step S041 and step S042.
Specifically, the intelligent robot 100 outputs the current pose in real time while driving, and connecting all the output poses generates a curve in which the spatial position of the intelligent robot 100 continuously changes with time. Further, a graph optimization model including the track of the intelligent robot 100 is constructed, where the nodes of the graph optimization model are the detection poses matched from the laser data detected by the laser sensor 41, namely the detection poses obtained after the laser data are matched with the running map of the intelligent robot 100. Then, based on at least one adaptive weight, a constraint relation between the detection data of each sensor 40 and the nodes of the graph optimization model is generated, the constraint relation including one or both of a relative constraint relation and an absolute constraint relation. The graph optimization model therefore includes the constraint relations given by the plurality of sensors 40, its error is smaller, and the pose of the intelligent robot 100 finally calculated from the objective function of the graph optimization model is more accurate.
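A minimal container for the model described above might look like this (structure only, as a sketch of the described design: node poses are the scan-matched detection poses, and each sensor contributes weighted relative or absolute edges; the optimization itself is omitted):

```python
class PoseGraph:
    """Pose-graph skeleton: nodes plus weighted constraint edges (sketch)."""

    def __init__(self):
        self.nodes = []   # detection poses matched against the map
        self.edges = []   # ("rel"|"abs", node ids, measurement, weight)

    def add_node(self, pose):
        self.nodes.append(pose)
        return len(self.nodes) - 1

    def add_relative(self, i, j, measurement, weight):
        """Constraint between two nodes, e.g. an odometry difference."""
        self.edges.append(("rel", (i, j), measurement, weight))

    def add_absolute(self, i, measurement, weight):
        """Constraint tying one node to a GPS/UWB/vision measurement."""
        self.edges.append(("abs", (i,), measurement, weight))

graph = PoseGraph()
a = graph.add_node((0.0, 0.0, 0.0))
b = graph.add_node((1.0, 0.0, 0.0))
graph.add_relative(a, b, (1.0, 0.0, 0.0), weight=0.8)   # odometry edge
graph.add_absolute(b, (1.1, 0.0, 0.0), weight=0.5)      # GPS-style edge
```

The weights attached to the edges are exactly the adaptive weights computed in step S030.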
Further, referring to fig. 3 and 6, in some embodiments, step S042 includes one or more of the following steps:
S0421: generating relative pose constraints between detection nodes of the mileage sensor 42;
S0422: generating relative pose constraints obtained by matching the detection data of the laser sensor 41 with a map;
S0423: generating absolute pose constraints between the detection data of the GPS sensor 43 and the nodes;
S0424: generating absolute pose constraints between the detection data of the UWB sensor 44 and the nodes;
S0425: generating absolute pose constraints between the detection data of the vision sensor 45 and the nodes;
S0426: generating absolute pose constraints between the detection data of the WIFI sensor and the nodes;
S0427: generating absolute pose constraints between the detection data of the bluetooth sensor and the nodes; and
S0428: generating absolute pose constraints between the detection data of the 5G communication sensor and the nodes.
In certain embodiments, the construction module 240 is further configured to generate at least one of the following constraints: relative pose constraints between detection nodes of the mileage sensor 42; relative pose constraints obtained by matching the detection data of the laser sensor 41 with a map; absolute pose constraints between the detection data of the GPS sensor 43 and the nodes; absolute pose constraints between the detection data of the UWB sensor 44 and the nodes; absolute pose constraints between the detection data of the vision sensor 45 and the nodes; absolute pose constraints between the detection data of the WIFI sensor and the nodes; absolute pose constraints between the detection data of the bluetooth sensor and the nodes; and absolute pose constraints between the detection data of the 5G communication sensor and the nodes. That is, the construction module 240 is further configured to implement one or more of step S0421, step S0422, step S0423, step S0424, step S0425, step S0426, step S0427, and step S0428.
In some embodiments, the processor 10 is further configured to generate at least one of the following constraints: relative pose constraints between detection nodes of the mileage sensor 42; relative pose constraints obtained by matching the detection data of the laser sensor 41 with a map; absolute pose constraints between the detection data of the GPS sensor 43 and the nodes; absolute pose constraints between the detection data of the UWB sensor 44 and the nodes; absolute pose constraints between the detection data of the vision sensor 45 and the nodes; absolute pose constraints between the detection data of the WIFI sensor and the nodes; absolute pose constraints between the detection data of the bluetooth sensor and the nodes; and absolute pose constraints between the detection data of the 5G communication sensor and the nodes. That is, the processor 10 is further configured to implement one or more of step S0421, step S0422, step S0423, step S0424, step S0425, step S0426, step S0427, and step S0428.
The detection pose of the intelligent robot 100 detected by the laser sensor 41 can be obtained by matching the laser point cloud with the map; the detection pose detected by the mileage sensor 42 can be obtained from the detection data of the mileage sensor 42; the detection pose detected by the GPS sensor 43 can be obtained from the detection data of the GPS sensor 43; the detection pose detected by the UWB sensor 44 can be obtained from the detection data of the UWB sensor 44; the detection pose detected by the visual sensor 45 can be obtained from the detection data of the visual sensor 45; the detection pose detected by the WIFI sensor can be obtained from the detection data of the WIFI sensor; the detection pose detected by the bluetooth sensor can be obtained from the detection data of the bluetooth sensor; and the detection pose detected by the 5G communication sensor can be obtained from the detection data of the 5G communication sensor. That is, each sensor can obtain a corresponding detection pose for the intelligent robot 100, and each detection pose can exert a certain constraint effect on the optimized pose of the intelligent robot 100.
Specifically, all of steps S0421 to S0425 may be implemented; only one of steps S0421 to S0428 may be implemented; or, for example, only steps S0421, S0422, and S0423 may be implemented. Which steps are specifically implemented is not limited herein.
Further, since the mileage sensor 42 measures accurately over a short time but is likely to accumulate error over a long measurement, in step S0421 the detection data of the mileage sensor 42 at the current time and at the previous time are acquired, and the difference between them is calculated; this difference is the relative pose constraint between the detection nodes of the mileage sensor 42. Generating the relative pose constraint between the detection nodes of the mileage sensor 42 to constrain the pose of the intelligent robot 100 can effectively reduce the accumulated error of the mileage sensor 42.
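In SE(2) the "difference" between two odometry readings is taken in the frame of the earlier pose rather than by plain subtraction; a sketch with (x, y, yaw) triples and the yaw difference wrapped to [−π, π):

```python
import math

def relative_pose(prev, curr):
    """Relative pose constraint between consecutive odometry poses,
    expressed in the body frame of `prev` (each pose is (x, y, yaw))."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    c, s = math.cos(prev[2]), math.sin(prev[2])
    dyaw = (curr[2] - prev[2] + math.pi) % (2.0 * math.pi) - math.pi
    # rotate the world-frame displacement into the previous body frame
    return (c * dx + s * dy, -s * dx + c * dy, dyaw)

# Robot facing +y moves 1 m forward: relative motion is 1 m along its own x
delta = relative_pose((0.0, 0.0, math.pi / 2), (0.0, 1.0, math.pi / 2))
```

Such a frame-relative difference is what makes the odometry edge independent of where in the map the motion happened.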
Further, by matching the detection data of the laser sensor 41 with the map of the intelligent robot 100, the relative pose constraint between the detection data of the laser sensor 41 and the map data can be obtained. The detection data of the GPS sensor 43 can give absolute pose constraints for the nodes of the graph optimization model, and so can the detection data of the UWB sensor 44, the vision sensor 45, the WIFI sensor, the bluetooth sensor, and the 5G communication sensor.
In one embodiment, the working scene of the intelligent robot is outdoor, and the mutual constraint relationship between the nodes of the graph optimization model can be established through the detection data corresponding to the laser sensor 41, the mileage sensor 42, the GPS sensor 43 and the visual sensor 45, so that the intelligent robot 100 can be more accurately positioned when working outdoors.
In another embodiment, the working scene of the intelligent robot is indoor, and a mutual constraint relationship between the nodes of the graph optimization model of the intelligent robot 100 can be established through detection data corresponding to sensors such as the laser sensor 41, the mileage sensor 42, the visual sensor 45, the WIFI sensor, the bluetooth sensor, and the 5G communication sensor, so that the intelligent robot 100 can be positioned more accurately when working indoors.
Further, referring to fig. 5 again, in some embodiments, a graph optimization model including 30 pose nodes of the intelligent robot 100 is maintained by a sliding window method, where the white large circles (i.e., W in the figure) in fig. 5 are poses of the intelligent robot 100, pose n (i.e., W1) is the pose of the intelligent robot 100 at the current time, and poses n-29 to n-1 are the poses of the intelligent robot 100 at previous times. In this embodiment, after the pose of the intelligent robot 100 is calculated each time, the poses of the 30 nodes are adjusted, and pose n is output as the pose of the intelligent robot 100 at the current time, that is, as the optimized pose of the intelligent robot 100. The dark rectangular frames between the poses at two adjacent moments are the differences between the detection data of two adjacent frames of the mileage sensor 42, which can constrain the pose of the intelligent robot 100. In fig. 5, the dark gray circles are the detection poses G of the GPS sensor 43, and the dark gray rectangular frames are the absolute pose constraints YG between the detection poses of the GPS sensor 43 and the nodes of the graph optimization model. In an outdoor environment, the GPS sensor 43 can provide accurate pose measurement, and its measurement result has no accumulated error, so the detection pose G of the GPS sensor 43 can directly constrain the pose of the corresponding intelligent robot 100.
Further, the light gray circles in fig. 5 are the detection poses U of the UWB sensor 44, and the light gray rectangular frames in fig. 5 are the absolute pose constraints YU between the detection poses of the UWB sensor 44 and the nodes of the graph optimization model. Since the UWB sensor 44 provides an instantaneous pose measurement without accumulated error in an indoor environment, the absolute pose of the intelligent robot 100 can be obtained by setting up 3 or more base stations, and the detection pose U of the UWB sensor 44 can directly constrain the pose of the corresponding intelligent robot 100. The white small circles in fig. 5 are the poses calculated from the detection data of the vision sensor 45, and the white rectangles are the absolute pose constraints between the detection data of the vision sensor 45 and the nodes of the graph optimization model; because the vision sensor 45 has no accumulated error, its detection data can directly constrain the pose of the intelligent robot 100. By generating the corresponding constraints between the poses of the intelligent robot 100 and the laser sensor 41, the mileage sensor 42, the GPS sensor 43, the UWB sensor 44, and the vision sensor 45, the detection data of each sensor 40 can be fully considered when the objective function of the graph optimization model is subsequently calculated, so that the detection error introduced by any single sensor 40 is reduced and the obtained pose of the intelligent robot 100 is more accurate.
Of course, the pose of the intelligent robot 100 may also be obtained by adding detection data of sensors such as a radio frequency sensor and an ultrasonic sensor to generate a corresponding constraint relationship.
Referring to fig. 8, in some embodiments, before performing step S030, the positioning method further includes the following steps:
S060: acquiring a detection data difference of the mileage sensor 42 between the current time and the previous time;
S070: acquiring an initial pose of the intelligent robot 100 at the current time according to the detection data difference and the pose of the intelligent robot 100 at the previous time; and
S080: acquiring a corrected pose according to the initial pose and the laser data.
In some embodiments, the first calculating module 220 is further configured to obtain a difference value between the detection data of the mileage sensor 42 at the current time and the detection data of the mileage sensor at the previous time; acquiring an initial pose of the intelligent robot 100 at the current moment according to the detection data difference and the pose of the intelligent robot 100 at the previous moment; and acquiring the corrected pose according to the initial pose and the laser data. That is, the first calculating module 220 can also be used to implement step S060, step S070, and step S080.
In some embodiments, the processor 10 may be further configured to obtain a difference between the detection data of the mileage sensor 42 at the current time and the previous time; acquiring an initial pose of the intelligent robot 100 at the current moment according to the detection data difference and the pose of the intelligent robot 100 at the previous moment; and acquiring the corrected pose according to the initial pose and the laser data. That is, the processor 10 may also be configured to implement step S060, step S070, and step S080.
Specifically, the detection data of the mileage sensor 42 at the current time and at the previous time are obtained, and the detection data at the previous time is subtracted from the detection data at the current time to obtain the detection data difference of the mileage sensor 42. This difference is then accumulated onto the pose of the intelligent robot 100 at the previous time to obtain the initial pose of the intelligent robot 100 at the current time. Finally, the corrected pose is obtained by matching the laser point cloud detected by the laser sensor 41, together with the initial pose, against the map.
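The accumulation of the odometry difference onto the previous pose can be sketched as follows. This assumes the odometry difference is expressed in the robot's body frame; the function names are illustrative:

```python
import math

def se2_compose(pose, delta):
    """Compose an SE(2) pose with a relative motion expressed in the pose's frame."""
    x, y, th = pose
    dx, dy, dth = delta
    nx = x + dx * math.cos(th) - dy * math.sin(th)
    ny = y + dx * math.sin(th) + dy * math.cos(th)
    nth = math.atan2(math.sin(th + dth), math.cos(th + dth))  # wrap to (-pi, pi]
    return (nx, ny, nth)

def predict_initial_pose(prev_pose, odom_now, odom_prev):
    """Difference of two odometry readings, accumulated onto the previous pose.
    Componentwise subtraction is a simplification for small inter-frame motion."""
    delta = tuple(a - b for a, b in zip(odom_now, odom_prev))
    return se2_compose(prev_pose, delta)
```

The result serves only as the initial guess that the laser matching step then refines.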
More specifically, the laser point cloud and the initial pose can be matched against the map using the ICP (Iterative Closest Point) algorithm to obtain the corrected pose; the NDT (Normal Distributions Transform) algorithm may also be used for the matching; or the laser point cloud and the initial pose may be matched with the map by other algorithms to obtain the corrected pose, which is not limited herein.
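A compact sketch of point-to-point ICP in 2D is given below. It is a generic textbook form, not the patent's implementation: each source point is matched to its nearest map point and the best rigid correction is solved in closed form, repeatedly:

```python
import numpy as np

def icp_2d(src, dst, init_pose=(0.0, 0.0, 0.0), iters=20):
    """Point-to-point ICP in 2D: refine an initial pose so that the source
    point cloud (laser scan) aligns with the destination (map) cloud."""
    x, y, th = init_pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    t = np.array([x, y], dtype=float)
    pts = src @ R.T + t  # apply the initial pose guess
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((pts[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # optimal rigid transform for these correspondences (Kabsch/SVD)
        pc, nc = pts.mean(axis=0), nn.mean(axis=0)
        H = (pts - pc).T @ (nn - nc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
        dR = Vt.T @ D @ U.T
        dt = nc - dR @ pc
        pts = pts @ dR.T + dt
        R, t = dR @ R, dR @ t + dt  # accumulate the correction
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])
```

NDT replaces the per-point nearest-neighbour step with matching against a grid of local normal distributions, but the overall refine-from-an-initial-pose structure is the same.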
Further, referring to fig. 3 and 9, step S030 includes the following steps:
s031: judging whether the difference between the timestamp of the detection data and the timestamp of the pose of the intelligent robot 100 is smaller than a preset difference threshold value;
if the determination result in step S031 is yes, then step S032 is executed: and calculating the self-adaptive weight of the pose constraint of the detection data in the graph optimization model.
In some embodiments, the second calculation module 230 is further configured to determine whether a difference between the timestamp of the detection data and the timestamp of the pose of the intelligent robot 100 is less than a preset difference threshold; and if so, calculating the self-adaptive weight of the pose constraint of the detection data in the graph optimization model. That is, the second calculating module 230 can also be used to implement step S031 and step S032.
In some embodiments, the processor 10 is further configured to determine whether a difference between the timestamp of the detection data and the timestamp of the pose of the intelligent robot 100 is less than a preset difference threshold; and if so, calculating the self-adaptive weight of the pose constraint of the detection data in the graph optimization model. That is, the processor 10 may also be configured to implement step S031 and step S032.
Specifically, each sensor 40 detects data in real time, and each piece of detection data corresponds to a timestamp. When the pose of the intelligent robot 100 is calculated and the difference between the timestamp corresponding to the detection data and the timestamp of the pose of the intelligent robot 100 is smaller than the preset difference threshold, that is, the difference is within the allowable range, the detection data is considered valid and the adaptive weight corresponding to the detection data can be calculated. The timestamp of the pose of the intelligent robot 100 is the system time of the intelligent robot 100 at the current moment. The preset difference threshold may be a user-defined value or a value obtained through repeated debugging; the smaller the preset difference threshold, the more accurate the corresponding detection data. The preset difference threshold may be one millisecond, five milliseconds, ten milliseconds, thirty milliseconds, fifty milliseconds, one hundred milliseconds, or the like.
Further, if the difference between the timestamp corresponding to the detection data and the timestamp of the pose of the intelligent robot 100 is greater than the preset difference threshold, the detection data of that sensor 40 is discarded, so as to avoid a large deviation in the finally obtained pose. That is, when the determination result of step S031 is no, the detection data of the corresponding sensor 40 is discarded, and the corrected pose of the intelligent robot 100 is not calculated from that detection data.
In one embodiment, if the difference between the timestamp of the GPS sensor 43 and the timestamp of the current pose of the intelligent robot 100 is within milliseconds, the detection pose of the GPS sensor 43 is considered valid data, and the adaptive weight corresponding to the detection data of the GPS sensor 43 can be calculated; if the difference between the timestamp of the UWB sensor 44 and the timestamp of the current pose of the intelligent robot 100 is within milliseconds, the detection pose of the UWB sensor 44 is considered valid data, and the adaptive weight corresponding to the detection pose of the UWB sensor 44 can be calculated; and if the difference between the timestamp of the vision sensor 45 and the timestamp of the pose of the intelligent robot 100 is within milliseconds, the detection pose of the vision sensor 45 is considered valid data, and the adaptive weight corresponding to the detection pose of the vision sensor 45 can be calculated. In this way, the detection data of the GPS sensor 43, the UWB sensor 44, and the vision sensor 45 are all very close to the current time, and the calculated pose of the intelligent robot 100 is more accurate.
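The timestamp gate described in steps S031/S032 reduces to a simple comparison. A minimal sketch, with illustrative names and timestamps in seconds:

```python
def is_valid_detection(data_ts, pose_ts, threshold):
    """A reading is valid only if its timestamp lies within `threshold`
    of the pose timestamp; otherwise it should be discarded."""
    return abs(data_ts - pose_ts) < threshold

def filter_detections(detections, pose_ts, threshold=0.010):
    """Keep only the sensor readings close enough in time to the pose
    being estimated (default gate: ten milliseconds)."""
    return [d for d in detections if is_valid_detection(d["stamp"], pose_ts, threshold)]
```

Only the readings that pass this gate go on to have adaptive weights computed; the rest never enter the graph optimization.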
Further, referring to fig. 10, in some embodiments, step S040 includes the following steps:
s043: calculating a residual error between the at least one detection pose and a pose to be optimized of the intelligent robot 100;
s044: acquiring an information matrix of detection data corresponding to the adaptive weight according to the adaptive weight; and
s045: and constructing an objective function according to the information matrix and the residual error.
In some embodiments, the building module 240 is further configured to calculate a residual error between the at least one detection pose and the pose to be optimized of the intelligent robot 100; acquiring an information matrix of detection data corresponding to the adaptive weight according to the adaptive weight; and constructing an objective function according to the information matrix and the residual error. That is, the building module 240 may also be used to implement step S043, step S044 and step S045.
In some embodiments, the processor 10 is further configured to calculate a residual between the at least one detection pose and the pose to be optimized of the intelligent robot 100; acquiring an information matrix of detection data corresponding to the adaptive weight according to the adaptive weight; and constructing an objective function according to the information matrix and the residual error. That is, the processor 10 may also be configured to implement step S043, step S044 and step S045.
Specifically, in step S010, the detection data of each sensor 40 is acquired, wherein each sensor 40 corresponds to one detection pose, and a residual is calculated between the detection pose of each sensor 40 and the pose of the intelligent robot 100. It can be understood that the pose to be optimized of the intelligent robot 100 is an assumed pose that needs to be optimized to obtain the optimized pose of the intelligent robot 100; the residual therefore contains the pose to be optimized, so that the optimized pose of the intelligent robot 100 can subsequently be calculated from the residual. In step S030, the adaptive weight corresponding to the detection data of each sensor 40 is acquired, and from this adaptive weight an information matrix of the corresponding detection data can further be obtained, that is, one information matrix for the detection data of each sensor 40.
The order of implementing step S043 and step S044 is not limited herein: step S043 may be implemented first, step S044 may be implemented first, or step S043 and step S044 may be implemented simultaneously. Finally, in step S045, the objective function is constructed from the residual obtained in step S043 and the information matrix obtained in step S044. Specifically, the objective function is constructed from the residual and the information matrix of each sensor 40, so that the objective function includes the detection data of every sensor 40 and the problem of a large error from any single sensor 40 can be overcome.
In certain embodiments, the laser sensor 41 is capable of calculating the confidence C = [c_x, c_y, c_θ] in real time according to a matching algorithm, with 0 ≤ c_x ≤ 30, 0 ≤ c_y ≤ 30, and 0 ≤ c_θ ≤ 30, wherein c_x represents the uncertainty in the x direction, c_y the uncertainty in the y direction, and c_θ the uncertainty of the yaw angle. In a regular environment, the laser sensor 41 has a good matching effect, the uncertainty of all three degrees of freedom is low, and the C values are small; otherwise, the C values are large. For example, when the intelligent robot 100 is in the long corridor M, c_x will be very large while c_y and c_θ are both small; c_x is large mainly because, with the robot in the long corridor M, the front-back direction is the x direction, and the matching algorithm has large uncertainty in that direction owing to the lack of effective constraint. Considering that the working scene of the intelligent robot 100 is relatively complex and the uncertainty of the laser sensor 41 is large, the laser sensor 41 needs adaptive weights in the three degrees of freedom x, y, and yaw angle, calculated as follows:
Adaptive weight in the x direction: w_x^lidar = min(A, B × (total − c_x)/total);
Adaptive weight in the y direction: w_y^lidar = min(A, B × (total − c_y)/total);
Adaptive weight in the yaw angle θ direction: w_θ^lidar = min(A, B × (total − c_θ)/total);
Wherein total = c_x + c_y + c_θ, and the superscript lidar represents the matching result of the laser point cloud with the map.
The adaptive weight corresponding to the detection data of the mileage sensor 42 is calculated using the difference between two adjacent frames of detection data; the mileage sensor 42 directly uses a single adaptive weight w_odom in place of the three per-degree-of-freedom weights, calculated as: w_odom = min(C, min(c_x, c_y, c_θ) × 2).
The adaptive weight w_GPS corresponding to the detection data of the GPS sensor 43 is calculated as: w_GPS = min(D, max(c_x, c_y, c_θ));
the adaptive weight w_UWB corresponding to the detection data of the UWB sensor 44 is calculated as: w_UWB = min(E, max(c_x, c_y, c_θ));
the adaptive weight w_CAM corresponding to the detection data of the vision sensor 45 is calculated as: w_CAM = min(F, max(c_x, c_y, c_θ)).
A, B, C, D, E, and F are adaptive weight values, which may be preset fixed values or empirical values obtained through debugging, and are not limited herein. In one embodiment, with the user-chosen values A = 10, B = 20, C = 50, D = 20, E = 10, and F = 30, the weights become:
Adaptive weight of the laser sensor 41 in the x direction: w_x^lidar = min(10, 20 × (total − c_x)/total);
adaptive weight of the laser sensor 41 in the y direction: w_y^lidar = min(10, 20 × (total − c_y)/total);
adaptive weight in the yaw angle θ direction: w_θ^lidar = min(10, 20 × (total − c_θ)/total);
adaptive weight corresponding to the detection data of the mileage sensor 42: w_odom = min(50, min(c_x, c_y, c_θ) × 2);
adaptive weight corresponding to the detection data of the GPS sensor 43: w_GPS = min(20, max(c_x, c_y, c_θ));
adaptive weight corresponding to the detection data of the UWB sensor 44: w_UWB = min(10, max(c_x, c_y, c_θ));
adaptive weight corresponding to the detection data of the vision sensor 45: w_CAM = min(30, max(c_x, c_y, c_θ)).
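The weight formulas above are direct to implement. The sketch below uses the example constants A = 10, B = 20, C = 50, D = 20, E = 10, F = 30; the function names are illustrative:

```python
def lidar_weights(cx, cy, ct, A=10, B=20):
    """Per-degree-of-freedom adaptive weights for the laser sensor:
    the lower the uncertainty in a direction, the larger its weight (capped at A)."""
    total = cx + cy + ct
    return tuple(min(A, B * (total - c) / total) for c in (cx, cy, ct))

def sensor_weights(cx, cy, ct, C=50, D=20, E=10, F=30):
    """Single adaptive weight per auxiliary sensor, derived from the laser
    confidence: odometry from the best direction, the absolute sensors from
    the worst, each capped by its preset value."""
    return {
        "odom": min(C, min(cx, cy, ct) * 2),
        "gps": min(D, max(cx, cy, ct)),
        "uwb": min(E, max(cx, cy, ct)),
        "cam": min(F, max(cx, cy, ct)),
    }
```

In the long-corridor case (large c_x, small c_y and c_θ), the x weight of the laser match drops while the absolute sensors retain their caps, which is exactly the intended re-balancing.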
Referring to fig. 6, assume that the pose of the intelligent robot 100 at time n (i.e., the current time) is x_n = [p_n^T, φ_n]^T, wherein p_n is the linear displacement and φ_n is the rotation angle. Let the pose difference detected by the mileage sensor 42 from time i to time j be Δx_ij^odom, the detection pose of the GPS sensor 43 at the n-th time be x_n^GPS, the detection pose of the UWB sensor 44 at the n-th time be x_n^UWB, and the detection pose of the vision sensor 45 at the n-th time be x_n^CAM.
A residual is formed between the matching pose obtained by matching the laser sensor 41 with the map and the pose to be optimized of the intelligent robot 100, wherein the angle difference is normalized, i.e., limited to between −180° and +180°, and R_i is the rotation matrix constructed from the angle of the pose at the i-th time.
Residuals are formed in the same way between the inter-frame pose difference detected by the mileage sensor 42 and the inter-frame pose difference to be optimized of the intelligent robot 100; between the detection pose of the GPS sensor 43 and the pose to be optimized of the intelligent robot 100; between the detection pose of the UWB sensor 44 and the pose to be optimized of the intelligent robot 100; and between the detection pose of the vision sensor 45 and the pose to be optimized of the robot.
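The residual equations themselves are not reproduced in this text. A common SE(2) form consistent with the surrounding description (position difference plus an angle difference normalized to ±180°) can be sketched as follows; the exact form and the function names are assumptions, not the patent's formulas:

```python
import math

def normalize(angle):
    """Wrap an angle difference into (-pi, pi], matching the ±180° limit."""
    return math.atan2(math.sin(angle), math.cos(angle))

def absolute_residual(pose, measured):
    """Residual between the pose to be optimized and an absolute detection pose
    (laser match, GPS, UWB or camera): (dx, dy, normalized dtheta)."""
    return (pose[0] - measured[0],
            pose[1] - measured[1],
            normalize(pose[2] - measured[2]))
```

For the odometry case the same idea applies to the difference between two consecutive poses and the measured inter-frame pose difference.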
Further, an information matrix is associated with the detection data of each sensor: one corresponding to the detection data of the laser sensor 41, one to the mileage sensor 42, one to the GPS sensor 43, one to the UWB sensor 44, and one to the vision sensor 45, each obtained from the corresponding adaptive weight.
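The information matrices are not shown explicitly in this text. A common construction, assumed here for illustration, is a diagonal matrix whose entries are the adaptive weights: one weight per degree of freedom for the laser match, and a single weight replicated on the diagonal for the other sensors:

```python
import numpy as np

def lidar_information(wx, wy, wt):
    """Diagonal information matrix for the laser match residual,
    one adaptive weight per degree of freedom (assumed form)."""
    return np.diag([float(wx), float(wy), float(wt)])

def scalar_information(w, dim=3):
    """Information matrix for a sensor that carries a single adaptive
    weight (odometry, GPS, UWB, camera)."""
    return float(w) * np.eye(dim)
```

A larger weight means the corresponding residual is trusted more, so its term dominates the objective function.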
Further, the objective function of the graph optimization model is constructed from the residual and the adaptive weight of the detection data of each sensor 40, summing the residual terms of all sensors weighted by their information matrices. By solving for the minimum value of the objective function, the pose corresponding to the minimum value is the optimized pose of the intelligent robot 100, that is, the pose of the intelligent robot 100 at the current moment. The pose error obtained in this way is minimal, and taking this pose as the output pose makes the navigation, obstacle avoidance, and positioning of the intelligent robot 100 more accurate. In one embodiment, the Google Ceres solver is used, and the minimum value of the objective function is computed with the Gauss-Newton method.
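The shape of such an objective can be illustrated in the simplest setting: a single node with only absolute constraints, residual e_k = x − z_k and information matrix Ω_k = w_k·I. For this linear case one Gauss-Newton step gives the exact minimizer, the weighted average of the measurements. This sketch is a toy instance, not the patent's full multi-node objective:

```python
import numpy as np

def objective(x, measurements):
    """F(x) = sum_k e_k^T Ω_k e_k with e_k = x - z_k and Ω_k = w_k * I,
    i.e. the weighted squared residuals of all absolute constraints."""
    return sum(w * float((x - z) @ (x - z)) for z, w in measurements)

def solve_single_node(measurements):
    """Because the residuals are linear in x, Gauss-Newton converges in one
    step to the weighted average of the measured poses."""
    W = sum(w for _, w in measurements)
    return sum(w * z for z, w in measurements) / W
```

In the full model the state stacks all trajectory nodes, the odometry constraints couple consecutive nodes, and a solver such as Ceres iterates Gauss-Newton steps on the stacked residual vector.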
Referring to fig. 2 again, the memory 20 is used for storing a computer program that can run on the processor 10, and the processor 10 implements the positioning method of any of the above embodiments when executing the computer program.
The memory 20 may comprise high-speed RAM memory and may also include non-volatile memory (non-volatile memory), such as at least one disk memory. Further, the intelligent robot 100 may further include a communication interface 30, and the communication interface 30 is used for communication between the memory 20 and the processor 10.
If the memory 20, the processor 10 and the communication interface 30 are implemented independently, the communication interface 30, the memory 20 and the processor 10 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (enhanced Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 2, but it is not intended that there be only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 20, the processor 10, and the communication interface 30 are integrated on a chip, the memory 20, the processor 10, and the communication interface 30 may complete communication with each other through an internal interface.
The processor 10 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present Application.
Referring to fig. 11, a non-transitory computer-readable storage medium 300 according to an embodiment of the present application includes computer-executable instructions 301, and when the computer-executable instructions 301 are executed by one or more processors 400, the processor 400 is configured to perform a positioning method according to any embodiment of the present application.
For example, referring to fig. 1 and 2, when the computer executable instructions 301 are executed by the processor 400, the processor 400 is configured to perform the following steps:
s010: acquiring a plurality of detection data detected by the plurality of sensors 40, wherein the laser sensor 41 is used for detecting laser data;
s020: calculating the self-adaptive confidence of the laser data in at least one degree of freedom according to the laser data;
s030: calculating the self-adaptive weight of pose constraint of at least one detection data in the graph optimization model according to the self-adaptive confidence coefficient;
s040: constructing a graph optimization model and an objective function of the graph optimization model according to at least one piece of detection data and self-adaptive weight; and
s050: and calculating the optimized pose of the intelligent robot 100 according to the objective function.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the terms "certain embodiments," "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one of the feature. In the description of the present application, "a plurality" means at least two, e.g., two, three, unless specifically limited otherwise.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (10)

1. A positioning method is applied to an intelligent robot, a plurality of sensors are installed on the intelligent robot and used for detecting the detection pose of the intelligent robot, the sensors comprise laser sensors, and the positioning method comprises the following steps:
acquiring a plurality of detection data detected by a plurality of sensors when detecting the detection pose, wherein the laser sensor is used for detecting laser data;
calculating an adaptive confidence of the laser data in at least one degree of freedom from the laser data;
calculating the self-adaptive weight of pose constraint of at least one detection data in a graph optimization model according to the self-adaptive confidence coefficient;
constructing the graph optimization model and an objective function of the graph optimization model according to at least one detection data and the self-adaptive weight; and
calculating the optimized pose of the intelligent robot according to the objective function;
the self-adaptive confidence coefficient comprises a confidence coefficient in an x direction, a confidence coefficient in a y direction and a confidence coefficient in a yaw angle, wherein the confidence coefficient in the x direction is uncertainty in the x direction, the confidence coefficient in the y direction is uncertainty in the y direction, and the confidence coefficient in the yaw angle is uncertainty in the yaw angle;
the adaptive weight of the laser data in the x direction is min (A, B × (total-c)x)/total);
The adaptive weight of the laser data in the y direction is min (A, B × (total-c)y)/total);
The adaptive weight of the laser data in the yaw angle direction is min (A, B × (total-c)θ)/total);
Wherein A, B is a fixed value, cxAs confidence in the x-direction, cyAs confidence in the y direction, cθFor confidence in the yaw angle, total = cx+cy+cθ
2. The positioning method according to claim 1, wherein the sensors further comprise a mileage sensor, a GPS sensor, a UWB sensor, a visual sensor, a WIFI sensor, a bluetooth sensor, a 5G communication sensor, and the calculating an adaptive weight of pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence includes:
calculating an adaptive weight of pose constraints of detection data of one or more of the mileage sensor, the GPS sensor, the UWB sensor, the vision sensor, the WIFI sensor, the Bluetooth sensor, and the 5G communication sensor in the graph optimization model according to the confidence in the x direction, the confidence in the y direction, and the confidence in the yaw angle.
3. The method according to claim 1, wherein the constructing the graph optimization model and the objective function of the graph optimization model according to the at least one detection data and the adaptive weight comprises:
calculating a residual error between at least one detection pose and a pose to be optimized of the intelligent robot;
acquiring an information matrix of detection data corresponding to the self-adaptive weight according to the self-adaptive weight; and
and constructing the target function according to the information matrix and the residual error.
4. The method according to claim 1, wherein the constructing the graph optimization model and the objective function of the graph optimization model according to the at least one detection data and the adaptive weight comprises:
constructing the graph optimization model including the trajectory of the intelligent robot, the graph optimization model having detection poses matched with the laser data as nodes; and
generating relative and/or absolute pose constraints between the detection data and the nodes based on at least one of the adaptive weights.
5. The positioning method according to claim 4, wherein the sensors further comprise a mileage sensor, a GPS sensor, a UWB sensor, a vision sensor, a WIFI sensor, a Bluetooth sensor, and a 5G communication sensor, and the generating of the relative pose constraint and/or the absolute pose constraint between the detection data and the node based on at least one of the adaptive weights comprises one or more of the following steps:
generating relative pose constraints between detection nodes of the odometry sensor;
generating relative pose constraint obtained by matching the detection data of the laser sensor with a map;
generating detection data of the GPS sensor and absolute pose constraints of the nodes;
generating detection data of the UWB sensor and absolute pose constraints of the nodes;
generating detection data of the vision sensor and absolute pose constraints of the nodes;
generating detection data of the WIFI sensor and absolute pose constraints of the nodes;
generating detection data of the Bluetooth sensor and absolute pose constraints of the nodes; and
and generating the detection data of the 5G communication sensor and the absolute pose constraint of the node.
6. The localization method according to claim 1, wherein the sensors further comprise a mileage sensor, and before calculating an adaptive weight of a pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence, the localization method further comprises:
acquiring a detection data difference value of the mileage sensor at the current moment and the previous moment;
acquiring an initial pose of the intelligent robot at the current moment according to the detection data difference and the pose of the intelligent robot at the previous moment; and
and acquiring a corrected pose according to the initial pose and the laser data.
7. The localization method according to any one of claims 1 to 6, wherein the calculating an adaptive weight of a pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence level comprises:
judging whether the difference value between the timestamp of the detection data and the timestamp of the optimization pose of the intelligent robot is smaller than a preset difference value threshold value or not; and
and if so, calculating the self-adaptive weight of the pose constraint of the detection data in the graph optimization model.
8. A positioning device, applied to an intelligent robot, wherein a plurality of sensors are installed on the intelligent robot and are used for detecting the detection pose of the intelligent robot, the sensors comprising a laser sensor, characterized in that the positioning device comprises:
an acquisition module configured to acquire a plurality of detection data detected by the plurality of sensors when detecting the detection pose, the laser sensor being configured to detect laser data;
a first calculation module for calculating an adaptive confidence of the laser data in at least one degree of freedom from the laser data;
a second calculation module, configured to calculate an adaptive weight of pose constraint of at least one of the detection data in a graph optimization model according to the adaptive confidence;
a construction module for constructing the graph optimization model and an objective function of the graph optimization model according to at least one of the detection data and the adaptive weight; and
a third calculation module, configured to calculate an optimized pose of the intelligent robot according to the objective function;
wherein the adaptive confidence comprises a confidence in the x direction, a confidence in the y direction, and a confidence in the yaw angle, the confidence in the x direction being the uncertainty in the x direction, the confidence in the y direction being the uncertainty in the y direction, and the confidence in the yaw angle being the uncertainty in the yaw angle;
the adaptive weight of the laser data in the x direction is min(A, B × (total − c_x)/total);
the adaptive weight of the laser data in the y direction is min(A, B × (total − c_y)/total);
the adaptive weight of the laser data in the yaw angle direction is min(A, B × (total − c_θ)/total);
wherein A and B are fixed values, c_x is the confidence in the x direction, c_y is the confidence in the y direction, c_θ is the confidence in the yaw angle, and total = c_x + c_y + c_θ.
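The weight rule above can be written out directly: since the confidences are uncertainties, a larger uncertainty in one degree of freedom yields a smaller weight for the laser constraint in that degree of freedom, capped at A. A sketch in Python (the function name and the example values of the fixed constants A and B are assumptions, not values from the patent):

```python
def adaptive_weights(c_x, c_y, c_theta, A=1.0, B=1.5):
    """Return (w_x, w_y, w_theta) for the laser pose constraint.

    c_x, c_y, c_theta are the uncertainties (confidences) in x, y, and yaw;
    A caps each weight and B scales the normalized complement of each term.
    """
    total = c_x + c_y + c_theta
    if total == 0:  # degenerate case: no uncertainty reported in any DOF
        return A, A, A
    w_x = min(A, B * (total - c_x) / total)
    w_y = min(A, B * (total - c_y) / total)
    w_theta = min(A, B * (total - c_theta) / total)
    return w_x, w_y, w_theta
```

For example, in a long corridor the laser match is poorly constrained along the corridor axis, so that axis's uncertainty is large and its weight shrinks, letting the graph optimizer lean on the other constraints (e.g. odometry) in that direction.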
9. An intelligent robot, comprising:
one or more processors, memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the one or more programs comprising instructions for performing the localization method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the localization method according to any one of claims 1 to 7.
CN202010433361.0A 2020-05-21 2020-05-21 Positioning method and device, intelligent robot and computer readable storage medium Active CN111337018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010433361.0A CN111337018B (en) 2020-05-21 2020-05-21 Positioning method and device, intelligent robot and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111337018A CN111337018A (en) 2020-06-26
CN111337018B true CN111337018B (en) 2020-09-01

Family

ID=71181162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010433361.0A Active CN111337018B (en) 2020-05-21 2020-05-21 Positioning method and device, intelligent robot and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111337018B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111707272B (en) * 2020-06-28 2022-10-14 湖南大学 Underground garage automatic driving laser positioning system
CN111664860B (en) * 2020-07-01 2022-03-11 北京三快在线科技有限公司 Positioning method and device, intelligent equipment and storage medium
CN111735458B (en) * 2020-08-04 2020-11-24 西南石油大学 Navigation and positioning method of petrochemical inspection robot based on GPS, 5G and vision
CN112097772B (en) * 2020-08-20 2022-06-28 深圳市优必选科技股份有限公司 Robot and map construction method and device thereof
CN112833876B (en) * 2020-12-30 2022-02-11 西南科技大学 Multi-robot cooperative positioning method integrating odometer and UWB
CN113324542B (en) * 2021-06-07 2024-04-12 北京京东乾石科技有限公司 Positioning method, device, equipment and storage medium
CN113670290B (en) * 2021-06-30 2023-05-12 西南科技大学 Mobile robot indoor map construction method based on multi-robot cooperation
CN114088085B (en) * 2021-11-19 2023-06-23 安克创新科技股份有限公司 Position determining method and device for robot, electronic equipment and storage medium
CN117572335B (en) * 2024-01-09 2024-04-16 智道网联科技(北京)有限公司 Updating method and device for laser positioning confidence coefficient and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282243A (en) * 2008-03-05 2008-10-08 中科院嘉兴中心微系统所分中心 Method for recognizing distributed amalgamation of wireless sensor network
CN107121979A (en) * 2016-02-25 2017-09-01 福特全球技术公司 Autonomous confidence control
US9927813B1 (en) * 2012-09-28 2018-03-27 Waymo Llc Detecting sensor degradation by actively controlling an autonomous vehicle
CN108492316A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 A kind of localization method and device of terminal
CN109001787A (en) * 2018-05-25 2018-12-14 北京大学深圳研究生院 A kind of method and its merge sensor of solving of attitude and positioning
CN109443348A (en) * 2018-09-25 2019-03-08 同济大学 It is a kind of based on the underground garage warehouse compartment tracking for looking around vision and inertial navigation fusion
CN110553652A (en) * 2019-10-12 2019-12-10 上海高仙自动化科技发展有限公司 robot multi-sensor fusion positioning method and application thereof
CN111044069A (en) * 2019-12-16 2020-04-21 驭势科技(北京)有限公司 Vehicle positioning method, vehicle-mounted equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111337018B (en) Positioning method and device, intelligent robot and computer readable storage medium
US11763568B2 (en) Ground plane estimation in a computer vision system
WO2021135645A1 (en) Map updating method and device
CN111590595B (en) Positioning method and device, mobile robot and storage medium
EP3398166B1 (en) Method for structure from motion processing in a computer vision system
US20190249998A1 (en) Systems and methods for robotic mapping
WO2017028653A1 (en) Method and system for automatically establishing map indoors by mobile robot
US10127677B1 (en) Using observations from one or more robots to generate a spatio-temporal model that defines pose values for a plurality of objects in an environment
US8793069B2 (en) Object recognition system for autonomous mobile body
JP2501010B2 (en) Mobile robot guidance device
EP3825903A1 (en) Method, apparatus and storage medium for detecting small obstacles
CN111539994B (en) Particle filter repositioning method based on semantic likelihood estimation
CN110570449B (en) Positioning and mapping method based on millimeter wave radar and visual SLAM
CN109633664B (en) Combined positioning method based on RGB-D and laser odometer
US20200233061A1 (en) Method and system for creating an inverse sensor model and method for detecting obstacles
CN107526085B (en) Ultrasonic array ranging modeling method and system
Park et al. Radar localization and mapping for indoor disaster environments via multi-modal registration to prior LiDAR map
CN108415417A (en) A kind of robot obstacle-avoiding system and method based on the prediction of barrier motion state
CN111521195B (en) Intelligent robot
CN112204486A (en) Time-of-flight sensor arrangement for robot navigation and method for positioning using the same
WO2022143285A1 (en) Cleaning robot and distance measurement method therefor, apparatus, and computer-readable storage medium
CN111989631A (en) Self-position estimation method
CN102297692A (en) Self-localization method of intelligent wheelchair in corner areas
Rasch et al. Tidy up my room: Multi-agent cooperation for service tasks in smart environments
Lim et al. Multi-object identification for mobile robot using ultrasonic sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant