CN117784151A - Robot positioning method, apparatus, electronic device and storage medium - Google Patents

Robot positioning method, apparatus, electronic device and storage medium

Info

Publication number
CN117784151A
CN117784151A (application CN202311459775.0A)
Authority
CN
China
Prior art keywords
warehouse scene, warehouse, scene image, point cloud data
Prior art date
Legal status
Pending
Application number
CN202311459775.0A
Other languages
Chinese (zh)
Inventor
代晓君
李灏源
尤赟
曾锴
侯成成
谢骏
孙涛
张晨
黄保润
潘贤真
徐伟
樊芳
裴泽平
郭双双
陶涛
李嘉琦
Current Assignee
Sinotrans Innovation Technology Co ltd
China Foreign Transport Co ltd
Original Assignee
Sinotrans Innovation Technology Co ltd
China Foreign Transport Co ltd
Priority date
Filing date
Publication date
Application filed by Sinotrans Innovation Technology Co ltd, China Foreign Transport Co ltd filed Critical Sinotrans Innovation Technology Co ltd
Priority to CN202311459775.0A
Publication of CN117784151A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a robot positioning method, a robot positioning device, electronic equipment and a storage medium, and relates to the technical field of computers, wherein the method comprises the following steps: acquiring a warehouse scene image acquired by a vision sensor at the current moment and point cloud data acquired by a laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by a laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data. The positioning of the robot in a complex warehouse scene is realized, and the positioning precision of the robot is improved.

Description

Robot positioning method, apparatus, electronic device and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a robot positioning method, a robot positioning device, an electronic device, and a storage medium.
Background
Warehousing is an important component of modern logistics and plays a vital role in logistics systems, and goods inventory (stocktaking) is an important task in warehousing logistics. For a long time, conventional stocktaking methods have suffered from low efficiency, poor timeliness, high potential safety hazards and the like. With the rapid development of automation technology, intelligent positioning and navigation devices are needed to replace manual stocktaking in order to improve warehouse operation efficiency. At present, in the field of positioning and navigation, conventional positioning methods based on an inertial sensor and the global positioning system (Global Positioning System, GPS) can hardly meet actual requirements: GPS positioning accuracy is low where indoor signals are weak, and the accumulated error of the inertial sensor degrades positioning accuracy after long periods of operation.
At present, positioning and navigation solutions for logistics warehousing mainly include magnetic stripe navigation, two-dimensional code navigation, pure visual navigation and pure laser navigation. In magnetic stripe navigation, the magnetic stripe breaks easily and requires regular maintenance, the stripe must be re-laid whenever the path changes, and an automated guided vehicle (Automated Guided Vehicle, AGV) can only follow the magnetic stripe, so intelligent obstacle avoidance or real-time task changes cannot be achieved through the control system. In two-dimensional code navigation, the path requires regular maintenance, the two-dimensional codes must be replaced frequently if the site is complex, the precision and service-life requirements on the gyroscope are strict, there are certain requirements on the flatness of the site, and the cost is high. In visual navigation, the vision sensor cannot directly measure distance information; even with a binocular camera, depth estimation accuracy is low, and modeling a complex environment is difficult. In laser navigation, the system is easily limited by the radar detection range, places higher requirements on the mounting structure, and has relatively high manufacturing cost and price.
Therefore, in a complex environment, how to realize accurate positioning of the intelligent positioning navigation device is a technical problem to be solved.
Disclosure of Invention
The invention provides a robot positioning method, a robot positioning device, electronic equipment and a storage medium, which are used for solving the problem of how to realize accurate positioning of intelligent positioning navigation equipment.
The invention provides a robot positioning method, which is applied to a robot, wherein a vision sensor and a laser radar are arranged on the robot, and the robot positioning method comprises the following steps:
acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar;
calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data;
based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene;
and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
According to the robot positioning method provided by the invention, based on the calibrated warehouse scene image, the three-dimensional space position of goods in the warehouse scene image is determined, and the method comprises the following steps:
extracting characteristic points of the calibrated warehouse scene image to obtain at least one image characteristic point;
and determining the three-dimensional space position of the goods in the warehouse scene image based on each image characteristic point at the current moment and at least one image characteristic point at the last moment.
According to the robot positioning method provided by the invention, the three-dimensional space position of goods in the warehouse scene image is determined based on each image characteristic point at the current moment and at least one image characteristic point at the last moment, and the method comprises the following steps:
matching each image characteristic point at the current moment with each image characteristic point at the previous moment to obtain a corresponding relation between each image characteristic point;
based on the corresponding relation and the triangulation method, obtaining three-dimensional space positions of the image feature points;
and determining the three-dimensional space position of the goods in the warehouse scene image based on the three-dimensional space position of each image characteristic point.
According to the robot positioning method provided by the invention, based on the calibrated point cloud data, the three-dimensional map of the warehouse scene is determined, and the method comprises the following steps:
extracting characteristic points of the calibrated point cloud data to obtain at least one laser characteristic point;
and determining the three-dimensional map of the warehouse scene based on each laser characteristic point at the current moment and at least one laser characteristic point at the last moment.
According to the robot positioning method provided by the invention, the three-dimensional map of the warehouse scene is determined based on each laser characteristic point at the current moment and at least one laser characteristic point at the last moment, and the method comprises the following steps:
matching each laser characteristic point at the current moment with each laser characteristic point at the previous moment by adopting an unordered-point scale-invariant feature transform (SIFT) to obtain a corresponding relation among the laser characteristic points;
determining motion information of the laser radar based on the corresponding relation;
and converting the calibrated point cloud data through the motion information to obtain a three-dimensional map of the warehouse scene.
According to the robot positioning method provided by the invention, the positioning of the robot based on the three-dimensional space position of goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data comprises the following steps:
determining a target space position of the goods in the warehouse scene image and a target map of the warehouse scene based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, and the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene at the historical moment;
matching each laser characteristic point corresponding to the point cloud data with the target map, and determining the matching position and matching attitude of the robot;
and searching the position of the target map by combining the warehouse scene image and the target space position based on the matching position and attitude of the robot, and determining the position and the orientation of the robot.
The invention also provides a robot positioning device applied to a robot, wherein a vision sensor and a laser radar are arranged on the robot, and the robot positioning device comprises:
the acquisition module is used for acquiring the warehouse scene image acquired by the vision sensor and the point cloud data acquired by the laser radar at the current moment; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar;
the calibration module is used for calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data;
the determining module is used for respectively determining the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene based on the calibrated warehouse scene image and the calibrated point cloud data;
and the positioning module is used for positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the robot positioning method as described above when executing the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a robot positioning method as described in any of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements a robot positioning method as described in any of the above.
According to the robot positioning method, the robot positioning device, the electronic equipment and the storage medium, the vision sensor and the laser radar installed on the robot are used for respectively acquiring the warehouse scene image acquired by the vision sensor at the current moment and the point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by a laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data, so that the robot is positioned in the complex warehouse scene, and the positioning precision of the robot is improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a robot positioning method provided by the invention;
FIG. 2 is a second flow chart of the robot positioning method according to the present invention;
FIG. 3 is a schematic view of a robot positioning device according to the present invention;
fig. 4 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
For a clearer understanding of the various embodiments of the present application, a description is first given of the relevant knowledge of the present application.
The vision sensor has the advantages of high resolution, low cost and the like, and is widely applied to fields such as target tracking and simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM). However, a vision sensor cannot directly measure distance information, which makes modeling of complex environments difficult. A laser radar, by contrast, can provide high-precision distance measurements, but its resolution is low and it cannot provide rich texture information.
Thus, by fusing the vision sensor and the lidar, the respective deficiencies can be made up. The invention provides a robot positioning method, which is characterized in that a vision sensor and a laser radar arranged on a robot are used for respectively acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by a laser radar; calibrating the warehouse scene image and the point cloud data to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data, so that the robot is positioned in the complex warehouse scene, and the positioning precision of the robot is improved.
The robot positioning method of the present invention is described below with reference to fig. 1-2.
FIG. 1 is a schematic flow chart of the robot positioning method provided by the present invention. As shown in FIG. 1, the method is applied to a robot on which a vision sensor and a laser radar are mounted, and includes steps 101-104; wherein,
step 101, acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; and the point cloud data represents distance information obtained by modeling the warehouse scene by the laser radar.
It should be noted that, the robot positioning method provided by the present invention is suitable for a scenario of intelligent warehouse, and the execution subject of the method may be a robot positioning device, for example, a robot, or a control module in the robot positioning device for executing the robot positioning method.
Specifically, the robot is provided with a vision sensor and a laser radar, and the robot is continuously moved, namely the vision sensor and the laser radar are continuously moved. The visual sensor can be a visual camera, and can acquire scene images of the warehouse, namely image data of the warehouse in real time; the laser radar can be a laser sensor, and can acquire point cloud data of a warehouse scene in real time, wherein the point cloud data represents distance information obtained by modeling the warehouse scene by the laser radar. The warehouse scene image acquired by the vision sensor and the point cloud data of the warehouse scene acquired by the laser radar can lay a foundation for the joint calibration of the follow-up data.
And 102, calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data.
Specifically, calibration of the warehouse scene image and the point cloud data includes intrinsic (internal) calibration and extrinsic (external) calibration. Intrinsic calibration measures and estimates the imaging model of the vision sensor (camera): by collecting images and position information from multiple angles or multiple positions, the intrinsic parameters of the camera imaging model are computed, mainly including focal length, distortion and camera center; that is, intrinsic calibration recovers the camera imaging model to measure the intrinsic parameters of the camera and the parameters relative to the application scene. Extrinsic calibration fixes the relative positions of the vision sensor (camera) and the laser radar; after the two sensors are assembled, a preset target calibration plate containing a plurality of calibration points is detected by both the vision sensor and the laser radar. The features of the target calibration plate are detected by each sensor respectively, and the relative position and attitude between the vision sensor and the laser radar, such as relative rotation, translation or pitch angle, are obtained through feature correspondence and optimization; that is, extrinsic calibration measures the relative position and attitude between the vision sensor and the laser radar.
By calibrating the warehouse scene image and the point cloud data respectively, the calibrated warehouse scene image and the calibrated point cloud data can be obtained, so that the calibrated warehouse scene image and the calibrated point cloud data are aligned, the corresponding relation between the calibrated warehouse scene image and the calibrated point cloud data can be established, and the calibrated warehouse scene image and the calibrated point cloud data can be measured and perceived under the same coordinate system.
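For illustration only, the following is a minimal sketch of how the intrinsic and extrinsic calibration described above could be implemented with OpenCV, assuming a planar checkerboard serves as the target calibration plate; how the board's calibration points are detected in the lidar point cloud is outside this sketch, and all parameter values are assumptions rather than values prescribed by the patent.

```python
import numpy as np
import cv2

def calibrate_intrinsics(images, pattern=(9, 6), square=0.05):
    """Estimate the camera matrix K and distortion coefficients from checkerboard views."""
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist

def calibrate_extrinsics(board_pts_lidar, board_pts_image, K, dist):
    """Recover the rotation and translation from the lidar frame to the camera frame,
    given the target calibration points expressed in the lidar frame (Nx3) and their
    detections in the image (Nx2)."""
    _, rvec, tvec = cv2.solvePnP(board_pts_lidar.astype(np.float32),
                                 board_pts_image.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```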
And step 103, respectively determining the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene based on the calibrated warehouse scene image and the calibrated point cloud data.
Specifically, according to the calibrated warehouse scene image and the calibrated point cloud data, the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene can be respectively determined.
And 104, positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
Specifically, according to the determined three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, and the warehouse scene image and point cloud data acquired in real time, the position and attitude of the robot can be accurately determined.
According to the robot positioning method provided by the invention, the vision sensor and the laser radar arranged on the robot are used for respectively acquiring the warehouse scene image acquired by the vision sensor at the current moment and the point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by a laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data, so that the robot is positioned in the complex warehouse scene, and the positioning precision of the robot is improved.
Optionally, determining the three-dimensional spatial position of the cargo in the warehouse scene image based on the calibrated warehouse scene image includes:
(1) And extracting characteristic points of the calibrated warehouse scene image to obtain at least one image characteristic point.
Specifically, for each pixel point in the calibrated warehouse scene image, the gradient is calculated, and the regions with larger gradient changes in the calibrated warehouse scene image, that is, the regions of goods in the warehouse scene image, can be extracted from the gradient differences between adjacent pixel points. These regions with large gradient changes are taken as image characteristic points, so that at least one image characteristic point is obtained; an illustrative code sketch of this extraction is given below.
(2) And determining the three-dimensional space position of the goods in the warehouse scene image based on each image characteristic point at the current moment and at least one image characteristic point at the last moment.
Specifically, feature point extraction is performed on the calibrated warehouse scene image at the previous moment, at least one image feature point at the previous moment can be obtained, and then the three-dimensional space position of goods in the warehouse scene image is determined according to each image feature point at the current moment and at least one image feature point at the previous moment.
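As a hedged illustration of the gradient-based feature extraction in step (1) above, the sketch below selects the pixels with the strongest gradient responses as image characteristic points; the Sobel kernel size and the number of retained points are assumptions for illustration.

```python
import numpy as np
import cv2

def extract_image_features(gray, max_points=500):
    """Return pixel coordinates (x, y) of high-gradient regions in a calibrated grayscale image."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    # Keep the strongest gradient responses as candidate feature points.
    flat = np.argsort(mag.ravel())[::-1][:max_points]
    ys, xs = np.unravel_index(flat, mag.shape)
    return np.stack([xs, ys], axis=1)
```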
Optionally, the determining the three-dimensional spatial position of the cargo in the warehouse scene image based on each image feature point at the current moment and at least one image feature point at the last moment includes:
matching each image characteristic point at the current moment with each image characteristic point at the previous moment to obtain a corresponding relation between each image characteristic point; based on the corresponding relation and the triangulation method, obtaining three-dimensional space positions of the image feature points; and determining the three-dimensional space position of the goods in the warehouse scene image based on the three-dimensional space position of each image characteristic point.
Specifically, after the image characteristic points of the calibrated warehouse scene image at the current moment and those at the previous moment have been extracted, the characteristic points of the two adjacent calibrated warehouse scene images can be matched to obtain the corresponding relation between characteristic points of adjacent warehouse scene images, that is, which characteristic point in the previous image each characteristic point in the current image corresponds to. Based on this corresponding relation, the three-dimensional space position of each image characteristic point at the current moment is calculated by triangulation, and the three-dimensional space position of the goods in the warehouse scene image is then determined from the three-dimensional space positions of the image characteristic points.
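A minimal sketch of this matching-and-triangulation step is given below, assuming ORB descriptors, brute-force matching and an essential-matrix pose recovery; the patent does not prescribe these particular choices, and with a monocular camera the recovered translation (and hence the triangulated points) is only known up to scale.

```python
import numpy as np
import cv2

def triangulate_between_frames(img_prev, img_curr, K):
    """Match features between two consecutive calibrated frames and triangulate them to 3D."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # previous frame taken as the reference
    P2 = K @ np.hstack([R, t])                           # current frame, translation up to scale
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T                      # Nx3 points in previous-frame coordinates
```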
Optionally, determining the three-dimensional map of the warehouse scene based on the calibrated point cloud data includes:
(1) And extracting characteristic points from the calibrated point cloud data to obtain at least one laser characteristic point.
Specifically, during movement the laser radar continuously collects surrounding point cloud data, the collected point cloud data are calibrated, and characteristic points are then extracted from the calibrated point cloud data: the gradient of each point is calculated, and the regions with larger gradient changes in the calibrated point cloud data, that is, the regions of goods in the warehouse scene, can be extracted from the gradient differences between adjacent points. These regions with larger gradient changes are taken as laser characteristic points, so that at least one laser characteristic point is obtained; an illustrative code sketch of this selection is given below.
(2) And determining the three-dimensional map of the warehouse scene based on each laser characteristic point at the current moment and at least one laser characteristic point at the last moment.
Specifically, according to each laser characteristic point at the current moment and at least one laser characteristic point at the last moment, a three-dimensional map of the warehouse scene can be further determined, namely, real-time three-dimensional modeling is performed on the warehouse scene.
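The sketch below is one hedged way to realise the laser characteristic point selection in step (1) above: the local geometric variation around each point stands in for the "gradient" in the text, and the points with the largest variation are kept. The neighbourhood size and keep ratio are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_laser_features(points, k=10, keep_ratio=0.1):
    """points: (N, 3) calibrated point cloud; returns indices of high-variation points."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # the first neighbour is the point itself
    neighbours = points[idx[:, 1:]]           # (N, k, 3)
    variation = np.linalg.norm(neighbours - points[:, None, :], axis=2).std(axis=1)
    n_keep = max(1, int(len(points) * keep_ratio))
    return np.argsort(variation)[::-1][:n_keep]
```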
Optionally, determining the three-dimensional map of the warehouse scene based on each of the laser feature points at the current time and at least one laser feature point at the previous time includes:
matching each laser characteristic point at the current moment with each laser characteristic point at the previous moment by adopting an unordered-point scale-invariant feature transform (Scale-Invariant Feature Transform, SIFT) to obtain a corresponding relation among the laser characteristic points; determining motion information of the laser radar based on the corresponding relation; and converting the calibrated point cloud data through the motion information to obtain a three-dimensional map of the warehouse scene.
Specifically, each laser characteristic point at the current moment is matched with each laser characteristic point at the previous moment by the unordered-point SIFT algorithm, and point cloud registration between the laser characteristic points at the current moment and those at the previous moment is performed with an iterative closest point (Iterative Closest Point, ICP) algorithm, so as to obtain the corresponding relation among the laser characteristic points. The motion information of the laser radar is then determined from this corresponding relation, that is, from the correspondence between the positions and orientations of the laser characteristic points at the current moment and at the previous moment; the motion information includes a rotation angle and translation information. Finally, the calibrated point cloud data are converted through the motion information, namely transformed into a global three-dimensional model (world coordinate system) through the rotation angle and translation information, so that the three-dimensional map of the warehouse scene is obtained and the three-dimensional reconstruction of the warehouse scene during motion is completed.
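The following is a hedged, self-contained sketch of the ICP-style motion estimation and map update described above: correspondences are nearest neighbours, the incremental motion is solved with an SVD (Kabsch) step, and the calibrated cloud is then mapped into the world frame with an externally accumulated global pose. A fixed iteration count replaces a convergence test, and none of this is claimed to be the patent's exact algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_motion_icp(curr_pts, prev_pts, iters=20):
    """Estimate the rigid motion (R, t) mapping the current laser features onto the previous ones."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(prev_pts)
    src = curr_pts.copy()
    for _ in range(iters):
        _, nn = tree.query(src)                    # nearest-neighbour correspondences
        dst = prev_pts[nn]
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:              # guard against a reflection
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step     # accumulate the incremental motion
    return R, t

def to_world(cloud, R_world, t_world):
    """Transform a calibrated cloud into the global (world) map frame using the accumulated pose."""
    return cloud @ R_world.T + t_world
```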
Optionally, the specific implementation manner of step 104 includes:
1) And determining the target space position of the goods in the warehouse scene image and the target map of the warehouse scene based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, and the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene at the historical moment.
Specifically, based on the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene at the current moment, together with the three-dimensional space position of the goods and the three-dimensional map of the warehouse scene at historical moments, a bundle adjustment (Bundle Adjustment, BA) algorithm is adopted to determine the target space position of the goods in the warehouse scene image and the target map of the warehouse scene. That is, the three-dimensional space position of the goods at the current moment is averaged with its values at historical moments to accurately obtain the target space position of the goods, and the three-dimensional map of the warehouse scene at the current moment is averaged with the maps at historical moments to accurately obtain the target map of the warehouse scene.
2) And matching the characteristic points corresponding to the point cloud data with the target map, and determining the matching position and matching attitude of the robot.
Specifically, feature extraction is performed on the point cloud data acquired by the laser radar in real time to obtain the feature points corresponding to the point cloud data, that is, the environmental features of the warehouse scene are extracted in real time. The feature points corresponding to the point cloud data are then matched against the target map by a global search for matching points, and the robot position and attitude corresponding to the best matching points are taken as the optimal matching position and attitude; a brute-force illustration of this global search is sketched below.
3) And searching the position of the target map by combining the warehouse scene image and the target space position based on the matching position and attitude of the robot, and determining the position and the orientation of the robot.
Specifically, based on the matching position and attitude of the robot, and combining the warehouse scene image with the target space position of the goods in the warehouse scene image, the position in the target map is searched by a graph optimization method that minimizes the observation error, and the position and orientation of the robot are finally determined, so that a more accurate positioning result of the robot is obtained.
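For illustration, the brute-force sketch below scores candidate planar poses (yaw plus translation) by how well the current laser feature points align with the target map and returns the best-scoring pose as the matching position and attitude; the candidate grids and the mean-distance score are assumptions, and a practical system would use a coarse-to-fine or scan-matching strategy followed by the graph optimization described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def global_match(scan_xy, map_xy, xs, ys, yaws):
    """scan_xy, map_xy: (N, 2) planar feature points; xs, ys, yaws: candidate pose grids."""
    tree = cKDTree(map_xy)
    best, best_score = None, np.inf
    for yaw in yaws:
        c, s = np.cos(yaw), np.sin(yaw)
        rotated = scan_xy @ np.array([[c, -s], [s, c]]).T
        for x in xs:
            for y in ys:
                d, _ = tree.query(rotated + np.array([x, y]))
                score = np.mean(d)                # mean distance to the nearest map point
                if score < best_score:
                    best, best_score = (x, y, yaw), score
    return best                                   # (x, y, yaw) of the best-matching pose
```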
FIG. 2 is a second flow chart of the robot positioning method according to the present invention, as shown in FIG. 2, the method includes steps 201-212; wherein,
step 201, acquiring a warehouse scene image acquired by a vision sensor at the current moment and point cloud data acquired by a laser radar; the point cloud data represents distance information obtained by modeling a warehouse scene by the laser radar.
And 202, calibrating the warehouse scene image and the point cloud data respectively to obtain the calibrated warehouse scene image and the calibrated point cloud data.
And 203, extracting characteristic points of the calibrated warehouse scene image to obtain at least one image characteristic point.
And 204, matching each image characteristic point at the current moment with each image characteristic point at the previous moment to obtain a corresponding relation among the image characteristic points.
Step 205, obtaining the three-dimensional space position of each image feature point based on the corresponding relation and the triangulation method.
And 206, extracting characteristic points of the calibrated point cloud data to obtain at least one laser characteristic point.
Step 207, matching each laser characteristic point at the current moment with each laser characteristic point at the previous moment by using the unordered-point SIFT to obtain the corresponding relation between the laser characteristic points.
Step 208, determining the motion information of the laser radar based on the correspondence.
And step 209, converting the calibrated point cloud data through motion information to obtain a three-dimensional map of the warehouse scene.
Step 210, determining a target spatial position of the goods in the warehouse scene image and a target map of the warehouse scene based on the three-dimensional spatial position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, and the three-dimensional spatial position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene at the historical moment.
Step 211, matching each laser characteristic point corresponding to the point cloud data with the target map, and determining the matching position and matching attitude of the robot.
Step 212, searching the position of the target map by combining the warehouse scene image and the target space position based on the matching position and attitude of the robot, and determining the position and the orientation of the robot.
According to the robot positioning method provided by the invention, the high-precision robot positioning navigation can be realized through the warehouse scene image acquired by the vision sensor and the point cloud data acquired by the laser radar, and the positioning navigation algorithm based on the vision sensor and the laser radar can not only improve the inventory efficiency of the logistics warehouse, but also has remarkable advantages in the aspects of precision performance, application range, cost and the like. In addition, the robot positioning navigation algorithm based on the vision sensor and the laser radar can be applied to the fields of intelligent storage robot navigation, automatic driving and the like, and can also be applied to the fields of building mapping, smart cities and the like, so that the robot positioning navigation algorithm based on the vision sensor and the laser radar is widely applied.
The robot positioning device provided by the invention is described below, and the robot positioning device described below and the robot positioning method described above can be referred to correspondingly.
Fig. 3 is a schematic structural diagram of a robot positioning device 300 according to the present invention, and as shown in fig. 3, the robot positioning device 300 is applied to a robot, on which a vision sensor and a laser radar are mounted, and the robot positioning device 300 includes: an acquisition module 301, a calibration module 302, a determination module 303 and a positioning module 304; wherein,
the acquisition module 301 is configured to acquire a warehouse scene image acquired by the vision sensor and point cloud data acquired by the laser radar at a current time; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar;
the calibration module 302 is configured to calibrate the warehouse scene image and the point cloud data respectively, so as to obtain a calibrated warehouse scene image and calibrated point cloud data;
a determining module 303, configured to determine a three-dimensional spatial position of a cargo in the warehouse scene image and a three-dimensional map of the warehouse scene based on the calibrated warehouse scene image and the calibrated point cloud data, respectively;
and the positioning module 304 is configured to position the robot based on the three-dimensional spatial position of the cargo in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
According to the robot positioning device provided by the invention, the vision sensor and the laser radar arranged on the robot are used for respectively acquiring the warehouse scene image acquired by the vision sensor at the current moment and the point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by a laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data, so that the robot is positioned in the complex warehouse scene, and the positioning precision of the robot is improved.
Optionally, the determining module 303 is specifically configured to:
extracting characteristic points of the calibrated warehouse scene image to obtain at least one image characteristic point;
and determining the three-dimensional space position of the goods in the warehouse scene image based on each image characteristic point at the current moment and at least one image characteristic point at the last moment.
Optionally, the determining module 303 is further configured to:
matching each image characteristic point at the current moment with each image characteristic point at the previous moment to obtain a corresponding relation between each image characteristic point;
based on the corresponding relation and the triangulation method, obtaining three-dimensional space positions of the image feature points;
and determining the three-dimensional space position of the goods in the warehouse scene image based on the three-dimensional space position of each image characteristic point.
Optionally, the determining module 303 is further configured to:
extracting characteristic points of the calibrated point cloud data to obtain at least one laser characteristic point;
and determining the three-dimensional map of the warehouse scene based on each laser characteristic point at the current moment and at least one laser characteristic point at the last moment.
Optionally, the determining module 303 is further configured to:
matching each laser characteristic point at the current moment with each laser characteristic point at the previous moment by adopting an unordered-point scale-invariant feature transform (SIFT) to obtain a corresponding relation among the laser characteristic points;
determining motion information of the laser radar based on the corresponding relation;
and converting the calibrated point cloud data through the motion information to obtain a three-dimensional map of the warehouse scene.
Optionally, the positioning module 304 is specifically configured to:
determining a target space position of the goods in the warehouse scene image and a target map of the warehouse scene based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, and the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene at the historical moment;
matching each laser characteristic point corresponding to the point cloud data with the target map, and determining the matching position and matching attitude of the robot;
and searching the position of the target map by combining the warehouse scene image and the target space position based on the matching position and attitude of the robot, and determining the position and the orientation of the robot.
Fig. 4 is a schematic physical structure of an electronic device according to the present invention, as shown in fig. 4, the electronic device 400 may include: processor 410, communication interface (Communications Interface) 420, memory 430 and communication bus 440, wherein processor 410, communication interface 420 and memory 430 communicate with each other via communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a robot positioning method comprising: acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the robot positioning method provided by the above methods, the method comprising: acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the robot positioning method provided by the above methods, the method comprising: acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar; calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data; based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene; and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A robot positioning method, applied to a robot on which a vision sensor and a laser radar are mounted, comprising:
acquiring a warehouse scene image acquired by the vision sensor at the current moment and point cloud data acquired by the laser radar; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar;
calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data;
based on the calibrated warehouse scene image and the calibrated point cloud data, respectively determining the three-dimensional space position of goods in the warehouse scene image and the three-dimensional map of the warehouse scene;
and positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
2. The robotic positioning method of claim 1, wherein determining a three-dimensional spatial location of cargo in the warehouse scene image based on the calibrated warehouse scene image comprises:
extracting characteristic points of the calibrated warehouse scene image to obtain at least one image characteristic point;
and determining the three-dimensional space position of the goods in the warehouse scene image based on each image characteristic point at the current moment and at least one image characteristic point at the last moment.
3. The robot positioning method according to claim 2, wherein the determining the three-dimensional spatial position of the cargo in the warehouse scene image based on each of the image feature points at the present time and at least one image feature point at the previous time comprises:
matching each image characteristic point at the current moment with each image characteristic point at the previous moment to obtain a corresponding relation between each image characteristic point;
based on the corresponding relation and the triangulation method, obtaining three-dimensional space positions of the image feature points;
and determining the three-dimensional space position of the goods in the warehouse scene image based on the three-dimensional space position of each image characteristic point.
4. The robotic positioning method of claim 1, wherein determining a three-dimensional map of the warehouse scene based on the calibrated point cloud data comprises:
extracting characteristic points of the calibrated point cloud data to obtain at least one laser characteristic point;
and determining the three-dimensional map of the warehouse scene based on each laser characteristic point at the current moment and at least one laser characteristic point at the last moment.
5. The robot positioning method of claim 4, wherein the determining the three-dimensional map of the warehouse scene based on each of the laser feature points at the current time and at least one laser feature point at a previous time comprises:
matching each laser characteristic point at the current moment with each laser characteristic point at the previous moment by adopting an unordered-point scale-invariant feature transform (SIFT) to obtain a corresponding relation among the laser characteristic points;
determining motion information of the laser radar based on the corresponding relation;
and converting the calibrated point cloud data through the motion information to obtain a three-dimensional map of the warehouse scene.
6. The robot positioning method according to claim 4 or 5, wherein the positioning the robot based on the three-dimensional spatial position of the cargo in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image, and the point cloud data, comprises:
determining a target space position of the goods in the warehouse scene image and a target map of the warehouse scene based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, and the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene at the historical moment;
matching each laser characteristic point corresponding to the point cloud data with the target map, and determining the matching position and matching attitude of the robot;
and searching the position of the target map by combining the warehouse scene image and the target space position based on the matching position and attitude of the robot, and determining the position and the orientation of the robot.
7. A robotic positioning device for use with a robot having a vision sensor and a lidar mounted thereon, comprising:
the acquisition module is used for acquiring the warehouse scene image acquired by the vision sensor and the point cloud data acquired by the laser radar at the current moment; the point cloud data represent distance information obtained by modeling a warehouse scene by the laser radar;
the calibration module is used for calibrating the warehouse scene image and the point cloud data respectively to obtain a calibrated warehouse scene image and calibrated point cloud data;
the determining module is used for respectively determining the three-dimensional space position of the goods in the warehouse scene image and the three-dimensional map of the warehouse scene based on the calibrated warehouse scene image and the calibrated point cloud data;
and the positioning module is used for positioning the robot based on the three-dimensional space position of the goods in the warehouse scene image, the three-dimensional map of the warehouse scene, the warehouse scene image and the point cloud data.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the robot positioning method according to any of claims 1 to 6 when executing the program.
9. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the robot positioning method according to any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the robot positioning method according to any of claims 1 to 6.
CN202311459775.0A (priority date 2023-11-03, filing date 2023-11-03) Robot positioning method, apparatus, electronic device and storage medium. Status: Pending. Published as CN117784151A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311459775.0A CN117784151A (en) 2023-11-03 2023-11-03 Robot positioning method, apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311459775.0A CN117784151A (en) 2023-11-03 2023-11-03 Robot positioning method, apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN117784151A (en) 2024-03-29

Family

ID=90393394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311459775.0A Pending CN117784151A (en) 2023-11-03 2023-11-03 Robot positioning method, apparatus, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117784151A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination