CN118089705A - Map updating method, map updating device, computer equipment and storage medium - Google Patents


Info

Publication number
CN118089705A
CN118089705A (Application No. CN202410510963.XA)
Authority
CN
China
Prior art keywords
data
pose
robot
target
environment sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410510963.XA
Other languages
Chinese (zh)
Inventor
唐诗然
刘勇
武金龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202410510963.XA priority Critical patent/CN118089705A/en
Publication of CN118089705A publication Critical patent/CN118089705A/en
Pending legal-status Critical Current


Landscapes

  • Manipulator (AREA)

Abstract

The present application relates to a map updating method, apparatus, computer device, storage medium and computer program product. The method comprises: acquiring an environment sensing data set collected by a robot in a target area through an environment sensing sensor; sequentially taking each item of environment sensing data in the data set as target environment sensing data; based on the data receiving time corresponding to the target environment sensing data, acquiring from a pose buffer corresponding to the environment sensing sensor the robot pose whose timestamp matches that receiving time, as the target robot pose corresponding to the target environment sensing data; and updating the map corresponding to the target area based on each item of environment sensing data and its corresponding target robot pose. This method improves the accuracy of map updating.

Description

Map updating method, map updating device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to a map updating method, apparatus, computer device, storage medium, and computer program product.
Background
A robot navigation system depends on several types of sensors; the sensors carried by a robot give it accurate and reliable environment sensing capability. Obstacle information in the environment is determined from sensor data acquired in real time and the robot pose corresponding to that sensor data, and the existing map is then corrected and updated to reflect changes in the environment.
In conventional map updating methods, however, the sensor data acquired by the robot and the corresponding robot pose are poorly synchronized, which results in low map updating accuracy.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a map updating method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the accuracy of map updating.
The application provides a map updating method. The method comprises the following steps:
acquiring an environment sensing data set collected by a robot in a target area through an environment sensing sensor;
sequentially taking each item of environment sensing data in the environment sensing data set as target environment sensing data;
based on the data receiving time corresponding to the target environment sensing data, acquiring from a pose buffer corresponding to the environment sensing sensor the robot pose whose timestamp matches that data receiving time, as the target robot pose corresponding to the target environment sensing data, wherein the pose buffer contains robot poses generated from the environment sensing data collected by the environment sensing sensor, and the timestamp of each robot pose in the pose buffer is the data receiving time of the corresponding environment sensing data; and
updating the map corresponding to the target area based on each item of environment sensing data and the target robot pose corresponding to it.
The application also provides a map updating device. The device comprises:
a data acquisition module, configured to acquire an environment sensing data set collected by the robot in a target area through an environment sensing sensor;
a target data determining module, configured to sequentially take each item of environment sensing data in the environment sensing data set as target environment sensing data;
a pose determining module, configured to acquire, from a pose buffer corresponding to the environment sensing sensor and based on the data receiving time corresponding to the target environment sensing data, the robot pose whose timestamp matches that data receiving time, as the target robot pose corresponding to the target environment sensing data, wherein the pose buffer contains robot poses generated from the environment sensing data collected by the environment sensing sensor, and the timestamp of each robot pose in the pose buffer is the data receiving time of the corresponding environment sensing data; and
a map updating module, configured to update the map corresponding to the target area based on each item of environment sensing data and the target robot pose corresponding to it.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the map updating method described above when the processor executes the computer program.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the map updating method described above.
A computer program product comprising a computer program which, when executed by a processor, implements the steps of the map updating method described above.
With the map updating method, apparatus, computer device, storage medium and computer program product above, a pose buffer is created that holds the robot poses generated from the environment sensing data, and the data receiving time of each item of environment sensing data is used as the timestamp of the corresponding robot pose, so that the environment sensing data and the robot pose generated from it carry the same timestamp. When the map is updated based on target environment sensing data from the environment sensing data set, the target robot pose corresponding to that data can therefore be looked up quickly and accurately, by its data receiving time, in the pose buffer storing the robot poses, which improves the synchronization between environment sensing data and robot pose. Because the target robot pose obtained for the target environment sensing data is more accurate, updating the map of the target area based on the target environment sensing data and its corresponding target robot pose improves the accuracy of map updating.
Drawings
FIG. 1 is an application environment diagram of a map updating method in one embodiment;
FIG. 2 is a flow chart of a map updating method according to an embodiment;
FIG. 3 is a flow chart of a map updating method in one embodiment;
FIG. 4 is a flowchart of a map updating method according to another embodiment;
FIG. 5 is a flowchart of a map updating method according to another embodiment;
FIG. 6 is a block diagram of a map updating apparatus in one embodiment;
FIG. 7 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The map updating method provided by the embodiments of the application can be applied in the application environment shown in FIG. 1. The method may be performed by the terminal 102 or the server 104, or jointly by the terminal 102 and the server 104, where the terminal 102 communicates with the server 104 through a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The terminal 102 may be, but is not limited to, a robot, personal computer, notebook computer, smart phone, tablet computer, Internet of Things device or portable wearable device; Internet of Things devices include smart televisions, smart vehicle devices and the like, and portable wearable devices include smart watches, smart bracelets, headsets and the like. The robot may be any robot that needs to move autonomously, such as an industrial robot (e.g., a transfer, palletizing or painting robot), a service robot (e.g., a cleaning, delivery or mowing robot) or a specialty robot (e.g., a firefighting, underwater or security robot). The terminal 102 acquires the environment sensing data collected through the environment sensing sensor in the target area, and sequentially takes each item of environment sensing data as target environment sensing data. Based on the data receiving time corresponding to the target environment sensing data, the terminal 102 acquires, from the pose buffer corresponding to the environment sensing sensor, the robot pose whose timestamp matches that data receiving time, as the target robot pose corresponding to the target environment sensing data.
The pose buffer contains robot poses generated from the environment sensing data collected by the environment sensing sensor, and the timestamp of each robot pose in the pose buffer is the data receiving time of the corresponding environment sensing data. The terminal 102 updates the map corresponding to the target area based on each item of environment sensing data and the target robot pose corresponding to it.
The terminal acquires the environment sensing data continuously collected by the environment sensing sensor at its data collection frequency, and processes each item of environment sensing data to obtain the robot pose at the moment that item was collected, so that a robot pose corresponding to each item of environment sensing data is continuously generated. Because generating a robot pose from environment sensing data takes a certain amount of time, the collection frequency of the environment sensing data and the generation frequency of the robot poses are usually not synchronized: the collection frequency of the environment sensing data is higher than the generation frequency of the robot poses, that is, the robot pose generated in real time lags behind the environment sensing data collected in real time. Conventional map updating methods ignore this asynchrony and, upon receiving an item of environment sensing data from the environment sensing sensor, directly use the most recently generated robot pose as the pose corresponding to that data, so the determined robot pose is inaccurate. Consequently, when the map is updated based on the environment sensing data and that robot pose, the asynchrony between the two lowers the accuracy of the map update.
In this application, the environment sensing data and the robot pose generated from it are marked with the same timestamp. According to the data receiving time corresponding to an item of environment sensing data, the target robot pose corresponding to it can be looked up quickly and accurately in the pose buffer storing the timestamped robot poses, which improves the synchronization between environment sensing data and robot pose, and thus the accuracy of map updating.
In one embodiment, as shown in FIG. 2, a map updating method is provided. Taking its application to a robot as an example, the method includes the following steps S202 to S208:
step S202, an environment sensing data set acquired by the robot in a target area through an environment sensing sensor is acquired.
The environment sensing sensor detects the distance and position of surrounding objects, assisting the robot in navigation, positioning and route planning. For example, the environment sensing sensor may be a lidar sensor, an ultrasonic sensor, a vision sensor, a depth camera sensor, a line laser sensor, a depth sensor or the like. The target area is the work area of the robot. The environment sensing data set is a set of multiple items of environment sensing data collected by the environment sensing sensor; for example, when the sensor is a lidar sensor the collected data is lidar point cloud data, and when it is a depth camera sensor the collected data is a depth image. The environment sensing data set contains the environment sensing data that has not yet been used to update the map corresponding to the target area; once an item of environment sensing data has been taken as target environment sensing data and the map has been updated according to it, that item is removed from the set.
Illustratively, the robot receives the environment sensing data continuously collected by the environment sensing sensor in the target area, obtaining the environment sensing data set. In practice, the robot travels in the target area, and during travel the environment sensing sensor mounted on the robot continuously collects environment sensing data, either at a fixed data collection frequency or whenever the robot reaches a preset data collection position in the target area. When an item of environment sensing data is received, the data receiving time corresponding to it is used as its timestamp.
Step S204, each item of environment sensing data in the environment sensing data set is sequentially taken as target environment sensing data.
The target environment sensing data refers to environment sensing data which is determined in the environment sensing data set and is used for updating a map corresponding to the target area.
In an exemplary embodiment, the robot takes the items of environment sensing data in the environment sensing data set as target environment sensing data in order of their data receiving times, that is, each time the item with the earliest data receiving time in the set is taken as the target environment sensing data.
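The earliest-first selection above can be sketched as a priority queue keyed on receive time (an illustrative sketch only; the frame identifiers and times are invented, and the patent does not prescribe a particular data structure):

```python
import heapq

# Each entry is (receive_time_sec, frame_id); the frame with the earliest
# receive time is always popped first and becomes the target frame.
frames = [
    (0.30, "scan_c"),
    (0.10, "scan_a"),
    (0.20, "scan_b"),
]

heapq.heapify(frames)            # min-heap keyed on receive time
order = []
while frames:
    _t, frame_id = heapq.heappop(frames)  # earliest receive time first
    order.append(frame_id)                # processed as target frame
```

A plain sorted list would also work; a heap keeps insertion cheap when new frames keep arriving while older ones are still being consumed.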
Step S206, acquiring a robot pose with a timestamp matched with the data receiving time as a target robot pose corresponding to the target environment sensing data from a pose buffer corresponding to the environment sensing sensor based on the data receiving time corresponding to the target environment sensing data; the pose buffer zone comprises a robot pose generated based on environment sensing data acquired by the environment sensing sensor, and a timestamp corresponding to the robot pose in the pose buffer zone is a data receiving time corresponding to the corresponding environment sensing data.
The pose buffer is a storage space for the robot poses generated from the environment sensing data; when the map is updated according to the target environment sensing data, the corresponding target robot pose can be found in the pose buffer. Because the storage space of the pose buffer is limited, when the buffer is full the robot pose stored earliest is evicted according to a first-in, first-out policy. The data receiving time corresponding to the target environment sensing data is the moment at which the robot receives the target environment sensing data collected by the environment sensing sensor. The robot pose is the position and attitude of the robot in three-dimensional space: the position describes where the robot is and can be represented by a three-dimensional vector, while the attitude describes the robot's orientation and can be represented by a rotation matrix. The target robot pose is the robot pose generated from the target environment sensing data.
For example, when the target environment sensing data has been determined in the environment sensing data set and the map needs to be updated according to it, the robot pose whose timestamp (i.e., data receiving time) is the same as the timestamp of the target environment sensing data is looked up in the pose buffer corresponding to the environment sensing sensor and used as the target robot pose corresponding to the target environment sensing data.
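The buffer semantics described above — fixed capacity, first-in-first-out eviction, lookup by timestamp — can be sketched as follows (a minimal illustration under assumed names, capacity, and tolerance; the patent does not specify an implementation):

```python
from collections import deque

class PoseBuffer:
    """Fixed-capacity FIFO buffer of (timestamp, pose) pairs.

    Each stored pose carries the receive time of the perception frame
    it was computed from; the oldest entry is evicted first when the
    buffer is full. Capacity and tolerance values are assumptions.
    """

    def __init__(self, capacity=100):
        # deque with maxlen drops the oldest entry automatically (FIFO)
        self._buf = deque(maxlen=capacity)

    def store(self, receive_time, pose):
        self._buf.append((receive_time, pose))

    def lookup(self, receive_time, tol=1e-6):
        # Find the pose whose timestamp matches the frame's receive time.
        for t, pose in self._buf:
            if abs(t - receive_time) <= tol:
                return pose
        return None  # no matching pose yet; caller may retry after a wait


buf = PoseBuffer(capacity=3)
buf.store(0.10, "pose_a")
buf.store(0.20, "pose_b")
buf.store(0.30, "pose_c")
buf.store(0.40, "pose_d")   # capacity 3: the 0.10 entry is evicted
```

Keying the lookup on the frame's receive time rather than the pose's generation time is what makes the match exact: both sides of the pairing carry the same stamp by construction.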
Step S208, updating the map corresponding to the target area based on the environment sensing data and the pose of the target robot corresponding to the environment sensing data.
The map corresponding to the target area is the map used when the robot performs tasks in the target area, and indicates the obstacle information in the area, such as furniture, walls and other objects. In practice, the map corresponding to the target area may be a cost map (costmap), an important tool in robot navigation that represents navigation information and obstacle information in the robot's working area. It is usually presented in two-dimensional or three-dimensional form and contains all the information the robot needs for obstacle avoidance, navigation and task execution. Specifically, the cost map identifies all obstacles in the target area, both dynamic and static, and this obstacle information guides the robot to navigate safely, avoid collisions and find a safe path. Each location in the cost map (represented as a pixel in the map) has a cost value indicating the cost of travelling from the start point to that location; the cost value is typically calculated from factors such as distance, direction, obstacle density and the robot's cost of movement. The cost map also contains path information, such as path track points, path lengths and path directions, which guides the robot along the safest and most efficient path.
After determining the target robot pose corresponding to the target environment sensing data, the robot projects the target environment sensing data onto the map corresponding to the target area according to the target robot pose, obtaining the updated map. Specifically, the obstacle information within the sensing range indicated by the target environment sensing data is determined from that data, the obstacle information is projected onto the corresponding positions on the map according to the target robot pose, and the cost values of the affected pixels in the map are updated. Because the target environment sensing data and its target robot pose carry the same timestamp, the queried target robot pose is exactly the pose generated from that data, which improves the synchronization between environment sensing data and robot pose and thus the accuracy of map updating. For each other item of environment sensing data in the set, the corresponding target robot pose is determined in the same way, and the map corresponding to the target area is updated based on each item of environment sensing data and its target robot pose.
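The projection step can be illustrated roughly as follows, assuming a planar pose (x, y, heading), a 2D cost grid, and invented resolution and cost constants (none of these specifics come from the patent):

```python
import math

RESOLUTION = 0.5        # metres per grid cell (assumed)
GRID_W, GRID_H = 10, 10
OBSTACLE_COST = 254     # "lethal" cost value (assumed convention)

def update_costmap(costmap, pose, points_robot_frame):
    """Mark obstacle points, observed in the robot frame, on the cost grid.

    `pose` is (x, y, theta) in map coordinates; each point is rigidly
    transformed from the robot frame into the map frame, then the
    corresponding cell's cost value is raised to the obstacle cost.
    """
    x, y, theta = pose
    for px, py in points_robot_frame:
        # rigid 2D transform: rotate by theta, then translate by (x, y)
        mx = x + px * math.cos(theta) - py * math.sin(theta)
        my = y + px * math.sin(theta) + py * math.cos(theta)
        col, row = int(mx / RESOLUTION), int(my / RESOLUTION)
        if 0 <= row < GRID_H and 0 <= col < GRID_W:
            costmap[row][col] = OBSTACLE_COST
    return costmap

grid = [[0] * GRID_W for _ in range(GRID_H)]
# robot at (1.0, 1.0) facing +x; one obstacle observed 1 m straight ahead
grid = update_costmap(grid, (1.0, 1.0, 0.0), [(1.0, 0.0)])
```

If the matched pose is stale, the same obstacle point lands in the wrong cell, which is exactly the inaccuracy the timestamp matching is meant to avoid.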
In the map updating method, the pose buffer zone is created, the pose buffer zone comprises the robot pose generated based on the environment sensing data, and the data receiving time corresponding to the environment sensing data is used as the time stamp corresponding to the robot pose, so that the environment sensing data and the robot pose generated based on the environment sensing data carry the same time stamp. Therefore, when the map is updated based on the target environment sensing data in the environment sensing data set, the target robot pose corresponding to the target environment sensing data can be quickly and accurately searched in the pose buffer zone storing the robot poses corresponding to the environment sensing data respectively according to the data receiving time corresponding to the environment sensing data, and therefore the synchronism between the environment sensing data and the robot poses is improved. Because the accuracy of the pose of the target robot corresponding to the acquired target environment sensing data is improved, the map corresponding to the target area is updated based on the target environment sensing data and the pose of the target robot corresponding to the target environment sensing data, and the accuracy of map updating can be improved.
In one embodiment, before the environment sensing data set collected by the robot through the environment sensing sensor in the target area is acquired, the map updating method further comprises:
receiving environment sensing data transmitted by the environment sensing sensor;
calculating the robot pose based on the environment sensing data, and taking the data receiving time corresponding to the environment sensing data as the timestamp corresponding to the robot pose; and
storing the robot pose into the pose buffer corresponding to the environment sensing sensor, continuing to acquire the next item of environment sensing data transmitted by the environment sensing sensor, and returning to the step of calculating the robot pose based on the environment sensing data, until an end condition is met.
The end condition can be set according to actual requirements; for example, it may be that the battery level of the robot falls below a preset value, or that the robot has finished updating the entire map corresponding to the target area. Because the sensing range of the environment sensing sensor is limited, only objects within a few metres or tens of metres around the robot can be sensed; that is, each frame of environment sensing data generally indicates object information for only part of the target area. During map updating, therefore, the robot must keep travelling in the target area and collect frames of environment sensing data at different positions until the collected frames cover the whole target area, completing the update of the entire map based on the frames collected at different positions.
Illustratively, the sensors carried by the robot may be divided into a first environment sensing sensor and a second environment sensing sensor, where the robot pose is generated based on sensor data collected by the first environment sensing sensor. Typically, the sensor with the shortest data collection period is used as the first environment sensing sensor for determining the robot pose, which improves the real-time performance of the determined pose; for example, a lidar sensor may serve as the first environment sensing sensor. Each time the robot receives first environment sensing data collected by the first environment sensing sensor, it calculates from that data the robot pose at the moment the data was collected, takes the data receiving time corresponding to the first environment sensing data as the timestamp of the robot pose, and writes the timestamped robot pose into the pose buffer corresponding to the sensor. Specifically, the first environment sensing data is preprocessed, feature points are extracted and matched against known feature points in the target area, and the robot pose in the global coordinate system corresponding to the target area is calculated from the positional relationship between the feature points and their matched known feature points. This helps the robot determine its position and orientation in the environment more accurately, and thus navigate and localize better. Robot poses with different timestamps are the robot poses at different moments.
The robot then continues to acquire the next item of first environment sensing data transmitted by the first environment sensing sensor and returns to the step of calculating, from the first environment sensing data, the robot pose at the moment that data was collected, until the end condition is met.
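The producer side described above — compute a pose for each received frame and stamp it with the frame's receive time — might be sketched like this (`compute_pose` is a made-up stub standing in for the feature-matching localization; the structure and field names are assumptions):

```python
def compute_pose(frame):
    # Placeholder: real code would preprocess `frame`, extract feature
    # points, and match them against known features in the target area
    # to solve for the pose in the global coordinate system.
    return ("pose_for", frame["id"])

def pose_producer(frames, pose_buffer):
    """For each received frame, compute a pose and stamp it with the
    frame's *receive* time, so data and pose share the same timestamp."""
    for frame in frames:
        pose = compute_pose(frame)
        pose_buffer.append((frame["recv_time"], pose))

buffer = []
pose_producer(
    [{"id": 1, "recv_time": 0.1}, {"id": 2, "recv_time": 0.2}],
    buffer,
)
```

The essential design choice is that the stamp is the receive time of the input frame, not the (later) moment the pose computation finishes — that is what makes the later exact-match lookup possible despite the computation lag.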
In the above embodiment, when environment sensing data transmitted by the environment sensing sensor is received, the corresponding robot pose is calculated from that data, the data receiving time corresponding to the data is taken as the timestamp of the pose, and the timestamped robot pose is stored in the pose buffer. The robot continuously updates the map corresponding to the target area based on each received frame of environment sensing data, and when it uses a frame for map updating it can quickly and accurately obtain the corresponding robot pose from the pose buffer by the frame's timestamp, improving the synchronization between environment sensing data and robot pose and hence the accuracy of map updating.
In one embodiment, the map updating method further comprises:
step S302, when a robot pose with a timestamp matched with the data receiving time corresponding to the target environment sensing data does not exist in the pose buffer area, waiting for a first preset duration, continuing to acquire the robot pose with the timestamp matched with the timestamp from the pose buffer area based on the data receiving time corresponding to the target environment sensing data, and when acquisition fails, taking the candidate robot pose corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data; candidate robot poses corresponding to the target environment perception data are robot poses with data generation time matched with data receiving time corresponding to the target environment perception data.
The first preset duration is how long the robot waits before again attempting to acquire the matching robot pose from the pose buffer when no robot pose matching the data receiving time corresponding to the target environment sensing data exists there; it is determined from the data collection period of the environment sensing data and is positively correlated with that period. The data generation time of a robot pose is the moment at which the pose is generated from the environment sensing data.
For example, when no matching robot pose exists in the pose buffer, the robot pose corresponding to the environment sensing data has either not yet been generated or not yet been stored in the buffer. The robot waits for the first preset duration and then again attempts to acquire, from the pose buffer, the robot pose whose timestamp equals the data receiving time corresponding to the environment sensing data, as the target robot pose. If, after waiting for the first preset duration, no matching robot pose exists in the pose buffer, then to preserve map updating efficiency the candidate robot pose corresponding to the target environment sensing data is used directly as the target robot pose. The candidate robot pose is the first robot pose generated after the robot received the target environment sensing data, that is, the robot pose whose data generation time is after, and closest to, the data receiving time corresponding to the target environment sensing data.
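The wait-and-fallback policy above can be sketched as follows (an illustrative simplification: the buffer is modeled as a static list of (stamp, generation_time, pose) entries and is not re-read after the wait, and all names and times are invented):

```python
def match_pose(entries, recv_time, wait_once, tol=1e-6):
    """Return the pose for a frame received at `recv_time`.

    `entries` holds (stamp, gen_time, pose): `stamp` is the receive time
    of the frame the pose was computed from; `gen_time` is when the pose
    was actually produced. In a real system the buffer would be re-read
    after waiting, since a matching pose may arrive in the meantime.
    """
    for attempt in range(2):                 # initial try + one retry
        for stamp, _gen, pose in entries:
            if abs(stamp - recv_time) <= tol:
                return pose                  # exact timestamp match
        if attempt == 0:
            wait_once()                      # wait the first preset duration
    # fallback "candidate" pose: the first pose generated after recv_time
    later = [(gen, pose) for _stamp, gen, pose in entries if gen > recv_time]
    return min(later)[1] if later else None


entries = [(0.10, 0.15, "pose_a"), (0.20, 0.26, "pose_b")]
waits = []
# exact stamp match at 0.20: returned immediately, no waiting
exact = match_pose(entries, 0.20, lambda: waits.append(1))
# no stamp matches 0.22: wait once, then fall back to the first pose
# generated after 0.22 (pose_b, generated at 0.26)
fallback = match_pose(entries, 0.22, lambda: waits.append(1))
```

Bounding the wait to a single fixed duration is what keeps the fallback from stalling the update pipeline when pose generation lags badly.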
In the above embodiment, when the target robot pose corresponding to the target environment sensing data cannot be acquired from the pose buffer, the robot pose corresponding to the environment sensing data has not yet been generated or not yet been stored in the buffer. Waiting for the first preset duration according to the waiting strategy for the environment sensing data and then acquiring the robot pose from the pose buffer again effectively avoids the desynchronization between environment sensing data and robot pose caused by delayed pose data; and when the re-acquisition fails, directly using the candidate robot pose as the target robot pose avoids the loss of system efficiency that excessive waiting would cause, ensuring map updating efficiency.
In a specific embodiment, as shown in fig. 3, the map updating method may include the following steps:
step S202, an environment sensing data set acquired by the robot in a target area through an environment sensing sensor is acquired.
Step S204, each environmental perception data in the environmental perception data set is sequentially used as target environmental perception data.
Step S206, acquiring a robot pose with a timestamp matched with the data receiving time as a target robot pose corresponding to the target environment sensing data from a pose buffer corresponding to the environment sensing sensor based on the data receiving time corresponding to the target environment sensing data; the pose buffer zone comprises a robot pose generated based on environment sensing data acquired by the environment sensing sensor, and a timestamp corresponding to the robot pose in the pose buffer zone is a data receiving time corresponding to the corresponding environment sensing data.
Step S302, when the pose buffer contains no robot pose whose timestamp matches the data receiving time corresponding to the target environment sensing data, waiting for a first preset duration and continuing to acquire, from the pose buffer, the robot pose whose timestamp matches the data receiving time, and, when the acquisition fails, taking the candidate robot pose corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data; the candidate robot pose corresponding to the target environment sensing data is the robot pose whose data generation time matches the data receiving time corresponding to the target environment sensing data.
Step S208, updating the map corresponding to the target area based on the environment sensing data and the pose of the target robot corresponding to the environment sensing data.
Through the above embodiments, it can be seen that, during map updating, the target robot pose corresponding to the environment sensing data may be determined according to step S206 and step S302. The method of step S206 and step S302 is better suited to cases where the time difference between the data receiving time and the data acquisition time corresponding to the environment sensing data is small, or where the environment sensing data is first environment sensing data used to calculate the robot pose. First environment sensing data refers to the data acquired by the first environment sensing sensor, i.e., the sensor used to determine the robot pose. For example, when the environment sensing sensor is a lidar sensor and the environment sensing data is lidar data, the target robot pose corresponding to the lidar data may be determined through step S206 and step S302.
In one embodiment, the map updating method further comprises:
Step S402, acquiring the robot pose with the timestamp matched with the data acquisition time from the pose buffer area based on the data acquisition time corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data.
In step S404, when the pose buffer contains no robot pose whose timestamp equals the data acquisition time corresponding to the target environment sensing data, acquiring from the pose buffer the forward robot pose and the backward robot pose whose timestamps bracket that data acquisition time.
And step S406, interpolating the forward robot pose and the backward robot pose to obtain the target robot pose corresponding to the target environment perception data.
The data acquisition time refers to the moment when the environment sensing sensor acquires the environment sensing data. The forward robot pose refers to the robot pose corresponding to the last environment sensing data acquired before the target environment sensing data. The backward robot pose refers to the robot pose corresponding to the first environment sensing data acquired after the target environment sensing data. The environment sensing data here refers to the data collected by the environment sensing sensor used to determine the robot pose. The target robot pose corresponding to the target environment sensing data is the robot pose obtained for that data by interpolation.
Illustratively, the robot acquires, from the pose buffer, the robot pose whose timestamp equals the data acquisition time corresponding to the target environment sensing data and uses it as the target robot pose. When no such robot pose exists in the pose buffer, the robot acquires, from the pose buffer, the forward robot pose and the backward robot pose matching the target environment sensing data, according to the data acquisition period and the data acquisition time corresponding to the target environment sensing data. That is, it acquires the robot pose corresponding to the last environment sensing data acquired before the target environment sensing data and the robot pose corresponding to the first environment sensing data acquired after it, namely the robot poses corresponding to the two adjacent frames of environment sensing data. The forward robot pose and the backward robot pose are then interpolated to obtain the target robot pose corresponding to the target environment sensing data.
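Steps S402 to S406 can be sketched as follows (a minimal sketch assuming a time-sorted buffer of (timestamp, pose) pairs; a scalar pose stands in for a full pose, which would be interpolated component-wise):

```python
from bisect import bisect_left

def pose_at(buffer, t_acq, tol=1e-6):
    """Exact timestamp match first; otherwise linear interpolation between
    the bracketing forward/backward poses, or None if no bracket exists."""
    stamps = [ts for ts, _ in buffer]
    i = bisect_left(stamps, t_acq)
    if i < len(stamps) and abs(stamps[i] - t_acq) <= tol:
        return buffer[i][1]                      # exact timestamp match
    if i == 0 or i == len(stamps):
        return None                              # no bracketing pair
    (t0, p0), (t1, p1) = buffer[i - 1], buffer[i]
    w = (t_acq - t0) / (t1 - t0)                 # weight grows toward p1
    return p0 * (1 - w) + p1 * w
```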
In a specific embodiment, as shown in fig. 4, the map updating method may include the following steps:
step S202, an environment sensing data set acquired by the robot in a target area through an environment sensing sensor is acquired.
Step S204, each environmental perception data in the environmental perception data set is sequentially used as target environmental perception data.
Step S402, acquiring the robot pose with the timestamp matched with the data acquisition time from the pose buffer area based on the data acquisition time corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data.
In step S404, when the pose buffer contains no robot pose whose timestamp equals the data acquisition time corresponding to the target environment sensing data, acquiring from the pose buffer the forward robot pose and the backward robot pose whose timestamps bracket that data acquisition time.
And step S406, interpolating the forward robot pose and the backward robot pose to obtain the target robot pose corresponding to the target environment perception data.
Step S208, updating the map corresponding to the target area based on the environment sensing data and the pose of the target robot corresponding to the environment sensing data.
Through the above embodiments, it can be seen that, during map updating, the target robot pose corresponding to the environment sensing data may be determined according to steps S402 to S406. The method of steps S402 to S406 is better suited to cases where the time difference between the data receiving time and the data acquisition time corresponding to the environment sensing data is large. For example, when the environment sensing sensor is a depth camera sensor and the environment sensing data is a depth image, the target robot pose corresponding to the depth image may be determined through steps S402 to S406.
In a specific embodiment, when the environment-aware sensor is a depth camera sensor and the environment-aware data is a depth image set, the map updating method further includes:
Acquiring a depth image set acquired by a robot in a target area through a depth camera sensor; the depth image set comprises a plurality of depth images taking image acquisition time as a time stamp;
sequentially taking each depth image in the depth image set as a target depth image;
acquiring the robot pose matched with the timestamp from the pose buffer based on the image acquisition time corresponding to the target depth image;
When the pose buffer zone does not have the robot pose with the timestamp matched with the image acquisition time corresponding to the target depth image, acquiring the forward robot pose and the backward robot pose with the timestamp matched with the target depth image from the pose buffer zone;
Interpolating the forward robot pose and the backward robot pose to obtain a target robot pose corresponding to the target depth image;
and updating the map corresponding to the target area based on each target depth image and the pose of the target robot corresponding to each target depth image.
The depth camera sensor is a sensor capable of acquiring color images and depth images of the environment. For example, the depth camera sensor may be an RGBD camera, where RGB denotes the three primary colors red, green and blue, and D denotes depth information. In addition to a camera's original function of collecting color images, an RGBD camera can measure distance, usually using infrared or laser scanning to measure the distance between an object and the camera. It obtains depth information within its sensing range and combines the collected color image with that depth information to obtain a depth image, in which the value of each pixel represents the distance between the corresponding object and the camera. The depth image set is a set of depth images acquired by the depth camera sensor and used for updating the map corresponding to the target area. The depth image set contains only depth images that have not yet been used to update the map; once a depth image has been taken as the target depth image and used to update the map corresponding to the target area, it is removed from the depth image set.
The target depth image is the depth image selected from the depth image set for updating the map corresponding to the target area. The forward robot pose refers to the robot pose corresponding to the last environment sensing data acquired before the robot acquires the target depth image. The backward robot pose refers to the robot pose corresponding to the first environment sensing data acquired after the robot acquires the target depth image. The target robot pose corresponding to the target depth image is the robot pose obtained for that depth image by interpolation.
For example, the robot may collect environment information through multiple sensors and jointly update the map corresponding to the target area based on the sensor data collected by each of them. The sensors may include lidar sensors, ultrasonic sensors, vision sensors, depth camera sensors, line laser sensors, depth sensors, and the like. The robot designates one of these sensors as the first environment sensing sensor for determining the robot pose and calculates the robot pose based on the sensor data (i.e., the first environment sensing data) it acquires. The remaining sensors serve as second environment sensing sensors, and the robot poses corresponding to the sensor data they collect (i.e., the second environment sensing data) can be determined from the robot poses corresponding to the first environment sensing data. The sensor with the shortest data acquisition period is generally chosen as the first environment sensing sensor, which improves the real-time quality of the determined robot pose.
For example, since the data acquisition period of a lidar sensor is usually shorter than that of sensors such as depth camera sensors and ultrasonic sensors, the lidar sensor may be used as the first environment sensing sensor for determining the robot pose: the robot pose corresponding to each frame of lidar data is calculated, and the robot poses corresponding to the sensor data acquired by the depth camera sensor and the ultrasonic sensor are determined based on those poses. For example, the robot may acquire environment sensing data and depth images within the target area through an environment sensing sensor and a depth camera sensor, and update the map corresponding to the target area based on the environment sensing data, the target robot poses corresponding to the environment sensing data, the target depth images, and the target robot poses corresponding to the target depth images. Updating the map cooperatively from the depth images and the environment sensing data yields more accurate and comprehensive environment information, and the two data sources can mutually calibrate each other to improve their respective measurement precision, thereby improving the accuracy and reliability of map updating.
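The sensor-role assignment described above can be sketched as follows (a sketch under the assumption that each sensor is described by a name and its acquisition period in seconds; the names and data layout are illustrative):

```python
def assign_sensor_roles(sensors):
    """Pick the sensor with the shortest data acquisition period as the
    first (pose-determining) environment sensing sensor; the rest are
    second environment sensing sensors."""
    primary = min(sensors, key=lambda s: s[1])
    secondary = [s for s in sensors if s is not primary]
    return primary, secondary
```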
The robot takes the depth camera sensor as a second environment sensing sensor and receives the depth images continuously collected by the depth camera sensor in the target area, obtaining a depth image set. In practice, as the robot travels through the target area, the depth camera sensor mounted on it continuously collects depth images, either at a fixed data acquisition frequency or whenever the robot reaches a preset data acquisition position in the target area. When a depth image is received, its image acquisition time is taken as its timestamp. Each depth image in the depth image set is taken as the target depth image in order of image acquisition time, i.e., the depth image with the earliest image acquisition time in the set is taken as the target depth image each time. The robot then acquires, from the pose buffer, the robot pose whose timestamp equals the image acquisition time corresponding to the target depth image and uses it as the target robot pose corresponding to the target depth image; this is the robot pose corresponding to the environment sensing data whose data receiving time equals that image acquisition time, i.e., the environment sensing data acquired simultaneously with the target depth image.
When no robot pose whose timestamp equals the image acquisition time corresponding to the target depth image exists in the pose buffer, the robot acquires, from the pose buffer, the forward robot pose and the backward robot pose matching the target depth image, according to the data acquisition period corresponding to the depth images and the image acquisition time corresponding to the target depth image, namely the robot poses corresponding to the two frames of environment sensing data adjacent to the depth image. The forward robot pose and the backward robot pose are interpolated to obtain the target robot pose corresponding to the target depth image. The target depth image is then projected onto the map corresponding to the target area according to that pose, yielding the updated map.
In the above embodiment, the depth image acquired by the depth camera sensor is acquired, and its image acquisition time is taken as its timestamp. When updating the map corresponding to the target area based on the depth image, the robot pose whose timestamp matches that of the depth image is acquired from the pose buffer as the target robot pose; if no matching robot pose exists, the forward robot pose and the backward robot pose corresponding to the depth image are acquired based on its timestamp and interpolated to obtain the target robot pose corresponding to the depth image. In this way, the target robot pose corresponding to the target depth image can be obtained quickly and accurately from the pose buffer according to its timestamp, which improves the synchronization between the depth image and the robot pose and thus the accuracy of map updating.
In one embodiment, the data acquisition time corresponding to the environmental awareness data is obtained by performing time compensation on the data receiving time corresponding to the environmental awareness data based on the data generation duration corresponding to the environmental awareness data.
After the environment sensing sensor collects raw sensor data within its sensing area, the raw data is processed to obtain the corresponding environment sensing data; the time consumed by this processing is the data generation duration. The data receiving time refers to the moment when the robot receives the environment sensing data uploaded by the environment sensing sensor. The data generation duration may be preset according to the performance of the environment sensing sensor, or may be the average of the durations the sensor has taken to process the collected raw data.
Illustratively, when the robot receives the environment sensing data, the moment of reception is taken as the data receiving time corresponding to the environment sensing data. The data acquisition time corresponding to the environment sensing data is then obtained by subtracting the data generation duration from the data receiving time.
In a specific embodiment, the environment sensing data may be a depth image acquired by a depth camera sensor, and the corresponding data acquisition time is obtained by time-compensating the image receiving time corresponding to the depth image with the image generation duration. After the depth camera sensor acquires a color image within its sensing area, it performs depth processing on the color image to obtain the corresponding depth image; the time consumed by this depth processing is the image generation duration. The image receiving time refers to the moment when the robot receives the depth image uploaded by the depth camera sensor. The image generation duration may be preset according to the performance of the depth camera sensor, or may be the average of the durations the sensor has taken to perform depth processing on the acquired color images.
In the above embodiment, a certain time difference exists between the data receiving time and the data acquisition time corresponding to the environment sensing data. Therefore, the data generation duration is subtracted from the data receiving time to obtain the data acquisition time, based on which the robot pose at the moment the environment sensing sensor acquired the data can be found accurately, improving the accuracy of the determined target robot pose.
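The time compensation reduces to a subtraction; a minimal sketch, where the running-average estimator corresponds to one of the two options the text allows (the other being a preset value) and all names are illustrative:

```python
class GenerationDurationEstimator:
    """Running average of observed sensor processing durations."""
    def __init__(self):
        self._total = 0.0
        self._count = 0

    def observe(self, duration):
        self._total += duration
        self._count += 1

    def average(self):
        return self._total / self._count if self._count else 0.0


def acquisition_time(receiving_time, generation_duration):
    # data acquisition time = data receiving time - data generation duration
    return receiving_time - generation_duration
```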
In one embodiment, interpolating a forward robot pose and a backward robot pose to obtain a target robot pose corresponding to target environment awareness data, including:
Determining interpolation weights corresponding to forward robot poses based on forward time differences between the target environment sensing data and the forward robot poses, and determining interpolation weights corresponding to the backward robot poses based on backward time differences between the target environment sensing data and the backward robot poses; the interpolation weight is inversely related to the corresponding time difference;
and interpolating the forward robot pose and the backward robot pose based on interpolation weights respectively corresponding to the forward robot pose and the backward robot pose to obtain the target robot pose corresponding to the target environment perception data.
The robot takes the difference between the data acquisition time corresponding to the target environment sensing data and the timestamp corresponding to the forward robot pose as the forward time difference, and the difference between the timestamp corresponding to the backward robot pose and the data acquisition time as the backward time difference. The interpolation weight corresponding to the forward robot pose is determined from the forward time difference, and that of the backward robot pose from the backward time difference. If the time difference between the data acquisition time and the timestamp of the forward robot pose is small, the robot pose at the moment the environment sensing data was acquired is close to the forward robot pose, so the forward robot pose is given a higher interpolation weight; if that time difference is large, the deviation from the robot pose at the acquisition moment is large, so the forward robot pose is given a lower interpolation weight. The interpolation weight corresponding to the backward robot pose is determined in the same way. It can be understood that, when interpolating the forward and backward robot poses, each pose's interpolation weight is inversely related to its time difference. The forward robot pose and the backward robot pose are then interpolated using their respective interpolation weights to obtain the target robot pose corresponding to the target environment sensing data.
In the above embodiment, when interpolating the forward robot pose and the backward robot pose, the time differences between the data acquisition time of the target environment sensing data and the two poses are determined; the pose with the larger time difference is given a lower interpolation weight and the pose with the smaller time difference a higher one. The interpolated target robot pose is therefore more accurate, improving the accuracy of map updating.
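A minimal interpolation sketch under the stated weighting, with the pose represented as (x, y, yaw); the shortest-arc handling of yaw is an implementation assumption, not stated in the text:

```python
import math

def interpolate_pose(t_acq, fwd, bwd):
    """fwd/bwd: (timestamp, (x, y, yaw)). Each weight is inversely related
    to that pose's time difference from the acquisition time t_acq."""
    (t0, p0), (t1, p1) = fwd, bwd
    dt_f, dt_b = t_acq - t0, t1 - t_acq
    w0 = dt_b / (dt_f + dt_b)        # smaller forward gap -> larger weight
    w1 = dt_f / (dt_f + dt_b)
    x = w0 * p0[0] + w1 * p1[0]
    y = w0 * p0[1] + w1 * p1[1]
    # interpolate yaw along the shortest angular arc
    dyaw = math.atan2(math.sin(p1[2] - p0[2]), math.cos(p1[2] - p0[2]))
    yaw = p0[2] + w1 * dyaw
    return (x, y, yaw)
```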
In one embodiment, acquiring forward and backward robot poses with time stamps matching the target environment awareness data from a pose buffer comprises:
Determining a forward time range and a backward time range corresponding to the target environment sensing data based on the data acquisition time corresponding to the target environment sensing data and the data acquisition period corresponding to the environment sensing data;
acquiring a forward robot pose with a time stamp in a forward time range and a backward robot pose with a time stamp in a backward time range from a pose buffer zone;
When the forward robot pose or the backward robot pose does not exist in the pose buffer, waiting for a second preset duration and continuing to acquire, from the pose buffer, the forward robot pose whose timestamp lies in the forward time range and the backward robot pose whose timestamp lies in the backward time range; and when the forward robot pose or the backward robot pose still does not exist in the pose buffer, taking the candidate robot pose corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data; the candidate robot pose corresponding to the target environment sensing data is the robot pose whose data generation time matches the data receiving time corresponding to the target environment sensing data.
The second preset duration refers to duration of waiting for continuously acquiring the forward robot pose and the backward robot pose corresponding to the target environment sensing data from the pose buffer zone when the forward robot pose or the backward robot pose does not exist in the pose buffer zone. The candidate robot pose refers to a first robot pose generated after the robot receives the target environment sensing data.
Illustratively, the robot computes the difference between the data acquisition time corresponding to the target environment sensing data and the data acquisition period corresponding to the environment sensing data, takes it as a first time point, and determines the time range between the first time point and the data acquisition time as the forward time range. The sum of the data acquisition time and the data acquisition period gives a second time point, and the time range between the data acquisition time and the second time point is determined as the backward time range. In the pose buffer, the robot pose whose timestamp lies in the forward time range is acquired as the forward robot pose, and the robot pose whose timestamp lies in the backward time range as the backward robot pose. When the forward robot pose or the backward robot pose does not exist in the pose buffer, the corresponding pose has either not yet been generated or has not yet been stored in the pose buffer; the robot waits for a second preset duration and acquires the forward and backward robot poses from the pose buffer again. If, after waiting for the second preset duration, the forward robot pose or the backward robot pose still does not exist in the pose buffer, the candidate robot pose corresponding to the target environment sensing data is used directly as the target robot pose, so as to maintain map updating efficiency.
The candidate robot pose corresponding to the target environment sensing data refers to the first generated robot pose after the robot receives the target environment sensing data, namely the robot pose with the data generating time being after the data receiving time corresponding to the target environment sensing data and the data generating time being closest to the data receiving time corresponding to the target environment sensing data.
In the above embodiment, when the forward robot pose or the backward robot pose does not exist in the pose buffer, the corresponding pose has either not yet been generated or has not yet been stored in the pose buffer. In this case, the robot waits for a second preset duration according to the waiting strategy corresponding to the environment sensing data and then acquires the forward and backward robot poses from the pose buffer again, which effectively avoids desynchronization between the environment sensing data and the robot pose caused by pose-data delay. When the re-acquisition fails, the candidate robot pose corresponding to the target environment sensing data is used directly as the target robot pose, which prevents excessive waiting from degrading system efficiency and thus preserves map updating efficiency.
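The window construction above can be sketched as follows (a sketch assuming a time-sorted buffer of (timestamp, pose) pairs; helper names and the open/closed choice at the range endpoints are illustrative):

```python
def pose_windows(t_acq, period):
    """Forward range [t_acq - period, t_acq) and backward range
    (t_acq, t_acq + period], both derived from the acquisition period."""
    return (t_acq - period, t_acq), (t_acq, t_acq + period)

def bracketing_poses(buffer, t_acq, period):
    """Latest pose inside the forward range and earliest pose inside the
    backward range; either may be None when its range is empty."""
    (f_lo, f_hi), (b_lo, b_hi) = pose_windows(t_acq, period)
    forward = [e for e in buffer if f_lo <= e[0] < f_hi]
    backward = [e for e in buffer if b_lo < e[0] <= b_hi]
    return (max(forward) if forward else None,
            min(backward) if backward else None)
```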
In one embodiment, the first preset duration and the second preset duration may be determined according to the data acquisition period corresponding to the environment sensing data, with both durations positively related to the period. The method of step S206 and step S302 for determining the target robot pose is better suited to cases where the time difference between the data receiving time and the data acquisition time is small. For example, when the environment sensing sensor is a lidar sensor and the environment sensing data is lidar data, the target robot pose corresponding to the lidar data may be determined through step S206 and step S302, and the first preset duration of the waiting mechanism is determined based on the data acquisition period corresponding to the lidar data. The method of steps S402, S404 and S406 is better suited to cases where that time difference is large. For example, when the environment sensing sensor is a depth camera sensor and the environment sensing data is a depth image, the target robot pose corresponding to the depth image may be determined through steps S402, S404 and S406, and the second preset duration of the waiting mechanism is determined based on the data acquisition period corresponding to the depth images. Since the data acquisition period of the lidar sensor is shorter, the lidar sensor serves as the first environment sensing sensor for determining the robot pose and the depth camera sensor as a second environment sensing sensor; therefore, the first preset duration determined from the lidar data acquisition period is smaller than the second preset duration determined from the depth image acquisition period.
In a specific embodiment, the map updating method provided by the application can be applied to a robot navigation system to update a cost map used in the robot navigation system. As shown in fig. 5, the map updating method includes the steps of:
1. Creating the data buffer (i.e., the pose buffer)
The robot navigation system initializes an empty buffer. When lidar data collected by the lidar sensor arrives, the data receiving time of that lidar data is taken as its timestamp. A robot pose is then generated from the lidar data, stamped with the same data receiving time, and stored in the data buffer under a thread lock, which guarantees the consistency and integrity of the transferred data. When the data buffer is full, the earliest received robot pose is evicted using a first-in, first-out policy.
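The buffer described above can be sketched as a small thread-safe FIFO container. The class and method names are illustrative, not from the patent, and the exact-match tolerance is an assumption.

```python
import threading
from collections import deque

class PoseBuffer:
    """Fixed-size FIFO buffer of timestamped robot poses (a sketch)."""

    def __init__(self, capacity=100):
        # A deque with maxlen evicts the oldest entry automatically (first-in, first-out).
        self._poses = deque(maxlen=capacity)
        self._lock = threading.Lock()  # the "thread locking" step described above

    def push(self, timestamp, pose):
        """Store a pose stamped with the data receiving time of its lidar frame."""
        with self._lock:
            self._poses.append((timestamp, pose))

    def find_exact(self, timestamp, tol=1e-6):
        """Return the pose whose timestamp matches within `tol`, or None."""
        with self._lock:
            for ts, pose in self._poses:
                if abs(ts - timestamp) <= tol:
                    return pose
        return None
```

With `capacity=2`, pushing a third pose evicts the first, mirroring the FIFO eviction policy described above.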
2. Updating the cost map based on lidar data
Before updating the cost map from newly received lidar data, the robot navigation system first obtains the target robot pose corresponding to that lidar data (i.e., the sensor data) from the data buffer. It first searches the data buffer for a robot pose carrying the same timestamp as the lidar data; if one exists, that pose is taken as the target robot pose corresponding to the lidar data. If none exists, the system waits a first preset duration, for example 0.01 s, and then searches the data buffer again for a robot pose with the same timestamp as the lidar data. If this second search also fails, the candidate robot pose corresponding to the lidar data is taken as the target robot pose. Once the target robot pose is determined, the lidar data is projected onto the cost map based on that pose, yielding the updated cost map.
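This wait-and-retry lookup can be sketched as follows. The helper names and the list-of-pairs buffer representation are illustrative, and `fallback` stands in for the candidate robot pose used when the retry also fails.

```python
import time

def find_exact(poses, stamp, tol=1e-6):
    """Search a list of (timestamp, pose) pairs for a matching timestamp."""
    for ts, pose in poses:
        if abs(ts - stamp) <= tol:
            return pose
    return None

def lookup_pose_with_wait(poses, stamp, wait_s=0.01, fallback=None):
    """Look up the pose for a lidar frame, retrying once after a short wait."""
    pose = find_exact(poses, stamp)
    if pose is None:
        time.sleep(wait_s)               # wait the first preset duration, e.g. 0.01 s
        pose = find_exact(poses, stamp)  # the buffer may have been filled meanwhile
    return pose if pose is not None else fallback
```

In a running system the second `find_exact` call would see poses pushed by the pose-generation thread during the wait; here the same list is re-read for simplicity.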
3. Updating the cost map based on depth images
After receiving a depth image, the robot navigation system time-compensates the image receiving time of the depth image based on its image generation time to obtain the image acquisition time, which is then used as the timestamp of the depth image. Further, as shown in fig. 5, before updating the cost map from the depth image, the data buffer is searched, using this timestamp, for a robot pose with the same timestamp, to serve as the target robot pose corresponding to the depth image. If no such pose is found, the robot poses of the two lidar frames immediately before and after the depth image are retrieved from the data buffer, and these two poses are interpolated to obtain the target robot pose. If this retrieval also fails, the system waits a second preset duration, for example 0.02 s, and searches again; if the search fails once more, the candidate robot pose corresponding to the depth image is taken as the target robot pose. Because the data acquisition frequency of depth images is lower than that of the lidar data, the second preset duration for depth images is longer than the first preset duration for lidar data. Once the target robot pose is determined, the depth image is projected onto the cost map based on that pose, yielding the updated cost map.
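The time compensation and the search for the two neighbouring lidar poses can be sketched as below. The function names and the time-sorted list-of-pairs buffer are illustrative assumptions; the interpolation of the two neighbours is a separate step.

```python
def acquisition_time(receive_time, generation_duration):
    """Compensate the receive time backwards to recover the capture time."""
    return receive_time - generation_duration

def find_neighbours(poses, t):
    """Latest pose at or before t and earliest pose at or after t.

    `poses` is a time-sorted list of (timestamp, pose) pairs standing in
    for the data buffer; either neighbour may be None at the buffer edges.
    """
    before = after = None
    for ts, pose in poses:
        if ts <= t:
            before = (ts, pose)
        if ts >= t and after is None:
            after = (ts, pose)
    return before, after
```

If either neighbour is missing, the flow above falls back to waiting the second preset duration and, finally, to the candidate robot pose.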
In this embodiment, creating a dynamic data buffer allows the robot pose to be updated in real time, ensuring the consistency and integrity of the pose data. At the same time, the image receiving time of each depth image is time-compensated to obtain its image acquisition time, and the corresponding robot pose is looked up in the data buffer using that acquisition time, which improves both the accuracy of the determined robot pose and the synchronization of the data. By setting different waiting policies for depth images and lidar data, the data-mismatch problem caused by delayed pose data is effectively avoided, without excessive waiting degrading system efficiency. The method thus improves the precision of the cost map, provides accurate data support for subsequent tasks such as robot localization and path planning, and helps ensure the efficiency and stability of the robot navigation system.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which need not be executed at the same moment but may be executed at different times, and whose order of execution need not be sequential: they may be executed in turn or alternately with at least part of the other steps or of their sub-steps or stages.
Based on the same inventive concept, an embodiment of the application also provides a map updating apparatus for implementing the map updating method described above. The implementation of the solution provided by the apparatus is similar to that described for the method, so for the specific limitations in one or more embodiments of the map updating apparatus below, reference may be made to the limitations of the map updating method above, which are not repeated here.
In one embodiment, as shown in fig. 6, there is provided a map updating apparatus including: a data acquisition module 602, a target data determination module 604, a pose determination module 606, and a map update module 608, wherein:
the data acquisition module 602 is configured to acquire an environmental perception data set acquired by the robot through the environmental perception sensor in the target area.
The target data determining module 604 is configured to sequentially use each environmental awareness data in the environmental awareness data set as target environmental awareness data.
The pose determining module 606 is configured to obtain, from a pose buffer area corresponding to the environmental awareness sensor, a pose of the robot, whose timestamp matches the data receiving time, as a pose of the target robot corresponding to the target environmental awareness data, based on the data receiving time corresponding to the target environmental awareness data; the pose buffer zone comprises a robot pose generated based on environment sensing data acquired by the environment sensing sensor, and a timestamp corresponding to the robot pose in the pose buffer zone is a data receiving time corresponding to the corresponding environment sensing data.
The map updating module 608 is configured to update a map corresponding to the target area based on each of the environment-aware data and the pose of the target robot corresponding to each of the environment-aware data.
In one embodiment, the map updating apparatus further includes:
The pose calculation module is used for receiving the environment sensing data transmitted by the environment sensing sensor before acquiring the environment sensing data set acquired by the robot in the target area through the environment sensing sensor; calculating the pose of the robot based on the environment sensing data, and taking the data receiving time corresponding to the environment sensing data as a time stamp corresponding to the pose of the robot; and storing the pose of the robot into a pose buffer area corresponding to the environment sensing sensor, continuously acquiring the next environment sensing data transmitted by the environment sensing sensor, and returning to the step of calculating the pose of the robot based on the environment sensing data for execution until the ending condition is met.
In one embodiment, the pose determination module 606 is further to:
When the robot pose with the timestamp matched with the data receiving time corresponding to the target environment sensing data does not exist in the pose buffer zone, waiting for a first preset duration, continuously obtaining the robot pose with the timestamp matched with the data receiving time from the pose buffer zone based on the data receiving time corresponding to the target environment sensing data, and when the obtaining fails, taking the candidate robot pose corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data; candidate robot poses corresponding to the target environment perception data are robot poses with data generation time matched with data receiving time corresponding to the target environment perception data.
In one embodiment, the pose determining module 606 is further configured to obtain, from the pose buffer, a pose of the robot whose timestamp matches the data acquisition time based on the data acquisition time corresponding to the target environmental awareness data, as a pose of the target robot corresponding to the target environmental awareness data; when the robot pose with the timestamp matched with the data acquisition time corresponding to the target environment sensing data does not exist in the pose buffer zone, acquiring the forward robot pose and the backward robot pose with the timestamp matched with the target environment sensing data from the pose buffer zone; and interpolating the forward robot pose and the backward robot pose to obtain the target robot pose corresponding to the target environment perception data.
In one embodiment, the data acquisition time corresponding to the environmental awareness data is obtained by performing time compensation on the data receiving time corresponding to the environmental awareness data based on the data generation duration corresponding to the environmental awareness data.
In one embodiment, the pose determination module 606 is further to:
determining interpolation weights corresponding to forward robot poses based on forward time differences between the target environment sensing data and the forward robot poses, and determining interpolation weights corresponding to the backward robot poses based on backward time differences between the target environment sensing data and the backward robot poses; the interpolation weight is inversely related to the corresponding time difference; and interpolating the forward robot pose and the backward robot pose based on interpolation weights respectively corresponding to the forward robot pose and the backward robot pose to obtain the target robot pose corresponding to the target environment perception data.
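A minimal sketch of this inverse-time-difference weighting, for planar (x, y, theta) poses. The pose representation and the linear/angular interpolation scheme are assumptions, since the patent does not fix them; it only requires that the closer neighbour receive the larger weight.

```python
import math

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Interpolate between the forward pose (at t0) and backward pose (at t1) at time t.

    Each weight is inversely related to that pose's time difference from t:
    the closer neighbour contributes more to the result.
    """
    w1 = (t - t0) / (t1 - t0)   # backward weight, grows as t approaches t1
    w0 = 1.0 - w1               # forward weight, grows as t approaches t0
    x = w0 * pose0[0] + w1 * pose1[0]
    y = w0 * pose0[1] + w1 * pose1[1]
    # Interpolate the heading on the circle to avoid wrap-around errors.
    dtheta = math.atan2(math.sin(pose1[2] - pose0[2]),
                        math.cos(pose1[2] - pose0[2]))
    theta = pose0[2] + w1 * dtheta
    return (x, y, theta)
```

At the midpoint both weights are 0.5; at one quarter of the interval the forward pose carries weight 0.75, consistent with the inverse relation above.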
In one embodiment, the pose determination module 606 is further to:
Determining a forward time range and a backward time range corresponding to the target environment sensing data based on the data acquisition time corresponding to the target environment sensing data and the data acquisition period corresponding to the environment sensing data; acquiring, from the pose buffer, a forward robot pose with a timestamp in the forward time range and a backward robot pose with a timestamp in the backward time range; when the forward robot pose or the backward robot pose does not exist in the pose buffer, waiting for a second preset duration and continuing to acquire, from the pose buffer, a forward robot pose with a timestamp in the forward time range and a backward robot pose with a timestamp in the backward time range; and when the forward robot pose or the backward robot pose still does not exist in the pose buffer, taking the candidate robot pose corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data. The candidate robot pose corresponding to the target environment sensing data is a robot pose whose data generation time matches the data receiving time corresponding to the target environment sensing data.
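The range construction can be sketched as below. The patent only states that the two ranges derive from the data acquisition time and the data acquisition period, so the choice of one period on each side of the acquisition time is an assumption, as are the helper names.

```python
def neighbour_ranges(t_acq, period):
    """Search windows for the forward (earlier) and backward (later) poses."""
    forward_range = (t_acq - period, t_acq)    # earlier neighbour expected here
    backward_range = (t_acq, t_acq + period)   # later neighbour expected here
    return forward_range, backward_range

def poses_in_range(poses, lo, hi):
    """Timestamped poses from a (timestamp, pose) list that fall in [lo, hi]."""
    return [(ts, pose) for ts, pose in poses if lo <= ts <= hi]
```

If either window comes back empty, the module waits the second preset duration and retries before falling back to the candidate robot pose.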
According to the map updating apparatus above, a pose buffer is created that contains robot poses generated from the environment sensing data, with the data receiving time of each piece of environment sensing data used as the timestamp of the corresponding robot pose, so that the environment sensing data and the robot pose generated from it carry the same timestamp. Therefore, when the map is updated from target environment sensing data in the environment sensing data set, the corresponding target robot pose can be found quickly and accurately, according to the data receiving time, in the pose buffer storing the robot poses for the respective environment sensing data, which improves the synchronization between the environment sensing data and the robot poses. Because the accuracy of the target robot pose obtained for the target environment sensing data is improved, updating the map of the target area based on the target environment sensing data and its corresponding target robot pose improves the accuracy of map updating.
The respective modules in the map updating apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded, in hardware form, in or independent of the processor of the computer device, or stored, in software form, in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal and whose internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication may be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a map updating method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an e-ink display. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed. The computer device may be a robot, the robot being provided with an environment-aware sensor, the robot further comprising a memory and a processor, the memory storing a computer program, the processor executing the computer program to perform the steps of the method embodiments described above. In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing embodiments illustrate only a few implementations of the application; although they are described in some detail, they are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the application should be determined by the appended claims.

Claims (10)

1. A map updating method, characterized in that the method comprises:
Acquiring an environment sensing data set acquired by a robot in a target area through an environment sensing sensor;
sequentially taking each environmental perception data in the environmental perception data set as target environmental perception data;
Acquiring a robot pose with a timestamp matched with the data receiving time as a target robot pose corresponding to the target environment sensing data based on the data receiving time corresponding to the target environment sensing data from a pose buffer area corresponding to the environment sensing sensor; the pose buffer zone comprises a robot pose generated based on the environment sensing data acquired by the environment sensing sensor, and a timestamp corresponding to the robot pose in the pose buffer zone is a data receiving time corresponding to the corresponding environment sensing data;
And updating a map corresponding to the target area based on the environment sensing data and the target robot pose corresponding to the environment sensing data.
2. The method of claim 1, wherein before the acquiring of the environment sensing data set acquired by the robot in the target area through the environment sensing sensor, the method further comprises:
Receiving environment sensing data transmitted by the environment sensing sensor;
Calculating the pose of the robot based on the environment sensing data, and taking the data receiving time corresponding to the environment sensing data as a time stamp corresponding to the pose of the robot;
And storing the robot pose into a pose buffer area corresponding to the environment sensing sensor, continuously acquiring the next environment sensing data transmitted by the environment sensing sensor, and returning to the step of calculating the robot pose based on the environment sensing data for execution until the ending condition is met.
3. The method according to claim 1, wherein the method further comprises:
When the robot pose with the timestamp matched with the data receiving time corresponding to the target environment sensing data does not exist in the pose buffer zone, waiting for a first preset duration, continuing to acquire the robot pose with the timestamp matched with the data receiving time from the pose buffer zone based on the data receiving time corresponding to the target environment sensing data, and taking the candidate robot pose corresponding to the target environment sensing data as the target robot pose corresponding to the target environment sensing data when acquisition fails; and the candidate robot pose corresponding to the target environment perception data is a robot pose with the data generation time matched with the data receiving time corresponding to the target environment perception data.
4. The method according to claim 1, wherein the method further comprises:
Acquiring a robot pose with a timestamp matched with the data acquisition time from the pose buffer zone based on the data acquisition time corresponding to the target environment perception data as a target robot pose corresponding to the target environment perception data;
when the pose buffer zone does not have the robot pose with the timestamp matched with the data acquisition time corresponding to the target environment sensing data, acquiring the forward robot pose and the backward robot pose with the timestamp matched with the target environment sensing data from the pose buffer zone;
And interpolating the forward robot pose and the backward robot pose to obtain the target robot pose corresponding to the target environment perception data.
5. The method according to any one of claims 1 to 4, wherein the data acquisition time corresponding to the environment-aware data is obtained by time-compensating the data reception time corresponding to the environment-aware data based on a data generation duration corresponding to the environment-aware data.
6. The method of claim 4, wherein interpolating the forward robot pose and the backward robot pose to obtain a target robot pose corresponding to the target environmental awareness data comprises:
Determining interpolation weights corresponding to the forward robot pose based on a forward time difference between the target environment sensing data and the forward robot pose, and determining interpolation weights corresponding to the backward robot pose based on a backward time difference between the target environment sensing data and the backward robot pose; the interpolation weight is inversely related to the corresponding time difference;
And interpolating the forward robot pose and the backward robot pose based on interpolation weights respectively corresponding to the forward robot pose and the backward robot pose to obtain a target robot pose corresponding to the target environment perception data.
7. The method of claim 4, wherein the obtaining, from the pose buffer, of the forward robot pose and the backward robot pose having timestamps matching the target environment sensing data comprises:
Determining a forward time range and a backward time range corresponding to the target environment perception data based on the data acquisition time corresponding to the target environment perception data and the data acquisition period corresponding to the environment perception data;
acquiring a forward robot pose with a time stamp in the forward time range and a backward robot pose with a time stamp in the backward time range from the pose buffer;
When the forward robot pose or the backward robot pose does not exist in the pose buffer zone, waiting for a second preset time period, continuing to acquire the forward robot pose with the timestamp in the forward time range and the backward robot pose with the timestamp in the backward time range from the pose buffer zone, and when the forward robot pose or the backward robot pose does not exist in the pose buffer zone, taking the candidate robot pose corresponding to the target environment perception data as the target robot pose corresponding to the target environment perception data; and the candidate robot pose corresponding to the target environment perception data is a robot pose with the data generation time matched with the data receiving time corresponding to the target environment perception data.
8. A map updating apparatus, characterized in that the apparatus comprises:
The data acquisition module is used for acquiring an environment perception data set acquired by the robot through the environment perception sensor in the target area;
The target data determining module is used for sequentially taking each environmental perception data in the environmental perception data set as target environmental perception data;
The pose determining module is used for acquiring a robot pose with a timestamp matched with the data receiving time from a pose buffer area corresponding to the environment sensing sensor based on the data receiving time corresponding to the target environment sensing data as a target robot pose corresponding to the target environment sensing data; the pose buffer zone comprises a robot pose generated based on the environment sensing data acquired by the environment sensing sensor, and a timestamp corresponding to the robot pose in the pose buffer zone is a data receiving time corresponding to the corresponding environment sensing data;
and the map updating module is used for updating the map corresponding to the target area based on the environment perception data and the pose of the target robot corresponding to the environment perception data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410510963.XA 2024-04-26 2024-04-26 Map updating method, map updating device, computer equipment and storage medium Pending CN118089705A (en)



Similar Documents

Publication Publication Date Title
US11145083B2 (en) Image-based localization
US10068344B2 (en) Method and system for 3D capture based on structure from motion with simplified pose detection
EP2993490B1 (en) Operating device, operating method, and program therefor
EP2959315B1 (en) Generation of 3d models of an environment
WO2018112926A1 (en) Locating method, terminal and server
US20160260250A1 (en) Method and system for 3d capture based on structure from motion with pose detection tool
US10157478B2 (en) Enabling use of three-dimensional locations of features with two-dimensional images
WO2018205803A1 (en) Pose estimation method and apparatus
KR20220028042A (en) Pose determination method, apparatus, electronic device, storage medium and program
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
Goldberg et al. Stereo and IMU assisted visual odometry on an OMAP3530 for small robots
WO2024027350A1 (en) Vehicle positioning method and apparatus, computer device and storage medium
CN111735439A (en) Map construction method, map construction device and computer-readable storage medium
WO2023005457A1 (en) Pose calculation method and apparatus, electronic device, and readable storage medium
CN114111776A (en) Positioning method and related device
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN117232499A (en) Multi-sensor fusion point cloud map construction method, device, equipment and medium
CN113295159B (en) Positioning method and device for end cloud integration and computer readable storage medium
CN112414444B (en) Data calibration method, computer equipment and storage medium
JP2023503750A (en) ROBOT POSITIONING METHOD AND DEVICE, DEVICE, STORAGE MEDIUM
WO2023160445A1 (en) Simultaneous localization and mapping method and apparatus, electronic device, and readable storage medium
CN118089705A (en) Map updating method, map updating device, computer equipment and storage medium
CN115830073A (en) Map element reconstruction method, map element reconstruction device, computer equipment and storage medium
CN115222815A (en) Obstacle distance detection method, obstacle distance detection device, computer device, and storage medium
CN115294280A (en) Three-dimensional reconstruction method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination