CN115979251B - Map generation method and robot


Info

Publication number: CN115979251B
Authority: CN (China)
Prior art keywords: point cloud, height, grid, grids, determining
Legal status: Active (granted)
Application number: CN202310271548.9A
Other languages: Chinese (zh)
Other versions: CN115979251A
Inventors: 黄游平, 方根在, 肖晶, 钟望坤, 肖志光
Current Assignee: Shenzhen Pengxing Intelligent Research Co Ltd
Original Assignee: Shenzhen Pengxing Intelligent Research Co Ltd
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd
Priority to CN202310271548.9A
Publication of CN115979251A
Publication of CN115979251B (application granted)

Abstract

The application provides a map generation method and a robot. The method includes: acquiring environmental point cloud data corresponding to the surrounding environment of the robot; performing grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of a plurality of grids; determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids; and generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspended layer elevation corresponding to each of the grids. With this method, the surface layer elevation and the suspended layer elevation can be determined from the environmental point cloud data of the surrounding environment in which the robot is located, and the target map can then be generated from these two elevations, so that the robot can be guided to pass through objects or terrain in a hollow state. This improves map generation efficiency and makes the target map more consistent with the actual environment.

Description

Map generation method and robot
Technical Field
The present application relates to the field of artificial intelligence, and more particularly, to a map generation method, apparatus, robot, and storage medium.
Background
With the rapid development of technology, autonomous mobile robots are increasingly used in everyday life. At present, a mobile robot mainly locates its position in an unknown environment through simultaneous localization and mapping technology, thereby achieving autonomous movement.
Conventional mobile robot map construction relies mainly on manual intervention: maps are built by manually setting navigation target points or by directly controlling the mobile robot's movement. This wastes time, manpower, and material resources in large and complex environments, and because environmental attributes cannot be set before a task is executed, tasks that operate on local areas cannot be performed.
Disclosure of Invention
In view of this, the embodiments of the present application provide a map generating method, apparatus, robot and storage medium, so as to solve the existing technical problems.
According to an aspect of the embodiments of the present application, there is provided a map generating method, applied to a robot, including: acquiring environment point cloud data corresponding to the surrounding environment of the robot; performing grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of a plurality of grids; determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids; and generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids.
According to an aspect of embodiments of the present application, there is provided a robot including: a body; a control system in communication with the fuselage, the control system comprising a processor and a memory in communication with the processor, the memory storing instructions that when executed on the processor cause the processor to perform operations comprising: acquiring environment point cloud data corresponding to the surrounding environment of the robot; performing grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of a plurality of grids; determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids; and generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids.
In the scheme of the application, the environmental point cloud data is processed according to the environmental point cloud data corresponding to the surrounding environment of the robot, so that grid point cloud data corresponding to each of a plurality of grids is obtained, the surface layer elevation and the suspension layer elevation corresponding to each of the plurality of grids can be determined based on the grid point cloud data corresponding to each of the plurality of grids, and finally the target map is generated according to the surface layer elevation and the suspension layer elevation corresponding to each of the plurality of grids. According to the scheme, the target map with the surface layer elevation and the suspension layer elevation can be generated, the robot can be guided to pass through the object or the terrain in the hollow state, the generation efficiency of the map is improved, the target map is more in line with the actual situation, the accuracy of the target map is effectively improved, the generated target map is smaller in data volume compared with the 3D map, and processing resources and storage resources can be saved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the application and that other drawings may be derived from them without undue burden.
Fig. 1 is a schematic diagram of a hardware configuration of a robot according to an embodiment of the present application.
Fig. 2 is a schematic view of a mechanical structure of a robot according to an embodiment of the present application.
Fig. 3 is a flowchart of a map generation method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an application scenario illustrated according to an embodiment of the present application.
FIG. 5 is a schematic view of an object or terrain having a "hollow" morphology, according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a target map, according to an embodiment of the application.
FIG. 7 is a flowchart showing specific steps for determining a surface layer elevation and a suspended layer elevation corresponding to each of a plurality of grids according to grid point cloud data corresponding to each of the plurality of grids, according to one embodiment of the present application.
FIG. 8 is a schematic diagram illustrating different elevations according to an embodiment of the application.
FIG. 9 is a flowchart illustrating specific steps for determining a surface layer elevation and a suspended layer elevation corresponding to each of a plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids and the height relationship, according to an embodiment of the present application.
Fig. 10 is a flowchart illustrating a specific step of determining a surface elevation and a suspended layer elevation corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids if the height relationship is that the first point cloud height is greater than the preset height according to an embodiment of the present application.
Fig. 11 is a schematic diagram of a local grid map and grid point cloud data according to an embodiment of the present application.
Fig. 12 is a flowchart illustrating a specific step of determining the surface layer elevation corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids if the height relationship is that the second point cloud height is smaller than the preset height according to an embodiment of the present application.
Fig. 13 is a schematic diagram of a local grid map and grid point cloud data, according to another embodiment of the present application.
Fig. 14 is a flowchart illustrating a specific step of clustering grid point cloud data corresponding to each of a plurality of grids to obtain at least one grid point cloud set corresponding to each of the plurality of grids according to an embodiment of the present application.
Fig. 15 shows a target map corresponding to stairs according to an embodiment of the present application.
Fig. 16 is a flowchart of a map generation method according to another embodiment of the present application.
Fig. 17 is a block diagram of a map generation apparatus according to an embodiment of the present application.
Specific embodiments of the invention have been shown in the foregoing drawings and will be described in more detail hereinafter, with the understanding that the present disclosure is to be considered in all respects as illustrative and not restrictive, and that the scope of the inventive concepts is not limited to the specific embodiments shown and described.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote components are intended only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module," "component," and "unit" may be used interchangeably.
Referring to fig. 1, fig. 1 is a schematic hardware structure of a robot 100 according to one embodiment of the present disclosure. The robot 100 may be any of a variety of robots, including but not limited to at least one of a wheeled robot, a legged robot, a tracked robot, a crawling robot, a peristaltic robot, a swimming robot, and the like; for example, the robot 100 may be a legged robot, or a robot combining legged and wheeled locomotion. A legged robot may be a single-legged, biped, or multi-legged robot, where a multi-legged robot is a legged robot with three or more legs; for example, the multi-legged robot may be a quadruped robot. Here, a robot is a machine capable of performing work semi-autonomously or fully autonomously; it is not limited to a humanoid device and may also take a form such as a dog, horse, snake, fish, ape, or monkey, for example a quadruped robotic horse. In the embodiment shown in fig. 1, the robot 100 includes a mechanical unit 101, a communication unit 102, a sensing unit 103, an interface unit 104, a storage unit 105, a display unit 106, an input unit 107, a control module 110, and a power source 111. The various components of the robot 100 may be connected in any manner, including wired or wireless connections. It will be appreciated by those skilled in the art that the configuration of the robot 100 shown in fig. 1 does not limit the robot 100: the robot 100 may include more or fewer components than illustrated, certain components are not essential to the robot 100, and components may be omitted or combined as needed within a scope that does not change the essence of the invention.
Fig. 2 is a schematic mechanical structure of a robot according to an embodiment of the present application. The following describes the various components of the robot 100 in detail with reference to fig. 1 and 2:
The mechanical unit 101 is the hardware of the robot 100. As shown in fig. 1, the mechanical unit 101 may include a drive board 1011, motors 1012, and a mechanical structure 1013. As shown in fig. 2, the mechanical structure 1013 may include a body 1014, extendable legs 1015, and feet 1016; in other embodiments, the mechanical structure 1013 may further include an extendable mechanical arm (not shown), a rotatable head structure 1017, a swingable tail structure 1018, a carrier structure 1019, a saddle structure 1020, a camera structure 1021, and the like. It should be noted that each component module of the mechanical unit 101 may be one or more in number and may be set according to the specific situation; for example, there may be 4 legs 1015, each leg 1015 may be configured with 3 motors 1012, and the corresponding number of motors 1012 is then 12.
The communication unit 102 may be used for receiving and transmitting signals, or may be used for communicating with a network and other devices, for example, receiving command information sent by the remote controller or other robots 100 to move in a specific direction with a specific speed value according to a specific gait, and then transmitting the command information to the control module 110 for processing. The communication unit 102 includes, for example, a WiFi module, a 4G module, a 5G module, a bluetooth module, an infrared module, and the like.
The sensing unit 103 is used to acquire information about the surrounding environment of the robot 100 and to monitor parameter data of the components within the robot 100, and to send this information to the control module 110. The sensing unit 103 includes various sensors. Sensors that acquire surrounding environment information include: lidar (for remote object detection, distance determination and/or speed value determination), millimeter wave radar (for short-range object detection, distance determination and/or speed value determination), cameras, infrared cameras, global navigation satellite systems (GNSS, Global Navigation Satellite System), and the like. Sensors that monitor components within the robot 100 include: an inertial measurement unit (IMU, Inertial Measurement Unit) (for measuring velocity, acceleration, and angular velocity values), plantar sensors (for monitoring the plantar force point position, plantar posture, and the magnitude and direction of the touchdown force), and temperature sensors (for detecting component temperature). Other sensors that may further be configured for the robot 100, such as load sensors, touch sensors, motor angle sensors, and torque sensors, are not described here.
The interface unit 104 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the robot 100, or may be used to output (e.g., data information, power, etc.) to an external device. The interface unit 104 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting devices having identification modules, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 105 is used to store a software program and various data. The storage unit 105 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system program, a motion control program, an application program (such as a text editor), and the like; the data storage area may store data generated by the robot 100 in use (such as various sensing data acquired by the sensing unit 103 and log file data), and the like. In addition, the storage unit 105 may include high-speed random access memory, and may also include nonvolatile memory, such as disk memory, flash memory, or other nonvolatile solid state memory.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 107 may be used to receive input numeric or character information. In particular, the input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations of a user (e.g., operations of the user on the touch panel 1071 or in the vicinity of the touch panel 1071 using a palm, a finger, or a suitable accessory), and drive the corresponding connection device according to a preset program. The touch panel 1071 may include two parts of a touch detection device 1073 and a touch controller 1074. The touch detection device 1073 detects the touch orientation of the user, detects a signal caused by the touch operation, and transmits the signal to the touch controller 1074; the touch controller 1074 receives touch information from the touch detecting device 1073, converts it into touch point coordinates, and sends the touch point coordinates to the control module 110, and can receive and execute commands sent from the control module 110. The input unit 107 may include other input devices 1072 in addition to the touch panel 1071. In particular, other input devices 1072 may include, but are not limited to, one or more of a remote control handle or the like, as is not limited herein.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the control module 110 to determine the type of touch event, and then the control module 110 provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions, which is not limited herein.
The control module 110 is a control center of the robot 100, connects various components of the entire robot 100 using various interfaces and lines, and performs overall control of the robot 100 by running or executing a software program stored in the storage unit 105 and calling data stored in the storage unit 105.
The power supply 111 is used to supply power to the various components, and the power supply 111 may include a battery and a power control board for controlling functions such as battery charging, discharging, and power consumption management. In the embodiment shown in fig. 1, the power source 111 is electrically connected to the control module 110, and in other embodiments, the power source 111 may be further electrically connected to the sensing unit 103 (such as a camera, a radar, a speaker, etc.), and the motor 1012, respectively. It should be noted that each component may be connected to a different power source 111, or may be powered by the same power source 111.
On the basis of the above-described embodiments, specifically, in some embodiments, the communication connection with the robot 100 may be performed through a terminal device, instruction information may be transmitted to the robot 100 through the terminal device when the terminal device communicates with the robot 100, the robot 100 may receive the instruction information through the communication unit 102, and the instruction information may be transmitted to the control module 110 in case of receiving the instruction information, so that the control module 110 may process to obtain the target speed value according to the instruction information. Terminal devices include, but are not limited to: a mobile phone, a tablet personal computer, a server, a personal computer, a wearable intelligent device and other electrical equipment with an image shooting function.
The instruction information may be determined according to preset conditions. In one embodiment, the robot 100 may include a sensing unit 103, and the sensing unit 103 may generate instruction information according to the current environment in which the robot 100 is located. The control module 110 may determine, according to the instruction information, whether the current speed value of the robot 100 meets the corresponding preset condition. If so, the robot 100 maintains its current speed value and current gait; if not, the target speed value and the corresponding target gait are determined according to the corresponding preset conditions, so that the robot 100 can be controlled to move at the target speed value with the corresponding target gait. The environmental sensors may include a temperature sensor, a barometric pressure sensor, a visual sensor, and an acoustic sensor, and the instruction information may correspondingly include temperature information, air pressure information, image information, and sound information. The communication mode between the environmental sensors and the control module 110 may be wired or wireless. Means of wireless communication include, but are not limited to: wireless networks, mobile communication networks (3G, 4G, 5G, etc.), Bluetooth, and infrared.
Fig. 3 is a flowchart of a map generation method according to an embodiment of the present application. The method may be performed by an electronic device with processing capability, such as a server, a cloud server, a terminal, or a robot, and is not specifically limited herein. As shown in fig. 3, the method is described as applied to a robot, and specifically includes the following steps:
Step S110, acquiring environment point cloud data corresponding to the surrounding environment where the robot is located.
As one way, the environmental point cloud data refers to point cloud data used to characterize the surrounding environment in which the robot is located, and may include point cloud data information corresponding to that environment. Environmental point cloud data is a large set of points expressing the spatial distribution of a target and the characteristics of its surface under the same spatial reference system, and may simply be called a point cloud. Point cloud data carries rich information, for example three-dimensional coordinates X, Y, Z, color, classification value, intensity value, time, and the like. In an embodiment of the present application, the environmental point cloud data may be point cloud data of environmental objects detected by a sensor of the robot in a spatial reference coordinate system with the sensor or the robot as the origin. Alternatively, the point cloud data may be data representing the spatial distribution of the environmental objects. The spatial reference coordinate system may be a radar coordinate system. For example, point cloud data in the radar coordinate system may be converted to a camera coordinate system or to an image coordinate system to facilitate subsequent data processing.
In some embodiments, the sensors configured on the robot may be depth sensors, corresponding to the detected ambient point cloud data of the ambient object being depth data information. Optionally, the depth sensor may directly detect a depth image of the environmental object, where each pixel in the depth image includes an actual distance between the environmental object and the depth sensor, and directly reflects a geometric shape of a visible surface of the environmental object, and the depth image may obtain point cloud data information corresponding to the depth image through point cloud transformation.
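As an illustrative aside (not part of the patent text), the following Python sketch shows one common way to convert a depth image into a point cloud in the camera coordinate system. It assumes a pinhole camera model with intrinsics fx, fy, cx, cy, which are assumptions rather than values given in this application.

```python
import numpy as np

def depth_image_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an N x 3 point cloud in the
    camera coordinate system, assuming a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```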
In other embodiments, the sensor configured on the robot may also be a radar sensor or a laser sensor, and the corresponding environmental point cloud data of the detected environmental objects is radar point cloud data or laser point cloud data. Laser point cloud data means that after a laser beam irradiates the surface of an obstacle, the returned data contains the coordinates of points on the surface of the environmental object in three-dimensional space; the set of these points is the laser point cloud, and the obtained data is the laser point cloud data.
The sensor configured on the robot can comprise one or more of an active binocular camera, a passive binocular camera, a radar sensor or a laser sensor, and the robot can acquire environmental point cloud data corresponding to the surrounding environment through the one or more sensors.
As another way, the position information of the robot can also be acquired so as to locate the robot. The position information of the robot may be determined from information acquired by a GNSS (Global Navigation Satellite System) module, a GPS module, an IMU (Inertial Measurement Unit) module, or an odometer in the robot. The IMU may be a module composed of several sensors such as a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. The odometer is used to detect the distance that the mobile components of the robot (e.g., two or more legs, or wheels) move over a period of time, thereby deducing the change in the relative pose (position or heading) of the robot.
As another way, pose information of the robot may also be acquired. The pose information may indicate both the position of the robot and its attitude, where the attitude may include the pitch angle, yaw angle, roll angle, and the like of the robot.
The position information of the robot may be determined by a built-in GNSS module of the robot from the acquired GNSS signals or by a GPS module from the acquired GPS signals. When the robot is located in a place where the GNSS signal is weak or the GPS signal is weak (for example, entering an indoor area such as an underground parking lot), the robot can carry out dead reckoning according to the position information of the position points where the GNSS signal or the GPS signal is lower than a set threshold value and the information acquired by the odometer and the IMU module, so as to obtain the position information of each position point in the moving process.
During indoor or outdoor movement of the robot, the configured sensor can be called at a preset frequency to collect the environmental point cloud data corresponding to the surrounding environment. The preset frequency may be set in advance according to the actual application requirements; for example, acquisition may be in real time or once every 2 seconds, and the frequency may be dynamically adjusted according to the surrounding environment, which is not limited herein. By acquiring the environmental point cloud data corresponding to the surrounding environment, the robot can identify environmental objects in the surrounding environment, which guides the robot to move or avoid obstacles and improves the safety of its movement.
Step S120, grid processing is carried out on the environmental point cloud data corresponding to the surrounding environment, and grid point cloud data corresponding to each of the multiple grids is obtained.
The robot can perform grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of the multiple grids. Here, grid processing refers to dividing space into regular cells, each cell being referred to as a grid. The size of the grids can be determined according to the practical application requirements: when high precision is required, the same space can be divided into more grids, each covering a smaller area; when lower precision is acceptable, the space can be divided into fewer grids, each covering a larger area. Grid processing partitions the environmental point cloud data into multiple grids, and the grid point cloud data represents the environmental point cloud data partitioned into the corresponding grid.
As one way, the robot performing grid processing on the environmental point cloud data corresponding to the surrounding environment may include updating the environmental point cloud data to a point cloud map, and then dividing the point cloud map into grids to obtain grid point cloud data corresponding to each of a plurality of grids in the point cloud map.
As another way, the robot may set a grid map in advance, update the environmental point cloud data to the point cloud map, and then superimpose the point cloud map and the grid map to obtain grid point cloud data corresponding to each of the multiple grids.
As another way, the robot may further project the environmental point cloud data to the grid map to obtain grid point cloud data corresponding to each of the plurality of grids.
Optionally, the point cloud map may be a local point cloud map, and the local point cloud map may be a map centered on the robot and having a preset range, and may be an updated point cloud map in a world coordinate system. The point cloud map may be updated in real time, and in the moving process of the robot, after the environmental point cloud data corresponding to the surrounding environment is obtained, the environmental point cloud data may be converted into a world coordinate system, and updated into the local point cloud map, and the point cloud data outside the preset range is deleted.
As shown in fig. 4, when the robot 100 is at the a position, the corresponding local map is a point cloud map 220 having a set range (for example, a range of 10 meters×10 meters) around the robot 100, and after the robot 100 moves to the B position along the movement route, the corresponding local map is a local map 230 having a set range at the B position around the position where the robot 100 moves. Wherein, with the movement of the robot, the history data outside the set range is discarded, and the new environment data is continuously updated to the local map. When the robot enters a strange environment, the information of the surrounding environment can be known in real time through the local map.
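A minimal sketch of such a robot-centered local map update, assuming world-frame points stored as NumPy arrays and an assumed square window size; this only illustrates the sliding-window idea described above, not the patented implementation.

```python
import numpy as np

def update_local_map(local_points, new_points_world, robot_xy, half_extent=5.0):
    """Maintain a robot-centered local point cloud map: merge newly observed
    world-frame points and drop everything outside a square window of
    2 * half_extent meters around the robot (e.g. 10 m x 10 m as in fig. 4).
    The window size is an assumed parameter."""
    pts = np.vstack([local_points, new_points_world]) if len(local_points) else new_points_world
    keep = (np.abs(pts[:, 0] - robot_xy[0]) <= half_extent) & \
           (np.abs(pts[:, 1] - robot_xy[1]) <= half_extent)
    return pts[keep]
```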
Grid map refers to the division of a map into a series of grids, where each grid is given a possible value representing the probability that the grid is occupied. Alternatively, the grid map may be represented by a matrix (two-dimensional array).
As one way, the point cloud map may be rasterized by projecting the point cloud map onto a certain plane of the world coordinate system, for example, projecting the point cloud map onto an XOY plane of the world coordinate system, and then dividing the map on the XOY plane into grids of uniform size.
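The projection and grid division described above can be illustrated with a short sketch (an assumption-laden illustration, not the claimed method) that buckets world-frame points into uniformly sized grids keyed by their (row, column) index; the grid resolution and origin are assumed parameters. Each key plays the role of a grid index, and the per-grid arrays are the grid point cloud data used in later steps.

```python
import numpy as np
from collections import defaultdict

def rasterize_point_cloud(points_world, resolution=0.05, origin=(0.0, 0.0)):
    """Project points (N x 3, world frame) onto the XOY plane and bucket
    them into uniformly sized grids keyed by (row, col) index.
    `resolution` is the grid edge length in meters (an assumed value)."""
    grid_points = defaultdict(list)
    cols = np.floor((points_world[:, 0] - origin[0]) / resolution).astype(int)
    rows = np.floor((points_world[:, 1] - origin[1]) / resolution).astype(int)
    for (r, c), p in zip(zip(rows, cols), points_world):
        grid_points[(r, c)].append(p)
    return {k: np.asarray(v) for k, v in grid_points.items()}
```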
As another approach, the grid map may be any one of a 2D grid map, a 2.5D grid map, and a 3D grid map. The grid map may also be a grid obstacle map, in which each grid represents the state of the corresponding position of the target point cloud map, that is, each grid may be passable (free), occupied (occupied), or unknown (unknown).
As another way, the point cloud map may also be a point cloud map that is stored in advance in a local database or a cloud database by the robot, where the point cloud map may include historical point cloud data information, that is, point cloud data information of an environmental object existing when the point cloud map is determined, and the point cloud data information of the environmental object existing at present needs to be updated to obtain a local point cloud map.
In one embodiment, in order to ensure accuracy of the grid point cloud data and accuracy of the point cloud map, time synchronization needs to be performed on pose information of the robot and environmental point cloud data corresponding to an environment where the robot is located. When the robot detects pose information and environment point cloud data corresponding to the surrounding environment where the robot is located, the detected information at different moments contains the detected time stamp, and the robot can perform time synchronization by acquiring the first time stamp of the pose information and the second time stamp of the environment point cloud data.
Specifically, the robot may match the first timestamp with the second timestamp, and it may be understood that the first timestamp and the second timestamp having the same time or within a preset time difference are associated, and then pose information corresponding to the first timestamp and environmental point cloud data corresponding to the second timestamp matched with the first timestamp are associated and stored, so that time synchronization is performed.
Alternatively, a plurality of information sets may be preset, pose information corresponding to the first timestamp and environmental point cloud data corresponding to the second timestamp matched with the first timestamp are stored in the information sets, and the corresponding timestamp (at least one of the first timestamp and the second timestamp) is reserved. Different time stamps may correspond to different sets of information.
Optionally, the pose information corresponding to the first timestamp and the environmental point cloud data corresponding to the second timestamp matched with the first timestamp can be set with the same and unique identifier, and the data are ordered according to the time sequence corresponding to the timestamps, so that the efficiency of data processing is improved.
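A minimal sketch of the timestamp matching described above, assuming pose and point cloud streams given as (timestamp, data) pairs and an assumed tolerance; it only illustrates associating measurements whose timestamps are equal or within a preset time difference.

```python
def synchronize(poses, clouds, max_dt=0.02):
    """Associate each pose (t, pose) with the point cloud (t, cloud) whose
    timestamp is closest, keeping only pairs within max_dt seconds.
    max_dt is an assumed tolerance, not a value from the patent."""
    pairs = []
    clouds = sorted(clouds, key=lambda c: c[0])
    for t_pose, pose in sorted(poses, key=lambda p: p[0]):
        t_cloud, cloud = min(clouds, key=lambda c: abs(c[0] - t_pose))
        if abs(t_cloud - t_pose) <= max_dt:
            pairs.append({"timestamp": t_pose, "pose": pose, "cloud": cloud})
    return pairs
```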
In order to determine the environmental point cloud data in the world coordinate system, and since the environmental point cloud data is expressed in a coordinate system with the sensor as the origin (hereinafter referred to as the sensor coordinate system for convenience of description), the robot may first determine the environmental point cloud data in a coordinate system with the robot as the origin based on the relationship between the sensor and the robot; this relationship can be obtained through calibration. Then, the relationship between the coordinate system with the robot as the origin (hereinafter referred to as the reference coordinate system for convenience of description) and the world coordinate system is determined according to the pose information of the robot, and the point cloud data in the world coordinate system is determined from the environmental point cloud data in the reference coordinate system. It can be appreciated that the transformation between the reference coordinate system and the world coordinate system is a rigid transformation, which can be determined from the pose information of the robot; the relationship between the reference coordinate system and the sensor coordinate system depends on the position of the sensor on the robot, which is preset, so this relationship can be determined according to that preset position. In this way, the point cloud map can be updated according to the point cloud data in the world coordinate system to obtain the point cloud map of the robot, improving the timeliness of the point cloud map.
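A minimal sketch of the coordinate chain described above (sensor frame to robot/reference frame to world frame), assuming the calibrated sensor-to-robot transform and the pose-derived robot-to-world transform are available as 4 x 4 homogeneous matrices; the function name and representation are illustrative assumptions.

```python
import numpy as np

def transform_to_world(points_sensor, T_base_sensor, T_world_base):
    """Transform an N x 3 point cloud from the sensor frame to the world
    frame. T_base_sensor is the calibrated sensor-to-robot transform and
    T_world_base is the robot-to-world transform derived from the pose,
    both 4 x 4 homogeneous matrices."""
    pts_h = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    return (T_world_base @ T_base_sensor @ pts_h.T).T[:, :3]
```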
As another way, the robot may also update the point cloud map according to pose information of the robot and environmental point cloud data. Specifically, the robot can determine the current position information of the robot and the gesture information of the robot according to the gesture information of the robot, and determine the pose relationship between the obstacle in the surrounding environment of the robot and the robot according to the environment point cloud data corresponding to the surrounding environment of the robot. According to the pose relation, the robot can fuse the environmental point cloud data into the point cloud map, and then can divide grids according to the point cloud map or superimpose the point cloud map and the grid map to obtain grid point cloud data.
With continued reference to fig. 3, step S130 determines the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids according to the grid point cloud data corresponding to each of the plurality of grids.
As one way, when the grid processing is performed, the area size of each grid is preset, that is, each grid corresponds to a preset range, and the grid point cloud data corresponding to each of the multiple grids can be determined according to the preset range.
Optionally, the data sets may be preset, and according to the preset area size of each grid, the grid point cloud data in the preset range corresponding to the preset area size is determined, where the grid point cloud data corresponding to each grid is stored in a corresponding unique data set, and the corresponding position identifier is set for each data set. After determining the grid needing to determine the elevation, determining a data set corresponding to the position identification according to the position identification of the grid, and determining grid point cloud data corresponding to the grid from the data set.
As a way, the surface layer may be used to represent the ground on which the robot is located. The surface layer may include regions the robot can pass through, impassable regions where obstacles are located, or unknown regions. The surface layer may specifically be a layer in a map, or simply a spatial division of the surrounding environment of the robot into a surface layer and a suspended layer. The suspended layer may be used to represent obstacles or terrain in a "hollow" state, such as objects like a hollow table or chair, or terrain like a cave. Alternatively, the surface layer elevation may be the elevation of the surface layer represented on the grid map, and the suspended layer elevation may be the height of a "hollow" object or terrain above the ground. In the embodiment of the application, the environmental point cloud data can be divided into two types, namely surface layer point cloud data and suspended layer point cloud data, where the surface layer point cloud data can be used to determine the surface layer elevation, and the suspended layer point cloud data can be used to determine the suspended layer elevation. As shown in fig. 5, the hollow object may be, for example, the billiard table 310 in the figure; for a hollow object or hollow terrain, the height of the robot may be taken into account to determine whether the robot can pass.
Because a traditional grid map directly takes the highest point cloud in each grid as the elevation of that grid, it does not distinguish between the surface layer and the suspended layer; even hollow objects appear solid in a traditional grid map and cannot be represented in a hollow state, so the robot cannot pass through them. For example, for cave terrain, a conventional grid map cannot represent its hollow state: the cave can only be represented as solid terrain, or the corresponding grid marked as impassable.
Since the grid point cloud data can indicate the height of each point, the grid elevations corresponding to each of the plurality of grids may be determined by determining the heights corresponding to the grid point cloud data of each grid, where the grid elevations may include a surface layer elevation and a suspended layer elevation. It will be appreciated that if a grid is determined to have both a suspended layer elevation and a surface layer elevation, it can be determined that there is a hollow object or hollow terrain at the position corresponding to that grid, and the robot may use the surface layer elevation and the suspended layer elevation to determine whether it can pass through the current environment, for example by lowering its height or detouring while performing tasks such as moving and obstacle avoidance.
Optionally, when it is determined that the grid does not have a suspended layer object, that is, there is no suspended layer elevation, according to grid point cloud data corresponding to each of the multiple grids, the height of the suspended layer elevation may be set to 0 correspondingly, or identification information for indicating that the suspended layer elevation is empty may be added, so as to determine the elevation of the suspended layer.
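As a rough illustration of how the two elevations might be combined with the robot height to judge passability (the patent only states that the robot height is taken into account, so the specific rule and the safety clearance below are assumptions):

```python
def grid_is_passable(surface_elev, suspended_elev, robot_height, clearance=0.05):
    """Rough passability test for one grid: a grid with a suspended layer is
    passable if the gap between the suspended layer and the surface layer
    leaves room for the robot plus an assumed safety clearance.
    A suspended_elev of None (or 0) means no overhanging structure."""
    if suspended_elev is None or suspended_elev == 0:
        return True  # nothing overhead; traversability depends only on the surface
    return (suspended_elev - surface_elev) >= robot_height + clearance
```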
And step S140, generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspended layer elevation corresponding to each of the grids.
As one way, after determining the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids, the robot may generate a target map that displays the surrounding environment in which the robot is located. The target map may be a local map representing the surrounding environment of the robot, and the robot may perform tasks such as moving or obstacle avoidance based on it. The target map may be a 2D map with elevation information, or a 2.5D, 3D, or double-layer 2.5D elevation map. As shown in fig. 6, the target map includes a suspended layer 410 and a surface layer 420.
In the scheme of the application, the environmental point cloud data is processed according to the environmental point cloud data corresponding to the surrounding environment of the robot, so that grid point cloud data corresponding to each of a plurality of grids is obtained, the surface layer elevation and the suspension layer elevation corresponding to each of the plurality of grids can be determined based on the grid point cloud data corresponding to each of the plurality of grids, and finally the target map is generated according to the surface layer elevation and the suspension layer elevation corresponding to each of the plurality of grids. According to the scheme, the target map with the surface layer elevation and the suspension layer elevation can be generated, the robot can be guided to pass through the object or the terrain in the hollow state, the generation efficiency of the map is improved, the target map is more in line with the actual situation, the accuracy of the target map is effectively improved, the generated target map is smaller in data volume compared with the 3D map, and processing resources and storage resources can be saved.
In some embodiments, as shown in fig. 7, step S130 includes: step S210, determining a first point cloud height and a second point cloud height corresponding to each of the grids according to grid point cloud data corresponding to each of the grids, wherein the second point cloud height is larger than the first point cloud height.
As one way, the first point cloud height may be the minimum point cloud height in the grid point cloud data corresponding to the grid, and the second point cloud height may be the maximum point cloud height in that data, the second point cloud height being greater than the first point cloud height. It is understood that the robot may determine, from the grid point cloud data of each grid, the first point cloud height and the second point cloud height of that grid. Optionally, because the grid point cloud data corresponding to a grid can contain a very large amount of information, reducing it to the first and second point cloud heights facilitates subsequent processing and improves the efficiency of determining the surface layer elevation and the suspended layer elevation corresponding to each of the multiple grids.
As another way, in order to ensure that the robot can move safely according to the determined suspended layer elevation and surface layer elevation, the first point cloud height and the second point cloud height may also be set to other heights, for example according to the actual height of the robot, and are not particularly limited here. Optionally, before determining the first and second point cloud heights, the grid point cloud data may be filtered to reduce noise, so that the determined first and second point cloud heights are more accurate. Optionally, the first and second point cloud heights may also be determined according to an average height in the grid point cloud data.
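A minimal sketch of extracting the first and second point cloud heights for one grid, here taken as the filtered minimum and maximum z values; the quantile-based noise filter is an assumed choice, since the application does not prescribe a particular filtering method.

```python
import numpy as np

def point_cloud_height_bounds(grid_points, outlier_quantile=0.01):
    """Return (first_point_cloud_height, second_point_cloud_height) for one
    grid, taken as the minimum and maximum z after discarding extreme
    quantiles as a simple noise filter (the quantile is an assumed choice)."""
    z = np.sort(grid_points[:, 2])
    lo = int(len(z) * outlier_quantile)
    hi = max(lo + 1, len(z) - lo)
    z = z[lo:hi]
    return float(z.min()), float(z.max())
```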
Step S220, obtaining a preset height corresponding to the robot, and determining a height relation between the preset height and the first point cloud height and the second point cloud height.
As one approach, to determine whether there is a surface elevation and a flying elevation, the robot may combine a first point cloud height, a second point cloud height, and a preset height. The preset height may be a height threshold value determined in advance according to the height of the robot, and the preset height may specifically be the height of the robot, or may be a height corresponding to the robot, for example, a height in a preset range of the height of the robot, or may be a fixed or dynamic height determined according to actual application requirements. The preset height may be set according to actual conditions, and is not particularly limited herein.
The robot can compare the preset height with the first point cloud height and the second point cloud height respectively to obtain the height relationship among them. The relationship may be any one of the following: the preset height lies between the first point cloud height and the second point cloud height; the first point cloud height is greater than the preset height; or the second point cloud height is less than the preset height. The case where the preset height is equal to the first point cloud height or the second point cloud height is treated as the preset height lying between them.
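The three-way height relationship can be expressed as a small helper (an illustrative sketch; the string labels are assumptions):

```python
def height_relationship(h1, h2, h0):
    """Classify the relationship between the preset height h0 and the first
    (h1) and second (h2) point cloud heights, h1 <= h2. Equality with h1 or
    h2 is folded into the 'between' case, as described above."""
    if h1 > h0:
        return "first_above_preset"   # the whole point set hangs above the robot
    if h2 < h0:
        return "second_below_preset"  # the whole point set lies below the robot
    return "preset_between"           # h1 <= h0 <= h2
```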
Step S230, clustering is carried out on the grid point cloud data corresponding to each of the grids, so as to obtain at least one grid point cloud set corresponding to each of the grids.
The environmental point cloud data corresponding to the robot can change during movement. Specifically, if the robot is indoors, there will be obstacles around it. If an environmental object is an object or terrain with a hollow shape, such as an indoor dining table, office desk, billiard table, or an outdoor cave, the corresponding grid point cloud data may include two types: grid point cloud data corresponding to the suspended layer and grid point cloud data corresponding to the ground surface. If the environmental object is a slope, the corresponding grid point cloud data contains only grid point cloud data corresponding to the ground surface. As shown in fig. 8, an obstacle with a "hollow" morphology, such as the table 510, has two kinds of grid point cloud data, characterizing the surface layer 540 and the suspended layer 530; for the ramp 520, there is only grid point cloud data characterizing the surface layer 540. Optionally, a grid point cloud set may also contain multiple categories of point cloud data because of the robot's sensors: during movement, the sensor needs a certain response time to collect point cloud data, so data collection may lag and the grid point cloud data may contain errors; clustering the grid point cloud data then yields multiple categories of point cloud data.
As a way, in order to quickly and accurately determine the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids, the robot may perform clustering processing on the grid point cloud data corresponding to each of the plurality of grids to determine at least one grid point cloud set corresponding to each of the plurality of grids, so that the efficiency of determining the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids can be improved.
The clustering process groups data objects according to information found in the data describing the objects and their relationships. In the solution of the present application, the clustering process may be to group the grid point cloud data of each grid according to the height values of the grid point cloud data corresponding to each of the multiple grids in the world coordinate system. Specifically, different height ranges can be preset, and grouping is performed according to the height values of the grid point cloud data of each grid and the preset height ranges, so that one or more grid point cloud sets corresponding to the grids are obtained. The robot can also perform clustering processing on the grid point cloud data by utilizing a point cloud clustering algorithm according to the position relationship among the point clouds in the grid point cloud data.
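A minimal sketch of height-based clustering for one grid, splitting the sorted heights wherever the vertical gap exceeds an assumed threshold; the application also allows grouping by preset height ranges or by a point cloud clustering algorithm, so this is just one possible realization.

```python
import numpy as np

def cluster_by_height(grid_points, gap_threshold=0.3):
    """Split one grid's points into height clusters: sort by z and start a
    new cluster whenever the vertical gap between consecutive points exceeds
    gap_threshold meters (an assumed value). Returns a list of arrays."""
    order = np.argsort(grid_points[:, 2])
    pts = grid_points[order]
    clusters, start = [], 0
    for i in range(1, len(pts)):
        if pts[i, 2] - pts[i - 1, 2] > gap_threshold:
            clusters.append(pts[start:i])
            start = i
    clusters.append(pts[start:])
    return clusters
```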
Step S240, determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to at least one grid point cloud set and the height relation corresponding to each of the grids.
As a mode, the relation between the first point cloud height and the second point cloud height and the preset height can reflect the height relation between the environment object and the robot, the robot can estimate whether each grid has a region which the robot can pass through according to the height relation, and the surface layer height and the suspension layer height corresponding to each grid are determined based on at least one grid point cloud set and the height relation.
For example, if the relationship between the maximum height and the minimum height of the environmental object and the height of the robot is that the height of the robot is between the maximum height and the minimum height, the surface layer height and the suspended layer height corresponding to each of the plurality of grids may be determined, the robot may determine whether the current grid is a passable grid through the surface layer height and the suspended layer height, and the robot may adjust the movement strategy of the robot according to the determined passable grid height, for example, adjust the posture of the robot, and reduce the height of the robot, so that the robot may pass smoothly.
In this embodiment, the first point cloud height and the second point cloud height are determined according to the grid point cloud data corresponding to each of the multiple grids, so that the surface layer elevation and the suspended layer elevation corresponding to each of the multiple grids can be determined according to the height relationship between these heights and the preset height; clustering the grid point cloud data corresponding to each of the multiple grids makes the determined surface layer elevation and suspended layer elevation more accurate.
In some embodiments, as shown in fig. 9, step S240, determining the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids according to the at least one grid point cloud set and the height relationship corresponding to each of the plurality of grids includes:
in step S310, if the height relationship is that the first point cloud height is greater than the preset height, determining the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids.
As a way, when the height relationship is that the first point cloud height is greater than the preset height, it represents that there is an object or a suspended object with a "hollow" structure in the grid point cloud set corresponding to the current grid, such as a table, a chair, a suspended lamp, etc., through which the robot can smoothly pass.
Alternatively, the moving route of the robot may be determined first, and a plurality of target grids corresponding to the moving route may be determined according to the determined moving route, or the plurality of grids may be traversed, and the currently traversed grid may be used as the target grid. The robot can determine at least one grid point cloud set based on point cloud data in a plurality of target grids, so that the surface layer elevation and the suspended layer elevation corresponding to the target grids can be determined.
Alternatively, when the height relationship is that the first point cloud height is greater than the preset height, the first point cloud height may be determined as the surface layer elevation or the suspended layer elevation. For example, when the first point cloud height is h1 and the preset height is h0, if h1 > h0, then HF = h1, where HF is the suspended layer elevation.
Step S320, if the height relation is that the second point cloud height is smaller than the preset height, determining the surface layer height corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids.
As one way, when the height relationship is that the second point cloud height is smaller than the preset height, this indicates that the grid point cloud set corresponding to the current grid contains an object or terrain on the ground, such as a wall, a television cabinet, or a washing machine. In this case it can be inferred that the corresponding grid point cloud data contains only surface layer point cloud data and no suspended layer point cloud data, i.e., the suspended layer elevation is 0 or empty.
Optionally, when the height relationship is that the second point cloud height is smaller than the preset height, the second point cloud height may be determined as the surface layer elevation. For example, when the second point cloud height is h2 and the preset height is h0, if h2 < h0, then HG = h2, where HG is the surface layer elevation.
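The two one-sided cases above can be summarised in a short sketch. The following Python code is only an illustration of the stated rules; the variable names h1, h2 and h0 follow the examples above, and everything else is an assumption of this sketch rather than part of the application.

    def classify_single_sided(h1, h2, h0):
        """Return (surface_elevation, suspended_elevation) for the two one-sided cases.

        h1 : first (minimum) point cloud height of the grid point cloud set
        h2 : second (maximum) point cloud height of the grid point cloud set
        h0 : preset height related to the robot (assumed parameter)
        """
        if h1 > h0:
            # Whole set lies above the preset height: treat it as a suspended layer (HF = h1).
            return None, h1
        if h2 < h0:
            # Whole set lies below the preset height: treat it as the ground surface layer (HG = h2).
            return h2, None
        # Mixed case: handled separately, see step S330.
        return None, None

    print(classify_single_sided(1.2, 1.8, 0.6))  # (None, 1.2) -> suspended layer at 1.2 m
    print(classify_single_sided(0.1, 0.3, 0.6))  # (0.3, None) -> surface layer at 0.3 m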
Step S330, if the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining a target point cloud set from at least one grid point cloud set, and determining the surface layer height and the suspension layer height corresponding to each of the grids according to the target point cloud set.
As one way, when the height relation is that the preset height is located between the first point cloud height and the second point cloud height, it may be determined that grid point cloud data corresponding to the multiple grids simultaneously includes the earth surface layer point cloud and the suspended layer point cloud.
Alternatively, the robot may determine the obstacle height as the surface layer elevation and the second point cloud height as the suspended layer elevation. The obstacle height refers to a height through which the robot cannot pass and may be, for example, the robot height or the preset height.
Optionally, the robot may further determine the maximum point cloud height of the grid point cloud set that contains the first point cloud height and the minimum point cloud height of the grid point cloud set that contains the second point cloud height, and then determine the surface layer elevation and the suspended layer elevation according to the relationships between these two heights and the obstacle height. In some of these embodiments, the robot may instead determine the maximum point cloud height of the lower grid point cloud set close to the preset height and the minimum point cloud height of the upper grid point cloud set close to the preset height, wherein the lower grid point cloud set and the upper grid point cloud set are the sets respectively adjacent to the obstacle height from below and from above.
Specifically, for the surface layer elevation, if the maximum point cloud height is greater than the obstacle height, the obstacle height may be used as the surface layer elevation; if the maximum point cloud height is smaller than or equal to the obstacle height, the maximum point cloud height is used as the surface layer elevation. For example, if h_low_max > h0, then h_group = h0; if h_low_max <= h0, then h_group = h_low_max, where h_low_max is the maximum point cloud height, h_group is the surface layer elevation, and h0 is the obstacle height.
Regarding the suspended layer elevation, if the minimum point cloud height is greater than the obstacle height, the minimum point cloud height is used as the suspended layer elevation; if the minimum point cloud height is smaller than or equal to the obstacle height, the obstacle height is used as the suspended layer elevation. For example, if h_up_min > h0, then h_float = h_up_min; if h_up_min <= h0, then h_float = h0, where h_up_min is the minimum point cloud height and h_float is the suspended layer elevation.
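A compact illustration of these two rules, keeping the identifiers used in the examples (h_low_max, h_up_min, h0, h_group, h_float), might read as follows; it is a sketch of the stated logic only, not a prescribed implementation.

    def mixed_grid_elevations(h_low_max, h_up_min, h0):
        """Surface and suspended elevations for a grid that contains both layers.

        h_low_max : maximum point cloud height of the lower (ground side) set
        h_up_min  : minimum point cloud height of the upper (suspended side) set
        h0        : obstacle height, a height through which the robot cannot pass
        """
        # Surface layer elevation is capped at the obstacle height.
        h_group = h0 if h_low_max > h0 else h_low_max
        # Suspended layer elevation is never reported below the obstacle height.
        h_float = h_up_min if h_up_min > h0 else h0
        return h_group, h_float

    print(mixed_grid_elevations(0.15, 0.90, 0.45))  # (0.15, 0.9)
    print(mixed_grid_elevations(0.60, 0.30, 0.45))  # (0.45, 0.45)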
In this embodiment, the surface layer elevation and the suspended layer elevation are determined in different manners for different height relationships, which provides a targeted scheme for determining them, effectively improves the accuracy of the determined surface layer elevation and suspended layer elevation, and further improves the accuracy of the generated target map.
In some embodiments, as shown in fig. 10, step S310 includes:
in step S410, if the height relationship is that the first point cloud height is greater than the preset height, determining a neighboring grid of the target grid and a neighboring point cloud set corresponding to the neighboring grid from the multiple grids.
As one way, when the height relationship is that the first point cloud height is greater than the preset height, the data type corresponding to the grid point cloud data cannot yet be determined. The overall-high point cloud may correspond to a surface layer elevation (for example, raised terrain) or to a suspended layer elevation; in the latter case the point cloud representing the surface layer is missing, that is, the ground surface is not covered by any points, for example because the sensor of the robot did not scan it or was affected by the environment. It is therefore currently impossible to determine whether the overall-high grid point cloud data belongs to the suspended layer or to the ground surface layer.
In particular, the robot may make the determination in combination with the neighborhood point cloud set of a neighborhood grid adjacent to the target grid. A neighborhood grid refers to a grid adjacent to the target grid in the horizontal, vertical or diagonal direction; there may be one or more neighborhood grids, and the distances between different neighborhood grids and the target grid may be the same or different.
Step S420, comparing the point cloud height of the neighborhood point cloud set with a first height threshold.
As a way, in order to determine the type of the point cloud data of the target grid, the point cloud height corresponding to the neighborhood point cloud set of the neighborhood grid may be compared with the first height threshold, so as to determine whether the target grid has a suspended layer, or whether the first point cloud height exceeds the preset height merely because the overall environment is higher, for example at an obstacle such as a step.
In other embodiments, when the robot determines at least two neighbor grids corresponding to the target grid among the multiple grids, the robot may record at least two comparison results obtained after comparing the point cloud heights of the neighbor point clouds of the at least two neighbor grids with the first height threshold.
The first height threshold may be the same as the preset height, or may be a threshold of another height. For example, the first height threshold may be a height lower than that of the robot, and the first height threshold may be set according to actual needs, which is not particularly limited herein.
Step S430, when the point cloud height of the neighborhood point cloud set is smaller than or equal to the first height threshold, determining a suspension layer height corresponding to the target grid according to the target grid point cloud set, and determining a surface layer height corresponding to the target grid according to the neighborhood point cloud set.
As a way, if the neighborhood point cloud set corresponding to a neighborhood grid adjacent to the target grid contains a point cloud height smaller than or equal to the first height threshold, this indicates that there is a lower object or terrain in the surrounding environment of the target grid; the grid point cloud data of the target grid therefore corresponds to a suspended layer, and the corresponding ground surface point cloud data may simply not have been acquired because of environmental interference or other reasons. The robot may thus determine that both a ground surface layer and a suspended layer may be present in the target grid. As shown in fig. 11, the grid point cloud data in the target grid 610 only includes point cloud data representing the height of the suspended layer, while the first neighborhood grid 620 and the second neighborhood grid 630 each have corresponding neighborhood point cloud data, which together form the neighborhood point cloud set. The point cloud height 611 is the maximum height in the grid point cloud set of the target grid 610, and the point cloud height 612 is the minimum height; the first point cloud height of the grid point cloud set of the target grid 610 can therefore be determined to be the point cloud height 612, and the robot may determine the height relationship between the point cloud height of the neighborhood point cloud set and the first height threshold. Alternatively, the minimum point cloud height in the neighborhood point cloud data corresponding to each of the first neighborhood grid 620 and the second neighborhood grid 630 may be used as the point cloud height of that neighborhood point cloud data; that is, the point cloud height of the neighborhood point cloud set is taken, for each neighborhood grid, as the minimum height of its neighborhood point cloud data.
Optionally, in order to improve the accuracy of the determined surface layer elevation of the target grid, the suspended layer elevation and the surface layer elevation of the target grid may be determined when the point cloud heights of the neighborhood point cloud sets corresponding to all the neighborhood grids of the target grid are smaller than or equal to the first height threshold.
Optionally, a first preset number may also be set for the neighborhood point cloud sets whose point cloud heights are smaller than or equal to the first height threshold: when the number of neighborhood point cloud sets of the target grid whose point cloud heights are smaller than or equal to the first height threshold is greater than or equal to the first preset number, the surface layer elevation and the suspended layer elevation corresponding to the target grid are determined. The first preset number may be set according to actual needs and is not specifically limited herein. If the number of neighborhood grids whose point cloud height is smaller than or equal to the first height threshold is smaller than the first preset number, it is determined that the target grid has only the surface layer elevation, or that the suspended layer elevation of the target grid is 0.
As one way, when the point cloud height of the neighborhood point cloud set is smaller than or equal to the first height threshold, the surface layer elevation of the target grid may be determined according to the first height threshold or according to the neighborhood point cloud set of the neighborhood grid, for example by determining the point cloud height of the neighborhood point cloud set as the surface layer elevation; and the suspended layer elevation corresponding to the target grid is determined according to the target grid point cloud set, for example by determining the minimum height in the target grid point cloud set as the suspended layer elevation of the target grid.
Step S440, when the point cloud height of the neighborhood point cloud set is larger than the first height threshold, determining the surface layer height corresponding to the target grid according to the target grid point cloud set.
As one way, when the point cloud height of the neighborhood point cloud set is greater than the first height threshold, it can be determined that only a ground surface layer exists in the target grid, that is, the whole surrounding environment corresponding to the target grid is relatively high, as in a scene with steps. Even though the first point cloud height in the target grid is greater than the preset height, the grid is still not a suspended layer, and no point cloud data corresponding to a suspended layer exists in the target grid. The robot may determine the surface layer elevation of the target grid from the target grid point cloud set; for example, the minimum point cloud height in the target grid point cloud set may be determined as the surface layer elevation corresponding to the target grid.
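Steps S410 to S440 can be illustrated with the following Python sketch. The use of the minimum neighbor height as the point cloud height of a neighborhood set, the vote threshold min_votes standing in for the first preset number, and all function and variable names are assumptions of this example; how the neighborhood grids are chosen is left to the caller.

    def resolve_all_high_grid(target_points, neighbor_points, h_thresh_1, min_votes=1):
        """Decide elevations for a grid whose points all lie above the preset height.

        target_points   : list of point heights in the target grid
        neighbor_points : dict {neighbor_index: list of point heights in that neighbor grid}
        h_thresh_1      : first height threshold (for example, lower than the robot height)
        min_votes       : first preset number of low neighbors required (assumed parameter)
        Returns (surface_elevation, suspended_elevation); suspended_elevation is None if absent.
        """
        # Point cloud height of each neighborhood set: here the minimum height of that grid.
        low_neighbors = [min(pts) for pts in neighbor_points.values()
                         if pts and min(pts) <= h_thresh_1]

        if len(low_neighbors) >= min_votes:
            # Low terrain nearby: the high points of the target grid form a suspended layer.
            suspended = min(target_points)      # lowest point of the hanging structure
            surface = min(low_neighbors)        # borrow the surface elevation from the neighbors
            return surface, suspended
        # No low neighbors: the grid is simply elevated ground (e.g. a step), no suspended layer.
        return min(target_points), None

    # Example: a table top at about 0.7 m with floor-level points in one neighbor grid.
    print(resolve_all_high_grid([0.72, 0.75], {0: [0.02, 0.03], 1: [0.70, 0.74]}, h_thresh_1=0.3))
    # -> (0.02, 0.72)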
In this embodiment, by determining the neighborhood grids corresponding to the target grid and the neighborhood point cloud sets corresponding to those neighborhood grids, and comparing the point cloud heights of the neighborhood point cloud sets with the first height threshold, the neighborhood grids are used to assist in judging whether the target grid has a suspended layer. The overall scene of the environment is thus taken into account, which effectively improves the accuracy of the elevations determined for the target grid and further improves the accuracy of the generated target map.
In some embodiments, as shown in fig. 12, step S320 includes:
step S510, if the height relation is that the second point cloud height is smaller than the preset height, determining a neighborhood grid of each grid and a neighborhood point cloud set corresponding to the neighborhood grid in the multiple grids.
As a way, when the height relationship is that the second point cloud height is smaller than the preset height, the grid point cloud data in the target grid is low as a whole and only contains point cloud data representing the surface layer elevation; that is, only the surface layer elevation of the corresponding grid can currently be confirmed, and it cannot yet be confirmed whether the target grid has a suspended layer elevation. To determine whether the target grid has a suspended layer elevation, the robot may make the judgment in combination with the neighborhood point cloud set of a neighborhood grid adjacent to the target grid. A neighborhood grid refers to a grid adjacent to the target grid in the horizontal, vertical or diagonal direction; there may be one or more neighborhood grids, and the distances between different neighborhood grids and the target grid may be the same or different.
Step S520, comparing the point cloud height of the neighborhood point cloud set with a second height threshold.
As one approach, in order to determine the suspended layer elevation and the surface layer elevation of the target grid, the point cloud heights of the neighborhood point cloud sets of the neighborhood grids adjacent to the target grid may be compared with the second height threshold, so that the determined suspended layer elevation and surface layer elevation of the target grid are more accurate.
Alternatively, the second height threshold may be the same as the preset height, or may be another height value. Optionally, the second height threshold may be a height higher than the robot height, and it may be set according to actual needs, which is not specifically limited herein; the second height threshold is different from, and greater than, the first height threshold.
In other embodiments, when the robot determines at least two neighbor grids corresponding to the target grid among the multiple grids, the robot may record at least two comparison results obtained after comparing the point cloud heights of the neighbor point clouds of the at least two neighbor grids with the second height threshold.
In step S530, when the point cloud height of the neighboring point cloud set is greater than or equal to the second height threshold, determining a suspension layer height corresponding to the target grid according to the neighboring point cloud set, and determining a surface layer height corresponding to the target grid according to the grid point cloud set corresponding to the target grid.
As a way, if the neighborhood point cloud set corresponding to a neighborhood grid adjacent to the target grid contains a point cloud height greater than or equal to the second height threshold, this indicates that there are higher environmental objects all around the target grid; the target grid may simply have failed to acquire the point cloud data corresponding to the suspended layer because of environmental interference or other factors, and the robot may therefore determine that both a ground surface layer and a suspended layer exist in the target grid. As shown in fig. 13, the grid point cloud data in the target grid 610 only includes point cloud data representing the height of the ground surface layer, while the third neighborhood grid 640 and the fourth neighborhood grid 650 each have corresponding neighborhood point cloud data, which together form the neighborhood point cloud set. The point cloud height 614 is the minimum height in the grid point cloud set of the target grid 610 and the point cloud height 613 is the maximum height, so the second point cloud height of the grid point cloud set of the target grid 610 can be determined to be the point cloud height 613. The robot may then determine whether any point cloud height in the neighborhood point cloud set formed by the neighborhood point cloud data of the third neighborhood grid 640 and the fourth neighborhood grid 650 is greater than or equal to the second height threshold. Alternatively, the maximum point cloud height in the neighborhood point cloud data of each of the third neighborhood grid 640 and the fourth neighborhood grid 650 may be used as the point cloud height of that neighborhood point cloud data.
Optionally, in order to improve the accuracy of the determined suspended layer elevation and surface layer elevation of the target grid, the suspended layer elevation and the surface layer elevation of the target grid may be determined when the point cloud heights of the neighborhood point cloud sets corresponding to all the neighborhood grids of the target grid are greater than or equal to the second height threshold. Optionally, a second preset number may be set for the neighborhood point cloud sets whose point cloud heights are greater than or equal to the second height threshold: when the number of neighborhood point cloud sets of the target grid whose point cloud heights are greater than or equal to the second height threshold is greater than or equal to the second preset number, it is determined that the target grid includes both the surface layer elevation and the suspended layer elevation. The second preset number may be set according to actual needs and is not specifically limited herein; the first preset number and the second preset number may be equal or unequal.
Optionally, if the number of neighborhood grids whose point cloud height is greater than or equal to the second height threshold is smaller than the second preset number, it is determined that the target grid has only the surface layer elevation, or that the suspended layer elevation of the target grid is 0.
As one way, when the point cloud height of the neighborhood point cloud set is greater than or equal to the second height threshold, the suspended layer elevation of the target grid may be determined according to the neighborhood point cloud set, for example by determining the second height threshold or the point cloud height of the neighborhood point cloud set as the suspended layer elevation of the target grid; and the surface layer elevation is determined according to the grid point cloud set corresponding to the target grid, for example by determining the second point cloud height in the target grid point cloud set as the surface layer elevation of the target grid.
Step S540, when the point cloud height of the neighborhood point cloud set is smaller than the second height threshold, determining the surface layer height corresponding to the target grid according to the grid point cloud set corresponding to the target grid.
As one mode, when the point cloud height of the neighborhood point cloud set is smaller than the second height threshold, it can be determined that the surroundings of the target grid all belong to the ground surface layer and the overall environment is low, so no point cloud data corresponding to a suspended layer exists in the target grid. The robot may determine the surface layer elevation of the target grid from the target grid point cloud set; for example, the maximum point cloud height in the target grid point cloud set may be determined as the surface layer elevation.
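Steps S510 to S540 mirror the previous case and can be illustrated with the following Python sketch. The use of the maximum neighbor height as the point cloud height of a neighborhood set, the vote threshold min_votes standing in for the second preset number, and all names are assumptions of this example only.

    def resolve_all_low_grid(target_points, neighbor_points, h_thresh_2, min_votes=1):
        """Decide elevations for a grid whose points all lie below the preset height.

        target_points   : list of point heights in the target grid
        neighbor_points : dict {neighbor_index: list of point heights in that neighbor grid}
        h_thresh_2      : second height threshold (for example, higher than the robot height)
        min_votes       : second preset number of high neighbors required (assumed parameter)
        Returns (surface_elevation, suspended_elevation); suspended_elevation is None if absent.
        """
        # Point cloud height of each neighborhood set: here the maximum height of that grid.
        high_neighbors = [max(pts) for pts in neighbor_points.values()
                          if pts and max(pts) >= h_thresh_2]

        surface = max(target_points)            # second point cloud height of the target grid
        if len(high_neighbors) >= min_votes:
            # Tall structures all around: assume a suspended layer was simply not observed here.
            suspended = min(high_neighbors)     # borrow the suspended elevation from the neighbors
            return surface, suspended
        # Low environment everywhere: the grid only has a ground surface layer.
        return surface, None

    # Example: floor points in the target grid, an overhead structure observed in one neighbor.
    print(resolve_all_low_grid([0.01, 0.04], {0: [1.95, 2.0], 1: [0.02]}, h_thresh_2=1.2))
    # -> (0.04, 2.0)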
In this embodiment, by determining the neighborhood grids corresponding to the target grid and the neighborhood point cloud sets corresponding to those neighborhood grids, and comparing the point cloud heights of the neighborhood point cloud sets with the second height threshold, the neighborhood grids are used to assist in judging whether the target grid has a suspended layer. The overall scene of the environment is thus taken into account, map errors caused by failing to acquire the suspended-layer point cloud due to data acquisition errors or environmental interference are avoided, the accuracy of the elevations determined for the target grid is effectively improved, and the accuracy of the generated target map is further improved.
In some embodiments, as shown in fig. 14, step S330 includes:
in step S610, if the height relationship is that the preset height is between the first point cloud height and the second point cloud height, the number of the grid point cloud sets is determined.
In one mode, when the preset height is located between the first point cloud height and the second point cloud height, it can be determined that the target grid contains both point cloud data corresponding to the ground surface layer and point cloud data corresponding to the suspended layer. In order to determine the surface layer elevation and the suspended layer elevation, the robot may perform cluster analysis on the grid point cloud data in the target grid, determine the number of point cloud sets of different types in the grid point cloud set corresponding to the target grid, and then determine the surface layer elevation and the suspended layer elevation of the target grid according to that number.
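One simple way to obtain the number of sets mentioned here is to cluster the point heights of a grid along the vertical axis. The sketch below is only an illustration; the gap threshold is an assumed parameter, and the application does not prescribe this particular clustering method.

    def cluster_heights(heights, gap=0.25):
        """Group the point heights of one grid into sets separated by vertical gaps.

        heights : list of point heights (metres) in the grid
        gap     : minimum vertical spacing that starts a new set (assumed value)
        """
        clusters = []
        for h in sorted(heights):
            if clusters and h - clusters[-1][-1] <= gap:
                clusters[-1].append(h)      # continue the current set
            else:
                clusters.append([h])        # start a new set
        return clusters

    sets = cluster_heights([0.02, 0.05, 0.74, 0.76, 0.78])
    print(len(sets), sets)  # 2 [[0.02, 0.05], [0.74, 0.76, 0.78]]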
In step S620, when the number of sets is one, the grid point cloud set is determined as the target point cloud set, and the surface layer elevation and the suspension layer elevation corresponding to each of the plurality of grids are determined according to the target point cloud set.
As one way, when the number of grid point cloud sets corresponding to a grid is determined to be one, the robot may determine that grid point cloud set as the target point cloud set and determine the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids according to the target point cloud set. Specifically, since the grid has only one corresponding grid point cloud set and the preset height is located between the first point cloud height and the second point cloud height, the robot can determine that the grid includes both a ground surface layer and a suspended layer. The robot may determine the obstacle height as the surface layer elevation corresponding to the ground surface layer, and determine the second point cloud height in the target point cloud set as the suspended layer elevation corresponding to the suspended layer. The obstacle height refers to a height through which the robot cannot pass; it may be, for example, the robot height or the preset height, or may be determined according to the movement capability of the robot.
In step S630, when the number of sets is two or more, a target point cloud set is obtained by screening from at least two grid point cloud sets according to a preset height, and the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids are determined according to the target point cloud set.
As a way, when the number of sets is determined to be two or more, the robot may screen at least one target point cloud set from the at least two grid point cloud sets according to the preset height. For example, the robot may determine the upper grid point cloud set and the lower grid point cloud set closest to the preset height as the target point cloud sets, or may determine the single grid point cloud set whose maximum height and minimum height straddle the preset height as the target point cloud set, and then determine the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids based on the point cloud heights in the target point cloud set.
Optionally, if the number of grid point cloud sets of the target grid is two, the average point cloud height of each of the two target point cloud sets is determined. The point cloud set whose average point cloud height is greater than or equal to the preset height is determined as the suspended layer point cloud set, and the minimum point cloud height in the suspended layer point cloud set is determined as the suspended layer elevation; the point cloud set whose average point cloud height is smaller than the preset height is determined as the surface layer point cloud set, and the maximum point cloud height in the surface layer point cloud set is determined as the surface layer elevation.
Optionally, if the number of grid point cloud sets of the target grid is more than two, the upper grid point cloud set and the lower grid point cloud set closest to the preset height are taken as the target point cloud sets, the corresponding suspended layer point cloud set and surface layer point cloud set in the target point cloud sets are determined, the suspended layer elevation is determined based on the suspended layer point cloud set, and the surface layer elevation is determined based on the surface layer point cloud set, for example as sketched below.
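As an illustration of this selection, the following sketch screens the height clusters of one grid by their average height relative to the preset height h0 and derives the two elevations; the function name and parameters are assumptions of this example, not a prescribed implementation.

    def elevations_from_clusters(clusters, h0):
        """Pick surface and suspended elevations from the height clusters of one grid.

        clusters : list of height clusters (each a list of heights)
        h0       : preset height, assumed to lie between the lowest and highest point
        """
        below = [c for c in clusters if sum(c) / len(c) < h0]    # ground-side sets
        above = [c for c in clusters if sum(c) / len(c) >= h0]   # suspended-side sets

        # Target sets: the ground-side and suspended-side sets closest to h0.
        ground_set = max(below, key=lambda c: max(c)) if below else None
        float_set = min(above, key=lambda c: min(c)) if above else None

        surface = max(ground_set) if ground_set else None        # top of the surface set
        suspended = min(float_set) if float_set else None        # bottom of the suspended set
        return surface, suspended

    print(elevations_from_clusters([[0.02, 0.05], [0.74, 0.78], [2.3, 2.4]], h0=0.5))
    # -> (0.05, 0.74)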
In other embodiments, cluster analysis may be performed on the grid point cloud set corresponding to each grid and the number of sets determined. If the number of sets is one, the grid point cloud set is determined as the target point cloud set, and the relationships between the minimum point cloud height and the maximum point cloud height of the target point cloud set and the preset height are determined: if the minimum point cloud height is greater than or equal to the preset height, the surface layer elevation and the suspended layer elevation are determined according to the method shown in fig. 10; if the maximum point cloud height is smaller than or equal to the preset height, they are determined according to the method shown in fig. 12; and if the preset height is located between the minimum point cloud height and the maximum point cloud height, the surface layer elevation is determined according to the minimum point cloud height and the suspended layer elevation is determined according to the maximum point cloud height.
If the number of sets is two or more, the target point cloud sets are obtained by screening from the at least two grid point cloud sets according to the preset height, and the relationships between the minimum point cloud height and the maximum point cloud height corresponding to the target point cloud sets and the preset height are then determined respectively: if the minimum point cloud height of the target point cloud set whose average point cloud height is greater than or equal to the preset height is itself greater than or equal to the preset height, the surface layer elevation and the suspended layer elevation are determined according to the method shown in fig. 10; if the maximum point cloud height of the target point cloud set whose average point cloud height is smaller than the preset height is itself smaller than or equal to the preset height, they are determined according to the method shown in fig. 12; and if the preset height is located between the minimum point cloud height and the maximum point cloud height, the surface layer elevation is determined according to the minimum point cloud height and the suspended layer elevation is determined according to the maximum point cloud height.
In other embodiments, the robot may further determine the surface layer elevation and the suspended layer elevation corresponding to each of the multiple grids according to the target point cloud set in the manner of step S330.
In this embodiment, the target point cloud set is determined from the grid point cloud sets in different ways according to the number of sets, so that the grid point cloud data in the grid is screened and the screened target point cloud set is used to determine the surface layer elevation and the suspended layer elevation corresponding to the grid. This effectively improves the accuracy of the generated target map, saves the processing resources consumed in the map generation process, and reduces the map generation cost.
In some embodiments, step S140 includes: establishing a surface layer map according to the surface layer elevation corresponding to each of the grids; establishing a suspended layer map according to the suspended layer elevation corresponding to each of the grids; and carrying out fusion processing on the surface layer map and the suspension layer map to obtain a target map corresponding to the surrounding environment.
As one way, the corresponding surface layer map and suspended layer map may be established according to the surface layer elevations and suspended layer elevations corresponding to the multiple grids, respectively, and the surface layer map and the suspended layer map are then superimposed or fused to obtain the target map.
In another mode, the surface layer elevations and suspended layer elevations corresponding to the grids may be superimposed or fused into a base map according to the determined elevations to obtain the target map. Alternatively, the plurality of grids carrying the surface layer elevation and the suspended layer elevation may be combined directly according to the pose information of each grid in the world coordinate system, so as to obtain a grid map with elevation information, and this grid map is the target map.
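A minimal sketch of the fusion step, assuming each layer map is stored as a dictionary keyed by grid index, could look as follows; the data layout is an assumption of this example, not the representation required by the application.

    def fuse_layers(surface_map, suspended_map):
        """Fuse per-grid surface and suspended elevations into one target map.

        surface_map   : dict {grid_index: surface elevation or None}
        suspended_map : dict {grid_index: suspended elevation or None}
        A grid that ends up with both values describes a hollow, potentially passable
        space between the two layers.
        """
        target_map = {}
        for idx in set(surface_map) | set(suspended_map):
            target_map[idx] = (surface_map.get(idx), suspended_map.get(idx))
        return target_map

    print(fuse_layers({(0, 0): 0.0, (0, 1): 0.05}, {(0, 1): 0.72}))
    # {(0, 0): (0.0, None), (0, 1): (0.05, 0.72)}  (key order may vary)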
Fig. 15 shows a target map of a staircase determined according to the method of the present application. In fig. 15, the hollow space 710 under the stairway is determined as a passable space having both a surface layer elevation and a suspended layer elevation; the passable space may optionally be displayed as "empty", indicating that the hollow space 710 is a portion through which the robot can pass.
Fig. 16 is a flowchart of a map generation method according to another embodiment of the present application, which may be performed by an electronic device with processing capability, for example, a server, a cloud server, an in-vehicle terminal, or the like, which is not particularly limited herein. As shown in fig. 16, the method specifically includes the steps of:
step S710, acquiring environment point cloud data corresponding to the surrounding environment where the robot is located.
Step S720, grid processing is carried out on the environmental point cloud data corresponding to the surrounding environment, and grid point cloud data corresponding to each of the multiple grids is obtained.
Step S730, determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids.
Step S740, generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspended layer elevation corresponding to each of the grids.
Step S750, a moving path of the robot is acquired.
As one way, the task to be executed by the robot may be acquired first, and the movement path along which the robot executes the task may then be determined based on that task. For example, when the robot performs a room-cleaning task, its movement path is determined according to the size of the room. The tasks executed by the robot may also include moving, following, navigation and the like, and the robot moves according to the movement path.
Step S760, determining a movement strategy corresponding to the robot according to the movement path and the target map, wherein the movement strategy includes adjusting the movement posture of the robot.
As one way, after the movement path of the robot is determined, whether a target surface layer elevation and a target suspended layer elevation exist on the movement path is determined in the target map. In other words, it is determined in the target map whether the movement path contains an object with a hollow structure, and whether that structure is one the robot cannot pass through in its current posture (that is, the robot would collide with the corresponding object and could not execute the task normally). The robot can thus determine a movement strategy, adjust its movement posture according to that strategy, and move according to the adjusted strategy to execute the corresponding task.
Step S770, controlling the robot to move according to the movement strategy.
As one way, after determining the target surface layer elevation and the target suspended layer elevation, the robot may determine a movement strategy according to them. The movement strategy may include, but is not limited to, adjusting at least one of the movement posture, the movement speed and the movement path of the robot. For example, when the height of the target suspended layer is smaller than the normal height of the robot, the posture of the robot may be adjusted to a creeping posture to lower the robot height, so that the robot moves in the creeping posture through the space between the target suspended layer and the target ground surface layer and can still move normally along its movement path.
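The posture adjustment described above can be illustrated with a small decision sketch; the strategy names, the stand_height and crawl_height parameters and the clearance rule are assumptions of this example rather than the exact strategy of the application.

    def movement_strategy(surface, suspended, stand_height, crawl_height):
        """Choose a movement strategy for one grid of the target map.

        surface, suspended : elevations of the two layers (suspended may be None)
        stand_height       : normal height of the robot (assumed parameter)
        crawl_height       : height of the robot in a creeping posture (assumed parameter)
        """
        if suspended is None:
            return "walk"                   # nothing overhead, keep the normal posture
        clearance = suspended - surface     # free space between the two layers
        if clearance >= stand_height:
            return "walk"                   # the hollow space is tall enough to walk through
        if clearance >= crawl_height:
            return "crawl"                  # lower the body and pass in a creeping posture
        return "detour"                     # not passable, re-plan around this grid

    print(movement_strategy(0.05, 0.55, stand_height=0.6, crawl_height=0.35))  # crawl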
In this embodiment, the movement strategy of the robot, including adjustment of its movement posture, is determined according to the movement path of the robot and the target map, so that the robot can still move along its original movement path, the range of positions the robot can move to or reach is enlarged, and the robot becomes more intelligent and bionic.
Fig. 17 is a block diagram of a map generation apparatus according to an embodiment of the present application, and as shown in fig. 17, the map generation apparatus 800 includes: an acquisition module 810, a grid processing module 820, an elevation determination module 830, and a target map generation module 840.
The acquisition module is used for acquiring environment point cloud data corresponding to the surrounding environment where the robot is located; the grid processing module is used for carrying out grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of the multiple grids; the elevation determining module is used for determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids; and the target map generation module is used for generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids.
The present application also provides a robot, which may include: a body; a control system in communication with the fuselage, the control system comprising a processor and a memory in communication with the processor, the memory storing instructions that when executed on the processor cause the processor to implement the method of: acquiring environment point cloud data corresponding to the surrounding environment of the robot; performing grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of a plurality of grids; determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids; and generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspended layer elevation corresponding to each of the grids.
The robot realizes the operation of determining the surface layer elevation and the suspension layer elevation corresponding to each of a plurality of grids according to the grid point cloud data corresponding to each of the plurality of grids, and comprises the following steps: determining a first point cloud height and a second point cloud height corresponding to each of the grids according to grid point cloud data corresponding to each of the grids, wherein the second point cloud height is larger than the first point cloud height; acquiring a preset height corresponding to the robot, and determining a height relation between the preset height and the first point cloud height and the second point cloud height; clustering the grid point cloud data corresponding to each of the multiple grids to obtain at least one grid point cloud set corresponding to each of the multiple grids; and determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to at least one grid point cloud set and the height relation corresponding to each of the grids.
The robot realizes the operation of determining the surface layer elevation and the suspension layer elevation corresponding to each of a plurality of grids according to at least one grid point cloud set and the height relation corresponding to each of the plurality of grids, and comprises the following steps: if the height relation is that the first point cloud height is larger than the preset height, determining the surface layer height and the suspension layer height corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids; if the height relation is that the second point cloud height is smaller than the preset height, determining the surface layer height corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids; if the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining a target point cloud set from at least one grid point cloud set, and determining the surface layer height and the suspended layer height corresponding to each of the grids according to the target point cloud set.
The robot realizes the operation of determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids if the height relation is that the first point cloud height is larger than the preset height, and comprises the following steps: if the height relation is that the first point cloud height is larger than the preset height, determining a neighborhood grid corresponding to the target grid and a neighborhood point cloud set corresponding to the neighborhood grid from the multiple grids; comparing the point cloud height of the neighborhood point cloud set with a first height threshold; when the point cloud height of the neighborhood point cloud set is smaller than or equal to a first height threshold, determining a suspension layer height corresponding to the target grid according to the target grid point cloud set, and determining a surface layer height corresponding to the target grid according to the neighborhood point cloud set; and when the point cloud height of the neighborhood point cloud set is larger than a first height threshold, determining the surface layer height corresponding to the target grid according to the target grid point cloud set.
The robot realizes the operation of determining the surface layer elevation corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids if the height relation is that the second point cloud height is smaller than the preset height, and comprises the following steps: if the height relation is that the second point cloud height is smaller than the preset height, determining a neighborhood grid of the target grid and a neighborhood point cloud set corresponding to the neighborhood grid from the multiple grids; comparing the point cloud height of the neighborhood point cloud set with a second height threshold; when the point cloud height of the neighborhood point cloud set is larger than or equal to a second height threshold, determining the suspended layer height corresponding to the target grid according to the neighborhood point cloud set, and determining the surface layer height according to the grid point cloud set corresponding to the target grid; and when the point cloud height of the neighborhood point cloud set is smaller than a second height threshold, determining the surface layer height according to the grid point cloud set corresponding to the target grid.
The robot realizes the operation of determining a target point cloud set from at least one grid point cloud set if the height relation is that a preset height is located between a first point cloud height and a second point cloud height, and determining the surface layer height and the suspension layer height corresponding to each of a plurality of grids according to the target point cloud set, wherein the operation comprises the following steps: if the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining the number of the grid point cloud sets; when the number of the sets is one, determining the grid point cloud set as a target point cloud set, and determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the target point cloud set; when the number of the sets is two or more, screening from at least two grid point cloud sets according to preset heights to obtain a target point cloud set, and determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the target point cloud set.
The robot realizes the operation of generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids, and comprises the following steps: establishing a surface layer map according to the surface layer elevation corresponding to each of the grids; establishing a suspended layer map according to the suspended layer elevation corresponding to each of the grids; and carrying out fusion processing on the surface layer map and the suspension layer map to obtain a target map corresponding to the surrounding environment.
After the robot is operated to generate the target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids, the operation further comprises: acquiring a moving path of a robot; determining a movement strategy corresponding to the robot according to the movement path and the target map, wherein the movement strategy comprises the step of adjusting the movement posture of the robot; and controlling the robot to move according to the movement strategy.
The present application also provides a computer-readable storage medium having stored thereon computer-readable instructions, which may be contained in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device. The computer readable storage medium carries computer readable instructions which, when executed by a processor, implement the method of the above-described method embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A map generation method, applied to a robot, comprising:
acquiring environment point cloud data corresponding to the surrounding environment of the robot;
performing grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of a plurality of grids;
determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids, wherein the determining comprises the following steps:
determining a first point cloud height and a second point cloud height corresponding to each of the plurality of grids according to grid point cloud data corresponding to each of the plurality of grids, wherein the second point cloud height is larger than the first point cloud height;
acquiring a preset height corresponding to the robot, and determining a height relation between the preset height and the first point cloud height and the second point cloud height;
clustering the grid point cloud data corresponding to each of the multiple grids to obtain at least one grid point cloud set corresponding to each of the multiple grids;
Determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids and the height relation, wherein the determining comprises the following steps:
if the height relation is that the first point cloud height is larger than the preset height, determining the surface layer height and the suspended layer height corresponding to the grids according to at least one grid point cloud set corresponding to the grids;
if the height relation is that the second point cloud height is smaller than the preset height, determining the surface layer elevation corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids;
if the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining a target point cloud set from the at least one grid point cloud set, and determining the surface layer height and the suspension layer height corresponding to each of the grids according to the target point cloud set;
and generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids.
2. The method of claim 1, wherein determining the surface elevation and the suspended layer elevation corresponding to each of the plurality of grids from at least one grid point cloud set corresponding to each of the plurality of grids if the height relationship is that the first point cloud height is greater than the preset height comprises:
if the height relation is that the first point cloud height is larger than the preset height, determining a neighborhood grid corresponding to a target grid and a neighborhood point cloud set corresponding to the neighborhood grid from the grids;
comparing the point cloud height of the neighborhood point cloud set with a first height threshold;
when the point cloud height of the neighborhood point cloud set is smaller than or equal to the first height threshold, determining the suspension layer height corresponding to the target grid according to the target grid point cloud set, and determining the surface layer height corresponding to the target grid according to the neighborhood point cloud set;
and when the point cloud height of the neighborhood point cloud set is larger than the first height threshold, determining the surface layer elevation corresponding to the target grid according to the target grid point cloud set.
3. The method of claim 1, wherein if the height relationship is that the second point cloud height is less than the preset height, determining the surface layer height corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids comprises:
If the height relation is that the second point cloud height is smaller than the preset height, determining a neighborhood grid of a target grid and a neighborhood point cloud set corresponding to the neighborhood grid from the grids;
comparing the point cloud height of the neighborhood point cloud set with a second height threshold;
when the point cloud height of the neighborhood point cloud set is greater than or equal to the second height threshold, determining the suspension layer height corresponding to the target grid according to the neighborhood point cloud set, and determining the surface layer height corresponding to the target grid according to the grid point cloud set corresponding to the target grid;
and when the point cloud height of the neighborhood point cloud set is smaller than the second height threshold, determining the surface layer elevation corresponding to the target grid according to the grid point cloud set corresponding to the target grid.
4. The method of claim 1, wherein if the height relationship is that the preset height is between the first point cloud height and the second point cloud height, determining a target point cloud set from the at least one grid point cloud set, and determining the surface layer height and the suspended layer height corresponding to each of the plurality of grids according to the target point cloud set, comprises:
If the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining the number of the grid point cloud sets;
when the number of the sets is one, determining the grid point cloud set as a target point cloud set, and determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the target point cloud set;
when the number of the sets is two or more, screening from at least two grid point cloud sets according to the preset height to obtain a target point cloud set, and determining the surface layer elevation and the suspension layer elevation corresponding to each of the grids according to the target point cloud set.
5. The method of claim 1, wherein generating the target map corresponding to the surrounding environment according to the surface elevation and the suspended layer elevation corresponding to each of the plurality of grids comprises:
establishing a surface layer map according to the surface layer elevation corresponding to each of the grids;
establishing a suspended layer map according to the suspended layer elevation corresponding to each of the grids;
and carrying out fusion processing on the surface layer map and the suspended layer map to obtain a target map corresponding to the surrounding environment.
6. The method of claim 1, wherein after the generating the target map corresponding to the surrounding environment according to the respective surface elevation and the suspended layer elevation of the plurality of grids, the method further comprises:
acquiring a moving path of the robot;
determining a movement strategy corresponding to the robot according to the movement path and the target map, wherein the movement strategy comprises the step of adjusting the movement gesture of the robot;
and controlling the robot to move according to the movement strategy.
7. A robot, the robot comprising:
a body;
a control system in communication with the fuselage, the control system comprising a processor and a memory in communication with the processor, the memory storing instructions that when executed on the processor cause the processor to perform operations comprising:
acquiring environment point cloud data corresponding to the surrounding environment of the robot;
performing grid processing on the environmental point cloud data corresponding to the surrounding environment to obtain grid point cloud data corresponding to each of a plurality of grids;
determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to the grid point cloud data corresponding to each of the grids, wherein the determining comprises the following steps:
Determining a first point cloud height and a second point cloud height corresponding to each of the plurality of grids according to grid point cloud data corresponding to each of the plurality of grids, wherein the second point cloud height is larger than the first point cloud height;
acquiring a preset height corresponding to the robot, and determining a height relation between the preset height and the first point cloud height and the second point cloud height;
clustering the grid point cloud data corresponding to each of the multiple grids to obtain at least one grid point cloud set corresponding to each of the multiple grids;
determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids and the height relation, wherein the determining comprises the following steps:
if the height relation is that the first point cloud height is larger than the preset height, determining the surface layer height and the suspended layer height corresponding to the grids according to at least one grid point cloud set corresponding to the grids;
if the height relation is that the second point cloud height is smaller than the preset height, determining the surface layer elevation corresponding to each of the grids according to at least one grid point cloud set corresponding to each of the grids;
If the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining a target point cloud set from the at least one grid point cloud set, and determining the surface layer height and the suspension layer height corresponding to each of the grids according to the target point cloud set;
and generating a target map corresponding to the surrounding environment according to the surface layer elevation and the suspension layer elevation corresponding to each of the grids.
8. The robot of claim 7, wherein if the height relationship is that the first cloud point height is greater than the preset height, determining the surface level elevation and the suspended layer elevation corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids comprises:
if the height relation is that the first point cloud height is larger than the preset height, determining a neighborhood grid corresponding to a target grid and a neighborhood point cloud set corresponding to the neighborhood grid from the grids;
comparing the point cloud height of the neighborhood point cloud set with a first height threshold;
when the point cloud height of the neighborhood point cloud set is smaller than or equal to the first height threshold, determining the suspension layer height corresponding to the target grid according to the target grid point cloud set, and determining the surface layer height corresponding to the target grid according to the neighborhood point cloud set;
And when the point cloud height of the neighborhood point cloud set is larger than the first height threshold, determining the surface layer elevation corresponding to the target grid according to the target grid point cloud set.
9. The robot of claim 7, wherein if the height relation is that the second point cloud height is smaller than the preset height, determining the surface layer elevation corresponding to each of the plurality of grids according to at least one grid point cloud set corresponding to each of the plurality of grids comprises:
if the height relation is that the second point cloud height is smaller than the preset height, determining a neighborhood grid of a target grid and a neighborhood point cloud set corresponding to the neighborhood grid from the grids;
comparing the point cloud height of the neighborhood point cloud set with a second height threshold;
when the point cloud height of the neighborhood point cloud set is greater than or equal to the second height threshold, determining the suspended layer elevation corresponding to the target grid according to the neighborhood point cloud set, and determining the surface layer elevation corresponding to the target grid according to the grid point cloud set corresponding to the target grid;
and when the point cloud height of the neighborhood point cloud set is smaller than the second height threshold, determining the surface layer elevation corresponding to the target grid according to the grid point cloud set corresponding to the target grid.
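This branch mirrors the previous one with the comparison inverted: the target grid, which lies entirely below the preset height, always supplies the surface layer elevation, and a sufficiently high neighbourhood set is assumed to contribute a suspended layer. Again only a sketch; the function name and the max-based elevations are assumptions.

```python
def resolve_below_branch(target_sets, neighbor_set, second_threshold):
    """Claim 9 style branch: all points in the target grid lie below the robot.

    Returns (surface_elevation, suspended_elevation); None if no suspended layer.
    """
    surface = max(max(s) for s in target_sets)
    neighbor_height = max(neighbor_set)
    if neighbor_height >= second_threshold:
        # A high neighbouring set is read as an overhang above this grid.
        return surface, neighbor_height
    return surface, None
```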
10. The robot of claim 7, wherein if the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining a target point cloud set from the at least one grid point cloud set, and determining the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids according to the target point cloud set comprises:
if the height relation is that the preset height is located between the first point cloud height and the second point cloud height, determining the number of the grid point cloud sets;
when the number of the sets is one, determining the grid point cloud set as the target point cloud set, and determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to the target point cloud set;
and when the number of the sets is two or more, screening from the at least two grid point cloud sets according to the preset height to obtain the target point cloud set, and determining the surface layer elevation and the suspended layer elevation corresponding to each of the grids according to the target point cloud set.
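A possible reading of this selection step in code: with a single grid point cloud set it is the target point cloud set directly; with several, the claim only says they are screened by the preset height, so the rule below (keep the highest set still under the robot, else fall back to the lowest) is purely an assumed screening rule.

```python
def pick_target_set(grid_sets, preset_height):
    """Claim 10 style selection of the target point cloud set from one grid."""
    if len(grid_sets) == 1:
        return grid_sets[0]
    # Screening by the preset height: prefer sets the robot can stand on.
    below = [s for s in grid_sets if max(s) <= preset_height]
    if below:
        return max(below, key=max)      # highest set still below the robot
    return min(grid_sets, key=min)      # fall back to the lowest set
```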
11. The robot of claim 7, wherein the operation of generating the target map corresponding to the surrounding environment according to the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids comprises:
establishing a surface layer map according to the surface layer elevation corresponding to each of the grids;
establishing a suspended layer map according to the suspended layer elevation corresponding to each of the grids;
and carrying out fusion processing on the surface layer map and the suspended layer map to obtain a target map corresponding to the surrounding environment.
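The fusion step can be as simple as keeping both elevations per grid key in one structure, as in this assumed sketch; the actual fusion in the patent may of course be richer.

```python
def fuse_maps(surface_map, suspended_map):
    """Claim 11 style fusion of a surface layer map and a suspended layer map.

    surface_map / suspended_map -- dicts mapping grid key -> elevation (metres).
    A grid with no suspended layer simply carries None for that entry.
    """
    keys = set(surface_map) | set(suspended_map)
    return {
        key: {
            "surface": surface_map.get(key),
            "suspended": suspended_map.get(key),
        }
        for key in keys
    }
```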
12. The robot of claim 7, wherein after the operation of generating the target map corresponding to the surrounding environment according to the surface layer elevation and the suspended layer elevation corresponding to each of the plurality of grids, the operations further comprise:
acquiring a moving path of the robot;
determining a movement strategy corresponding to the robot according to the moving path and the target map, wherein the movement strategy comprises adjusting the movement posture of the robot;
and controlling the robot to move according to the movement strategy.
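Finally, a hedged sketch of how a movement strategy in this sense could use the fused map: the clearance between the suspended and surface elevations along the moving path decides whether the robot walks normally, adjusts its posture (for example crouches under an overhang), or detours. The stand/crouch heights and the action labels are assumptions.

```python
STAND_HEIGHT = 0.5       # assumed standing height of the robot in metres
CROUCH_HEIGHT = 0.3      # assumed minimum height after adjusting posture


def plan_movement(path_keys, target_map):
    """Claim 12 style strategy: walk the path and decide the posture per grid.

    path_keys  -- grid keys along the planned moving path
    target_map -- output of fuse_maps(); clearance is suspended minus surface.
    Returns a list of (grid key, action) pairs.
    """
    strategy = []
    for key in path_keys:
        cell = target_map.get(key, {"surface": None, "suspended": None})
        surface, suspended = cell["surface"], cell["suspended"]
        if suspended is None or surface is None:
            clearance = float("inf")     # no overhang recorded for this grid
        else:
            clearance = suspended - surface
        if clearance >= STAND_HEIGHT:
            strategy.append((key, "walk"))
        elif clearance >= CROUCH_HEIGHT:
            strategy.append((key, "crouch"))   # adjust the movement posture
        else:
            strategy.append((key, "detour"))   # hollow space too low to pass
    return strategy
```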
CN202310271548.9A 2023-03-20 2023-03-20 Map generation method and robot Active CN115979251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310271548.9A CN115979251B (en) 2023-03-20 2023-03-20 Map generation method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310271548.9A CN115979251B (en) 2023-03-20 2023-03-20 Map generation method and robot

Publications (2)

Publication Number Publication Date
CN115979251A CN115979251A (en) 2023-04-18
CN115979251B true CN115979251B (en) 2023-06-27

Family

ID=85966879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310271548.9A Active CN115979251B (en) 2023-03-20 2023-03-20 Map generation method and robot

Country Status (1)

Country Link
CN (1) CN115979251B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102152641B1 (en) * 2013-10-31 2020-09-08 엘지전자 주식회사 Mobile robot
US11279042B2 (en) * 2019-03-12 2022-03-22 Bear Robotics, Inc. Robots for serving food and/or drinks
CN112000093B (en) * 2020-07-15 2021-03-05 珊口(深圳)智能科技有限公司 Control method, control system and storage medium for mobile robot
CN113686347A (en) * 2021-08-11 2021-11-23 追觅创新科技(苏州)有限公司 Method and device for generating robot navigation path
CN113848944A (en) * 2021-10-18 2021-12-28 云鲸智能科技(东莞)有限公司 Map construction method and device, robot and storage medium

Also Published As

Publication number Publication date
CN115979251A (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US20220234733A1 (en) Aerial Vehicle Smart Landing
CN112650255B (en) Robot positioning navigation method based on visual and laser radar information fusion
JP7263630B2 (en) Performing 3D reconstruction with unmanned aerial vehicles
EP3674657A1 (en) Construction and update of elevation maps
CN108536145A (en) A kind of robot system intelligently followed using machine vision and operation method
CN113359782B (en) Unmanned aerial vehicle autonomous addressing landing method integrating LIDAR point cloud and image data
WO2022016754A1 (en) Multi-machine cooperative vehicle washing system and method based on unmanned vehicle washing device
CN114564027A (en) Path planning method of foot type robot, electronic equipment and readable storage medium
CN114683290A (en) Method and device for optimizing pose of foot robot and storage medium
CN115435772A (en) Method and device for establishing local map, electronic equipment and readable storage medium
CN115979251B (en) Map generation method and robot
CN116352722A (en) Multi-sensor fused mine inspection rescue robot and control method thereof
CN115972217B (en) Map building method based on monocular camera and robot
WO2021087785A1 (en) Terrain detection method, movable platform, control device and system, and storage medium
CN114872051B (en) Traffic map acquisition system, method, robot and computer readable storage medium
CN113781676B (en) Security inspection system based on quadruped robot and unmanned aerial vehicle
NL2030831B1 (en) Computer implementation method based on using unmanned aerial vehicle to scann underground goaf
CN117589153A (en) Map updating method and robot
CN116955376A (en) Regional map updating method, robot and computer readable storage medium
US20230288224A1 (en) Ultrasonic wave-based indoor inertial navigation mapping method and system
CN116358522A (en) Local map generation method and device, robot, and computer-readable storage medium
CN115471629A (en) Object information extraction method, device, robot and computer-readable storage medium
CN116295338A (en) Positioning method, positioning device, robot and computer readable storage medium
JP2023086500A (en) Construction management system and construction management method
CN115464620A (en) Equipment maintenance method and device, operation and maintenance system and operation and maintenance robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant