CN117132879A - Dynamic obstacle recognition method and device, storage medium and electronic device - Google Patents

Dynamic obstacle recognition method and device, storage medium and electronic device

Info

Publication number
CN117132879A
Authority
CN
China
Prior art keywords
point cloud
obstacle
partial
ground
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210550728.6A
Other languages
Chinese (zh)
Inventor
程立业
孙樱日
朱晨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dreame Innovation Technology Suzhou Co Ltd filed Critical Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202210550728.6A priority Critical patent/CN117132879A/en
Publication of CN117132879A publication Critical patent/CN117132879A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects

Abstract

The application provides a dynamic obstacle identification method and device, a storage medium, and an electronic device. The method includes: continuously collecting point clouds of the acquisition area of a target sensor on a cleaning device through the target sensor, to obtain a multi-frame point cloud; for each frame of the multi-frame point cloud, mapping the points whose height is above the ground and less than or equal to a first height threshold into one map, to obtain a plurality of partial graphs; determining the obstacles contained in each of the plurality of partial graphs by performing a clustering operation on the points within each partial graph; and sequentially performing obstacle recognition on the obstacles contained in each partial graph, to obtain the dynamic obstacles in the acquisition area. The application solves the problem in the related art that dynamic obstacle recognition has poor timeliness because of the large amount of data that must be processed.

Description

Dynamic obstacle recognition method and device, storage medium and electronic device
[Technical Field]
The application relates to the field of smart home, and in particular to a dynamic obstacle identification method and device, a storage medium, and an electronic device.
[Background Art]
During operation of a cleaning device, dynamic obstacle detection may be performed by sensors mounted on the device to identify obstacles that are moving or moving away from the cleaning device. After a dynamic obstacle has been identified, the configured obstacle avoidance strategy can be executed to avoid collisions with the dynamic obstacle and the resulting abnormal conditions such as equipment damage.
Currently, dynamic obstacle recognition works on multiple captured image frames: obstacles are identified in the images, and dynamic obstacles are recognized from the change in obstacle position across the frames. However, because of the large amount of data that must be processed, this approach cannot identify dynamic obstacles in time, so the cleaning device still collides with dynamic obstacles.
As can be seen from the above, the dynamic obstacle recognition approach in the related art suffers from poor timeliness due to the large amount of data to be processed.
[Summary of the Application]
The application aims to provide a dynamic obstacle identification method and device, a storage medium, and an electronic device, so as to at least solve the problem in the related art that dynamic obstacle identification has poor timeliness because of the large amount of data that must be processed.
The aim of the application is achieved by the following technical solutions:
according to an aspect of an embodiment of the present application, there is provided a method for identifying a dynamic obstacle, including: continuously collecting point clouds in a collecting area of a target sensor through the target sensor on the cleaning equipment to obtain multi-frame point clouds; according to each frame of point cloud of the multi-frame point cloud, mapping points with the height above the ground and smaller than or equal to a first height threshold value into one map so as to obtain a plurality of partial maps; determining obstacles contained in each of the plurality of partial graphs by performing a clustering operation on points within each partial graph; and sequentially identifying the obstacles contained in each partial graph to obtain the dynamic obstacle in the acquisition area.
In an exemplary embodiment, the continuously performing, by a target sensor on a cleaning device, point cloud acquisition on an acquisition area of the target sensor to obtain a multi-frame point cloud includes: continuously performing point cloud acquisition on an acquisition area of an area array TOF sensor through the area array TOF sensor, to obtain the multi-frame point cloud.
In an exemplary embodiment, the mapping, according to each frame of the multi-frame point cloud, the point with the height above the ground and less than or equal to the first height threshold into one map to obtain multiple partial maps includes: the following steps are executed to each frame of point cloud of the multi-frame point cloud to obtain the multiple partial graphs, wherein when the following steps are executed, each frame of point cloud is the current frame of point cloud: performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud; and mapping points with heights above the ground point cloud in the current frame point cloud and smaller than or equal to the first height threshold value in the current frame point cloud into a map to obtain a local map corresponding to the current frame point cloud.
In an exemplary embodiment, the performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud includes: mapping points whose height is less than or equal to a second height threshold in the current frame point cloud into a graph, to obtain a reference graph corresponding to the current frame point cloud; calculating the height difference between any two adjacent points in the reference graph; determining two adjacent points whose height difference is less than or equal to a height difference threshold as a group of candidate ground points corresponding to the current frame point cloud; performing a clustering operation on the group of candidate ground points to obtain a set of candidate ground point clouds; and determining the ground point cloud in the current frame point cloud from the set of candidate ground point clouds according to the point cloud parameters of each candidate ground point cloud in the set.
In an exemplary embodiment, the determining the obstacle included in each of the partial graphs by performing a clustering operation on points within each of the partial graphs includes: executing the following steps on each partial graph of the plurality of partial graphs to obtain the obstacle contained in each partial graph, wherein, when the following steps are executed, each partial graph is the current partial graph: performing a clustering operation on points in the current partial graph to obtain a plurality of candidate obstacles, wherein each of the plurality of candidate obstacles corresponds to one cluster obtained by the clustering; and selecting, from the plurality of candidate obstacles, the candidate obstacles whose size is greater than or equal to a target size threshold, to obtain the obstacles contained in the current partial graph.
In an exemplary embodiment, the identifying the obstacle included in each partial graph in turn to obtain the dynamic obstacle in the acquisition area includes: obtaining the positions of any obstacle in the acquisition area within the plurality of partial graphs by performing obstacle matching on the obstacles contained in each partial graph; and determining a dynamic identification result of the obstacle according to its positions in the plurality of partial graphs, wherein the dynamic identification result is used to indicate whether the obstacle is a dynamic obstacle.
In an exemplary embodiment, the determining the dynamic identification result of the obstacle according to its positions in the plurality of partial graphs includes: determining the device position of the cleaning device according to the movement parameters of the cleaning device when the cleaning device is in a moving state; converting the positions of the obstacle in the plurality of partial graphs into positions in a world coordinate system according to the device position of the cleaning device, to obtain a group of position sequences of the obstacle; and determining the dynamic identification result of the obstacle according to the distance between two adjacent positions in the group of position sequences.
According to another aspect of the embodiment of the present application, there is also provided an apparatus for identifying a dynamic obstacle, including: the acquisition unit is used for continuously carrying out point cloud acquisition on an acquisition area of the target sensor through the target sensor on the cleaning equipment to obtain multi-frame point cloud; the mapping unit is used for mapping points with the height above the ground and smaller than or equal to a first height threshold value into one map according to each frame of point cloud of the multi-frame point cloud so as to obtain a plurality of partial maps; a clustering unit configured to determine an obstacle included in each of the partial graphs by performing a clustering operation on points within each of the partial graphs; and the identification unit is used for sequentially identifying the obstacles contained in each partial graph to obtain the dynamic obstacle in the acquisition area.
In an exemplary embodiment, the acquisition unit includes: and the acquisition module is used for continuously carrying out point cloud acquisition on an acquisition area of the area array TOF sensor through the area array TOF sensor to obtain the multi-frame point cloud.
In an exemplary embodiment, the mapping unit includes: the first execution module is configured to execute the following steps for each frame of point cloud of the multi-frame point cloud to obtain the plurality of local graphs, where when the following steps are executed, each frame of point cloud is a current frame of point cloud: performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud; and mapping points with heights above the ground point cloud in the current frame point cloud and smaller than or equal to the first height threshold value in the current frame point cloud into a map to obtain a local map corresponding to the current frame point cloud.
In one exemplary embodiment, the first execution module includes: the mapping sub-module is used for mapping points with the height smaller than or equal to a second height threshold value in the current frame point cloud into a graph to obtain a reference graph corresponding to the current frame point cloud; the calculating sub-module is used for calculating the height difference between any two adjacent points in the reference graph; a first determining submodule, configured to determine two adjacent points, where the height difference is less than or equal to a height difference threshold, as candidate ground points corresponding to the current frame point cloud; the clustering sub-module is used for performing clustering operation on the candidate ground points to obtain a plurality of candidate ground point clouds; and the second determining submodule is used for determining the ground point cloud in the current frame point cloud from the candidate ground point clouds according to the point cloud parameters of the candidate ground point clouds.
In an exemplary embodiment, the clustering unit includes: a second execution module, configured to execute the following steps on each of the plurality of partial graphs to obtain the obstacle included in each partial graph, wherein, when the following steps are executed, each partial graph is the current partial graph: performing a clustering operation on points in the current partial graph to obtain a plurality of candidate obstacles, wherein each of the plurality of candidate obstacles corresponds to one cluster obtained by the clustering; and selecting, from the plurality of candidate obstacles, the candidate obstacles whose size is greater than or equal to a target size threshold, to obtain the obstacles contained in the current partial graph.
In an exemplary embodiment, the identification unit includes: a matching module, configured to obtain the positions of any obstacle in the acquisition area within the plurality of partial graphs by performing obstacle matching on the obstacles contained in each partial graph; and a determining module, configured to determine a dynamic identification result of the obstacle according to its positions in the plurality of partial graphs, wherein the dynamic identification result is used to indicate whether the obstacle is a dynamic obstacle.
In one exemplary embodiment, the determining module includes: a third determining submodule, configured to determine an equipment position of the cleaning equipment according to a movement parameter of the cleaning equipment when the cleaning equipment is in a moving state; the conversion sub-module is used for converting the position of any obstacle in the plurality of partial figures into the position under a world coordinate system according to the equipment position of the cleaning equipment to obtain a group of position sequences of any obstacle; and the fourth determination submodule is used for determining the dynamic identification result of any obstacle according to the distance between two adjacent positions in the group of position sequences.
According to a further aspect of embodiments of the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-described method of identifying dynamic obstacles when run.
According to still another aspect of the embodiments of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the method for identifying a dynamic obstacle according to the computer program.
In the embodiments of the application, dynamic obstacle recognition is performed on local point cloud data. Point clouds of the acquisition area of a target sensor on the cleaning device are continuously collected through the target sensor, to obtain a multi-frame point cloud; for each frame of the multi-frame point cloud, the points whose height is above the ground and less than or equal to a first height threshold are mapped into one map, to obtain a plurality of partial graphs; the obstacles contained in each of the plurality of partial graphs are determined by performing a clustering operation on the points within each partial graph; and obstacle recognition is performed in turn on the obstacles contained in each partial graph, to obtain the dynamic obstacles in the acquisition area. Because only the points in each frame whose height is above the ground and no higher than the height threshold are mapped into a partial graph, the number of points in a partial graph is lower than that in the original point cloud. This reduces the amount of data to be processed during dynamic obstacle recognition, achieves the technical effect of improving the timeliness of dynamic obstacle recognition, and thereby solves the problem in the related art that dynamic obstacle recognition has poor timeliness due to the large amount of data to be processed.
[Description of the Drawings]
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of an alternative dynamic obstacle recognition method according to an embodiment of the application;
FIG. 2 is a flow chart of an alternative method of identifying dynamic obstacles in accordance with an embodiment of the application;
FIG. 3 is a flow chart of another alternative method of dynamic obstacle identification in accordance with an embodiment of the application;
FIG. 4 is a block diagram of an alternative dynamic obstacle identification device in accordance with an embodiment of the application;
fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the application.
[Detailed Description]
The application will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
According to an aspect of an embodiment of the present application, there is provided a method for identifying a dynamic obstacle. Alternatively, in the present embodiment, the above-described method for identifying dynamic obstacles may be applied to a hardware environment configured by the cleaning device 102, the base station 104, and the cloud platform 106 as shown in fig. 1. As shown in fig. 1, the cleaning device 102 may be connected to the base station 104 and/or the cloud platform 106 via a network to enable interaction between the cleaning device 102 and the base station 104 and/or the cloud platform 106.
The network may include, but is not limited to, at least one of: wired network, wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network, and the wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity ), bluetooth, infrared. The network used by the cleaning device 102 to communicate with the base station 104 and/or the cloud platform 106 may be the same as or different from the network used by the base station 104 to communicate with the cloud platform 106. The cleaning device 102 may include, but is not limited to: a sweeper, a floor washer, etc.
The method for identifying the dynamic obstacle according to the embodiment of the present application may be performed by the cleaning device 102, the base station 104, or the cloud platform 106 alone, or may be performed by at least two of the cleaning device 102, the base station 104, and the cloud platform 106 together. The method for identifying the dynamic obstacle, which is executed by the cleaning device 102 or the base station 104 according to the embodiment of the present application, may also be executed by a client installed thereon.
Taking the cleaning device 102 as an example to perform the method for identifying a dynamic obstacle in this embodiment, fig. 2 is a schematic flow chart of an alternative method for identifying a dynamic obstacle according to an embodiment of the present application, as shown in fig. 2, the flow of the method may include the following steps:
step S202, continuously collecting point clouds of a collecting area of a target sensor through the target sensor on the cleaning equipment to obtain multi-frame point clouds.
The method for identifying a dynamic obstacle in this embodiment can be applied to scenarios in which a cleaning device identifies dynamic obstacles. The cleaning device may include a sweeping robot, a floor-washing robot, and other devices having an area cleaning function. The identification of dynamic obstacles may be performed during movement of the cleaning device, for example, while it performs a cleaning task in the cleaning area, travels from the base station to the cleaning area, or returns to the base station. A dynamic obstacle may be an obstacle having a certain moving speed, for example, an obstacle that is in motion, or that is moving away from or toward the cleaning device.
In this embodiment, dynamic obstacle recognition may be performed using a target sensor provided on the cleaning device. The target sensor may be a sensor for collecting point cloud data, for example, a dot-matrix or linear-array laser sensor. The target sensor may be provided at a position, such as the front end, the top, the left side, or the right side of the cleaning device, from which it can capture the region ahead in the moving direction of the cleaning device. The type and the mounting position of the target sensor are not limited in this embodiment.
Alternatively, the target sensor may be a rotatable sensor; for example, the bottom of the target sensor may be connected to a rotatable base, and the acquisition direction of the target sensor can be controlled by rotating the base. Before point clouds of the acquisition area of the target sensor are continuously collected through the target sensor on the cleaning device, the rotation angle of the rotatable base may be determined according to the moving direction of the cleaning device, such that the rotated target sensor can cover the region ahead in the moving direction of the cleaning device; the rotatable base is then rotated by the determined rotation angle.
In this embodiment, the cleaning device may continuously collect point clouds of the acquisition area of the target sensor through the target sensor, to obtain a multi-frame point cloud. The acquisition interval of each frame of point cloud data may be a preset interval, for example, one acquisition per second, or several acquisitions per second. Each time a frame of point cloud data is collected, a detection signal may be emitted into the acquisition area through the target sensor, and the current frame point cloud is generated based on the received reflected signal.
Step S204, according to each frame of point cloud of the multi-frame point cloud, mapping the points with the heights above the ground and less than or equal to the first height threshold value into one map so as to obtain a plurality of partial maps.
In this embodiment, for each frame of point cloud, the cleaning device may first identify the ground in the frame and the height of each point, and then map the points whose height is above the ground and less than or equal to the first height threshold into a map, such as a blank map, to obtain a local graph. A local graph here is a map containing part of the point cloud data of one frame; by performing this mapping, the amount of point cloud data to be processed for each frame can be reduced and the efficiency of obstacle recognition improved.
Alternatively, the first height threshold may be a preset height threshold that is set to be at least the height of the cleaning device. Since the cleaning device itself has a certain height, an obstacle located above that height, whether dynamic or static, has no effect on the operation of the cleaning device, and this part of the data does not need to be processed. By filtering out the points whose height is above the first height threshold, the efficiency of dynamic obstacle recognition can be improved without affecting the operation of the device.
Step S206, determining the obstacle included in each partial graph by performing a clustering operation on the points within each partial graph in the plurality of partial graphs.
For each partial graph, the cleaning device may perform a clustering operation on points within each partial graph, determining obstacles contained in each partial graph. The clustering algorithm used to perform the clustering operation may be a clustering algorithm (e.g., a K-means clustering algorithm) that specifies the number of clusters, or a clustering algorithm (e.g., a hierarchical clustering algorithm) that does not specify the number of clusters, which is not limited in this embodiment.
By performing a clustering operation on the points in each partial graph, a plurality of class clusters corresponding to the partial graph can be obtained. Each class cluster may correspond to one obstacle, to several obstacles (for example, obstacles that overlap from the viewing angle of the cleaning device), or to no obstacle; obstacle recognition can further be performed in combination with other information about the class clusters, such as their length, width, and height.
The number of obstacles contained in each partial graph may not be fixed, and the number of obstacles contained in adjacent partial graphs may be the same or different. When the number of the included obstacles is the same, the included obstacles may be the same or different.
Step S208, obstacle recognition is sequentially carried out on the obstacles contained in each partial graph, and dynamic obstacles in the acquisition area are obtained.
After determining the obstacles contained in each partial graph, the cleaning device can determine the positions of the same obstacle in different partial graphs and also determine the positions of the same obstacle at different moments by matching the obstacles. Based on the change in position of the same obstacle at different times, it can be determined whether the obstacle is a dynamic obstacle. By identifying whether each obstacle appearing in the partial images is a dynamic obstacle or not, respectively, the dynamic obstacle in the acquisition area can be determined.
After the dynamic obstacle in the acquisition area is identified, the moving parameter of the dynamic obstacle can be determined, and whether the cleaning equipment collides with the dynamic obstacle or not is determined according to the moving parameter of the cleaning equipment and the moving parameter of the dynamic obstacle; if so, the obstacle avoidance strategy can be executed in advance, so that the risk of collision between the cleaning equipment and the dynamic obstacle is reduced. The movement parameters may include, but are not limited to, at least one of: a moving speed and a moving direction.
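As a rough illustration of this collision judgment (the patent only states that it is made from the movement parameters of the device and of the dynamic obstacle; the constant-velocity prediction, safety distance, horizon, and step below are assumptions), a minimal Python sketch might be:

    import numpy as np

    # Assumed example values, not taken from the patent.
    SAFETY_DISTANCE = 0.15   # m: closer than this counts as a predicted collision
    HORIZON = 2.0            # s: how far ahead to predict
    STEP = 0.1               # s: prediction sampling step

    def will_collide(device_pos, device_vel, obstacle_pos, obstacle_vel) -> bool:
        """Constant-velocity prediction: True if device and obstacle get too close."""
        device_pos = np.asarray(device_pos, dtype=float)
        device_vel = np.asarray(device_vel, dtype=float)
        obstacle_pos = np.asarray(obstacle_pos, dtype=float)
        obstacle_vel = np.asarray(obstacle_vel, dtype=float)
        for t in np.arange(0.0, HORIZON, STEP):
            gap = (device_pos + device_vel * t) - (obstacle_pos + obstacle_vel * t)
            if np.linalg.norm(gap) < SAFETY_DISTANCE:
                return True   # execute the obstacle avoidance strategy in advance
        return False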
By adopting the dynamic obstacle recognition method provided by the embodiment, the efficiency of dynamic obstacle recognition can be improved, the timeliness of obstacle avoidance can be improved when obstacle avoidance is performed, the risk of collision between the cleaning equipment and the dynamic obstacle is reduced, and the operation safety of the cleaning equipment is also improved.
Through the steps S202 to S208, continuously performing point cloud acquisition on an acquisition area of the target sensor through the target sensor on the cleaning device, so as to obtain multi-frame point cloud; according to each frame of point cloud of the multi-frame point cloud, mapping points with the height above the ground and smaller than or equal to a first height threshold value into one map to obtain a plurality of partial maps; determining obstacles contained in each of the plurality of partial graphs by performing a clustering operation on points within each of the partial graphs; the obstacle recognition is sequentially carried out on the obstacles contained in each partial graph, so that the dynamic obstacles in the acquisition area are obtained, the problem that the timeliness of the dynamic obstacle recognition is poor due to the fact that the data amount to be processed is large in a dynamic obstacle recognition mode in the related technology is solved, and the timeliness of the dynamic obstacle recognition is improved.
In one exemplary embodiment, continuously performing point cloud acquisition on an acquisition area of an object sensor by the object sensor on the cleaning device to obtain a multi-frame point cloud, including:
s11, continuously carrying out point cloud acquisition on an acquisition area of the area array TOF sensor through the area array TOF sensor to obtain multi-frame point cloud.
When point cloud data are collected with a dot-matrix or linear-array laser sensor, the recognition of dynamic obstacles is not accurate enough because of sensor errors, calibration errors, and the like. Moreover, the scanning speed of such sensors is low, so the timeliness of dynamic obstacle recognition is poor.
In the present embodiment, an area array ToF (Time of Flight) sensor is used as the target sensor. The ToF sensor continuously transmits light pulses to a target object (an object within an acquisition area of the ToF sensor), receives light signals returned from the target object, and obtains a distance between the ToF sensor and the target object by determining a flight (round trip) time of the transmitted and received light pulses.
When collecting point clouds, the cleaning device may continuously collect point clouds of the acquisition area of the area array ToF sensor through the ToF sensor, to obtain a multi-frame point cloud. At each acquisition, the ToF sensor emits a detection signal into the acquisition area and receives the reflected signals of the detection signal; the distance between the ToF sensor and the target corresponding to each reflected signal is determined from the time difference between emitting the detection signal and receiving the reflected signal, and the point cloud is then generated based on the determined distances. Since the reflected signals of different targets take different times to return to the ToF sensor, a point cloud matching the acquisition area can be determined from the reception time of each reflected signal.
With this embodiment, collecting the point cloud through the area array ToF sensor improves the speed and precision of point cloud acquisition, and thus the timeliness of dynamic obstacle recognition.
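As a minimal sketch of the distance calculation and the conversion of one ToF depth frame into a point cloud (not taken from the patent; the camera intrinsics and function names are assumptions for illustration):

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance_from_round_trip(round_trip_time_s: np.ndarray) -> np.ndarray:
        """Distance = c * t / 2, since the pulse travels to the target and back."""
        return C * round_trip_time_s / 2.0

    def depth_frame_to_point_cloud(depth_m: np.ndarray,
                                   fx: float, fy: float,
                                   cx: float, cy: float) -> np.ndarray:
        """Back-project an H x W depth image into an N x 3 point cloud (sensor frame)."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
        return points[depth_m.reshape(-1) > 0]  # drop pixels with no return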
In one exemplary embodiment, mapping points having a height above the ground and less than or equal to a first height threshold into a map to obtain a plurality of partial maps according to each frame of a multi-frame point cloud, includes:
s21, executing the following steps on each frame of point cloud of the multi-frame point cloud to obtain a plurality of partial graphs, wherein each frame of point cloud is the current frame of point cloud when the following steps are executed:
performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud;
and mapping points with heights above the ground point cloud in the current frame point cloud and smaller than or equal to a first height threshold value in the current frame point cloud into a map to obtain a partial map corresponding to the current frame point cloud.
In this embodiment, in order to obtain a local graph corresponding to each frame of point cloud, a mapping operation may be performed on each frame of point cloud, so as to obtain a local graph corresponding to each frame of point cloud. For example, for each frame of point cloud, the following mapping operation may be performed as the current frame of point cloud, so as to obtain a local graph corresponding to the current frame of point cloud:
Firstly, performing ground detection on the current frame point cloud to obtain the ground point cloud in the current frame point cloud, wherein the ground detection can be performed by adopting a ground detection mode provided in the related technology, and the description is omitted herein;
Then, for each point in the current frame point cloud, the height of the ground point at the same position is determined; points whose height is not lower than the height of the corresponding ground point and is less than or equal to the first height threshold can be mapped into a graph.
After all the points are processed, a local graph corresponding to the point cloud of the current frame can be acquired. Alternatively, in order to improve the rationality of dynamic obstacle recognition, points in the current frame point cloud, the height of which is not lower than the height of the corresponding ground point and the height difference from the corresponding ground point is not greater than the first height threshold value, may be mapped into a map, so as to obtain a partial map corresponding to the current frame point cloud.
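A minimal sketch of this mapping operation, assuming a simple 2D occupancy grid as the "local graph" and a hypothetical ground_height_at() lookup into the detected ground point cloud (the threshold and cell size are assumed example values):

    import numpy as np

    FIRST_HEIGHT_THRESHOLD = 0.35  # m, assumed; at least the height of the machine body
    CELL_SIZE = 0.05               # m per grid cell, assumed

    def build_local_graph(points, ground_height_at, grid_shape=(200, 200)):
        """points: N x 3 array (x, y, z) in the device frame, z = height."""
        occupancy = np.zeros(grid_shape, dtype=np.uint8)
        ox, oy = grid_shape[0] // 2, grid_shape[1] // 2   # device at the grid centre
        for x, y, z in points:
            ground_z = ground_height_at(x, y)             # height of the corresponding ground point
            if ground_z <= z <= ground_z + FIRST_HEIGHT_THRESHOLD:
                i = int(round(x / CELL_SIZE)) + ox
                j = int(round(y / CELL_SIZE)) + oy
                if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
                    occupancy[i, j] = 1                   # keep only the filtered points
        return occupancy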
With this embodiment, ground recognition is performed and the points mapped into the local graph are determined according to the height relationship between each point in the point cloud and its corresponding ground point, which improves the reasonableness of the resulting local graph.
In an exemplary embodiment, performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud, including:
S31, mapping points with the height smaller than or equal to a second height threshold value in the point cloud of the current frame into a graph to obtain a reference graph corresponding to the point cloud of the current frame;
s32, calculating the height difference between any two adjacent points in the reference graph;
s33, determining two adjacent points with the height difference smaller than or equal to a height difference threshold value as candidate ground points corresponding to the point cloud of the current frame;
s34, clustering operation is carried out on the candidate ground points, and a plurality of candidate ground point clouds are obtained;
and S35, determining the ground point cloud in the current frame point cloud from the plurality of candidate ground point clouds according to the point cloud parameters of the plurality of candidate ground point clouds.
If, during ground detection, the ground is assumed by default to be a plane at height 0, then, because of sensor errors, calibration errors, and the like, the actual ground is not necessarily a plane at height 0, and it is easily recognized as an obstacle, resulting in incomplete cleaning.
In this embodiment, a certain height redundancy space may be set for the ground, that is, points with heights not higher than the second height threshold are all treated as candidate ground points. For the current frame point cloud, points with the height not higher than a second height threshold value in the current frame point cloud may be mapped into one graph to obtain a reference graph corresponding to the current frame point cloud, where the reference graph is a graph (may also be considered as a partial graph) used for ground detection, and the second height threshold value may be smaller than the first height threshold value.
In view of the small fluctuation of the ground, a point having a small difference in height from the neighboring point may be determined as a candidate ground point. For a reference graph corresponding to the point cloud of the current frame, the height difference between any two adjacent points in the reference graph can be calculated, and two adjacent points, the height difference of which is smaller than or equal to a height difference threshold value, in the reference graph are determined to be candidate ground points, so that a plurality of candidate ground points corresponding to the point cloud of the current frame are obtained.
For the plurality of candidate ground points, a clustering operation may be performed on them to obtain a plurality of candidate ground point clouds, each of which may contain a portion of the candidate ground points. The ground is distinctive relative to other obstacles; for example, its density, length, width, area, and other parameters are larger than those of other obstacles. The ground in the current frame point cloud may therefore be determined from the plurality of candidate ground point clouds according to the point cloud parameters of each candidate ground point cloud. For example, the point cloud parameters of each candidate ground point cloud may be computed, including but not limited to at least one of: density, length, width, and area; the candidate point cloud whose parameters meet the set thresholds is judged to be the ground.
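A possible sketch of this ground detection pipeline, assuming DBSCAN for the clustering step and horizontal extent as a stand-in for the "point cloud parameters" (all thresholds are assumed example values, not the patent's):

    import numpy as np
    from sklearn.cluster import DBSCAN

    SECOND_HEIGHT_THRESHOLD = 0.05   # m: only points this low are ground candidates
    HEIGHT_DIFF_THRESHOLD = 0.01     # m: max height difference between adjacent points
    NEIGHBOUR_RADIUS = 0.06          # m: what counts as "adjacent" in the reference graph

    def detect_ground(points: np.ndarray) -> np.ndarray:
        """points: N x 3 (x, y, z). Returns the points judged to belong to the ground."""
        reference = points[points[:, 2] <= SECOND_HEIGHT_THRESHOLD]
        if len(reference) == 0:
            return reference

        # Keep points whose height differs little from at least one neighbouring point.
        candidates = []
        for p in reference:
            d_xy = np.linalg.norm(reference[:, :2] - p[:2], axis=1)
            neighbours = reference[(d_xy > 0) & (d_xy <= NEIGHBOUR_RADIUS)]
            if len(neighbours) and np.min(np.abs(neighbours[:, 2] - p[2])) <= HEIGHT_DIFF_THRESHOLD:
                candidates.append(p)
        candidates = np.asarray(candidates)
        if len(candidates) == 0:
            return candidates

        # Cluster the candidate ground points and pick the cluster with the largest
        # horizontal extent (a simple stand-in for the point cloud parameters).
        labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(candidates[:, :2])
        best, best_area = None, -1.0
        for label in set(labels) - {-1}:                 # -1 is DBSCAN noise
            cluster = candidates[labels == label]
            extent = cluster[:, :2].max(axis=0) - cluster[:, :2].min(axis=0)
            area = float(extent[0] * extent[1])
            if area > best_area:
                best, best_area = cluster, area
        return best if best is not None else candidates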
According to the embodiment, the candidate ground points are determined by judging the height difference between the points with the height lower than the height threshold value and the adjacent points, and the ground is determined by clustering the candidate ground points, so that the accuracy of ground identification can be improved.
In one exemplary embodiment, determining an obstacle contained in each of the partial graphs by performing a clustering operation on points within each of the partial graphs, includes:
s41, executing the following steps on each partial graph in the partial graphs to obtain the obstacle contained in each partial graph, wherein when executing the following steps, each partial graph is the current partial graph:
performing a clustering operation on points in the current partial graph to obtain a plurality of candidate obstacles, wherein each of the plurality of candidate obstacles corresponds to one cluster obtained by the clustering;
and selecting, from the plurality of candidate obstacles, the candidate obstacles whose size is greater than or equal to a target size threshold, to obtain the obstacles contained in the current partial graph.
In this embodiment, in order to determine the obstacle included in each partial graph, a clustering operation may be performed on each partial graph, respectively, to determine the obstacle included in each partial graph. For example, for each partial graph, the following clustering operation may be performed as the current partial graph, so as to determine the obstacle included in the current partial graph:
Firstly, performing clustering operation on points in a current local graph to obtain a plurality of class clusters, and determining each class cluster as a candidate obstacle to obtain a plurality of candidate obstacles;
then, determining an obstacle size of each candidate obstacle, for example, calculating at least one of a length, a width, and a height of each candidate obstacle;
and finally, obtaining the obstacles contained in the current local map by using the candidate obstacles with the obstacle sizes larger than or equal to the target size threshold, namely, respectively determining each candidate obstacle with the obstacle sizes larger than or equal to the target size threshold as one obstacle contained in the current local map, thereby obtaining all the obstacles contained in the current local map.
Alternatively, the obstacle size may include at least one of a length, a width, and a height of the candidate obstacle, and when determining whether the obstacle size is greater than or equal to the target size threshold, a size relationship between each parameter (i.e., length, width, and height) and each parameter threshold (the target size threshold includes each parameter threshold) may be determined, and the candidate obstacle, for which each parameter is greater than or equal to the corresponding parameter threshold, may be determined as the actual obstacle. Alternatively, a magnitude relation between the product of the respective parameters and a set size threshold (i.e., a target size threshold) may be determined, and the candidate obstacle whose product of the respective parameters is greater than or equal to the set size threshold may be determined as the actual obstacle.
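A minimal sketch of this clustering and size-filtering step, assuming DBSCAN and a per-parameter threshold check (the thresholds are assumed example values):

    import numpy as np
    from sklearn.cluster import DBSCAN

    MIN_LENGTH, MIN_WIDTH, MIN_HEIGHT = 0.03, 0.03, 0.02   # m, assumed parameter thresholds

    def find_obstacles(partial_points: np.ndarray):
        """partial_points: N x 3 (x, y, z) points that were mapped into one partial graph."""
        labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(partial_points[:, :2])
        obstacles = []
        for label in set(labels) - {-1}:                    # -1 is DBSCAN noise
            cluster = partial_points[labels == label]
            length, width, height = cluster.max(axis=0) - cluster.min(axis=0)
            if length >= MIN_LENGTH and width >= MIN_WIDTH and height >= MIN_HEIGHT:
                obstacles.append({
                    "points": cluster,
                    "centre": cluster.mean(axis=0),         # used later for matching
                    "size": (length, width, height),
                })
        return obstacles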
By this embodiment, by selecting the candidate obstacle whose obstacle size is greater than or equal to the set size threshold value to determine as the actual obstacle, the accuracy of obstacle determination can be improved.
In an exemplary embodiment, the obstacle recognition is performed on the obstacles included in each partial graph in sequence, so as to obtain a dynamic obstacle in the acquisition area, including:
s51, performing obstacle matching on the obstacles contained in each partial graph to obtain the positions of any obstacle in the acquisition area in the plurality of partial graphs;
s52, determining a dynamic identification result of any obstacle according to the positions of any obstacle in the partial graphs.
In this embodiment, when performing obstacle recognition, obstacle matching may be performed on the obstacles contained in each partial graph to determine which of the plurality of partial graphs contain the same obstacle and where that obstacle is located within each of them. Whether the obstacle is a dynamic obstacle can then be determined from its positions in the corresponding partial graphs, and, if a dynamic obstacle exists, its motion state, such as its moving direction and moving speed, can also be determined.
For any obstacle in the partial graphs, for example, the current obstacle, the cleaning device may determine the location of the current obstacle within the multiple partial graphs by performing an obstacle match for the obstacle contained in each partial graph. For partial graphs that do not contain the current obstacle, the position of the current obstacle within these partial graphs is empty; for partial maps containing the current obstacle, the position of the current obstacle within these partial maps is the position of the current obstacle under the sensor coordinate system.
Based on the positions of the current obstacle within the plurality of partial graphs, the cleaning device may determine a dynamic recognition result of the current obstacle, where the dynamic recognition result may be used to indicate whether the current obstacle is a dynamic obstacle. If the position change of the current obstacle across the plurality of partial graphs is small (for example, the range of the position change is smaller than a preset range threshold; considering detection accuracy, a small position change can be treated as no position change), the current obstacle can be considered a static obstacle; if the position change of the current obstacle is large (for example, the range of the position change is greater than or equal to the preset range threshold), the current obstacle can be considered a dynamic obstacle. By executing the above dynamic obstacle recognition process with each obstacle as the current obstacle in turn, all dynamic obstacles in the multi-frame point cloud can be determined.
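A hedged sketch of the matching and dynamic/static decision described above, assuming nearest-centre matching (the patent does not fix a particular matching rule) and assumed example thresholds:

    import numpy as np

    MATCH_RADIUS = 0.30      # m, assumed: max centre distance to call it the "same" obstacle
    RANGE_THRESHOLD = 0.10   # m, assumed: position change above this => dynamic

    def match_obstacle(prev_obstacles, centre):
        """Return the previous obstacle whose centre is closest to `centre`, if close enough."""
        best, best_d = None, MATCH_RADIUS
        for obstacle in prev_obstacles:
            d = float(np.linalg.norm(obstacle["centre"][:2] - centre[:2]))
            if d < best_d:
                best, best_d = obstacle, d
        return best

    def is_dynamic(position_sequence) -> bool:
        """position_sequence: positions of one obstacle across the partial graphs."""
        positions = np.asarray(position_sequence)
        change = np.linalg.norm(positions.max(axis=0) - positions.min(axis=0))
        return change >= RANGE_THRESHOLD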
According to the embodiment, the dynamic obstacle recognition is performed according to the positions of the same obstacle in the plurality of partial graphs, so that the accuracy of the obstacle recognition can be improved.
In one exemplary embodiment, determining the dynamic recognition result of any obstacle according to the positions of the obstacle in the plurality of partial graphs includes:
s61, determining the equipment position of the cleaning equipment according to the movement parameters of the cleaning equipment under the condition that the cleaning equipment is in a moving state;
s62, converting the position of any obstacle in the multiple partial graphs into the position under the world coordinate system according to the equipment position of the cleaning equipment to obtain a group of position sequences of any obstacle;
s63, determining the dynamic identification result of any obstacle according to the distance between two adjacent positions in a group of position sequences.
The position of any obstacle in the partial figures is the position of any obstacle in the sensor coordinate system, and the actual position of any obstacle is not only related to the position of the obstacle in the partial figures, but also to the device position of the cleaning device. When the cleaning equipment is in a moving state, the dynamic identification result of any obstacle can be determined according to the position of the cleaning equipment and the position of any obstacle in the multiple partial graphs: converting the position of any obstacle in the partial figures into the position under the world coordinate system according to the equipment position of the cleaning equipment to obtain a group of position sequences of any obstacle; and determining the dynamic identification result of any obstacle according to the distance between two adjacent positions in a group of position sequences.
The device position of the cleaning device may be a position of the cleaning device under a world coordinate system, and the world coordinate system may be a three-dimensional coordinate system with a position of the cleaning device at the time of start-up as an origin. When converting the position of any obstacle in the plurality of partial images into the position under the world coordinate system according to the equipment position of the cleaning equipment, a group of reference position sequences of any obstacle under the equipment coordinate system can be determined according to the position of any obstacle in the plurality of partial images and the coordinate conversion relation between the sensor coordinate system and the equipment coordinate system of the cleaning equipment; and converting a group of reference position sequences into a group of position sequences under the world coordinate system according to the equipment position of the cleaning equipment and the coordinate conversion relation between the equipment coordinate system and the world coordinate system.
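A minimal sketch of the two-stage coordinate conversion (sensor frame to device frame to world frame), assuming a planar device pose (x, y, yaw) and a hypothetical sensor-to-device extrinsic transform:

    import numpy as np

    # Hypothetical extrinsics from calibration, e.g. sensor mounted 10 cm ahead, 5 cm up.
    T_DEVICE_FROM_SENSOR = np.eye(4)
    T_DEVICE_FROM_SENSOR[:3, 3] = [0.10, 0.0, 0.05]

    def pose_to_matrix(x: float, y: float, yaw: float) -> np.ndarray:
        """Device pose in the world frame (origin = start-up position) as a 4x4 transform."""
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[:2, :2] = [[c, -s], [s, c]]
        T[:2, 3] = [x, y]
        return T

    def obstacle_position_in_world(p_sensor, device_pose_xy_yaw):
        """p_sensor: (x, y, z) obstacle position in the sensor frame."""
        T_world_from_device = pose_to_matrix(*device_pose_xy_yaw)
        p = np.append(np.asarray(p_sensor, dtype=float), 1.0)   # homogeneous coordinates
        return (T_world_from_device @ T_DEVICE_FROM_SENSOR @ p)[:3]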
According to the distance between two adjacent positions in a group of position sequences, the position change of any obstacle can be determined, and similarly to the previous embodiment, whether any obstacle is a dynamic obstacle or not can be determined according to the position change of any obstacle, so that the dynamic recognition result of any obstacle is obtained.
In this embodiment, the obstacle recognition is performed according to the device position of the cleaning device and the positions of the obstacles in the plurality of partial graphs, so that the accuracy of dynamic obstacle recognition can be improved.
The following explains a method for identifying a dynamic obstacle in the embodiment of the present application with reference to an alternative example. In this alternative example, the cleaning device is a sweeper (i.e., a sweeping robot) and the target sensor is an area array ToF sensor.
This alternative example provides an obstacle recognition method based on an area array ToF sensor, which can recognize obstacles from multi-frame ToF data and thus recognize objects that are moving or moving away from the sweeper.
As shown in connection with fig. 3, the flow of the obstacle recognition method in this alternative example may include the following steps:
step S302, obstacle recognition is carried out on each frame of point clouds in the continuous multi-frame point clouds to obtain multi-frame results, and the multi-frame results are stored.
The single-frame ToF data are processed as follows: create a partial graph, and map into it the ToF points whose height is above the ground and below a certain threshold related to the machine body; cluster the points in the partial graph; and calculate the length, width, and height of each clustered point cloud, and if the length, width, and height of a point cloud are all greater than certain thresholds, store that point cloud.
Step S304, the stored multi-frame results are compared, and an obstacle that moves or disappears between frames is judged to be a dynamic obstacle.
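A hedged sketch of this inter-frame comparison, reusing the hypothetical match_obstacle helper and RANGE_THRESHOLD from the earlier sketch; treating movement or disappearance between frames as the dynamic cue follows step S304, but the concrete bookkeeping below is an assumption:

    import numpy as np

    def dynamic_obstacles_between_frames(prev_obstacles, curr_obstacles):
        """Flag obstacles that moved between frames, and obstacles that disappeared."""
        dynamic, matched_prev = [], set()
        for obstacle in curr_obstacles:
            prev = match_obstacle(prev_obstacles, obstacle["centre"])
            if prev is None:
                continue
            matched_prev.add(id(prev))
            if np.linalg.norm(prev["centre"][:2] - obstacle["centre"][:2]) >= RANGE_THRESHOLD:
                dynamic.append(obstacle)                   # it moved between frames
        disappeared = [o for o in prev_obstacles if id(o) not in matched_prev]
        return dynamic, disappeared                        # disappeared ones also count as dynamic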
Through the optional example, dynamic obstacle recognition is performed based on the area array TOF sensor, so that accuracy of dynamic obstacle recognition can be improved, and efficiency of dynamic obstacle recognition is improved.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
According to another aspect of the embodiment of the present application, there is also provided a dynamic obstacle recognition device for implementing the dynamic obstacle recognition method. Fig. 4 is a block diagram of an alternative dynamic obstacle recognition device according to an embodiment of the application, as shown in fig. 4, the device may include:
the acquisition unit 402 is configured to continuously perform point cloud acquisition on an acquisition area of the target sensor through the target sensor on the cleaning device, so as to obtain a multi-frame point cloud;
the mapping unit 404 is connected to the acquisition unit 402, and is configured to map, according to each frame of point cloud of the multi-frame point cloud, points with heights above the ground and less than or equal to the first height threshold value into one map, so as to obtain multiple partial maps;
a clustering unit 406, connected to the mapping unit 404, for determining the obstacles contained in each of the partial graphs by performing a clustering operation on the points within each of the partial graphs;
the identifying unit 408 is connected to the clustering unit 406, and is configured to sequentially identify the obstacles included in each local graph, so as to obtain a dynamic obstacle in the acquisition area.
It should be noted that, the acquisition unit 402 in this embodiment may be used to perform the above-mentioned step S202, the mapping unit 404 in this embodiment may be used to perform the above-mentioned step S204, the clustering unit 406 in this embodiment may be used to perform the above-mentioned step S206, and the identification unit 408 in this embodiment may be used to perform the above-mentioned step S208.
Through the module, continuously acquiring point cloud of an acquisition area of the target sensor through the target sensor on the cleaning equipment to obtain multi-frame point cloud; according to each frame of point cloud of the multi-frame point cloud, mapping points with the height above the ground and smaller than or equal to a first height threshold value into one map to obtain a plurality of partial maps; determining obstacles contained in each of the plurality of partial graphs by performing a clustering operation on points within each of the partial graphs; the obstacle recognition is sequentially carried out on the obstacles contained in each partial graph, so that the dynamic obstacles in the acquisition area are obtained, the problem that the timeliness of dynamic obstacle recognition is poor due to the fact that the data amount to be processed is large in a dynamic obstacle recognition mode in the related technology is solved, and the timeliness of dynamic obstacle recognition is improved.
In one exemplary embodiment, the acquisition unit includes:
and the acquisition module is used for continuously carrying out point cloud acquisition on an acquisition area of the area array TOF sensor through the area array TOF sensor to obtain multi-frame point cloud.
For optional examples of this embodiment, reference may be made to the examples described above for the dynamic obstacle identification method, which are not repeated here.
In one exemplary embodiment, the mapping unit includes:
the first execution module is configured to execute the following steps for each frame of point cloud of the multi-frame point cloud to obtain multiple partial graphs, where each frame of point cloud is a current frame of point cloud when the following steps are executed:
performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud;
and mapping points with heights above the ground point cloud in the current frame point cloud and smaller than or equal to a first height threshold value in the current frame point cloud into a map to obtain a partial map corresponding to the current frame point cloud.
For optional examples of this embodiment, reference may be made to the examples described above for the dynamic obstacle identification method, which are not repeated here.
In one exemplary embodiment, the first execution module includes:
the mapping sub-module is used for mapping points with the height smaller than or equal to the second height threshold value in the point cloud of the current frame into one image to obtain a reference image corresponding to the point cloud of the current frame;
the calculation sub-module is used for calculating the height difference between any two adjacent points in the reference graph;
the first determining submodule is used for determining two adjacent points with the height difference smaller than or equal to the height difference threshold value as candidate ground points corresponding to the point cloud of the current frame;
The clustering sub-module is used for performing clustering operation on the candidate ground points to obtain a plurality of candidate ground point clouds;
and the second determining submodule is used for determining the ground point cloud in the current frame point cloud from the plurality of candidate ground point clouds according to the point cloud parameters of the plurality of candidate ground point clouds.
For an optional example of this embodiment, reference may be made to the examples shown in the above method embodiment, and details are not repeated here.
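The ground-detection flow described by these sub-modules could be sketched as follows: low points are rasterised into a reference height image, cells whose height difference to a neighbouring cell is small become candidate ground, the candidates are clustered by connectivity, and one cluster is selected as the ground. Keeping the largest connected cluster is only one possible use of the "point cloud parameters", and all thresholds are assumed values.

```python
import numpy as np
from scipy import ndimage

def detect_ground(points, second_height_threshold=0.10, diff_threshold=0.02,
                  cell_size=0.05, grid_shape=(200, 200)):
    """Return a boolean grid marking the cells selected as ground."""
    low = points[points[:, 2] <= second_height_threshold]

    # Reference image: per-cell minimum height of the low points.
    ref = np.full(grid_shape, np.nan)
    ix = np.clip((low[:, 0] / cell_size + grid_shape[0] // 2).astype(int),
                 0, grid_shape[0] - 1)
    iy = np.clip((low[:, 1] / cell_size + grid_shape[1] // 2).astype(int),
                 0, grid_shape[1] - 1)
    for x, y, z in zip(ix, iy, low[:, 2]):
        if np.isnan(ref[x, y]) or z < ref[x, y]:
            ref[x, y] = z

    # Candidate ground: the height difference to an adjacent cell is small.
    dx = np.abs(np.diff(ref, axis=0, append=np.nan))
    dy = np.abs(np.diff(ref, axis=1, append=np.nan))
    candidate = ((dx <= diff_threshold) | (dy <= diff_threshold)) & ~np.isnan(ref)

    # Cluster candidate cells (8-connectivity) and keep the largest cluster.
    labels, n = ndimage.label(candidate, structure=np.ones((3, 3)))
    if n == 0:
        return np.zeros(grid_shape, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```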
In one exemplary embodiment, the clustering unit includes:
the second execution module is configured to execute the following steps on each of the plurality of partial graphs to obtain an obstacle included in each partial graph, where each partial graph is a current partial graph when the following steps are executed:
performing a clustering operation on the points in the current partial graph to obtain a plurality of candidate obstacles, wherein each candidate obstacle of the plurality of candidate obstacles corresponds to one cluster obtained by the clustering;
and selecting an obstacle with a size greater than or equal to a target size threshold from the plurality of candidate obstacles, so as to obtain the obstacles contained in the current partial graph.
For an optional example of this embodiment, reference may be made to the examples shown in the above method embodiment, and details are not repeated here.
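As an illustration of this clustering unit, the sketch below applies DBSCAN to the 2-D points of one partial map and keeps only clusters whose extent reaches a minimum size. DBSCAN and the numeric thresholds are assumptions; the patent only requires some clustering operation followed by a size filter.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_obstacles(points_2d, eps=0.08, min_samples=5,
                      target_size_threshold=0.05):
    """Cluster the (x, y) points of one partial map into candidate obstacles
    and keep clusters whose bounding-box diagonal is large enough."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_2d)
    obstacles = []
    for label in set(labels) - {-1}:  # label -1 marks DBSCAN noise points
        cluster = points_2d[labels == label]
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        if np.linalg.norm(extent) >= target_size_threshold:
            obstacles.append({"centroid": cluster.mean(axis=0),
                              "size": extent,
                              "points": cluster})
    return obstacles
```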
In one exemplary embodiment, the identification unit includes:
the matching module is used for obtaining the positions of any obstacle in the acquisition area in the plurality of partial graphs by performing obstacle matching on the obstacles contained in each partial graph;
the determining module is used for determining a dynamic identification result of the any obstacle according to the positions of the any obstacle in the plurality of partial graphs, wherein the dynamic identification result is used for indicating whether the any obstacle is a dynamic obstacle.
For an optional example of this embodiment, reference may be made to the examples shown in the above method embodiment, and details are not repeated here.
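One way to realise the matching module, sketched below, is greedy nearest-centroid association across consecutive partial maps, which yields a per-obstacle sequence of positions. The association strategy and the distance gate are illustrative choices, since the patent only states that the obstacles in the partial graphs are matched.

```python
import numpy as np

def match_obstacles(obstacles_per_map, max_match_distance=0.2):
    """Associate obstacles across consecutive partial maps by nearest centroid
    and return one position track (list of centroids) per obstacle."""
    tracks = [[obs["centroid"]] for obs in obstacles_per_map[0]]
    for obstacles in obstacles_per_map[1:]:
        for track in tracks:
            if not obstacles:
                continue
            last = track[-1]
            dists = [np.linalg.norm(o["centroid"] - last) for o in obstacles]
            best = int(np.argmin(dists))
            if dists[best] <= max_match_distance:
                track.append(obstacles[best]["centroid"])
    return tracks
```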
In one exemplary embodiment, the determining module includes:
a third determining sub-module, configured to determine the device position of the cleaning device according to the movement parameters of the cleaning device while the cleaning device is in a moving state;
the conversion sub-module is used for converting the position of any obstacle in the plurality of partial graphs into the position under the world coordinate system according to the equipment position of the cleaning equipment to obtain a group of position sequences of any obstacle;
and the fourth determination submodule is used for determining the dynamic identification result of any obstacle according to the distance between two adjacent positions in a group of position sequences.
For an optional example of this embodiment, reference may be made to the examples shown in the above method embodiment, and details are not repeated here.
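The determining module can be illustrated as follows: each tracked position is transformed from the robot/sensor frame into the world frame using the device pose estimated from the motion parameters, and the obstacle is flagged as dynamic if two adjacent world positions differ by more than a threshold. The (x, y, yaw) pose format and the movement threshold are assumptions made for the sketch.

```python
import numpy as np

def is_dynamic(track_sensor_frame, device_poses, move_distance_threshold=0.05):
    """Return True if the obstacle's world-frame position changes between
    adjacent frames by more than the threshold, i.e. the obstacle itself moved."""
    world_positions = []
    for (px, py), (rx, ry, yaw) in zip(track_sensor_frame, device_poses):
        # 2-D rigid transform from the robot frame into the world frame.
        wx = rx + px * np.cos(yaw) - py * np.sin(yaw)
        wy = ry + px * np.sin(yaw) + py * np.cos(yaw)
        world_positions.append((wx, wy))

    for (x0, y0), (x1, y1) in zip(world_positions, world_positions[1:]):
        if np.hypot(x1 - x0, y1 - y0) > move_distance_threshold:
            return True
    return False
```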
It should be noted that the above modules are the same as the corresponding steps in the examples they implement and the application scenarios to which they apply, but they are not limited to the content disclosed in the above embodiments. It should also be noted that the above modules, as part of the apparatus, may run in the hardware environment shown in fig. 1, and may be implemented in software or in hardware, where the hardware environment includes a network environment.
According to yet another aspect of the embodiments of the present application, a storage medium is also provided. Optionally, in this embodiment, the above storage medium may be used to store program code for executing the dynamic obstacle recognition method according to any one of the above embodiments of the present application.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
S1, continuously collecting point clouds in the acquisition area of a target sensor through the target sensor on the cleaning device to obtain a multi-frame point cloud;
S2, mapping points with a height above the ground and less than or equal to a first height threshold value into one map according to each frame of point cloud of the multi-frame point cloud to obtain a plurality of partial maps;
S3, determining the obstacles contained in each partial map by performing a clustering operation on the points in each of the plurality of partial maps;
S4, sequentially performing obstacle recognition on the obstacles contained in each partial map to obtain the dynamic obstacles in the acquisition area.
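Tying steps S1 to S4 together, the following end-to-end sketch reuses the functions from the earlier sketches (acquire_point_clouds, build_partial_map, extract_obstacles, match_obstacles, is_dynamic). Using the minimum z of each frame as the ground height is a shortcut purely to keep the example short; the ground-detection sketch above would normally supply it.

```python
def identify_dynamic_obstacles(sensor, device_poses, num_frames=10):
    """Illustrative S1-S4 pipeline built from the sketches above."""
    frames = acquire_point_clouds(sensor, num_frames)                     # S1
    partial_maps = []
    for points in frames:                                                 # S2
        _, kept = build_partial_map(points, ground_height=points[:, 2].min())
        partial_maps.append(kept[:, :2])
    obstacles_per_map = [extract_obstacles(pts) for pts in partial_maps]  # S3
    tracks = match_obstacles(obstacles_per_map)                           # S4
    return [t for t in tracks if is_dynamic(t, device_poses[:len(t)])]
```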
Alternatively, specific examples in the present embodiment may refer to examples described in the above embodiments, which are not described in detail in the present embodiment.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, ROM, RAM, a mobile hard disk, a magnetic disk or an optical disk.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the above-mentioned dynamic obstacle recognition method, where the electronic device may be a server, a terminal, or a combination thereof.
Fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the present application, as shown in fig. 5, including a processor 502, a communication interface 504, a memory 506, and a communication bus 508, wherein the processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508, wherein,
A memory 506 for storing a computer program;
the processor 502 is configured to execute the computer program stored in the memory 506, and implement the following steps:
S1, continuously collecting point clouds in the acquisition area of a target sensor through the target sensor on the cleaning device to obtain a multi-frame point cloud;
S2, mapping points with a height above the ground and less than or equal to a first height threshold value into one map according to each frame of point cloud of the multi-frame point cloud to obtain a plurality of partial maps;
S3, determining the obstacles contained in each partial map by performing a clustering operation on the points in each of the plurality of partial maps;
S4, sequentially performing obstacle recognition on the obstacles contained in each partial map to obtain the dynamic obstacles in the acquisition area.
Optionally, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or only one type of bus. The communication interface is used for communication between the electronic device and other devices.
The memory may include RAM, or may include non-volatile memory, such as at least one disk storage device. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
As an example, the above memory 506 may include, but is not limited to, the acquisition unit 402, the mapping unit 404, the clustering unit 406, and the identification unit 408 of the above dynamic obstacle recognition device. In addition, it may also include, but is not limited to, other module units of the above device, which are not described in detail in this example.
The processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
It will be understood by those skilled in the art that the structure shown in fig. 5 is only schematic. The device implementing the above dynamic obstacle recognition method may be a terminal device, and the terminal device may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Devices, MID), a PAD, or the like. Fig. 5 does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components (such as a network interface, a display device, and the like) than shown in fig. 5, or have a configuration different from that shown in fig. 5.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware of a terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, ROM, RAM, a magnetic disk, an optical disk, or the like.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis. For parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described apparatus embodiments are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing descriptions are merely preferred embodiments of the present application. It should be noted that several improvements and modifications may be made by those of ordinary skill in the art without departing from the principles of the present application, and these improvements and modifications shall also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A method of identifying a dynamic obstacle, comprising:
continuously collecting point clouds in a collecting area of a target sensor through the target sensor on the cleaning equipment to obtain multi-frame point clouds;
according to each frame of point cloud of the multi-frame point cloud, mapping points with the height above the ground and smaller than or equal to a first height threshold value into one map so as to obtain a plurality of partial maps;
determining obstacles contained in each of the plurality of partial graphs by performing a clustering operation on points within each partial graph;
And sequentially identifying the obstacles contained in each partial graph to obtain the dynamic obstacle in the acquisition area.
2. The method according to claim 1, wherein the continuously performing, by the target sensor on the cleaning device, the point cloud acquisition on the acquisition area of the target sensor to obtain a multi-frame point cloud includes:
and continuously collecting point clouds in an acquisition area of the area array TOF sensor through the area array TOF sensor to obtain the multi-frame point clouds.
3. The method according to claim 1, wherein mapping points with a height above the ground and less than or equal to a first height threshold into one map according to each frame of the multi-frame point cloud to obtain a plurality of partial maps comprises:
the following steps are executed to each frame of point cloud of the multi-frame point cloud to obtain the multiple partial graphs, wherein when the following steps are executed, each frame of point cloud is the current frame of point cloud:
performing ground detection on the current frame point cloud to obtain a ground point cloud in the current frame point cloud;
and mapping points with heights above the ground point cloud in the current frame point cloud and smaller than or equal to the first height threshold value in the current frame point cloud into a map to obtain a local map corresponding to the current frame point cloud.
4. A method according to claim 3, wherein said performing ground detection on said current frame point cloud to obtain a ground point cloud in said current frame point cloud comprises:
mapping points with the height smaller than or equal to a second height threshold value in the current frame point cloud into a graph to obtain a reference graph corresponding to the current frame point cloud;
calculating the height difference between any two adjacent points in the reference graph;
determining two adjacent points with the height difference smaller than or equal to a height difference threshold value as candidate ground points corresponding to the current frame point cloud;
clustering operation is carried out on the candidate ground points, so that a plurality of candidate ground point clouds are obtained;
and determining the ground point cloud in the current frame point cloud from the plurality of candidate ground point clouds according to the point cloud parameters of the plurality of candidate ground point clouds.
5. The method of claim 1, wherein the determining the obstacles contained in each of the plurality of partial graphs by performing a clustering operation on points within each of the partial graphs comprises:
executing the following steps on each partial graph in the plurality of partial graphs to obtain an obstacle contained in each partial graph, wherein when executing the following steps, each partial graph is a current partial graph:
performing a clustering operation on the points in the current partial graph to obtain a plurality of candidate obstacles, wherein each candidate obstacle of the plurality of candidate obstacles corresponds to one cluster obtained by the clustering;
and selecting an obstacle with a size greater than or equal to a target size threshold from the plurality of candidate obstacles, so as to obtain the obstacles contained in the current partial graph.
6. The method according to any one of claims 1 to 5, wherein the sequentially performing obstacle recognition on the obstacles included in each partial graph to obtain the dynamic obstacle in the acquisition region includes:
obtaining the positions of any obstacle in the acquisition area in the plurality of partial graphs by performing obstacle matching on the obstacles contained in each partial graph;
and determining a dynamic identification result of any obstacle according to the positions of the any obstacle in the plurality of partial graphs.
7. The method of claim 6, wherein determining the dynamic identification result of the any obstacle according to the positions of the any obstacle in the plurality of partial graphs comprises:
determining the equipment position of the cleaning equipment according to the movement parameters of the cleaning equipment under the condition that the cleaning equipment is in a moving state;
converting the positions of the any obstacle in the plurality of partial graphs into positions in a world coordinate system according to the equipment position of the cleaning equipment, so as to obtain a group of position sequences of the any obstacle;
and determining the dynamic identification result of any obstacle according to the distance between two adjacent positions in the group of position sequences.
8. A dynamic obstacle recognition device, comprising:
the acquisition unit is used for continuously carrying out point cloud acquisition on an acquisition area of the target sensor through the target sensor on the cleaning equipment to obtain multi-frame point cloud;
the mapping unit is used for mapping points with the height above the ground and smaller than or equal to a first height threshold value into one map according to each frame of point cloud of the multi-frame point cloud so as to obtain a plurality of partial maps;
a clustering unit configured to determine an obstacle included in each of the partial graphs by performing a clustering operation on points within each of the partial graphs;
and the identification unit is used for sequentially identifying the obstacles contained in each partial graph to obtain the dynamic obstacle in the acquisition area.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program when run performs the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of claims 1 to 7 by means of the computer program.
CN202210550728.6A 2022-05-20 2022-05-20 Dynamic obstacle recognition method and device, storage medium and electronic device Pending CN117132879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210550728.6A CN117132879A (en) 2022-05-20 2022-05-20 Dynamic obstacle recognition method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN117132879A (en) 2023-11-28

Family

ID=88849558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210550728.6A Pending CN117132879A (en) 2022-05-20 2022-05-20 Dynamic obstacle recognition method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN117132879A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination