CN116486130A - Obstacle recognition method, device, self-mobile device and storage medium - Google Patents


Info

Publication number: CN116486130A
Application number: CN202310376907.7A
Authority: CN (China)
Prior art keywords: point cloud, cloud data, obstacle, candidate, self
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 陈俊全, 刘元财, 张泫舜, 陈浩宇
Current assignee: Ecoflow Technology Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Ecoflow Technology Ltd
Priority date: the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed
Filing: application filed by Ecoflow Technology Ltd; priority to CN202310376907.7A; published as CN116486130A

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762: Arrangements using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/82: Arrangements using pattern recognition or machine learning using neural networks
    • G06N: Computing Arrangements Based on Specific Computational Models
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/08: Learning methods


Abstract

The application belongs to the technical field of obstacle recognition and provides an obstacle recognition method and apparatus, a self-moving device, and a storage medium. The method comprises the following steps: acquiring an environment image of the environment where the self-moving device is located and environment point cloud data corresponding to the environment image; determining obstacle probability information corresponding to each position in the environment image; determining first candidate obstacle point cloud data from the environment point cloud data according to the pose of each point in the environment point cloud data relative to the self-moving device; clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data; and determining an obstacle recognition result for the second candidate obstacle point cloud data according to the obstacle probability information corresponding to it. The method effectively mitigates the problems of incorrect obstacle recognition and poor obstacle avoidance caused by environmental interference with the self-moving device, improves the accuracy with which the self-moving device recognizes obstacles, and thereby improves its working efficiency.

Description

Obstacle recognition method, device, self-mobile device and storage medium
Technical Field
The present disclosure relates to the field of self-moving devices, and in particular to an obstacle recognition method and apparatus, a self-moving device, and a computer storage medium.
Background
During movement, a self-moving device such as a mower or a sweeper must accurately identify the obstacles around it in order to move smoothly and avoid collisions. In the related art, however, obstacle identification is easily affected by the external environment, which degrades the obstacle avoidance effect and reduces the working efficiency of the self-moving device. For example, when an outdoor mower raises dust during operation, the dust is easily misidentified as an obstacle and avoided, increasing the frequency of pointless avoidance maneuvers and greatly reducing the mower's operating efficiency. Conversely, when a sweeper cannot sense uneven ground or stairs ahead, it cannot avoid the hazard effectively and is easily damaged, so its working efficiency is also low.
It can be seen that the poor obstacle avoidance effect in the related art results in low operating efficiency of the self-moving device.
Disclosure of Invention
The embodiments of the present application mainly aim to provide an obstacle recognition method and apparatus, a self-moving device, and a computer storage medium, in order to solve the problem of incorrect obstacle recognition caused by environmental interference with the self-moving device, which degrades the obstacle avoidance effect, and thereby to improve the accuracy of obstacle recognition and, in turn, the operating efficiency of the self-moving device.
In a first aspect, an embodiment of the present application provides a method for identifying an obstacle, including:
acquiring an environment image of the environment where the self-moving device is located and environment point cloud data corresponding to the environment image;
determining obstacle probability information corresponding to each position in the environment image;
determining first candidate obstacle point cloud data from the environmental point cloud data according to the pose of each point in the environmental point cloud data relative to the self-mobile device;
clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data;
and determining an obstacle recognition result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data.
In a second aspect, embodiments of the present application further provide an obstacle identifying apparatus, including:
The data acquisition module is used for acquiring an environment image of the environment where the self-moving device is located and environment point cloud data corresponding to the environment image;
the data processing module is used for determining obstacle probability information corresponding to each position in the environment image;
the data screening module is used for determining first candidate obstacle point cloud data from the environment point cloud data according to the pose of each point in the environment point cloud data relative to the self-mobile device;
the data clustering module is used for clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data;
the data identification module is used for determining an obstacle identification result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data.
In a third aspect, embodiments of the present application further provide a self-moving device comprising a processor, a memory, a computer program stored in the memory and executable by the processor, and a data bus enabling communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of the obstacle recognition method provided in any embodiment of the present application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs which are executable by one or more processors to implement the steps of the obstacle recognition method provided in any embodiment of the present application.
The embodiments of the present application provide an obstacle recognition method and apparatus, a self-moving device, and a storage medium. The method acquires an environment image of the environment where the self-moving device is located and environment point cloud data corresponding to the environment image; identifies each position in the environment image and determines the obstacle probability information corresponding to each position; determines first candidate obstacle point cloud data from the environment point cloud data according to the pose information of each point relative to the self-moving device; clusters the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data; and finally determines an obstacle recognition result for the second candidate obstacle point cloud data according to the corresponding obstacle probability information. By using the obstacle probability information identified from the environment image, combined with the pose of the environment point cloud data relative to the self-moving device and with the clustering result, the method filters out non-obstacles, effectively reduces their interference, improves the accuracy with which the self-moving device recognizes obstacles, and thereby improves its working efficiency.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an obstacle recognition method according to an embodiment of the present application;
FIG. 2 is a flowchart of steps corresponding to one embodiment of step S130 in FIG. 1;
fig. 3 is a schematic block diagram of an obstacle identifying apparatus according to an embodiment of the present application;
fig. 4 is a schematic block diagram of a self-mobile device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative; they need not include every element or operation/step, nor follow the order described. For example, some operations/steps may be subdivided, combined, or partially combined, so the actual execution order may change according to the actual situation.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Taking a mowing robot as an example of the self-moving device: while the robot mows a lawn and performs obstacle recognition, dust or debris raised by the robot itself or by other factors may cause it to recognize non-obstacles as obstacles and carry out erroneous avoidance. Alternatively, if the robot fails to recognize uneven ground or steps, it may slip or be unable to avoid the hazard in time, which reduces its working efficiency and, in severe cases, damages the robot.
Therefore, an obstacle recognition method is needed that enables the self-moving device to accurately filter out non-obstacles and recognize genuine obstacles, reducing the frequency of erroneous avoidance and thereby improving the device's working efficiency.
The method provided by the embodiments of the present application may be executed by a first device or by a chip in the first device. The first device may be a device other than the self-moving device, such as a server or an electronic device (for example a mobile phone), or it may be the self-moving device itself. The self-moving device may be a device with a self-moving assistance function; this function may be implemented by a vehicle-mounted terminal, in which case the corresponding self-moving device is a vehicle carrying that terminal. The self-moving device may also be semi-autonomous or fully autonomous, for example a mowing robot, a sweeping robot, a mine-clearing robot, or a cruise robot. When the first device is a device other than the self-moving device, it may communicate with the self-moving device: for example, an APP (application program) corresponding to the self-moving device may be installed on a mobile phone, and the user may operate the APP to establish a communication connection between the phone and the device. When the first device is a server, the user may, in the APP of the phone that communicates with the self-moving device, trigger the device to report the environment image and the environment point cloud data to the server, so that the server executes the obstacle recognition method provided by the embodiments of the present application. Alternatively, the self-moving device may provide a first control which, when triggered by the user, causes the device to report the depth image it has acquired to the server for the same purpose.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flow chart of an obstacle recognition method according to an embodiment of the present application.
As shown in fig. 1, the obstacle recognition method includes steps S110 to S150.
S110: acquire an environment image of the environment where the self-moving device is located and environment point cloud data corresponding to the environment image.
The ambient image may include, but is not limited to, a color image (i.e., RGB image), a depth image, a grayscale image, and the like, among others.
The first device may obtain an RGB image and a depth image of the environment where the self-moving device is located through an RGB-D depth camera mounted on the device, and derive the environment point cloud data of that environment from the depth image.
In some embodiments, the environmental point cloud data may also be acquired by a laser scanner or other instrument such as a lidar.
For example, in a two-dimensional image from a conventional camera, such as an RGB image, any pixel can be located by its (x, y) coordinates, each carrying three color attributes (R, G, B). In a depth image, each (x, y) coordinate instead corresponds to four attributes (Depth, R, G, B), where Depth is the distance from the depth camera to the pixel point (x, y).
The RGB image provides the x and y coordinates in the pixel coordinate system, while Depth provides the z coordinate in the camera coordinate system, i.e. the distance from the camera to the real point. From the RGB image and the intrinsic parameters of the depth camera, the coordinates of each pixel in the camera coordinate system are computed, giving the (Depth, R, G, B) information of each real point in that frame; then, from this information and the extrinsic parameters of the depth camera, the coordinates of each real point in the world coordinate system are computed, which constitute the environment point cloud data corresponding to the environment image. The intrinsic parameters convert data between the camera coordinate system and the pixel coordinate system, while the extrinsic parameters convert data between the world coordinate system and the camera coordinate system; obtaining the intrinsic and extrinsic parameters (camera calibration) is prior art and is not detailed in this application.
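The pixel-to-world conversion above can be sketched as a standard pinhole-camera back-projection. This is a minimal numpy illustration; the function name, argument names, and matrix conventions are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def depth_to_point_cloud(depth, K, T_world_cam):
    """Back-project a depth image into a world-frame point cloud.

    depth       : (H, W) array of depths in metres (z in the camera frame)
    K           : (3, 3) intrinsic matrix of the depth camera
    T_world_cam : (4, 4) extrinsic transform from camera to world frame
    """
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                     # pixel -> camera coordinates
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = (T_world_cam @ pts_cam.T).T   # camera -> world coordinates
    return pts_world[:, :3]
```

With identity intrinsics and extrinsics this reduces to (u, v, depth) triples, which makes the mapping between image positions and point cloud points easy to verify.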
S120: obstacle probability information corresponding to each position in the environmental image is determined.
In some embodiments, obstacle detection is performed on the environment image by a target detection model, yielding a target area for each obstacle together with the probability that it is an obstacle. Each position within the target area is labeled with this probability, which is the obstacle probability information corresponding to that position in the environment image. It can be understood that the target detection model may be obtained by training a convolutional neural network; the network and the specific training method may be any existing ones in the related art and are not limited by this application.
In some embodiments, the environment image may instead be segmented by thresholding to obtain target areas where an obstacle may be present: pixels above the set threshold are assigned to target areas, while pixels below it are treated as background or non-target areas. The threshold may be chosen according to actual requirements. After the candidate target areas have been obtained, each area is classified to obtain the probability that it is an obstacle. For example, if the target-area classification model distinguishes two classes, non-obstacle and obstacle, then passing a target area through the model yields a non-obstacle probability and an obstacle probability whose sum equals 1.
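The thresholding and two-class classification described above can be sketched as follows. This is a minimal numpy illustration; the threshold value and the softmax-style classifier head are assumptions for illustration, as the patent does not prescribe a specific model:

```python
import numpy as np

def threshold_target_mask(gray, threshold=128):
    """Binary mask of candidate target pixels: True above the set
    threshold (possible obstacle), False for background/non-target."""
    return gray > threshold

def two_class_probs(logits):
    """Softmax over (non-obstacle, obstacle) scores for one target area;
    the two resulting probabilities sum to 1, as described above."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()
```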
S130: determine first candidate obstacle point cloud data from the environment point cloud data according to the pose of each point in the environment point cloud data relative to the self-moving device.
The environment point cloud data contains a large number of points, and prior knowledge tells us that some targets in the environment, although they would nominally be classified as obstacles, are at times insufficient to pose a threat to the self-moving device during operation; identifying and avoiding such targets would only reduce the device's working efficiency. Therefore, according to the pose information of each point relative to the self-moving device, the environment point cloud data that the device actually needs to avoid is retained as the first candidate obstacle point cloud data, while the remaining environment point cloud data that requires no avoidance can be removed directly, without any obstacle judgment.
S140: cluster the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data.
Specifically, the first candidate obstacle point cloud data is clustered into point cloud groups, and each group is analyzed to estimate the probability that it contains an obstacle. When a group does not meet the clustering requirement, its points are removed from the first candidate obstacle point cloud data; the remaining points form the second candidate obstacle point cloud data. One clustering requirement is a point-count threshold: if the number of points in a clustered group meets a preset threshold, its points are kept as second candidate obstacle point cloud data; otherwise the requirement is not met. Another clustering requirement concerns height: when the lowest point of a clustered group is far above a preset height, the probability that the group contains a real obstacle is low. For example, if the preset height is 2 m but the lowest point in the group is at 5 m, the object formed by the group may be dust; the group's points can then be removed from the first candidate obstacle point cloud data without any subsequent obstacle recognition step.
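The two clustering requirements, a minimum point count and a lowest point that does not float far above a preset height, can be sketched as follows, assuming cluster labels have already been produced by some clustering algorithm such as Euclidean clustering or DBSCAN. Names and thresholds here are illustrative:

```python
import numpy as np

def filter_clusters(points, labels, min_points=10, max_floor_height=2.0):
    """Keep only clusters that plausibly contain an obstacle.

    points : (N, 3) candidate obstacle points (z = height above ground)
    labels : (N,) cluster id per point (-1 = noise)
    A cluster is rejected if it has too few points, or if its lowest
    point floats far above the ground (e.g. airborne dust).
    """
    keep = np.zeros(len(points), dtype=bool)
    for cid in np.unique(labels):
        if cid == -1:
            continue                       # noise points are dropped
        mask = labels == cid
        if mask.sum() < min_points:
            continue                       # too sparse to be an obstacle
        if points[mask, 2].min() > max_floor_height:
            continue                       # cluster floats in the air: dust
        keep |= mask
    return points[keep], labels[keep]
```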
S150: the first device determines an obstacle recognition result for the second candidate obstacle point cloud data according to the corresponding obstacle probability information.
As described in step S110, the environment image corresponds point-for-point to the environment point cloud data; therefore the obstacle probability information for each point in the second candidate obstacle point cloud data is the probability information of the corresponding position in the environment image from step S120.
For example, the probability values of the points in a clustered point cloud group may be averaged: when the average exceeds a preset probability threshold, the second candidate obstacle point cloud data corresponding to the group is judged to be an obstacle; otherwise it is judged to be a non-obstacle.
Alternatively, a voting mechanism may be adopted: count the points in the clustered point cloud group whose probability is greater than or equal to the preset probability threshold. When this count exceeds a preset number, the second candidate obstacle point cloud data corresponding to the group is judged to be an obstacle; otherwise it is not.
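Both decision rules, the average over the group and the voting mechanism, can be sketched in a few lines. Parameter names and default thresholds are illustrative assumptions:

```python
import numpy as np

def classify_cluster(probs, prob_threshold=0.5, mode="mean", min_votes=5):
    """Decide whether a cluster is an obstacle from per-point probabilities.

    probs : per-point obstacle probabilities looked up from the image.
    mode  : "mean" -> compare the group's average probability to the
            threshold; "vote" -> count points at or above the threshold
            and require a minimum number of votes.
    """
    probs = np.asarray(probs, dtype=float)
    if mode == "mean":
        return bool(probs.mean() > prob_threshold)
    if mode == "vote":
        return bool((probs >= prob_threshold).sum() >= min_votes)
    raise ValueError(f"unknown mode: {mode}")
```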
It can be understood that the method obtains an environment image and the corresponding environment point cloud data, determines the obstacle probability information for each position in the environment image, and determines first candidate obstacle point cloud data from the environment point cloud data according to the pose of each point relative to the self-moving device; it then clusters the first candidate obstacle point cloud data and analyzes the clustering result to obtain second candidate obstacle point cloud data; finally, it uses the second candidate obstacle point cloud data and the obstacle probability information of each of its points to obtain the obstacle recognition result and filter out non-obstacles. By combining the environment image with the environment point cloud data in this way, the method effectively mitigates the incorrect obstacle recognition caused by environmental interference with the self-moving device, which would otherwise degrade the obstacle avoidance effect; it thus improves the accuracy of obstacle recognition and, in turn, the working efficiency of the self-moving device.
In some embodiments, determining the obstacle probability information corresponding to each position in the environment image includes: dividing the environment image into a plurality of sub-environment image areas based on preset obstacle categories; and determining the obstacle probability information according to the position of each sub-environment image area.
Specifically, the environment image is divided into a plurality of sub-environment image areas, each of which is identified to determine its probability of being an obstacle. Once this probability has been determined for every sub-environment image area, it is assigned to each position within the corresponding area.
For example, semantic segmentation may be performed on the environment image to obtain the semantic regions of the different objects, i.e. the sub-environment image areas, and hence the probability that each area is an obstacle. The segmentation may use an FCN (Fully Convolutional Network), which replaces the fully connected layers of earlier architectures with convolutions and can therefore make dense predictions without any fully connected layer. An FCN accepts an input image of any size, upsamples the feature map of the last convolutional layer back to the input size so as to produce a prediction for each pixel while preserving the spatial information of the input, and finally classifies pixel by pixel on the upsampled feature map. This yields the obstacle probability information for each position in the environment image.
Referring to fig. 2, in some embodiments step S130 includes steps S210 to S250; that is, the first device determines the first candidate obstacle point cloud data from the environment point cloud data, according to the pose of each point relative to the self-moving device, as follows.
S210: filter the environment point cloud data to obtain first processed point cloud data.
The environment point cloud data is filtered to remove noise data and thereby eliminate its interference with obstacle recognition.
Specifically, the self-moving device performs obstacle avoidance in real time within a preset range of its current position, avoiding an obstacle when one is identified; the real-time process therefore need not consider obstacles outside that range, such as distant dust. Environment point cloud data whose depth (which may be regarded approximately as the distance from the point to the self-moving device) exceeds a preset distance threshold can thus be treated as point cloud noise; after this noise is removed, the remaining environment point cloud data is taken as the first processed point cloud data.
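The distance-threshold noise removal can be sketched as follows. This is a minimal numpy illustration; the function name and range value are illustrative assumptions:

```python
import numpy as np

def remove_far_noise(points, device_pos, max_range=10.0):
    """Drop points beyond the real-time obstacle-avoidance range.

    Points farther than `max_range` metres from the device are treated
    as point cloud noise (e.g. distant dust) and removed.
    """
    dist = np.linalg.norm(points - device_pos, axis=1)
    return points[dist <= max_range]
```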
For example, median filtering, Wiener filtering, Kalman filtering, etc. may be used in the filtering process; the specific filtering means is not particularly limited here.
S220: determine the position and angle of each point in the first processed point cloud data relative to the coordinate system of the self-moving device.
Specifically, each point in the first processed point cloud data is converted into a position in the horizontal-plane coordinate system of the self-moving device, and the angle between the line connecting the point to the device and the device's horizontal plane is calculated.
S230: determine as first-type obstacle point cloud data those points of the first processed point cloud data whose position lies below the horizontal plane of the self-moving device and whose angle exceeds a first preset angle threshold.
The first preset angle threshold may be set according to different concave terrains relative to a horizontal plane where the mobile device is located, and is not limited herein.
In an exemplary embodiment, if a hollow area exists in front of the self-mobile device, each point in the point cloud data corresponding to the hollow area is located below the horizontal plane coordinate system where the self-mobile device is located; that is, the height of each point in the point cloud data corresponding to the hollow area is lower than the height of the plane where the self-mobile device is located. In addition, the included angle formed by each point in the point cloud data corresponding to the hollow area and the plane where the self-mobile device is located is necessarily larger than the first preset angle threshold.
Therefore, the first processing point cloud data whose position is lower than the horizontal plane where the self-mobile device is located and whose angle is larger than the first preset angle threshold may be determined as the first type obstacle point cloud data. The first type obstacle point cloud data is then transmitted to the self-mobile device so that the self-mobile device can avoid the obstacle. The obstacle types corresponding to the first type obstacle point cloud data are pits, steps, cliffs, and the like.
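The test in S230 can be sketched as below. This is an illustrative assumption of the geometry: the device plane is taken as z = 0 with Z pointing up, and the 30-degree threshold is a hypothetical value for the first preset angle threshold.

```python
import numpy as np

def first_type_obstacle_mask(points, angle_thresh_deg=30.0):
    """points: (N, 3) in the device frame, Z up, device plane at z = 0.
    A point is flagged as first type obstacle data (pit/step/cliff) when
    it lies below the device plane AND the line from the device to the
    point dips more steeply than the hypothetical angle threshold."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    horiz = np.hypot(x, y)                       # horizontal distance
    dip = np.degrees(np.arctan2(-z, horiz))      # positive when below plane
    return (z < 0) & (dip > angle_thresh_deg)
```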
S240, removing the first type obstacle point cloud data from the first processing point cloud data to obtain second processing point cloud data.
And removing the first type of obstacle point cloud data from the first processing point cloud data to obtain second processing point cloud data, and reducing interference of the point cloud data corresponding to the known obstacle type on the identification of the subsequent obstacle.
S250, obtaining first candidate obstacle point cloud data according to the second processing point cloud data.
Since there may exist in the environment a slope that reaches the climbing limit of the self-mobile device, or some obstacle that cannot be climbed over, the point cloud data belonging to such a slope or non-climbable obstacle may be used as the first candidate obstacle point cloud data.
In some embodiments, obtaining first candidate obstacle point cloud data from the second processed point cloud data includes: determining normal vectors corresponding to each point in the second processing point cloud data; determining an included angle between the normal vector and a coordinate system where the self-mobile device is located; and taking the second processing point cloud data with the included angle larger than a second preset angle threshold value as first candidate obstacle point cloud data.
The normal vector of a point in the second processing point cloud data is the normal vector of the plane obtained by fitting the current point and at least two surrounding points. The coordinate system of the self-mobile device comprises a horizontal axis (X axis), a longitudinal axis (Y axis), and a vertical axis (Z axis). The included angle between the normal vector of the fitted plane and the coordinate system of the self-mobile device is calculated as the included angle between the normal vector and the positive direction of the Z axis in the coordinate system of the self-mobile device.
It will be appreciated that the second preset angle threshold may be determined in accordance with a user setting. For example, the second preset angle threshold is 45 degrees, so when the included angle between the normal vector of the point in the second processing point cloud data and the positive direction of the Z axis in the coordinate system of the self-mobile device is greater than 45 degrees, the point cloud data corresponding to the point is used as the first candidate obstacle point cloud data to continue obstacle identification.
For example, when the angle between the normal vector of the corresponding point of the slope and the positive direction of the Z-axis in the coordinate system of the self-moving device is smaller than 45 degrees, the self-moving device can smoothly pass through the slope with the angle smaller than 45 degrees, so the slope is not an obstacle. However, when the included angle between the normal vector of the corresponding point of the slope and the positive direction of the Z axis in the coordinate system of the self-moving device is larger than 45 degrees, the self-moving device cannot smoothly pass through the slope with the gradient larger than 45 degrees, and therefore the slope is an obstacle.
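The normal-vector test above can be sketched as follows, assuming per-point unit normals have already been obtained from local plane fits; the function name is hypothetical and 45 degrees stands in for the second preset angle threshold.

```python
import numpy as np

def slope_obstacle_mask(normals, angle_thresh_deg=45.0):
    """normals: (N, 3) surface normals from local plane fits.
    A point becomes first candidate obstacle data when its normal makes
    an angle greater than angle_thresh_deg with the +Z axis of the
    device frame (i.e. the local surface is too steep to climb)."""
    z_axis = np.array([0.0, 0.0, 1.0])
    cos_a = normals @ z_axis / np.linalg.norm(normals, axis=1)
    angles = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angles > angle_thresh_deg
```

A flat lawn yields normals near +Z (small angle, kept as traversable), while a wall or steep bank yields near-horizontal normals (large angle, flagged as a candidate obstacle).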
In some embodiments, clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data includes: clustering the screened first candidate obstacle point cloud data to obtain a plurality of groups of third processing point cloud data; and taking third processing point cloud data with the number of points being larger than a preset number threshold value as second candidate obstacle point cloud data.
In an exemplary embodiment, a hierarchical clustering method is applied to the first candidate obstacle point cloud data to obtain a plurality of groups of third processing point cloud data. When the number of points in a group of third processing point cloud data is smaller than a preset number threshold, the group is insufficient to support a determination that it corresponds to an obstacle; therefore, only third processing point cloud data whose number of points is greater than the preset number threshold is used as the second candidate obstacle point cloud data.
For example, the first candidate obstacle point cloud data includes 100 points, and after hierarchical clustering, 3 groups of third processing point cloud data A, B, C are obtained, with 20, 30, and 50 points respectively. When the preset number threshold is 25, the number of points in A does not meet the requirement of being greater than the preset number threshold, so only the point cloud data corresponding to B and C needs to be used as the second candidate obstacle point cloud data.
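The cluster-size filter from the example above can be sketched as follows; the clustering step itself is assumed to have already produced per-point labels, and the function name and threshold are hypothetical.

```python
from collections import Counter

def filter_clusters(labels, min_points=25):
    """labels: per-point cluster ids produced by any clustering step
    (e.g. hierarchical clustering). Returns the set of cluster ids
    whose point count exceeds the hypothetical preset number threshold;
    only those clusters become second candidate obstacle data."""
    counts = Counter(labels)
    return {cluster for cluster, n in counts.items() if n > min_points}
```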
In some embodiments, the method of clustering the first candidate obstacle point cloud data includes, but is not limited to, K-means clustering, density-based clustering, hierarchical clustering, and the like.
In some embodiments, the obstacle recognition result includes obstacle point cloud data or non-obstacle point cloud data; determining an obstacle recognition result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data, including: acquiring the height information of each point in the second candidate obstacle point cloud data; and determining the second candidate obstacle point cloud data as obstacle point cloud data when the obstacle probability information corresponding to the second candidate obstacle point cloud data is larger than or equal to a preset probability threshold value and the heights of points in the second candidate obstacle point cloud data are continuous according to each group of the second candidate obstacle point cloud data.
In an embodiment, for each set of second candidate obstacle point cloud data, the following is performed: determining the highest height of the second candidate obstacle point cloud data, and dividing a preset number of intervals based on the highest height, wherein the heights of all the intervals are the same; counting the number of points of the second candidate obstacle point cloud data in each interval. If the number in an interval is less than a preset number threshold, that interval is discontinuous with the other intervals, that is, the heights of the points in the second candidate obstacle point cloud data are discontinuous; otherwise, the heights of the points are continuous. For example, the highest height is 30 cm and 6 intervals of 5 cm each are divided; the number of points in each of the 6 intervals is counted, and if the number of points in any interval is less than a preset number threshold of 5, the heights are discontinuous.
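The interval-based continuity check can be sketched as follows; the function name is hypothetical, and the bin count and per-bin threshold mirror the 6-interval, 5-point example above.

```python
def heights_continuous(heights, num_bins=6, min_per_bin=5):
    """Split [0, max(heights)] into num_bins equal intervals, count the
    points per interval, and treat the height profile as continuous
    only when every interval holds at least min_per_bin points.
    num_bins and min_per_bin are hypothetical preset values."""
    top = max(heights)
    width = top / num_bins
    counts = [0] * num_bins
    for h in heights:
        # clamp the maximum height into the last interval
        idx = min(int(h / width), num_bins - 1)
        counts[idx] += 1
    return all(c >= min_per_bin for c in counts)
```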
In another embodiment, whether the heights of the points in the second candidate obstacle point cloud data are continuous may be judged from the variance or standard deviation of the heights of the points. The variance reflects the degree of deviation between the heights of the points and their average height, and can therefore be used to measure the dispersion of the heights. The larger the variance, the larger the fluctuation of the heights of the points, and the heights are discontinuous; the smaller the variance, the smaller the fluctuation, and the heights are continuous. The preset variance threshold for judging height continuity or discontinuity may be set according to actual requirements. The standard deviation is the arithmetic square root of the variance, so it is known once the variance is known and likewise reflects the dispersion of a data set; whether the heights of the points are continuous may therefore also be judged by setting a preset standard deviation threshold.
Further, whether the heights of the points in the second candidate obstacle point cloud data are continuous or not can be judged by setting a preset variance threshold and a preset standard deviation threshold in a combined mode.
Taking judging whether the heights of points in the second candidate obstacle point cloud data are continuous according to the variance as an example: the variance of the heights of the points in the first group of second candidate obstacle point cloud data is 0.088, the variance for the second group is 5.7, and the variance for the third group is 0.088. If the preset variance threshold is 0.1, the variances of the first and third groups meet the requirement, so their heights are continuous; the variance of the second group does not meet the preset variance threshold, so its heights are discontinuous, and the second group of second candidate obstacle point cloud data is judged to be non-obstacle point cloud data.
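The variance-based continuity test can be sketched as below; the function name is hypothetical and the 0.1 threshold matches the example above.

```python
from statistics import pvariance

def continuity_by_variance(heights, var_thresh=0.1):
    """Treat the heights of a group of second candidate obstacle points
    as continuous when their population variance stays below the
    hypothetical preset variance threshold."""
    return pvariance(heights) < var_thresh
```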
When the heights of the points in the second candidate obstacle point cloud data are discontinuous, the second candidate obstacle point cloud data is judged to be non-obstacle point cloud data. When the heights are continuous, the obstacle probability information at the corresponding positions in the environment image is obtained, the obstacle probability information corresponding to all the points is averaged, and the average value is taken as the obstacle probability information of the second candidate obstacle point cloud data. If this value is greater than or equal to the preset probability threshold, the second candidate obstacle point cloud data is judged to be obstacle point cloud data; otherwise, it is non-obstacle point cloud data.
Alternatively, when the heights of the points in the second candidate obstacle point cloud data are continuous, the obstacle probability information at the corresponding positions in the environment image is obtained, and a voting mechanism is applied to the obstacle probability information corresponding to all the points: a point whose obstacle probability information is greater than or equal to the preset probability threshold casts a vote for obstacle point cloud data, and a point whose obstacle probability information is smaller than the preset probability threshold casts a vote for non-obstacle point cloud data. When the votes for obstacle point cloud data are greater than or equal to the votes for non-obstacle point cloud data, the second candidate obstacle point cloud data is judged to be obstacle point cloud data; otherwise, it is judged to be non-obstacle point cloud data.
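The voting mechanism can be sketched as below; the function name is hypothetical, and 0.5 stands in for the preset probability threshold.

```python
def vote_is_obstacle(point_probs, prob_thresh=0.5):
    """Each point votes 'obstacle' when its per-point probability from
    the environment image is >= prob_thresh (a hypothetical preset).
    The group is judged an obstacle when obstacle votes are greater
    than or equal to non-obstacle votes (ties count as obstacle)."""
    yes = sum(1 for p in point_probs if p >= prob_thresh)
    no = len(point_probs) - yes
    return yes >= no
```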
In some embodiments, determining whether the second candidate obstacle point cloud data is obstacle point cloud data or non-obstacle point cloud data further comprises: determining a maximum height value from the height information of each point in each set of second candidate obstacle point cloud data; and determining the second candidate obstacle point cloud data with the maximum height value smaller than the preset height threshold value as non-obstacle point cloud data.
In an exemplary embodiment, the maximum height value is determined from the height information of each point in each set of the second candidate obstacle point cloud data, and when the maximum height value is smaller than the preset height threshold value, it may be determined that the self-mobile device may operate normally without obstacle avoidance, so that the second candidate obstacle point cloud data is determined to be non-obstacle point cloud data.
For example, if the diameter of the wheel of the self-mobile device is 30 cm, the radius of the wheel is 15 cm, and it may be determined that any object whose height is lower than the wheel radius does not affect the normal operation of the self-mobile device. If the maximum height value determined from the height information of each point in the second candidate obstacle point cloud data is 3 cm, the second candidate obstacle point cloud data is determined to be non-obstacle point cloud data.
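The height-threshold check in the wheel example can be sketched as follows; the function name is hypothetical, and the 15 cm default mirrors the wheel-radius example.

```python
def is_traversable(heights, wheel_radius_cm=15.0):
    """If the tallest point in a candidate group stays below the wheel
    radius (serving here as the hypothetical preset height threshold),
    the device can drive over it and the group is non-obstacle data."""
    return max(heights) < wheel_radius_cm
```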
Referring to fig. 3, fig. 3 is a diagram illustrating an obstacle identifying apparatus 300 according to an embodiment of the present application, where the obstacle identifying apparatus 300 includes a data obtaining module 301, a data processing module 302, a data filtering module 303, a data clustering module 304, and a data identifying module 305.
The data acquisition module 301 is configured to acquire an environment image of an environment in which the mobile device is located and environment point cloud data corresponding to the environment image.
The data processing module 302 is configured to determine obstacle probability information corresponding to each position in the environmental image.
The data filtering module 303 is configured to determine first candidate obstacle point cloud data from the environmental point cloud data according to a pose of each point in the environmental point cloud data relative to the self-mobile device.
The data clustering module 304 is configured to cluster the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data.
The data identifying module 305 is configured to determine an obstacle identifying result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data.
In some implementations, the data processing module 302 performs, in determining the obstacle probability information for each location in the environmental image:
dividing the environment image into a plurality of sub-environment image areas based on a preset obstacle category;
and determining the obstacle probability information according to the position of each sub-environment image area.
In some implementations, the pose includes a position and an angle, and the data filtering module 303 performs, in determining the first candidate obstacle point cloud data from the ambient point cloud data based on the pose of each point in the ambient point cloud data relative to the self-mobile device:
Filtering the environmental point cloud data to obtain first processed point cloud data;
determining the position and the angle of each point in the first processing point cloud data relative to a coordinate system of the self-mobile device;
determining first processing point cloud data whose position relative to the coordinate system of the self-mobile device is lower than the horizontal plane where the self-mobile device is located and whose angle is larger than a first preset angle threshold as first type obstacle point cloud data;
removing the first type of obstacle point cloud data from the first processing point cloud data to obtain second processing point cloud data;
and obtaining first candidate obstacle point cloud data according to the second processing point cloud data.
In some embodiments, the data filtering module 303 performs, in obtaining the first candidate obstacle point cloud data from the second processed point cloud data:
determining normal vectors corresponding to each point in the second processing point cloud data;
determining an included angle between the normal vector and a coordinate system where the self-mobile device is located;
and taking the second processing point cloud data with the included angle larger than a second preset angle threshold value as first candidate obstacle point cloud data.
In some embodiments, the data clustering module 304 performs, in clustering the first candidate obstacle point cloud data to obtain the second candidate obstacle point cloud data:
Clustering the screened first candidate obstacle point cloud data to obtain a plurality of groups of third processing point cloud data;
and taking third processing point cloud data with the number of points being larger than a preset number threshold value as second candidate obstacle point cloud data.
In some embodiments, the obstacle recognition result includes obstacle point cloud data or non-obstacle point cloud data; the data identifying module 305 performs, in determining an obstacle identifying result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data:
acquiring the height information of each point in the second candidate obstacle point cloud data;
and determining the second candidate obstacle point cloud data as obstacle point cloud data when the obstacle probability information corresponding to the second candidate obstacle point cloud data is larger than or equal to a preset probability threshold value and the heights of points in the second candidate obstacle point cloud data are continuous according to each group of the second candidate obstacle point cloud data.
In some implementations, the data identification module 305 also performs:
determining a maximum height value from the height information of each point in each set of second candidate obstacle point cloud data;
And determining the second candidate obstacle point cloud data with the maximum height value smaller than the preset height threshold value as non-obstacle point cloud data.
It should be noted that, for convenience and brevity of description, specific working processes of the above-described apparatus may refer to corresponding processes in the foregoing embodiment of the obstacle identifying method, which are not described herein again.
Referring to fig. 4, fig. 4 is a schematic block diagram of a self-mobile device according to an embodiment of the present application.
As shown in fig. 4, the self-mobile device 400 includes a processor 401 and a memory 402, the processor 401 and the memory 402 being connected by a bus 403, such as an I2C (Inter-integrated Circuit) bus.
In particular, the processor 401 is used to provide computing and control capabilities, supporting the operation of the entire self-mobile device. The processor 401 may be a central processing unit (Central Processing Unit, CPU), but the processor 401 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Specifically, the Memory 402 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a U-disk, a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 4 is merely a block diagram of a portion of the structure related to the embodiments of the present application and is not limiting of the self-moving device to which the embodiments of the present application apply, and that a particular self-moving device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
The processor 401 is configured to execute a computer program stored in a memory, and implement the obstacle identifying method provided in any embodiment of the present application when the computer program is executed.
In some embodiments, the processor 401 is configured to run a computer program stored in a memory, apply to a self-mobile device, and implement the following steps when executing the computer program:
acquiring an environment image of an environment where the mobile device is located and environment point cloud data corresponding to the environment image;
determining obstacle probability information corresponding to each position in the environment image;
determining first candidate obstacle point cloud data from the environmental point cloud data according to the pose of each point in the environmental point cloud data relative to the self-mobile device;
Clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data;
and determining an obstacle recognition result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data.
In some implementations, the processor 401, in determining the obstacle probability information for each location in the environmental image, performs:
dividing the environment image into a plurality of sub-environment image areas based on a preset obstacle category;
and determining the obstacle probability information according to the position of each sub-environment image area.
In some embodiments, the pose includes a position and an angle; the processor 401, in determining the first candidate obstacle point cloud data from the ambient point cloud data according to the pose of each point in the ambient point cloud data with respect to the self-mobile device, performs:
filtering the environmental point cloud data to obtain first processed point cloud data;
determining the position and the angle of each point in the first processing point cloud data relative to a coordinate system of the self-mobile device;
determining first processing point cloud data with a position smaller than the height of a horizontal plane where the self-mobile device is located and an angle larger than a first preset angle threshold value as first type obstacle point cloud data;
Removing the first type of obstacle point cloud data from the first processing point cloud data to obtain second processing point cloud data;
and obtaining first candidate obstacle point cloud data according to the second processing point cloud data.
In some implementations, the processor 401, in deriving the first candidate obstacle point cloud data from the second processed point cloud data, performs:
determining normal vectors corresponding to each point in the second processing point cloud data;
determining an included angle between the normal vector and a coordinate system where the self-mobile device is located;
and taking the second processing point cloud data with the included angle larger than a second preset angle threshold value as first candidate obstacle point cloud data.
In some embodiments, the processor 401 performs, in clustering the first candidate obstacle point cloud data to obtain the second candidate obstacle point cloud data:
clustering the screened first candidate obstacle point cloud data to obtain a plurality of groups of third processing point cloud data;
and taking third processing point cloud data with the number of points being larger than a preset number threshold value as second candidate obstacle point cloud data.
In some embodiments, the obstacle recognition result includes obstacle point cloud data or non-obstacle point cloud data; the processor 401 performs, in determining the obstacle recognition result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data:
Acquiring the height information of each point in the second candidate obstacle point cloud data;
and determining the second candidate obstacle point cloud data as obstacle point cloud data when the obstacle probability information corresponding to the second candidate obstacle point cloud data is larger than or equal to a preset probability threshold value and the heights of points in the second candidate obstacle point cloud data are continuous according to each group of the second candidate obstacle point cloud data.
In some implementations, the processor 401 is further to perform:
determining a maximum height value from the height information of each point in each set of second candidate obstacle point cloud data;
and determining the second candidate obstacle point cloud data with the maximum height value smaller than the preset height threshold value as non-obstacle point cloud data.
It should be noted that, for convenience and brevity of description, specific working processes of the self-mobile device described above may refer to corresponding processes in the foregoing embodiment of the obstacle identifying method, and will not be described herein again.
The embodiments of the present application also provide a storage medium for computer readable storage, where the storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of any one of the obstacle identifying methods as provided in the embodiments of the present application.
The storage medium may be an internal storage unit of the self-mobile device in the foregoing embodiment, for example, a self-mobile device memory. The storage medium may also be an external storage device of the mobile device, such as a plug-in hard disk provided on the mobile device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, functional modules/units in the apparatus disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. 
Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments. The foregoing is merely illustrative of the embodiments of the present application, but the scope of the present application is not limited thereto, and any equivalent modifications or substitutions will be apparent to those skilled in the art within the scope of the present application, and these modifications or substitutions are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of identifying an obstacle, the method comprising:
acquiring an environment image of an environment where a mobile device is located and environment point cloud data corresponding to the environment image;
determining obstacle probability information corresponding to each position in the environment image;
determining first candidate obstacle point cloud data from the environmental point cloud data according to the pose of each point in the environmental point cloud data relative to the self-mobile device;
clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data;
and determining an obstacle recognition result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data.
2. The method of claim 1, wherein determining the obstacle probability information corresponding to each position in the environment image comprises:
dividing the environment image into a plurality of sub-environment image areas based on preset obstacle categories;
and determining the obstacle probability information according to the position of each sub-environment image area.
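A minimal sketch of the claim-2 idea, assuming a per-position category segmentation and a hypothetical category-to-probability lookup table (both names are illustrative):

```python
def region_probabilities(segmentation, category_prob):
    """Map each segmented position to the prior obstacle probability
    of its category (category_prob is an assumed lookup table)."""
    return [[category_prob[c] for c in row] for row in segmentation]
```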
3. The method of claim 1, wherein the pose comprises a position and an angle, and wherein determining the first candidate obstacle point cloud data from the environment point cloud data according to the pose of each point in the environment point cloud data relative to the self-mobile device comprises:
filtering the environment point cloud data to obtain first processed point cloud data;
determining the position and angle of each point in the first processed point cloud data relative to the coordinate system of the self-mobile device;
determining, as first-type obstacle point cloud data, the first processed point cloud data whose position relative to the coordinate system of the self-mobile device is lower than the horizontal plane of the self-mobile device and whose angle is greater than a first preset angle threshold;
removing the first-type obstacle point cloud data from the first processed point cloud data to obtain second processed point cloud data;
and obtaining the first candidate obstacle point cloud data according to the second processed point cloud data.
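The claim-3 split might be sketched as below, under assumed geometric conventions (device at the origin with z up; the "angle" taken as the elevation angle of each point's direction vector). These conventions are illustrative, not specified by the claim:

```python
import numpy as np

def split_first_type(points, device_height, angle_threshold_deg):
    """Remove first-type obstacle points (e.g. ground returns): points
    below the device's horizontal plane whose viewing angle exceeds the
    threshold; the remainder is the second processed point cloud data."""
    # Elevation angle of each point relative to the device's horizontal
    # plane (device assumed at the origin, z axis pointing up).
    horiz = np.linalg.norm(points[:, :2], axis=1)
    angles = np.degrees(np.arctan2(np.abs(points[:, 2]), horiz))
    first_type = (points[:, 2] < device_height) & (angles > angle_threshold_deg)
    return points[~first_type]
```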
4. The method of claim 3, wherein obtaining the first candidate obstacle point cloud data according to the second processed point cloud data comprises:
determining a normal vector corresponding to each point in the second processed point cloud data;
determining an included angle between each normal vector and the coordinate system of the self-mobile device;
and taking, as the first candidate obstacle point cloud data, the second processed point cloud data whose included angle is greater than a second preset angle threshold.
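One plausible reading of the claim-4 test, sketched below, measures each normal against the device's vertical axis (an assumption: the claim says only "the coordinate system of the self-mobile device"); near-vertical surfaces, whose normals deviate strongly from vertical, survive the filter:

```python
import numpy as np

def filter_by_normals(points, normals, angle_threshold_deg):
    """Keep points whose surface normal deviates from the device's
    vertical (z) axis by more than the threshold."""
    z_axis = np.array([0.0, 0.0, 1.0])
    cos = np.abs(normals @ z_axis) / np.linalg.norm(normals, axis=1)
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return points[angles > angle_threshold_deg]
```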
5. The method of claim 3, wherein clustering the first candidate obstacle point cloud data to obtain the second candidate obstacle point cloud data comprises:
clustering the first candidate obstacle point cloud data to obtain a plurality of groups of third processed point cloud data;
and taking, as the second candidate obstacle point cloud data, the third processed point cloud data whose number of points is greater than a preset number threshold.
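The claim does not name a clustering algorithm; a greedy Euclidean (region-growing) clustering, with the claimed minimum-point-count filter applied afterward, is one common choice and is sketched here as an assumption:

```python
import numpy as np

def euclidean_clusters(points, radius=0.2, min_points=3):
    """Greedy Euclidean clustering; clusters with fewer than
    `min_points` points are discarded as noise."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            # Grow the cluster by all unvisited points within `radius`.
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_points:
            clusters.append(points[cluster])
    return clusters
```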
6. The method of any one of claims 1 to 5, wherein the obstacle recognition result comprises obstacle point cloud data or non-obstacle point cloud data, and wherein determining the obstacle recognition result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data comprises:
acquiring height information of each point in each group of the second candidate obstacle point cloud data;
and determining, for each group of the second candidate obstacle point cloud data, the group as obstacle point cloud data when the obstacle probability information corresponding to the group is greater than or equal to a preset probability threshold and the heights of the points in the group are continuous.
7. The method of claim 6, further comprising:
determining a maximum height value from the height information of the points in each group of the second candidate obstacle point cloud data;
and determining, as non-obstacle point cloud data, any group of the second candidate obstacle point cloud data whose maximum height value is smaller than a preset height threshold.
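The claim-6/7 tests might be combined as in the sketch below; the concrete thresholds, the reading of "continuous heights" as "no vertical gap larger than a fixed step", and the string labels are all assumptions:

```python
import numpy as np

def classify_cluster(cluster, probability, prob_threshold=0.5,
                     height_threshold=0.05, gap=0.1):
    """Classify one candidate cluster as obstacle / non-obstacle.

    cluster     : (N, 3) points of one second-candidate group
    probability : obstacle probability information for the group
    """
    heights = np.sort(cluster[:, 2])
    if heights.max() < height_threshold:
        return "non-obstacle"            # claim 7: cluster too low
    continuous = np.all(np.diff(heights) <= gap)
    if probability >= prob_threshold and continuous:
        return "obstacle"                # claim 6: probable and continuous
    return "non-obstacle"
```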
8. An obstacle recognition device, characterized by comprising:
the data acquisition module is used for acquiring an environment image of the environment where the self-mobile device is located and environment point cloud data corresponding to the environment image;
the data processing module is used for determining obstacle probability information corresponding to each position in the environment image;
the data screening module is used for determining first candidate obstacle point cloud data from the environment point cloud data according to the pose of each point in the environment point cloud data relative to the self-mobile device;
the data clustering module is used for clustering the first candidate obstacle point cloud data to obtain second candidate obstacle point cloud data;
the data identification module is used for determining an obstacle identification result of the second candidate obstacle point cloud data according to the obstacle probability information corresponding to the second candidate obstacle point cloud data.
9. A self-mobile device, characterized in that the self-mobile device comprises a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, to implement the obstacle recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement the steps of the obstacle recognition method of any one of claims 1 to 7.
CN202310376907.7A 2023-03-31 2023-03-31 Obstacle recognition method, device, self-mobile device and storage medium Pending CN116486130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310376907.7A CN116486130A (en) 2023-03-31 2023-03-31 Obstacle recognition method, device, self-mobile device and storage medium

Publications (1)

Publication Number Publication Date
CN116486130A true CN116486130A (en) 2023-07-25

Family

ID=87222478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310376907.7A Pending CN116486130A (en) 2023-03-31 2023-03-31 Obstacle recognition method, device, self-mobile device and storage medium

Country Status (1)

Country Link
CN (1) CN116486130A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117742351A (en) * 2024-02-19 2024-03-22 深圳竹芒科技有限公司 Control method of self-mobile device, and readable storage medium

Similar Documents

Publication Publication Date Title
CN110458854B (en) Road edge detection method and device
US10366310B2 (en) Enhanced camera object detection for automated vehicles
US9514366B2 (en) Vehicle detection method and system including irrelevant window elimination and/or window score degradation
US20220245952A1 (en) Parking spot detection method and parking spot detection system
WO2020146983A1 (en) Lane detection method and apparatus, lane detection device, and mobile platform
CN114723830B (en) Obstacle recognition method, device and storage medium
CN109849930B (en) Method and device for calculating speed of adjacent vehicle of automatic driving automobile
CN115049700A (en) Target detection method and device
CN111213153A (en) Target object motion state detection method, device and storage medium
CN112597846B (en) Lane line detection method, lane line detection device, computer device, and storage medium
CN111213154A (en) Lane line detection method, lane line detection equipment, mobile platform and storage medium
CN112598922A (en) Parking space detection method, device, equipment and storage medium
CN116486130A (en) Obstacle recognition method, device, self-mobile device and storage medium
CN108052921B (en) Lane line detection method, device and terminal
CN113822260B (en) Obstacle detection method and apparatus based on depth image, electronic device, and medium
CN114972427A (en) Target tracking method based on monocular vision, terminal equipment and storage medium
CN114325687A (en) Radar data and visual information fusion processing method, device, system and equipment
CN107844749B (en) Road surface detection method and device, electronic device and storage medium
CN115457506A (en) Target detection method, device and storage medium
CN116824152A (en) Target detection method and device based on point cloud, readable storage medium and terminal
CN116358528A (en) Map updating method, map updating device, self-mobile device and storage medium
Oniga et al. A fast ransac based approach for computing the orientation of obstacles in traffic scenes
CN115527187A (en) Method and device for classifying obstacles
CN114118188A (en) Processing system, method and storage medium for moving objects in an image to be detected
CN117372988B (en) Road boundary detection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination