CN114764885A - Obstacle detection method and device, computer-readable storage medium and processor - Google Patents


Info

Publication number
CN114764885A
CN114764885A
Authority
CN
China
Prior art keywords
point cloud
obstacle
plane
determining
equation
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202210240766.1A
Other languages
Chinese (zh)
Inventor
徐鹏程
杨静
Current Assignee (listing may be inaccurate)
Hangzhou Lanxin Technology Co ltd
Original Assignee
Hangzhou Lanxin Technology Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hangzhou Lanxin Technology Co ltd filed Critical Hangzhou Lanxin Technology Co ltd
Priority to CN202210240766.1A
Publication of CN114764885A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an obstacle detection method and apparatus, a computer-readable storage medium, and a processor. The method includes: acquiring a point cloud to be identified, namely the point cloud of objects in the field of view of the depth camera; calculating a ground plane equation in real time from the point cloud to be identified, the ground plane equation being the equation of the current ground in the robot coordinate system; translating the plane corresponding to the ground plane equation by a first predetermined value along the positive Z-axis direction of the robot coordinate system to obtain a segmentation plane; determining a preliminary obstacle point cloud from the segmentation plane, namely the points of the point cloud to be identified lying on the positive-Z side of the segmentation plane; and identifying the preliminary obstacle point cloud to determine the obstacle point cloud, i.e. the points labeled as belonging to obstacles. The method solves the prior-art problems of missed and false obstacle detections caused by the low reliability of the ground plane equation.

Description

Obstacle detection method and device, computer readable storage medium and processor
Technical Field
The present application relates to the field of point cloud processing technologies, and in particular, to a method and an apparatus for detecting an obstacle, a computer-readable storage medium, and a processor.
Background
At present, manufacturing factories are developing toward automation and intelligence, and in some scenarios an AGV (automated guided vehicle) has replaced traditional manual handling. An AGV is a small autonomous driving device for specific scenes; it must detect the state of its surroundings, so environment perception has become indispensable for today's AGVs. In a factory workshop or warehouse the environment is rarely static: various unknown obstacles may appear on the AGV's planned path, and the perception layer must identify obstacles near the AGV promptly and accurately. Common external environment perception sensors for AGVs include lidar, sonar, monocular cameras, binocular (stereo) cameras, and depth cameras; serving as the AGV's eyes and ears, these sensors play a critical role in guaranteeing its safe operation.
Many factories and shopping malls are equipped with AGVs or other intelligent robots, and in recent years cases of robot damage or personal injury caused by missing or inaccurate robot perception have not been rare. To reduce such incidents, a sensor capable of densely perceiving environmental information is needed; the depth camera is a high-perception, low-cost sensor that can acquire dense point cloud data over a large range in front of the camera.
Indoor scenes mostly use single-line lidar, whose perception range lies on a single plane; it is essentially blind to obstacles outside that measurement plane. Sonar has low positioning accuracy and a small sensing range. A monocular camera lacks scale information, making it difficult to perceive the true spatial position of an obstacle. A binocular camera detects textureless obstacles poorly, consumes substantial computing resources, and its measurement accuracy degrades sharply with distance. The depth camera, with its small size, low power consumption, wide measurement range, and high measurement accuracy, is widely used on robots.
Existing techniques for detecting obstacles with a depth camera still have shortcomings. 1. Only point clouds above a certain fixed plane are judged to be obstacles; if the camera shakes with the vehicle or the camera extrinsics change, there is a risk of missed and false detections. 2. Due to point cloud imaging characteristics, the camera itself may produce poor data in certain scenes; for example, because of ground curvature or the multipath problem of TOF cameras on reflective surfaces, the point cloud of a ground marking and the surrounding ground point cloud may not lie on the same plane. Existing methods cannot handle such depth-camera data errors caused by special external environments well.
The above information disclosed in this background section is only for enhancement of understanding of the background of the technology described herein and, therefore, certain information may be included in the background that does not form the prior art that is already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The application mainly aims to provide an obstacle detection method and apparatus, a computer-readable storage medium, and a processor, so as to solve the prior-art problems of missed and false obstacle detections caused by the low reliability of the ground plane equation.
According to one aspect of the embodiments of the present invention, a method of detecting an obstacle is provided, the robot including a depth camera, the method including: acquiring a point cloud to be identified, namely the point cloud of objects in the field of view of the depth camera; calculating a ground plane equation in real time from the point cloud to be identified, the ground plane equation being the equation of the current ground in the robot coordinate system; translating the plane corresponding to the ground plane equation by a first predetermined value along the positive Z-axis direction of the robot coordinate system to obtain a segmentation plane; determining a preliminary obstacle point cloud from the segmentation plane, the preliminary obstacle point cloud being the points of the point cloud to be identified on the positive-Z side of the segmentation plane; and identifying the preliminary obstacle point cloud to determine the obstacle point cloud, i.e. the points labeled as belonging to obstacles.
Optionally, calculating the ground plane equation in real time from the point cloud to be identified includes: acquiring a prior plane equation, the prior plane equation being the equation of the ground in the robot coordinate system when the robot stands on flat ground, with the corresponding plane perpendicular to the Z axis of the robot coordinate system; determining a segmentation-plane candidate region from the prior plane equation, the candidate region being the region between a first plane and a second plane in the robot coordinate system, where the first plane is the prior plane translated by a second predetermined value along the positive Z-axis direction, the second plane is the prior plane translated by the same value along the negative Z-axis direction, and the positive Z-axis direction is the direction of increasing height; determining a maximum plane equation from the point cloud to be identified within the candidate region, the maximum plane equation being the equation, in the robot coordinate system, of the largest plane formed by those points; and determining the ground plane equation from the prior plane equation and the maximum plane equation.
Optionally, determining the maximum plane equation from the point cloud to be identified in the candidate region includes: dividing the segmentation-plane candidate region into a plurality of first cells, the first cells being cubes whose side length is a first length; sampling the points to be identified in each first cell to obtain a plurality of preliminary sampling points, the coordinates of a preliminary sampling point being the coordinates of any one point of the cloud within that first cell; dividing the candidate region into a plurality of second cells, the second cells being cubes whose side length is a second length greater than the first length; subsampling the preliminary sampling points within each second cell to obtain a plurality of sampling points, a sampling point being the centroid of the preliminary sampling points in that second cell; and determining the maximum plane equation from the sampling points, the maximum plane equation being the equation, in the robot coordinate system, of the largest plane formed by the sampling points.
Optionally, determining the ground plane equation from the prior plane equation and the maximum plane equation includes: determining the prior plane equation to be the ground plane equation when at least one predetermined condition is met, the predetermined conditions being that the distance between a first point and a second point is greater than a distance threshold, that the angle between a first normal vector and a second normal vector is greater than a predetermined angle, or that the number of points of the point cloud in the candidate region is less than a point-count threshold. Here the first point is any point on the plane corresponding to the prior plane equation whose X and Y coordinates are both less than a third predetermined value, the second point is any point on the plane corresponding to the maximum plane equation whose X and Y coordinates are both less than the third predetermined value, the first normal vector is the normal vector of the prior plane, and the second normal vector is the normal vector of the maximum plane. When none of the predetermined conditions is met, the maximum plane equation is determined to be the ground plane equation.
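A rough sketch of this selection logic follows, comparing the two planes near the origin (via their Z intercepts, standing in for the first and second points with small X and Y coordinates) and their normals. The plane representation (a, b, c, d) with a*x + b*y + c*z + d = 0, the function name, and every threshold value are illustrative assumptions, not values from the patent.

```python
import numpy as np

def choose_ground_plane(prior, maximum, n_candidate_points,
                        dist_thresh=0.03, angle_thresh_deg=5.0, min_points=100):
    """Fall back to the prior plane when any predetermined condition fires:
    the planes are too far apart near the origin, their normals diverge too
    much, or too few candidate-region points were found."""
    pa, pb = np.asarray(prior, float), np.asarray(maximum, float)
    na = pa[:3] / np.linalg.norm(pa[:3])
    nb = pb[:3] / np.linalg.norm(pb[:3])
    # Distance between the planes near X = Y = 0: compare Z intercepts.
    z_prior, z_max = -pa[3] / pa[2], -pb[3] / pb[2]
    # Angle between the two normals (orientation-insensitive).
    angle = np.degrees(np.arccos(np.clip(abs(na @ nb), -1.0, 1.0)))
    if (abs(z_prior - z_max) > dist_thresh
            or angle > angle_thresh_deg
            or n_candidate_points < min_points):
        return tuple(prior)       # maximum-plane fit is not trustworthy
    return tuple(maximum)
```

Usage: with the prior plane z = 0 and a well-supported fitted plane 1 cm away, the fitted (maximum) plane wins; a fitted plane 50 cm away trips the distance condition and the prior plane is kept.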
Optionally, the obstacle point cloud includes a suspended obstacle point cloud, a high obstacle point cloud, and a low obstacle point cloud, and identifying the preliminary obstacle point cloud to determine the obstacle point cloud includes: labeling a preliminary obstacle point cloud whose points all have Z coordinates greater than a fourth predetermined value as the suspended obstacle point cloud, the fourth predetermined value being greater than the second predetermined value; labeling a preliminary obstacle point cloud in which only some points have Z coordinates greater than the fourth predetermined value as the high obstacle point cloud; and labeling a preliminary obstacle point cloud whose points all have Z coordinates less than or equal to the second predetermined value as the low obstacle point cloud.
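The three labeling rules can be expressed compactly. In the sketch below, 6 cm and 30 cm are hypothetical stand-ins for the second and fourth predetermined values, and the fallback for a cloud lying entirely between the two thresholds (a case the rules above leave open) is our own assumption.

```python
import numpy as np

def classify_preliminary_cloud(points, second_value=0.06, fourth_value=0.30):
    """Label one preliminary obstacle point cloud as 'suspended', 'high', or
    'low' from its Z coordinates, following the three rules above."""
    z = points[:, 2]
    if np.all(z > fourth_value):
        return "suspended"   # every point floats above the fourth value
    if np.any(z > fourth_value):
        return "high"        # only some points exceed the fourth value
    if np.all(z <= second_value):
        return "low"         # entirely within the near-ground band
    return "high"            # spans the middle band: assumed to be high
```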
Optionally, after identifying the preliminary obstacle point cloud and determining the obstacle point cloud, the method further includes: determining the neighboring points of a target point, the target point being any point of the high or low obstacle point cloud and a neighboring point being a point of the same cloud whose distance to the target point is less than a first predetermined distance; clustering the target points according to clustering conditions to obtain a plurality of point cloud blocks; and splicing the point cloud blocks to obtain the obstacle point cloud. The clustering conditions include a first, a second, and a third clustering condition. The first clustering condition is that the difference between the Z coordinate of the target point and the Z coordinates of all its neighboring points is greater than a fifth predetermined value. The second clustering condition is that the normal vector angle of the target point is less than or equal to a predetermined angle, the difference between the Z coordinate of the target point and the Z coordinates of all its neighboring points is less than a sixth predetermined value, and the Euclidean distances between the target point and all its neighboring points are less than a second predetermined distance; here the normal vector angle is the angle between the local normal vector of the target point and the Z axis of the robot coordinate system, the local normal vector being the normal, at the target point, of the surface formed by the target point and its neighboring points; the second predetermined distance is smaller than the first predetermined distance, and the sixth predetermined value is smaller than the fifth predetermined value. The third clustering condition applies to target points that satisfy neither the first nor the second clustering condition: their normal vector angle is greater than the predetermined angle, and the Euclidean distances to all their neighboring points are less than a third predetermined distance.
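Evaluating the three clustering conditions for a single target point and its neighbours might look like the following sketch; all predetermined distances, values, and the angle are illustrative stand-ins, and the function name is hypothetical.

```python
import numpy as np

def cluster_condition(target, neighbors, normal_angle_deg,
                      fifth=0.10, sixth=0.03, predetermined_angle=30.0,
                      second_dist=0.05, third_dist=0.03):
    """Return which clustering rule (1, 2, or 3) the target point satisfies
    relative to its neighbours (all assumed to lie within the first
    predetermined distance), or 0 if none applies."""
    dz = np.abs(neighbors[:, 2] - target[2])
    dist = np.linalg.norm(neighbors - target, axis=1)
    if np.all(dz > fifth):
        return 1                              # first clustering condition
    if (normal_angle_deg <= predetermined_angle
            and np.all(dz < sixth) and np.all(dist < second_dist)):
        return 2                              # second clustering condition
    if normal_angle_deg > predetermined_angle and np.all(dist < third_dist):
        return 3                              # third clustering condition
    return 0
```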
Optionally, after identifying the preliminary obstacle point cloud and determining the obstacle point cloud, the method further includes: generating a first intensity map from the point cloud data of the low obstacle point cloud; determining a high-intensity region from the first intensity map, the high-intensity region being the region where the intensity of the low obstacle point cloud is greater than a first predetermined intensity; determining a shadow region from the high-intensity region, the shadow region being the region of the first intensity map in which no points could be collected because the obstacle corresponding to the high-intensity points blocks the camera; and determining from a duty ratio whether the low obstacle point cloud is a false positive, the duty ratio being the ratio of the area occupied by points within the shadow region to the area of the shadow region.
Optionally, determining from the duty ratio whether the low obstacle point cloud is a false positive includes: determining that the low obstacle point cloud is a false positive when the duty ratio is greater than a predetermined threshold; and determining that it is not a false positive when the duty ratio is less than or equal to the predetermined threshold.
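A direct reading of this duty-ratio test, on rasterized boolean masks of the first intensity map: a real obstacle blocks the camera, so its shadow should be nearly empty, and a well-populated shadow suggests a false positive. The mask representation and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

def shadow_false_positive(shadow_mask, point_mask, threshold=0.5):
    """Duty ratio = (shadow cells that still contain points) / (shadow cells).
    Returns True when the ratio exceeds the threshold, i.e. the low-obstacle
    cloud is judged a false positive."""
    shadow_area = np.count_nonzero(shadow_mask)
    if shadow_area == 0:
        return False                      # no shadow: nothing to test
    occupied = np.count_nonzero(shadow_mask & point_mask)
    return occupied / shadow_area > threshold
```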
Optionally, after identifying the preliminary obstacle point cloud and determining the obstacle point cloud, the method further includes: generating a second intensity map from the point cloud data of the point cloud to be identified; determining an abnormal-intensity region from the second intensity map, the abnormal-intensity region being the region where the intensity of the point cloud to be identified is greater than a second predetermined intensity or less than a third predetermined intensity, the third predetermined intensity being less than the second; applying an inverse perspective transform to the second intensity map to obtain a transformed intensity map; and comparing the abnormal-intensity region of the transformed intensity map with that of the second intensity map to determine whether the corresponding points of the point cloud to be identified belong to the obstacle point cloud.
Optionally, comparing the abnormal-intensity regions of the transformed intensity map and the second intensity map includes: determining that the points of the abnormal-intensity region are an obstacle point cloud when the ratio of a first length to a second length is greater than a predetermined ratio, the first length being the extent of the abnormal-intensity region along a target direction in the transformed intensity map, the second length being its extent along the same direction in the second intensity map, the target direction being any direction, and the predetermined ratio being greater than 1; and determining that they are not an obstacle point cloud when the ratio is less than or equal to the predetermined ratio.
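The intuition is that a flat ground marking keeps roughly its true size under the inverse perspective transform, while a raised obstacle is stretched, so the decision reduces to a one-line comparison. Here 1.5 is an arbitrary illustrative choice for the predetermined ratio (the method only requires it to exceed 1), and the function name is hypothetical.

```python
def is_obstacle_by_ipm(len_transformed, len_original, ratio_thresh=1.5):
    """Compare the abnormal-intensity region's extent along one direction
    before and after inverse perspective transform; stretching beyond the
    predetermined ratio indicates a raised obstacle rather than a marking."""
    return len_transformed / len_original > ratio_thresh
```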
According to another aspect of the embodiments of the present invention, an apparatus for detecting an obstacle is also provided, the robot including a depth camera, the apparatus including: an acquisition unit for acquiring the point cloud to be identified, namely the point cloud of objects in the field of view of the depth camera; a calculation unit for calculating the ground plane equation in real time from the point cloud to be identified, the ground plane equation being the equation of the current ground in the robot coordinate system; a processing unit for translating the plane corresponding to the ground plane equation by a first predetermined value along the positive Z-axis direction of the robot coordinate system to obtain the segmentation plane; a first determination unit for determining the preliminary obstacle point cloud from the segmentation plane, namely the points of the point cloud to be identified on the positive-Z side of the segmentation plane; and a second determination unit for identifying the preliminary obstacle point cloud and determining the obstacle point cloud, i.e. the points labeled as belonging to an obstacle.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program, when run, performs the above method.
According to yet another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein the program, when run, performs the above method.
In the embodiment of the invention, the obstacle detection method first acquires the point cloud to be identified, namely the point cloud of objects in the field of view of the depth camera; then calculates the ground plane equation, namely the equation of the current ground in the robot coordinate system, in real time from that point cloud; then translates the plane corresponding to the ground plane equation by a first predetermined value along the positive Z-axis direction of the robot coordinate system to obtain a segmentation plane; then determines the preliminary obstacle point cloud, namely the points of the point cloud to be identified on the positive-Z side of the segmentation plane; and finally identifies the preliminary obstacle point cloud to determine the obstacle point cloud, namely the points labeled as belonging to an obstacle. Calculating the ground plane equation in real time avoids the situation where the equation becomes unreliable because the depth camera shakes with the robot's motion and the observed ground differs greatly from the flat-ground plane; the reliability of the ground plane equation is thereby improved, solving the prior-art problems of missed and false obstacle detections caused by its low reliability.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 shows a flow chart of a method of detection of an obstacle according to an embodiment of the present application;
fig. 2 shows a schematic view of a detection device of an obstacle according to an embodiment of the present application.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
As mentioned in the Background, to solve the above problems, embodiments of the present application provide an obstacle detection method and apparatus, a computer-readable storage medium, and a processor.
According to an embodiment of the present application, there is provided a method of detecting an obstacle.
Fig. 1 is a flowchart of a method of detecting an obstacle according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, acquiring the point cloud to be identified, the point cloud to be identified being the point cloud of objects in the field of view of the depth camera;
step S102, calculating a ground plane equation in real time from the point cloud to be identified, the ground plane equation being the equation of the current ground in the robot coordinate system;
step S103, translating the plane corresponding to the ground plane equation by a first predetermined value along the positive Z-axis direction of the robot coordinate system to obtain a segmentation plane;
step S104, determining a preliminary obstacle point cloud from the segmentation plane, the preliminary obstacle point cloud being the points of the point cloud to be identified on the positive-Z side of the segmentation plane;
and step S105, identifying the preliminary obstacle point cloud and determining the obstacle point cloud, i.e. the points labeled as belonging to an obstacle.
In this obstacle detection method, the point cloud to be identified, namely the point cloud of objects in the field of view of the depth camera, is first acquired; the ground plane equation, namely the equation of the current ground in the robot coordinate system, is then calculated in real time from that point cloud; the plane corresponding to the ground plane equation is translated by a first predetermined value along the positive Z-axis direction of the robot coordinate system to obtain the segmentation plane; the preliminary obstacle point cloud, namely the points of the point cloud to be identified on the positive-Z side of the segmentation plane, is determined from the segmentation plane; and finally the preliminary obstacle point cloud is identified to determine the obstacle point cloud, namely the points labeled as belonging to an obstacle. Because the ground plane equation is calculated in real time, it no longer becomes unreliable when the depth camera shakes as the robot moves and the observed ground deviates greatly from the flat-ground assumption; this improves the reliability of the ground plane equation and solves the prior-art problems of missed and false obstacle detections caused by its low reliability.
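The segmentation and thresholding of steps S103 and S104 can be sketched in a few lines. The snippet below is a minimal illustration, assuming the ground plane is stored as coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0 with c > 0, and using 6 cm as a stand-in for the first predetermined value; the function name and parameters are hypothetical.

```python
import numpy as np

def preliminary_obstacle_cloud(points, ground_plane, first_offset=0.06):
    """Shift the ground plane along +Z by the first predetermined value to
    obtain the segmentation plane, then keep the points on its +Z side as
    the preliminary obstacle point cloud."""
    a, b, c, d = ground_plane
    # Translating the plane by t along +Z replaces d with d - c*t.
    d_seg = d - c * first_offset
    # Signed side test: a*x + b*y + c*z + d_seg > 0 means the +Z side.
    side = points @ np.array([a, b, c]) + d_seg
    return points[side > 0]
```

With the flat-ground plane z = 0, this keeps exactly the points higher than 6 cm; the point of the method is that the same code works for a tilted, real-time-estimated plane.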
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In an embodiment of the present application, calculating the ground plane equation in real time from the point cloud to be identified includes: acquiring a prior plane equation, the prior plane equation being the equation of the ground in the robot coordinate system when the robot stands on flat ground, with the corresponding plane perpendicular to the Z axis of the robot coordinate system; determining a segmentation-plane candidate region from the prior plane equation, the candidate region being the region between a first plane and a second plane in the robot coordinate system, where the first plane is the prior plane translated by a second predetermined value along the positive Z-axis direction, the second plane is the prior plane translated by the same value along the negative Z-axis direction, and the positive Z-axis direction is the direction of increasing height; determining a maximum plane equation from the point cloud to be identified within the candidate region, the maximum plane equation being the equation, in the robot coordinate system, of the largest plane formed by those points; and determining the ground plane equation from the prior plane equation and the maximum plane equation. Specifically, after the depth camera is installed on the robot, its mounting extrinsics need to be calibrated, and the prior plane equation P1 can be determined from these extrinsics. If the required obstacle detection accuracy is low and the ground under the robot is relatively flat, it suffices to keep only the point cloud above a fixed height over P1, for example 6 cm above P1.
In an actual scene, however, the ground is not perfectly flat and the robot bumps up and down while moving, so some ground points would be mistaken for obstacles. The method therefore obtains a segmentation-plane candidate region, applies RANSAC extraction in a small region above and below the plane corresponding to the prior plane equation P1 along the Z axis to obtain the maximum plane equation P2 of the largest plane, and then selects the appropriate ground plane equation from P1 and P2 according to the actual conditions.
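The maximum-plane extraction within the candidate region can be realized with a bare-bones RANSAC loop such as the one below. This is only a common way to implement the step, not the patent's implementation; the iteration count and inlier threshold are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.01, rng=None):
    """Minimal RANSAC plane extraction over candidate-region points: sample
    point triples, fit a plane, and keep the plane (a, b, c, d) with the
    largest inlier count."""
    rng = np.random.default_rng(rng)
    best_plane, best_count = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        count = np.sum(np.abs(points @ normal + d) < dist_thresh)
        if count > best_count:
            best_plane, best_count = (normal[0], normal[1], normal[2], d), count
    return best_plane
```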
It should be noted that a spherical point cloud filter first keeps only the points within the region to be processed; the point cloud is then transformed from the camera coordinate system to the robot coordinate system, and a rectangular box filter keeps the point cloud data within the specified range around the cart, yielding the point cloud of the segmentation-plane candidate region.
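A minimal sketch of this preprocessing, under assumed values for the spherical radius, the box around the cart, and the camera-to-robot extrinsic transform (a 4x4 homogeneous matrix):

```python
import numpy as np

def candidate_region_points(points_cam, T_robot_cam, radius=5.0,
                            box_min=(-1.0, -1.0, -0.1), box_max=(3.0, 1.0, 0.1)):
    """Spherical filter in the camera frame, transform camera -> robot,
    then an axis-aligned box filter in the robot frame."""
    # 1. Spherical filter: keep points within the working range.
    keep = np.linalg.norm(points_cam, axis=1) < radius
    pts = points_cam[keep]
    # 2. Homogeneous transform from camera frame to robot frame.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    pts_robot = (T_robot_cam @ pts_h.T).T[:, :3]
    # 3. Box filter: keep the candidate region around the cart.
    lo, hi = np.asarray(box_min), np.asarray(box_max)
    inside = np.all((pts_robot >= lo) & (pts_robot <= hi), axis=1)
    return pts_robot[inside]
```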
In an embodiment of the present application, determining a maximum plane equation from the point cloud to be identified within the candidate region includes: dividing the candidate region into a plurality of first cells, each a cube with side length equal to a first length; sampling the point cloud to be identified in each first cell to obtain a plurality of preliminary sampling points, the coordinates of each preliminary sampling point being those of an arbitrary point of the point cloud within that first cell; dividing the candidate region into a plurality of second cells, each a cube with side length equal to a second length, the second length being greater than the first; resampling the preliminary sampling points within each second cell to obtain the sampling points, each sampling point being the centroid of the preliminary sampling points within that second cell; and determining the maximum plane equation from the sampling points, the maximum plane equation being the equation, in the robot coordinate system, of the largest plane they form. Specifically, the point cloud produced by the depth camera is too dense: feeding it directly to the subsequent filters wastes CPU resources, while downsampling too aggressively reduces detection accuracy, so the point cloud is downsampled at an appropriate scale. The mainstream downsampling methods are voxel filtering and direct downsampling. Voxel filtering averages the coordinates of all points in each cell, which is computationally expensive but yields a fairly uniform distribution; for example, after voxel filtering, the points of a planar surface still lie almost exactly on one plane.
Direct downsampling takes the first point encountered in each cell as that cell's representative: it is fast and light on resources, but the sampled distribution is less accurate. The method combines the strengths of the two filters, applying direct downsampling on small cells followed by voxel filtering on large cells, which preserves downsampling accuracy while speeding up computation; in other words, it achieves a better downsampling result at lower CPU cost.
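The hybrid scheme (cheap direct downsampling on small cells, then centroid-based voxel filtering on large cells) could be sketched like this; the cell sizes are illustrative assumptions:

```python
import numpy as np

def direct_downsample(points, cell):
    """Keep the first point seen in each small cell (fast, cheap)."""
    keys = np.floor(points / cell).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]

def voxel_downsample(points, cell):
    """Replace each large cell by the centroid of its points (uniform, slower)."""
    keys = np.floor(points / cell).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inv, points)                   # per-cell coordinate sums
    counts = np.bincount(inv, minlength=len(uniq)).reshape(-1, 1)
    return sums / counts

def hybrid_downsample(points, small=0.01, large=0.05):
    """Direct downsampling on small cells, then voxel centroids on large cells."""
    return voxel_downsample(direct_downsample(points, small), large)
```

The direct pass discards most of the density cheaply; the voxel pass then evens out the surviving points, which is exactly the accuracy-versus-speed trade described above.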
In an embodiment of the present application, determining the ground plane equation from the prior plane equation and the maximum plane equation includes: determining the prior plane equation to be the ground plane equation when at least one predetermined condition is met, the predetermined conditions being that the distance between a first point and a second point exceeds a distance threshold, that the angle between a first normal vector and a second normal vector exceeds a predetermined angle, and that the number of points of the point cloud in the segmentation plane candidate region is below a point-count threshold, wherein the first point is any point on the plane of the prior plane equation whose X and Y coordinates are both smaller than a third predetermined value, the second point is any point on the plane of the maximum plane equation whose X and Y coordinates are both smaller than the third predetermined value, the first normal vector is the normal of the plane of the prior plane equation, and the second normal vector is the normal of the plane of the maximum plane equation; and determining the maximum plane equation to be the ground plane equation when none of the predetermined conditions is met. Specifically, if the distance between the first point and the second point exceeds the distance threshold, the plane of the maximum plane equation is a step surface; if the angle between the two normal vectors exceeds the predetermined angle, that plane is a wall; and if the number of points in the candidate region is below the point-count threshold, the point cloud consists mainly of noise.
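The three fallback conditions can be expressed as a short selection routine. The planes are `(unit_normal, d)` pairs with `n.p + d = 0`, and all thresholds are assumed illustrative values, not figures from the patent:

```python
import numpy as np

def select_ground_plane(prior, fitted, n_candidate_points,
                        dist_thresh=0.05, angle_thresh_deg=10.0, min_points=100):
    """Pick the ground plane from the prior plane P1 and the RANSAC plane P2."""
    n1, d1 = prior
    n2, d2 = fitted
    if n_candidate_points < min_points:          # candidate region is mostly noise
        return prior
    angle = np.degrees(np.arccos(np.clip(abs(float(np.dot(n1, n2))), 0.0, 1.0)))
    if angle > angle_thresh_deg:                 # P2 is a wall-like plane
        return prior
    # Z gap between the two planes near the robot origin (x = y = 0).
    gap = abs(-d1 / n1[2] - (-d2 / n2[2]))
    if gap > dist_thresh:                        # P2 is a step plane
        return prior
    return fitted                                # P2 is trustworthy: use it
```

Checking the wall condition before computing the Z gap also avoids dividing by a near-zero Z component when P2 is vertical.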
In an embodiment of the present application, the obstacle point cloud comprises a suspended-obstacle point cloud, a high-obstacle point cloud, and a low-obstacle point cloud, and identifying the preliminary obstacle point cloud to determine the obstacle point cloud includes: labelling a preliminary obstacle point cloud in which the Z coordinates of all points exceed a fourth predetermined value as a suspended-obstacle point cloud, the fourth predetermined value being greater than the second predetermined value; labelling a preliminary obstacle point cloud in which the Z coordinates of only some points exceed the fourth predetermined value as a high-obstacle point cloud; and labelling a preliminary obstacle point cloud in which the Z coordinates of all points are at most the second predetermined value as a low-obstacle point cloud. Specifically, Euclidean clustering is first applied to all points to remove noise, and the resulting point-cloud blocks are classified into suspended-, high-, and low-obstacle point clouds according to their geometric and statistical characteristics.
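A sketch of the height-based labelling of one clustered block. The two thresholds stand in for the second and fourth predetermined values and are assumptions, not values from the patent; the final fallback covers blocks that sit entirely between the two thresholds, a case the text leaves implicit:

```python
import numpy as np

def classify_cluster(cluster, low_high_split=0.06, hang_height=0.30):
    """Label one clustered point-cloud block by its Z coordinates (metres)."""
    z = cluster[:, 2]
    if np.all(z > hang_height):
        return "suspended"   # entirely above the hanging threshold
    if np.any(z > hang_height):
        return "high"        # only partly above it
    if np.all(z <= low_high_split):
        return "low"         # entirely near the ground
    return "high"            # spans the middle band: treat as a high obstacle
```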
In an embodiment of the application, after identifying the preliminary obstacle point cloud and determining the obstacle point cloud, the method further includes: determining the neighbouring points of a target point, the target point being any point of the high-obstacle or low-obstacle point cloud and a neighbouring point being a point of the high-obstacle or low-obstacle point cloud whose distance to the target point is smaller than a first predetermined distance; clustering the target points according to clustering conditions to obtain a plurality of point-cloud blocks, wherein the clustering conditions comprise a first, a second, and a third clustering condition: the first clustering condition is that the difference between the Z coordinate of the target point and the Z coordinates of all its neighbouring points exceeds a fifth predetermined value; the second clustering condition is that the normal-vector angle of the target point is at most a predetermined angle, the difference between the Z coordinate of the target point and the Z coordinates of all its neighbouring points is smaller than a sixth predetermined value, and the Euclidean distances between the target point and all its neighbouring points are smaller than a second predetermined distance, the normal-vector angle being the angle between the local normal vector of the target point and the Z axis of the robot coordinate system, the local normal vector being the normal at the target point of the surface formed by the target point and its neighbouring points, the second predetermined distance being smaller than the first predetermined distance and the sixth predetermined value smaller than the fifth; and the third clustering condition applies to target points satisfying neither the first nor the second condition, and requires that the normal-vector angle of the target point exceed the predetermined angle and that the Euclidean distances between the target point and all its neighbouring points be smaller than a third predetermined distance; and splicing the point-cloud blocks to obtain the obstacle point cloud. Specifically, because of multipath effects in the depth camera, the lower part of a high-obstacle point cloud may contain distorted, sunken ground, and a low-obstacle point cloud may be either a real low obstacle or normal ground. The clustering groups points lying on the same surface into point-cloud blocks; after the blocks are spliced, ground and obstacles can be distinguished, preventing normal ground (including the distorted, sunken ground under a high-obstacle point cloud) from being identified as a low obstacle and improving obstacle-recognition accuracy.
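The three clustering conditions for one target point against its neighbourhood can be sketched as a rule check. The angle and distance thresholds stand in for the predetermined angle, the fifth/sixth predetermined values, and the second/third predetermined distances; all numbers are assumptions:

```python
import numpy as np

def clustering_rule(target, neighbors, normal_angle_deg,
                    ang_max=20.0, dz_big=0.05, dz_small=0.01,
                    d2=0.03, d3=0.05):
    """Return which clustering condition (1, 2, or 3) a target point satisfies,
    or 0 if none. `normal_angle_deg` is the angle between the point's local
    normal and the robot Z axis."""
    dz = np.abs(neighbors[:, 2] - target[2])
    dist = np.linalg.norm(neighbors - target, axis=1)
    if np.all(dz > dz_big):
        return 1  # isolated in height: first clustering condition
    if (normal_angle_deg <= ang_max and np.all(dz < dz_small)
            and np.all(dist < d2)):
        return 2  # near-horizontal local surface: second condition
    if normal_angle_deg > ang_max and np.all(dist < d3):
        return 3  # tilted surface with a tight neighbourhood: third condition
    return 0
```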
In an embodiment of the present application, after identifying the preliminary obstacle point cloud and determining the obstacle point cloud, the method further includes: generating a first intensity map from the point-cloud data of the low-obstacle point cloud; determining a high-intensity region from the first intensity map, the high-intensity region being a region where the intensity of the low-obstacle point cloud exceeds a first predetermined intensity; determining a shadow region from the high-intensity region, the shadow region being the region whose point cloud cannot be acquired because it is occluded by the obstacle corresponding to the high-intensity region's point cloud in the first intensity map; and determining, from a duty ratio, whether the low-obstacle point cloud was misjudged, the duty ratio being the ratio of the area occupied by points inside the shadow region to the area of the shadow region. Specifically, when a label such as a safety-exit sign is affixed to the ground, its reflectivity is higher than that of the ground, so the point cloud at the label sits higher than the ground point cloud and the label is misjudged as a low obstacle. A high-intensity region therefore needs to be determined on the first intensity map for false-detection screening: since no point cloud appears within a certain range behind a real low obstacle, the duty ratio of the shadow region behind the low-obstacle point cloud reveals whether it is a real obstacle, and hence whether it was misjudged.
In an embodiment of the present application, determining from the duty ratio whether the low-obstacle point cloud was misjudged includes: determining that the low-obstacle point cloud was misjudged when the duty ratio exceeds a predetermined threshold; and determining that it was not misjudged when the duty ratio is at most the predetermined threshold. Specifically, a duty ratio above the threshold proves that a point cloud exists behind the low-obstacle point cloud, so it is not a real obstacle; otherwise it is taken to be a real obstacle. Note that when the duty ratio is at most the threshold, the label may simply be far away, so that few original points fall in the region behind it, and the robot must move closer before the label can be recognised.
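The duty-ratio decision reduces to one comparison; the threshold here is an assumed value, not one from the patent:

```python
def low_obstacle_review(occupied_area, shadow_area, duty_threshold=0.3):
    """Duty ratio = area occupied by points inside the shadow region divided by
    the shadow region's area. A real low obstacle occludes the region behind
    it, so points appearing there imply a misjudgement (e.g. a reflective
    floor label rather than an obstacle)."""
    duty = occupied_area / shadow_area
    return "misjudged" if duty > duty_threshold else "real obstacle"
```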
In an embodiment of the present application, after identifying the preliminary obstacle point cloud and determining the obstacle point cloud, the method further includes: generating a second intensity map from the point-cloud data of the point cloud to be identified; determining an intensity-abnormal region from the second intensity map, the intensity-abnormal region being a region where the intensity of the point cloud to be identified exceeds a second predetermined intensity or is below a third predetermined intensity, the third predetermined intensity being smaller than the second; applying an inverse perspective transform to the second intensity map to obtain a transformed intensity map; and comparing the intensity-abnormal region in the transformed intensity map with that in the second intensity map to determine whether the point cloud to be identified corresponding to the region is an obstacle point cloud. Specifically, a ground obstacle may be made of a high- or low-reflectivity material, such as an aluminium profile or a black box; such a region is over-exposed or too weak in intensity, yields very few actual points, and can therefore be missed. The intensity-abnormal regions are accordingly screened for missed detections: the abnormal region of the intensity map is put through the camera-to-ground IPM (inverse perspective mapping) transform, and if the transformed region is stretched abnormally long it can be judged a real three-dimensional object, i.e. a missed detection. The point cloud below the region is then taken as the lower edge of the obstacle, and the upper edge of the object is obtained from the upper edge of the intensity map via the perspective transform, which estimates the position of the weakly reflecting obstacle and reduces missed detections.
In an embodiment of the application, comparing the intensity-abnormal region in the transformed intensity map with that in the second intensity map to determine whether the corresponding point cloud to be identified is an obstacle point cloud includes: determining that the point cloud of the intensity-abnormal region is an obstacle point cloud when the ratio of a first length to a second length exceeds a predetermined ratio, the first length being the length of the intensity-abnormal region along a target direction in the transformed intensity map, the second length being its length along the same direction in the second intensity map, the target direction being arbitrary, and the predetermined ratio being greater than 1; and determining that it is not an obstacle point cloud when that ratio is at most the predetermined ratio. Specifically, the method checks whether the transformed intensity-abnormal region satisfies the abnormal-stretching condition, thereby deciding whether a detection was missed and preserving recognition accuracy.
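The stretching test is likewise a single ratio comparison; the threshold (greater than 1, as the text requires) is an assumed value:

```python
def intensity_region_is_obstacle(transformed_length, original_length,
                                 ratio_threshold=2.0):
    """After the camera-to-ground IPM, a genuinely three-dimensional object is
    stretched abnormally long, while a flat high- or low-reflectivity patch
    keeps roughly its original extent along any direction."""
    return transformed_length / original_length > ratio_threshold
```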
The embodiment of the present application further provides a device for detecting an obstacle, and it should be noted that the device for detecting an obstacle according to the embodiment of the present application may be used to execute the method for detecting an obstacle according to the embodiment of the present application. The following describes an obstacle detection device provided in an embodiment of the present application.
Fig. 2 is a schematic diagram of an obstacle detection apparatus according to an embodiment of the present application. As shown in fig. 2, the apparatus includes:
an acquisition unit 10, configured to acquire a point cloud to be identified, wherein the point cloud to be identified is the point cloud of the objects in the field of view of the depth camera;
the calculating unit 20 is used for calculating a ground plane equation in real time according to the point cloud to be identified, wherein the ground plane equation is an equation of the current ground in a robot coordinate system;
a processing unit 30, configured to move a plane corresponding to the ground plane equation by a first predetermined value in a positive direction of a Z axis of the robot coordinate system, so as to obtain a segmentation plane;
a first determination unit 40 configured to determine a preliminary obstacle point cloud from the division plane, the preliminary obstacle point cloud being the point cloud to be recognized located on a side of the division plane closer to a positive direction of a Z axis of the robot coordinate system;
and a second determining unit 50, configured to identify the prepared obstacle point cloud, and determine an obstacle point cloud, where the obstacle point cloud is calibrated as an obstacle.
In the obstacle detection device, the acquisition unit acquires the point cloud to be identified, namely the point cloud of the objects in the field of view of the depth camera; the calculation unit calculates the ground plane equation, namely the equation of the current ground in the robot coordinate system, in real time from that point cloud; the processing unit shifts the plane of the ground plane equation by a first predetermined value along the positive Z direction of the robot coordinate system to obtain the segmentation plane; the first determination unit determines the preliminary obstacle point cloud from the segmentation plane, namely the points of the point cloud to be identified lying on the positive-Z side of the segmentation plane; and the second determination unit identifies the preliminary obstacle point cloud and determines the obstacle point cloud, which is labelled as an obstacle. Because the device computes the ground plane equation in real time, it avoids the unreliability that arises when the depth camera shakes with the robot's motion and the actual ground plane deviates greatly from the flat-ground equation; this improves the reliability of the ground plane equation and resolves the missed and false obstacle detections that low-reliability ground plane equations cause in the prior art.
In an embodiment of the present application, the calculating unit includes an obtaining module, a first determining module, a second determining module, and a third determining module, where the obtaining module is configured to obtain a prior plane equation, the prior plane equation is an equation of the ground of the robot in a robot coordinate system when the robot is on a flat ground, and a plane corresponding to the prior plane equation is perpendicular to a Z axis of the robot coordinate system; the first determining module is configured to determine a candidate area of a division plane according to the prior plane equation, where the candidate area of the division plane is an area between a first plane and a second plane in the robot coordinate system, the first plane is obtained by moving a plane corresponding to the prior plane equation by a second predetermined value in a positive direction of a Z axis of the robot coordinate system, the second plane is obtained by moving the plane corresponding to the prior plane equation by the second predetermined value in a negative direction of the Z axis of the robot coordinate system, and the positive direction of the Z axis of the robot coordinate system is a direction in which a height increases; the second determining module is used for determining a maximum plane equation according to the point cloud to be identified in the partition plane candidate region, wherein the maximum plane equation is an equation of a maximum plane formed by the point cloud to be identified in the partition plane candidate region in a robot coordinate system; the third determining module is configured to determine the ground plane equation according to the prior plane equation and the maximum plane equation. 
Specifically, after the depth camera is mounted on the robot, its extrinsic parameters are calibrated and the prior plane equation P1 can be determined from them; if the required obstacle-detection accuracy is low and the ground is relatively flat, it suffices to keep only the point cloud above a fixed height over P1, for example 6 cm above P1. In practice, however, the ground is rarely flat and the robot pitches up and down as it moves, so part of the ground point cloud may be mistaken for obstacles. The method therefore builds the segmentation plane candidate region and uses RANSAC to extract the maximum plane within a narrow band above and below the plane of P1 along the Z axis, for example 20 cm on each side, obtaining the maximum plane equation P2; the appropriate ground plane equation is then selected from P1 and P2 according to the actual situation.
It should be noted that a spherical point-cloud filter first keeps only the points inside the region to be processed, the point cloud is then transformed from the camera coordinate system to the robot coordinate system, and a rectangular box filter keeps only the point-cloud data within the specified range around the vehicle, yielding the point cloud of the segmentation plane candidate region. The parameters are adjustable; for example, a piggyback AGV needs different point-cloud ranges in its loaded and unloaded states.
In an embodiment of the application, the second determining module includes a first processing sub-module, a first sampling sub-module, a second processing sub-module, a second sampling sub-module, and a first determining sub-module. The first processing sub-module divides the candidate region into a plurality of first cells, each a cube with side length equal to a first length; the first sampling sub-module samples the point cloud to be identified in each first cell to obtain a plurality of preliminary sampling points, the coordinates of each preliminary sampling point being those of an arbitrary point of the point cloud within that first cell; the second processing sub-module divides the candidate region into a plurality of second cells, each a cube with side length equal to a second length greater than the first; the second sampling sub-module resamples the preliminary sampling points within each second cell to obtain the sampling points, each sampling point being the centroid of the preliminary sampling points in that second cell; and the first determining sub-module determines the maximum plane equation from the sampling points, the maximum plane equation being the equation, in the robot coordinate system, of the largest plane they form. Specifically, the point cloud produced by the depth camera is too dense: feeding it directly to the subsequent filters wastes CPU resources, while downsampling too aggressively reduces detection accuracy, so the point cloud is downsampled at an appropriate scale.
The mainstream downsampling methods are voxel filtering and direct downsampling. Voxel filtering averages the coordinates of all points in each cell, which is computationally expensive but yields a fairly uniform distribution; for example, after voxel filtering, the points of a planar surface still lie almost exactly on one plane. Direct downsampling takes the first point encountered in each cell as that cell's representative: it is fast and light on resources, but the sampled distribution is less accurate. The method combines the strengths of the two filters, applying direct downsampling on small cells followed by voxel filtering on large cells, which preserves downsampling accuracy while speeding up computation; in other words, it achieves a better downsampling result at lower CPU cost.
In an embodiment of the application, the third determining module includes a second determining submodule and a third determining submodule, where the second determining submodule is configured to determine that the prior plane equation is the ground plane equation if at least one predetermined condition is satisfied, where the predetermined condition includes that a distance between a first point and a second point is greater than a distance threshold, an included angle between a first normal vector and a second normal vector is greater than a predetermined included angle, and a number of points of the point cloud in the candidate partition plane area is less than a point threshold, the first point is any point on a plane corresponding to the prior plane equation, where an X-axis coordinate and a Y-axis coordinate are both less than a third predetermined value, the second point is any point on a plane corresponding to the maximum plane equation, where an X-axis coordinate and a Y-axis coordinate are both less than the third predetermined value, and the first normal vector is a normal vector of the plane corresponding to the prior plane equation, the second normal vector is a normal vector of a plane corresponding to the maximum plane equation; the third determining submodule is configured to determine the maximum plane equation as the ground plane equation if all of the predetermined conditions are not satisfied. 
Specifically, if the distance between the first point and the second point exceeds the distance threshold, the plane of the maximum plane equation is a step surface; if the angle between the first and second normal vectors exceeds the predetermined angle, that plane is a wall; and if the number of points of the point cloud in the segmentation plane candidate region is below the point-count threshold, the point cloud consists mainly of noise.
In an embodiment of the present application, the obstacle point cloud includes a suspended obstacle point cloud, a high obstacle point cloud, and a low obstacle point cloud, and the second determining unit includes a first calibration module, a second calibration module, and a third calibration module, where the first calibration module is configured to calibrate the prepared obstacle point cloud in which Z-axis coordinates of all points are greater than a fourth predetermined value, as the suspended obstacle point cloud, and the fourth predetermined value is greater than a second predetermined value; the second calibration module is used for calibrating the prepared obstacle point cloud with the Z-axis coordinate of part of points larger than the fourth preset value into the high obstacle point cloud; the third calibration module is configured to calibrate the prepared obstacle point cloud with Z-axis coordinates of all points less than or equal to the second predetermined value to the low obstacle point cloud. Specifically, all point clouds are subjected to Euclidean clustering to remove point cloud noise, and a plurality of point cloud blocks obtained by clustering are classified into suspension obstacle point clouds, high obstacle point clouds and low obstacle point clouds according to geometrical characteristics and statistical characteristics of the point cloud blocks.
In an embodiment of the present application, the apparatus further includes a first review unit comprising a fourth determining module, a clustering module, and a synthesis module. After the preliminary obstacle point cloud has been identified and the obstacle point cloud determined, the fourth determining module determines the neighbouring points of a target point, the target point being any point of the high-obstacle or low-obstacle point cloud and a neighbouring point being a point of the high-obstacle or low-obstacle point cloud whose distance to the target point is smaller than a first predetermined distance. The clustering module clusters the target points according to clustering conditions to obtain a plurality of point-cloud blocks, the clustering conditions comprising a first, a second, and a third clustering condition: the first clustering condition is that the difference between the Z coordinate of the target point and the Z coordinates of all its neighbouring points exceeds a fifth predetermined value; the second clustering condition is that the normal-vector angle of the target point is at most a predetermined angle, the difference between the Z coordinate of the target point and the Z coordinates of all its neighbouring points is smaller than a sixth predetermined value, and the Euclidean distances between the target point and all its neighbouring points are smaller than a second predetermined distance, the normal-vector angle being the angle between the local normal vector of the target point and the Z axis of the robot coordinate system, the local normal vector being the normal at the target point of the surface formed by the target point and its neighbouring points, the second predetermined distance being smaller than the first predetermined distance and the sixth predetermined value smaller than the fifth; and the third clustering condition applies to target points satisfying neither the first nor the second condition, and requires that the normal-vector angle of the target point exceed the predetermined angle and that the Euclidean distances between the target point and all its neighbouring points be smaller than a third predetermined distance. The synthesis module splices the point-cloud blocks to obtain the obstacle point cloud. Specifically, because of multipath effects in the depth camera, the lower part of a high-obstacle point cloud may contain distorted, sunken ground, and a low-obstacle point cloud may be either a real low obstacle or normal ground. The clustering groups points lying on the same surface into point-cloud blocks; after the blocks are spliced, ground and obstacles can be distinguished, preventing normal ground (including the distorted, sunken ground under a high-obstacle point cloud) from being identified as a low obstacle and improving obstacle-recognition accuracy.
In an embodiment of the present application, the apparatus further includes a second review unit comprising a first generation module, a fifth determining module, a sixth determining module, and a seventh determining module. After the preliminary obstacle point cloud has been identified and the obstacle point cloud determined, the first generation module generates a first intensity map from the point-cloud data of the low-obstacle point cloud; the fifth determining module determines a high-intensity region from the first intensity map, the high-intensity region being a region where the intensity of the low-obstacle point cloud exceeds a first predetermined intensity; the sixth determining module determines a shadow region from the high-intensity region, the shadow region being the region whose point cloud cannot be acquired because it is occluded by the obstacle corresponding to the high-intensity region's point cloud in the first intensity map; and the seventh determining module determines, from a duty ratio, whether the low-obstacle point cloud was misjudged, the duty ratio being the ratio of the area occupied by points inside the shadow region to the area of the shadow region. Specifically, when a label such as a safety-exit sign is affixed to the ground, its reflectivity is higher than that of the ground, so the point cloud at the label sits higher than the ground point cloud and the label can be misjudged as a low obstacle. A high-intensity region therefore needs to be determined on the first intensity map for false-detection screening: since no point cloud appears within a certain range behind a real low obstacle, the duty ratio of the shadow region behind the low-obstacle point cloud reveals whether it is a real obstacle, and hence whether it was misjudged.
In an embodiment of the application, the seventh determining module includes a fourth determining sub-module and a fifth determining sub-module, where the fourth determining sub-module is configured to determine that the low obstacle point cloud is misjudged if the duty ratio is greater than a predetermined threshold, and the fifth determining sub-module is configured to determine that the low obstacle point cloud is not misjudged if the duty ratio is less than or equal to the predetermined threshold. Specifically, if the duty ratio is greater than the predetermined threshold, point clouds exist behind the low obstacle point cloud, which proves that it is not a real obstacle; otherwise, it is treated as a real obstacle. It should be noted that when the label is far away, few original points fall in the area behind it, so the duty ratio may still be small even for a flat label; the robot then needs to move closer before the label can be recognized correctly.
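The duty-ratio rule above can be sketched on binary masks derived from the first intensity map; how the masks are built and the 0.3 threshold are assumptions for illustration:

```python
import numpy as np

def shadow_duty_ratio_check(shadow_mask, point_mask, threshold=0.3):
    """Decide whether a low-obstacle point cloud was misjudged, following the
    shadow-area duty-ratio rule described above (threshold illustrative).
    shadow_mask: boolean image marking the shadow region behind the
    high-intensity area; point_mask: boolean image marking pixels where
    points were actually acquired. Returns True when misjudged."""
    shadow_area = np.count_nonzero(shadow_mask)
    if shadow_area == 0:
        return False  # no shadow region to evaluate
    occupied = np.count_nonzero(shadow_mask & point_mask)
    duty_ratio = occupied / shadow_area
    # Points behind the "obstacle" mean it did not block the camera,
    # so the high-intensity region is a flat label, not a real obstacle.
    return duty_ratio > threshold
```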
In an embodiment of the application, the apparatus further includes a third review unit, where the third review unit includes a second generation module, an eighth determination module, a processing module, and a ninth determination module. The second generation module is configured to generate a second intensity map from the point cloud data of the point cloud to be identified after the prepared obstacle point cloud has been identified and the obstacle point cloud determined; the eighth determination module is configured to determine an intensity abnormal region from the second intensity map, where the intensity abnormal region is an area in which the intensity of the point cloud to be identified is greater than a second predetermined intensity or less than a third predetermined intensity, and the third predetermined intensity is less than the second predetermined intensity; the processing module is used for performing inverse perspective transformation on the second intensity map to obtain a transformed intensity map; the ninth determination module is configured to compare the intensity abnormal region of the transformed intensity map with that of the second intensity map and determine whether the point cloud to be identified corresponding to the intensity abnormal region is the obstacle point cloud.
Specifically, an obstacle on the ground may be made of a high-reflectivity or low-reflectivity material, such as an aluminum profile or a black box. Such an area is overexposed or very dark in the intensity map, so very few valid points are actually generated there, which can cause missed detections. The intensity abnormal region therefore needs to be screened for missed detections: the abnormal region of the intensity map is put through the inverse perspective mapping (IPM) between the camera and the ground, and if the transformed region is stretched to an abnormal length, it can be determined to be a real three-dimensional object and a missed detection is confirmed. The point cloud below the region is taken as the lower edge of the obstacle, and the upper edge of the object is obtained from the upper edge of the intensity map via the perspective transformation, so that the size and position of the weakly reflecting obstacle can be estimated and missed detections reduced.
In an embodiment of the application, the ninth determining module includes a sixth determining submodule and a seventh determining submodule, where the sixth determining submodule is configured to determine that the point cloud to be identified of the intensity abnormal region is an obstacle point cloud when the ratio of a first length to a second length is greater than a predetermined ratio, where the first length is the length of the intensity abnormal region in the transformed intensity map along a target direction, the second length is the length of the intensity abnormal region in the second intensity map along the target direction, the target direction is an arbitrary direction, and the predetermined ratio is greater than 1; the seventh determining submodule is configured to determine that the point cloud to be identified in the intensity abnormal region is not an obstacle point cloud when the ratio of the first length to the second length is less than or equal to the predetermined ratio. Specifically, the method judges whether the transformed intensity abnormal region satisfies the abnormal-stretching condition, thereby determining whether a detection was missed and ensuring the accuracy of the identification.
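The stretch test can be illustrated with a plain homography standing in for the camera-to-ground inverse perspective mapping. The matrix `H`, the representation of the region as pixel coordinates, and the ratio threshold are all assumptions for this sketch:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D pixel coordinates through a 3x3 homography H (standing in for
    the camera-to-ground inverse perspective mapping; H is an assumed input)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

def is_missed_obstacle(H, region_pts, predetermined_ratio=2.0, axis=1):
    """Stretch test from the text: compare the extent of the intensity-anomaly
    region along one direction before and after IPM; a ratio larger than the
    predetermined ratio (> 1) marks the region as a real 3-D object."""
    second_length = np.ptp(region_pts[:, axis])                      # in the intensity map
    first_length = np.ptp(apply_homography(H, region_pts)[:, axis])  # after IPM
    return second_length > 0 and (first_length / second_length) > predetermined_ratio
```

A flat mark on the ground maps to roughly its own footprint, while a vertical surface projects far along the ground plane after IPM, which is what the ratio detects.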
The device for detecting the obstacle comprises a processor and a memory, wherein the acquisition unit, the calculation unit, the processing unit, the first determination unit, the second determination unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels can be set, and by adjusting the kernel parameters, the problems of missed detection and false detection of obstacles caused by the low reliability of the ground plane equation in the prior art are solved.
The memory may include forms such as volatile memory, Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM), in a computer-readable medium; the memory includes at least one memory chip.
An embodiment of the present invention provides a computer-readable storage medium on which a program is stored, which when executed by a processor implements the above-described method.
The embodiment of the invention provides a processor, which is used for running a program, wherein the method is executed when the program runs.
The embodiment of the invention provides a device, which comprises a processor, a memory, and a program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements at least the following steps:
step S101, point clouds to be identified are obtained, wherein the point clouds to be identified are point clouds of objects in the visual field of the depth camera;
step S102, calculating a ground plane equation in real time according to the point cloud to be identified, wherein the ground plane equation is an equation of the current ground in a robot coordinate system;
step S103, moving a plane corresponding to the ground plane equation by a first preset value along the positive direction of the Z axis of the robot coordinate system to obtain a segmentation plane;
step S104, determining a prepared obstacle point cloud according to the segmentation plane, wherein the prepared obstacle point cloud is the point cloud to be identified which is positioned on one side of the segmentation plane close to the positive direction of the Z axis of the robot coordinate system;
and step S105, identifying the prepared obstacle point cloud, and determining the obstacle point cloud, which is calibrated as the point cloud of an obstacle.
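Steps S103 and S104 above can be sketched as follows; the plane representation, the 0.05 m shift value, and the NumPy formulation are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def detect_preliminary_obstacles(points, ground_plane, first_value=0.05):
    """Sketch of steps S103-S104: shift the fitted ground plane along the
    positive Z axis by a first predetermined value to get the segmentation
    plane, then keep the points above it as the prepared obstacle point cloud.
    ground_plane is (a, b, c, d) with ax + by + cz + d = 0 in the robot frame
    and the normal (a, b, c) pointing along +Z; points is an (N, 3) array."""
    a, b, c, d = ground_plane
    n = np.array([a, b, c])
    # Signed distance of every point from the ground plane.
    signed_dist = (points @ n + d) / np.linalg.norm(n)
    # Points farther above the ground than the shift lie on the +Z side
    # of the segmentation plane and become the prepared obstacle point cloud.
    return points[signed_dist > first_value]
```

The prepared cloud returned here would then go through the identification of step S105 (suspended / high / low obstacle calibration).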
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product adapted to perform a program of initializing at least the following method steps when executed on a data processing device:
step S101, point clouds to be identified are obtained, and the point clouds to be identified are point clouds of objects in the visual field of the depth camera;
step S102, calculating a ground plane equation in real time according to the point cloud to be identified, wherein the ground plane equation is an equation of the current ground in a robot coordinate system;
step S103, moving a plane corresponding to the ground plane equation by a first preset value along the positive direction of the Z axis of the robot coordinate system to obtain a segmentation plane;
step S104, determining a prepared obstacle point cloud according to the segmentation plane, wherein the prepared obstacle point cloud is the point cloud to be identified which is positioned on one side of the segmentation plane close to the positive direction of the Z axis of the robot coordinate system;
and step S105, identifying the prepared obstacle point cloud, and determining the obstacle point cloud, which is calibrated as the point cloud of an obstacle.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the above-described units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
From the above description, it can be seen that the above-mentioned embodiments of the present application achieve the following technical effects:
1) Firstly, a point cloud to be identified is acquired, where the point cloud to be identified is the point cloud of objects in the field of view of the depth camera; then, a ground plane equation is calculated in real time from the point cloud to be identified, where the ground plane equation is the equation of the current ground in the robot coordinate system; next, the plane corresponding to the ground plane equation is moved by a first preset value along the positive direction of the Z axis of the robot coordinate system to obtain a segmentation plane; then, a prepared obstacle point cloud is determined from the segmentation plane, where the prepared obstacle point cloud is the point cloud to be identified located on the side of the segmentation plane toward the positive Z-axis direction of the robot coordinate system; finally, the prepared obstacle point cloud is identified and the obstacle point cloud, calibrated as the point cloud of an obstacle, is determined. By calculating the ground plane equation in real time, the problem that the equation becomes unreliable because the real-time ground plane differs greatly from that of flat ground when the depth camera shakes with the movement of the robot is avoided, the reliability of the ground plane equation is improved, and the problems of missed detection and false detection of obstacles caused by the low reliability of the ground plane equation in the prior art are solved.
2) In the obstacle detection device, the acquisition unit acquires a point cloud to be identified, where the point cloud to be identified is the point cloud of objects in the field of view of the depth camera; the calculation unit calculates a ground plane equation in real time from the point cloud to be identified, where the ground plane equation is the equation of the current ground in the robot coordinate system; the processing unit moves the plane corresponding to the ground plane equation by a first preset value along the positive direction of the Z axis of the robot coordinate system to obtain a segmentation plane; the first determination unit determines a prepared obstacle point cloud from the segmentation plane, where the prepared obstacle point cloud is the point cloud to be identified located on the side of the segmentation plane toward the positive Z-axis direction of the robot coordinate system; and the second determination unit identifies the prepared obstacle point cloud and determines the obstacle point cloud, which is calibrated as an obstacle. Because the device calculates the ground plane equation in real time, the problem that the equation differs greatly from that of flat ground when the depth camera shakes with the movement of the robot is avoided, the reliability of the ground plane equation is improved, and the problems of missed detection and false detection of obstacles caused by the low reliability of the ground plane equation in the prior art are solved.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A method of detecting an obstacle, wherein a robot includes a depth camera, the method comprising:
acquiring a point cloud to be identified, wherein the point cloud to be identified is a point cloud of an object in the field of view of the depth camera;
calculating a ground plane equation in real time according to the point cloud to be identified, wherein the ground plane equation is an equation of the current ground in a robot coordinate system;
moving a plane corresponding to the ground plane equation by a first preset value along the positive direction of the Z axis of the robot coordinate system to obtain a segmentation plane;
determining a prepared obstacle point cloud according to the segmentation plane, wherein the prepared obstacle point cloud is the point cloud to be identified which is positioned on one side of the segmentation plane close to the positive direction of the Z axis of the robot coordinate system;
and identifying the prepared obstacle point cloud, and determining the obstacle point cloud which is calibrated as the point cloud of the obstacle.
2. The method of claim 1, wherein calculating the ground plane equation from the point cloud to be identified in real time comprises:
acquiring a prior plane equation, wherein the prior plane equation is an equation of the ground in a robot coordinate system when the robot is on a flat ground, and a plane corresponding to the prior plane equation is vertical to a Z axis of the robot coordinate system;
determining a division plane candidate area according to the prior plane equation, wherein the division plane candidate area is an area between a first plane and a second plane in the robot coordinate system, the first plane is obtained by moving a second preset value along the positive direction of the Z axis of the robot coordinate system by the plane corresponding to the prior plane equation, the second plane is obtained by moving the second preset value along the negative direction of the Z axis of the robot coordinate system by the plane corresponding to the prior plane equation, and the positive direction of the Z axis of the robot coordinate system is the height increasing direction;
determining a maximum plane equation according to the point cloud to be identified in the partition plane candidate region, wherein the maximum plane equation is an equation of a maximum plane formed by the point cloud to be identified in the partition plane candidate region in a robot coordinate system;
and determining the ground plane equation according to the prior plane equation and the maximum plane equation.
3. The method of claim 2, wherein determining a maximum plane equation from the point cloud to be identified within the segmentation plane candidate region comprises:
dividing the division plane candidate region into a plurality of first cells, wherein the first cells are cubes with side length of a first length;
sampling the point cloud to be identified in each first cell to obtain a plurality of preparatory sampling points, wherein the coordinates of the preparatory sampling points are the coordinates of any point of the point cloud to be identified in the first cell;
dividing the division plane candidate region into a plurality of second cells, wherein the second cells are cubes with side lengths of a second length, and the second length is larger than the first length;
performing secondary sampling on the preparatory sampling points in the second cell to obtain a plurality of sampling points, wherein each sampling point is the center of gravity of the preparatory sampling points in its second cell;
and determining the maximum plane equation according to the sampling points, wherein the maximum plane equation is an equation of the maximum plane formed by the sampling points in a robot coordinate system.
4. The method of claim 2, wherein determining the ground plane equation from the prior plane equation and the maximum plane equation comprises:
under the condition that at least one predetermined condition is satisfied, determining the prior plane equation as the ground plane equation, wherein the predetermined conditions include: the distance between a first point and a second point is greater than a distance threshold, the included angle between a first normal vector and a second normal vector is greater than a predetermined included angle, and the number of points of the point cloud in the division plane candidate area is less than a point number threshold, where the first point is any point on the plane corresponding to the prior plane equation whose X-axis coordinate and Y-axis coordinate are both less than a third preset value, the second point is any point on the plane corresponding to the maximum plane equation whose X-axis coordinate and Y-axis coordinate are both less than the third preset value, the first normal vector is the normal vector of the plane corresponding to the prior plane equation, and the second normal vector is the normal vector of the plane corresponding to the maximum plane equation;
in a case where none of the predetermined conditions is satisfied, determining the maximum plane equation as the ground plane equation.
5. The method of claim 1, wherein the obstacle point cloud comprises a suspended obstacle point cloud, a high obstacle point cloud, and a low obstacle point cloud, and wherein identifying the preliminary obstacle point cloud and determining an obstacle point cloud comprises:
calibrating the prepared obstacle point cloud with Z-axis coordinates of all points larger than a fourth preset value as the suspended obstacle point cloud, wherein the fourth preset value is larger than a second preset value;
calibrating the prepared obstacle point cloud with the Z-axis coordinate of part of points larger than the fourth preset value as the high obstacle point cloud;
calibrating the prepared obstacle point cloud with Z-axis coordinates of all points less than or equal to the second preset value as the low obstacle point cloud.
6. The method of claim 5, wherein after identifying the preliminary obstacle point cloud and determining an obstacle point cloud, the method further comprises:
determining a neighboring point of the target point according to the target point, wherein the target point is any point of the high obstacle point cloud or the low obstacle point cloud, and the neighboring point is a point which is in the high obstacle point cloud or the low obstacle point cloud and has a distance with the target point smaller than a first preset distance;
clustering the target point according to clustering conditions to obtain a plurality of point cloud blocks, wherein the clustering conditions include a first clustering condition, a second clustering condition and a third clustering condition, the first clustering condition is that the difference between the Z-axis coordinate of the target point and the Z-axis coordinates of all the adjacent points is greater than a fifth predetermined value, the second clustering condition is that the normal vector angle of the target point is less than or equal to a predetermined angle, the difference between the Z-axis coordinate of the target point and the Z-axis coordinates of all the adjacent points is less than a sixth predetermined value, and the Euclidean distances between the target point and all the adjacent points are less than a second predetermined distance, the normal vector angle is the included angle between a local normal vector of the target point and the Z axis of the robot coordinate system, the local normal vector is the normal vector, at the target point, of the tangent plane of the curved surface formed by the target point and the adjacent points, the second predetermined distance is smaller than the first predetermined distance, the sixth predetermined value is smaller than the fifth predetermined value, and the third clustering condition applies to target points that satisfy neither the first clustering condition nor the second clustering condition, and requires that the normal vector angle of the target point is larger than the predetermined angle and the Euclidean distances between the target point and all the adjacent points are smaller than a third predetermined distance;
and splicing the point cloud blocks to obtain the obstacle point cloud.
7. The method of claim 5, wherein after identifying the preliminary obstacle point cloud and determining an obstacle point cloud, the method further comprises:
generating a first intensity map according to point cloud data of the low-obstacle point cloud;
determining a high-intensity area according to the first intensity map, wherein the high-intensity area is an area in which the intensity of the low obstacle point cloud is greater than a first preset intensity;
determining a shadow area according to the high-intensity area, wherein the shadow area is an area which cannot collect point clouds due to obstruction of an obstacle corresponding to the point clouds in the high-intensity area in the first intensity map;
and determining whether the low-obstacle point cloud is misjudged according to a duty ratio, wherein the duty ratio is the ratio of the area occupied by the point cloud in the shadow area to the area of the shadow area.
8. The method of claim 7, wherein determining whether the low obstacle point cloud is misjudged according to a duty cycle comprises:
determining that the low obstacle point cloud is misjudged if the duty ratio is greater than a predetermined threshold;
determining that the low obstacle point cloud is not misjudged if the duty cycle is less than or equal to the predetermined threshold.
9. The method of claim 5, wherein after identifying the preliminary obstacle point cloud and determining an obstacle point cloud, the method further comprises:
generating a second intensity map according to the point cloud data of the point cloud to be identified;
determining an intensity abnormal area according to the second intensity map, wherein the intensity abnormal area is an area where the intensity of the point cloud to be identified is greater than a second preset intensity or less than a third preset intensity, and the third preset intensity is less than the second preset intensity;
performing inverse perspective transformation on the second intensity map to obtain a transformation intensity map;
and comparing the converted intensity map with the intensity abnormal area of the second intensity map, and determining whether the point cloud to be identified corresponding to the intensity abnormal area is the obstacle point cloud.
10. The method of claim 9, wherein comparing the transformed intensity map with the intensity anomaly region of the second intensity map, and determining whether the point cloud to be identified corresponding to the intensity anomaly region is the obstacle point cloud comprises:
under the condition that the ratio of a first length to a second length is larger than a predetermined ratio, determining that the point cloud to be identified of the intensity abnormal area is an obstacle point cloud, wherein the first length is the length of the intensity abnormal area in the transformed intensity map in the target direction, the second length is the length of the intensity abnormal area in the second intensity map in the target direction, the target direction is an arbitrary direction, and the predetermined ratio is larger than 1;
determining that the point cloud to be identified of the intensity abnormal area is not an obstacle point cloud when the ratio of the first length to the second length is less than or equal to the predetermined ratio.
11. An apparatus for detecting obstacles, wherein a robot includes a depth camera, the apparatus comprising:
the acquisition unit is used for acquiring point clouds to be identified, and the point clouds to be identified are point clouds of objects in the visual field of the depth camera;
the calculating unit is used for calculating a ground plane equation in real time according to the point cloud to be identified, wherein the ground plane equation is an equation of the current ground in a robot coordinate system;
the processing unit is used for moving the plane corresponding to the ground plane equation by a first preset value along the positive direction of the Z axis of the robot coordinate system to obtain a segmentation plane;
a first determination unit, configured to determine a preliminary obstacle point cloud according to the segmentation plane, where the preliminary obstacle point cloud is the point cloud to be identified located on one side of the segmentation plane close to a positive direction of a Z axis of the robot coordinate system;
and the second determining unit is used for identifying the prepared obstacle point cloud and determining the obstacle point cloud, wherein the obstacle point cloud is calibrated to be an obstacle.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program performs the method of any one of claims 1 to 10.
13. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 10.
CN202210240766.1A 2022-03-10 2022-03-10 Obstacle detection method and device, computer-readable storage medium and processor Pending CN114764885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210240766.1A CN114764885A (en) 2022-03-10 2022-03-10 Obstacle detection method and device, computer-readable storage medium and processor

Publications (1)

Publication Number Publication Date
CN114764885A true CN114764885A (en) 2022-07-19

Family

ID=82364661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210240766.1A Pending CN114764885A (en) 2022-03-10 2022-03-10 Obstacle detection method and device, computer-readable storage medium and processor

Country Status (1)

Country Link
CN (1) CN114764885A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115641462A (en) * 2022-12-26 2023-01-24 电子科技大学 Radar image target identification method
CN116740361A (en) * 2023-08-11 2023-09-12 之江实验室 Point cloud segmentation method and device, storage medium and electronic equipment
CN116740361B (en) * 2023-08-11 2023-10-13 之江实验室 Point cloud segmentation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination