WO2023179405A1 - Obstacle Identification Method, Device, and Storage Medium - Google Patents

Obstacle Identification Method, Device, and Storage Medium

Info

Publication number
WO2023179405A1
WO2023179405A1 PCT/CN2023/081202 CN2023081202W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
obstacle
color information
confirmed
Prior art date
Application number
PCT/CN2023/081202
Other languages
English (en)
French (fr)
Inventor
王雷
陈熙
Original Assignee
深圳市正浩创新科技股份有限公司
Application filed by 深圳市正浩创新科技股份有限公司
Publication of WO2023179405A1 publication Critical patent/WO2023179405A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0014 Image feed-back for automatic industrial control, e.g. robot with camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • This application belongs to the field of intelligent robot technology, and in particular relates to an obstacle identification method and equipment.
  • A method, device, and storage medium for identifying obstacles are provided.
  • This embodiment provides a method for identifying obstacles, including: acquiring target point cloud data in a target area, where the target point cloud data does not include ground point cloud data; performing fitting and clustering processing on the target point cloud data to obtain obstacle point cloud data to be confirmed; obtaining first color information of the plane where the obstacle point cloud data to be confirmed is located; obtaining second color information of the area where the obstacle point cloud data to be confirmed is located; and, when the second color information does not match the first color information, confirming that the obstacle point cloud data to be confirmed is obstacle point cloud data.
  • An embodiment of the present application provides a device for identifying obstacles, which is used to perform the method in the above first aspect or any possible implementation of the first aspect.
  • the apparatus may include a module for performing the method of identifying obstacles in the first aspect or any possible implementation of the first aspect.
  • An embodiment of the present application provides a device, which includes a memory and a processor.
  • the memory is used to store instructions; the processor executes the instructions stored in the memory, so that the device performs the method of identifying obstacles in the first aspect or any possible implementation of the first aspect.
  • Embodiments of the present application provide a computer-readable storage medium in which instructions are stored. When executed on a computer, the instructions cause the computer to execute the method of identifying obstacles in the first aspect or any possible implementation of the first aspect.
  • Embodiments of the present application provide a computer program product containing instructions that, when run on a device, cause the device to perform the method of identifying obstacles in the first aspect or any possible implementation of the first aspect.
  • Figure 1 is a schematic flowchart of an obstacle identification method provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the plane where the obstacle point cloud data to be confirmed is located according to the embodiment of the present application.
  • Figure 3 is a schematic diagram of the area where the obstacle point cloud data to be confirmed is located according to the embodiment of the present application.
  • Figure 4 is a schematic flowchart of a method for obtaining target point cloud data provided by an embodiment of the present application.
  • Figure 5 is a schematic flowchart of a method for obtaining point cloud data of obstacles to be confirmed provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of a fitting plane provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of a fitting plane provided by another embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for clustering second point cloud data provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a device 900 for identifying obstacles provided by an embodiment of the present application.
  • Figure 10 is a schematic structural diagram of a device 1000 provided by an embodiment of the present application.
  • Taking an autonomous mobile device such as a lawn-mowing robot as an example: when the robot moves across a lawn that contains a grass slope to perform mowing operations, it cannot accurately identify low-height obstacles such as stones on the lawn. The robot then collides with the stones and cannot continue mowing, which reduces work efficiency.
  • Similarly, taking the mobile device as an unmanned vehicle: when driving on a road, the vehicle cannot accurately identify low obstacles on the road, causing it to collide with them and preventing normal travel.
  • the method provided by the embodiment of the present application can be executed by a first device, or by a chip in the first device.
  • The first device can be a non-self-moving device, such as a server or an electronic device (such as a mobile phone), or a self-moving device.
  • A self-moving device can be a robot (such as a lawn-mowing robot, sweeping robot, demining robot, or cruise robot) or a smart car.
  • When the first device is a device other than the self-moving device, the first device can communicate with the self-moving device.
  • an application (APP) corresponding to the self-mobile device can be installed on the mobile phone, and the user can operate the APP on the mobile phone to trigger the establishment of a communication connection between the mobile phone and the self-mobile device.
  • the first device is a server
  • The user can, in the APP on the mobile phone that communicates with the self-moving device, trigger the self-moving device to report the depth image and/or RGB image covering the target area to the server, so that the server executes the obstacle identification method provided by the embodiments of the present application.
  • Alternatively, the self-moving device has a first control; the user triggers the first control to make the self-moving device report the collected depth image and/or RGB image covering the target area to the server, so that the server executes the obstacle identification method provided by this embodiment.
  • the first device acquires target point cloud data in the target area, where the target point cloud data does not include ground point cloud data.
  • the first device can obtain a depth image of the current location of the mobile device through a depth camera mounted on the mobile device, and then the area in the depth image other than the ground area can be determined as the target area. In this case, the first device can determine the target point cloud data in the target area based on the acquired depth image, where the target point cloud data does not include ground point cloud data.
  • the first device can also obtain the corresponding depth image through other instruments such as a depth camera, laser scanner, or lidar mounted on the mobile device.
  • the depth image includes point cloud data.
  • the first device performs fitting and clustering processing on the target point cloud data to obtain the obstacle point cloud data to be confirmed.
  • the obstacle point cloud data to be confirmed does not include point cloud data that can be directly determined as obstacles in the target point cloud data, and the obstacle point cloud data to be confirmed does not include the point cloud data that can be directly determined as non-obstacles in the target point cloud data.
  • the fitting clustering processing includes fitting processing and clustering processing, where the fitting processing is plane fitting processing, and the purpose is to determine whether the target area includes a slope.
  • When the target area includes a passable slope, the point cloud data corresponding to the slope is removed from the target point cloud data, and the remaining point cloud data is clustered to obtain one or more pieces of obstacle point cloud data to be confirmed; when the target area includes no slope, the target point cloud data itself is clustered to obtain the obstacle point cloud data to be confirmed.
  • the first device obtains the first color information of the plane where the obstacle point cloud data to be confirmed is located.
  • The first color information is the color information with the largest area proportion among the one or more colors of the plane where the obstacle point cloud data to be confirmed is located.
  • For example, in Figure 2, the first color information is the color with the largest area ratio among the one or more colors of the plane 202.
  • the first device obtains the second color information of the area where the obstacle point cloud data to be confirmed is located.
  • the second color information is the color information with the largest area proportion among one or more color information of the area where the obstacle point cloud data to be confirmed is located.
  • For example, the second color information is the color with the largest area ratio among the one or more colors in the area 301.
  • When the second color information does not match the first color information, the first device determines that the obstacle point cloud data to be confirmed is obstacle point cloud data.
  • For example, when the second color information and the first color information are different colors, they may be considered not to match.
  • In summary, this application obtains the target point cloud data in the target area (excluding ground point cloud data), performs fitting and clustering processing on it to obtain the obstacle point cloud data to be confirmed, and compares the color information of the area where the obstacle point cloud data to be confirmed is located with the color information of the plane where it is located. When the second color information does not match the first color information, the obstacle point cloud data to be confirmed is determined to be obstacle point cloud data.
  • It can be seen that when there is a low obstacle in the target area, the color with the largest area proportion in the area where the obstacle point cloud data to be confirmed is located is the main color of the obstacle, that is, the second color information, while the color with the largest area proportion on the plane where the obstacle is located is the plane's main color, that is, the first color information. Since the color of an obstacle generally differs from the color of the plane it rests on, the second color information does not match the first color information.
  • The autonomous mobile device can therefore accurately identify obstacle point cloud data based on the first color information and the second color information, allowing it to recognize low obstacles and avoid them precisely, effectively improving its autonomous performance.
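The color-mismatch test above can be sketched in a few lines. The application does not fix a color representation or matching rule, so the 8-levels-per-channel quantization and the helper names below are assumptions of this sketch:

```python
import numpy as np

def dominant_color(pixels, bins=8):
    """Return the most frequent quantized RGB color in a region.

    pixels: (N, 3) uint8 array of RGB values from the region.
    bins:   quantization levels per channel (an assumed scheme).
    """
    q = (pixels // (256 // bins)).astype(np.int32)           # quantize each channel
    keys = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]  # one key per color cell
    counts = np.bincount(keys, minlength=bins ** 3)
    top = int(np.argmax(counts))                             # largest-area color cell
    return np.array([top // (bins * bins), (top // bins) % bins, top % bins])

def is_obstacle(plane_pixels, region_pixels):
    """Obstacle confirmed when the region's dominant color (second color
    information) differs from the plane's dominant color (first color
    information) -- the mismatch test described above."""
    return not np.array_equal(dominant_color(plane_pixels),
                              dominant_color(region_pixels))
```

A gray stone region on a green-grass plane, for instance, would yield a mismatch and be confirmed as an obstacle.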
  • the first device acquires original point cloud data.
  • the original point cloud data is point cloud data based on the coordinate system of the mobile device.
  • the original point cloud data is the point cloud data of all objects in the depth image collected from the mobile device in the coordinate system of the mobile device.
  • the original point cloud data includes ground point cloud data, point cloud data corresponding to the grass slope, point cloud data corresponding to the stones, and point cloud data corresponding to the trees.
  • the mobile device can also obtain original point cloud data through a depth camera, a laser scanner, a laser radar, or other instruments that can obtain point cloud data.
  • For each image pixel in the depth image, the pixel's coordinates, its depth value, and the camera's corresponding intrinsic parameters are used to compute the point cloud data of that pixel in the camera coordinate system according to formulas (1) to (4):
  • x_c = (u - l_x) · d / f_x    formula (1)
  • y_c = (v - l_y) · d / f_y    formula (2)
  • z_c = d    formula (3)
  • p_c = (x_c, y_c, z_c)    formula (4)
  • where (u, v) represents the pixel coordinates of an image pixel in the depth image under the image coordinate system; d is the depth value of the image pixel; l_x, l_y, f_x, and f_y are the camera's intrinsic parameters; and p_c represents the coordinates of the pixel's point cloud data in the camera coordinate system.
  • The camera-frame point cloud data is then transformed into the coordinate system of the self-moving device according to formula (5):
  • p_r = R_rc · p_c + T_rc    formula (5)
  • where p_r represents the coordinates of the original point cloud data in the coordinate system of the self-moving device, R_rc represents the rotation parameters from the camera coordinate system to the device coordinate system, and T_rc represents the translation parameters; the rotation and translation parameters can be obtained through actual measurement.
  • Through formulas (1) to (5), the coordinates of each image pixel in the depth image can be obtained in the coordinate system of the self-moving device.
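The back-projection of formulas (1) to (5) can be sketched as follows; the vectorized layout and the function name are assumptions, with l_x, l_y playing the role of the principal-point intrinsics named in the text:

```python
import numpy as np

def depth_to_device_cloud(depth, fx, fy, lx, ly, R_rc, T_rc):
    """Back-project a depth image into the self-moving device frame.

    Formulas (1)-(4): p_c = ((u - lx) * d / fx, (v - ly) * d / fy, d);
    formula (5): p_r = R_rc @ p_c + T_rc.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))        # pixel coordinates (u, v)
    xc = (u - lx) * depth / fx                            # formula (1)
    yc = (v - ly) * depth / fy                            # formula (2)
    zc = depth                                            # formula (3)
    p_c = np.stack([xc, yc, zc], axis=-1).reshape(-1, 3)  # formula (4)
    return p_c @ np.asarray(R_rc).T + np.asarray(T_rc)    # formula (5)
```

With an identity rotation and zero translation, a pixel at the principal point with depth d simply maps to (0, 0, d) in the device frame.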
  • In the coordinate system of the self-moving device, the forward direction of the device is the positive direction of the X-axis, the direction 90 degrees counterclockwise from the forward direction is the positive direction of the Y-axis, and the direction perpendicular to the XY plane, pointing upward, is the positive direction of the Z-axis.
  • the first device obtains the normal vector corresponding to each point in the original point cloud data, and the first angle between the normal vector and the preset coordinate axis on the coordinate system.
  • the normal vector of the point cloud data is the normal vector corresponding to the point cloud data relative to the plane obtained by fitting the current point cloud data and at least two surrounding point cloud data.
  • The preset coordinate axis may be the horizontal axis (X-axis), the longitudinal axis (Y-axis), or the vertical axis (Z-axis) of the self-moving device coordinate system.
  • the first included angle between the normal vector and the preset coordinate axis on the self-mobile device coordinate system is the included angle between the normal vector and the positive direction of the Z-axis in the self-mobile device coordinate system.
  • When the first included angle of a point is less than the first angle threshold and the height of the point is less than the preset height threshold, the first device treats the point cloud data to which the point belongs as ground point cloud data.
  • the first angle threshold may be determined according to user settings.
  • The first angle threshold can be set to a relatively small value.
  • The preset height threshold may be determined based on the size of the self-moving device.
  • For example, the preset height threshold can be set to 5 centimeters and the preset first angle threshold to 5 degrees.
  • In that case, a point in the original point cloud data whose Z-axis component in the device coordinate system is less than 5 cm, and whose normal vector forms an angle of no more than 5 degrees with the Z-axis, is determined to belong to the ground point cloud data.
  • If the target area includes an obstacle with a surface parallel to the ground, for example a table, then based only on the first included angle of the points in the original point cloud data, the table would also be determined to be ground point cloud data.
  • Likewise, if the target area includes a low obstacle, for example a nail, then based only on the height of the points, the nail would also be determined to be ground point cloud data. Therefore, only point cloud data belonging to points whose first included angle is less than the first angle threshold and whose height is less than the preset height threshold is determined to be ground point cloud data, which improves recognition accuracy.
  • the first device removes the ground point cloud data from the original point cloud data to obtain the target point cloud data.
  • The ground point cloud data would interfere with the plane fitting process. Removing it from the original point cloud data therefore effectively reduces its interference with obstacle recognition and improves the accuracy of the obtained obstacle point cloud data to be confirmed. It also avoids computation on irrelevant ground points, improving the efficiency of the fitting process and hence of identifying the obstacle point cloud data to be confirmed.
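Assuming per-point normal vectors have already been estimated from local plane fits, the combined angle-and-height ground filter described above (5 degrees / 5 cm in the example) could look like this sketch:

```python
import numpy as np

def remove_ground(points, normals, angle_thresh_deg=5.0, height_thresh=0.05):
    """Drop ground points: a point is ground when its normal is within
    `angle_thresh_deg` of the device Z-axis AND its Z component is below
    `height_thresh` (5 degrees / 5 cm in the text's example).

    points, normals: (N, 3) arrays in the device frame; normals are
    assumed unit-length and precomputed from local plane fitting.
    """
    z_axis = np.array([0.0, 0.0, 1.0])
    cos_angle = np.abs(normals @ z_axis)                  # |cos| of the first angle
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    is_ground = (angle <= angle_thresh_deg) & (points[:, 2] < height_thresh)
    return points[~is_ground]                             # target point cloud data
```

A flat point near z = 0 is removed, while a table top (high) or a nail (steep normal) survives the filter, matching the two counterexamples above.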
  • the above S102 may include the specific implementation of S5 to S8 in Figure 5:
  • the first device performs plane fitting on the target point cloud data.
  • the random sampling consensus (Random Sample Consensus, Ransac) algorithm can be used to perform plane fitting on the target point cloud data.
  • this embodiment takes the Ransac algorithm as an example to fit the target point cloud data.
  • the algorithm for plane fitting is not limited.
  • the least squares method can be used to perform plane fitting on the target point cloud data.
  • The first device determines, according to the fitting plane, the first point cloud data located on the fitting plane and the second included angle between the fitting plane and the preset coordinate axis of the self-moving device coordinate system.
  • the point cloud data whose shortest distance to the fitted plane during the fitting process is less than the interior point distance threshold can be called interior point data.
  • When the target point cloud data is fitted and the number of interior point data is greater than the preset interior point threshold, the target point cloud data is considered successfully fitted and a fitting plane is obtained; the number of interior points of the fitting plane is greater than the preset interior point threshold.
  • the target point cloud data includes other point cloud data in the target area except the ground point cloud data.
  • the target point cloud data includes point cloud data of slopes, point cloud data of obstacles on the ground, and/or point cloud data of obstacles on the slope.
  • Since a slope occupies a large volume in the target area, there is a large amount of slope point cloud data, and most of it lies on the same plane.
  • The principle of fitting is to make the fitting plane contain as many interior points as possible. Therefore, after plane fitting the target point cloud data, the obtained fitting plane can be considered to be the plane of the slope, and the point cloud data on the fitting plane is considered to be the point cloud data of the slope.
  • For example, set the inlier threshold to 80% of the number of target point cloud data; that is, when the number of inliers of the fitting plane is greater than 80% of the number of target point cloud data, the fitting plane is determined to be a slope in the target area. Suppose the target area is as shown in Figure 6: it includes a slope 601, a tree on the ground (obstacle 602), and a stone on the slope (obstacle 603). The target point cloud data then includes the point cloud data corresponding to slope 601, obstacle 602, and obstacle 603.
  • Suppose the number of target point cloud data is 31,000, the number of point cloud data corresponding to slope 601 is 25,990, and the total number of point cloud data corresponding to obstacles 602 and 603 is 4,010.
  • The Ransac algorithm is used to fit a plane to the target point cloud data, obtaining the point cloud data corresponding to the fitting plane 604, which is the point cloud data corresponding to slope 601.
  • The number of interior points of the fitting plane is 83.3% of the target point cloud data, which is greater than the inlier threshold. It follows that if the number of inliers during fitting is greater than or equal to the inlier threshold, a fitting plane is obtained; that is, the target area includes a slope, and the fitting plane can be considered to be the plane of the slope.
  • the inlier threshold can be determined based on the amount of target point cloud data.
  • the inlier threshold can be 80% of the amount of target point cloud data.
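The Ransac fitting with the inlier-count acceptance test can be sketched as below; the iteration count, distance threshold, and function name are assumptions, while the 80% `inlier_ratio` follows the example above:

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, iters=200, inlier_ratio=0.8, seed=0):
    """Minimal RANSAC plane fit (a sketch; thresholds are examples).

    Returns (plane, inlier_mask) when the best candidate captures at
    least `inlier_ratio` of the points (the 80% inlier threshold from
    the text), otherwise (None, None), i.e. no slope in the target
    area. plane = (n, d) with n a unit normal and n . p + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                                   # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        mask = np.abs(points @ n + d) < dist_thresh       # interior point test
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    if best_mask is not None and best_mask.sum() >= inlier_ratio * len(points):
        return best_plane, best_mask
    return None, None
```

Run on a cloud dominated by one plane (the slope case), it returns that plane and its inliers; on scattered obstacle-only clouds (the Figure 7 case), the inlier count stays below the threshold and it returns (None, None).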
  • the first point cloud data located on the fitting plane and the second angle between the fitting plane and the preset coordinate axis on the coordinate system of the mobile device can be determined according to the fitting plane.
  • The second included angle is the angle between the fitting plane and the Z-axis of the self-moving device coordinate system. It can be used to describe the angle between the fitting plane and the plane on which the self-moving device stands and, further, the gradient of the slope in the target area.
  • operation S8 or S9 can be performed according to the relationship between the second included angle and the second included angle threshold.
  • When the second included angle is greater than or equal to the second angle threshold, the first device determines the first point cloud data to be obstacle point cloud data.
  • That is, the first device determines the point cloud data of the slope to be obstacle point cloud data, so the slope is treated as an obstacle and can be avoided.
  • For example, set the second angle threshold to 45 degrees.
  • When the gradient of the slope is greater than or equal to 45 degrees, that is, when the angle between the slope's fitting plane and the Z-axis of the device coordinate system is greater than or equal to 45 degrees, the self-moving device cannot pass the slope, so the slope is treated as an obstacle and obstacle avoidance measures are required.
  • In this case, the first device determines the first point cloud data as the obstacle point cloud data.
  • When the second included angle is less than the second angle threshold, the first device removes the first point cloud data from the target point cloud data to obtain the second point cloud data.
  • That is, the slope is determined to be a passable slope; in other words, the point cloud data of the slope's fitting plane is not obstacle point cloud data. In this case, the first point cloud data on the fitting plane is removed from the target point cloud data to obtain the second point cloud data.
  • For example, with the second angle threshold set to 45 degrees, when the gradient of the slope is less than 45 degrees, that is, when the angle between the slope's fitting plane and the Z-axis of the device coordinate system is less than 45 degrees, the self-moving device can successfully negotiate the slope, so the slope is not an obstacle. In this case, the first point cloud data on the fitting plane is removed from the target point cloud data to obtain the second point cloud data.
  • Since a robot's climbing is limited by a maximum slope angle, the robot usually performs obstacle avoidance when an object ahead exceeds that angle. Therefore, when plane fitting determines that the target area contains a slope and the slope's second included angle is greater than or equal to the second angle threshold, the slope exceeds the maximum slope angle and the robot cannot pass it, so the slope is identified as an obstacle. Conversely, when the second included angle is less than the second angle threshold, the robot can pass the slope, and the point cloud data of the fitting plane is treated as non-obstacle point cloud data.
  • The method provided by the embodiment of the present application also includes S6 in Figure 5: when no plane is obtained by fitting, the first device uses the target point cloud data as the second point cloud data and clusters the second point cloud data to obtain the obstacle point cloud data to be confirmed.
  • When no plane is obtained by fitting, the target point cloud data is the point cloud data of one or more obstacles (other than the ground) in the target area. Since the heights of these obstacles generally differ and their distribution follows no fixed pattern, the target point cloud is relatively scattered in the device coordinate system. When plane fitting is attempted on such data, the number of interior points of any candidate plane cannot reach the preset interior point threshold, so no fitting plane is obtained.
  • For example, set the interior point threshold to 80% of the number of target point cloud data; that is, when the number of interior points during fitting is less than 80% of the number of target point cloud data, no plane is obtained by fitting, and the target area therefore includes no slope. Suppose the target area is as shown in Figure 7: it includes no slope, only the ground 701 and five obstacles, namely obstacle 702, obstacle 703, obstacle 704, obstacle 705, and obstacle 706. The target point cloud data then includes 1030 point cloud data corresponding to obstacle 702, 890 corresponding to obstacle 703, 2305 corresponding to obstacle 704, 760 corresponding to obstacle 705, and 861 corresponding to obstacle 706, and the number of target point cloud data is 6000.
  • the Ransac algorithm is used to fit the plane to the target point cloud.
  • the maximum number of interior points obtained is 30.2% of the target point cloud data.
  • Since 30.2% is less than the inlier threshold, no plane is obtained by fitting. It follows that if the number of inliers during plane fitting is less than the inlier threshold, no fitting plane can be obtained, and it can further be determined that the target area does not include a slope.
  • In this case, the target point cloud data can be used directly as the second point cloud data, and the second point cloud data is clustered.
  • The obtained obstacle point cloud data to be confirmed then includes all obstacle point cloud data, with no missed detections, which effectively improves obstacle identification accuracy.
  • clustering the second point cloud data in S10 to obtain the obstacle point cloud data to be confirmed can be implemented through S11 to S14 in Figure 8:
  • the first device performs clustering processing on the second point cloud data to obtain different cluster groups.
  • After clustering the second point cloud data, one or more different cluster groups are obtained, where the cluster groups are obtained by clustering according to preset obstacle categories.
  • Different cluster groups include different numbers of point cloud data points.
  • This application does not limit the algorithm used when performing clustering processing on the second point cloud data.
  • the second point cloud data can be clustered using algorithms such as k-means clustering algorithm, Gaussian mixture model algorithm, and expectation maximization algorithm.
  • the first device determines the suspicious obstacle cluster group from each cluster group based on the number of points in the point cloud data corresponding to each cluster group.
  • Generally, the number of points in the point cloud data of a cluster group corresponding to an obstacle exceeds a certain threshold; therefore, cluster groups whose corresponding point count exceeds the threshold are identified as suspicious obstacle cluster groups.
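The application leaves the clustering algorithm open (k-means, Gaussian mixture, and expectation maximization are all named); as a stand-in, a simple Euclidean region-growing clusterer plus the point-count filter can be sketched as follows, where the `radius` and `min_points` values are assumed examples:

```python
import numpy as np

def euclidean_clusters(points, radius=0.1):
    """Label points by region growing: any point within `radius` of a
    cluster member joins that cluster (a simple stand-in clusterer)."""
    labels = -np.ones(len(points), dtype=int)  # -1 means unassigned
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            near = np.where((labels == -1) &
                            (np.linalg.norm(points - points[j], axis=1) <= radius))[0]
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels

def suspicious_groups(points, labels, min_points=50):
    """Keep cluster groups whose point count exceeds `min_points` as
    suspicious obstacle cluster groups (the threshold from the text;
    the value 50 is an assumed example)."""
    return [np.where(labels == c)[0]
            for c in range(labels.max() + 1)
            if np.count_nonzero(labels == c) > min_points]
```

A dense blob of points thus becomes one cluster group, and only groups large enough to plausibly be obstacles survive the filter.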
  • The first device marks the point cloud data corresponding to the suspicious obstacle cluster group as obstacle point cloud data.
  • the height of the suspicious obstacle cluster group from the nearest plane is the distance between the points of the point cloud data in the suspicious obstacle cluster group and the ground.
  • the height of the suspicious obstacle cluster group from the nearest plane is the distance between the points of the point cloud data in the suspicious obstacle cluster group and the slope.
  • For example, the third height threshold may be set to 20 cm; the height of each suspicious obstacle cluster group from its nearest plane is then compared against this threshold to decide whether its point cloud data is marked as obstacle point cloud data.
  • When determining the height of a suspicious obstacle cluster group from its nearest plane and the target area includes a slope, Method 1 can be used:
  • Method 1: when the target area includes a slope, separately calculate the distance between the suspicious obstacle cluster group and the ground and the distance between it and the slope plane, and take the smaller value as the height of the cluster group from its nearest plane.
  • the height of the suspicious obstacle cluster from the nearest plane is 50 cm.
  • Method 2 applies when the target area does not include a slope: determine the distance between the suspicious obstacle cluster group and the ground as the height of the suspicious obstacle cluster group from the nearest plane.
  • Method 3 may be: determine the coordinates of the center point of the suspicious obstacle cluster group from the coordinates of its multiple point cloud data in the coordinate system of the self-moving device, and then determine the shortest distance from the center point to the ground as the distance between the suspicious obstacle cluster group and the ground.
  • for example, the average of the coordinates of all point cloud data in the cluster group can be calculated as the coordinates of the cluster group's center point.
  • for example, suppose the suspicious obstacle cluster group includes 10 point cloud data whose coordinates in the coordinate system of the self-moving device are (13, 54, 23), (15, 56, 44), (13, 51, 37), (16, 53, 22), (14, 54, 34), (16, 52, 71), (17, 53, 41), (19, 55, 35), (12, 49, 29) and (18, 53, 49). The coordinates of the center point are then (15.3, 53, 38.5); with one unit in the coordinate system of the self-moving device corresponding to 1 cm, the distance between the suspicious obstacle cluster group and the ground is 38.5 cm.
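  As a minimal Python sketch of Method 3 (names are illustrative), the center point can be computed as the per-axis average of the cluster's point coordinates, reproducing the example above:

```python
def cluster_centroid(points):
    """Center point of a cluster group: per-axis mean of its point coordinates."""
    n = len(points)
    return tuple(sum(p[axis] for p in points) / n for axis in range(3))

points = [(13, 54, 23), (15, 56, 44), (13, 51, 37), (16, 53, 22), (14, 54, 34),
          (16, 52, 71), (17, 53, 41), (19, 55, 35), (12, 49, 29), (18, 53, 49)]
center = cluster_centroid(points)
# With one coordinate unit equal to 1 cm, the Z component of the center point
# (38.5) is taken as the cluster group's distance from the ground.
```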
  • Method 4 may be: based on the coordinates of the multiple point cloud data of the suspicious obstacle cluster group in the coordinate system of the self-moving device, determine the highest point among them, and then determine the shortest distance from that point to the ground as the distance between the suspicious obstacle cluster group and the ground.
  • for example, suppose the first target object includes 10 point cloud data whose coordinates in the coordinate system of the self-moving device are (13, 54, 23), (15, 56, 44), (13, 51, 37), (16, 53, 22), (14, 54, 34), (16, 52, 71), (17, 53, 41), (19, 55, 35), (12, 49, 29) and (18, 53, 49). The coordinates of the point with the highest height are (16, 52, 71); with one unit in the coordinate system of the self-moving device corresponding to 1 cm, the distance between the suspicious obstacle cluster group and the ground is 71 cm.
  • Method 5 may be: determine the average of the height components (Z-axis components) of the multiple point cloud data of the suspicious obstacle cluster group in the coordinate system of the self-moving device, and take that average as the distance of the suspicious obstacle cluster group from the ground.
  • for example, suppose the suspicious obstacle cluster group includes 10 point cloud data whose Z-axis components in the coordinate system of the self-moving device are 23, 44, 37, 22, 34, 71, 41, 35, 29 and 49. The average of these components is 38.5; with one unit in the coordinate system corresponding to 1 cm, the distance between the suspicious obstacle cluster group and the ground is 38.5 cm.
  • Method 6 may be: determine the maximum of the height components (Z-axis components) of the multiple point cloud data in the first target object in the coordinate system of the self-moving device, and take that maximum as the distance of the suspicious obstacle cluster group from the ground.
  • for example, suppose the first target object includes 10 point cloud data whose Z-axis components in the coordinate system of the self-moving device are 23, 44, 37, 22, 34, 71, 41, 35, 29 and 49. The maximum of these components is 71; with one unit in the coordinate system corresponding to 1 cm, the distance between the suspicious obstacle cluster group and the ground is 71 cm.
  • Method 7 may be: based on the height components (Z-axis components) of the multiple point cloud data of the suspicious obstacle cluster group in the coordinate system of the self-moving device, determine the median of these components and take it as the distance between the suspicious obstacle cluster group and the ground.
  • for example, suppose the suspicious obstacle cluster group includes 10 point cloud data whose Z-axis components in the coordinate system of the self-moving device are 23, 44, 37, 22, 34, 71, 41, 35, 29 and 49. The median is 36; with one unit in the coordinate system corresponding to 1 cm, the distance between the suspicious obstacle cluster group and the ground is 36 cm.
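  Methods 5 to 7 all reduce the cluster group's Z-axis components to a single value; a minimal Python sketch (the function name and `mode` parameter are illustrative) reproducing the examples above:

```python
from statistics import mean, median

def ground_distance(z_components, mode="median"):
    """Cluster group's distance from the ground, computed from the Z-axis
    components of its points: the per-point average (method 5), the highest
    point (method 6) or the median (method 7)."""
    if mode == "mean":
        return mean(z_components)
    if mode == "max":
        return max(z_components)
    return median(z_components)

z = [23, 44, 37, 22, 34, 71, 41, 35, 29, 49]  # one coordinate unit = 1 cm
# method 5 -> 38.5 cm, method 6 -> 71 cm, method 7 -> 36 cm
```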
  • the first device determines the point cloud data corresponding to the suspicious obstacle cluster group as the obstacle point cloud data to be confirmed.
  • the point cloud data corresponding to suspicious obstacle cluster groups whose height is greater than or equal to the third height threshold is directly confirmed as obstacle point cloud data, and only the point cloud data corresponding to suspicious obstacle cluster groups whose height is less than the third height threshold is determined as obstacle point cloud data to be confirmed.
  • only the point cloud data to be confirmed is then further analyzed and identified in combination with color information, which reduces the amount of computation needed to identify obstacle point cloud data; directly confirming suspicious obstacle cluster groups higher than the third height threshold as obstacle point cloud data improves the accuracy of identifying obstacles.
  • S103 can be implemented in the following manner:
  • the self-moving device is equipped with a camera used to capture the image information of the plane where the area corresponding to the obstacle point cloud data to be confirmed is located; the image information captured by this camera is sent to the first device, so that the first device obtains the image information of that plane.
  • the first device can then perform color extraction on the image information to obtain multiple pieces of color information, and filter out the main color information from them as the first color information.
  • the image information can be an RGB image.
  • when performing color extraction on the image information, obtaining multiple pieces of color information, and filtering out the main color information as the first color information, the first color information can be determined through the following steps:
  • for example, use the k-means clustering algorithm to extract the first color of the image information of the target area. Before extraction, set the termination condition, the maximum number of iterations, and the method of selecting the cluster centers. After the image information is processed with k-means clustering, multiple cluster groups are obtained, which correspond to multiple pieces of color information respectively. The color of the points belonging to the cluster center of the largest cluster group can then be filtered out as the main color information, and the main color information is determined as the first color information.
  • for example, the termination condition can be that fewer than 3 points of the point cloud data are reassigned to different clusters in the next clustering round, or that the cluster centers of fewer than 2 clusters change in the next round.
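  As a minimal pure-Python sketch of dominant-color extraction with k-means (the function name, pixel values and fixed seed are illustrative; a real implementation would use the termination conditions described above rather than a fixed iteration count):

```python
import random

def dominant_color(pixels, k=2, iters=10, seed=0):
    """Pick the main color of an image by k-means over its RGB pixels.

    Returns the rounded center of the largest cluster as the first color
    information; a fixed seed keeps the sketch deterministic.
    """
    rng = random.Random(seed)
    centres = rng.sample(pixels, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to the nearest cluster center
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centres[j])))
            groups[i].append(p)
        # recompute centers; an empty group keeps its previous center
        centres = [tuple(sum(c[d] for c in g) / len(g) for d in range(3))
                   if g else centres[i]
                   for i, g in enumerate(groups)]
    largest = max(groups, key=len)
    return tuple(round(sum(c[d] for c in largest) / len(largest))
                 for d in range(3))

# Mostly-green "lawn" pixels with a few grey stone pixels mixed in.
pixels = [(60, 180, 75)] * 90 + [(128, 128, 128)] * 10
main_colour = dominant_color(pixels)
```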
  • k-means clustering is used here as an example to describe how the first color information is determined; in practice, any algorithm capable of extracting colors from an image can be used, such as deep-learning algorithms or the median-cut method.
  • the first color information may be the color number of the colors included in the plane or the color change rule of the plane.
  • the image information of the plane where the area of the obstacle point cloud data to be confirmed is located may contain multiple colors, that is, interfering color information exists; filtering out the main color information from the multiple pieces of color information as the first color information can effectively reduce this interference, thereby improving the accuracy of identifying obstacles.
  • S104 can be implemented through the following steps:
  • after the obstacle to be confirmed is obtained, according to the point cloud data in the cluster group corresponding to the obstacle to be confirmed, the area corresponding to that cluster group is found in the corresponding RGB image, and the color information of that area is determined as the second color information.
  • the second color information may be the color number of the color included in the area or the color change rule.
  • S105 can be implemented through the following steps:
  • when calculating the color difference value between the second color information and the first color information, the image can be converted into the HSV color space (Hue, Saturation, Value) or the LAB color space (Lab color space), and the color difference value between the second color information and the first color information is then calculated in that space.
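  As a minimal Python sketch of such a comparison (using the standard-library `colorsys` HSV conversion and a plain Euclidean distance; the threshold value is illustrative, and a perceptual metric in LAB space would be more faithful):

```python
import colorsys
import math

def color_difference(rgb_a, rgb_b):
    """Euclidean distance between two RGB colors after converting them to HSV."""
    hsv_a = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb_a])
    hsv_b = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb_b])
    return math.dist(hsv_a, hsv_b)

GREEN_LAWN = (60, 180, 75)    # first color information (plane)
GREY_STONE = (128, 128, 128)  # second color information (area)
COLOR_DIFF_THRESHOLD = 0.2    # illustrative preset color difference threshold

# The stone's color differs from the lawn's by more than the threshold, so
# its point cloud data would be confirmed as obstacle point cloud data.
is_obstacle = color_difference(GREEN_LAWN, GREY_STONE) > COLOR_DIFF_THRESHOLD
```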
  • since the first color information represents the color information of the plane where the obstacle point cloud data to be confirmed is located, and the second color information represents the color information of the area where the obstacle point cloud data to be confirmed is located, the color difference value between the second color information and the first color information can be calculated; when the color difference value is greater than the preset color difference threshold, the obstacle point cloud data to be confirmed is taken as obstacle point cloud data.
  • for example, when a gray stone lies on a lawn, the point cloud data corresponding to the gray stone can be obtained as the obstacle point cloud data to be confirmed; the plane where the stone's point cloud data is located is the lawn, and the area where the stone's point cloud data is located is the stone.
  • in that case the first color information is green and the second color information is gray; since the color difference between gray and green exceeds the preset color difference threshold, the point cloud data of the gray stone can be taken as obstacle point cloud data, accurately identifying the obstacle.
  • by taking the obstacle point cloud data to be confirmed as obstacle point cloud data when the color difference exceeds the threshold, low obstacles can be identified accurately, and when a passable slope has the same color as the ground, the slope is not identified as an obstacle, effectively improving the accuracy of identifying obstacles.
  • the first color information may also be the color change rule of the plane where the obstacle point cloud data to be confirmed is located, and the second color information may be the color change rule of the area where the point cloud data to be confirmed is located.
  • when the color difference between the second color information and the first color information is lower than a certain threshold, the obstacle to be confirmed can be considered not to be an obstacle.
  • in this application, the obstacle point cloud data to be confirmed can first be determined based on the height of the suspicious obstacle cluster group from the nearest plane, and then, based on the first color information of the plane where the point cloud data is located and the second color information of the area where the point cloud data to be confirmed is located, the obstacle point cloud data to be confirmed whose second color information does not match the first color information is confirmed as obstacle point cloud data.
  • alternatively, the color information of the plane where the suspicious obstacle cluster group is located and the color information of the area where the suspicious obstacle cluster group is located can be compared first: suspicious obstacle cluster groups whose two pieces of color information do not match are determined as obstacle point cloud data to be confirmed, and then, based on the height of that point cloud data from the nearest plane, the obstacle point cloud data to be confirmed whose height is greater than the preset third height threshold is confirmed as obstacle point cloud data.
  • Figure 9 is a schematic block diagram of an obstacle recognition device 900 provided by an embodiment of the present application, including an acquisition unit 901, a processing unit 902, and a confirmation unit 903.
  • the acquisition unit 901 is used to acquire target point cloud data in the target area, where the target point cloud data does not include ground point cloud data.
  • the processing unit 902 is configured to perform fitting and clustering processing on the target point cloud data to obtain obstacle point cloud data to be confirmed.
  • the acquisition unit 901 is also used to acquire the first color information of the plane where the obstacle point cloud data to be confirmed is located.
  • the acquisition unit 901 is also configured to acquire the second color information of the area where the obstacle point cloud data to be confirmed is located.
  • the confirmation unit 903 is configured to confirm that the obstacle point cloud data to be confirmed is obstacle point cloud data when the second color information does not match the first color information.
  • the acquisition unit 901 is also used to acquire original point cloud data, where the original point cloud data is point cloud data based on the coordinate system of the mobile device.
  • the acquisition unit 901 is also used to acquire the normal vector corresponding to each point in the original point cloud data, and the first included angle between the normal vector and a preset coordinate axis of the coordinate system.
  • the confirmation unit 903 is also configured to take the point cloud data to which a point belongs as ground point cloud data when the first included angle of the point in the original point cloud data is less than the first included angle threshold and the height of the point is less than a preset height threshold.
  • processing unit 902 is also configured to remove the ground point cloud data from the original point cloud data to obtain target point cloud data.
  • the processing unit 902 is also configured to determine, according to the fitting plane, the first point cloud data located on the fitting plane and the relative position of the fitting plane when the target point cloud data is fitted to obtain a fitting plane. at a second included angle with the preset coordinate axis on the coordinate system of the self-moving device.
  • the confirmation unit 903 is also configured to determine the first point cloud data as the point cloud data of the obstacle when the second included angle is greater than or equal to the second included angle threshold.
  • the processing unit 902 is also configured to remove the first point cloud data from the target point cloud data to obtain the second point cloud data when the second included angle is less than the second included angle threshold. .
  • the processing unit 902 is also configured to perform clustering processing on the second point cloud data to obtain point cloud data of obstacles to be confirmed.
  • the confirmation unit 903 is also configured to use the target point cloud data as the second point cloud data when the plane is not obtained by fitting.
  • the confirmation unit 903 is also configured to perform clustering processing on the second point cloud data to obtain different clustering groups, wherein the clustering groups are clustered according to preset obstacle categories. Multiple point cloud data obtained by class.
  • the confirmation unit 903 is also configured to determine suspicious obstacle cluster groups from each cluster group based on the number of points in the point cloud data corresponding to each cluster group.
  • the confirmation unit 903 is also configured to calibrate the point cloud data corresponding to the suspicious obstacle cluster group as an obstacle when the height of the suspicious obstacle cluster group from the nearest plane exceeds the third height threshold. Point cloud data of objects.
  • the confirmation unit 903 is also configured to determine the point cloud data corresponding to the suspicious obstacle cluster group when the height of the suspicious obstacle cluster group from the nearest plane is less than the third height threshold. It is the point cloud data of the obstacle to be confirmed.
  • the method further includes a calculation unit configured to calculate a color difference value between the second color information and the first color information.
  • the confirmation unit 903 is also configured to use the obstacle point cloud data to be confirmed as obstacle point cloud data when the color difference value is greater than a preset color difference threshold.
  • the acquisition unit 901 is also used to acquire the image information of the plane where the area corresponding to the obstacle point cloud data to be confirmed is located.
  • the processing unit 902 is also used to perform color extraction on the image information to obtain multiple color information.
  • the confirmation unit 903 is also configured to select main color information as the first color information from the plurality of color information.
  • the acquisition unit 901 is also used to acquire lidar point cloud data from the mobile device.
  • the processing unit 902 is also configured to convert the lidar point cloud data into the coordinate system of the mobile device to obtain the original point cloud data.
  • the device 900 in the embodiments of this application can be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD); the PLD can be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL) or any combination thereof.
  • the method of identifying obstacles shown in FIG. 1 may also be implemented through software. When the method of identifying obstacles shown in FIG. 1 is implemented through software, the device 900 and its respective modules may also be software modules.
  • Figure 10 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the device 1000 includes a processor 1001, a memory 1002, a communication interface 1003 and a bus 1004.
  • the processor 1001, the memory 1002, and the communication interface 1003 communicate through the bus 1004. Communication can also be achieved through other means such as wireless transmission.
  • the memory 1002 is used to store instructions, and the processor 1001 is used to execute the instructions stored in the memory 1002.
  • the memory 1002 stores program code 10021, and the processor 1001 can call the program code 10021 stored in the memory 1002 to execute the method of identifying obstacles shown in FIG. 1 .
  • the processor 1001 may be a CPU.
  • the processor 1001 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor can be a microprocessor or any conventional processor, etc.
  • the memory 1002 may include read-only memory and random access memory, and provides instructions and data to the processor 1001. Memory 1002 may also include non-volatile random access memory.
  • the memory 1002 may be volatile memory or non-volatile memory, or may include both. The non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) or flash memory. The volatile memory may be random access memory (RAM), used as an external cache; by way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct rambus RAM (DR RAM).
  • bus 1004 may also include a power bus, a control bus, a status signal bus, etc. However, for the sake of clarity, the various buses are labeled bus 1004 in FIG. 10 .
  • the device 1000 may correspond to the device 900 in the embodiment of the present application, and may correspond to the first device in the method shown in FIG. 1 in the embodiment of the present application.
  • the device 1000 corresponds to the method shown in FIG. 1
  • the above and other operations and/or functions of the modules in the device 1000 implement the operating steps of the method performed by the first device in Figure 1; for brevity, they are not described again here.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program; when the computer program is executed, the steps in each of the above method embodiments can be implemented.
  • embodiments of this application also provide a computer program product; when the computer program product is executed by the self-moving device, the steps in each of the above method embodiments can be implemented.
  • the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.


Abstract

A method, device and storage medium for identifying obstacles, including: acquiring target point cloud data of a target area, where the target point cloud data does not include ground point cloud data; performing fitting and clustering processing on the target point cloud data to obtain obstacle point cloud data to be confirmed; acquiring first color information of the plane where the obstacle point cloud data to be confirmed is located and second color information of the area where the obstacle point cloud data to be confirmed is located; and, when the second color information does not match the first color information, confirming the obstacle point cloud data to be confirmed as obstacle point cloud data, so that low obstacles can be identified accurately.

Description

Obstacle recognition method, device and storage medium
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202210278336.9, filed with the Chinese Patent Office on March 21, 2022 and entitled "Obstacle recognition method, device and storage medium", the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the technical field of intelligent robots, and in particular relates to an obstacle recognition method and device.
Background
The statements herein merely provide background information related to this application and do not necessarily constitute exemplary technology.
During the movement of a self-moving device, in order to ensure that the device can move smoothly, the obstacles around it need to be identified accurately to prevent the device from colliding with them.
In the related art, however, when a self-moving device identifies obstacles, the recognition accuracy for obstacles of low height, that is, low obstacles, is poor, so the device is prone to colliding with them; at the same time, a low slope is easily mistaken for an obstacle and the accuracy of identifying slopes is low, which reduces the working efficiency of the self-moving device.
Summary
According to various embodiments of this application, a method, device and storage medium for identifying obstacles are provided.
This embodiment provides a method for identifying obstacles, including: acquiring target point cloud data in a target area, where the target point cloud data does not include ground point cloud data; performing fitting and clustering processing on the target point cloud data to obtain obstacle point cloud data to be confirmed; acquiring first color information of the plane where the obstacle point cloud data to be confirmed is located; acquiring second color information of the area where the obstacle point cloud data to be confirmed is located; and, when the second color information does not match the first color information, confirming the obstacle point cloud data to be confirmed as obstacle point cloud data.
An embodiment of this application provides an apparatus for identifying obstacles, configured to perform the method in the first aspect or any possible implementation thereof. Specifically, the apparatus may include modules for performing the obstacle identification method in the first aspect or any possible implementation of the first aspect.
An embodiment of this application provides a device including a memory and a processor. The memory is used to store instructions; the processor executes the instructions stored in the memory, causing the device to perform the obstacle identification method in the first aspect or any possible implementation of the first aspect.
An embodiment of this application provides a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform the obstacle identification method in the first aspect or any possible implementation of the first aspect.
An embodiment of this application provides a computer program product containing instructions which, when run on a device, cause the device to perform the obstacle identification method in the first aspect or any possible implementation of the first aspect.
Details of one or more embodiments of this application are set forth in the drawings and description below. Other features, objects and advantages of this application will become apparent from the specification, drawings and claims.
Brief description of the drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or exemplary technology are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic flowchart of the obstacle recognition method provided by an embodiment of this application.
Figure 2 is a schematic diagram of the plane where the obstacle point cloud data to be confirmed is located, provided by an embodiment of this application.
Figure 3 is a schematic diagram of the area where the obstacle point cloud data to be confirmed is located, provided by an embodiment of this application.
Figure 4 is a schematic flowchart of the method for obtaining target point cloud data provided by an embodiment of this application.
Figure 5 is a schematic flowchart of the method for obtaining obstacle point cloud data to be confirmed provided by an embodiment of this application.
Figure 6 is a schematic diagram of a fitting plane provided by an embodiment of this application.
Figure 7 is a schematic diagram of a fitting plane provided by another embodiment of this application.
Figure 8 is a schematic flowchart of the method for clustering the second point cloud data provided by an embodiment of this application.
Figure 9 is a schematic structural diagram of an obstacle recognition apparatus 900 provided by an embodiment of this application.
Figure 10 is a schematic structural diagram of a device 1000 provided by an embodiment of this application.
Detailed description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are presented so that the embodiments of this application can be thoroughly understood. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted so that unnecessary details do not obscure the description of this application.
Taking a lawn-mowing robot as an example of a self-moving device: when the robot moves across a lawn containing a grass slope to mow, it cannot accurately identify the low stones on the lawn during obstacle recognition, collides with them, and cannot continue the mowing operation, which reduces working efficiency. Alternatively, taking an unmanned vehicle as an example: while driving on the road, the vehicle cannot accurately identify low obstacles on the road, collides with them, and cannot drive normally.
It can thus be seen that a method for identifying obstacles is urgently needed so that a self-moving device can accurately identify low obstacles, thereby improving its working efficiency.
The method provided by the embodiments of this application may be performed by a first device or by a chip in the first device. The first device may be a non-self-moving device, such as a server or an electronic device (for example, a mobile phone), or it may be a self-moving device, for example a robot (such as a lawn-mowing robot, a sweeping robot, a mine-clearing robot or a cruising robot) or a smart car. When the first device is a device other than the self-moving device, the first device and the self-moving device can communicate with each other. For example, an application (APP) corresponding to the self-moving device may be installed on a mobile phone, and the user can operate the APP on the phone to trigger the phone to establish a communication connection with the self-moving device. When the first device is a server, the user can, in the APP of the phone communicating with the self-moving device, trigger the self-moving device to report depth images and/or RGB images of the target area to the server, so that the server performs the obstacle identification method provided by the embodiments of this application. Alternatively, the self-moving device has a first control; the user triggers this control so that the self-moving device reports the depth images and/or RGB images of the target area collected by itself to the server, so that the server performs the obstacle identification method provided by the embodiments of this application.
The obstacle identification method provided by the embodiments of this application is described in detail below with reference to Figure 1.
S101: the first device acquires target point cloud data in a target area, where the target point cloud data does not include ground point cloud data.
In some embodiments, the first device can obtain a depth image of the current position of the self-moving device through a depth camera mounted on it, and determine the region of the depth image other than the ground region as the target area. In this case, the first device can determine, from the collected depth image, the target point cloud data in the target area, where the target point cloud data does not include ground point cloud data.
Optionally, the first device may also obtain the corresponding depth image, which includes point cloud data, through a depth camera, laser scanner, lidar or other instruments mounted on the self-moving device.
S102: the first device performs fitting and clustering processing on the target point cloud data to obtain obstacle point cloud data to be confirmed.
The obstacle point cloud data to be confirmed does not include point cloud data in the target point cloud data that can be directly determined as obstacles, nor point cloud data in the target point cloud data that can be directly determined as non-obstacles.
Specifically, the fitting and clustering processing includes fitting processing and clustering processing. The fitting processing is plane fitting, whose purpose is to determine whether the target area includes a slope. When a slope exists, the point cloud data corresponding to the slope is removed from the target point cloud data, and the remaining point cloud data is clustered to obtain one or more sets of obstacle point cloud data to be confirmed; when no slope exists, the target point cloud data is clustered to obtain multiple sets of obstacle point cloud data to be confirmed.
S103: the first device acquires the first color information of the plane where the obstacle point cloud data to be confirmed is located.
In some embodiments, the first color information is the color information with the largest area proportion determined from one or more pieces of color information of the plane where the obstacle to be confirmed is located.
For example, as shown in Figure 2, when the plane where the obstacle point cloud data 201 to be confirmed is located is plane 202, the first color information is the color information whose area proportion is the largest among the one or more pieces of color information of plane 202.
S104: the first device acquires the second color information of the area where the obstacle point cloud data to be confirmed is located.
In some embodiments, the second color information is the color information with the largest area proportion among the one or more pieces of color information of the area where the obstacle point cloud data to be confirmed is located.
As shown in Figure 3, when the area where the obstacle point cloud data to be confirmed is located is area 301, the second color information is the color information whose area proportion is the largest among the one or more pieces of color information of area 301.
S105: when the second color information does not match the first color information, the first device determines the obstacle point cloud data to be confirmed as obstacle point cloud data.
In some embodiments, when the color difference value between the first color information and the second color information is greater than a preset color difference threshold, the second color information can be considered not to match the first color information.
It should be understood that this application obtains the target point cloud data of the target area excluding ground point cloud data, performs fitting and clustering on it to obtain the obstacle point cloud data to be confirmed, and compares the first color information of the plane where the obstacle to be confirmed is located with the second color information of the area where the obstacle is located; when the second color information does not match the first color information, the obstacle point cloud data to be confirmed is determined as obstacle point cloud data. It can be seen that when a low obstacle exists in the target area, the color with the largest area proportion in the area where the obstacle point cloud data to be confirmed is located is the main color of the obstacle, that is, the second color information, and the color with the largest area proportion in the plane where the obstacle point cloud data to be confirmed is located is the main color of the plane the obstacle is on, that is, the first color information. Since there is a certain difference between the color of the obstacle and the color of the plane it is on, the second color information does not match the first color information. The self-moving device can accurately identify the obstacle point cloud data from the first color information and the second color information, so that it can accurately recognize low obstacles and avoid them precisely, effectively improving the working efficiency of the self-moving device.
In a possible implementation of this application, the above S101 can be implemented through S1 to S4 in Figure 4:
S1: the first device acquires original point cloud data, where the original point cloud data is point cloud data in the coordinate system of the self-moving device.
In some embodiments, the original point cloud data is the point cloud data, in the coordinate system of the self-moving device, of all objects in the depth image collected by the self-moving device.
For example, if the target area includes a lawn, a grass slope, stones and trees, the original point cloud data includes ground point cloud data and the point cloud data corresponding to the grass slope, the stones and the trees.
In some embodiments, the self-moving device can also acquire the original point cloud data through a depth camera, a laser scanner, a lidar or other instruments capable of acquiring point cloud data.
By way of example, when a depth camera is used to obtain point cloud data, the coordinates of each image pixel in the depth image, the depth value of the pixel, and the internal parameters of the camera are used to calculate the point cloud data of each image pixel in the camera coordinate system, which can be obtained according to formulas (1) to (4):
x_c = (u - l_x) · d / f_x   Formula (1)
y_c = (v - l_y) · d / f_y   Formula (2)
z_c = d   Formula (3)
p_c = (x_c, y_c, z_c)   Formula (4)
where (u, v) are the pixel coordinates of an image pixel of the depth image in the image coordinate system; d is the depth value of that pixel; l_x, l_y, f_x and f_y are internal parameters of the camera; and p_c is the point cloud coordinate, in the camera coordinate system, of an image pixel of the depth image.
Further, each point cloud datum in the camera coordinate system is converted according to formula (5) to obtain the original point cloud data in the coordinate system of the self-moving device:
p_r = R_rc × p_c + T_rc   Formula (5)
where p_r is the coordinate of the original point cloud data in the coordinate system of the self-moving device; R_rc is the rotation parameter from the camera coordinate system to the coordinate system of the self-moving device; T_rc is the translation parameter from the camera coordinate system to the coordinate system of the self-moving device; the rotation and translation parameters can be obtained through actual measurement.
Through the above formulas (1) to (5), the coordinates of each image pixel of the depth image in the coordinate system of the self-moving device can be obtained.
It should be understood that in the coordinate system of the self-moving device, the positive X axis is the direction in which the device moves forward, the positive Y axis is the direction 90 degrees counterclockwise from the forward direction, and the positive Z axis is the upward direction perpendicular to the plane on which the device is located.
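As a minimal Python sketch of formulas (1) to (5) (the intrinsic parameters, rotation and translation below are illustrative values, not from this application; formulas (1) and (2) are assumed to be the standard pinhole back-projection consistent with the parameter definitions above):

```python
def pixel_to_camera(u, v, d, fx, fy, lx, ly):
    """Formulas (1)-(4): back-project image pixel (u, v) with depth d into
    camera coordinates using the pinhole model."""
    xc = (u - lx) * d / fx
    yc = (v - ly) * d / fy
    zc = d
    return (xc, yc, zc)

def camera_to_robot(pc, R, T):
    """Formula (5): p_r = R_rc * p_c + T_rc, with R_rc a 3x3 rotation matrix
    (nested lists) and T_rc a translation vector, both measured in advance."""
    return tuple(sum(R[i][j] * pc[j] for j in range(3)) + T[i]
                 for i in range(3))

fx = fy = 500.0          # illustrative focal lengths (pixels)
lx, ly = 320.0, 240.0    # illustrative principal point
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation for the sketch
T = [0.0, 0.0, 0.1]      # camera mounted 0.1 m above the robot origin

pc = pixel_to_camera(420, 240, 2.0, fx, fy, lx, ly)  # (0.4, 0.0, 2.0)
pr = camera_to_robot(pc, R, T)
```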
S2: the first device obtains the normal vector corresponding to each point in the original point cloud data, and the first included angle between the normal vector and a preset coordinate axis of the coordinate system.
It should be understood that the normal vector of a point cloud datum is the normal vector of that datum relative to the plane fitted from the current point cloud datum and at least two surrounding point cloud data. The preset coordinate axis may be the horizontal axis (X axis), the longitudinal axis (Y axis) or the vertical axis (Z axis) of the coordinate system of the self-moving device.
It should be understood that the first included angle between the normal vector and the preset coordinate axis of the coordinate system of the self-moving device is the angle between the normal vector and the positive direction of the Z axis of that coordinate system.
S3: when the first included angle of a point in the original point cloud data is less than the first included angle threshold and the height of the point is less than a preset height threshold, the first device takes the point cloud data to which the corresponding point belongs as ground point cloud data.
In some embodiments, the first included angle threshold can be determined according to the user's setting.
It should be understood that, in practice, the angles between the normal vectors of all point cloud data corresponding to the ground and the Z axis of the coordinate system of the self-moving device are not all equal to zero, so the first included angle threshold can be set to a small value.
In some embodiments, the preset height threshold can be determined according to the size of the moving mechanism of the self-moving device.
For example, taking wheels as the moving mechanism of the self-moving device: if the wheel diameter is 5 cm, the preset height threshold can be 5 cm.
In one possible implementation, the preset height threshold can be set to 5 cm and the preset first included angle threshold to 5 degrees; then the point cloud data of those points in the original point cloud data whose Z-axis component in the coordinate system of the self-moving device is less than 5 cm, and whose normal vector makes an angle of 5 degrees or less with the Z axis of the coordinate system, is determined as ground point cloud data.
It should be understood that if the target area includes an obstacle parallel to the ground, such as a table, judging only by the first included angle of the points in the original point cloud data would classify the table as ground point cloud data too. Likewise, if the target area includes a low obstacle, such as a nail, judging only by the height of the points would classify the nail as ground point cloud data too. Therefore, only the point cloud data of those points whose first included angle is less than the first included angle threshold and whose height is less than the preset height threshold is determined as ground point cloud data, which improves the recognition accuracy.
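As a minimal Python sketch of the S3 ground test and the S4 removal that follows (function names are illustrative; normal vectors are assumed to be unit-length and already estimated, and the thresholds follow the 5-degree / 5 cm example above):

```python
import math

def is_ground_point(normal, height, angle_threshold_deg=5.0,
                    height_threshold=0.05):
    """S3: a point is ground only if its normal is nearly parallel to the
    Z axis (first included angle below the threshold) AND it sits low enough."""
    nz = max(-1.0, min(1.0, normal[2]))        # clamp for acos safety
    first_angle = math.degrees(math.acos(nz))  # angle to the +Z axis
    return first_angle < angle_threshold_deg and height < height_threshold

def remove_ground(points, normals):
    """S4: drop ground points from the original point cloud, keeping the
    rest as target point cloud data."""
    return [p for p, n in zip(points, normals)
            if not is_ground_point(n, p[2])]

points = [(0.5, 0.0, 0.01), (0.8, 0.1, 0.02), (1.0, 0.0, 0.30)]
normals = [(0.0, 0.0, 1.0), (0.02, 0.0, 0.9998), (1.0, 0.0, 0.0)]
target = remove_ground(points, normals)  # only the 0.30 m-high point remains
```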
S4: the first device removes the ground point cloud data from the original point cloud data to obtain the target point cloud data.
It should be understood that the ground is not an obstacle, and ground point cloud data would interfere with the subsequent fitting and clustering process. Therefore, during obstacle recognition there is no need to perform fitting and clustering on the ground point cloud data; in this case, the first device can remove the ground point cloud data from the original point cloud data to obtain the target point cloud data.
During fitting, ground point cloud data interferes with the plane-fitting process, so removing it from the original point cloud data effectively reduces its interference with obstacle recognition and improves the accuracy of the obtained obstacle point cloud data to be confirmed. At the same time, the computation spent on irrelevant ground point cloud data is reduced, which improves the efficiency of the fitting process and, in turn, the efficiency of identifying the obstacle point cloud data to be confirmed.
In a possible implementation of this application, the above S102 may include the specific implementation of S5 to S8 in Figure 5:
S5: the first device performs plane fitting on the target point cloud data.
By way of example, the plane fitting may be performed on the target point cloud data using the random sample consensus (Ransac) algorithm.
It should be understood that this embodiment uses the Ransac algorithm as an example for fitting the target point cloud data; in actual use, the plane-fitting algorithm is not limited. For example, the least-squares method or a gray-value interpolation algorithm may also be used for plane fitting of the target point cloud data.
S7: when a fitting plane is obtained by fitting the target point cloud data, the first device determines, according to the fitting plane, the first point cloud data located on the fitting plane and the second included angle of the fitting plane relative to the preset coordinate axis of the coordinate system of the self-moving device.
By way of example, the point cloud data whose shortest distance to the fitted plane during fitting is less than the inlier distance threshold may be called inlier data. When the target point cloud data is fitted, if the number of inliers is greater than a preset inlier threshold, the target point cloud data can be considered to have been fitted to obtain a fitting plane, where the number of inliers of the fitting plane is greater than the preset inlier threshold.
It should be understood that if the target area includes a slope, the target point cloud data includes all point cloud data of the target area other than the ground point cloud data; for example, the point cloud data of the slope, the point cloud data of obstacles on the ground, and/or the point cloud data of obstacles on the slope.
Since the slope occupies a large volume in the target area, its point cloud data is numerous and mostly lies on the same plane, and the principle of fitting is to place as many inliers as possible on the fitting plane. Therefore, after plane fitting of the target point cloud data, the resulting fitting plane can be regarded as the plane of the slope, and the first point cloud data on the fitting plane as the point cloud data of the slope.
For example, set the inlier threshold to eighty percent of the number of target point cloud data; that is, when the number of inliers of the fitting plane is greater than eighty percent of the number of target point cloud data, the fitting plane is determined to be a slope in the target area. Suppose the target area is as shown in Figure 6 and includes slope 601, a tree on the ground (obstacle 602) and a stone on the slope (obstacle 603). The target point cloud data then includes the point cloud data corresponding to slope 601, obstacle 602 and obstacle 603; the number of target point cloud data is 31000, of which 25990 correspond to slope 601 and 4010 in total to obstacles 602 and 603. Fitting a plane to the target point cloud data with the Ransac algorithm yields fitting plane 604, whose point cloud data is that of slope 601, and the number of inliers of the fitting plane is 83.3% of the target point cloud data. It can thus be seen that if the number of inliers during fitting is greater than or equal to the inlier threshold, a fitting plane can be obtained; that is, the target area includes a slope, and the fitting plane can be regarded as the plane of the slope.
Optionally, the inlier threshold can be determined according to the number of target point cloud data; for example, the inlier threshold can be eighty percent of the number of target point cloud data.
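As a minimal pure-Python sketch of the Ransac plane fit with the eighty-percent inlier threshold (function names, thresholds and the synthetic points are illustrative; a real implementation would operate on the full target point cloud):

```python
import random

def fit_plane_ransac(points, dist_threshold=0.02, inlier_ratio=0.8,
                     iterations=100, seed=0):
    """Fit a plane by random sample consensus: repeatedly pick 3 points,
    build the plane through them, and keep the plane with the most inliers.
    The plane is accepted only if its inliers exceed `inlier_ratio` of all
    points; otherwise the target area is assumed to contain no slope."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        normal = [u[1] * v[2] - u[2] * v[1],
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0]]
        norm = sum(c * c for c in normal) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        normal = [c / norm for c in normal]
        inliers = [p for p in points
                   if abs(sum(normal[i] * (p[i] - p1[i]) for i in range(3)))
                   < dist_threshold]
        if best is None or len(inliers) > len(best[1]):
            best = (normal, inliers)
    if best is not None and len(best[1]) >= inlier_ratio * len(points):
        return best      # (unit normal, inlier points) of the slope plane
    return None          # no plane fitted -> the target area has no slope

# 90 points on a tilted plane z = 0.2 * x, plus 10 scattered far-off points.
plane_pts = [(i * 0.1, j * 0.1, 0.02 * i) for i in range(10) for j in range(9)]
other_pts = [(0.5, 0.5, 1.0 + i) for i in range(10)]
result = fit_plane_ransac(plane_pts + other_pts)
```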
在得到拟合平面之后,可以根据拟合平面,确定位于拟合平面上的第一点云数据以及拟合平面相对于与自移动设备的坐标系上的预设坐标轴的第二夹角。
应理解,第二夹角为拟合平面与自移动设备的坐标系中的Z轴之间的夹角,第二夹角可以用来描述拟合平面与自移动设备所处平面之间的夹角,进一步的,第二夹角也可以用来描述目标区域中的斜坡的坡度。
在得到了第二夹角之后,可以根据第二夹角与第二夹角阈值之间的关系,进行S8或者S9的操作。
S8、在第二夹角大于或等于第二夹角阈值时,第一设备将第一点云数据确定为障碍物点云数据。
在一些实施例中,第二夹角大于或等于第二夹角阈值时,也即斜坡的坡度大于或等于第二夹角阈值时,第一设备将该斜坡的点云数据确定为障碍物的点云数据,从而将该斜坡确定为障碍物,可以对该障碍物进行避障处理。
例如,设置第二夹角阈值为45度,当斜坡的坡度大于45度时,即斜坡对应的拟合平面与自移动设备的坐标系中的Z轴之间的夹角大于45度时,由于自移动设备无法通过坡度大于或等于45度的斜坡,因此将斜坡视为障碍物,需要进行避障措施,在这种情况下,第一设备将第一点云数据确定为障碍物点云数据。
S9、在第二夹角小于第二夹角阈值时,第一设备将目标点云数据移除第一点云数据,以得到第二点云数据;
在一些实施例中，在第二夹角小于第二夹角阈值时，也即斜坡的坡度小于第二夹角阈值时，将该斜坡确定为可通过的斜坡，换句话说，拟合平面的点云数据不是障碍物点云数据。在这种情况下，移除目标点云数据中位于拟合平面上的第一点云数据，得到第二点云数据。
例如，设置第二夹角阈值为45度，则当斜坡的坡度小于45度时，斜坡对应的拟合平面与自移动设备的坐标系中的Z轴之间的夹角小于45度，由于自移动设备可以顺利通过坡度小于45度的斜坡，因此该斜坡不是障碍物。在这种情况下，移除目标点云数据中位于拟合平面上的第一点云数据，得到第二点云数据。
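上述S8与S9的坡度判定逻辑可示意如下（由拟合平面的单位法向量计算坡度角，45度阈值沿用上文示例；函数名与取值均为假设）：

```python
import numpy as np

def slope_angle_deg(plane_normal):
    """由拟合平面的法向量计算坡度角(平面相对于水平面的倾角, 单位: 度)。
    法向量与Z轴之间的夹角即等于平面相对于水平面的倾角。"""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    cos_t = abs(n[2])            # 与Z轴夹角的余弦, 取绝对值忽略法向量朝向
    return float(np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0))))

def is_obstacle_slope(plane_normal, angle_thresh_deg=45.0):
    """坡度大于或等于阈值时, 将拟合平面(斜坡)判定为障碍物; 45度仅为示例阈值。"""
    return slope_angle_deg(plane_normal) >= angle_thresh_deg
```

坡度小于阈值的斜坡对应的第一点云数据将从目标点云数据中移除，只有坡度达到阈值的斜坡才作为障碍物处理。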
S10、对第二点云数据进行聚类处理以得到待确认障碍物点云数据。
在得到第二点云数据之后,对第二点云数据进行聚类处理,以得到待确认障碍物点云数据,具体的聚类处理过程由图8中的S11到S14实现。
基于上述技术方案，由于机器人爬坡能力受最大斜坡角度限制，当前方对象的坡度超过最大斜坡角度时，机器人通常需要执行避障处理。因此，当通过平面拟合确定目标区域中存在斜坡，且斜坡的第二夹角大于或等于第二夹角阈值时，该斜坡的斜坡角度超过了最大斜坡角度，机器人无法通过该斜坡，此时将该斜坡识别为障碍物。反之，当斜坡的第二夹角小于第二夹角阈值时，机器人可以通过该斜坡，此时将该拟合平面的点云数据视为非障碍物点云数据。通过从目标点云数据中移除拟合平面的点云数据，可以得到更加准确的待确认障碍物点云数据，解决了自移动设备容易将可以通过的斜坡误识别为障碍物、导致自移动设备出现误判断、工作效率较低的技术问题，达到了只将机器人无法通过的斜坡准确识别为障碍物，从而提高自移动设备的工作效率的技术效果。
在本申请的一个可能的实施例中,在S5之后,本申请实施例提供的方法还包括图5中的S6:在未拟合得到平面时,第一设备将目标点云数据作为第二点云数据,并对第二点云数据进行聚类处理以得到待确认障碍物点云数据。
应理解，拟合过程中，只有当内点的数量大于预设的内点阈值时，才会拟合得到平面。在这种情况下，若目标区域中不包括斜坡，则目标点云数据为目标区域中除地面点云数据以外的一个或多个障碍物的点云数据。由于各障碍物的高度一般不同，且障碍物的分布无固定的规律，目标点云数据在自移动设备的坐标系下的分布较为分散，此时对目标点云数据进行平面拟合，由于拟合平面的内点数量无法达到预设的内点阈值，因此无法得到拟合平面。
例如，设置内点阈值为目标点云数据的数量的百分之八十，即，拟合过程中的内点小于目标点云数据的数量的百分之八十时，无法拟合得到平面，则目标区域中不包括斜坡。若目标区域如图7所示，目标区域中不包括斜坡，包括地面701，且目标区域中包括5个障碍物，分别为障碍物702、障碍物703、障碍物704、障碍物705、障碍物706。此时障碍物702对应的点云数据的数量为1030，障碍物703对应的点云数据的数量为890，障碍物704对应的点云数据的数量为2305，障碍物705对应的点云数据的数量为760，障碍物706对应的点云数据的数量为861，目标点云数据的数量共为5846。此时对目标点云数据使用RANSAC算法进行平面拟合，得到的内点的数量的最大值为目标点云数据的30.2%，无法拟合得到平面。因此，可以看到，若拟合平面的过程中的内点的数量小于内点阈值，则无法拟合得到平面，进一步的，可以确定目标区域不包括斜坡。
基于上述技术方案,在未拟合得到平面时,说明目标区域中不包括斜坡,因此不存在斜坡点云数据对聚类处理造成干扰的情况,在这种情况下,可以直接将目标点云数据作为第二点云数据,并对第二点云数据进行聚类处理,得到的待确认障碍物点云数据中包括所有障碍物点云数据,不会存在漏识别的现象,有效提高了识别障碍物的准确度。
在本申请的一个可能的实施例中,S10中的对第二点云数据进行聚类处理以得到待确认障碍物点云数据,具体可以通过图8中的S11到S14实现:
S11、第一设备对第二点云数据进行聚类处理,以得到不同的聚类群。
在一些实施例中,对第二点云数据进行聚类处理之后,会得到一个或多个不同的聚类群,其中,聚类群为根据预设的障碍物类别进行聚类得到的多个点云数据,不同的聚类群中包括的点云数据的点的数量不同。
具体的，在对第二点云数据进行聚类处理时，本申请对第二点云数据进行聚类处理的算法不做限定。例如，可以使用k均值聚类算法、高斯混合模型算法、期望最大化算法等算法对第二点云数据进行聚类处理。
S12、第一设备根据每个聚类群对应的点云数据的点的数量,从各聚类群中确定可疑障碍物聚类群。
应理解,若目标区域中存在障碍物,则障碍物的聚类群对应的点云数据的点的数量会超过一定的阈值,因此将聚类群中对应的点云数据的点的数量超过阈值的聚类群确定为可疑障碍物聚类群。
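S11与S12可以用如下Python代码示意（基于邻域半径的简单欧式聚类仅作举例，如前所述实际聚类算法不做限定；邻域半径与点数阈值均为假设参数）：

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.1):
    """S11的示意: 基于邻域半径的简单欧式聚类(连通分量)。
    返回每个点所属聚类群的标签数组。"""
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        queue = deque([i])
        while queue:
            j = queue.popleft()
            # 与当前点距离小于 radius 的未分配点, 归入同一聚类群
            dist = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((dist < radius) & (labels == -1))[0]:
                labels[k] = current
                queue.append(k)
        current += 1
    return labels

def suspicious_clusters(labels, count_thresh):
    """S12的示意: 点数超过阈值的聚类群确定为可疑障碍物聚类群。"""
    ids, counts = np.unique(labels, return_counts=True)
    return set(ids[counts > count_thresh].tolist())
```

点数未超过阈值的聚类群通常对应零散噪点，不会被确定为可疑障碍物聚类群。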
在确定出可疑障碍物聚类群之后，根据可疑障碍物聚类群距离最近的平面的高度，执行S13或S14：
S13、在可疑障碍物聚类群距离最近的平面的高度超过第三高度阈值时,第一设备将可疑障碍物聚类群对应的点云数据标定为障碍物的点云数据。
可选的,当可疑障碍物聚类群所在的平面为地面时,可疑障碍物聚类群距离最近的平面的高度为可疑障碍物聚类群中的点云数据的点到地面之间的距离,当可疑障碍物聚类群所在的平面为斜坡时,可疑障碍物聚类群距离最近的平面的高度为可疑障碍物聚类群中的点云数据的点到斜坡之间的距离。
例如，设置第三高度阈值为20厘米，则当可疑障碍物聚类群中的点云数据的点距离最近的平面的高度超过20厘米时，将该可疑障碍物聚类群对应的点云数据标定为障碍物的点云数据。
可选的,在确定可疑障碍物聚类群距离最近的平面的高度时,在目标区域中包括斜坡时,可以通过方式1进行确定:
方式1可以是:在目标区域中包括斜坡时,分别计算可疑障碍物聚类群距离地面的距离以及可疑障碍物聚类群距离斜坡平面的距离,将其中较小的距离值确定为可疑障碍物聚类群距离最近的平面的高度。
例如，可疑障碍物聚类群距离地面的距离为3米，可疑障碍物聚类群距离斜坡平面的距离为50厘米，则可疑障碍物聚类群距离最近的平面的高度为50厘米。
可选的,在确定可疑障碍物聚类群距离最近的平面的高度时,在目标区域中不包括斜坡时,可以通过方式2进行确定:
方式2可以是:在目标区域中不包括斜坡时,将可疑障碍物聚类群距离地面的距离确定为可疑障碍物聚类群距离最近的平面的高度。
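方式1与方式2可示意如下（平面以 a*x+b*y+c*z+d=0 表示且法向量为单位向量，此处以聚类中心点到平面的距离为例，函数名与数值均为假设）：

```python
import numpy as np

def point_to_plane_dist(points, plane):
    """点到平面 a*x+b*y+c*z+d=0 的最短距离, plane=(a,b,c,d), 法向量需为单位向量。"""
    a, b, c, d = plane
    return np.abs(points @ np.array([a, b, c]) + d)

def cluster_height(cluster_pts, ground_plane, slope_plane=None):
    """方式1/方式2的示意: 有斜坡时取到地面与到斜坡平面距离中的较小者;
    无斜坡时直接取到地面的距离。此处以聚类中心点的距离为例。"""
    center = cluster_pts.mean(axis=0)
    h_ground = point_to_plane_dist(center[None, :], ground_plane)[0]
    if slope_plane is None:
        return h_ground
    h_slope = point_to_plane_dist(center[None, :], slope_plane)[0]
    return min(h_ground, h_slope)
```

例如，位于斜坡上的石头距离地面可能较远，但距离斜坡平面很近，取较小值才能反映其相对于所处平面的真实高度。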
具体的,当自移动设备位于地面上时,在确定可疑障碍物聚类群距离地面的距离时,可以通过方式3至方式7中的任意一种方式确定:
方式3可以是：根据可疑障碍物聚类群中的多个点云数据在自移动设备的坐标系下的坐标，确定可疑障碍物聚类群的中心点的坐标，再根据该中心点的坐标，将该中心点到地面的最短距离确定为可疑障碍物聚类群距离地面的距离。
应理解，在确定可疑障碍物聚类群的中心点的坐标时，可以求取聚类群中所有点云数据的坐标的平均值，作为聚类群的中心点的坐标。
例如,可疑障碍物聚类群中包括10个点云数据,该10个点云数据在自移动设备的坐标下的坐标分别为(13,54,23)、(15,56,44)、(13,51,37)、(16,53,22)、(14,54,34)、(16,52,71)、(17,53,41)、(19,55,35)、(12,49,29)、(18,53,49),则中心点的坐标为(15.3,53,38.5),其中,自移动设备的坐标系下的一个单位对应的长度是1厘米,则可疑障碍物聚类群距离地面的距离为38.5厘米。
方式4可以是:根据可疑障碍物聚类群中的多个点云数据在自移动设备的坐标系下的坐标,确定可疑障碍物聚类群的点云数据中,高度最高的点云数据,再根据该点云数据对应的坐标,将该点云数据的点到地面的最短距离确定为可疑障碍物聚类群距离地面的距离。
例如，可疑障碍物聚类群中包括10个点云数据，该10个点云数据在自移动设备的坐标系下的坐标分别为(13,54,23)、(15,56,44)、(13,51,37)、(16,53,22)、(14,54,34)、(16,52,71)、(17,53,41)、(19,55,35)、(12,49,29)、(18,53,49)，则点云数据中，高度最高的点所属的点云数据对应的坐标为(16,52,71)，其中，自移动设备的坐标系下的一个单位对应的长度是1厘米，则可疑障碍物聚类群距离地面的距离为71厘米。
方式5可以是：根据可疑障碍物聚类群中的多个点云数据在自移动设备的坐标系下的高度，即点云数据在自移动设备的坐标系下的Z轴分量，确定多个分量的平均值，并将该平均值确定为可疑障碍物聚类群距离地面的距离。
例如,可疑障碍物聚类群中包括10个点云数据,该10个点云数据的点在自移动设备的坐标系下的高度,即在自移动设备的坐标系下的Z轴分量分别是:23、44、37、22、34、71、41、35、29、49,则多个分量的平均值为38.5,其中,自移动设备的坐标系下的一个单位对应的长度是1厘米,则可疑障碍物聚类群距离地面的距离为38.5厘米。
方式6可以是：根据可疑障碍物聚类群中的多个点云数据在自移动设备的坐标系下的高度，确定多个分量中的最大值，并将该最大值确定为可疑障碍物聚类群距离地面的距离。
例如，可疑障碍物聚类群中包括10个点云数据，该10个点云数据的点在自移动设备的坐标系下的高度，即在自移动设备的坐标系下的Z轴分量分别是：23、44、37、22、34、71、41、35、29、49，则多个分量的最大值为71，其中，自移动设备的坐标系下的一个单位对应的长度是1厘米，则可疑障碍物聚类群距离地面的距离为71厘米。
方式7可以是:根据可疑障碍物聚类群中的多个点云数据在自移动设备的坐标系下的高度,即点云数据在自移动设备的坐标系下的Z轴的分量,确定多个分量的中位数的值,并将该中位数的值确定为可疑障碍物聚类群距离地面的距离。
例如,可疑障碍物聚类群中包括10个点云数据,该10个点云数据在自移动设备的坐标系下的Z轴分量分别是:23、44、37、22、34、71、41、35、29、49,则中位数为36,其中,自移动设备的坐标系下的一个单位对应的长度是1厘米,则可疑障碍物聚类群距离地面的距离为36厘米。
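方式3至方式7中几种高度取值可以统一示意如下（沿用上文10个点的示例坐标，单位为厘米）：

```python
import numpy as np

# 上文示例中可疑障碍物聚类群的10个点在自移动设备坐标系下的坐标
pts = np.array([(13, 54, 23), (15, 56, 44), (13, 51, 37), (16, 53, 22), (14, 54, 34),
                (16, 52, 71), (17, 53, 41), (19, 55, 35), (12, 49, 29), (18, 53, 49)],
               dtype=float)

z = pts[:, 2]                            # 各点在Z轴上的分量(高度)

height_centroid = pts.mean(axis=0)[2]    # 方式3/方式5: 中心点高度 / Z分量平均值 → 38.5
height_max = z.max()                     # 方式4/方式6: 最高点高度 / Z分量最大值 → 71.0
height_median = float(np.median(z))      # 方式7: Z分量的中位数 → 36.0
```

三种取值分别对应示例中的38.5厘米、71厘米和36厘米。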
S14、在可疑障碍物聚类群距离最近的平面的高度小于第三高度阈值时,第一设备将可疑障碍物聚类群对应的点云数据确定为待确认障碍物点云数据。
基于上述技术方案，在确定出可疑障碍物聚类群之后，根据可疑障碍物聚类群距离最近的平面的高度，将高度大于或等于第三高度阈值的可疑障碍物聚类群对应的点云数据直接确认为障碍物点云数据，仅将高度小于第三高度阈值的可疑障碍物聚类群对应的点云数据确定为待确认障碍物点云数据，并结合颜色信息对待确认障碍物点云数据做进一步的分析与识别，可以减小识别障碍物点云数据时需要的计算量；同时，通过将高度大于或等于第三高度阈值的可疑障碍物直接确认为障碍物点云数据的技术手段，提高了识别障碍物的准确度。
应理解,确定可疑障碍物聚类群距离最近的平面的高度可以参考上述方法进行确定,为了简洁,这里不再赘述。
在本申请的一个可能的实施例中,S103具体可以通过以下方式实现:
可选的,自移动设备上搭载有相机,用于获取待确认障碍物点云数据对应区域所处平面的图像信息,并将自移动设备上搭载的相机拍摄得到的图像信息发送至第一设备,使得第一设备获取待确认障碍物点云数据对应区域所处平面的图像信息。
示例性的，第一设备在获取到图像信息之后，可以对图像信息进行颜色提取，得到多个颜色信息，并从多个颜色信息中筛选出主颜色信息作为第一颜色信息。
可选的,图像信息可以是RGB图像。
示例性的，在对图像信息进行颜色提取得到多个颜色信息，并从多个颜色信息中筛选出主颜色信息作为第一颜色信息时，可以通过以下步骤确定第一颜色信息：
例如，使用k均值聚类算法对目标区域的图像信息进行第一颜色的提取，提取前设置好提取的终止条件、最大的迭代次数以及聚类中心的选取方式。使用k均值聚类对图像信息处理之后，得到多个聚类群，其中多个聚类群分别对应多个颜色信息。在获得多个颜色信息之后，可以将多个聚类群中最大的聚类群的聚类中心所对应的颜色筛选为主颜色信息，并将主颜色信息确定为第一颜色信息。
例如，终止条件可以是在下一次聚类时重新分配给不同聚类的点的数量低于3个，或，在下一次聚类时发生变化的聚类中心少于2个。
应理解,此处仅以k均值聚类确定第一颜色信息为例进行说明,在实际应用中,对提取图像中的颜色使用到的算法不作限定,可以是任意一种可以用来提取图像中的颜色的算法,例如深度学习算法、中位切分法等。
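作为示意，下面给出一个用简化k均值提取主颜色信息的Python片段（聚类数k、迭代次数与初始化方式均为假设参数，并非本申请限定的实现）：

```python
import numpy as np

def dominant_color(image_rgb, k=3, iters=10, seed=0):
    """用简化的k均值从RGB图像中提取主颜色(最大聚类群的中心),
    作为第一颜色信息的一种示意实现。"""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    # 随机选取k个像素作为初始聚类中心
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # 将每个像素分配到最近的聚类中心
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    # 最大聚类群的中心即为主颜色
    counts = np.bincount(labels, minlength=k)
    return centers[counts.argmax()]
```

对于以某一种颜色为主、夹杂少量干扰颜色的区域图像，该方法返回的主颜色会接近占比最大的颜色。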
可选的,第一颜色信息可以是该区域包括的颜色的色号或颜色的变化规律。
基于上述技术方案,由于待确认障碍物点云数据对应区域所处平面的图像信息中可能包含了多种颜色,即存在干扰的颜色信息,因此通过从多个颜色信息中筛选出主颜色信息作为第一颜色信息可以有效减少干扰,从而提高对障碍物的识别准确度。
在本申请的一个可能的实施例中,S104具体可以通过以下步骤实现:
在获取到待确认障碍物点云数据之后，根据待确认障碍物对应的聚类群中的点云数据，在对应的RGB图像中找到该聚类群对应的区域，将该区域的颜色信息确定为第二颜色信息。
可选的,第二颜色信息可以是该区域包括的颜色的色号或颜色的变化规律。
在本申请的一个可能的实施例中,S105具体可以通过以下步骤实现:
计算第二颜色信息与第一颜色信息的色差值,当色差值大于预设的色差阈值时,将待确认障碍物点云数据作为障碍物点云数据。
示例性的，在计算第二颜色信息与第一颜色信息的色差值时，可以将图像转换为HSV颜色空间(Hue Saturation Value, HSV)或Lab颜色空间(Lab Color Space, Lab)的模式，从而计算第二颜色信息与第一颜色信息的色差值。
应理解，第一颜色信息表示待确认障碍物点云数据所处平面的颜色信息，第二颜色信息表示待确认障碍物点云数据所处区域的颜色信息。由于障碍物的点云数据的颜色信息很大概率与障碍物所处平面的颜色信息存在一定的色差，因此可以计算第二颜色信息与第一颜色信息之间的色差值，当色差值大于预设的色差阈值时，将待确认障碍物点云数据作为障碍物点云数据。
例如，自移动设备在草坪上进行除草作业，一个灰色的石头位于草坪上，则可以得到灰色的石头对应的点云数据为待确认障碍物点云数据，石头的点云数据所处的平面为草坪，所处的区域为石头表面。在这种情况下，第一颜色信息为绿色，第二颜色信息为灰色，由于灰色与绿色之间的色差超过预设的色差阈值，则可以将灰色的石头的点云数据作为障碍物点云数据，从而准确地识别出障碍物。
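上述草坪与灰色石头的例子可以用如下代码示意（RGB到Lab的转换采用常见的sRGB/D65公式，色差采用CIE76欧氏距离；色差阈值20与RGB取值均为假设的示例）：

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB(0-255) 转 CIELAB(D65白点), 色差计算的一种常见预处理。"""
    c = np.asarray(rgb, float) / 255.0
    # sRGB 去伽马, 得到线性RGB
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ c / np.array([0.95047, 1.0, 1.08883])   # 归一化到D65白点
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    return np.array([116.0 * f[1] - 16.0,
                     500.0 * (f[0] - f[1]),
                     200.0 * (f[1] - f[2])])

def is_obstacle_by_color(rgb_plane, rgb_region, delta_thresh=20.0):
    """S105的示意: 当两颜色的CIE76色差大于阈值时, 将待确认障碍物判定为障碍物。"""
    delta_e = np.linalg.norm(rgb_to_lab(rgb_plane) - rgb_to_lab(rgb_region))
    return delta_e > delta_thresh
```

绿色草坪与灰色石头的色差远超阈值，会被判定为障碍物；而两种非常接近的绿色之间的色差低于阈值，不会被判定为障碍物。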
通过计算第二颜色信息与第一颜色信息之间的色差值,并当色差值大于预设的色差阈值时,将待确认障碍物点云数据作为障碍物点云数据的技术手段,可以准确的识别出低矮的障碍物,且当可通过的斜坡的颜色与地面颜色相同时,避免将斜坡识别为障碍物,有效提高了识别障碍物的准确性。
在一些实施例中，第一颜色信息也可以是待确认障碍物点云数据所处平面的颜色的变化规律，第二颜色信息可以是待确认障碍物点云数据所处区域的颜色变化规律。在这种情况下，若二者之间的相似度高于某一阈值，则认为第二颜色信息与第一颜色信息之间的色差低于色差阈值，此时可以认为待确认障碍物不是障碍物。
在本申请的一些实施例中,在确定出可疑障碍物聚类群之后,可以先根据可疑障碍物聚类群距离最近的平面的高度,确定出待确认障碍物点云数据,再根据待确认点云数据所处平面的第一颜色信息与待确认点云数据所处区域的第二颜色信息,将第二颜色信息与第一颜色信息不匹配的待确认障碍物点云数据确认为障碍物点云数据。
在本申请的另一些实施例中，在确定出可疑障碍物聚类群之后，也可以先根据可疑障碍物聚类群所处平面的颜色信息与可疑障碍物聚类群所处区域的颜色信息，将二者不匹配的可疑障碍物聚类群确定为待确认障碍物点云数据，再根据待确认障碍物点云数据距离最近的平面的高度，将高度大于预设的第三高度阈值的待确认障碍物点云数据确认为障碍物点云数据。
图9为本申请实施例提供的识别障碍物装置900的示意性框图,包括获取单元901、处理单元902、确认单元903。
其中获取单元901,用于获取目标区域中的目标点云数据,所述目标点云数据不包括地面点云数据。
处理单元902,用于对所述目标点云数据进行拟合聚类处理,以得到待确认障碍物点云数据。
获取单元901,还用于获取所述待确认障碍物点云数据所处平面的第一颜色信息。
获取单元901,还用于获取所述待确认障碍物点云数据所处区域的第二颜色信息。
确认单元903,用于当所述第二颜色信息与所述第一颜色信息不匹配时,确认所述待确认障碍物点云数据为障碍物点云数据。
可选的,获取单元901,还用于获取原始点云数据,所述原始点云数据为基于自移动设备的坐标系下的点云数据。
可选的，获取单元901，还用于获取原始点云数据中的各点对应的法向量，以及所述法向量与所述坐标系上的预设坐标轴之间的第一夹角。
可选的，确认单元903，还用于当所述原始点云数据中的点的第一夹角小于第一夹角阈值，并且所述点的高度小于预设的高度阈值时，将对应的点所属的点云数据作为地面点云数据。
可选的,处理单元902,还用于从所述原始点云数据中移除所述地面点云数据,以得到目标点云数据。
可选的，处理单元902，还用于当所述目标点云数据经过拟合得到拟合平面时，根据拟合平面确定位于拟合平面上的第一点云数据以及所述拟合平面相对于所述自移动设备的坐标系上的预设坐标轴的第二夹角。
可选的,确认单元903,还用于在所述第二夹角大于或等于第二夹角阈值时,将所述第一点云数据确定为障碍物的点云数据。
可选的,处理单元902,还用于在所述第二夹角小于第二夹角阈值时,将所述目标点云数据移除所述第一点云数据,以得到第二点云数据。
可选的，处理单元902，还用于对所述第二点云数据进行聚类处理以得到待确认障碍物点云数据。
可选的，确认单元903，还用于在未拟合得到平面时，将所述目标点云数据作为所述第二点云数据。
可选的，确认单元903，还用于对所述第二点云数据进行聚类处理，以得到不同的聚类群，其中，所述聚类群为根据预设的障碍物类别进行聚类得到的多个点云数据。
可选的，确认单元903，还用于根据每个所述聚类群对应的点云数据的点的数量，从各聚类群中确定可疑障碍物聚类群。
可选的,确认单元903,还用于在所述可疑障碍物聚类群距离最近的平面的高度超过第三高度阈值时,将所述可疑障碍物聚类群对应的点云数据标定为障碍物的点云数据。
可选的,确认单元903,还用于在所述可疑障碍物聚类群距离最近的平面的高度小于所述第三高度阈值时,将所述可疑障碍物聚类群对应的点云数据确定为待确认障碍物点云数据。
可选的，所述装置还包括计算单元，用于计算所述第二颜色信息与第一颜色信息的色差值。
可选的,确认单元903,还用于当所述色差值大于预设的色差阈值时,将所述待确认障碍物点云数据作为障碍物点云数据。
可选的,获取单元901,还用于获取待确认障碍物点云数据对应区域所处平面的图像信息。
可选的,处理单元902,还用于对所述图像信息进行颜色提取,以得到多个颜色信息。
可选的,确认单元903,还用于从所述多个颜色信息中筛选出主颜色信息作为第一颜色信息。
可选的,获取单元901,还用于获取自移动设备的激光雷达点云数据。
可选的，处理单元902，还用于将所述激光雷达点云数据转换至所述自移动设备的坐标系下，以获取所述原始点云数据。
应理解的是，本申请实施例的装置900可以通过专用集成电路(application-specific integrated circuit, ASIC)实现，或可编程逻辑器件(programmable logic device, PLD)实现，上述PLD可以是复杂可编程逻辑器件(complex programmable logic device, CPLD)、现场可编程门阵列(field-programmable gate array, FPGA)、通用阵列逻辑(generic array logic, GAL)或其任意组合。也可以通过软件实现图1所示的识别障碍物的方法，当通过软件实现图1所示的识别障碍物的方法时，装置900及其各个模块也可以为软件模块。
图10为本申请实施例提供的一种设备的结构示意图。如图10所示,其中设备1000包括处理器1001、存储器1002、通信接口1003和总线1004。其中,处理器1001、存储器1002、通信接口1003通过总线1004进行通信,也可以通过无线传输等其他手段实现通信。该存储器1002用于存储指令,该处理器1001用于执行该存储器1002存储的指令。该存储器1002存储程序代码10021,且处理器1001可以调用存储器1002中存储的程序代码10021执行图1所示的识别障碍物的方法。
应理解,在本申请实施例中,处理器1001可以是CPU,处理器1001还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者是任何常规的处理器等。
该存储器1002可以包括只读存储器和随机存取存储器，并向处理器1001提供指令和数据。存储器1002还可以包括非易失性随机存取存储器。该存储器1002可以是易失性存储器或非易失性存储器，或可包括易失性和非易失性存储器两者。其中，非易失性存储器可以是只读存储器(read-only memory, ROM)、可编程只读存储器(programmable ROM, PROM)、可擦除可编程只读存储器(erasable PROM, EPROM)、电可擦除可编程只读存储器(electrically EPROM, EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory, RAM)，其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的RAM可用，例如静态随机存取存储器(static RAM, SRAM)、动态随机存取存储器(dynamic RAM, DRAM)、同步动态随机存取存储器(synchronous DRAM, SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM, DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM, ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM, SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM, DRRAM)。
该总线1004除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图10中将各种总线都标为总线1004。
应理解,根据本申请实施例的设备1000可对应于本申请实施例中的装置900,并可以对应于本申请实施例图1所示方法中的第一设备,当设备1000对应于图1所示方法中的第一设备时,设备1000中的各个模块的上述和其它操作和/或功能分别为了实现图1中的由第一设备执行的方法的操作步骤,为了简洁,在此不再赘述。
本申请实施例还提供了一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时可实现上述各个方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品，当计算机程序产品在自移动设备上运行时，使得自移动设备执行时可实现上述各个方法实施例中的步骤。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。

Claims (10)

  1. 一种障碍物的识别方法,包括:
    获取目标区域中的目标点云数据,所述目标点云数据不包括地面点云数据;
    对所述目标点云数据进行拟合聚类处理,以得到待确认障碍物点云数据;
    获取所述待确认障碍物点云数据所处平面的第一颜色信息;
    获取所述待确认障碍物点云数据所处区域的第二颜色信息;
    当所述第二颜色信息与所述第一颜色信息不匹配时,确认所述待确认障碍物点云数据为障碍物点云数据。
  2. 如权利要求1所述的方法,其中,所述获取目标区域中的目标点云数据包括:
    获取原始点云数据,所述原始点云数据为基于自移动设备的坐标系下的点云数据;
    获取原始点云数据中的各点对应的法向量,以及所述法向量与所述坐标系上的预设坐标轴之间的第一夹角;
    当所述原始点云数据中的点的第一夹角小于第一夹角阈值,并且所述点的高度小于预设的高度阈值时,将对应的点所属的点云数据作为地面点云数据;
    从所述原始点云数据中移除所述地面点云数据,以得到目标点云数据。
  3. 如权利要求2所述的方法,其中,所述对所述目标点云数据进行拟合聚类处理,以得到待确认障碍物点云数据包括:
    对所述目标点云数据进行平面拟合;
    当所述目标点云数据经过拟合得到拟合平面时，根据拟合平面确定位于拟合平面上的第一点云数据以及所述拟合平面相对于所述自移动设备的坐标系上的预设坐标轴的第二夹角；
    在所述第二夹角大于或等于第二夹角阈值时,将所述第一点云数据确定为障碍物的点云数据;
    在所述第二夹角小于第二夹角阈值时,将所述目标点云数据移除所述第一点云数据,以得到第二点云数据;
    对所述第二点云数据进行聚类处理以得到待确认障碍物点云数据。
  4. 根据权利要求3所述的方法,其中,在对所述目标点云数据进行平面拟合后,所述方法还包括:
    在未拟合得到平面时,将所述目标点云数据作为所述第二点云数据;
    对所述第二点云数据进行聚类处理以得到待确认障碍物点云数据。
  5. 如权利要求3或者4所述的方法，其中，所述对所述第二点云数据进行聚类处理以得到待确认障碍物点云数据包括：
    对所述第二点云数据进行聚类处理,以得到不同的聚类群,其中,所述聚类群为根据预设的障碍物类别进行聚类得到的多个点云数据;
    根据每个所述聚类群对应的点云数据的点的数量,从各聚类群中确定可疑障碍物聚类群;
    在所述可疑障碍物聚类群距离最近的平面的高度超过第三高度阈值时,将所述可疑障碍物聚类群对应的点云数据标定为障碍物的点云数据;
    在所述可疑障碍物聚类群距离最近的平面的高度小于所述第三高度阈值时,将所述可疑障碍物聚类群对应的点云数据确定为待确认障碍物点云数据。
  6. 如权利要求1所述的方法,其中,所述当所述第二颜色信息与所述第一颜色信息不匹配时,确认所述待确认障碍物点云数据为障碍物点云数据包括:
    计算所述第二颜色信息与第一颜色信息的色差值;
    当所述色差值大于预设的色差阈值时,将所述待确认障碍物点云数据作为障碍物点云数据。
  7. 如权利要求1所述的方法,其中,所述获取所述待确认障碍物点云数据所处平面的第一颜色信息包括:
    获取待确认障碍物点云数据对应区域所处平面的图像信息;
    对所述图像信息进行颜色提取,以得到多个颜色信息;
    从所述多个颜色信息中筛选出主颜色信息作为第一颜色信息。
  8. 如权利要求2所述的方法,其中,所述获取原始点云数据包括:
    获取自移动设备的激光雷达点云数据;
    将所述激光雷达点云数据转换至所述自移动设备的坐标系下，以获取所述原始点云数据。
  9. 一种设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如权利要求1至8任一项所述障碍物的识别方法。
  10. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至8任一项所述障碍物的识别方法。
PCT/CN2023/081202 2022-03-21 2023-03-14 障碍物的识别方法、设备及存储介质 WO2023179405A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210278336.9 2022-03-21
CN202210278336.9A CN114723830B (zh) 2022-03-21 2022-03-21 障碍物的识别方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023179405A1 true WO2023179405A1 (zh) 2023-09-28

Family

ID=82237954

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081202 WO2023179405A1 (zh) 2022-03-21 2023-03-14 障碍物的识别方法、设备及存储介质

Country Status (2)

Country Link
CN (1) CN114723830B (zh)
WO (1) WO2023179405A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152719A (zh) * 2023-11-01 2023-12-01 锐驰激光(深圳)有限公司 除草障碍物检测方法、设备、存储介质及装置
CN117351526A (zh) * 2023-12-05 2024-01-05 深圳纯和医药有限公司 一种血管内超声图像的血管内膜自动识别方法
CN117408913A (zh) * 2023-12-11 2024-01-16 浙江托普云农科技股份有限公司 待测物体点云去噪方法、系统及装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114723830B (zh) * 2022-03-21 2023-04-18 深圳市正浩创新科技股份有限公司 障碍物的识别方法、设备及存储介质
CN115861426B (zh) * 2023-01-13 2023-06-13 江苏金恒信息科技股份有限公司 物料取样方法、装置、计算机设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (zh) * 2014-04-14 2014-07-30 桂林电子科技大学 基于三维点云分割的双目视觉障碍物检测方法
CN108152831A (zh) * 2017-12-06 2018-06-12 中国农业大学 一种激光雷达障碍物识别方法及系统
CN109872324A (zh) * 2019-03-20 2019-06-11 苏州博众机器人有限公司 地面障碍物检测方法、装置、设备和存储介质
US20190303692A1 (en) * 2016-12-19 2019-10-03 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Obstacle detection method and apparatus
CN114723830A (zh) * 2022-03-21 2022-07-08 深圳市正浩创新科技股份有限公司 障碍物的识别方法、设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110928301B (zh) * 2019-11-19 2023-06-30 北京小米智能科技有限公司 一种检测微小障碍的方法、装置及介质
CN112585553A (zh) * 2020-02-26 2021-03-30 深圳市大疆创新科技有限公司 用于可移动平台的控制方法、可移动平台、设备和存储介质
CN113536883B (zh) * 2021-03-23 2023-05-02 长沙智能驾驶研究院有限公司 障碍物检测方法、车辆、设备及计算机存储介质
CN113920134B (zh) * 2021-09-27 2022-06-07 山东大学 一种基于多线激光雷达的斜坡地面点云分割方法及系统


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152719A (zh) * 2023-11-01 2023-12-01 锐驰激光(深圳)有限公司 除草障碍物检测方法、设备、存储介质及装置
CN117152719B (zh) * 2023-11-01 2024-03-26 锐驰激光(深圳)有限公司 除草障碍物检测方法、设备、存储介质及装置
CN117351526A (zh) * 2023-12-05 2024-01-05 深圳纯和医药有限公司 一种血管内超声图像的血管内膜自动识别方法
CN117351526B (zh) * 2023-12-05 2024-03-22 深圳纯和医药有限公司 一种血管内超声图像的血管内膜自动识别方法
CN117408913A (zh) * 2023-12-11 2024-01-16 浙江托普云农科技股份有限公司 待测物体点云去噪方法、系统及装置
CN117408913B (zh) * 2023-12-11 2024-02-23 浙江托普云农科技股份有限公司 待测物体点云去噪方法、系统及装置

Also Published As

Publication number Publication date
CN114723830B (zh) 2023-04-18
CN114723830A (zh) 2022-07-08

Similar Documents

Publication Publication Date Title
WO2023179405A1 (zh) 障碍物的识别方法、设备及存储介质
WO2020024234A1 (zh) 路径导航方法、相关装置及计算机可读存储介质
US10997438B2 (en) Obstacle detection method and apparatus
WO2022188663A1 (zh) 一种目标检测方法及装置
CN111598916A (zh) 一种基于rgb-d信息的室内占据栅格地图的制备方法
CN103646249A (zh) 一种温室智能移动机器人视觉导航路径识别方法
DE102020206387B4 (de) Verfahren und computersystem zur verarbeitung von kandidatenkanten
US20240071094A1 (en) Obstacle recongnition method applied to automatic traveling device and automatic traveling device
CN110908374A (zh) 一种基于ros平台的山地果园避障系统及方法
CN114428515A (zh) 一种无人机避障方法、装置、无人机及存储介质
CN112308928A (zh) 一种无标定装置的相机与激光雷达自动标定方法
JP7153264B2 (ja) 画像解析システム、画像解析方法及び画像解析プログラム
CN112902981B (zh) 机器人导航方法和装置
CN112578405B (zh) 一种基于激光雷达点云数据去除地面的方法及系统
CN113222914A (zh) 一种基于配准点云的树障隐患快速检测方法
CN112298564A (zh) 基于图像识别的变量施药控制方法及装置
US20230367319A1 (en) Intelligent obstacle avoidance method and apparatus based on binocular vision, and non-transitory computer-readable storage medium
CN116486130A (zh) 障碍物识别的方法、装置、自移动设备及存储介质
US20240019870A1 (en) Image-Based Working Area Identification Method and System, and Robot
CN113435287A (zh) 草地障碍物识别方法、装置、割草机器人及可读存储介质
CN111340833A (zh) 最小二乘去干扰随机Hough变换的输电线提取方法
Yang et al. Extraction of straight field roads between farmlands based on agricultural vehicle-mounted LiDAR
CN112286230A (zh) 无人机视觉图像算法、避障步骤及其信息融合处理系统
CN116580299A (zh) 障碍物识别方法、装置及自移动设备
WO2023231022A1 (zh) 图像识别方法、自移动设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23773653

Country of ref document: EP

Kind code of ref document: A1