CN112560548B - Method and device for outputting information - Google Patents

Method and device for outputting information

Info

Publication number
CN112560548B
Authority
CN
China
Prior art keywords
ground
point cloud
determining
threshold
plane
Prior art date
Legal status
Active
Application number
CN201910907185.7A
Other languages
Chinese (zh)
Other versions
CN112560548A (en)
Inventor
刘祥
张双
高斌
朱晓星
薛晶晶
杨凡
王俊平
王成法
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to application CN201910907185.7A
Publication of CN112560548A
Application granted
Publication of CN112560548B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The embodiment of the application discloses a method and a device for outputting information. One embodiment of the method comprises: determining a predicted ground according to point cloud data collected by a vehicle during driving; determining a plurality of ground thresholds according to a preset ground threshold value range; for each ground threshold, determining a ground point cloud according to the predicted ground and the ground threshold, performing obstacle recognition on the point clouds other than the ground point cloud in the point cloud data, and determining the number of obstacles; and determining and outputting a target ground threshold according to the obtained numbers. The method and the device can determine the ground threshold used in point cloud data processing according to the number of obstacles, so that manual adjustment is not needed and automatic adjustment of the ground threshold is achieved.

Description

Method and device for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
At present, laser radar (lidar) ranging has been widely applied in fields such as automatic driving and assisted driving because of its excellent characteristics and strong adaptability to the external environment. In application scenarios that process data collected by a lidar, many parameters often need to be adjusted, and relying on manual adjustment of these parameters is often time-consuming and laborious.
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, including: determining a predicted ground according to point cloud data collected by a vehicle during driving; determining a plurality of ground thresholds according to a preset ground threshold value range; for each ground threshold, determining a ground point cloud according to the predicted ground and the ground threshold, performing obstacle recognition on the point clouds other than the ground point cloud in the point cloud data, and determining the number of obstacles; and determining and outputting a target ground threshold according to the obtained numbers.
In some embodiments, the determining the plurality of ground thresholds according to the preset ground threshold value range includes: and selecting a plurality of points in the ground threshold value range at preset distance intervals to serve as a plurality of ground thresholds.
In some embodiments, the identifying the obstacle to the point clouds other than the ground point clouds in the point cloud data, and determining the number of obstacles include: performing obstacle recognition on point clouds except the ground point clouds in the point cloud data, and determining the size of the obstacle; and determining the number of the obstacles with the height smaller than a preset height threshold according to the sizes of the obstacles.
In some embodiments, determining and outputting the target ground threshold according to the obtained plurality of numbers includes: determining a curve of the quantity-ground threshold according to the quantity and the ground threshold corresponding to each quantity; and determining the slope of the curve at each ground threshold, and determining the target ground threshold according to each slope.
In some embodiments, determining the target ground threshold according to the slopes includes: determining the maximum value of the absolute values of the slopes; and taking the ground threshold corresponding to the maximum value as a target ground threshold.
In some embodiments, determining the predicted ground according to the point cloud data collected by the vehicle during driving includes: determining estimated ground point clouds in the point cloud data; dividing the first three-dimensional space in which the estimated ground point cloud is positioned into a plurality of second three-dimensional spaces; performing ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes; the predicted ground is generated based on the plurality of ground sub-planes.
In some embodiments, determining the estimated ground point cloud from the point cloud data includes: and taking the point cloud points in the preset height range from the estimated ground in the point cloud data as the estimated ground point cloud.
In some embodiments, the dividing the first stereo space in which the estimated ground point cloud is located into a plurality of second stereo spaces includes: dividing the estimated ground into a plurality of grids; and dividing the first three-dimensional space based on a plurality of grids to obtain a plurality of second three-dimensional spaces.
In some embodiments, the performing ground estimation on the estimated ground point clouds in the second plurality of stereo spaces to obtain a plurality of ground sub-planes includes: fitting estimated ground point cloud points in a plurality of second three-dimensional spaces to obtain a plurality of first planes; for each first plane, the following fitting step is performed: selecting estimated ground point cloud points with the distance smaller than a first distance threshold value from a second three-dimensional space corresponding to the first plane as candidate point cloud points; fitting a second plane by using the candidate point cloud points; determining whether said second plane is stable; and if the second plane is stable, taking the second plane as a ground sub-plane.
In some embodiments, the performing ground estimation on the estimated ground point cloud points in the second stereo spaces to obtain a plurality of ground sub-planes further includes: in response to determining that the second plane is unstable, replacing the first plane with the second plane and continuing the fitting step.
In some embodiments, the fitting the plurality of first planes based on the estimated ground point cloud points in the plurality of second stereo spaces includes: sampling the estimated ground point cloud points in each second three-dimensional space to obtain sampling point cloud points; and fitting the first plane by using the sampling point cloud points.
In some embodiments, the sampling the estimated ground point cloud point in the second stereo space includes: dividing the second stereoscopic space into a plurality of third stereoscopic spaces; and sampling the estimated ground point cloud points in each third three-dimensional space.
In some embodiments, determining whether the second plane is stable includes: determining whether the sum of the distances from the estimated ground point cloud point in the second three-dimensional space to the second plane is smaller than a second distance threshold value or not in response to the fact that the execution times of the fitting step are smaller than a preset time threshold value; if the sum of the distances is less than the second distance threshold, determining that the second plane is stable; and if the sum of the distances is not smaller than the second distance threshold value, determining that the second plane is unstable.
In some embodiments, the above method further comprises: and determining that no estimated ground point cloud point exists in the second three-dimensional space in response to the fact that the execution times of the fitting step are not smaller than the time threshold.
In some embodiments, the above method further comprises: and responding to the fact that the execution times of the fitting step are not smaller than a preset time threshold, and the angle between the second plane and the ground is larger than an angle threshold, and determining that no estimated ground point cloud point exists in the second three-dimensional space.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, including: the prediction ground determining unit is configured to determine prediction ground according to point cloud data acquired by the vehicle in the running process; the ground threshold determining unit is configured to determine a plurality of ground thresholds according to a preset ground threshold value range; an obstacle number determination unit configured to determine, for each ground threshold, a ground point cloud based on the predicted ground and the ground threshold, and to identify, from the point cloud data, a number of obstacles by performing obstacle identification on point clouds other than the ground point cloud; and a target ground threshold value determining unit configured to determine and output a target ground threshold value based on the obtained plurality of numbers.
In some embodiments, the above ground threshold determination unit is further configured to: and selecting a plurality of points in the ground threshold value range at preset distance intervals to serve as a plurality of ground thresholds.
In some embodiments, the above obstacle number determining unit is further configured to: performing obstacle recognition on point clouds except the ground point clouds in the point cloud data, and determining the size of the obstacle; and determining the number of the obstacles with the height smaller than a preset height threshold according to the sizes of the obstacles.
In some embodiments, the target ground threshold determination unit is further configured to: determining a curve of the quantity-ground threshold according to the quantity and the ground threshold corresponding to each quantity; and determining the slope of the curve at each ground threshold, and determining the target ground threshold according to each slope.
In some embodiments, the target ground threshold determination unit is further configured to: determining the maximum value of the absolute values of the slopes; and taking the ground threshold corresponding to the maximum value as a target ground threshold.
In some embodiments, the above-described predicted ground determination unit includes: the point cloud determining module is configured to determine an estimated ground point cloud from the point cloud data; the space dividing module is configured to divide the first three-dimensional space where the estimated ground point cloud is located into a plurality of second three-dimensional spaces; the ground estimation module is configured to perform ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes; a ground generation module configured to generate the predicted ground based on the plurality of ground sub-planes.
In some embodiments, the above-described point cloud determination module is further configured to: and taking the point cloud points in the preset height range from the estimated ground in the point cloud data as the estimated ground point cloud.
In some embodiments, the above spatial partitioning module is further configured to: dividing the estimated ground into a plurality of grids; and dividing the first three-dimensional space based on a plurality of grids to obtain a plurality of second three-dimensional spaces.
In some embodiments, the above ground estimation module is further configured to: fitting estimated ground point cloud points in a plurality of second three-dimensional spaces to obtain a plurality of first planes; for each first plane, the following fitting step is performed: selecting estimated ground point cloud points with the distance smaller than a first distance threshold value from a second three-dimensional space corresponding to the first plane as candidate point cloud points; fitting a second plane by using the candidate point cloud points; determining whether said second plane is stable; and if the second plane is stable, taking the second plane as a ground sub-plane.
In some embodiments, the above ground estimation module is further configured to: and in response to determining that the second plane is unstable, replacing the first plane with the second plane, and continuing the fitting step.
In some embodiments, the above ground estimation module is further configured to: sampling the estimated ground point cloud points in each second three-dimensional space to obtain sampling point cloud points; and fitting the first plane by using the sampling point cloud points.
In some embodiments, the above ground estimation module is further configured to: dividing the second stereoscopic space into a plurality of third stereoscopic spaces; and sampling the estimated ground point cloud points in each third three-dimensional space.
In some embodiments, the above ground estimation module is further configured to: determining whether the sum of the distances from the estimated ground point cloud point in the second three-dimensional space to the second plane is smaller than a second distance threshold value or not in response to the fact that the execution times of the fitting step are smaller than a preset time threshold value; if the sum of the distances is less than the second distance threshold, determining that the second plane is stable; and if the sum of the distances is not smaller than the second distance threshold value, determining that the second plane is unstable.
In some embodiments, the above-described predicted ground determination unit further comprises: and the first determining module is configured to determine that no estimated ground point cloud point exists in the second stereo space in response to the fact that the execution times of the fitting step are not smaller than the time threshold.
In some embodiments, the above-described predicted ground determination unit further comprises: and the second determining module is configured to determine that no estimated ground point cloud point exists in the second three-dimensional space in response to the fact that the execution times of the fitting step are not smaller than a preset time threshold and the angle between the second plane and the ground is larger than an angle threshold.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the embodiments of the first aspect.
The method and the device for outputting information provided by the embodiment of the application can determine the predicted ground according to the point cloud data acquired by the vehicle in the driving process. And determining a plurality of ground thresholds according to a preset ground threshold value range. Then, for each ground threshold, a ground point cloud is determined from the predicted ground and the ground threshold, and obstacle recognition is performed on point clouds other than the ground point cloud in the point cloud data, thereby determining the number of obstacles. Finally, determining and outputting a target ground threshold according to the obtained plurality of numbers. According to the method, the ground threshold value in the point cloud data processing process can be determined according to the number of the obstacles, so that manual adjustment is not needed, and automatic adjustment of the ground threshold value is achieved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for outputting information according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for outputting information according to the present application;
FIG. 4 is a flow chart of one embodiment of determining a predicted surface in a method for outputting information according to the present application;
FIG. 5 is a schematic structural view of one embodiment of an apparatus for outputting information according to the present application;
fig. 6 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods for outputting information or the apparatus for outputting information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include autonomous vehicles 101, 102, 103, a network 104, and a server 105. The network 104 is a medium used to provide a communication link between the autonomous vehicles 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
Various sensors, such as lidar, may be mounted on the autonomous vehicles 101, 102, 103 to collect point cloud data of the driving environment of the autonomous vehicles 101, 102, 103. Various electronic devices such as navigation devices, unmanned vehicle controllers, anti-lock systems, brake force distribution systems, and the like may also be mounted on the autonomous vehicles 101, 102, 103. The autonomous vehicles 101, 102, 103 may be vehicles that include an autonomous mode, including both fully autonomous vehicles and vehicles that are capable of switching to an autonomous mode.
The server 105 may be a server that provides various services, such as a background server that processes point cloud data collected by the vehicles 101, 102, 103. The background server may analyze and process the received data, such as point cloud data, and feed back the processing result (e.g., the target ground threshold) to the vehicles 101, 102, 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster formed by a plurality of servers, or as a single server. When server 105 is software, it may be implemented as a plurality of software or software modules (e.g., to provide distributed services), or as a single software or software module. The present invention is not particularly limited herein.
It should be noted that the method for outputting information provided in the embodiment of the present application may be performed by the vehicles 101, 102, 103, or by the server 105. Accordingly, the means for outputting information may be provided in the vehicles 101, 102, 103, or in the server 105.
It should be understood that the numbers of vehicles, networks, and servers in fig. 1 are merely illustrative. There may be any number of vehicles, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for outputting information according to the present application is shown. The method for outputting information of the present embodiment includes the steps of:
step 201, determining a predicted ground according to point cloud data acquired by a vehicle in a running process.
In the present embodiment, the execution subject of the method for outputting information (e.g., the vehicles 101, 102, 103 or the server 105 shown in fig. 1) may acquire point cloud data acquired by the vehicle during traveling by a wired connection or a wireless connection. The point cloud data may include information of a plurality of point cloud points, such as coordinates, reflection intensity, and the like.
The vehicle can be any of various vehicles, and a laser radar sensor can be installed on the vehicle to collect point cloud data while the vehicle is running. After obtaining the point cloud data, the execution body can analyze the point cloud data to determine the predicted ground. Specifically, the execution body may first determine, according to a preset estimated ground, that the point cloud points within a certain range above and below the estimated ground form an estimated ground point cloud, and then determine the predicted ground according to the estimated ground point cloud. For example, the execution body may divide the estimated ground point cloud into a plurality of stereoscopic grids, calculate the center point of each stereoscopic grid, connect the center points of the grids into planes, and finally use the obtained planes as the predicted ground, as sketched below.
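The grid-and-centroid approach described above can be illustrated with a minimal sketch. This is not the patent's implementation; the function name, the grid size, and the use of NumPy are assumptions made for illustration.

```python
import numpy as np

def predict_ground_heights(ground_points, grid_size=2.0):
    """Minimal sketch: bucket estimated ground points into horizontal grid cells
    and use each cell's mean height as the local predicted ground height."""
    # ground_points: (N, 3) array of estimated ground point cloud points (x, y, z)
    cell_ids = np.floor(ground_points[:, :2] / grid_size).astype(int)
    heights = {}
    for cid in np.unique(cell_ids, axis=0):
        mask = np.all(cell_ids == cid, axis=1)
        heights[tuple(cid)] = ground_points[mask, 2].mean()
    return heights  # per-cell heights; adjacent cells can then be joined into a surface
```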
Step 202, determining a plurality of ground thresholds according to a preset ground threshold value range.
In this embodiment, the execution body may acquire a preset ground threshold value range, where the ground threshold value range may be determined according to preset ground threshold values. Each ground threshold may be set by a technician based on experience. The execution body can take values at preset distance intervals within the ground threshold value range to obtain a plurality of ground thresholds. For example, if the ground threshold value range is 0.5 m to 1.5 m and the execution body takes a value every 0.1 m, a plurality of ground thresholds are obtained, namely 0.5 m, 0.6 m, 0.7 m, ..., 1.5 m. Alternatively, the execution body may randomly select a plurality of ground thresholds from the ground threshold value range.
In some alternative implementations of the present embodiment, the executing entity may determine the plurality of ground thresholds by the following steps, not shown in fig. 2: and selecting a plurality of points in the ground threshold value range as a plurality of ground threshold values at preset distance intervals.
In this implementation manner, the execution body may select a plurality of points in the ground threshold value range as a plurality of ground threshold values at preset distance intervals. Specifically, the ground threshold value range may be divided by the distance interval.
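As a rough illustration of dividing the threshold range by a fixed interval, the following sketch enumerates the thresholds for the 0.5 m to 1.5 m example above; the variable names and the use of NumPy are illustrative assumptions.

```python
import numpy as np

# Enumerate candidate ground thresholds within the preset range at a fixed interval.
lower, upper, step = 0.5, 1.5, 0.1          # example range and interval from the text
ground_thresholds = np.arange(lower, upper + step / 2, step)
print(ground_thresholds)                    # [0.5 0.6 0.7 ... 1.5]
```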
Step 203, for each ground threshold, determining a ground point cloud according to the predicted ground and the ground threshold, performing obstacle recognition on the point clouds other than the ground point cloud in the point cloud data, and determining the number of obstacles.
After determining the plurality of ground thresholds, the executing body may process for each ground threshold. Specifically, the executing body may determine the ground point cloud according to the predicted ground and each ground threshold. For example, the execution subject may set, as the ground point cloud, cloud points above the predicted ground at which the distance from the predicted ground is less than or equal to the ground threshold. Then, the execution subject may perform obstacle recognition on the point clouds other than the ground point clouds in the above-described point cloud data to determine the number of obstacles contained therein.
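The separation of ground and non-ground points described above might look like the following sketch. The helper `ground_height_of`, which returns the predicted ground height at a given (x, y) position, and all other names are hypothetical; the patent does not prescribe this interface.

```python
import numpy as np

def split_by_ground_threshold(points, ground_height_of, ground_threshold):
    """Sketch: a point counts as ground if it lies above the predicted ground by
    no more than the ground threshold; the rest are obstacle candidates."""
    # points: (N, 3) array; ground_height_of(x, y) -> predicted ground z (assumed helper)
    ground_z = np.array([ground_height_of(x, y) for x, y in points[:, :2]])
    dz = points[:, 2] - ground_z
    is_ground = (dz >= 0.0) & (dz <= ground_threshold)
    return points[is_ground], points[~is_ground]
```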
The execution subject may recognize the obstacle included in each point cloud frame in the point cloud data by using a pre-trained obstacle recognition model or an obstacle recognition algorithm (e.g., a point cloud segmentation algorithm, a feature extraction algorithm, etc.), and may also recognize the obstacle included in each image frame in the image data. Specifically, the execution subject may input each point cloud frame of the point cloud data or each image frame in the image data from the input side of the obstacle recognition model, and the output side of the obstacle recognition model may obtain the recognized obstacle.
In some optional implementations of the present embodiment, the execution subject may perform the following steps, not shown in fig. 2, when performing obstacle recognition: performing obstacle recognition on point clouds except the ground point clouds in the point cloud data, and determining the size of the obstacle; and determining the number of the obstacles with the height smaller than a preset height threshold according to the sizes of the obstacles.
In this implementation, after identifying the obstacles, the execution body may determine the size of each obstacle. Specifically, the execution body may determine the size of an obstacle according to the coordinates of the point cloud points corresponding to the obstacle. Then, the execution body may determine the number of obstacles whose height is less than the preset height threshold. For taller obstacles, a change of the ground threshold generally has no effect on the recognition result. For shorter obstacles, if the ground threshold is set too high, they may be missed and the resulting number of obstacles is smaller; if the ground threshold is set too low, smaller obstacles may be detected and a greater number of obstacles is obtained. That is, the ground threshold has a large influence on the detection of small obstacles and a small influence on the detection of large obstacles. Therefore, in this implementation, when counting the number of obstacles, only the obstacles whose height is smaller than the preset height threshold may be counted, for example as in the sketch below.
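A minimal counting sketch, assuming each recognized obstacle carries a (length, width, height) size tuple; this data layout and the 0.5 m default are illustrative assumptions, not values from the patent.

```python
def count_low_obstacles(obstacles, height_threshold=0.5):
    """Sketch: count only obstacles whose height is below the preset height threshold,
    since the ground threshold mainly affects detection of small, low obstacles."""
    # obstacles: iterable of dicts such as {"size": (length, width, height)}
    return sum(1 for obs in obstacles if obs["size"][2] < height_threshold)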
And 204, determining and outputting a target ground threshold according to the obtained plurality of numbers.
In this embodiment, as the ground threshold increases, more and more point cloud points are classified as ground points, and the number of points left in the obstacle point cloud becomes smaller. Accordingly, the number of obstacles detected by the execution body also decreases. That is, as the ground threshold increases, the number of missed obstacles increases and the missed-detection rate rises, while the number of falsely detected obstacles decreases and the false-detection rate falls. Theoretically, the intersection of the missed-detection-rate curve and the false-detection-rate curve is the most suitable ground threshold. At the same time, this intersection is also the point at which the number of obstacles drops most rapidly. Therefore, the execution body can determine, from the obtained numbers, the point at which the number of obstacles decreases most rapidly, take the ground threshold corresponding to that point as the target ground threshold, and output the target ground threshold.
In some alternative implementations of the present embodiment, the executing entity may determine the target ground threshold by the following steps, not shown in fig. 2: determining a curve of the quantity-ground threshold according to the quantity and the ground threshold corresponding to each quantity; and determining the slope of the curve at each ground threshold, and determining the target ground threshold according to each slope.
In this embodiment, the execution body may use the ground threshold as the X axis and the number of obstacles as the Y axis according to the number of obstacles corresponding to each ground threshold, and may generate a number-ground threshold curve. The executing body may then determine the slope of the curve at each ground threshold according to the equation for the curve. The slope herein may refer to the rate of change of the number of obstacles. The execution body can determine a target ground threshold according to the obtained slopes. For example, the execution subject may set the ground threshold value at which the absolute value of the slope is largest as the target ground threshold value. Alternatively, the executing body may calculate the average value of each slope, and then use the ground threshold corresponding to the average value as the target ground threshold.
In some alternative implementations of the present embodiment, the executing entity may determine the target ground threshold by the following steps, not shown in fig. 2: determining the maximum value of the absolute values of the slopes; and taking the ground threshold corresponding to the maximum value as a target ground threshold.
In this implementation, the executing body may determine a maximum value among absolute values of the slopes; and taking the ground threshold corresponding to the maximum value as a target ground threshold.
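One way to realize the slope-based selection above is to approximate the derivative of the count-versus-threshold curve with finite differences; the sketch below does this with `numpy.gradient`, which is an illustrative choice rather than the method fixed by the patent.

```python
import numpy as np

def pick_target_ground_threshold(thresholds, obstacle_counts):
    """Sketch: estimate the slope of the number-vs-threshold curve at each threshold
    and return the threshold where the obstacle count drops fastest."""
    thresholds = np.asarray(thresholds, dtype=float)
    counts = np.asarray(obstacle_counts, dtype=float)
    slopes = np.gradient(counts, thresholds)       # dN/d(threshold) at each sample
    return thresholds[np.argmax(np.abs(slopes))]   # largest absolute slope = target
```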
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of fig. 3, the autonomous vehicle 301 collects point cloud data by a lidar sensor mounted thereto during traveling, and transmits the point cloud data to the server 302. The server 302 performs the processing of steps 201 to 204 with respect to the point cloud data, determines a target ground threshold value, and transmits the target ground threshold value to the automated driving vehicle 301. The autonomous vehicle 301 may perform real-time obstacle recognition during traveling according to the target ground threshold value described above.
The method for outputting information provided by the embodiment of the application can determine the predicted ground according to the point cloud data collected by the vehicle in the driving process. And determining a plurality of ground thresholds according to a preset ground threshold value range. Then, for each ground threshold, a ground point cloud is determined from the predicted ground and the ground threshold, and obstacle recognition is performed on point clouds other than the ground point cloud in the point cloud data, thereby determining the number of obstacles. Finally, determining and outputting a target ground threshold according to the obtained plurality of numbers. According to the method, the ground threshold value in the point cloud data processing process can be determined according to the number of the obstacles, so that manual adjustment is not needed, and automatic adjustment of the ground threshold value is achieved.
With continued reference to FIG. 4, a flow 400 of one embodiment of determining a predicted surface in a method for outputting information according to the present application is shown. It will be appreciated that the execution body of this embodiment may be the same as or different from the execution body of the embodiment shown in fig. 2. When the execution body of the present embodiment is different from the execution body of the embodiment shown in fig. 2, the execution body of the present embodiment may transmit the determined predicted ground to the execution body of the embodiment shown in fig. 2. As shown in fig. 4, the predicted ground surface may be determined in this embodiment by:
In step 401, an estimated ground point cloud is determined from the point cloud data.
In this embodiment, the execution subject may select the estimated ground point cloud from the point cloud data. Specifically, the executing body may determine the estimated ground point cloud according to the heights of the point cloud points in the point cloud data. For example, the execution body may find the point cloud point with the lowest height, and then take the point cloud points whose height difference from that point is less than a preset threshold as the estimated ground point cloud.
In some alternative implementations of the present embodiment, the executing entity may determine the estimated ground point cloud by the following steps, not shown in fig. 4: and taking the point cloud points in the range of the preset height from the estimated ground in the point cloud data as the estimated ground point cloud.
In this implementation, the executing body may first obtain the estimated ground. The estimated ground can be sent to the execution body by a technician in a preset mode, or the execution body can determine according to the position of the laser radar sensor. After determining the estimated ground, the execution body may use the point cloud point within a predetermined height range from the estimated ground in the point cloud data as the estimated ground point cloud.
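A minimal sketch of this height-window selection, assuming the estimated ground is a horizontal plane at a known height (for example derived from the lidar mounting height); the names and defaults are illustrative.

```python
import numpy as np

def select_estimated_ground_points(points, estimated_ground_z=0.0, height_range=0.5):
    """Sketch: keep point cloud points whose height lies within a preset range
    above or below the estimated ground."""
    dz = np.abs(points[:, 2] - estimated_ground_z)
    return points[dz <= height_range]
```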
Step 402, dividing the first three-dimensional space where the estimated ground point cloud is located into a plurality of second three-dimensional spaces.
After determining the estimated ground point cloud, the execution body may divide the first three-dimensional space in which the estimated ground point cloud is located into a plurality of second three-dimensional spaces. It can be understood that this first space is a three-dimensional region whose bottom surface is the footprint of the point cloud data on the ground and whose height is the preset height. The execution body may divide the first three-dimensional space into a plurality of second three-dimensional spaces in various ways.
In some optional implementations of the present embodiment, the execution subject may divide the first stereoscopic space into a plurality of second stereoscopic spaces through the following steps not shown in fig. 4: dividing the estimated ground into a plurality of grids; the first three-dimensional space is divided based on the grids, and a plurality of second three-dimensional spaces are obtained.
In this implementation manner, the execution body may first divide the estimated ground into a plurality of grids. Then, the execution body may take each grid as a bottom surface and the preset height as the height, thereby obtaining a plurality of second three-dimensional spaces. In practical applications, in order to reduce the amount of calculation, the execution body may project the point cloud data onto the ground to obtain a projected ground and then divide the projected ground into grids, for example as in the sketch after this paragraph.
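A sketch of grouping points by horizontal grid cell; each cell, extended upward by the preset height, plays the role of a second three-dimensional space. The grid size and the container choice are assumptions for illustration.

```python
from collections import defaultdict
import numpy as np

def split_into_grid_cells(points, grid_size=2.0):
    """Sketch: grid the ground projection and group points by the cell they fall in."""
    cells = defaultdict(list)
    for p in points:
        cell = (int(np.floor(p[0] / grid_size)), int(np.floor(p[1] / grid_size)))
        cells[cell].append(p)
    return {cell: np.vstack(pts) for cell, pts in cells.items()}
```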
And step 403, performing ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes.
After obtaining the plurality of second three-dimensional spaces, the execution body can perform ground estimation on the estimated ground point cloud in each second three-dimensional space to obtain a plurality of ground sub-planes. Specifically, the executing body may calculate a height average value of each cloud point in each second three-dimensional space, and then use the height average value as a ground sub-plane of the second three-dimensional space.
In some alternative implementations of the present embodiment, the executing body may determine the ground sub-plane by the following steps, not shown in fig. 4: fitting estimated ground point cloud points in a plurality of second three-dimensional spaces to obtain a plurality of first planes; for each first plane, the following fitting step is performed: selecting estimated ground point cloud points with the distance smaller than a first distance threshold value from a second three-dimensional space corresponding to the first plane as candidate point cloud points; fitting a second plane by using the candidate point cloud points; determining whether the second plane is stable; and if the second plane is stable, taking the second plane as a ground sub-plane.
In this implementation manner, the execution body may fit each cloud point in each second stereo space to obtain the first plane. Here, the existing fitting method can be used for fitting the cloud points of each point. The execution subject may then perform the following fitting steps for each first plane.
First, the estimated ground point cloud points whose distance to the first plane is smaller than a first distance threshold are selected from the second three-dimensional space corresponding to the first plane, and the selected points are taken as candidate point cloud points.
Then, the candidate point cloud points are used for fitting a second plane.
It will be appreciated that the fitting method employed by the executing subject may be the same as the fitting method when the first plane is obtained by fitting.
Finally, judging whether the second plane is stable. If stable, the second plane is taken as a ground sub-plane of the second stereoscopic space.
If not, the second plane is taken as a new first plane, and the fitting step is continuously carried out.
Here, stable may refer to the second plane having an angle with the ground that is less than a preset angle threshold. Or, the number of the point cloud points falling on the second plane is larger than a preset number threshold.
In some alternative implementations of the present embodiment, the executing body may determine whether the second plane is stable by the following steps, not shown in fig. 4: determining whether the sum of the distances from the estimated ground point cloud point in the second three-dimensional space to the second plane is smaller than a second distance threshold value or not in response to the fact that the execution times of the fitting step are smaller than a preset time threshold value; if the sum of the distances is smaller than a second distance threshold, determining that the second plane is stable; if the sum of the distances is not less than the second distance threshold, the second plane is determined to be unstable.
In this implementation, the execution body may record the number of times the fitting step has been performed. When this number is smaller than the preset times threshold, the execution body calculates the sum of the distances from each point cloud point in the second three-dimensional space to the second plane. If the sum of the distances is less than the second distance threshold, the second plane is stable; otherwise, the second plane is unstable.
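The iterative fitting step and the distance-sum stability test can be sketched as follows. The least-squares plane model, the thresholds, and the iteration budget are illustrative assumptions; the patent does not fix the fitting method.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c; a simple stand-in for the fitting method."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def point_plane_distances(points, plane):
    """Perpendicular distance of each point to the plane a*x + b*y - z + c = 0."""
    a, b, c = plane
    return np.abs(a * points[:, 0] + b * points[:, 1] - points[:, 2] + c) / np.sqrt(a * a + b * b + 1.0)

def estimate_ground_subplane(points, first_dist_thresh=0.2, sum_dist_thresh=5.0, max_iters=10):
    """Sketch of the fitting step: refit on near-plane candidate points until the
    summed point-to-plane distance falls below the second distance threshold."""
    plane = fit_plane(points)                              # first plane
    for _ in range(max_iters):
        candidates = points[point_plane_distances(points, plane) < first_dist_thresh]
        if len(candidates) < 3:
            return None                                    # treat as: no usable ground points
        plane = fit_plane(candidates)                      # second plane
        if point_plane_distances(points, plane).sum() < sum_dist_thresh:
            return plane                                   # stable: use as ground sub-plane
        # otherwise the second plane replaces the first plane and the step repeats
    return None                                            # times threshold reached
```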
In some alternative implementations of the present embodiment, the execution body may obtain the first plane through the following steps not shown in fig. 4: sampling the estimated ground point cloud points in each second three-dimensional space to obtain sampling point cloud points; and fitting the first plane by using the cloud points of the sampling points.
In this implementation manner, the execution body may sample each point cloud point in the second three-dimensional space, and obtain the first plane by using the point cloud point fitting obtained by sampling. In sampling, the execution body may employ various sampling methods, such as random sampling, and the like. This can reduce the amount of calculation at the time of fitting.
In some alternative implementations of the present embodiment, the execution body may implement sampling of the cloud points of points within the second stereo space through the following steps, which are not shown in fig. 4: dividing the second stereoscopic space into a plurality of third stereoscopic spaces; and sampling point cloud points in each third three-dimensional space.
In this implementation manner, in order to ensure uniformity of sampling, the execution body may first divide each second stereoscopic space into a plurality of third stereoscopic spaces when sampling. Then, the execution body can take the same number of point cloud points in each third three-dimensional space.
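A sketch of the uniform sampling described above: the second space is split into several smaller sub-spaces (here simply along the x axis) and the same number of points is drawn from each. The split direction, the counts, and the random generator are illustrative assumptions.

```python
import numpy as np

def sample_uniformly(cell_points, n_subspaces=4, samples_per_subspace=10, rng=None):
    """Sketch: divide a second three-dimensional space into sub-spaces and take the
    same number of point cloud points from each, so the sample covers the cell evenly."""
    rng = rng or np.random.default_rng()
    order = np.argsort(cell_points[:, 0])
    sub_spaces = np.array_split(cell_points[order], n_subspaces)
    samples = []
    for sub in sub_spaces:
        if len(sub) == 0:
            continue
        idx = rng.choice(len(sub), size=min(samples_per_subspace, len(sub)), replace=False)
        samples.append(sub[idx])
    return np.vstack(samples) if samples else cell_points
```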
In some optional implementations of the present embodiment, the method may further include the following steps, not shown in fig. 4: and determining that no estimated ground point cloud point exists in the second three-dimensional space in response to the fact that the execution times of the fitting step are not smaller than a preset time threshold.
In this implementation manner, if the number of times the fitting step has been performed is not less than the preset times threshold, it is considered that no estimated ground point cloud point exists in the second three-dimensional space. The point cloud points in that second three-dimensional space are then not used when determining the predicted ground.
In some optional implementations of the present embodiment, the method may further include the following steps, not shown in fig. 4: and responding to the fact that the execution times of the fitting step are not smaller than a preset time threshold, and the angle between the second plane and the ground is larger than an angle threshold, and determining that no estimated ground point cloud point exists in the second three-dimensional space.
In this implementation manner, if the execution subject determines that the execution frequency of the fitting step is not less than the preset frequency threshold, and determines that the angle between the ground and the second plane obtained by executing the fitting step last time is greater than the angle threshold, the obtained second plane is considered unreasonable, and it can be considered that no estimated ground point cloud point exists in the second three-dimensional space.
Step 404, generating a predicted ground based on the plurality of ground sub-planes.
After obtaining the plurality of ground sub-planes, the executing body may generate a predicted ground from each ground sub-plane. Specifically, the execution main body can splice all the ground sub-planes according to the positions of the ground sub-planes to obtain the predicted ground.
In some alternative implementations of the present embodiment, the execution body may smooth the plurality of ground sub-planes to generate the predicted ground.
In some alternative implementations of the present embodiment, when smoothing, each ground sub-plane may be smoothed using the ground sub-planes surrounding it.
In some optional implementations of this embodiment, for each ground sub-plane, the included angle between the ground sub-plane and the ground, the included angles between the surrounding ground sub-planes and the ground, and the weights corresponding to these included angles may be used to calculate an included-angle adjustment value for the ground sub-plane. The included angle between the ground sub-plane and the ground is then adjusted according to this adjustment value, for example as in the sketch below.
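A minimal sketch of the weighted-angle smoothing, assuming a simple weighted average of the sub-plane's own angle and the mean angle of its neighbors; the weights and the averaging rule are illustrative assumptions rather than the patent's formula.

```python
import numpy as np

def smooth_subplane_angle(own_angle, neighbor_angles, own_weight=0.5):
    """Sketch: adjust a sub-plane's angle to the ground toward the angles
    of the surrounding ground sub-planes."""
    if len(neighbor_angles) == 0:
        return own_angle
    neighbor_mean = float(np.mean(neighbor_angles))
    return own_weight * own_angle + (1.0 - own_weight) * neighbor_mean
```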
The method for outputting information provided by this embodiment of the application can obtain a more accurate predicted ground, thereby improving the accuracy of the target ground threshold.
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an apparatus for outputting information, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: a predicted ground determination unit 501, a ground threshold determination unit 502, an obstacle number determination unit 503, and a target ground threshold determination unit 504.
The predicted ground determining unit 501 is configured to determine a predicted ground based on point cloud data acquired by the vehicle during traveling.
The ground threshold determining unit 502 is configured to determine a plurality of ground thresholds according to a preset ground threshold value range.
An obstacle number determination unit 503 configured to determine, for each ground threshold, a ground point cloud from the predicted ground and the ground threshold, and perform obstacle recognition on point clouds other than the ground point cloud in the point cloud data, determining the number of obstacles.
A target ground threshold value determining unit 504 configured to determine and output a target ground threshold value according to the obtained plurality of numbers.
In some optional implementations of the present embodiment, the ground threshold determination unit 502 is further configured to: and selecting a plurality of points in the ground threshold value range as a plurality of ground threshold values at preset distance intervals.
In some optional implementations of the present embodiment, the obstacle number determining unit 503 is further configured to: performing obstacle recognition on point clouds except the ground point clouds in the point cloud data, and determining the size of the obstacle; and determining the number of the obstacles with the height smaller than a preset height threshold according to the sizes of the obstacles.
In some optional implementations of the present embodiment, the target ground threshold determination unit 504 is further configured to: determining a curve of the quantity-ground threshold according to the quantity and the ground threshold corresponding to each quantity; and determining the slope of the curve at each ground threshold, and determining the target ground threshold according to each slope.
In some optional implementations of the present embodiment, the target ground threshold determination unit 504 is further configured to: determining the maximum value of the absolute values of the slopes; and taking the ground threshold corresponding to the maximum value as a target ground threshold.
In some optional implementations of the present embodiment, the predicted ground determination unit 501 may further include a point cloud determination module, a space division module, a ground estimation module, and a ground generation module, which are not shown in fig. 5.
The point cloud determining module is configured to determine estimated ground point clouds in the point cloud data;
the space division module is configured to divide a first three-dimensional space in which the estimated ground point cloud is located into a plurality of second three-dimensional spaces.
The ground estimation module is configured to perform ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes.
The ground generation module is configured to generate a predicted ground based on the plurality of ground sub-planes.
In some optional implementations of the present embodiment, the point cloud determination module is further configured to: and taking the point cloud points in the range of the preset height from the estimated ground in the point cloud data as the estimated ground point cloud.
In some optional implementations of the present embodiment, the spatial partitioning module is further configured to: dividing the estimated ground into a plurality of grids; the first three-dimensional space is divided based on the grids, and a plurality of second three-dimensional spaces are obtained.
In some optional implementations of the present embodiment, the ground estimation module is further configured to: fitting estimated ground point cloud points in a plurality of second three-dimensional spaces to obtain a plurality of first planes; for each first plane, the following fitting step is performed: selecting estimated ground point cloud points with the distance smaller than a first distance threshold value from a second three-dimensional space corresponding to the first plane as candidate point cloud points; fitting a second plane by using the candidate point cloud points; determining whether the second plane is stable; if the second plane is stable, the second plane is taken as a ground sub-plane.
In some optional implementations of the present embodiment, the ground estimation module is further configured to: in response to determining that the second plane is unstable, the first plane is replaced with the second plane and the fitting step continues.
In some optional implementations of the present embodiment, the ground estimation module is further configured to: sampling the estimated ground point cloud points in each second three-dimensional space to obtain sampling point cloud points; and fitting the first plane by using the cloud points of the sampling points.
In some optional implementations of the present embodiment, the ground estimation module is further configured to: dividing the second stereoscopic space into a plurality of third stereoscopic spaces; and sampling the estimated ground point cloud points in each third three-dimensional space.
In some optional implementations of the present embodiment, the ground estimation module is further configured to: determining whether the sum of the distances from the estimated ground point cloud point in the second three-dimensional space to the second plane is smaller than a second distance threshold value or not in response to the fact that the execution times of the fitting step are smaller than a preset time threshold value; if the sum of the distances is smaller than a second distance threshold, determining that the second plane is stable; if the sum of the distances is not less than the second distance threshold, the second plane is determined to be unstable.
In some optional implementations of the present embodiment, the predicted ground determination unit 501 further includes: the first determining module is configured to determine that no estimated ground point cloud point exists in the second stereo space in response to the number of times of execution of the fitting step being not less than a number of times threshold.
In some optional implementations of the present embodiment, the predicted ground determination unit 501 further includes: and the second determining module is configured to determine that no estimated ground point cloud point exists in the second three-dimensional space in response to the number of times of execution of the fitting step is not less than a preset number of times threshold and the angle between the second plane and the ground is greater than an angle threshold.
It should be understood that the units 501 to 504 described in the apparatus 500 for outputting information correspond to the respective steps in the method described with reference to fig. 2. Thus, the operations and features described above with respect to the method for outputting information are equally applicable to the apparatus 500 and the units contained therein, and are not described in detail herein.
Referring now to fig. 6, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine a predicted ground according to point cloud data collected by the vehicle in the running process; determine a plurality of ground thresholds according to a preset ground threshold value range; for each ground threshold, determine a ground point cloud according to the predicted ground and the ground threshold, perform obstacle recognition on point clouds other than the ground point cloud in the point cloud data, and determine the number of obstacles; and determine and output a target ground threshold according to the plurality of obtained numbers.
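By way of illustration only, the following Python sketch shows one way the program logic just described might be organized. The helpers estimate_ground (assumed to return an object with a signed_distance method), count_obstacles, and choose_threshold_from_counts are hypothetical placeholders rather than functions defined in this disclosure, and the default value range and step are arbitrary.

import numpy as np

def sweep_ground_thresholds(points, estimate_ground, count_obstacles,
                            choose_threshold_from_counts,
                            threshold_range=(0.05, 0.5), step=0.05):
    """Sketch: sweep candidate ground thresholds over a preset value range,
    record the obstacle count at each one, and delegate the final choice."""
    ground = estimate_ground(points)                   # predicted ground
    dist = np.abs(ground.signed_distance(points))      # distance of each point to it
    thresholds = np.arange(threshold_range[0], threshold_range[1] + 1e-9, step)
    counts = np.array([count_obstacles(points[dist > t]) for t in thresholds],
                      dtype=float)
    return choose_threshold_from_counts(thresholds, counts)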
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, or a combination thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor including a predicted ground determination unit, a ground threshold determination unit, an obstacle number determination unit, and a target ground threshold determination unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the predicted ground determination unit may also be described as "a unit that determines a predicted ground from point cloud data collected by the vehicle during driving".
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (28)

1. A method for outputting information, comprising:
according to point cloud data collected by a vehicle during driving, taking point cloud points in the point cloud data within a preset height range from an estimated ground as an estimated ground point cloud; and determining a predicted ground according to the estimated ground point cloud;
determining a plurality of ground thresholds according to a preset ground threshold value range;
for each ground threshold, determining a ground point cloud according to the predicted ground and the ground threshold, performing obstacle recognition on point clouds other than the ground point cloud in the point cloud data, and determining the number of obstacles; and
determining a number-versus-ground-threshold curve according to the obtained plurality of numbers and the ground threshold corresponding to each number; determining the slope of the curve at each ground threshold; and determining a target ground threshold according to each slope.
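By way of illustration only (not part of the claim), one plausible reading of this selection rule in Python is to differentiate the number-versus-threshold curve numerically and take the threshold where the absolute slope peaks; the sample numbers in the usage lines are invented.

import numpy as np

def choose_threshold_from_counts(thresholds, counts):
    """Pick the ground threshold at which the obstacle count changes fastest,
    i.e. where the number-versus-threshold curve has the largest |slope|."""
    slopes = np.gradient(counts, thresholds)   # numerical derivative of the curve
    return float(thresholds[np.argmax(np.abs(slopes))])

# invented example: the count drops sharply once the threshold clears ground noise
thresholds = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
counts = np.array([120.0, 95.0, 30.0, 28.0, 27.0])
print(choose_threshold_from_counts(thresholds, counts))   # 0.1 for these made-up numbers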
2. The method of claim 1, wherein the determining a plurality of ground thresholds according to the preset ground threshold value range comprises:
selecting, at preset distance intervals, a plurality of points in the ground threshold value range as the plurality of ground thresholds.
3. The method of claim 1, wherein the performing obstacle recognition on the point clouds other than the ground point cloud in the point cloud data and determining the number of obstacles comprises:
performing obstacle recognition on the point clouds other than the ground point cloud in the point cloud data, and determining the size of each obstacle; and
determining, according to the sizes of the obstacles, the number of obstacles whose height is smaller than a preset height threshold.
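By way of illustration only (not part of the claim), a minimal sketch of this counting rule, assuming scikit-learn's DBSCAN as a stand-in for whatever clustering the obstacle recognizer actually uses; the height threshold and clustering parameters are invented defaults.

import numpy as np
from sklearn.cluster import DBSCAN

def count_small_obstacles(non_ground_points, height_threshold=0.3,
                          eps=0.5, min_samples=5):
    """Cluster non-ground points into obstacle candidates and count those whose
    bounding-box height stays below height_threshold."""
    if len(non_ground_points) == 0:
        return 0
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(non_ground_points)
    count = 0
    for label in set(labels):
        if label == -1:                                      # noise, not an obstacle
            continue
        cluster = non_ground_points[labels == label]
        height = cluster[:, 2].max() - cluster[:, 2].min()   # size along z
        if height < height_threshold:
            count += 1
    return count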
4. The method of claim 1, wherein the determining a target ground threshold from each slope comprises:
determining the maximum value of the absolute values of the slopes;
and taking the ground threshold corresponding to the maximum value as a target ground threshold.
5. The method of any of claims 1-4, wherein the determining a predicted ground according to the estimated ground point cloud comprises:
dividing a first three-dimensional space in which the estimated ground point cloud is located into a plurality of second three-dimensional spaces;
performing ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes;
the predicted ground is generated based on the plurality of ground sub-planes.
6. The method of claim 5, wherein the dividing the first three-dimensional space in which the estimated ground point cloud is located into a plurality of second three-dimensional spaces comprises:
dividing the estimated ground into a plurality of grids;
and dividing the first three-dimensional space based on a plurality of grids to obtain a plurality of second three-dimensional spaces.
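By way of illustration only (not part of the claim), one way to read this grid-based division in Python; the cell size is an arbitrary assumption, and each non-empty x-y cell plays the role of one second three-dimensional space.

import numpy as np
from collections import defaultdict

def split_into_grid_cells(points, cell_size=5.0):
    """Assign each estimated ground point to an x-y grid cell; the points in
    one cell form one candidate 'second three-dimensional space'."""
    cells = defaultdict(list)
    cell_indices = np.floor(points[:, :2] / cell_size).astype(int)
    for point, key in zip(points, map(tuple, cell_indices)):
        cells[key].append(point)
    return {key: np.vstack(pts) for key, pts in cells.items()}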
7. The method of claim 5, wherein the performing ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes comprises:
fitting estimated ground point cloud points in a plurality of second three-dimensional spaces to obtain a plurality of first planes;
for each first plane, performing the following fitting step: selecting, from the second three-dimensional space corresponding to the first plane, estimated ground point cloud points whose distances to the first plane are smaller than a first distance threshold value as candidate point cloud points; fitting a second plane by using the candidate point cloud points; determining whether the second plane is stable; and if the second plane is stable, taking the second plane as a ground sub-plane.
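By way of illustration only (not part of the claim), a sketch of such an iterative fitting step using a least-squares plane z = a*x + b*y + c; the thresholds, the iteration budget, and the stability test are stand-ins consistent with claims 11-12 rather than values taken from this disclosure.

import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c; returns (a, b, c)."""
    design = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(design, points[:, 2], rcond=None)
    return coeffs

def plane_distances(points, coeffs):
    """Perpendicular distances of points to the plane a*x + b*y + c - z = 0."""
    a, b, c = coeffs
    return np.abs(a * points[:, 0] + b * points[:, 1] + c - points[:, 2]) / np.sqrt(a * a + b * b + 1.0)

def refine_ground_sub_plane(points, first_distance_threshold=0.2,
                            second_distance_threshold=None, max_iterations=10):
    """Repeat the fitting step: keep points near the current plane, refit, and
    stop once the fit is stable or the iteration budget runs out."""
    if second_distance_threshold is None:
        second_distance_threshold = 0.05 * len(points)     # arbitrary stand-in
    plane = fit_plane(points)                              # first plane
    for _ in range(max_iterations):
        candidates = points[plane_distances(points, plane) < first_distance_threshold]
        if len(candidates) < 3:
            return None                                    # not enough support
        new_plane = fit_plane(candidates)                  # second plane
        if plane_distances(points, new_plane).sum() < second_distance_threshold:
            return new_plane                               # stable: accept as ground sub-plane
        plane = new_plane                                  # unstable: replace and repeat
    return None                                            # budget exhausted: treat as no ground points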
8. The method of claim 7, wherein the performing ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes further comprises:
in response to determining that the second plane is unstable, replacing the first plane with the second plane and continuing to perform the fitting step.
9. The method of claim 7, wherein the fitting estimated ground point cloud points in the plurality of second three-dimensional spaces to obtain a plurality of first planes comprises:
sampling the estimated ground point cloud points in each second three-dimensional space to obtain sampling point cloud points;
and fitting a first plane by using the sampling point cloud points.
10. The method of claim 9, wherein the sampling the estimated ground point cloud points in each second three-dimensional space comprises:
dividing the second three-dimensional space into a plurality of third three-dimensional spaces; and
and sampling the estimated ground point cloud points in each third three-dimensional space.
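By way of illustration only (not part of the claim), a sketch of this per-cell sampling; the division into third three-dimensional spaces is read here as a voxel grid, and keeping the centroid per voxel is just one of several plausible sampling choices.

import numpy as np

def sample_per_sub_space(points, sub_cell_size=1.0):
    """Split a second three-dimensional space into smaller voxels and keep one
    representative point (the centroid) per non-empty voxel."""
    if len(points) == 0:
        return points
    voxel_keys = np.floor(points / sub_cell_size).astype(int)
    _, inverse = np.unique(voxel_keys, axis=0, return_inverse=True)
    return np.array([points[inverse == k].mean(axis=0)
                     for k in range(inverse.max() + 1)])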
11. The method of claim 7, wherein said determining whether the second plane is stable comprises:
in response to the number of times the fitting step has been performed being smaller than a preset number-of-times threshold, determining whether the sum of the distances from the estimated ground point cloud points in the second three-dimensional space to the second plane is smaller than a second distance threshold value;
if the sum of the distances is smaller than the second distance threshold value, determining that the second plane is stable;
and if the sum of the distances is not smaller than the second distance threshold value, determining that the second plane is unstable.
12. The method of claim 11, wherein the method further comprises:
determining that no estimated ground point cloud point exists in the second three-dimensional space in response to the number of times the fitting step has been performed being not less than the number-of-times threshold.
13. The method of claim 11, wherein the method further comprises:
determining that no estimated ground point cloud point exists in the second three-dimensional space in response to the number of times the fitting step has been performed being not less than the preset number-of-times threshold and an angle between the second plane and the ground being greater than an angle threshold.
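By way of illustration only (not part of the claim), the plane-to-ground angle used in this termination test can be computed from the fitted plane's normal; the sketch assumes the (a, b, c) parameterization of the fitting sketch above and a horizontal reference ground.

import numpy as np

def plane_ground_angle_deg(coeffs):
    """Angle, in degrees, between the plane z = a*x + b*y + c and a horizontal
    ground plane, computed from the plane normal (a, b, -1)."""
    a, b, _ = coeffs
    normal = np.array([a, b, -1.0])
    cos_angle = abs(normal @ np.array([0.0, 0.0, 1.0])) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))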
14. An apparatus for outputting information, comprising:
a predicted ground determination unit configured to take, according to point cloud data collected by a vehicle during driving, point cloud points in the point cloud data within a preset height range from an estimated ground as an estimated ground point cloud, and to determine a predicted ground according to the estimated ground point cloud;
a ground threshold determination unit configured to determine a plurality of ground thresholds according to a preset ground threshold value range;
an obstacle number determination unit configured to determine, for each ground threshold, a ground point cloud according to the predicted ground and the ground threshold, to perform obstacle recognition on point clouds other than the ground point cloud in the point cloud data, and to determine the number of obstacles; and
a target ground threshold determination unit configured to determine a number-versus-ground-threshold curve according to the plurality of numbers and the ground threshold corresponding to each number, to determine the slope of the curve at each ground threshold, and to determine a target ground threshold according to each slope.
15. The apparatus of claim 14, wherein the ground threshold determination unit is further configured to:
selecting, at preset distance intervals, a plurality of points in the ground threshold value range as the plurality of ground thresholds.
16. The apparatus of claim 14, wherein the obstacle number determination unit is further configured to:
performing obstacle recognition on the point clouds other than the ground point cloud in the point cloud data, and determining the size of each obstacle; and
determining, according to the sizes of the obstacles, the number of obstacles whose height is smaller than a preset height threshold.
17. The apparatus of claim 14, wherein the target ground threshold determination unit is further configured to:
determining the maximum value of the absolute values of the slopes;
and taking the ground threshold corresponding to the maximum value as a target ground threshold.
18. The apparatus of any of claims 14-17, wherein the predicted ground determination unit comprises:
a space division module configured to divide a first three-dimensional space in which the estimated ground point cloud is located into a plurality of second three-dimensional spaces;
a ground estimation module configured to perform ground estimation on the estimated ground point clouds in the plurality of second three-dimensional spaces to obtain a plurality of ground sub-planes; and
a ground generation module configured to generate the predicted ground based on the plurality of ground sub-planes.
19. The apparatus of claim 18, wherein the spatial partitioning module is further configured to:
dividing the estimated ground into a plurality of grids;
and dividing the first three-dimensional space based on a plurality of grids to obtain a plurality of second three-dimensional spaces.
20. The apparatus of claim 18, wherein the ground estimation module is further configured to:
fitting estimated ground point cloud points in a plurality of second three-dimensional spaces to obtain a plurality of first planes;
for each first plane, performing the following fitting step: selecting, from the second three-dimensional space corresponding to the first plane, estimated ground point cloud points whose distances to the first plane are smaller than a first distance threshold value as candidate point cloud points; fitting a second plane by using the candidate point cloud points; determining whether the second plane is stable; and if the second plane is stable, taking the second plane as a ground sub-plane.
21. The apparatus of claim 20, wherein the ground estimation module is further configured to:
in response to determining that the second plane is unstable, replacing the first plane with the second plane and continuing to perform the fitting step.
22. The apparatus of claim 20, wherein the ground estimation module is further configured to:
sampling the estimated ground point cloud points in each second three-dimensional space to obtain sampling point cloud points;
and fitting a first plane by using the sampling point cloud points.
23. The apparatus of claim 22, wherein the ground estimation module is further configured to:
dividing the second three-dimensional space into a plurality of third three-dimensional spaces; and
sampling the estimated ground point cloud points in each third three-dimensional space.
24. The apparatus of claim 20, wherein the ground estimation module is further configured to:
in response to the number of times the fitting step has been performed being smaller than a preset number-of-times threshold, determining whether the sum of the distances from the estimated ground point cloud points in the second three-dimensional space to the second plane is smaller than a second distance threshold value;
if the sum of the distances is smaller than the second distance threshold value, determining that the second plane is stable;
and if the sum of the distances is not smaller than the second distance threshold value, determining that the second plane is unstable.
25. The apparatus of claim 24, wherein the predicted ground determination unit further comprises:
a first determining module configured to determine that no estimated ground point cloud point exists in the second three-dimensional space in response to the number of times the fitting step has been performed being not less than the number-of-times threshold.
26. The apparatus of claim 24, wherein the predicted ground determination unit further comprises:
a second determining module configured to determine that no estimated ground point cloud point exists in the second three-dimensional space in response to the number of times the fitting step has been performed being not less than the preset number-of-times threshold and an angle between the second plane and the ground being greater than an angle threshold.
27. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-13.
28. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-13.
CN201910907185.7A 2019-09-24 2019-09-24 Method and device for outputting information Active CN112560548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910907185.7A CN112560548B (en) 2019-09-24 2019-09-24 Method and device for outputting information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910907185.7A CN112560548B (en) 2019-09-24 2019-09-24 Method and device for outputting information

Publications (2)

Publication Number Publication Date
CN112560548A CN112560548A (en) 2021-03-26
CN112560548B true CN112560548B (en) 2024-04-02

Family

ID=75028981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910907185.7A Active CN112560548B (en) 2019-09-24 2019-09-24 Method and device for outputting information

Country Status (1)

Country Link
CN (1) CN112560548B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050015201A1 (en) * 2003-07-16 2005-01-20 Sarnoff Corporation Method and apparatus for detecting obstacles

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109154823A (en) * 2016-06-10 2019-01-04 凯斯纽荷兰工业美国有限责任公司 Utonomous working vehicle barrier detection system
CN110121716A (en) * 2017-04-28 2019-08-13 深圳市大疆创新科技有限公司 Method and related system for network analysis
CN109840448A (en) * 2017-11-24 2019-06-04 百度在线网络技术(北京)有限公司 Information output method and device for automatic driving vehicle
CN108501954A (en) * 2018-04-03 2018-09-07 北京瑞特森传感科技有限公司 A kind of gesture identification method, device, automobile and storage medium
CN109144097A (en) * 2018-08-15 2019-01-04 广州极飞科技有限公司 Barrier or ground identification and flight control method, device, equipment and medium
CN109766404A (en) * 2019-02-12 2019-05-17 湖北亿咖通科技有限公司 Points cloud processing method, apparatus and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
The Analysis of Stereo Vision 3D Point Cloud Data of Autonomous Vehicle Obstacle Recognition; Li P. et al.; 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics; 20151123; full text *
Intelligent vehicle obstacle detection method based on information fusion; Lu Feng; Xu Youchun; Li Yongle; Wang Deyu; Xie Desheng; Journal of Computer Applications; 20171220 (Issue S2); full text *
Application of laser point clouds in path detection for driverless vehicles; Zhang Yongbo; Li Bijun; Chen Cheng; Bulletin of Surveying and Mapping; 20161125 (Issue 11); full text *

Also Published As

Publication number Publication date
CN112560548A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN110687549B (en) Obstacle detection method and device
EP3798974B1 (en) Method and apparatus for detecting ground point cloud points
CN109521756B (en) Obstacle motion information generation method and apparatus for unmanned vehicle
CN112630799B (en) Method and apparatus for outputting information
CN109146976B (en) Method and device for locating unmanned vehicles
CN111602138B (en) Object detection system and method based on artificial neural network
CN110717918B (en) Pedestrian detection method and device
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN112622923B (en) Method and device for controlling a vehicle
CN115540896A (en) Path planning method, path planning device, electronic equipment and computer readable medium
CN115406457A (en) Driving region detection method, system, equipment and storage medium
CN112558035B (en) Method and device for estimating the ground
CN112558036B (en) Method and device for outputting information
CN112630798B (en) Method and apparatus for estimating ground
CN116205964B (en) Point cloud downsampling method and device based on horizontal distance
CN112560548B (en) Method and device for outputting information
CN115565374A (en) Logistics vehicle driving optimization method and device, electronic equipment and readable storage medium
CN112668371B (en) Method and device for outputting information
CN114049449A (en) High-precision map road level calculation method and system
CN110363834B (en) Point cloud data segmentation method and device
CN113119999A (en) Method, apparatus, device, medium, and program product for determining automatic driving characteristics
CN112634487B (en) Method and apparatus for outputting information
CN113650616B (en) Vehicle behavior prediction method and system based on collected data
CN113987741A (en) Multi-target data tracking method and system
CN116977997A (en) Freespace real-time detection method based on quasi-density tree structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant