CN110298311B - Method and device for detecting surface water accumulation - Google Patents

Method and device for detecting surface water accumulation

Info

Publication number
CN110298311B
Authority
CN
China
Prior art keywords
point cloud
coordinate system
grid map
grid
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910577870.8A
Other languages
Chinese (zh)
Other versions
CN110298311A (en)
Inventor
张蓉
熊祺
张放
李晓飞
张德兆
王肖
霍舒豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd
Priority to CN201910577870.8A
Publication of CN110298311A
Application granted
Publication of CN110298311B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/182 Network patterns, e.g. roads or rivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method for detecting surface water accumulation, which comprises the following steps: acquiring multi-frame laser point cloud information; obtaining a first grid map occupied by the point cloud cavity of the current frame point cloud; determining a second grid map occupied by the point cloud cavity in a second coordinate system according to the positioning information of the vehicle and the first grid map; tracking each grid unit and, after tracking, performing coordinate conversion to obtain a third grid map in the first coordinate system; acquiring the contour of each connected region of the binary image of the third grid map; when the first water surface area is not larger than a preset first threshold, filtering out the corresponding contour; and determining the position of the surface water accumulation in the first coordinate system from the remaining contours. The method therefore directly reuses the perception sensors already present on an autonomous vehicle and achieves effective detection of accumulated water without a large amount of manual labeling work.

Description

Method and device for detecting surface water accumulation
Technical Field
The invention relates to the field of data processing, and in particular to a method and a device for detecting surface water accumulation.
Background
In recent years, autonomous vehicles have been demonstrated in many smart-city scenarios such as intelligent transportation, logistics distribution and street-cleaning operations. However, autonomous vehicles still face many challenges before all-weather operation can be achieved: severe weather such as heavy rain and snow, and road conditions such as standing water and accumulated snow, affect the safe driving of the vehicle. Therefore, in addition to obstacle detection, the perception system of an autonomous vehicle needs road-condition detection capabilities such as detecting water accumulated on the road surface.
At present, road-surface water detection schemes include methods based on special optical equipment, methods based on near-field radar, and image-processing methods based on deep learning.
Few water-surface detection schemes based on vehicle-mounted laser radar have been published, and no mature related technology has been found. Detection technologies based on special optical devices or near-field radar devices are often expensive in practical applications and are easily affected by the external environment. Vision solutions based on deep learning have some adaptability to environmental change, but training the deep-learning model requires a large amount of labeled data, which is relatively costly.
Disclosure of Invention
Embodiments of the invention aim to provide a method and a device for detecting surface water accumulation, so as to solve the problems in the prior art that surface-water detection is costly and easily affected by the external environment.
In order to solve the above problem, in a first aspect, the present invention provides a method for detecting surface water, including:
acquiring multi-frame laser point cloud information;
processing each frame of laser point cloud information to obtain a first grid map occupied by a point cloud cavity of the current frame point cloud in a first coordinate system; the point cloud cavity is the pattern exhibited by the laser radar point cloud when the water-covered road surface returns no echo or an abnormal echo;
acquiring positioning information of a vehicle;
determining a second grid map occupied by the point cloud cavity of the current frame point cloud under a second coordinate system according to the positioning information of the vehicle and the first grid map;
decomposing the second grid map to obtain a plurality of grid units;
tracking each grid unit, and performing coordinate conversion after tracking to obtain a third grid map in a first coordinate system;
converting the third grid map into a binary image; the binary image comprises one or more connected regions;
acquiring the outline of each connected region of the binary image; each connected region corresponds to a contour;
calculating a first water surface area corresponding to each contour;
when the first water surface area is not larger than a preset first threshold value, filtering out a corresponding outline;
and determining the position of the surface water accumulation in the first coordinate system according to the contours remaining after the corresponding contours are filtered out.
In a possible implementation manner, the performing tracking processing on each grid unit specifically includes:
superimposing the confidence of the grid unit in the current frame with its confidence in the previous frame; retaining the grid unit when the superimposed confidence is not less than a preset confidence threshold, and deleting the grid unit when the superimposed confidence is less than the preset confidence threshold.
In a possible implementation manner, the processing the laser point cloud information to obtain a first grid map occupied by a point cloud cavity of a current frame point cloud in a first coordinate system specifically includes:
extracting point cloud information of a first number of laser point clouds in a first direction of the vehicle;
extracting invalid points or abnormal points in the point cloud information of each ring in the first number of rings;
obtaining a starting point and an end point of a first cavity line segment of the point cloud cavity in each ring in a first coordinate system according to the invalid point or the abnormal point to obtain a first cavity line segment of each ring;
calculating the intersection points of a plurality of equally divided rays of the origin of the first coordinate system and the first cavity line segments of each ring;
when the intersection points exist, extracting two intersection points with the largest adjacent distance, and taking a line segment corresponding to the two intersection points with the largest adjacent distance as a second cavity line segment on the equal division ray;
and determining a first grid map according to the second hole line segment.
In a possible implementation manner, the acquiring the positioning information of the vehicle specifically includes:
and acquiring the positioning information of the vehicle through a global satellite navigation system on the vehicle.
In a possible implementation manner, the determining, according to the positioning information of the vehicle and the first grid map, a second grid map occupied by a current frame point cloud in a second coordinate system specifically includes:
performing coordinate conversion on the first grid map in the first coordinate system according to the positioning information of the vehicle to obtain one or more second grid maps occupied by the current frame point cloud in a second coordinate system; wherein, the first coordinate system is a vehicle coordinate system, and the second coordinate system is a global coordinate system.
In one possible implementation, after the above steps, the method further includes:
when at least two laser radars are arranged on the vehicle, the grid maps of the surface ponding corresponding to the laser radars are subjected to fusion processing, and the position information of the target surface ponding is obtained.
In a second aspect, the present invention provides a surface water detection apparatus, the apparatus comprising:
a preprocessing module, configured to acquire multi-frame laser point cloud information and to process each frame of laser point cloud information to obtain a first grid map occupied by a point cloud cavity of the current frame point cloud in a first coordinate system; the point cloud cavity is the pattern exhibited by the laser radar point cloud when the water-covered road surface returns no echo or an abnormal echo;
the tracking module is used for acquiring positioning information of the vehicle; determining a second grid map occupied by the point cloud cavity of the current frame point cloud under a second coordinate system according to the positioning information of the vehicle and the first grid map; decomposing the second grid map to obtain a plurality of grid units; tracking each grid unit, and performing coordinate conversion after tracking to obtain a third grid map in a first coordinate system;
a post-processing module for converting the third grid map into a binary image; the binary image comprises one or more connected regions; acquiring one or more contours of the binary image; each connected region corresponds to a contour; calculating a first water surface area corresponding to each contour; when the first water surface area is not larger than a preset first threshold value, filtering out a corresponding outline; and determining the position of the surface gathered water under the first coordinate system according to the residual contour after the corresponding contour is filtered.
In a third aspect, the present invention provides an apparatus comprising a memory for storing a program and a processor for performing the method according to any of the first aspect.
In a fourth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspect.
In a fifth aspect, the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
By applying the method and the device for detecting the surface water accumulation, the detection of the surface water accumulation can be realized by directly utilizing the existing perception sensor of the automatic driving vehicle, and a large amount of manual marking work is not needed.
Drawings
Fig. 1 is a schematic flow chart of a method for detecting surface water according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a laser radar point cloud cavity caused by a water accumulation road surface according to an embodiment of the invention;
fig. 3 is a schematic diagram of a first cavity line segment and a second cavity line segment according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a second void segment being transformed into a first grid map according to an embodiment of the present invention;
fig. 5A is a diagram of an actual scene of surface water provided in the first embodiment of the present invention;
fig. 5B is a diagram illustrating an effect of detecting surface water according to the first embodiment of the present invention;
fig. 6 is a schematic structural view of a surface water detection device provided in the second embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a method for detecting surface water accumulation according to a first embodiment of the present invention. The method is applied to an autonomous vehicle and is executed by the automatic driving Vehicle Control Unit (AVCU), which is the processor of the autonomous vehicle. As shown in Fig. 1, the method comprises the following steps:
step 101, obtaining multi-frame laser point cloud information.
Specifically, a laser radar is mounted on the autonomous vehicle. A water-covered road surface reflects the laser radar beams specularly (like a mirror), so that the laser radar receives either no return or an abnormal return, which produces a point cloud cavity. Fig. 2 is a schematic diagram of a laser radar point cloud cavity caused by a water-covered road surface according to an embodiment of the present invention.
Step 102, processing each frame of laser point cloud information to obtain a first grid map occupied by the point cloud cavity of the current frame point cloud in a first coordinate system; the point cloud cavity is the pattern exhibited by the laser radar point cloud when the water-covered road surface returns no echo or an abnormal echo.
Specifically, step 102 includes the following steps:
firstly, point cloud information of a first number of laser point clouds in a first direction of a vehicle is taken; then, extracting invalid points or abnormal points in the point cloud information of each ring in the first number of rings; then, according to the invalid points or the abnormal points, obtaining a starting point and an end point of a first cavity line segment of the point cloud cavity in each ring in a first coordinate system, and obtaining a first cavity line segment of each ring; (ii) a Then, calculating the intersection points of a plurality of equal division lines of the origin of the first coordinate system and the first cavity line segments of each ring; then, when the intersection points exist, extracting two intersection points with the maximum adjacent distance, and taking the line segment corresponding to the two intersection points with the maximum adjacent distance as a second cavity line segment on the bisector; and finally, determining the first grid map according to the second cavity line segment.
The grid map is defined in the first coordinate system and comprises a plurality of grids divided at a preset interval; for example, the grids may be divided at intervals of 0.1 meter. The first number may be 4, and the first direction may be the forward direction of the vehicle. Each ring may contain zero or more first cavity line segments, which is not limited in the present application.
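For illustration only, and not as part of the claimed method, the following Python sketch shows one way to obtain the start and end points of a ring's first cavity line segment as the longest run of invalid or abnormal returns; the function name, the layout of ring_points and the endpoint convention (bounding the run by its nearest valid neighbours) are assumptions.

```python
def first_cavity_segment(ring_points):
    """Return the start and end points (in the vehicle frame) of the longest run
    of invalid/abnormal returns in one ring, i.e. the first cavity line segment.
    ring_points: list of (x, y, valid) tuples ordered by azimuth; valid is False
    for a missing or abnormal echo."""
    best = None            # (start index, end index) of the longest invalid run
    best_len = 0
    run_start = None
    run_len = 0
    for i, (_, _, valid) in enumerate(ring_points):
        if not valid:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_len = run_len
                best = (run_start, i)
        else:
            run_len = 0
    if best is None:
        return None        # this ring contains no cavity
    s, e = best
    # Bound the cavity by the nearest valid neighbours (clamped at the ring ends).
    p_start = ring_points[max(s - 1, 0)][:2]
    p_end = ring_points[min(e + 1, len(ring_points) - 1)][:2]
    return p_start, p_end
```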
This step is described in detail below with reference to fig. 3 and 4.
The main execution body of this step may be a preprocessing module. It first extracts the point cloud information of the 4 lowest beams (closest to the ground) among the vertically arranged laser radar beams in the forward direction of the vehicle, extracts the invalid or abnormal points in each beam, and obtains the longest cavity segment, i.e. the positions of the start point and end point of the first cavity line segment (this is illustrative, not limiting); the first cavity line segments correspond to the 3 solid line segments in Fig. 3. Then, the intersections between the rays passing through the origin of the vehicle coordinate system and the first cavity line segments are calculated, such as the intersection points A, B and C shown in Fig. 4. Finally, for each ray that has intersections, the two adjacent intersections with the largest distance between them are extracted and taken as the second cavity line segment on that ray; the line segment formed by points A and C in Fig. 4 is such a second cavity line segment.
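As a minimal numerical sketch of this step (not taken from the patent text), the intersection of a ray through the origin with a first cavity line segment can be computed parametrically, and the pair of intersections kept per ray as described above; the helper names, the tolerance, and the interpretation of "largest adjacent distance" as the farthest-apart pair of consecutive intersections along the ray are assumptions.

```python
import math

def ray_segment_intersection(theta, p1, p2):
    """Intersection of the ray from the origin with heading angle theta (rad)
    and the segment p1 -> p2; returns the intersection point or None."""
    dx, dy = math.cos(theta), math.sin(theta)
    (x1, y1), (x2, y2) = p1, p2
    sx, sy = x2 - x1, y2 - y1
    det = sx * dy - sy * dx
    if abs(det) < 1e-9:                      # ray and segment are parallel
        return None
    t = (sx * y1 - sy * x1) / det            # distance along the ray
    u = (dx * y1 - dy * x1) / det            # position along the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return (t * dx, t * dy)
    return None

def second_cavity_segment(theta, first_segments):
    """Among all intersections on this ray, keep the two adjacent ones that are
    farthest apart (e.g. points A and C in Fig. 4)."""
    hits = [pt for seg in first_segments
            if (pt := ray_segment_intersection(theta, *seg)) is not None]
    if len(hits) < 2:
        return None
    hits.sort(key=lambda p: math.hypot(*p))  # order the hits along the ray
    pairs = zip(hits, hits[1:])
    return max(pairs, key=lambda ab: math.dist(ab[0], ab[1]))
```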
As shown in Fig. 4, the second cavity line segments on all the rays that satisfy the condition are converted into a first grid map occupied by the water surface.
In the first coordinate system, the area in front of the vehicle is equally divided into a plurality of adjacent grid cells, and the grid cells occupied by the line segment AC are then calculated. Finally, the first grid map occupied by the water surface in the current frame is output.
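Purely as an illustrative sketch (the grid resolution, the ranges of the grid map and the sampling strategy are assumptions, not values specified by the patent), a second cavity line segment such as AC can be converted into occupied cells of the first grid map as follows:

```python
import numpy as np

def rasterize_segment(start, end, grid_size=0.1, x_range=(0.0, 20.0),
                      y_range=(-10.0, 10.0)):
    """Mark the cells of the first grid map crossed by a second cavity segment
    (e.g. segment AC).  Resolution and ranges are illustrative only."""
    nx = int((x_range[1] - x_range[0]) / grid_size)
    ny = int((y_range[1] - y_range[0]) / grid_size)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    length = np.hypot(end[0] - start[0], end[1] - start[1])
    n_samples = max(2, int(length / (grid_size * 0.5)))  # oversample the segment
    for t in np.linspace(0.0, 1.0, n_samples):
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        ix = int((x - x_range[0]) / grid_size)
        iy = int((y - y_range[0]) / grid_size)
        if 0 <= ix < nx and 0 <= iy < ny:
            grid[ix, iy] = 1      # cell occupied by the water-surface cavity
    return grid
```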
Step 103, acquiring the positioning information of the vehicle.
Specifically, the positioning information of the vehicle may be acquired by a Global Navigation Satellite System (GNSS) on the vehicle. The positioning information comprises position information of the vehicle under a global coordinate system, and the position information comprises longitude and latitude information.
Step 104, determining a second grid map occupied by the point cloud hole of the current frame point cloud under a second coordinate system according to the positioning information of the vehicle and the first grid map.
The main body of this step may be a tracking module. Because the first grid map output by the preprocessing module is expressed in the first coordinate system, i.e. the ego-vehicle coordinate system, which is a moving coordinate system, static grid cells cannot be tracked in it directly. Therefore, after the water-surface occupancy grid has been detected from a single frame of point cloud information, the tracking module converts the first coordinate system into the second coordinate system and tracks each grid unit of the second grid map in the second coordinate system, which filters out part of the detection noise and keeps the output grid stable. In short, the tracking module uses the positioning information of the vehicle to convert the water-surface occupancy grid detected in the ego-vehicle coordinate system into the global coordinate system, and then performs the tracking of static grid cells in this static coordinate system.
Specifically, according to the positioning information of the vehicle, coordinate conversion is carried out on a first grid map in a first coordinate system, and one or more second grid maps occupied by the current frame of the ponding water surface in a second coordinate system are obtained. Wherein, the first coordinate system is a vehicle coordinate system, and the second coordinate system is a global coordinate system.
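Purely as an illustration (the pose representation and the conversion of GNSS latitude/longitude into a local planar frame such as UTM are assumed here and are not specified by the patent), an occupied cell of the first grid map can be converted from the vehicle coordinate system into the global coordinate system with a 2-D rigid transform:

```python
import math

def vehicle_cell_to_global(ix, iy, pose, grid_size=0.1,
                           x_range=(0.0, 20.0), y_range=(-10.0, 10.0)):
    """Convert an occupied cell of the first grid map (vehicle frame) into
    global coordinates, given the vehicle pose (x, y, yaw) obtained from the
    positioning information."""
    # Center of the cell in the vehicle coordinate system
    xv = x_range[0] + (ix + 0.5) * grid_size
    yv = y_range[0] + (iy + 0.5) * grid_size
    x0, y0, yaw = pose
    # 2-D rigid transform: rotate by the vehicle heading, then translate
    xg = x0 + xv * math.cos(yaw) - yv * math.sin(yaw)
    yg = y0 + xv * math.sin(yaw) + yv * math.cos(yaw)
    return xg, yg
```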
Step 105, decomposing the second grid map to obtain a plurality of grid units.
Specifically, the grid unit here may be a minimum grid unit.
Step 106, tracking each grid unit, and performing coordinate conversion after tracking to obtain a third grid map in the first coordinate system.
The tracking processing mainly comprises the generation of a new grid cell, the state updating of a tracked grid cell and the deletion of a failed grid cell.
Specifically, in one example, the confidence of the grid unit in the current frame is superimposed with its confidence in the previous frame; the grid unit is retained when the superimposed confidence is not less than a preset confidence threshold, and deleted when the superimposed confidence is less than the preset confidence threshold.
The confidence level may be set according to whether echo data exists in the grid cell or whether the echo data is abnormal. For anomalies and no echo data present, the corresponding confidence is low, and vice versa.
And after tracking processing, obtaining a new grid map, and after coordinate conversion is carried out on the new grid map, obtaining a third grid map in the first coordinate system.
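A minimal sketch of this tracking rule is given below; the exact superposition formula (a weighted combination is used here), the weights and the threshold value are assumptions, since the patent only requires superimposing the current-frame and previous-frame confidences and thresholding the result.

```python
def update_tracked_cells(prev_conf, curr_conf, conf_threshold=0.6):
    """Track grid cells in the global frame: the confidence of each cell in the
    current frame is superimposed on its confidence in the previous frame;
    cells whose superimposed confidence falls below the threshold are deleted.
    Both inputs map (i, j) cell indices to confidence values in [0, 1]."""
    tracked = {}
    for cell in set(prev_conf) | set(curr_conf):
        fused = 0.5 * prev_conf.get(cell, 0.0) + 0.5 * curr_conf.get(cell, 0.0)
        if fused >= conf_threshold:
            tracked[cell] = fused      # retain the cell
        # otherwise the cell is deleted, filtering out detection noise
    return tracked
```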
Step 107, converting the third grid map into a binary image.
The main body of this step may be a post-processing module, which converts the third grid map into a binary image using a standard algorithm. A binary image is an image in which every pixel is either black or white, with no intermediate gray levels. Binary images are commonly used to describe text or graphics; they occupy little space and can describe contours, although they cannot represent fine detail. The converted binary image comprises one or more connected regions, where a connected region is an image region formed by adjacent foreground pixels having the same pixel value.
Step 108, acquiring the contour of each connected region of the binary image.
Wherein each connected region corresponds to a contour. The skilled person can extract the contour of the binary image by using the existing technology, which is not limited in this application.
Step 109, calculating the first water surface area corresponding to each contour.
Step 110, when the first water surface area is not larger than the preset first threshold, filtering out the corresponding contour.
Step 111, determining the position of the surface water accumulation in the first coordinate system according to the contours remaining after filtering.
Specifically, when the detected first water surface area is larger than a value determined by the trafficability of the autonomous vehicle (for example, 4 m²), the method outputs a corresponding water-surface detection result.
After filtering out the contours whose first water surface area is not larger than the preset first threshold, the remaining contours are taken as target contours, the grid cells occupied by the target contours are determined, and the occupied grid cells are taken as the position of the surface water accumulation. Fig. 5A and Fig. 5B show an actual surface-water scene and the corresponding detection result, respectively.
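For illustration, the post-processing of steps 107 to 111 could be sketched as follows, assuming the third grid map is available as a NumPy array and using OpenCV for contour extraction (the patent does not prescribe a particular library; the grid resolution and the 4 m² threshold follow the example above):

```python
import cv2
import numpy as np

def detect_water_contours(grid, grid_size=0.1, min_area_m2=4.0):
    """Convert the third grid map to a binary image, extract the contour of
    each connected region, and keep only contours whose water-surface area
    exceeds the threshold."""
    binary = (grid > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for contour in contours:
        area_cells = cv2.contourArea(contour)          # approximate area in cells
        area_m2 = area_cells * grid_size * grid_size   # convert to square meters
        if area_m2 > min_area_m2:
            kept.append(contour)                       # remaining (target) contour
    return kept
```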
Further, when the vehicle is equipped with no fewer than two laser radars and each laser radar has produced a surface-water detection image, the two surface-water detection grid maps can be fused to obtain the final position information of the surface water accumulation.
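A trivial sketch of such a fusion is shown below; the patent does not specify the fusion rule, so a cell-wise union of two grids expressed in the same coordinate system and at the same resolution is assumed here.

```python
import numpy as np

def fuse_water_grids(grid_a, grid_b):
    """Fuse the surface-water grid maps produced from two laser radars by
    taking the cell-wise union of their occupied cells."""
    return np.logical_or(grid_a > 0, grid_b > 0).astype(np.uint8)
```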
It should be understood that the type of laser radar is not limited in this application. In this application, the 4 lowest laser beams at the front of the vehicle are selected for water-accumulation detection, but the method is not limited to this configuration. Likewise, the application is not limited to using equally divided rays through the origin; non-equally divided rays may also be used, as long as the rays pass through the origin of the host vehicle coordinate system.
By applying the method for detecting the surface water accumulation provided by the embodiment of the invention, the detection can be realized by directly utilizing the existing perception sensor of the automatic driving vehicle, and the method can realize an effective water accumulation detection function without a large amount of manual marking work.
Fig. 6 is a schematic structural diagram of a surface water accumulation detection device provided in a second embodiment of the present invention. The device can be used in the surface water detection method described above. As shown in Fig. 6, the device includes: a preprocessing module 601, a tracking module 602 and a post-processing module 603.
The preprocessing module 601 is configured to acquire multi-frame laser point cloud information and to process each frame of laser point cloud information to obtain a first grid map occupied by the point cloud cavity of the current frame point cloud in a first coordinate system; the point cloud cavity is the pattern exhibited by the laser radar point cloud when the water-covered road surface returns no echo or an abnormal echo;
the tracking module 602 is configured to obtain positioning information of a vehicle; determining a second grid map occupied by the point cloud cavity of the current frame point cloud under a second coordinate system according to the positioning information of the vehicle and the first grid map; decomposing the second grid map to obtain a plurality of grid units; tracking each grid unit, and performing coordinate conversion after tracking to obtain a third grid map under the first coordinate system;
the post-processing module 603 is configured to convert the third grid map into a binary image; the binary image comprises one or more connected regions; acquiring one or more contours of a binary image; each connected region corresponds to a contour; calculating a first water surface area corresponding to each contour; when the area of the first water surface is not larger than a preset first threshold value, filtering out a corresponding outline; and determining the position of the surface gathered water under the first coordinate system according to the residual contour after the corresponding contour is filtered.
The specific actions of the modules are the same as those described in the first embodiment, and are not described again here.
By applying the road surface accumulated water detection device provided by the second embodiment of the invention, the detection can be realized by directly utilizing the existing perception sensor of the automatic driving vehicle, and the device can realize the effective accumulated water detection function without a large amount of manual marking work.
The third embodiment of the invention provides equipment, which comprises a memory and a processor, wherein the memory is used for storing programs, and the memory can be connected with the processor through a bus. The memory may be a non-volatile memory such as a hard disk drive and a flash memory, in which a software program and a device driver are stored. The software program is capable of performing various functions of the above-described methods provided by embodiments of the present invention; the device drivers may be network and interface drivers. The processor is used for executing a software program, and the software program can realize the method provided by the first embodiment of the invention when being executed.
A fourth embodiment of the present invention provides a computer program product including instructions, which, when the computer program product runs on a computer, causes the computer to execute the method provided in the first embodiment of the present invention.
The fifth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the first embodiment of the present invention is implemented.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for detecting surface water, characterized in that the method comprises:
acquiring multi-frame laser point cloud information;
processing each frame of laser point cloud information to obtain a first grid map occupied by a point cloud cavity of a current frame point cloud in a first coordinate system; the point cloud cavity is a graph expressed by the laser radar point cloud when no echo or abnormal echo exists on the surface of the water accumulation road;
acquiring positioning information of a vehicle;
determining a second grid map occupied by the point cloud cavity of the current frame point cloud under a second coordinate system according to the positioning information of the vehicle and the first grid map;
decomposing the second grid map to obtain a plurality of grid units;
tracking each grid unit, and performing coordinate conversion after tracking to obtain a third grid map in a first coordinate system;
converting the third grid map into a binary image; the binary image comprises one or more connected regions;
acquiring the outline of each connected region of the binary image; each connected region corresponds to a contour;
calculating a first water surface area corresponding to each contour;
when the first water surface area is not larger than a preset first threshold value, filtering out a corresponding outline;
and determining the position of the surface gathered water under the first coordinate system according to the residual contour after the corresponding contour is filtered.
2. The method according to claim 1, wherein the tracking processing for each grid cell specifically includes:
superposing the grid unit on the confidence coefficient of the current frame and the confidence coefficient of the previous frame, and reserving the grid unit when the superposed confidence coefficient is not less than a preset confidence coefficient threshold value; and deleting the grid unit when the superimposed confidence is smaller than a preset confidence threshold.
3. The method of claim 1, wherein the processing the laser point cloud information to obtain a first grid map occupied by a point cloud hole of a current frame point cloud in a first coordinate system comprises:
extracting point cloud information of a first number of laser point clouds in a first direction of the vehicle;
extracting invalid points or abnormal points in the point cloud information of each ring in the first number of rings;
obtaining a starting point and an end point of a first cavity line segment of the point cloud cavity in each ring in a first coordinate system according to the invalid point or the abnormal point to obtain a first cavity line segment of each ring;
calculating the intersection points of a plurality of equally divided rays of the origin of the first coordinate system and the first cavity line segments of each ring;
when the intersection points exist, extracting two intersection points with the largest adjacent distance, and taking a line segment corresponding to the two intersection points with the largest adjacent distance as a second cavity line segment on the equal division ray;
and determining a first grid map according to the second hole line segment.
4. The method according to claim 1, wherein the obtaining of the positioning information of the vehicle specifically comprises:
and acquiring the positioning information of the vehicle through a global satellite navigation system on the vehicle.
5. The method of claim 1, wherein determining a second grid map occupied by a current frame point cloud in a second coordinate system according to the positioning information of the vehicle and the first grid map comprises:
performing coordinate conversion on the first grid map in the first coordinate system according to the positioning information of the vehicle to obtain one or more second grid maps occupied by the current frame point cloud in a second coordinate system; wherein, the first coordinate system is a vehicle coordinate system, and the second coordinate system is a global coordinate system.
6. The method of claim 1, further comprising:
when at least two laser radars are arranged on the vehicle, the grid maps of the surface ponding corresponding to the laser radars are subjected to fusion processing, and the position information of the target surface ponding is obtained.
7. A surface water accumulation detection device, characterized in that the device comprises:
a preprocessing module, configured to acquire multi-frame laser point cloud information and to process each frame of laser point cloud information to obtain a first grid map occupied by a point cloud cavity of the current frame point cloud in a first coordinate system; the point cloud cavity is the pattern exhibited by the laser radar point cloud when the water-covered road surface returns no echo or an abnormal echo;
the tracking module is used for acquiring positioning information of the vehicle; determining a second grid map occupied by the point cloud cavity of the current frame point cloud under a second coordinate system according to the positioning information of the vehicle and the first grid map; decomposing the second grid map to obtain a plurality of grid units; tracking each grid unit, and performing coordinate conversion after tracking to obtain a third grid map in a first coordinate system;
a post-processing module for converting the third grid map into a binary image; the binary image comprises one or more connected regions; acquiring one or more contours of the binary image; each connected region corresponds to a contour; calculating a first water surface area corresponding to each contour; when the first water surface area is not larger than a preset first threshold value, filtering out a corresponding outline; and determining the position of the surface gathered water under the first coordinate system according to the residual contour after the corresponding contour is filtered.
8. An apparatus, comprising a memory for storing a program and a processor for performing the method of any of claims 1-6.
9. A computer program product comprising instructions for causing a computer to perform the method according to any one of claims 1 to 6 when the computer program product is run on the computer.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201910577870.8A 2019-06-28 2019-06-28 Method and device for detecting surface water accumulation Active CN110298311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910577870.8A CN110298311B (en) 2019-06-28 2019-06-28 Method and device for detecting surface water accumulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910577870.8A CN110298311B (en) 2019-06-28 2019-06-28 Method and device for detecting surface water accumulation

Publications (2)

Publication Number Publication Date
CN110298311A CN110298311A (en) 2019-10-01
CN110298311B (en) 2021-05-07

Family

ID=68029357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910577870.8A Active CN110298311B (en) 2019-06-28 2019-06-28 Method and device for detecting surface water accumulation

Country Status (1)

Country Link
CN (1) CN110298311B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112585656B (en) * 2020-02-25 2022-06-17 华为技术有限公司 Method and device for identifying special road conditions, electronic equipment and storage medium
CN111461982B (en) * 2020-03-30 2023-09-22 北京百度网讯科技有限公司 Method and apparatus for splice point cloud
CN112666553B (en) * 2020-12-16 2023-04-18 动联(山东)电子科技有限公司 Road ponding identification method and equipment based on millimeter wave radar
CN114814796B (en) * 2022-07-01 2022-09-30 陕西欧卡电子智能科技有限公司 Method, device and equipment for extracting water surface travelable area based on high-precision map
CN116311095B (en) * 2023-03-16 2024-01-02 广州市衡正工程质量检测有限公司 Pavement detection method based on region division, computer equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050825B (en) * 2013-03-13 2017-09-15 厦门歌乐电子企业有限公司 It is equipped on terminal installation, vehicle and the based reminding method on puddle road surface on vehicle
US9453941B2 (en) * 2014-12-22 2016-09-27 GM Global Technology Operations LLC Road surface reflectivity detection by lidar sensor
US9495764B1 (en) * 2016-03-21 2016-11-15 URC Ventures, Inc. Verifying object measurements determined from mobile device images
CN106325132A (en) * 2016-09-23 2017-01-11 常州大学怀德学院 Road waterlogging monitoring system
CN107092803B (en) * 2017-05-12 2020-07-07 长安大学 Road ponding area identification method based on three-dimensional line laser technology
CN108664715B (en) * 2018-04-26 2022-03-29 长安大学 Three-dimensional evaluation and driving safety analysis method for accumulated water ruts on road surface
CN109670404B (en) * 2018-11-23 2023-07-11 江苏理工学院 Road ponding image detection early warning method based on hybrid model
US10816993B1 (en) * 2019-11-23 2020-10-27 Ha Q Tran Smart vehicle

Also Published As

Publication number Publication date
CN110298311A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110298311B (en) Method and device for detecting surface water accumulation
US11709058B2 (en) Path planning method and device and mobile device
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
CN111551958B (en) Mining area unmanned high-precision map manufacturing method
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
CN114930401A (en) Point cloud-based three-dimensional reconstruction method and device and computer equipment
CN114485698B (en) Intersection guide line generation method and system
CN115273039B (en) Small obstacle detection method based on camera
CN113238251A (en) Target-level semantic positioning method based on vehicle-mounted laser radar
CN105787445A (en) Method and system for automatically extracting rod-shaped objects in vehicular laser scanning data
CN114155720B (en) Vehicle detection and track prediction method for roadside laser radar
CN112001272A (en) Laser radar environment sensing method and system based on deep learning
CN113479191B (en) Lane-line-free lane boundary detection system and method for parking and vehicle
CN116523970A (en) Dynamic three-dimensional target tracking method and device based on secondary implicit matching
Eraqi et al. Static free space detection with laser scanner using occupancy grid maps
CN113624223B (en) Indoor parking lot map construction method and device
CN115618602A (en) Lane-level scene simulation method and system
CN111380529A (en) Mobile equipment positioning method, device and system and mobile equipment
CN115457505A (en) Small obstacle detection method, device and equipment for camera and storage medium
CN110660113A (en) Method and device for establishing characteristic map, acquisition equipment and storage medium
CN111338336B (en) Automatic driving method and device
US11544899B2 (en) System and method for generating terrain maps
CN112747757A (en) Method and device for providing radar data, computer program and computer-readable storage medium
US11810459B1 (en) Vehicle localization based on radar detections in garages
CN117685954B (en) Multi-mode semantic map construction system and method for mining area

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee before: Beijing Idriverplus Technology Co.,Ltd.

CP01 Change in the name or title of a patent holder