CN112464812A - Vehicle-based sunken obstacle detection method

Vehicle-based sunken obstacle detection method

Info

Publication number
CN112464812A
CN112464812A
Authority
CN
China
Prior art keywords
contour
obstacle
vehicle
image
concave
Prior art date
Legal status
Granted
Application number
CN202011355147.4A
Other languages
Chinese (zh)
Other versions
CN112464812B (en)
Inventor
胡劲文
赵春晖
徐钊
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011355147.4A priority Critical patent/CN112464812B/en
Publication of CN112464812A publication Critical patent/CN112464812A/en
Application granted granted Critical
Publication of CN112464812B publication Critical patent/CN112464812B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vehicle-based detection method for sunken obstacles, which comprises: acquiring an image within the vehicle detection range through camera equipment while acquiring point cloud data within the vehicle detection range through a laser radar; extracting a first contour of a concave obstacle in the image based on a YOLO detection method, and extracting a second contour of the concave obstacle in the point cloud data; calibrating the first contour and the second contour to obtain a projection image of the calibrated concave obstacle; and calculating the intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and generating the contour of the concave obstacle on the projection image when the intersection ratio is greater than or equal to a threshold. By drawing on data from different sources the method improves detection accuracy; calibrating the two contours into the same projection image and generating the final contour of the concave obstacle with the intersection-over-union method improves the accuracy of the recognition result.

Description

Vehicle-based sunken obstacle detection method
Technical Field
The invention belongs to the technical field of unmanned vehicle obstacle sensing, and particularly relates to a vehicle-based sunken obstacle detection method.
Background
An unmanned vehicle, also called a wheeled mobile robot, relies mainly on an in-vehicle intelligent driving system centered on a computer to achieve driverless operation. Unmanned vehicles have been successfully applied in military, civilian and other fields.
While moving, an unmanned vehicle senses its surroundings and localizes itself from the information obtained by its sensors, and plans paths and makes control decisions according to the task objective. It cannot navigate safely and reliably without a good obstacle detection system.
In a complicated field environment there are many kinds of obstacles, such as pits, rocks, tree trunks, steep slopes and potholes; these obstacles and their surrounding areas are difficult for unmanned vehicles to pass safely and are therefore defined as impassable areas. Obstacles vary in size and shape; some protrude from the road surface while others lie below it, and some sit among weeds or low shrubs and are hard to distinguish. Current research takes convex obstacles as its benchmark, so when a concave obstacle is encountered, the part below the road surface is hard to identify accurately, leading to control decision errors.
Disclosure of Invention
The invention aims to provide a vehicle-based sunken obstacle detection method to accurately identify sunken obstacles.
The invention adopts the following technical scheme: a vehicle-based recessed obstacle detection method comprises the following steps:
acquiring an image in a vehicle detection range through camera equipment, and acquiring point cloud data in the vehicle detection range through a laser radar;
extracting a first contour of a concave obstacle in the image based on a YOLO detection method, and extracting a second contour of the concave obstacle in the point cloud data;
calibrating the first contour and the second contour to obtain a projected image of the calibrated concave obstacle;
and calculating an intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and generating the contour of the concave obstacle on the projection image when the intersection ratio is larger than or equal to a threshold value.
Further, extracting a second contour of a concave obstacle in the point cloud data comprises:
filling invalid points by adopting a linear interpolation method to obtain point cloud data with the invalid points removed;
calculating, in each line of the point cloud data after invalid-point removal, the distance between the current point and its backward-adjacent point, and marking the current point as a contour point when the distance is greater than a distance threshold;
and traversing the point cloud data to obtain a forward contour point set serving as a second contour.
Further, obtaining the forward contour point set further includes:
reversely traversing the point cloud data to obtain a reverse contour point set;
and performing an AND operation on the forward contour point set and the reverse contour point set, and taking the contour point set after the AND operation as a second contour.
Further, calibrating the first contour and the second contour includes:
acquiring a first 3D coordinate of a calibration plate vertex in an image coordinate system and a second 3D coordinate of the calibration plate vertex in a laser radar coordinate system;
and generating a rotation and translation matrix according to the first 3D coordinate and the second 3D coordinate, and completing the calibration of the first contour and the second contour through the rotation and translation matrix.
Further, generating a contour of the concave obstacle on the projection image includes:
generating a contour union set according to the first contour and the second contour;
selecting extreme points of the contour union set;
and generating the contour of the concave obstacle according to the extreme points.
Further, when the intersection ratio is less than the threshold:
the second contour is taken as the contour of the concave obstacle.
Further, the first contour, the second contour and the contour of the concave obstacle are all rectangular.
The other technical scheme of the invention is as follows: a vehicle-based recessed barrier detection apparatus comprising:
the detection module is used for acquiring images in a vehicle detection range through the camera equipment and acquiring point cloud data in the vehicle detection range through the laser radar;
the extraction module is used for extracting a first contour of a concave obstacle in the image based on a YOLO detection method and extracting a second contour of the concave obstacle in the point cloud data;
the calibration module is used for calibrating the first contour and the second contour to obtain a projected image of the calibrated concave obstacle;
and the calculation generation module is used for calculating the intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and when the intersection ratio is larger than or equal to a threshold value, the contour of the concave obstacle is generated on the projection image.
The other technical scheme of the invention is as follows: a vehicle-based concave obstacle detection device comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein when the processor executes the computer program, any one of the vehicle-based concave obstacle detection methods is realized.
The other technical scheme of the invention is as follows: a computer readable storage medium storing a computer program which, when executed by a processor, implements any of the above-described vehicle-based recessed obstacle detection methods.
The invention has the beneficial effects that: the method acquires the image and the point cloud data within the vehicle detection range simultaneously and extracts the contour of the concave obstacle from each; data from different sources improve detection accuracy, and calibrating the two contours into the same projection image and generating the final contour of the concave obstacle with the intersection-over-union method improves the accuracy of the recognition result.
Drawings
FIG. 1 is a schematic view of a scene of a concave obstacle with rectangular pits in an embodiment of the present invention;
FIG. 2 is a flowchart of a method for detecting recessed obstacles based on a vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram showing the shape of a fan-shaped laser beam falling on the ground on each line of a laser radar according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a shape scanned by a single line of a laser radar when a vehicle body tilts left and right in the embodiment of the invention;
FIG. 5 is a schematic view of the linear camera model in an embodiment of the invention;
FIG. 6 is a schematic diagram of a calibration apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of a vehicle-based recessed barrier detection apparatus according to another embodiment of the present invention;
fig. 8 is a schematic diagram of a vehicle-based recessed obstacle detection apparatus according to another embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
When an unmanned vehicle works in the field it encounters various obstacles, which threaten its safe driving and greatly limit its traveling speed. To enable safe driving in complex environments, the invention therefore provides a multi-sensor fusion detection method for sunken obstacles based on a ground unmanned vehicle, giving the vehicle the ability to accurately detect, identify and locate such obstacles. This fills a gap in ground-unmanned-vehicle perception, improves existing perception system designs, equips autonomous unmanned vehicles for complex field environments, and is of some significance for the development of field operation systems.
In particular, with respect to the geometrical features of the concave obstacle, in the present invention a concave obstacle refers to an obstacle below the ground. Common concave obstacles are soil pits, water pits, ground holes, and the like. The shape of the concave obstacle includes a circle, an ellipse, a rectangle, and the like. In the embodiment of the present invention, the concave obstacle is a rectangular concave pit, and the shape of the concave obstacle is as shown in fig. 1.
The invention discloses a vehicle-based sunken obstacle detection method, which comprises the following steps:
S110, acquiring an image within the vehicle detection range through a camera device while acquiring point cloud data within the vehicle detection range through a laser radar;
S120, extracting a first contour of a concave obstacle in the image based on a YOLO detection method, and extracting a second contour of the concave obstacle in the point cloud data;
S130, calibrating the first contour and the second contour to obtain a projection image of the calibrated concave obstacle;
and S140, calculating the intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and generating the contour of the concave obstacle on the projection image when the intersection ratio is greater than or equal to a threshold.
By acquiring the image and the point cloud data simultaneously and extracting the contour of the concave obstacle from each, the method improves detection accuracy through data from different sources; calibrating the two contours into the same projection image and generating the final contour with the intersection-over-union method improves the accuracy of the recognition result.
The invention detects concave obstacles by combining a camera and a laser radar. Samples of concave obstacles are first collected for training, an evaluation criterion is designed, and the trained weights are then used to detect concave obstacles against that criterion.
When acquiring images, the image capture equipment may be a camera, a mobile phone, a video camera or any other device with a photographing function. The camera part studies a concave obstacle detection method based on deep vision. Existing machine-vision algorithms for concave obstacle detection suffer from low detection speed; to address this, the embodiment adopts a real-time detection method based on the YOLO algorithm. The basic YOLO detection model consists of convolutional layers, pooling layers and fully connected layers; it is robust and completes the concave obstacle detection task quickly and accurately.
In the training process, mini-batch gradient descent with momentum is adopted, because momentum helps the training converge. Using the derivative of the YOLO loss function, the parameters are continuously updated by back propagation until the loss converges. In the testing process, the IOU value between the detected bounding box and the ground-truth box is calculated (IOU is short for Intersection over Union, a standard for measuring how accurately corresponding objects are detected in a given data set). A detection with IOU ≥ 0.5 is counted as a true positive; one with IOU < 0.5 as a false positive; and one with IOU = 0 as a missed detection. Finally, precision, recall and related metrics are computed.
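For reference, the mini-batch gradient descent with momentum update described here can be written as (a standard formulation, not reproduced from the patent itself):

$$v_{t+1} = \mu v_t - \eta \nabla_{\theta} L(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}$$

where θ denotes the network weights, η the learning rate, μ the momentum coefficient, and L the YOLO loss.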
For the lidar part, this embodiment proposes a concave obstacle detection algorithm based on geometric features. Lidar points falling on the ground and lidar points falling inside a concave obstacle differ clearly in geometry: points falling inside the obstacle lie farther from the lidar. Using this geometric feature, the lidar points falling inside the concave obstacle are separated from the point cloud, and the position and size of the obstacle are then obtained by Euclidean clustering and segmentation.
In this embodiment, the method for detecting a concave obstacle based on a laser radar specifically includes:
if the laser radar used is a 32-line laser radar, the laser radar is installed on the roof of the unmanned vehicle, and each laser scanning beam adopts radial circumferential scanning. At the concave obstacle, the point cloud data returned by a single laser scanning beam presents the local convexity in the width direction, the return values of a plurality of laser scanning beams present the phenomenon of jump, and the wider the concave obstacle is, the more obvious the jump of the distance return value is, namely the more obvious the geometrical characteristics of the concave obstacle area are.
Specifically, the fan-shaped laser beam of each lidar line falls on the ground as a circle, as shown in fig. 3; only the semicircle in front of the vehicle is shown. The rectangle PQBC is a concave obstacle, and the distance from any point M on the circumference of circle O to the center O is d_OM, i.e., the radius of the circle. When the laser beam scans the concave obstacle, however, a laser point falling into the obstacle lands on its back wall, and the distance from that radar point N to the center O becomes d_ON; obviously d_ON > d_OM.
Based on this model, the characteristic points of all concave obstacle areas can be found by traversing the point cloud data of each lidar line. Euclidean clustering of these characteristic points then yields the specific direction and size of the concave obstacle.
The method comprises the following specific steps:
(1) An m × n matrix M is first created to store the point cloud, with the points sorted for later processing. Here m is the number of lidar lines and n is determined by the horizontal resolution of the lidar; for example, a lidar with a horizontal resolution of 0.2° divides [−90°, 90°] into 900 parts, so n = 900.
For each three-dimensional point p_i = (x, y, z), the vertical angle ω and the horizontal angle φ are obtained from

$$\omega = \arctan\left(\frac{z}{\sqrt{x^2 + y^2}}\right), \qquad \varphi = \arctan\left(\frac{y}{x}\right).$$

By looking ω up in the lidar's vertical-angle table (commonly given in the lidar's factory specification), the channel to which each point belongs, and hence its m value, can be determined; the corresponding n value is found by locating the horizontal-angle interval closest to φ. This fills the entire matrix M. The matrix may still contain invalid points, so a linear interpolation method is used to fill them.
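A minimal numpy sketch of step (1) follows, assuming a per-channel vertical-angle table from the lidar datasheet; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def build_range_matrix(points, vertical_angles, h_res_deg=0.2,
                       h_min_deg=-90.0, h_max_deg=90.0):
    """Sort raw lidar points (N x 3 array) into an m x n range matrix M.

    vertical_angles: per-channel elevation angles (degrees) from the lidar
    datasheet, e.g. 32 values for a 32-line unit. Points outside the
    horizontal field of view are dropped; empty cells stay NaN and are
    filled by row-wise linear interpolation, as described above.
    """
    m = len(vertical_angles)
    n = int(round((h_max_deg - h_min_deg) / h_res_deg))
    M = np.full((m, n), np.nan)

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    omega = np.degrees(np.arctan2(z, np.hypot(x, y)))   # vertical angle
    phi = np.degrees(np.arctan2(y, x))                  # horizontal angle

    # nearest channel row for each point, column from the horizontal angle
    rows = np.abs(omega[:, None] - np.asarray(vertical_angles)).argmin(axis=1)
    cols = ((phi - h_min_deg) / h_res_deg).astype(int)

    keep = (cols >= 0) & (cols < n)                     # inside the FOV
    M[rows[keep], cols[keep]] = np.linalg.norm(points[keep], axis=1)

    for row in M:                                       # fill invalid cells
        bad = np.isnan(row)
        if bad.any() and (~bad).any():
            row[bad] = np.interp(np.flatnonzero(bad),
                                 np.flatnonzero(~bad), row[~bad])
    return M
```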
(2) Then, for every pair of adjacent points in each row (i.e., each m), the horizontal distance difference is calculated; that is, the distance between the current point and its backward-adjacent point is computed, and when this distance exceeds a distance threshold ε the current point is marked as a contour point:

$$\Delta \mathrm{dis}_{j,j+1} = \sqrt{(x_{j+1} - x_j)^2 + (y_{j+1} - y_j)^2}$$

where Δdis_{j,j+1} is the horizontal distance difference between point j and point j+1 in a given row, and (x_j, y_j) and (x_{j+1}, y_{j+1}) are their horizontal coordinates; when Δdis_{j,j+1} is greater than the distance threshold, point j is marked as a contour point.
(3) After one forward traversal of all points, every point satisfying Δdis_{j,j+1} ≥ ε has been found; these are potential concave-obstacle points. These potential obstacle points are combined into a binary image B1 (i.e., the forward contour point set), in which obstacle points are assigned the value 1 and all other points 0. In addition, the vehicle body may tilt left or right on the road surface, as shown in fig. 4, so that a single beam sweeps the ground not as a uniform circle but as an ellipse inclined to one side, which easily causes points to the right or left of the pit to be falsely detected as concave obstacle. To reduce the false detection probability and strengthen robustness, a two-pass traversal is therefore adopted: after the forward traversal of the lidar point cloud, the data are traversed once more in the reverse direction to obtain another binary image B2 (i.e., the reverse contour point set). Finally, an AND operation on B1 and B2 yields the final binary image B, a binary matrix representing the probability of concave obstacles, and the points belonging to concave obstacles are clustered by a classical connected-component algorithm to obtain the final detection result, i.e., the second contour.
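One literal reading of this two-pass scheme marks, in the forward pass, cells whose range jumps up relative to the previous cell, and in the backward pass, cells whose range jumps up relative to the next cell; the AND then keeps only cells farther than both neighbours, which a one-sided tilt cannot produce. A numpy sketch of that reading (illustrative, not the patented implementation):

```python
import numpy as np
from scipy import ndimage

def detect_pit_cells(M, eps):
    """Two-pass contour detection on the m x n range matrix M.

    B1 marks cells whose range exceeds the previous cell's by more than
    eps (forward traversal); B2 marks cells whose range exceeds the next
    cell's (backward traversal). Their AND suppresses the one-sided jumps
    that a left/right body tilt produces, as argued above.
    """
    B1 = np.zeros(M.shape, dtype=bool)
    B2 = np.zeros(M.shape, dtype=bool)
    B1[:, 1:] = (M[:, 1:] - M[:, :-1]) > eps    # forward pass
    B2[:, :-1] = (M[:, :-1] - M[:, 1:]) > eps   # backward pass
    B = B1 & B2                                 # final binary image
    labels, count = ndimage.label(B)            # connected components
    return B, labels, count
```

The connected-component labelling at the end plays the role of the classical clustering step; each labelled component is a candidate concave obstacle whose bounding box gives the second contour.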
In the embodiment of the invention, the sensors sit on different platforms, and the observations they provide have different sampling intervals and coordinate systems, as well as sensor-specific biases and observation errors. Multi-sensor data therefore need registration before fusion; this space-time registration, usually required before multi-sensor data can be converted "without error", consists of two parts: time registration and space registration.
To address the space-time registration problem, the invention proposes a calibration algorithm matching 3D points to 3D points. Time registration converts the observations collected by each sensor from their asynchronous moments to a common moment. Space registration converts the observation data of all sensors, whether from the same coordinate system on one platform or from different coordinate systems on different platforms, into one common coordinate system. Once the time-space registration of camera and lidar is finished, their detection results share the same time and the same spatial coordinate system.
Time registration:
the data fusion of the sensors in time is the synchronization of the data of the sensors in time, and the data collected by the sensors are not necessarily information at the same moment because the sampling frequencies of different sensors are different. In the embodiment, from the viewpoint of real-time performance, a GPS time service method is not adopted, and a thread synchronization method is adopted instead. And a radar data receiving thread and a camera data receiving thread are created in the program, and data of the radar at the current moment are acquired when the current frame image is acquired each time. This synchronizes the radar data and the camera data in time.
Spatial registration:
the camera mathematical model describes the process of projecting the three-dimensional scene to the two-dimensional image after being transformed, and the position of a projection point of a certain point in the three-dimensional scene in the two-dimensional image can be known after the camera mathematical model is determined. The calibration of the camera is described below by taking a linear camera model as an example. The linear camera model, as shown in fig. 5, has a center of projection O and an image plane W, and any point p in the world coordinate system has a corresponding projected point n on the image plane, where n is the intersection of the extended lines of p and O and the image plane W.
This perspective projection relationship can be expressed as

$$\lambda \tilde{n} = P \tilde{p}$$

where p̃ is the homogeneous coordinate of the point in the world coordinate system, ñ is the homogeneous coordinate of its projection on the image plane, λ is a scale factor, and P is the 3 × 4 perspective projection matrix determined by the intrinsic and extrinsic parameters of the camera.
The perspective projection matrix can be decomposed as

$$P = K \,[R \mid T]$$

where K is the intrinsic parameter matrix of the camera,

$$K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

with f_x and f_y the equivalent focal lengths in the x and y directions, and (u_0, v_0) the pixel coordinates of the image center, i.e., the intersection of the optical axis with the image plane.
The intrinsic parameters represent the projection relation between object points and image points in the camera coordinate system and can be obtained by Zhang's calibration method. R is the rotation matrix of the camera extrinsics and T the translation vector; together they determine the orientation and position of the camera, and the extrinsics represent the geometric transformation between the camera coordinate system and the world coordinate system.
Joint calibration of a laser radar and a camera:
the laser radar and the camera are used as two most important sensors for field autonomous perception, and due to the characteristics of the two sensors, the sensors are fused and become a better detection scheme. The laser radar has strong resolution, high ranging precision and good real-time performance, but is greatly influenced by barrier material characteristics, weather factors (such as rain and fog weather) and the like. The camera provides rich color and characteristic information, and can detect an interested object through the most advanced algorithm, but a two-dimensional image acquired in the camera has no depth information, and the three-dimensional information of an obstacle cannot be acquired through the image. Therefore, in the present embodiment, a radar-camera joint calibration method (i.e., radar _ camera _ calibration) is adopted to project the lidar point cloud onto the image plane.
Traditional joint calibration of lidar and camera matches camera 2D points to lidar 3D points. The 2D point coordinates of the camera are obtained by solving for the vertices of the calibration board (corner key-point extraction), and the lidar 3D point coordinates are obtained by fitting the boundary lines of the calibration board. Solving the PnP problem then yields the coordinate transfer matrix shown below:
$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & \begin{matrix} t_1 \\ t_2 \\ t_3 \end{matrix} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

where f is the focal length of the camera; f_x = f/dx, with 1/dx the number of pixels per millimeter in the x direction; f_y = f/dy, with 1/dy the number of pixels per millimeter in the y direction; c_x and c_y are the abscissa and ordinate of the image center; t_1, t_2 and t_3 are the translations of the lidar coordinate system along the x, y and z axes of the camera coordinate system; and R is the rotation matrix of the lidar coordinate system relative to the camera coordinate system.
Using multiple corresponding points, the problem is converted into an optimization of the reprojection from the radar coordinate system to the camera coordinate system, and the final R, T are solved:

$$(R^*, T^*) = \arg\min_{R,\,T} \sum_i \big\| n_i - \pi\big(K (R\, p_i + T)\big) \big\|^2$$

where p_i are the lidar 3D points, n_i the corresponding image points, and π denotes the perspective division.
however, such a processing mode depends on a high-resolution laser radar, otherwise, the calculated 3D point coordinate has a large error, so that the final data fusion result is not accurate. In view of such a situation, the present embodiment proposes a calibration method for 3D points corresponding to 3D points, SO (3) is a 3 x 3 array,
Figure BDA0002802410290000113
is a real number.
The experimental setup is shown in fig. 6. Before calibration, the camera and the lidar are fixed in a predetermined position. If the dimensions of the calibration board and the locations of the ArUco markers are known, the positions of the board's vertices in the coordinate system of the ArUco markers are easily calculated, and the ArUco markers provide a rotation-translation matrix between the camera coordinate system and the radar coordinate system.
Furthermore, the calibration method specifically comprises the following steps: acquiring a first 3D coordinate of a calibration plate vertex in an image coordinate system and a second 3D coordinate of the calibration plate vertex in a laser radar coordinate system; and generating a rotation and translation matrix according to the first 3D coordinate and the second 3D coordinate, and completing the calibration of the first contour and the second contour through the rotation and translation matrix.
The 3D coordinates of the calibration board vertices under the Velodyne lidar are obtained by line fitting. Because LiDAR scan lines are horizontal, keeping one side of the marker parallel to the ground yields points on the vertical edges but not necessarily on the horizontal ones. To overcome this, the board is tilted so that its edges form an angle with the ground plane. With this arrangement points are always obtained on all four edges of the calibration board, so the edge points can be fitted to lines with the RANSAC (random sample consensus) method and the intersection points obtained.
A rotation-translation matrix between corresponding points is then obtained by the ICP (iterative closest point) method, and the algorithm is applied over multiple frames to eliminate the influence of noise points.
Assuming two matched sets of point-cloud pairs {p_i} and {q_i} have been obtained, the objective function to be minimized is

$$E(R, t) = \frac{1}{N} \sum_{i=1}^{N} \left\| q_i - (R\, p_i + t) \right\|^2$$

from which the rotation matrix R and the translation vector t between the camera and the lidar are finally obtained.
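The minimizer of this objective has the classical closed-form SVD (Kabsch) solution used inside ICP; a sketch under the assumption of already-matched point pairs:

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares R, t minimizing sum ||q_i - (R p_i + t)||^2.

    P, Q: N x 3 arrays of matched 3D points (e.g. calibration-board
    vertices in the lidar frame and in the camera frame).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```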
The lidar 3D points are finally projected onto the image plane.
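Given R, t from the calibration and the intrinsic matrix K, this projection can be sketched as follows (illustrative names; points behind the camera are discarded):

```python
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project N x 3 lidar points into pixel coordinates (u, v)."""
    cam = points @ R.T + t          # lidar frame -> camera frame
    front = cam[:, 2] > 0           # keep points in front of the camera
    uvw = cam[front] @ K.T          # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]   # perspective division
    return uv, front
```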
With the multi-source heterogeneous sensors aligned in time and space by the method above, a multi-sensor fusion method based on intersection over union is proposed to fuse the data acquired by each sensor. The first purpose of data fusion is to unify the radar coordinate system, the camera coordinate system and the image pixel coordinate system: a unified coordinate system lets the perception sensors measure the environment and the distance and direction of obstacles, the calibrated camera extrinsics let radar scan points be projected onto the image to form a dynamic target region in the image coordinate system, and combining the perception strengths of camera and radar then helps to delineate the obstacle region, strengthening the robustness of the system, effectively improving detection accuracy and reducing the false alarm rate.
Target matching judges the intersection over union of the obstacle bounding boxes detected by the different sensors: the ratio of the intersection to the union of the areas of each pair of rectangular boxes is computed, and the pair with the largest ratio above a certain threshold is regarded as the same target; in this embodiment the threshold is set to 0.5. The intersection over union of two bounding boxes A and B is calculated as:
$$\mathrm{IOU} = \frac{\operatorname{area}(A \cap B)}{\operatorname{area}(A \cup B)}$$
and when the obtained IOU value is larger than or equal to the threshold value, generating a contour union set according to the first contour and the second contour, selecting an extreme point of the union set, and finally generating the contour of the concave obstacle according to the selected extreme point.
Because sensor detections include missed detections, false alarms and the like, an IOU value below the threshold indicates that the match is unreliable. In case of an error, the lidar data take precedence: the second contour is used as the contour of the concave obstacle for marking. If instead the lidar detection is judged wrong, the first contour from the image is used as the contour of the concave obstacle.
In summary, the invention provides a multi-sensor fusion algorithm that fuses the results of multiple sensors by computing the intersection over union and displays the fused detection result on the image; this strengthens the robustness of the system, effectively improves detection accuracy, and reduces the false alarm rate.
Another embodiment of the present invention further discloses a vehicle-based recessed obstacle detection device, as shown in fig. 7, including the following modules:
the detection module 210 is configured to obtain an image within a vehicle detection range through a camera device, and simultaneously obtain point cloud data within the vehicle detection range through a laser radar; an extracting module 220, configured to extract a first contour of a concave obstacle in the image based on a YOLO detection method, and extract a second contour of the concave obstacle in the point cloud data; the calibration module 230 is configured to calibrate the first contour and the second contour to obtain a projection image of the calibrated concave obstacle; and the calculation generation module 240 is used for calculating an intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and when the intersection ratio is greater than or equal to a threshold value, generating the contour of the concave obstacle on the projection image.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules are based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof may be referred to specifically in the method embodiment section, and are not described herein again.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, the specific names of the functional modules are only for convenience of distinguishing from each other and are not used for limiting the protection scope of the present invention. The specific working process of the modules in the system may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Another embodiment of the present invention further discloses a vehicle-based concave obstacle detection device, as shown in fig. 8, which includes a memory 31, a processor 32, and a computer program 33 stored in the memory 31 and operable on the processor 32, wherein when the processor 32 executes the computer program 33, any one of the above-mentioned vehicle-based concave obstacle detection methods is implemented.
The other technical scheme of the invention is as follows: a computer readable storage medium storing a computer program which, when executed by a processor, implements any of the above-described vehicle-based recessed obstacle detection methods.
The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (10)

1. A vehicle-based recessed obstacle detection method is characterized by comprising the following steps:
acquiring an image in a vehicle detection range through camera equipment, and acquiring point cloud data in the vehicle detection range through a laser radar;
extracting a first contour of a concave obstacle in the image and extracting a second contour of the concave obstacle in the point cloud data based on a YOLO detection method;
calibrating the first contour and the second contour to obtain a projected image of the calibrated concave obstacle;
and calculating the intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and generating the contour of the concave obstacle on the projection image when the intersection ratio is larger than or equal to a threshold value.
2. The vehicle-based recessed obstacle detection method of claim 1, wherein extracting a second contour of a recessed obstacle in the point cloud data comprises:
filling invalid points by adopting a linear interpolation method to obtain point cloud data with the invalid points removed;
calculating, in each line of the point cloud data after invalid-point removal, the distance between the current point and its backward-adjacent point, and marking the current point as a contour point when the distance is greater than a distance threshold;
and traversing the point cloud data to obtain a forward contour point set serving as the second contour.
3. The vehicle-based recessed obstacle detection method according to claim 2, wherein obtaining the forward contour point set further comprises:
reversely traversing the point cloud data to obtain a reverse contour point set;
and performing an AND operation on the forward contour point set and the reverse contour point set, and taking the contour point set after the AND operation as the second contour.
4. The vehicle-based recessed obstacle detection method according to claim 3, wherein calibrating the first contour and the second contour comprises:
acquiring a first 3D coordinate of a calibration plate vertex in an image coordinate system and a second 3D coordinate of the calibration plate vertex in a laser radar coordinate system;
and generating a rotation and translation matrix according to the first 3D coordinate and the second 3D coordinate, and completing the calibration of the first contour and the second contour through the rotation and translation matrix.
5. The vehicle-based recessed obstacle detection method of claim 4, wherein generating the contour of the recessed obstacle on the projection image comprises:
generating a contour union set according to the first contour and the second contour;
selecting extreme points of the contour union set;
and generating the outline of the concave obstacle according to the extreme point.
6. The vehicle-based recessed obstacle detection method according to any one of claims 2-5, wherein when the intersection ratio is less than the threshold:
the second contour is taken as the contour of a concave obstacle.
7. The vehicle-based recessed obstacle detection method of claim 6, wherein the first contour, the second contour, and the contour of the recessed obstacle are all rectangular.
8. A vehicle-based recessed obstacle detection device, comprising:
the detection module is used for acquiring images in a vehicle detection range through the camera equipment and acquiring point cloud data in the vehicle detection range through the laser radar;
an extraction module, configured to extract a first contour of a concave obstacle in the image based on a YOLO detection method, and extract a second contour of the concave obstacle in the point cloud data;
the calibration module is used for calibrating the first contour and the second contour to obtain a projection image of the calibrated concave obstacle;
and the calculation generation module is used for calculating the intersection ratio of the projection of the first contour and the projection of the second contour in the projection image, and when the intersection ratio is larger than or equal to a threshold value, the contour of the concave obstacle is generated on the projection image.
9. A vehicle-based recessed obstacle detection apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements a vehicle-based recessed obstacle detection method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out a method for vehicle-based recessed obstacle detection according to any one of claims 1 to 7.
CN202011355147.4A 2020-11-27 2020-11-27 Vehicle-based concave obstacle detection method Active CN112464812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011355147.4A CN112464812B (en) 2020-11-27 2020-11-27 Vehicle-based concave obstacle detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011355147.4A CN112464812B (en) 2020-11-27 2020-11-27 Vehicle-based concave obstacle detection method

Publications (2)

Publication Number Publication Date
CN112464812A true CN112464812A (en) 2021-03-09
CN112464812B CN112464812B (en) 2023-11-24

Family

ID=74808902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011355147.4A Active CN112464812B (en) 2020-11-27 2020-11-27 Vehicle-based concave obstacle detection method

Country Status (1)

Country Link
CN (1) CN112464812B (en)


Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103645480A (en) * 2013-12-04 2014-03-19 北京理工大学 Geographic and geomorphic characteristic construction method based on laser radar and image data fusion
CN104484648A (en) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewing angle obstacle detection method for robot based on outline recognition
US10134135B1 (en) * 2015-08-27 2018-11-20 Hrl Laboratories, Llc System and method for finding open space efficiently in three dimensions for mobile robot exploration
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
CN108983248A (en) * 2018-06-26 2018-12-11 长安大学 It is a kind of that vehicle localization method is joined based on the net of 3D laser radar and V2X
CN109283538A (en) * 2018-07-13 2019-01-29 上海大学 A kind of naval target size detection method of view-based access control model and laser sensor data fusion
US20200027266A1 (en) * 2018-07-17 2020-01-23 Uti Limited Partnership Building contour generation from point clouds
WO2020103533A1 (en) * 2018-11-20 2020-05-28 中车株洲电力机车有限公司 Track and road obstacle detecting method
CN109934837A (en) * 2018-12-26 2019-06-25 江苏名通信息科技有限公司 A kind of extracting method of 3D plant leaf blade profile, apparatus and system
CN110456363A (en) * 2019-06-17 2019-11-15 北京理工大学 The target detection and localization method of three-dimensional laser radar point cloud and infrared image fusion
CN110648394A (en) * 2019-09-19 2020-01-03 南京邮电大学 Three-dimensional human body modeling method based on OpenGL and deep learning
CN111191600A (en) * 2019-12-30 2020-05-22 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111160302A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Obstacle information identification method and device based on automatic driving environment
CN111222579A (en) * 2020-01-09 2020-06-02 北京百度网讯科技有限公司 Cross-camera obstacle association method, device, equipment, electronic system and medium
CN111035115A (en) * 2020-03-13 2020-04-21 杭州蓝芯科技有限公司 Sole gluing path planning method and device based on 3D vision
CN111611853A (en) * 2020-04-15 2020-09-01 宁波吉利汽车研究开发有限公司 Sensing information fusion method and device and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
MINGLEI LI et al.: "Modelling of buildings from aerial LiDAR point clouds using TINs and label maps", ISPRS Journal of Photogrammetry and Remote Sensing, vol. 154, pages 127-138, XP085730700, DOI: 10.1016/j.isprsjprs.2019.06.003 *
YIBING ZHAO et al.: "The intelligent obstacle sensing and recognizing method based on D-S evidence theory for UGV", Future Generation Computer Systems, vol. 97, 31 August 2019, pages 21-29 *
余升林: "Research on vehicle recognition and measurement based on sensor information fusion" (基于传感器信息融合的车辆识别与测量研究), China Masters' Theses Full-text Database, Engineering Science and Technology II, vol. 2020, no. 7, 15 July 2020, pages 2-4 *
玉荣: "Research on geometric repair of complex parts based on non-rigid registration" (基于非刚性配准的复杂零件几何修复技术研究), China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, vol. 2018, no. 9, pages 029-7 *
苏致远 et al.: "Vehicle target detection method based on 3D lidar" (基于三维激光雷达的车辆目标检测方法), Journal of Military Transportation University, vol. 19, no. 1, 31 December 2017, pages 45-49 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113147750A (en) * 2021-05-21 2021-07-23 清华大学 Safety decision system and method for controlling vehicle running
CN113147750B (en) * 2021-05-21 2023-12-22 清华大学 Safety decision system and method for controlling vehicle running
CN113935946A (en) * 2021-09-08 2022-01-14 广东工业大学 Method and device for detecting underground obstacle in real time
CN114044002A (en) * 2021-11-30 2022-02-15 成都坦途智行科技有限公司 Automatic low-lying road surface identification method suitable for automatic driving
CN114721404A (en) * 2022-06-08 2022-07-08 超节点创新科技(深圳)有限公司 Obstacle avoidance method, robot and storage medium
CN114721404B (en) * 2022-06-08 2022-09-13 超节点创新科技(深圳)有限公司 Obstacle avoidance method, robot and storage medium
CN115546749A (en) * 2022-09-14 2022-12-30 武汉理工大学 Road surface depression detection, cleaning and avoidance method based on camera and laser radar
CN116229097A (en) * 2023-01-09 2023-06-06 钧捷科技(北京)有限公司 Image processing method based on image sensor
CN116229097B (en) * 2023-01-09 2024-06-07 钧捷科技(北京)有限公司 Image processing method based on image sensor

Also Published As

Publication number Publication date
CN112464812B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN112464812B (en) Vehicle-based concave obstacle detection method
CN111882612B (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN109752701B (en) Road edge detection method based on laser point cloud
CN109961440B (en) Three-dimensional laser radar point cloud target segmentation method based on depth map
CN111046776B (en) Method for detecting obstacle of path of mobile robot based on depth camera
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN104574406B (en) A kind of combined calibrating method between 360 degree of panorama laser and multiple vision systems
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN110197173B (en) Road edge detection method based on binocular vision
GB2317066A (en) Method of detecting objects for road vehicles using stereo images
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN115049700A (en) Target detection method and device
Kellner et al. Road curb detection based on different elevation mapping techniques
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN116310679A (en) Multi-sensor fusion target detection method, system, medium, equipment and terminal
Muresan et al. Real-time object detection using a sparse 4-layer LIDAR
CN116452852A (en) Automatic generation method of high-precision vector map
Nedevschi Online cross-calibration of camera and lidar
CN115877347A (en) Mining area obstacle detection method and system, electronic equipment and readable storage medium
CN113221883B (en) Unmanned aerial vehicle flight navigation route real-time correction method
Deng et al. Joint calibration of dual lidars and camera using a circular chessboard
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
CN117029870A (en) Laser odometer based on road surface point cloud
EP4078087B1 (en) Method and mobile entity for detecting feature points in an image
WO2022133986A1 (en) Accuracy estimation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant