CN114118252A - Vehicle detection method and detection device based on sensor multivariate information fusion - Google Patents

Vehicle detection method and detection device based on sensor multivariate information fusion

Info

Publication number
CN114118252A
Authority
CN
China
Prior art keywords: vehicle, detection, coordinate system, laser radar, image
Prior art date
Legal status
Pending
Application number
CN202111390381.5A
Other languages
Chinese (zh)
Inventor
赵林峰
姜武华
张毅航
蔡必鑫
任毅
马晓东
黄为宇
张曼玲
王天元
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202111390381.5A
Publication of CN114118252A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a vehicle detection method and a detection device based on sensor multivariate information fusion. The method comprises the following steps: converting the camera coordinate system and the laser radar coordinate system of the vehicle into a detection coordinate system; preliminarily determining the camera detection area, then screening the laser radar point cloud data, and finally extracting road boundary position information and projecting it onto the image to determine the vehicle passing area; constraining the detection angle of the laser radar within the camera's view angle range, projecting object distance information onto the visual image, and searching for the region of interest; and extracting the contour of the vehicle-tail edge in the guide image, fusing texture features to identify the vehicle ahead, and then verifying the image recognition result. The invention brings the detection strengths of each sensor into full play and compensates for their individual weaknesses: object features are extracted from the raw data, the features from the two sensors are fused, and object recognition accuracy is improved through complementary detection.

Description

Vehicle detection method and detection device based on sensor multivariate information fusion
Technical Field
The invention relates to an early warning method in the technical field of automatic driving, in particular to a vehicle detection method based on sensor multivariate information fusion, and further to a corresponding detection device.
Background
With the continuous improvement of economic conditions and the rapid development of unmanned vehicle technology, sensor-based detection plays a significant role in unmanned vehicle research, and whether a vehicle can participate safely and reliably in traffic depends on each sensing technology overcoming the difficulty of accurately detecting and locating the vehicle ahead. The advantages and disadvantages of different sensors are pronounced. Visual images, like human eyes, can capture the various objects appearing within the view angle and classify and recognize them well; however, because the image size is limited, the positioning accuracy of a camera alone cannot meet the requirements of intelligent driving. Radar sensors, by contrast, have good positioning capability relative to vision, but they can hardly capture all the characteristics of an object, which makes it harder to judge the object's type. Single-sensor data processing algorithms increasingly fail to satisfy the demands for intelligent, safe, and comfortable driving.
Disclosure of Invention
The invention provides a vehicle detection method and a detection device based on sensor multivariate information fusion, aiming to solve the technical problem of the low positioning accuracy of existing vehicle sensors.
The invention is realized by adopting the following technical scheme: a vehicle detection method based on sensor multivariate information fusion comprises the following steps:
s1: converting a camera coordinate system and a laser radar coordinate system of a vehicle into a detection coordinate system of the vehicle;
S2: preliminarily determining the camera detection area according to the image vanishing line and the camera acquisition view angle, then screening the laser radar point cloud data, and finally detecting mutation positions in the laser radar data return values, extracting road boundary position information, projecting it onto the image, and determining the vehicle passing area;
S3: constraining the detection angle of the laser radar within the view angle range of the camera to determine a detection area for image-based vehicle recognition, then projecting the object distance information detected by the laser radar onto the visual image, and searching the region of interest for vehicle recognition on the image with this as the base point;
S4: after the detection area of a vehicle in the image is determined, guiding the extraction of vehicle-tail contour features in the guide image according to the contour change direction of the point cloud information in the laser point cloud set, then fusing texture features to identify the vehicle ahead, and finally verifying the image recognition result against the point cloud structure and spatial position of the laser radar.
The invention starts from the standpoint of improving detection accuracy while reducing time consumption, and combines raw data features with high-level feature data, thereby bringing the detection strengths of each sensor into full play, compensating for their individual weaknesses, and meeting the final target requirements. In raw data fusion, the main purpose is noise reduction: interference noise that would affect the detection result is removed, so that recognition and positioning are achieved with the smallest possible data volume and detection cost. Then, object features are extracted from the raw data, the features from the two sensors are fused, and object recognition accuracy is improved through complementary detection, solving the technical problem of the low positioning accuracy of existing vehicle sensors.
As a further improvement of the above scheme, a point in space is defined as P_l(x_l, y_l, z_l) in the laser radar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system; the conversion relations from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are respectively:
P_p = P_l·R_l + B_l
P_p = P_c·R_c + B_c
wherein B_l and B_c are the translation matrices, and R_l and R_c the rotation matrices, from the laser radar coordinate system and the camera coordinate system, respectively, to the detection coordinate system.
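As a minimal numpy sketch of this conversion (the extrinsics R_l and B_l below are illustrative stand-ins for values obtained by calibration, not values given in the patent):

```python
import numpy as np

def to_detection_frame(points, R, B):
    """Apply P_p = P * R + B to an (N, 3) array of sensor points.

    points: points in the sensor (lidar or camera) coordinate system
    R: (3, 3) rotation matrix from the sensor frame to the detection frame
    B: (3,) translation from the sensor origin to the detection origin
    """
    return points @ R + B

# Illustrative extrinsics only; real R_l/B_l and R_c/B_c come from calibration.
R_l = np.eye(3)                    # lidar assumed mounted with no angular offset
B_l = np.array([0.0, 0.0, 1.8])    # lidar origin 1.8 m above the detection origin
lidar_points = np.array([[10.0, 2.0, -1.8]])    # one return, 1.8 m below the lidar
print(to_detection_frame(lidar_points, R_l, B_l))    # -> [[10.  2.  0.]]
```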
As a further improvement of the above scheme, when the laser radar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of a laser beam's ground point cloud are:
x_l = ρ·cosω·cosα_l,  y_l = ρ·cosω·sinα_l,  z_l = −ρ·sinω
in the formula, x_l, y_l, z_l are the coordinates of an arbitrary laser beam's return, α_l is the search angle of that beam, ρ is the detection distance of the laser radar, and ω is the emission angle of the laser radar (taken positive below the horizontal for a ground-scanning beam);
after the installation height is fixed, the ground point cloud coordinates of a laser beam become:
x_l = (H·cosα_l)/tanω,  y_l = (H·sinα_l)/tanω,  z_l = −H
in the formula, H represents the installation height of the laser radar;
the source data of the laser radar are then rotation-corrected, and the converted data are compared with the coordinate point obtained from the height:
|P_l·R_l − P| < ε
in the formula, P represents the coordinate point obtained from the height, and ε is the tolerance within which a return is classified as a ground point.
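A sketch of this screening step, assuming ω is taken positive below the horizontal and with a hypothetical tolerance tol standing in for ε:

```python
import numpy as np

def height_derived_ground_point(alpha, omega, H):
    """Where a beam with downward emission angle omega (rad) and azimuth
    alpha (rad) should hit flat ground for a lidar mounted at height H."""
    r = H / np.tan(omega)                       # horizontal range of the hit
    return np.stack([r * np.cos(alpha),
                     r * np.sin(alpha),
                     np.full_like(alpha, -H, dtype=float)], axis=-1)

def screen_ground(points, R_l, alpha, omega, H, tol=0.10):
    """Rotation-correct raw returns and keep those satisfying
    |P_l * R_l - P| < tol, i.e. returns lying on the ground plane."""
    corrected = points @ R_l                    # rotation correction
    P = height_derived_ground_point(alpha, omega, H)
    return np.linalg.norm(corrected - P, axis=-1) < tol
```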
As a further improvement of the above solution, in step S3, the data return value of each point in the polar coordinate system in the detection range of the laser radar is:
P(ρ_n, α, ω_n),  n = 1, 2, 3, …
wherein α is the search angle and n is the beam index of the laser radar; ρ_n is the detection distance of beam n, and ω_n is the emission angle of beam n. The laser radar rotates about the z-axis, so its beam search angle ranges over (0°, 360°); with a camera view angle of M, the search angle of the laser radar is corrected to (−0.5M, 0.5M).
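A sketch of this angular constraint, assuming the camera's optical axis lies along the +x axis of the radar frame (the 90° view angle is illustrative):

```python
import numpy as np

def constrain_to_camera_fov(points, view_angle_deg=90.0):
    """Keep only returns whose azimuth lies within (-0.5*M, +0.5*M) of the
    camera view angle M, instead of processing the full (0, 360) sweep."""
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return points[np.abs(azimuth) < 0.5 * view_angle_deg]

sweep = np.random.randn(1000, 3) * 20.0     # stand-in for one lidar sweep
front = constrain_to_camera_fov(sweep)      # points the camera can also see
```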
As a further improvement of the scheme, the point cloud data of the laser radar are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the primary segmentation result to determine the vehicle passing area.
As a further improvement of the above scheme, when an obstacle appears within the radar detection range, the mutation position is detected: the laser point cloud data are extracted, the distance data received by the laser radar are fitted with a 6th-order Daubechies (db6) wavelet function, the wavelet accurately localizes positions where the data change abruptly, boundary feature points are extracted, and the series of feature points is fitted by least squares to obtain the vehicle passing area.
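A possible realization of this boundary extraction with PyWavelets and numpy; the detection threshold and the fit degree are assumptions, since the patent fixes neither:

```python
import numpy as np
import pywt

def mutation_angles(alpha, rho, thresh=0.5):
    """Azimuths where one beam's range profile rho(alpha) jumps, found from
    the level-1 detail coefficients of a Daubechies-6 (db6) wavelet."""
    detail = pywt.wavedec(rho, 'db6', level=1)[-1]
    idx = np.nonzero(np.abs(detail) > thresh)[0] * 2    # undo dyadic downsampling
    return alpha[np.clip(idx, 0, len(alpha) - 1)]

def fit_boundary(x, y, deg=2):
    """Least-squares fit of the extracted edge points to a boundary curve."""
    return np.polyfit(x, y, deg)
```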
As a further improvement of the scheme, machine learning is applied to perform vehicle recognition on the determined vehicle detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, the search frame is gradually enlarged, and vehicles are recognized by searching from the two sides toward the middle and from bottom to top.
As a further improvement of the above solution, when the object distance information is projected onto the visual image, the conversion relation between the camera coordinate system and the image coordinate system is derived as follows:
for any point P(X_c, Y_c, Z_c) in the camera coordinate system, its projection position in the image coordinate system is calculated using triangle similarity:
x = f·X_c/Z_c,  y = f·Y_c/Z_c
wherein f is the focal length of the camera, (O-xy) is the image coordinate system, and P(x, y) is the projection of P(X_c, Y_c, Z_c) into that coordinate system;
define (O_uv-uv) as the pixel coordinate system; the conversion from the camera coordinate system to the pixel coordinate system is:
u = f_x·X_c/Z_c + u_0,  v = f_y·Y_c/Z_c + v_0
where f_x = f/dx and f_y = f/dy are the focal lengths in pixels and (u_0, v_0) is the principal point; according to the conversion formula from the laser radar coordinate system to the camera coordinate system, the coordinate conversion formula from the point cloud to the image is deduced as:
Z_c·[u, v, 1]^T = K·(R·P_l + t)
where K is the camera intrinsic matrix and R, t are the rotation and translation from the laser radar coordinate system to the camera coordinate system;
and projecting the object distance information detected by the laser radar onto the visual image according to a coordinate conversion formula from the point cloud to the image.
As a further improvement of the above solution, the space covered by the laser radar around the vehicle body is defined as U = T_{L,M,R}(x, y, z), with L, M, R denoting the left, middle, and right detection regions; a point cloud set formed by acting on an object needs to satisfy the following conditions:
(1) for adjacent points p_i and p_(i+1) produced by the same laser beam acting on the object, ‖p_(i+1) − p_i‖ < δ, where δ is a small continuity threshold, i.e., the point cloud is continuous over the object surface;
(2) for objects detected on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces joined by an arc transition; an object directly ahead presents a single plane connected to arcs at both ends.
The present invention also provides a detection device, which applies any of the above-mentioned vehicle detection methods based on sensor multivariate information fusion, comprising:
a conversion module for converting a camera coordinate system and a lidar coordinate system of a vehicle into a detection coordinate system of the vehicle;
an initial data processing module for processing the initial data of the camera and the laser radar; the initial data processing module preliminarily determines the camera detection area according to the image vanishing line and the camera acquisition view angle, then screens the laser radar point cloud data, and finally detects mutation positions in the laser radar data return values, extracts road boundary position information, projects it onto the image, and determines the vehicle passing area;
a detection area fusion module for constraining the detection angle of the laser radar within the view angle range of the camera to determine a detection area for image-based vehicle recognition, projecting the object distance information detected by the laser radar onto the visual image, and searching the region of interest for vehicle recognition on the image with this as the base point;
a structural feature fusion recognition module for, after the detection area of a vehicle in the image is determined, guiding the extraction of vehicle-tail contour features in the guide image according to the contour change direction of the point cloud information in the laser point cloud set, fusing texture features to identify the vehicle ahead, and verifying the image recognition result against the point cloud structure and spatial position of the laser radar.
The vehicle detection method and the detection device based on sensor multivariate information fusion have the following beneficial effects:
1. The invention provides a technique that determines the detection range in vehicle detection by fusing a camera and a laser radar. For safe driving, much of the data from the laser radar's 360-degree detection is redundant; it imposes extra computational cost on the processor and affects execution efficiency. The camera's view angle is equivalent to a driver's eyes and observes the objects on the road while the vehicle runs, so the camera view angle is used to give the laser radar a reasonable search range.
2. The invention provides a vehicle detection technique that exploits the sensitivity of wavelet analysis to sudden changes in the laser radar data return values: wavelet analysis is performed on the point cloud data returned by each laser beam, the laser point data that scan the ground edge are extracted, and fitting then constrains the detection range to the area passable by the vehicle.
3. In the vehicle detection technique of the invention, the vehicle recognition procedure is improved and detection efficiency is raised: the detection result of the laser radar locates the ground coordinate position of the lower-left corner of the vehicle (or the lower-right corner if the vehicle is on the right side). Starting from this point, the search area is continuously adjusted and expanded and the size of the detection frame is adapted, so the vehicle is identified rapidly.
4. The invention proposes that when the same laser beam acts on an object, the continuity of the object's surface means that adjacent laser points should be very close together. After the vehicle is recognized visually, the laser point cloud inside each detection frame is extracted; it is first judged whether the point cloud data in adjacent detection frames contain points produced by the same laser beam, and then whether the point cloud depth distance between different detection frames changes abruptly. From these tests it is judged comprehensively whether multiple detection frames have identified the same object, and the final detection frame is determined from the pooled point cloud data, reducing false detections.
5. In the vehicle detection technique of the invention, the point cloud set formed on an object must satisfy the stated conditions, so vehicles can be judged reliably from those conditions and missed detections are reduced; the conditions can also be used to verify whether the vehicle point cloud data identified from the image are accurate.
6. Starting from the standpoint of improving detection accuracy while reducing time consumption, the invention combines raw data features with high-level feature data, bringing the detection strengths of each sensor into full play and compensating for their individual weaknesses to meet the final target requirements. In raw data fusion the main purpose is noise reduction: interference noise affecting the detection result is removed, so recognition and positioning are achieved with the smallest possible data volume and detection cost; then object features are extracted from the raw data, the features from the two sensors are fused, and object recognition accuracy is improved through complementary detection.
The beneficial effects of the detection device of the invention are the same as those of the vehicle detection method based on sensor multivariate information fusion, and are not repeated herein.
Drawings
Fig. 1 is a flowchart of a vehicle detection method based on sensor multivariate information fusion in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of a camera coordinate system, a laser radar coordinate system, and a detection coordinate system in embodiment 1 of the present invention.
Fig. 3 is a schematic view of an acquisition view of a camera in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a lidar data processing area under the camera view angle constraint in embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of determining a visual interesting region from a laser radar road region in embodiment 1 of the present invention.
Fig. 6 is a schematic top view of the point cloud shape of the vehicle at different positions in embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Referring to fig. 1-6, the present embodiment provides a vehicle detection method based on sensor multivariate information fusion. The method first converts the camera and laser radar coordinate systems into a detection coordinate system; it then performs laser point cloud processing, preliminarily delimits the laser radar detection range using the camera view angle, and extracts road boundary position information with wavelet analysis; the extracted road boundary is projected onto the image using the imaging principle to determine the vehicle passing area; next, the positioning capability of the laser radar is used to determine the region of interest for vehicle recognition within the passing area; finally, the image recognition result is verified against the point cloud structure and spatial distribution of the laser radar, eliminating false and missed detections in the image. In this embodiment, the vehicle detection method based on sensor multivariate information fusion is realized mainly through the following steps S1-S4.
Step S1: converting the camera coordinate system and the laser radar coordinate system of the vehicle into the detection coordinate system of the vehicle. Referring to fig. 2, each sensor has its own coordinate system and its detected data are expressed in that coordinate system, so after the camera and the laser radar are mounted, the camera coordinate system and the laser radar coordinate system must be unified into the detection coordinate system to complete the spatial synchronization of the sensors. A point in space is defined as P_l(x_l, y_l, z_l) in the laser radar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system. The laser radar and the camera are mounted at angles (θ_l, φ_l, ψ_l) and (θ_c, φ_c, ψ_c), respectively, which determine the rotation matrices R_l and R_c. The positional relation between the coordinate system origins can be obtained directly from the vehicle, i.e., the translation matrices from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are B_l and B_c, respectively. The conversion relations from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are:
P_p = P_l·R_l + B_l
P_p = P_c·R_c + B_c
step S2: referring to fig. 3, a camera detection area is initially determined according to an image vanishing line and a camera collection view angle, then the laser radar point cloud data is screened, finally, a mutation position of a laser radar data return value is detected, road boundary position information is extracted and projected onto an image, and a vehicle passing area is determined. The camera acquisition visual angle theta provides transverse constraint, and an image vanishing line formed by image vanishing points provides longitudinal constraint, so that the detection area of the camera is determined. The return values of the radar collected data have independent distance and azimuth angles, and the data are returned on a flat ground with a large enough area in a concentric circle mode. When objects such as vehicles and the like higher than the ground appear in the radar detection range, a plurality of laser radars can simultaneously scan the same object, so that some point cloud aggregation phenomena occur, and for the laser radar point cloud data, the arc point cloud basically belongs to the ground.
The emission angle of each laser beam is fixed, so after the radar is installed and fixed, the point cloud data acquired by a given beam on flat ground should have the same distance and receiving angle, and each laser beam corresponds to a unique emission angle. In this embodiment, when the laser radar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of a laser beam's ground point cloud are:
x_l = ρ·cosω·cosα_l,  y_l = ρ·cosω·sinα_l,  z_l = −ρ·sinω
in the formula, x_l, y_l, z_l are the coordinates of an arbitrary laser beam's return, α_l is the search angle of that beam, ρ is the detection distance of the laser radar, and ω is the emission angle of the laser radar (taken positive below the horizontal for a ground-scanning beam);
after the installation height is fixed, the ground point cloud coordinates of a laser beam become:
x_l = (H·cosα_l)/tanω,  y_l = (H·sinα_l)/tanω,  z_l = −H
in the formula, H represents the installation height of the laser radar;
because there is an angular deviation after the radar is installed, the radar source data must be rotation-corrected with the rotation matrix obtained from the preceding analysis, and the transformed data are compared with the coordinate point obtained from the height:
|P_l·R_l − P| < ε
in the formula, P represents the coordinate point obtained from the height, and ε is the tolerance within which a return is classified as a ground point.
Processing the acquired radar data with the above formulas yields the ground laser point cloud.
In this embodiment, the point cloud data of the laser radar are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the primary segmentation result to determine the vehicle passing area. When an obstacle appears within the radar detection range, the mutation position is detected: the laser point cloud data are extracted, the distance data received by the laser radar are fitted with a 6th-order Daubechies (db6) wavelet function, the wavelet accurately localizes abrupt changes in the data, boundary feature points are extracted, and the series of feature points is fitted by least squares to obtain the vehicle passing area.
Wavelets can accurately localize where a frequency component occurs and respond sensitively to changes in the data. When an obstacle appears in the radar's detection range, the return values jump because the object blocks the beams, so the mutation position can be detected with wavelet analysis. The db6 wavelet fits the distance data received by the radar well, and the wavelet function localizes the data precisely at positions of sudden change. Wavelet analysis is performed on the point cloud data returned by each laser beam, the laser point data that scan the ground edge are extracted, and quadratic fitting yields two continuous curves that constrain the detection range to the area passable by the vehicle.
S3: referring to fig. 4, the camera acquisition view angle is first used to preliminarily determine the detection range of the laser radar, i.e., the detection angle of the laser radar is constrained within the camera view angle range to determine the detection area for image-based vehicle recognition; the object distance information detected by the laser radar is then projected onto the visual image, and the region of interest for vehicle recognition is searched on the image with this as the base point. The data return value of each point in the polar coordinate system within the detection range of the laser radar is:
P(ρ_n, α, ω_n),  n = 1, 2, 3, …
wherein α is the search angle and n is the beam index of the laser radar; ρ_n is the detection distance of beam n, and ω_n is the emission angle of beam n. The laser radar rotates about the z-axis, so its beam search angle ranges over (0°, 360°); with a camera view angle of M, the search angle of the laser radar can be corrected to (−0.5M, 0.5M), determining the detection area for image-based vehicle recognition.
After the laser radar detection area and the vehicle passing area are preliminarily determined, the road edge is projected onto the image according to the camera imaging principle, and the raw data acquired by the sensors are fused to obtain the region of interest of the image. When the object distance information is projected onto the visual image, the conversion relation between the camera coordinate system and the image coordinate system is derived as follows:
Camera imaging is based on the pinhole imaging principle: an object in the real scene is presented in the form of a picture, and its position in the picture is related to its position in the camera coordinate system. For any point P(X_c, Y_c, Z_c) in the camera coordinate system, its projection position in the image coordinate system is calculated from the imaging principle using triangle similarity:
x = f·X_c/Z_c,  y = f·Y_c/Z_c
wherein f is the focal length of the camera, (O-xy) is the image coordinate system, and P(x, y) is the projection of P(X_c, Y_c, Z_c) into that coordinate system.
Next, the conversion from the image coordinate system to the pixel coordinate system is derived; since both are two-dimensional coordinate systems in the same plane, this conversion is only a translation. Integrating it with the conversion from the camera coordinate system to the image coordinate system, define (O_uv-uv) as the pixel coordinate system; the conversion from the camera coordinate system to the pixel coordinate system is:
u = f_x·X_c/Z_c + u_0,  v = f_y·Y_c/Z_c + v_0
where f_x = f/dx and f_y = f/dy are the focal lengths in pixels and (u_0, v_0) is the principal point. The laser radar has an independent coordinate system; according to the conversion formula from the laser radar coordinate system to the camera coordinate system, the coordinate conversion formula from the point cloud to the image is deduced as:
Z_c·[u, v, 1]^T = K·(R·P_l + t)
where K is the camera intrinsic matrix and R, t are the rotation and translation from the laser radar coordinate system to the camera coordinate system.
by utilizing the positioning function of the laser radar, according to a coordinate conversion formula from the point cloud to the image, the object distance information detected by the laser radar is projected to the visual image, and the area of interest identified by the vehicle on the image is determined by taking the object distance information as a base point, as shown in fig. 5.
S4: after the detection area of a vehicle in the image is determined, the extraction of vehicle-tail contour features in the guide image is guided according to the contour change direction of the point cloud information in the laser point cloud set, texture features are then fused to identify the vehicle ahead, and the image recognition result is verified against the point cloud structure and spatial position of the laser radar.
In this embodiment, machine learning is applied to perform vehicle recognition on the determined vehicle detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, the search frame is gradually enlarged, and vehicles are recognized by searching from the two sides toward the middle and from bottom to top. Conventionally this is done by scanning recognition windows line by line and adjusting the window size to catch every vehicle that may appear; applied to an image, recognition must proceed line by line from the upper-left corner of the region of interest, left to right and top to bottom. The vehicle ahead can be identified this way, but feature analysis is computed for every pixel in the region of interest, which is very time consuming, so the following improved method is proposed:
and improving a vehicle identification mode, and positioning the ground coordinate position of the lower left corner of the vehicle by using the detection result of the laser radar (if the vehicle is positioned on the right side, positioning the ground coordinate of the lower right corner). Based on the point, the search area is continuously adjusted and enlarged, the size of the detection frame is adjusted, the whole image is prevented from being identified and detected, and the vehicle is quickly identified.
When the same laser beam acts on an object, adjacent laser points should be very close. The method eliminates the situation where several detection frames appear on the same target, thereby solving the false detection problem: after the vehicle is recognized visually, the laser point cloud inside each detection frame is extracted; it is first judged whether the point cloud data in adjacent detection frames contain points produced by the same laser beam, and then whether the point cloud depth distance between different detection frames changes abruptly. From these tests it is judged comprehensively whether multiple detection frames have identified the same object, and the final detection frame is determined from the pooled point cloud data, reducing false detections.
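A sketch of the merge test; the argument layout and the depth-jump threshold are assumptions:

```python
import numpy as np

def frames_cover_same_object(beams_a, depths_a, beams_b, depths_b,
                             depth_jump=0.5):
    """Two adjacent detection frames cover one object if (1) their point clouds
    share returns from the same laser beam and (2) their depths do not jump."""
    if np.intersect1d(beams_a, beams_b).size == 0:
        return False                              # no beam in common
    gap = abs(np.median(depths_a) - np.median(depths_b))
    return gap < depth_jump                       # continuous surface depth
```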
The space covered by the laser radar around the vehicle body is defined as U = T_{L,M,R}(x, y, z), with L, M, R denoting the left, middle, and right detection regions; a point cloud set formed by acting on an object needs to satisfy the following conditions:
(1) for adjacent points p_i and p_(i+1) produced by the same laser beam acting on the object, ‖p_(i+1) − p_i‖ < δ, where δ is a small continuity threshold, i.e., the point cloud is continuous over the object surface;
(2) as shown in fig. 6, for objects detected on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces joined by an arc transition; an object directly ahead presents a single plane connected to arcs at both ends. From these conditions the vehicle can be judged reliably and missed detections are reduced; the conditions can also be used to verify whether the vehicle point cloud data identified from the image are accurate.
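A sketch of checking condition (2) in top view, under the assumption that splitting the cloud at its closest point separates the two faces; the angle tolerance is illustrative:

```python
import numpy as np

def satisfies_shape_condition(xy, angle_tol_deg=15.0):
    """True if a top-view cloud is one face (vehicle straight ahead) or two
    roughly perpendicular faces split at the closest point (vehicle to a side)."""
    k = int(np.argmin(np.hypot(xy[:, 0], xy[:, 1])))    # likely corner point
    if k < 3 or k > len(xy) - 4:
        return True                                     # single visible face
    s1 = np.polyfit(xy[:k, 0], xy[:k, 1], 1)[0]         # slope of first face
    s2 = np.polyfit(xy[k:, 0], xy[k:, 1], 1)[0]         # slope of second face
    between = abs(np.degrees(np.arctan(s1) - np.arctan(s2)))
    return abs(between - 90.0) < angle_tol_deg
```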
In summary, compared with the existing vehicle collision prediction technology, the vehicle detection method based on sensor multivariate information fusion of the embodiment has the following advantages:
1. This embodiment proposes that in the vehicle detection technique, the detection range is determined by fusing a camera and a laser radar. For safe driving, much of the data collected by the laser radar's 360-degree detection is redundant; it places an extra burden on the processor and affects driving safety. The camera's view angle is equivalent to a driver's eyes and observes the objects on the road while the vehicle runs, so the camera view angle is used to give the laser radar a reasonable search range.
2. In the vehicle detection technique of this embodiment, the sensitivity of wavelet analysis to sudden changes in the laser radar data return values is exploited: wavelet analysis is performed on the point cloud data returned by each laser beam, the laser point data that scan the ground edge are extracted, fitting is performed, and the detection range is constrained to the area passable by the vehicle.
3. In the vehicle detection technique of this embodiment, the vehicle recognition procedure is improved and detection efficiency is raised: the detection result of the laser radar locates the ground coordinate position of the lower-left corner of the vehicle (or the lower-right corner if the vehicle is on the right side). Starting from this point, the search area is continuously adjusted and expanded and the size of the detection frame is adapted, so the vehicle is identified rapidly.
4. This embodiment proposes that when the same laser beam acts on an object, the continuity of the object's surface means that adjacent laser points should be very close together. After the vehicle is recognized visually, the laser point cloud inside each detection frame is extracted; it is first judged whether the point cloud data in adjacent detection frames contain points produced by the same laser beam, and then whether the point cloud depth distance between different detection frames changes abruptly. From these tests it is judged comprehensively whether multiple detection frames have identified the same object, and the final detection frame is determined from the pooled point cloud data, reducing false detections.
5. This embodiment proposes that the point cloud set formed on an object must satisfy the stated conditions, so vehicles can be judged reliably from those conditions and missed detections are reduced; the conditions can also be used to verify whether the vehicle point cloud data identified from the image are accurate.
6. From the standpoint of improving detection accuracy and reducing time consumption, this embodiment fuses raw data features with high-level feature data, brings the detection strengths of each sensor into full play, and compensates for their weaknesses. In raw data fusion the main purpose is noise reduction: interference noise affecting the detection result is removed, so recognition and positioning are achieved with the smallest possible data volume and detection cost; then object features are extracted from the raw data, the features from the two sensors are fused, and object recognition accuracy is improved through complementary detection.
Example 2
This embodiment provides a detection device applying the vehicle detection method based on sensor multivariate information fusion of embodiment 1, specifically comprising a conversion module, an initial data processing module, a detection area fusion module, and a structural feature fusion recognition module.
The conversion module converts the camera coordinate system and the laser radar coordinate system of the vehicle into the detection coordinate system of the vehicle. The initial data processing module processes the initial data of the camera and the laser radar: it first preliminarily determines the camera detection area according to the image vanishing line and the camera acquisition view angle, then screens the laser radar point cloud data, and finally detects mutation positions in the laser radar data return values, extracts road boundary position information, projects it onto the image, and determines the vehicle passing area.
The detection area fusion module constrains the detection angle of the laser radar within the view angle range of the camera to determine a detection area for image-based vehicle recognition, projects the object distance information detected by the laser radar onto the visual image, and searches the region of interest for vehicle recognition on the image with this as the base point. The structural feature fusion recognition module, after the detection area of a vehicle in the image is determined, guides the extraction of vehicle-tail contour features in the guide image according to the contour change direction of the point cloud information in the laser point cloud set, then fuses texture features to identify the vehicle ahead, and verifies the image recognition result against the point cloud structure and spatial position of the laser radar.
Example 3
This embodiment provides a detection device applying the vehicle detection method based on sensor multivariate information fusion of embodiment 1, specifically comprising a camera data processing module, a laser radar data processing module, a detection area fusion module, a feature analysis module, and a recognition and positioning module.
The camera data processing module preliminarily determines the detection range of the laser radar, reducing the data volume and processing time. The laser radar data processing module extracts the ground point cloud according to the radar installation height, extracts the road boundaries with wavelet analysis, and obtains the vehicle passing area. The detection area fusion module uses the camera capture view angle to set a reasonable search range for the laser radar, further partitions the result, constrains the laser radar detection area to the vehicle passing area as far as possible, and projects the point cloud coordinates collected by the radar onto the image to obtain the region of interest for vehicle recognition. The feature analysis module verifies the image recognition result against the point cloud structure and spatial distribution of the laser radar, eliminating false and missed detections in the image and improving recognition accuracy. The recognition and positioning module identifies and positions vehicles within the detection range using the recognition capability of the camera and the positioning capability of the laser radar.
Example 4
This embodiment provides a computer terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, it realizes the steps of the vehicle detection method based on sensor multivariate information fusion of embodiment 1.
When the vehicle detection method based on sensor multivariate information fusion is applied, it can be applied in software form, for example as a standalone program installed on a computer terminal; the computer terminal may be a computer, a smartphone, a control system, or other Internet-of-Things equipment. The method can also be designed as an embedded program installed on a computer terminal, such as a single-chip microcomputer.
Example 5
This embodiment provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the program implements the steps of the vehicle detection method based on sensor multivariate information fusion of embodiment 1. When the method is applied in software form, it may be designed as an independently running program on a computer-readable storage medium; the medium may be a USB flash drive, a medium in the form of a USB security key (U shield), or a USB drive that launches the whole method through an external trigger.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A vehicle detection method based on sensor multivariate information fusion is characterized by comprising the following steps:
s1: converting a camera coordinate system and a laser radar coordinate system of a vehicle into a detection coordinate system of the vehicle;
S2: preliminarily determining the camera detection area according to the image vanishing line and the camera acquisition view angle, then screening the laser radar point cloud data, and finally detecting mutation positions in the laser radar data return values, extracting road boundary position information, projecting it onto the image, and determining the vehicle passing area;
S3: constraining the detection angle of the laser radar within the view angle range of the camera to determine a detection area for image-based vehicle recognition, then projecting the object distance information detected by the laser radar onto the visual image, and searching the region of interest for vehicle recognition on the image with this as the base point;
S4: after the detection area of a vehicle in the image is determined, guiding the extraction of vehicle-tail contour features in the guide image according to the contour change direction of the point cloud information in the laser point cloud set, then fusing texture features to identify the vehicle ahead, and finally verifying the image recognition result against the point cloud structure and spatial position of the laser radar.
2. The method of claim 1, wherein a point in space is defined as P_l(x_l, y_l, z_l) in the laser radar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system; the conversion relations from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are respectively:
P_p = P_l·R_l + B_l
P_p = P_c·R_c + B_c
wherein B_l and B_c are the translation matrices, and R_l and R_c the rotation matrices, from the laser radar coordinate system and the camera coordinate system, respectively, to the detection coordinate system.
3. The vehicle detection method based on sensor multivariate information fusion as claimed in claim 2, wherein, when the laser radar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of a laser beam's ground point cloud are:
x_l = ρ·cosω·cosα_l,  y_l = ρ·cosω·sinα_l,  z_l = −ρ·sinω
in the formula, x_l, y_l, z_l are the coordinates of an arbitrary laser beam's return, α_l is the search angle of that beam, ρ is the detection distance of the laser radar, and ω is the emission angle of the laser radar (taken positive below the horizontal for a ground-scanning beam);
after the installation height is fixed, the ground point cloud coordinates of a laser beam become:
x_l = (H·cosα_l)/tanω,  y_l = (H·sinα_l)/tanω,  z_l = −H
in the formula, H represents the installation height of the laser radar;
the source data of the laser radar are then rotation-corrected, and the converted data are compared with the coordinate point obtained from the height:
|P_l·R_l − P| < ε
in the formula, P represents the coordinate point obtained from the height, and ε is the tolerance within which a return is classified as a ground point.
4. The method for detecting vehicles based on sensor multivariate information fusion as claimed in claim 3, wherein in step S3, the data return value of each point in the polar coordinate system in the detection range of the laser radar is:
P(ρ_n, α, ω_n),  n = 1, 2, 3, …
wherein α is the search angle and n is the beam index of the laser radar; ρ_n is the detection distance of beam n, and ω_n is the emission angle of beam n. The laser radar rotates about the z-axis, so its beam search angle ranges over (0°, 360°); with a camera view angle of M, the search angle of the laser radar is corrected to (−0.5M, 0.5M).
5. The method as claimed in claim 4, wherein the point cloud data of the laser radar are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the primary segmentation result to determine the vehicle passing area.
6. The vehicle detection method based on sensor multivariate information fusion as claimed in claim 5, wherein, when an obstacle appears within the radar detection range, the mutation position is detected: the laser point cloud data are extracted, the distance data received by the laser radar are fitted with a 6th-order Daubechies (db6) wavelet function, the wavelet accurately localizes positions where the data change abruptly, boundary feature points are extracted, and the series of feature points is fitted by least squares to obtain the vehicle passing area.
7. The vehicle detection method based on sensor multivariate information fusion as claimed in claim 6, wherein machine learning is applied to perform vehicle recognition on the determined vehicle detection area: the ground positions of vehicles on the left and right are located, the vehicle detection area is determined visually, the search frame is gradually enlarged, and vehicles are recognized by searching from the two sides toward the middle and from bottom to top.
8. The vehicle detection method based on sensor multivariate information fusion as claimed in claim 7, wherein, when the object distance information is projected onto the visual image, the conversion relation between the camera coordinate system and the image coordinate system is derived as follows:
for any point P(X_c, Y_c, Z_c) in the camera coordinate system, its projection position in the image coordinate system is calculated using triangle similarity:
x = f·X_c/Z_c,  y = f·Y_c/Z_c
wherein f is the focal length of the camera, (O-xy) is the image coordinate system, and P(x, y) is the projection of P(X_c, Y_c, Z_c) into that coordinate system;
define (O_uv-uv) as the pixel coordinate system; the conversion from the camera coordinate system to the pixel coordinate system is:
u = f_x·X_c/Z_c + u_0,  v = f_y·Y_c/Z_c + v_0
where f_x = f/dx and f_y = f/dy are the focal lengths in pixels and (u_0, v_0) is the principal point; according to the conversion formula from the laser radar coordinate system to the camera coordinate system, the coordinate conversion formula from the point cloud to the image is deduced as:
Z_c·[u, v, 1]^T = K·(R·P_l + t)
where K is the camera intrinsic matrix and R, t are the rotation and translation from the laser radar coordinate system to the camera coordinate system;
and projecting the object distance information detected by the laser radar onto the visual image according to a coordinate conversion formula from the point cloud to the image.
9. The vehicle detection method based on sensor multivariate information fusion as claimed in claim 8, wherein the space covered by the laser radar around the vehicle body is defined as U = T_{L,M,R}(x, y, z), with L, M, R denoting the left, middle, and right detection regions, and a point cloud set formed by acting on an object needs to satisfy the following conditions:
(1) for adjacent points p_i and p_(i+1) produced by the same laser beam acting on the object, ‖p_(i+1) − p_i‖ < δ, where δ is a small continuity threshold, i.e., the point cloud is continuous over the object surface;
(2) for objects detected on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces joined by an arc transition; an object directly ahead presents a single plane connected to arcs at both ends.
10. A detection apparatus applying the sensor multivariate information fusion-based vehicle detection method according to any one of claims 1-9, characterized in that it comprises:
a conversion module for converting a camera coordinate system and a lidar coordinate system of a vehicle into a detection coordinate system of the vehicle;
an initial data processing module for processing the initial data of the camera and the laser radar; the initial data processing module preliminarily determines the camera detection area according to the image vanishing line and the camera acquisition view angle, then screens the laser radar point cloud data, and finally detects mutation positions in the laser radar data return values, extracts road boundary position information, projects it onto the image, and determines the vehicle passing area;
a detection area fusion module for constraining the detection angle of the laser radar within the view angle range of the camera to determine a detection area for image-based vehicle recognition, projecting the object distance information detected by the laser radar onto the visual image, and searching the region of interest for vehicle recognition on the image with this as the base point;
a structural feature fusion recognition module for, after the detection area of a vehicle in the image is determined, guiding the extraction of vehicle-tail contour features in the guide image according to the contour change direction of the point cloud information in the laser point cloud set, fusing texture features to identify the vehicle ahead, and verifying the image recognition result against the point cloud structure and spatial position of the laser radar.
CN202111390381.5A 2021-11-23 2021-11-23 Vehicle detection method and detection device based on sensor multivariate information fusion Pending CN114118252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111390381.5A CN114118252A (en) 2021-11-23 2021-11-23 Vehicle detection method and detection device based on sensor multivariate information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111390381.5A CN114118252A (en) 2021-11-23 2021-11-23 Vehicle detection method and detection device based on sensor multivariate information fusion

Publications (1)

Publication Number Publication Date
CN114118252A true CN114118252A (en) 2022-03-01

Family

ID=80439455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111390381.5A Pending CN114118252A (en) 2021-11-23 2021-11-23 Vehicle detection method and detection device based on sensor multivariate information fusion

Country Status (1)

Country Link
CN (1) CN114118252A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115267815A (en) * 2022-06-10 2022-11-01 合肥工业大学 Road side laser radar group optimization layout method based on point cloud modeling
CN115273547A (en) * 2022-07-26 2022-11-01 上海工物高技术产业发展有限公司 Road anti-collision early warning system
CN115690261A (en) * 2022-12-29 2023-02-03 安徽蔚来智驾科技有限公司 Parking space map building method based on multi-sensor fusion, vehicle and storage medium
CN116580098A (en) * 2023-07-12 2023-08-11 中科领航智能科技(苏州)有限公司 Cabin door position detection method for automatic leaning machine system
CN116580098B (en) * 2023-07-12 2023-09-15 中科领航智能科技(苏州)有限公司 Cabin door position detection method for automatic leaning machine system
CN117315613A (en) * 2023-11-27 2023-12-29 新石器中研(上海)科技有限公司 Noise point cloud identification and filtering method, computer equipment, medium and driving equipment
CN117809440A (en) * 2024-03-01 2024-04-02 江苏濠汉信息技术有限公司 Tree obstacle mountain fire monitoring and early warning method and system applying three-dimensional ranging
CN117809440B (en) * 2024-03-01 2024-05-10 江苏濠汉信息技术有限公司 Tree obstacle mountain fire monitoring and early warning method and system applying three-dimensional ranging

Similar Documents

Publication Publication Date Title
WO2021223368A1 (en) Target detection method based on vision, laser radar, and millimeter-wave radar
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
JP6825569B2 (en) Signal processor, signal processing method, and program
CN110443225B (en) Virtual and real lane line identification method and device based on feature pixel statistics
CN113156421A (en) Obstacle detection method based on information fusion of millimeter wave radar and camera
US8867792B2 (en) Environment recognition device and environment recognition method
Daigavane et al. Road lane detection with improved canny edges using ant colony optimization
CN104778444A (en) Method for analyzing apparent characteristic of vehicle image in road scene
WO2022151664A1 (en) 3d object detection method based on monocular camera
CN110197173B (en) Road edge detection method based on binocular vision
Ponsa et al. On-board image-based vehicle detection and tracking
Lin et al. Construction of fisheye lens inverse perspective mapping model and its applications of obstacle detection
CN115327572A (en) Method for detecting obstacle in front of vehicle
WO2023207845A1 (en) Parking space detection method and apparatus, and electronic device and machine-readable storage medium
CN107220632B (en) Road surface image segmentation method based on normal characteristic
Yoneda et al. Simultaneous state recognition for multiple traffic signals on urban road
CN114740493A (en) Road edge detection method based on multi-line laser radar
Zhang et al. Rvdet: Feature-level fusion of radar and camera for object detection
Hwang et al. Vision-based vehicle detection and tracking algorithm design
CN112990049A (en) AEB emergency braking method and device for automatic driving of vehicle
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
Álvarez et al. Perception advances in outdoor vehicle detection for automatic cruise control
Xiong et al. A 3d estimation of structural road surface based on lane-line information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination