CN111414857A - Front vehicle detection method based on vision multi-feature fusion - Google Patents

Front vehicle detection method based on vision multi-feature fusion

Info

Publication number
CN111414857A
CN111414857A (application CN202010198474.7A)
Authority
CN
China
Prior art keywords
vehicle
area
shadow
image
tail lamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010198474.7A
Other languages
Chinese (zh)
Other versions
CN111414857B (en)
Inventor
陈学文
裴月莹
李亚盼
蓝富琪
马天放
于添
佟佳颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning University of Technology
Original Assignee
Liaoning University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University of Technology filed Critical Liaoning University of Technology
Priority to CN202010198474.7A priority Critical patent/CN111414857B/en
Publication of CN111414857A publication Critical patent/CN111414857A/en
Application granted granted Critical
Publication of CN111414857B publication Critical patent/CN111414857B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/181: Segmentation; Edge detection involving edge growing; involving edge linking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30236: Traffic on road, railway or crossing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vision-based multi-feature fusion front vehicle detection method, which comprises the following steps. Step 1: acquiring the contour information of the taillight pair of the front vehicle, determining the centroid of each taillight, and, taking the center point between the centroids as a reference point, determining the taillight mark area X_T according to the width and height of the taillight mark area image. Step 2: acquiring the gray value of the vehicle shadow area, and obtaining the vehicle-bottom shadow image area according to this gray value; obtaining the shadow mark area height; and, taking the center point of the lower edge line of the vehicle-bottom shadow image area as a reference point, determining the shadow mark area X_S according to the width and height of the shadow mark area image. Step 3: acquiring the target detection area of the front vehicle: X_w = k_s · X_S + k_T · X_T, where X_w is the target detection area of the front vehicle and k_s, k_T are weight coefficients. Detecting the area where the front vehicle exists from the composite taillight-pair and shadow features improves detection accuracy and overcomes the missed or failed detections of single-feature methods.

Description

Front vehicle detection method based on vision multi-feature fusion
Technical Field
The invention relates to the field of automotive safety-assisted driving control, and in particular to a vision-based multi-feature fusion front vehicle detection method.
Background
Automobile safety-assisted driving systems (ADAS) are widely used to improve driving safety; examples include the lane keeping assist system (LKAS), the forward collision warning system (FCW), the automatic emergency collision-avoidance system (AEB), and the intelligent cruise control system (IACC).
With the popularization of cost-effective image sensors and the increasing maturity of image processing technology, vehicle detection and identification methods based on machine vision are widely used in driving assistance systems. Examples include methods using the linear geometric features of the vehicle or the vehicle's symmetry, and computer vision methods using special hardware such as color CCD and binocular CCD cameras. In addition, there are methods based on optical flow, template matching, support vector machines, neural network training, multi-sensor information fusion, and the like. Most of these methods rely on a single vehicle feature and use prior knowledge to determine the area where a vehicle exists or to judge whether a vehicle is present. Such methods adapt poorly to the external environment and are easily constrained by weather factors, which degrades detection accuracy.
Disclosure of Invention
The invention designs and develops a vision-based multi-feature fusion front vehicle detection method, which detects the area where the front vehicle exists from the composite taillight-pair and shadow features, improving detection accuracy and overcoming the missed or failed detections of single-feature methods.
The technical scheme provided by the invention is as follows:
a vision-based multi-feature fusion front vehicle detection method comprises the following steps:
step 1: acquiring the profile information of the tail lamp of the front vehicle, and determining the mass center of each tail lamp to meet the following requirements:
W_min ≤ w_t ≤ W_max

|h_l - h_r| ≤ h_c

a_l ≤ S_l / S_r ≤ a_r

where w_t is the width between the taillight pair in the image; W_min and W_max are respectively the minimum and maximum pixel values of the width between the taillight pair; h_l and h_r are respectively the heights of the left and right taillights in the image; h_c is the height-difference threshold of the left and right taillights; S_l and S_r are respectively the areas of the left and right taillights in the image; a_l and a_r are respectively the minimum and maximum values of the area ratio of the left to the right taillight in the image.
Acquiring the height of a tail lamp marking area:
f_h1 = α_1 · f_w1 · V_h / V_w

where f_w1 is the taillight mark area image width; f_h1 is the taillight mark area image height; V_w is the actual width of the vehicle; V_h is the actual height of the vehicle; and α_1 is the scale factor of the taillight mark area image;

determining the taillight mark area X_T according to the width and the height of the taillight mark area image, taking the center point between the centroids as a reference point;
Step 2: obtaining a gray value of a vehicle shadow area:
S_TH = G_μmin - k_σ · G_σmin

G_μ = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} g(i, j)

G_σ = √( (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} (g(i, j) - G_μ)² )

where S_TH is the gray value of the vehicle shadow area; k_σ is a variance proportionality coefficient; G_μmin is the minimum mean gray value of the road-surface pixels; G_σmin is the mean square error corresponding to the minimum mean gray value; g(i, j) is the gray value of pixel (i, j); and M, N are the length and width of the image;
obtaining a vehicle bottom shadow image area according to the gray value of the vehicle shadow area;
obtaining shadow mark area height:
f_h2 = α_2 · f_w2 · V_h / V_w

where f_w2 is the shadow mark area image width; f_h2 is the shadow mark area image height; and α_2 is the scale factor of the shadow mark area image;

determining the shadow mark area X_S according to the width and the height of the shadow mark area image, taking the center point of the lower edge line of the vehicle-bottom shadow image area as a reference point;
Step 3: acquiring the target detection area of the front vehicle:

X_w = k_s · X_S + k_T · X_T

where X_w is the target detection area of the front vehicle, and k_s, k_T are weight coefficients.
Preferably, the method further comprises:

determining the longitudinal distance to the front vehicle according to the target detection area:

Z = f · V_w / f_ω

where Z is the longitudinal distance to the front vehicle; f is the effective focal length of the vision camera; V_w is the actual width of the vehicle; and f_ω is the width of the target detection area in the image.
Preferably, in step 1, a detection image containing the contour information of the front vehicle's taillights is acquired by Canny edge detection, and the taillight contour information is extracted by a morphological closing operation.

Preferably, in step 2, the image is segmented according to the gray value of the vehicle shadow area to obtain a binary image, and opening and closing operations are applied to obtain the vehicle-bottom shadow image area.

Preferably, the shadow mark area image width and the taillight mark area image width are both the taillight width.

Preferably, the taillight contour information of the front vehicle is processed by a morphological closing operation using a 6 × 6 square structuring element.

Preferably, the vehicle-bottom shadow image area is composed of the shadow of the vehicle on the ground, the left and right rear tires of the vehicle, and the rear bumper of the vehicle.
The invention has the following beneficial effects:
the vision-based multi-feature fusion front vehicle detection method designed and developed by the invention improves the detection precision based on the existing region of the front vehicle detected by the tail lamp pair and the shadow composite feature, and solves the defect that the detection is missed or can not be detected by a single feature.
Drawings
Fig. 1 is a rear tail lamp junction diagram of a preceding vehicle.
Fig. 2 is a geometric model diagram of monocular visual ranging.
FIG. 3 is a schematic diagram of determining a longitudinal distance based on an image plane vehicle width.
Detailed Description
The present invention is described in further detail below with reference to the drawings, so that those skilled in the art can implement it by referring to the description.
The invention designs and develops a vision-based multi-feature fusion front vehicle detection method, which comprises the following steps:
step 1: vehicle taillight pair information acquisition
The contour information of the vehicle taillights is contained in the detection image obtained after Canny edge detection. The taillight contours must be extracted from this image, and a morphological closing operation with a 6 × 6 square structuring element is applied to eliminate noise points.
Fig. 1 is a rear tail lamp junction diagram of a preceding vehicle. As can be seen from fig. 1, the tail light pair contour information can be extracted with the following constraints:
W_min ≤ w_t ≤ W_max    (1)

|h_l - h_r| ≤ h_c    (2)

a_l ≤ S_l / S_r ≤ a_r    (3)

where w_t is the width between the taillight pair in the image, bounded by the pixel values W_min and W_max; h_l, h_r and S_l, S_r are the heights and areas of the left and right taillights; h_c is the height-difference threshold; and a_l, a_r bound the left-to-right area ratio.

The centroids of the taillight areas can be determined under the constraints of formulas (1) to (3). Connecting the centroids and taking their distance then determines the left and right edges of the vehicle in the figure, i.e., the vehicle width, which equals the distance between the centroids of the left and right taillights plus the width of one taillight.
Step 2: vehicle shadow area acquisition
The vehicle shadow area is composed of the shadow of the vehicle on the ground, the left and right rear tires of the vehicle, the rear bumper of the vehicle, and the like. In general, the gray value of the vehicle shadow area lies in the minimum-value range of the entire road-surface image. For the image to be detected, the mean G_μ and the mean square error G_σ of the road-surface pixel gray values are calculated, as shown in formulas (4) and (5).
G_μ = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} g(i, j)    (4)

G_σ = √( (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} (g(i, j) - G_μ)² )    (5)

where M, N are the length and width of the image and g(i, j) is the gray value of pixel (i, j).
The finally selected threshold is determined by the minimum gray mean and the corresponding mean square error, and the specific calculation formula is as follows:
S_TH = G_μmin - k_σ · G_σmin    (6)

where S_TH is the gray value of the vehicle shadow area; k_σ is a variance proportionality coefficient; G_μmin is the minimum mean gray value of the road-surface pixels; and G_σmin is the mean square error corresponding to the minimum mean gray value.
Image segmentation is then performed according to the gray value of the vehicle shadow area. In the resulting binary image of the vehicle-bottom shadow, noise may merge with the shadow around its edges, and the shadow itself may be discontinuous or adjoin non-shadow regions; the segmented bottom-shadow area must therefore be separated from background noise and non-shadow regions as far as possible, so opening and closing operations are applied to the bottom-shadow area to obtain the vehicle-bottom shadow image area.
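A minimal sketch of this shadow segmentation follows; scanning horizontal strips of the lower (road) half of the frame is an assumed reading of "minimum gray value mean," and k_σ, the strip height, and the 3 × 3 structuring element are placeholder choices:

```python
# Illustrative sketch of step 2, not the patent's reference implementation.
import numpy as np
import cv2

def underbody_shadow_mask(gray, k_sigma=1.5, strip=16):
    M, N = gray.shape
    g_mu_min, g_sigma_min = np.inf, 0.0
    # Search the lower (road) half of the frame for the darkest strip:
    # G_mu_min is its mean, G_sigma_min its mean square error, Eqs. (4)-(5)
    for top in range(M // 2, M, strip):
        band = gray[top:top + strip, :].astype(np.float64)
        mu, sigma = band.mean(), band.std()
        if mu < g_mu_min:
            g_mu_min, g_sigma_min = mu, sigma
    s_th = g_mu_min - k_sigma * g_sigma_min          # Eq. (6)

    # Pixels darker than S_TH are candidate underbody shadow
    mask = (gray < s_th).astype(np.uint8) * 255
    # Opening then closing separates the shadow blob from background noise
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask, s_th
```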
Step 3: vehicle detection area integration based on the taillight and shadow features
Based on the shadow feature, the shadow area at the bottom of the target vehicle is obtained, and the coordinate points of the area where the vehicle exists are marked in the image. Taking the center point of the lower edge line of the shadow area as the basic reference point and combining it with the vehicle taillight width, the width of the mark frame is determined; the height coordinate of the vehicle in the image is then obtained from the proportional relation between the actual vehicle dimensions and the image size, as shown in formula (7). Thus, taking the center point of the lower edge line of the vehicle-bottom shadow image area as the reference point, the rectangular mark area X_S of the target vehicle (the shadow mark area) is obtained from the width and height of the shadow mark area image.

f_h2 = α_2 · f_w2 · V_h / V_w    (7)

where f_w2 is the shadow mark area image width; f_h2 is the shadow mark area image height; α_2 is the scale factor of the shadow mark area image; V_w is the actual width of the vehicle; and V_h is the actual height of the vehicle.
Based on the taillight pair information, the centroid coordinates of the taillights can be obtained. The midpoint between the centroids is selected as the reference point, the width of the mark frame is determined from the taillight width, and the vehicle height coordinate in the image is obtained from the proportional relation between the actual vehicle dimensions and the image size, as shown in formula (8). This gives the rectangular mark area X_T of the target vehicle (the taillight mark area) based on the taillight pair features.

f_h1 = α_1 · f_w1 · V_h / V_w    (8)

where f_w1 is the taillight mark area image width; f_h1 is the taillight mark area image height; V_w is the actual width of the vehicle; V_h is the actual height of the vehicle; and α_1 is the scale factor of the taillight mark area image.
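A minimal sketch of the two mark-area constructions of formulas (7) and (8); the vehicle dimensions follow the embodiment below, while α_1 = α_2 = 1 and the vertical anchoring of X_T on the centroid midpoint are assumptions for illustration:

```python
# Illustrative sketch of the X_S / X_T rectangle constructions, Eqs. (7)-(8).
V_W, V_H = 1925.0, 1720.0      # actual vehicle width and height (mm)
ALPHA1 = ALPHA2 = 1.0          # assumed scale factors

def taillight_region(cx_mid, cy_mid, fw1):
    """X_T: centered on the midpoint between the taillight centroids."""
    fh1 = ALPHA1 * fw1 * V_H / V_W          # Eq. (8): image height from width
    return (cx_mid - fw1 / 2, cy_mid - fh1 / 2, fw1, fh1)   # (x, y, w, h)

def shadow_region(cx_bottom, cy_bottom, fw2):
    """X_S: anchored on the midpoint of the shadow's lower edge line."""
    fh2 = ALPHA2 * fw2 * V_H / V_W          # Eq. (7)
    return (cx_bottom - fw2 / 2, cy_bottom - fh2, fw2, fh2)
```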
Step 4: for the shadow feature and the taillight pair feature, the rectangular mark areas of the target vehicle (i.e., the shadow mark area and the taillight mark area) are obtained respectively, and these mark areas are integrated in light of actual conditions to determine a unique target detection area, as shown in formula (9).

X_w = k_s · X_S + k_T · X_T    (9)

where X_w is the target detection area of the front vehicle, and k_s, k_T are weight coefficients.
The weights can, of course, be chosen according to actual needs. Detecting vehicles from the composite shadow and taillight-pair features, however, overcomes the false detections that a taillight pair alone produces on vehicles travelling side by side.
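A minimal sketch of the fusion of formula (9), with the weight choices taken from the embodiment described later (k_s = k_T = 0.5 by day; k_T = 1, k_s = 0 at night or in rain and snow); the example rectangles are placeholders:

```python
# Illustrative sketch of Eq. (9): the two rectangles are fused
# coordinate-wise with condition-dependent weights from the embodiment.
def fuse_regions(x_s, x_t, shadow_visible=True):
    # Daytime: both features usable -> k_s = k_T = 0.5
    # Night / rain / snow: shadow unreliable -> k_T = 1, k_s = 0
    k_s, k_t = (0.5, 0.5) if shadow_visible else (0.0, 1.0)
    return tuple(k_s * s + k_t * t for s, t in zip(x_s, x_t))

# Example: fuse two (x, y, w, h) rectangles found by the two features
x_w = fuse_regions((100.0, 80.0, 123.0, 109.9), (102.0, 78.0, 128.7, 115.0))
```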
Step 5: determining the longitudinal distance to the front vehicle based on the extraction results of the taillight and shadow features (namely, the unique target detection area).
Although a vehicle detection method based on monocular vision can effectively detect the target vehicle ahead, it cannot by itself acquire information such as the longitudinal distance from the front target vehicle to the host vehicle or the target vehicle's speed. Distance-keeping and collision-avoidance systems therefore usually require an additional ranging sensor such as radar, which increases system cost and the complexity of information processing.
With monocular vision combined with the composite vehicle shadow area and taillight pair features, the width of the vehicle image can be acquired accurately; using this vehicle width parameter, the longitudinal distance from the front target vehicle to the host vehicle can be estimated by filtering, overcoming the inability of plain monocular vision to measure distance.
From the monocular visual ranging geometry (Fig. 2), the longitudinal distance Z between a point P on the road ahead and the lens center can be expressed by formula (10):

Z = f · h / (y - y_0)    (10)

where h is the installation height of the CCD camera; f is the effective focal length of the CCD camera; y is the y-axis component of the projection of the target point P on the CCD image plane (in image-plane pixel coordinates); and y_0 is the intersection of the optical axis with the image plane (generally y_0 = 0).
As formula (10) shows, if point P is replaced by a target vehicle in front of the host vehicle, the y value represents the height position of the target vehicle in the image plane. The width of the front target vehicle in the image plane obeys the same proportional relation as formula (10), as shown in Fig. 3; that is, the longitudinal distance Z between the front target vehicle and the host vehicle can be determined from the vehicle width in the image together with the actual vehicle width, as expressed by formula (11):

Z = f · V_w / f_w    (11)

where f_w is the width of the target vehicle in the image plane and V_w is the actual width of the vehicle.
In formula (11), the parameters f and V_w can be obtained by actual measurement, while f_ω is accurately detected as the width of the unique target detection area obtained by combining the vehicle shadow area and taillight pair composite features under monocular vision. The longitudinal distance from the front target vehicle to the host vehicle can therefore be determined by accurately obtaining the vehicle width in the image plane. In addition, if parameters such as the longitudinal speed and acceleration of the front target vehicle are needed, a Kalman filter relating longitudinal distance, speed, and acceleration can be built on the observed distance parameter obtained in the invention to estimate them; that content is not repeated here.
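A minimal sketch of the ranging relations of formulas (10) and (11); the focal length, camera height, and detected width used in the example call are placeholders, not calibration data from the patent:

```python
# Illustrative sketch of Eqs. (10)-(11).
def distance_from_row(f_px, cam_height_m, y_px, y0_px=0.0):
    """Eq. (10): range of a ground point from its image row."""
    return f_px * cam_height_m / (y_px - y0_px)

def distance_from_width(f_px, real_width_m, image_width_px):
    """Eq. (11): range from the detected vehicle width in the image plane."""
    return f_px * real_width_m / image_width_px

z = distance_from_width(f_px=800.0, real_width_m=1.925, image_width_px=128.7)
# z is roughly 12 m for these placeholder values
```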
Detection using the taillight feature alone
The size of the imported image is 360 × 240 (image sizes and adjustment values are in pixels; actual vehicle dimensions are in millimeters, the same below, and not repeated), and the actual width and height of the car are 1925 × 1720. The width of the rectangular frame, i.e., the distance between the taillight pair, ranges from 20 to 120 pixels, and the frame height is calculated from the frame width in the image and the actual vehicle dimensions (see formula 8). Taking the center point of the taillight pair as the center, the frame extends to the left and right by (taillight pair distance + single taillight width)/2, with the taillight pair lying at half the frame height in the image; the resulting rectangular frame size is 128.7 × 115.
Detection using the shadow feature alone
The size of the imported image is 360 × 240 and the actual size of the car is 1925 × 1720. The width of the vehicle-bottom region-of-interest rectangular frame ranges from 45 to 125. Specifically, taking the center point of the lower edge line of the vehicle-bottom rectangular frame as the center, the frame extends to the left and right by half of the frame width, and upward by the vehicle height in the image obtained from the proportional formula (see formula 7). The resulting rectangular frame size is 123 × 109.9.
Detection using the composite features
Finally, only one rectangular frame is determined. The vehicle width in the image obtained during taillight feature extraction (taillight pair distance + single taillight width) is used, and the vehicle height in the image is calculated from it by the proportional formula. Taking the center point of the taillight pair as the center and the lower edge line of the vehicle-bottom shadow area as the reference point (ordinate y), the abscissa x of the taillight pair center is translated to the left by half of the vehicle width in the image, giving x1; with (x1, y) known, the frame is obtained by extending the point (x1, y) upward by the vehicle height in the image. The resulting rectangular frame size is 128.7 × 115.
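As a worked check of these numbers, formula (8) with an assumed α_1 = 1 reproduces the stated frame size from the embodiment's vehicle dimensions:

```python
# Worked check, assuming alpha_1 = 1 in formula (8)
V_W, V_H = 1925.0, 1720.0            # actual car width and height (mm)
fw = 128.7                           # taillight-pair distance + one lamp width (px)
fh = fw * V_H / V_W                  # formula (8) with alpha_1 = 1
print(round(fw, 1), round(fh, 1))    # -> 128.7 115.0, matching the stated frame
```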
If the shadow exists, the bottom of the vehicle is determined taking the lower edge line of the shadow area as the reference point, and the width is determined by the taillight features; if no shadow exists, the method falls back to the taillight features alone to determine the rectangular frame.
In rain, snow, and at night, only the taillight feature extraction detection method is used, with the specific values consistent with taillight-only detection. X_w is determined according to the following judgment function: X_w = k_s · X_S + k_T · X_T
Wherein: xT-a vehicle tail light feature; xS-vehicle shadow features; k is a radical ofs、kT-a weighting factor (k) for boths+kT=1)。
Under normal daytime driving conditions, both the taillights and the shadow of the vehicle can be detected, so k_s = k_T = 0.5 is set; at night and in severe weather such as rain and snow, the shadow features cannot be obtained accurately, so k_T = 1 and k_s = 0 are set.
With the above data, the taillight feature information at the rear of the vehicle can be reliably detected after the opening and closing operations and the taillight-pair constraint-based feature extraction, and the region-of-interest rectangular frame in which a vehicle may exist can be specified from the vehicle width fixed by the taillight-pair centroids. Likewise, the vehicle-bottom shadow feature can be reliably detected under the shadow gray-threshold condition, and the corresponding region-of-interest rectangular frame can be determined, so candidate vehicle regions can be accurately marked from the shadow gray-value feature. The detection results show that the candidate vehicle areas determined by the taillight pair and by the shadow composite feature are basically consistent. In snow and in night driving environments, where the vehicle-bottom shadow is lost, the vehicle existence region extracted from the composite taillight-pair and shadow features relies mainly on the taillight pair information for the final result. In the daytime, in both sunny and cloudy environments, the shadow area exists and the vehicle detection area can be obtained from the two regions together. This also illustrates that vehicle detection based on multi-feature information can greatly reduce the missed or failed detections of single-feature methods.
The vision-based multi-feature fusion front vehicle detection method designed and developed by the invention detects the area where the front vehicle exists from the composite taillight-pair and shadow features, improving detection accuracy and overcoming the missed or failed detections of single-feature methods.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in various fields to which it pertains, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the specific details shown and described herein, without departing from the general concept defined by the appended claims and their equivalents.

Claims (7)

1. A vision-based multi-feature fusion front vehicle detection method is characterized by comprising the following steps:
step 1: acquiring the profile information of the tail lamp of the front vehicle, and determining the mass center of each tail lamp to meet the following requirements:
W_min ≤ w_t ≤ W_max

|h_l - h_r| ≤ h_c

a_l ≤ S_l / S_r ≤ a_r

where w_t is the width between the taillight pair in the image; W_min and W_max are respectively the minimum and maximum pixel values of the width between the taillight pair; h_l and h_r are respectively the heights of the left and right taillights in the image; h_c is the height-difference threshold of the left and right taillights; S_l and S_r are respectively the areas of the left and right taillights in the image; a_l and a_r are respectively the minimum and maximum values of the area ratio of the left to the right taillight in the image;
acquiring the height of a tail lamp marking area:
f_h1 = α_1 · f_w1 · V_h / V_w

where f_w1 is the taillight mark area image width; f_h1 is the taillight mark area image height; V_w is the actual width of the vehicle; V_h is the actual height of the vehicle; and α_1 is the scale factor of the taillight mark area image;

determining the taillight mark area X_T according to the width and the height of the taillight mark area image, taking the center point between the centroids as a reference point;
Step 2: obtaining a gray value of a vehicle shadow area:
S_TH = G_μmin - k_σ · G_σmin

G_μ = (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} g(i, j)

G_σ = √( (1/(M·N)) · Σ_{i=1..M} Σ_{j=1..N} (g(i, j) - G_μ)² )

where S_TH is the gray value of the vehicle shadow area; k_σ is a variance proportionality coefficient; G_μmin is the minimum mean gray value of the road-surface pixels; G_σmin is the mean square error corresponding to the minimum mean gray value; g(i, j) is the gray value of pixel (i, j); and M, N are the length and width of the image;
obtaining a vehicle bottom shadow image area according to the gray value of the vehicle shadow area;
obtaining shadow mark area height:
f_h2 = α_2 · f_w2 · V_h / V_w

where f_w2 is the shadow mark area image width; f_h2 is the shadow mark area image height; and α_2 is the scale factor of the shadow mark area image;

determining the shadow mark area X_S according to the width and the height of the shadow mark area image, taking the center point of the lower edge line of the vehicle-bottom shadow image area as a reference point;
Step 3: acquiring the target detection area of the front vehicle:

X_w = k_s · X_S + k_T · X_T

where X_w is the target detection area of the front vehicle, and k_s, k_T are weight coefficients.
2. The vision-based multi-feature fusion forward vehicle detection method of claim 1, further comprising:
determining the longitudinal distance to the front vehicle according to the target detection area:
Z = f · V_w / f_ω

where Z is the longitudinal distance to the front vehicle; f is the effective focal length of the vision camera; V_w is the actual width of the vehicle; and f_ω is the width of the target detection area in the image.
3. The vision-based multi-feature fusion front vehicle detection method according to claim 1 or 2, characterized in that, in step 1, a detection image containing the contour information of the front vehicle's taillights is acquired by Canny edge detection, and the taillight contour information is determined by a morphological closing operation.
4. The vision-based multi-feature fusion front vehicle detection method according to claim 1 or 2, characterized in that, in step 2, the image is segmented according to the gray value of the vehicle shadow area to obtain a binary image, and opening and closing operations are applied to obtain the vehicle-bottom shadow image area.
5. The vision-based multi-feature fusion front vehicle detection method according to claim 1 or 2, characterized in that the shadow mark area image width and the taillight mark area image width are both the taillight width.
6. The vision-based multi-feature fusion front vehicle detection method according to claim 3, characterized in that the contour information of the front vehicle's taillights is determined by a morphological closing operation using a 6 × 6 square structuring element.
7. The vision-based multi-feature fusion forward vehicle detection method of claim 5, wherein the vehicle bottom shadow image area is composed of a shadow of the vehicle on the ground, left and right rear tires of the vehicle, and a rear bumper of the vehicle.
CN202010198474.7A 2020-03-20 2020-03-20 Front vehicle detection method based on vision multi-feature fusion Active CN111414857B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010198474.7A CN111414857B (en) 2020-03-20 2020-03-20 Front vehicle detection method based on vision multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010198474.7A CN111414857B (en) 2020-03-20 2020-03-20 Front vehicle detection method based on vision multi-feature fusion

Publications (2)

Publication Number Publication Date
CN111414857A true CN111414857A (en) 2020-07-14
CN111414857B CN111414857B (en) 2023-04-18

Family

ID=71491347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010198474.7A Active CN111414857B (en) 2020-03-20 2020-03-20 Front vehicle detection method based on vision multi-feature fusion

Country Status (1)

Country Link
CN (1) CN111414857B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140293052A1 (en) * 2011-07-08 2014-10-02 Bendix Commercial Vehicle Systems Llc Image-based vehicle detection and distance measuring method and apparatus
CN104866838A (en) * 2015-06-02 2015-08-26 南京航空航天大学 Vision-based automatic detection method for front vehicle
CN109190523A (en) * 2018-08-17 2019-01-11 武汉大学 A kind of automobile detecting following method for early warning of view-based access control model
CN110502971A (en) * 2019-07-05 2019-11-26 江苏大学 Road vehicle recognition methods and system based on monocular vision

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022148143A1 (en) * 2021-01-08 2022-07-14 华为技术有限公司 Target detection method and device
CN112766117A (en) * 2021-01-10 2021-05-07 哈尔滨理工大学 Vehicle detection and distance measurement method based on YOLOV4-tiny

Also Published As

Publication number Publication date
CN111414857B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
JP7291129B2 (en) Method and apparatus for recognizing and evaluating environmental impacts based on road surface conditions and weather
CN104011737B (en) Method for detecting mist
CN105835880B (en) Lane following system
JP6174975B2 (en) Ambient environment recognition device
CN104573646B Pedestrian detection method and system in front of a vehicle based on laser radar and binocular camera
US8244027B2 (en) Vehicle environment recognition system
CN109190523B (en) Vehicle detection tracking early warning method based on vision
CN104260723B Front vehicle motion state tracking and prediction apparatus and prediction method
CN110647850A (en) Automatic lane deviation measuring method based on inverse perspective principle
CN109829365B (en) Multi-scene adaptive driving deviation and turning early warning method based on machine vision
KR101968349B1 (en) Method for detecting lane boundary by visual information
CN101131321A (en) Real-time safe interval measurement method and device used for vehicle anti-collision warning
Liu et al. Development of a vision-based driver assistance system with lane departure warning and forward collision warning functions
CN113657265B (en) Vehicle distance detection method, system, equipment and medium
CN103171560A (en) Lane recognition device
CN111414857B (en) Front vehicle detection method based on vision multi-feature fusion
CN111723778B (en) Vehicle distance measuring system and method based on MobileNet-SSD
JP4296287B2 (en) Vehicle recognition device
CN106803066B (en) Vehicle yaw angle determination method based on Hough transformation
JPWO2019174682A5 (en)
CN114495066A (en) Method for assisting backing
CN111539278A (en) Detection method and system for target vehicle
CN116587978A (en) Collision early warning method and system based on vehicle-mounted display screen
CN113895439B (en) Automatic driving lane change behavior decision method based on probability fusion of vehicle-mounted multisource sensors
CN113486837B (en) Automatic driving control method for low-pass obstacle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant