CN109815812B - Vehicle bottom edge positioning method based on horizontal edge information accumulation - Google Patents

Vehicle bottom edge positioning method based on horizontal edge information accumulation Download PDF

Info

Publication number
CN109815812B
CN109815812B CN201811567600.0A CN201811567600A CN109815812B CN 109815812 B CN109815812 B CN 109815812B CN 201811567600 A CN201811567600 A CN 201811567600A CN 109815812 B CN109815812 B CN 109815812B
Authority
CN
China
Prior art keywords
rectangular frame
image
horizontal edge
vehicle
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811567600.0A
Other languages
Chinese (zh)
Other versions
CN109815812A (en
Inventor
于红绯
潘晓光
卢紫微
王宇彤
郭来德
魏海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Shihua University
Original Assignee
Liaoning Shihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Shihua University filed Critical Liaoning Shihua University
Priority to CN201811567600.0A priority Critical patent/CN109815812B/en
Publication of CN109815812A publication Critical patent/CN109815812A/en
Application granted granted Critical
Publication of CN109815812B publication Critical patent/CN109815812B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a vehicle bottom edge positioning method based on horizontal edge information accumulation, which comprises the following steps: in an image frame of a video, acquiring a rectangular frame containing a vehicle; calculating a horizontal edge strength response function of each rectangular frame and a scaling parameter and a translation parameter corresponding to each rectangular frame; for each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function; and for each rectangular frame in the current image, calculating the position of the lower bottom edge of the target according to the horizontal edge information accumulation function of the rectangular frame. According to the invention, through calculating the image zooming parameter and the translation parameter, multi-frame accumulation is carried out on the single-frame horizontal edge information, along with the motion of the vehicle, the background edge information is continuously weakened in the accumulation process, and the vehicle edge information is continuously strengthened in the accumulation process, so that the background interference is effectively overcome, and the accurate position of the lower bottom edge of the vehicle is finally obtained. Compared with the existing lower bottom edge positioning method based on the vehicle bottom shadow characteristics, the method can effectively overcome the complex background interference.

Description

Vehicle bottom edge positioning method based on horizontal edge information accumulation
Technical Field
The invention relates to the technical field of vehicle detection, in particular to a vehicle bottom edge positioning method based on horizontal edge information accumulation.
Background
The vision-based vehicle detection system has important application in the fields of automobile auxiliary driving, automatic driving and the like. The vehicle detection system based on the monocular camera is favored by various large vehicle factories due to the advantages of low cost, flexible installation, easy integration with other hardware equipment and the like. In a vehicle detection system based on a monocular camera, the real spatial position of a detected vehicle is often calculated according to the position of the lower bottom edge of the vehicle in an image, and then the relative position relationship between the vehicle and the detected vehicle is determined, so that the functions of collision early warning, vehicle following and the like are realized. It becomes very important to accurately locate the position of the lower edge of the vehicle in the image.
The existing lower bottom edge positioning method is often used for positioning by utilizing information such as symmetry of a vehicle, shadow of the bottom edge of the vehicle and the like, and is easily influenced by background and illumination change, so that the position of the lower bottom edge is not accurately calculated.
Disclosure of Invention
Aiming at the defects in the prior art, the technical problem to be solved by the invention is to provide a vehicle bottom edge positioning method based on horizontal edge information accumulation, wherein edge information which is possibly the bottom edge of a vehicle is stored in a single-frame image, and then the final position of the bottom edge of the vehicle is determined by utilizing multi-frame image information.
The technical scheme adopted by the invention for realizing the purpose is as follows: a vehicle bottom edge positioning method based on horizontal edge information accumulation comprises the following steps:
in an image frame of a video, acquiring a rectangular frame containing a vehicle;
calculating a horizontal edge strength response function of each rectangular frame and a scaling parameter and a translation parameter corresponding to each rectangular frame;
for each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function;
and for each rectangular frame in the current image, calculating the position of the lower bottom edge of the target according to the horizontal edge information accumulation function of the rectangular frame.
The method for acquiring the position of a rectangular frame containing a vehicle in an image frame of a video comprises the following steps:
collecting a vehicle picture as a positive sample in an off-line manner, collecting a background picture as a negative sample, and training a classifier;
in the image frame, traversing and searching each position of the image in a sliding window mode, calling a trained classifier to perform online detection, and reserving a rectangular frame detected as a vehicle;
clustering a plurality of rectangular frames detected by a classifier in the image to obtain rectangular frames containing vehicles; and only one rectangular frame is contained for the same vehicle.
Said calculation of the horizontal edge intensity response function, i.e. in a rectangular frame containing the vehicle, for the current frame image ItThe ith rectangular frame region in (x, y)
Figure BDA0001914686670000021
Calculating a horizontal edge intensity response function, comprising the steps of:
for the current frame image ItThe i-th ROI rectangular region in (x, y)
Figure BDA0001914686670000022
Calculating the absolute function of the partial derivatives in the x-direction
Figure BDA0001914686670000023
Figure BDA0001914686670000024
Wherein
Figure BDA0001914686670000025
To represent
Figure BDA0001914686670000026
Derivatives in the vertical direction within the image area;
computing horizontal edge intensity response function
Figure BDA0001914686670000027
Figure BDA0001914686670000028
Wherein the content of the first and second substances,
Figure BDA0001914686670000029
x is an integer which is the number of atoms,
Figure BDA00019146866700000210
is a rectangular frame area
Figure BDA00019146866700000211
The height of the rectangular area is
Figure BDA00019146866700000212
Pixel of width of
Figure BDA00019146866700000213
A pixel.
The calculation of the scaling parameters and the translation parameters corresponding to each rectangular frame, i.e. for the current frame image I in the rectangular frame containing the vehicletThe ith rectangular frame region in (x, y)
Figure BDA00019146866700000214
Computing rectangular box regions
Figure BDA00019146866700000215
In the current frame image It(x, y) and the previous frame image It-1Scaling parameters in (x, y)
Figure BDA00019146866700000216
And translation parameters
Figure BDA00019146866700000220
The method comprises the following steps:
step 1: in the area of the rectangular frame
Figure BDA00019146866700000218
Detecting characteristic points in the image and obtaining the image I of the previous framet-1Matching feature points in (x, y) to form N pairs of matched feature points;
step 2: calculating a scaling parameter
Figure BDA00019146866700000219
Figure BDA0001914686670000031
Wherein med is the median of the elements in the set, (x)c,m,yc,m) Is composed of
Figure BDA00019146866700000322
M characteristic point of (x)p,m,yp,m) Last frame image I matched with itt-1The characteristic points in (x, y), N is the total logarithm of the characteristic points;
and step 3: calculating target translation amount
Figure BDA0001914686670000032
Figure BDA0001914686670000033
Wherein the content of the first and second substances,
Figure BDA0001914686670000034
as a translation parameter
Figure BDA00019146866700000323
Two components of (a);
and 4, step 4: determining the residual error rm
Figure BDA0001914686670000035
At rmWhen the pixel is more than or equal to 0.5, (x)c,m,yc,m) As outliers, (x) are removedc,m,yc,m) And its matching point (x)p,m,yp,m);
And 5: after the outer points are removed, the step 2 and the step 3 are repeatedly executed for the remaining characteristic point pairs to obtain the final scaling parameters
Figure BDA0001914686670000036
And translation parameters
Figure BDA00019146866700000324
The horizontal edge information accumulation function is created or updated for each rectangular frame in the current image, namely for the ith rectangular frame region and the scaling parameter in the current image
Figure BDA0001914686670000037
And translation parameters
Figure BDA00019146866700000325
Finding a target rectangular frame area in the t-1 frame image matched with the target rectangular frame area, comprising the following steps of:
is provided with
Figure BDA0001914686670000038
For the ith rectangular frame area in the current image
Figure BDA0001914686670000039
The height of the rectangular frame area is
Figure BDA00019146866700000310
Pixel of width of
Figure BDA00019146866700000311
Pixel, calculating rectangular frame area
Figure BDA00019146866700000312
Corresponding to the rectangular region in the t-1 frame
Figure BDA00019146866700000313
Upper left vertex coordinates of
Figure BDA00019146866700000314
And height of rectangular region
Figure BDA00019146866700000315
And width
Figure BDA00019146866700000316
Figure BDA00019146866700000317
Figure BDA00019146866700000318
Figure BDA00019146866700000319
Wherein the content of the first and second substances,
Figure BDA00019146866700000320
as a translation parameter
Figure BDA00019146866700000326
The two components of (a) and (b),
Figure BDA00019146866700000321
is a scaling parameter;
according to
Figure BDA0001914686670000041
Finding the rectangular frame with the largest overlapping area at the time t-1 and meeting the predetermined rule as the rectangular frame region
Figure BDA0001914686670000042
Updating the corresponding horizontal edge information accumulation function of the matched target rectangular frame area to be used as the horizontal edge information accumulation function of the current rectangular frame;
if there is no rectangular frame region
Figure BDA0001914686670000043
The matched target rectangular frame area is the rectangular frame area
Figure BDA0001914686670000044
A horizontal edge information accumulation function is created.
The horizontal edge information accumulation function
Figure BDA0001914686670000045
The creation and update process of (1) is as follows:
a creating process:
Figure BDA0001914686670000046
wherein the content of the first and second substances,
Figure BDA0001914686670000047
for the purpose of the horizontal edge intensity response function,
Figure BDA0001914686670000048
b belongs to (1,2,3,. eta., f), and f is a history accumulated length;
and (3) updating:
Figure BDA0001914686670000049
for each rectangular frame in the current image, the position of the lower bottom edge of the target is calculated according to the horizontal edge information accumulation function, namely for the ith rectangular frame area in the current image, the horizontal edge information accumulation function is set as
Figure BDA00019146866700000410
The horizontal edge detection process is as follows:
first, a horizontal edge information accumulation function L is calculatedtColumn Peak function of (a, b)
Figure BDA00019146866700000411
Sum column peak mark function
Figure BDA00019146866700000412
Marking the row coordinate position of the possible horizontal edges in the image:
Figure BDA00019146866700000413
Figure BDA00019146866700000414
wherein the content of the first and second substances,
Figure BDA00019146866700000415
b belongs to (1,2,3,. eta., f), and f is a history accumulated length;
computing horizontal edge detection function
Figure BDA00019146866700000416
Figure BDA0001914686670000051
Wherein Th1And Th2Respectively a threshold value of accumulated frame number and a threshold value of horizontal edge response intensity;
is selected to satisfy
Figure BDA0001914686670000052
The maximum value a of the lower edge is used as the line position of the lower edge, so that the image line where the lower edge is located is determined, and the lower edge detection is completed.
The invention has the following advantages and beneficial effects:
1. according to the invention, through calculating the image zooming parameter and the translation parameter, multi-frame accumulation is carried out on the single-frame horizontal edge information, along with the motion of the vehicle, the background edge information is continuously weakened in the accumulation process, and the vehicle edge information is continuously strengthened in the accumulation process, so that the background interference is effectively overcome, and the accurate position of the lower bottom edge of the vehicle is finally obtained. Compared with the existing lower bottom edge positioning method based on the vehicle bottom shadow characteristics, the method can effectively overcome the complex background interference.
2. The invention can realize the association of the vehicle target frames detected by a single frame, thereby determining the positions of the same vehicle in different frames of the video. The method can also be used for video tracking of the vehicle target, namely, the position of the vehicle in other frames is automatically tracked under the condition of only an initial target frame. The method is not only suitable for the vehicle target frame, but also suitable for the tracking of other rigid body targets or the association of the target frames.
3. The invention is generally applicable to rigid body targets, and during the running process of the self-vehicle, the same rigid body target is approximately positioned on a plane parallel to a vehicle-mounted camera and has the same zooming and translating parameters, while the background is positioned on different planes, and the zooming and translating parameters are different from the zooming and translating parameters. Based on the rule, the invention designs a method for removing the non-target area feature points based on the reverse calculation of the zooming and translating parameters, and can effectively distinguish whether the feature points in the target frame are the feature points positioned on the target object.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic illustration of three ROI of vehicles detected in a three lane situation from the front of a vehicle;
FIG. 3 is a schematic diagram of an image coordinate system of the present invention;
FIG. 4 is a schematic diagram of a parameter calculation process according to the present invention;
fig. 5 is a diagram illustrating a horizontal edge information accumulation function when f is 3.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The method is suitable for a vehicle detection system of a vehicle-mounted camera, the mounting position of the vehicle-mounted camera is positioned at the front windshield, the bumper and other positions of the self vehicle or corresponding positions behind the vehicle, and the vehicle-mounted camera is respectively used for monitoring other vehicles from the front or the back of the self vehicle in the running process of the self vehicle, and the functions of collision early warning and the like are realized by positioning the lower bottom edge of the vehicle-mounted camera and calculating the space position. When the camera is installed, the optical axis of the camera is required to be basically parallel to the vehicle body (namely parallel to the ground), if the camera is installed and has a pitch angle, the camera can be subjected to image correction through an off-line external reference calibration method, and the application of the method is not influenced.
As shown in fig. 1, a method for locating a bottom edge of a vehicle based on horizontal edge information accumulation includes the following steps:
in an image frame of a video, acquiring a rectangular frame containing a vehicle;
calculating each momentHorizontal edge intensity response function of the shape frame and scaling parameters and translation parameters corresponding to each rectangular frame, i.e. for the current frame image I in the rectangular frame containing the vehicletThe ith rectangular frame region in (x, y)
Figure BDA0001914686670000061
Computing rectangular box regions
Figure BDA0001914686670000062
Horizontal edge intensity response function and rectangular frame area
Figure BDA0001914686670000063
Scaling parameter of
Figure BDA0001914686670000064
And translation parameters
Figure BDA0001914686670000065
For each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function;
for each rectangular box in the current image, accumulate function L according to its horizontal edge informationtAnd (a, b) calculating the position of the lower bottom edge of the target.
The above steps are described in detail below.
Vehicle ROI position acquisition
Vehicle region of interest (ROI) position acquisition refers to acquiring a rectangular frame position containing a vehicle in an image frame of a video. In the prior art, there are many methods for obtaining the ROI position of the vehicle, such as a knowledge-based method, an optical flow-based method, a statistical learning-based method, and the like. According to the invention, a statistical learning-based method is adopted to obtain the position of the ROI of the vehicle, and the position detection of the ROI of the vehicle is completed by off-line training of an AdaBoost classifier. The method comprises the following steps: firstly, collecting a vehicle picture as a positive sample in an off-line manner, collecting a background picture as a negative sample, and training an AdaBoost classifier; then, in the image frame, traversing and searching each position of the image in a sliding window mode, calling a trained AdaBoost classifier for online detection, and reserving a rectangular frame detected as a vehicle; and finally, clustering a plurality of rectangular frames in the image to obtain a rectangular frame containing the vehicle ROI. Clustering is performed so that only one ROI rectangle is contained in the same vehicle.
For tasks such as target tracking, video annotation and the like, the position of a rectangular frame containing the vehicle can be manually and roughly drawn in the first frame where the vehicle appears to serve as the vehicle ROI. The vehicle bottom edge position output function in the invention can be accomplished as well.
The rectangular box in fig. 2 represents the detected vehicle ROI.
Horizontal edge intensity response function calculation
In the vehicle ROI, a horizontal edge intensity response function is calculated. The coordinate system is shown in FIG. 3 for the current frame image ItThe i-th ROI rectangular region in (x, y)
Figure BDA0001914686670000071
Calculating the absolute function of the partial derivatives in the x-direction
Figure BDA0001914686670000072
Figure BDA0001914686670000073
Wherein
Figure BDA0001914686670000074
To represent
Figure BDA0001914686670000075
Within the image area, the derivative in the vertical direction. Next, a horizontal edge intensity response function is calculated
Figure BDA0001914686670000076
Figure BDA0001914686670000077
Wherein the content of the first and second substances,
Figure BDA0001914686670000078
x is an integer.
Figure BDA0001914686670000079
Is a rectangular area
Figure BDA00019146866700000710
The height of the rectangular area is
Figure BDA00019146866700000711
Pixel of width of
Figure BDA00019146866700000712
A pixel.
The above calculation process is repeated for each ROI in the current image. Obtaining a horizontal edge intensity response function H corresponding to each ROIt(x)。
Target zoom and pan parameter calculation
For the current frame image ItThe i-th ROI rectangular region in (x, y)
Figure BDA00019146866700000713
Computing
Figure BDA00019146866700000714
In picture It(x, y) and image It-1Scaling parameters in (x, y)
Figure BDA0001914686670000081
And translation parameters
Figure BDA00019146866700000814
The calculation process is as follows:
step 1: in that
Figure BDA0001914686670000082
The Harris angular point detection and calculation method is adoptedMethod (prior art, refer to wild goose, Lanmeihui, Wanjonqong, etc.. an improved Harris-based corner point detection method [ J]Computer technology and development, 2009,10(5): 130-. And adopts Lucas and Kanade's characteristic point tracking method (in the prior art, refer to Tomasi C, Kanade T.detection and tracking of point features R]School of Computer Science, Carnegie Mellon Univ.,1991.) to obtain images I in adjacent framest-1The matched feature points in (1) form N pairs of matched feature point pairs.
Step 2: calculating a scaling parameter
Figure BDA0001914686670000083
Figure BDA0001914686670000084
Wherein med is the median of the elements in the set.
And step 3: let (x)c,m,yc,m) Is composed of
Figure BDA0001914686670000085
M characteristic point of (x)p,m,yp,m) T-1 frame image I matched therewitht-1The characteristic points in (x, y),
Figure BDA0001914686670000086
as a translation parameter
Figure BDA00019146866700000815
The following equation:
Figure BDA0001914686670000087
therefore, the method has the advantages that,
Figure BDA0001914686670000088
due to the fact that
Figure BDA0001914686670000089
Having found that, from the equation (2), the target translation amount can be found by the following equation
Figure BDA00019146866700000810
Figure BDA00019146866700000811
Wherein N is the total logarithm of the feature points.
And 4, step 4: find out
Figure BDA00019146866700000812
Then, the residual r is inversely obtained from equation (1)m
Figure BDA00019146866700000813
For rmSetting a threshold value, determining r by experimentmWhen the pixel is more than or equal to 0.5, (x)c,m,yc,m) As outliers, (x) are removedc,m,yc,m) And its matching point (x)p,m,yp,m)。
And 5: after the outer points are removed, the step 2 and the step 3 are repeatedly executed for the remaining characteristic point pairs to obtain the final scaling parameters
Figure BDA0001914686670000091
And translation parameters
Figure BDA00019146866700000919
The above calculation process is repeated for each ROI in the current image. A scaling parameter and a translation parameter corresponding to each ROI are obtained.
The parameter calculation process of fig. 4 illustrates: (a) the rectangular box in the figure represents the current picture ItThe ith ROI rectangular region in (x, y), the dots in the figure representing the PIXOThe matched characteristic point pair is the position in the current frame, and the scaling parameter
Figure BDA0001914686670000092
And target translation parameters
Figure BDA0001914686670000093
The initial value of (a) is calculated from the dots; (b) the triangular points in the graph represent points that are not on the target or points that are incorrectly matched, resulting in final scaling parameters, by performing inverse calculations on the initial values of the scaling and translation parameters
Figure BDA0001914686670000094
And target translation parameters
Figure BDA0001914686670000095
Calculated from the remaining dots in the figure.
Horizontal edge information accumulation
For each ROI in the current image, a horizontal edge information accumulation function is created or updated.
For the ith ROI area in the current image, and the calculated scaling parameter
Figure BDA0001914686670000096
And translation parameters
Figure BDA00019146866700000920
And finding a target ROI area in the t-1 frame image matched with the target ROI area. The search process is as follows:
is provided with
Figure BDA0001914686670000097
For the ith ROI area in the current image
Figure BDA0001914686670000098
The height of the rectangular area is
Figure BDA0001914686670000099
Pixel of width of
Figure BDA00019146866700000910
A pixel. Calculates the corresponding rectangular area of the frame t-1
Figure BDA00019146866700000911
Upper left vertex coordinates of
Figure BDA00019146866700000912
And height of rectangular region
Figure BDA00019146866700000913
And width
Figure BDA00019146866700000914
Figure BDA00019146866700000915
Figure BDA00019146866700000916
Figure BDA00019146866700000917
According to
Figure BDA00019146866700000918
The ROI rectangular frame with the maximum overlapping area at the time t-1 and meeting a certain rule is searched for and serves as a matched target ROI area, and the corresponding horizontal edge information accumulation function is updated and serves as the horizontal edge information accumulation function of the current ROI. The certain rule may specify, for example, that the overlapping area of the two overlapping rectangular frames occupies 50% or more of the area of each of the two overlapping rectangular frames. If there is no matching target ROI area, a horizontal edge information accumulation function is created for it. Horizontal edge information accumulation function
Figure BDA0001914686670000101
The creation and update process of (1) is as follows:
a creating process:
Figure BDA0001914686670000102
wherein the content of the first and second substances,
Figure BDA0001914686670000103
for the purpose of the horizontal edge intensity response function,
Figure BDA0001914686670000104
b belongs to (1,2,3,. and f), wherein f is a history accumulated length, and f is 10 in the implementation process of the patent.
And (3) updating:
Figure BDA0001914686670000105
the above calculation process is repeated for each ROI in the current image. Obtaining a horizontal edge information accumulation function L corresponding to each ROIt(a,b)。
Bottom edge detection
For each ROI in the current image, a function L is accumulated according to its horizontal edge informationtAnd (a, b) calculating the position of the lower bottom edge of the target.
For the ith ROI in the current image, setting the horizontal edge information accumulation function as
Figure BDA0001914686670000106
The horizontal edge detection process is as follows:
first calculate LtColumn Peak function of (a, b)
Figure BDA0001914686670000107
Sum column peak mark function
Figure BDA0001914686670000108
Marking possible horizontal edges in an imageThe row coordinate position.
Figure BDA0001914686670000109
Figure BDA00019146866700001010
Computing horizontal edge detection function
Figure BDA00019146866700001011
Figure BDA0001914686670000111
Is selected to satisfy
Figure BDA0001914686670000116
The maximum value a of (a) is taken as the line position of the lower base. Thereby determining the image line where the lower bottom edge is located and finishing the lower bottom edge detection. Wherein Th1And Th2Respectively as threshold of accumulated frame number and threshold of horizontal edge response intensity, and setting Th1=0.7,Th2=5。
When f is 3 in fig. 5, the horizontal edge information accumulation function
Figure BDA0001914686670000112
Schematic representation of (a). Wherein, the series 1 is
Figure BDA0001914686670000113
Series 2 is
Figure BDA0001914686670000114
Series 3 is
Figure BDA0001914686670000115
It can be seen that after multi-frame accumulation, the waveforms of the horizontal edge information of the vehicle are overlapped, and the background horizontal edge below the ROI does not satisfy the scaling sum of the vehicleThe parameters are translated and thus show a misalignment in the cumulative function and do not overlap well.

Claims (4)

1. A vehicle bottom edge positioning method based on horizontal edge information accumulation is characterized by comprising the following steps:
in an image frame of a video, acquiring a rectangular frame containing a vehicle;
calculating a horizontal edge strength response function of each rectangular frame and a scaling parameter and a translation parameter corresponding to each rectangular frame;
for each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function;
for each rectangular frame in the current image, calculating the position of the lower bottom edge of the target according to the horizontal edge information accumulation function of the rectangular frame;
said calculation of the horizontal edge intensity response function, i.e. in a rectangular frame containing the vehicle, for the current frame image ItThe ith rectangular frame region in (x, y)
Figure FDA0002746826170000011
Calculating a horizontal edge intensity response function, comprising the steps of:
for the current frame image ItThe i-th ROI rectangular region in (x, y)
Figure FDA0002746826170000012
Calculating the absolute function of the partial derivatives in the x-direction
Figure FDA0002746826170000013
Figure FDA0002746826170000014
Wherein
Figure FDA0002746826170000015
To represent
Figure FDA0002746826170000016
Derivatives in the vertical direction within the image area;
computing horizontal edge intensity response function
Figure FDA0002746826170000017
Figure FDA0002746826170000018
Wherein the content of the first and second substances,
Figure FDA0002746826170000019
x is an integer which is the number of atoms,
Figure FDA00027468261700000110
is a rectangular frame areaThe height of the rectangular area is
Figure FDA00027468261700000112
Pixel of width of
Figure FDA00027468261700000113
A pixel; the calculation of the scaling parameters and the translation parameters corresponding to each rectangular frame, i.e. for the current frame image I in the rectangular frame containing the vehicletThe ith rectangular frame region in (x, y)
Figure FDA00027468261700000114
Computing rectangular box regions
Figure FDA00027468261700000115
In the current frame image It(x, y) and the previous frame image It-1Scaling parameters in (x, y)
Figure FDA0002746826170000021
And a translation parameter Tt iThe method comprises the following steps:
step 1: in the area of the rectangular frame
Figure FDA0002746826170000022
Detecting characteristic points in the image and obtaining the image I of the previous framet-1Matching feature points in (x, y) to form N pairs of matched feature points;
step 2: calculating a scaling parameter
Figure FDA0002746826170000023
Figure FDA0002746826170000024
Wherein med is the median of the elements in the set, (x)c,m,yc,m) Is composed of
Figure FDA00027468261700000220
M characteristic point of (x)p,m,yp,m) Last frame image I matched with itt-1The characteristic points in (x, y), N is the total logarithm of the characteristic points;
and step 3: calculating target translation amount
Figure FDA0002746826170000025
Figure FDA0002746826170000026
Wherein the content of the first and second substances,
Figure FDA0002746826170000027
for translation parameter Tt iTwo components of (a);
and 4, step 4:determining the residual error rm
Figure FDA0002746826170000028
At rmWhen the pixel is more than or equal to 0.5, (x)c,m,yc,m) As outliers, (x) are removedc,m,yc,m) And its matching point (x)p,m,yp,m);
And 5: after the outer points are removed, the step 2 and the step 3 are repeatedly executed for the remaining characteristic point pairs to obtain the final scaling parameters
Figure FDA0002746826170000029
And a translation parameter Tt i
The horizontal edge information accumulation function is created or updated for each rectangular frame in the current image, namely for the ith rectangular frame region and the scaling parameter in the current image
Figure FDA00027468261700000210
And a translation parameter Tt iAnd searching a target rectangular frame area in the t-1 frame image matched with the target rectangular frame area, wherein the method comprises the following steps:
is provided with
Figure FDA00027468261700000211
For the ith rectangular frame area in the current image
Figure FDA00027468261700000212
The height of the rectangular frame area is
Figure FDA00027468261700000213
Pixel of width of
Figure FDA00027468261700000214
Pixel, calculating rectangular frame area
Figure FDA00027468261700000215
Corresponding to the rectangular region in the t-1 frame
Figure FDA00027468261700000216
Upper left vertex coordinates of
Figure FDA00027468261700000217
And height of rectangular region
Figure FDA00027468261700000218
And width
Figure FDA00027468261700000219
Figure FDA0002746826170000031
Figure FDA0002746826170000032
Figure FDA0002746826170000033
Wherein the content of the first and second substances,
Figure FDA0002746826170000034
for translation parameter Tt iThe two components of (a) and (b),
Figure FDA0002746826170000035
is a scaling parameter;
according to
Figure FDA0002746826170000036
Finding a rectangular frame with the maximum overlapping area at the time t-1 and meeting a predetermined rule as the position of (A)Rectangular frame area
Figure FDA0002746826170000037
Updating the corresponding horizontal edge information accumulation function of the matched target rectangular frame area to be used as the horizontal edge information accumulation function of the current rectangular frame;
if there is no rectangular frame region
Figure FDA0002746826170000038
The matched target rectangular frame area is the rectangular frame area
Figure FDA0002746826170000039
A horizontal edge information accumulation function is created.
2. The method for locating the bottom edge of a vehicle based on the accumulation of horizontal edge information as claimed in claim 1, wherein the step of obtaining the position of a rectangular frame containing the vehicle in the image frame of the video comprises the following steps:
collecting a vehicle picture as a positive sample in an off-line manner, collecting a background picture as a negative sample, and training a classifier;
in the image frame, traversing and searching each position of the image in a sliding window mode, calling a trained classifier to perform online detection, and reserving a rectangular frame detected as a vehicle;
clustering a plurality of rectangular frames detected by a classifier in the image to obtain rectangular frames containing vehicles; and only one rectangular frame is contained for the same vehicle.
3. The method as claimed in claim 1, wherein the function of accumulating the horizontal edge information is a function of the accumulation of the horizontal edge information
Figure FDA00027468261700000310
The creation and update process of (1) is as follows:
a creating process:
Figure FDA00027468261700000311
wherein the content of the first and second substances,
Figure FDA00027468261700000312
for the purpose of the horizontal edge intensity response function,
Figure FDA00027468261700000313
f is the history accumulated length;
and (3) updating:
Figure FDA0002746826170000041
4. the method as claimed in claim 1, wherein the target bottom edge position is calculated according to the horizontal edge information accumulation function for each rectangular frame in the current image, that is, for the ith rectangular frame region in the current image, the horizontal edge information accumulation function is set as
Figure FDA0002746826170000042
The horizontal edge detection process is as follows:
first, a horizontal edge information accumulation function L is calculatedtColumn Peak function of (a, b)
Figure FDA0002746826170000043
Sum column peak mark function
Figure FDA0002746826170000044
Marking the row coordinate position of the possible horizontal edges in the image:
Figure FDA0002746826170000045
Figure FDA0002746826170000046
wherein the content of the first and second substances,
Figure FDA0002746826170000047
f is the history accumulated length;
calculating a horizontal edge detection function Ft i(a):
Figure FDA0002746826170000048
Wherein Th1And Th2Respectively a threshold value of accumulated frame number and a threshold value of horizontal edge response intensity;
selecting a material satisfying Ft i(a)>Th2The maximum value a of the lower edge is used as the line position of the lower edge, so that the image line where the lower edge is located is determined, and the lower edge detection is completed.
CN201811567600.0A 2018-12-21 2018-12-21 Vehicle bottom edge positioning method based on horizontal edge information accumulation Active CN109815812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811567600.0A CN109815812B (en) 2018-12-21 2018-12-21 Vehicle bottom edge positioning method based on horizontal edge information accumulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811567600.0A CN109815812B (en) 2018-12-21 2018-12-21 Vehicle bottom edge positioning method based on horizontal edge information accumulation

Publications (2)

Publication Number Publication Date
CN109815812A CN109815812A (en) 2019-05-28
CN109815812B true CN109815812B (en) 2020-12-04

Family

ID=66601808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811567600.0A Active CN109815812B (en) 2018-12-21 2018-12-21 Vehicle bottom edge positioning method based on horizontal edge information accumulation

Country Status (1)

Country Link
CN (1) CN109815812B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738181B (en) * 2019-10-21 2022-08-05 东软睿驰汽车技术(沈阳)有限公司 Method and device for determining vehicle orientation information
CN111340877B (en) * 2020-03-25 2023-10-27 北京爱笔科技有限公司 Vehicle positioning method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101029824A (en) * 2006-02-28 2007-09-05 沈阳东软软件股份有限公司 Method and apparatus for positioning vehicle based on characteristics
CN104036275A (en) * 2014-05-22 2014-09-10 东软集团股份有限公司 Method and device for detecting target objects in vehicle blind areas

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720580B2 (en) * 2004-12-23 2010-05-18 Donnelly Corporation Object detection system for vehicle
CN103544487B (en) * 2013-11-01 2019-11-22 扬州瑞控汽车电子有限公司 Front truck recognition methods based on monocular vision
CN105205459B (en) * 2015-09-16 2019-02-05 东软集团股份有限公司 A kind of recognition methods of characteristics of image vertex type and device
CN105844222B (en) * 2016-03-18 2019-07-30 上海欧菲智能车联科技有限公司 The front vehicles collision warning systems and method of view-based access control model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101029824A (en) * 2006-02-28 2007-09-05 沈阳东软软件股份有限公司 Method and apparatus for positioning vehicle based on characteristics
CN104036275A (en) * 2014-05-22 2014-09-10 东软集团股份有限公司 Method and device for detecting target objects in vehicle blind areas

Also Published As

Publication number Publication date
CN109815812A (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN111460926B (en) Video pedestrian detection method fusing multi-target tracking clues
CN111693972B (en) Vehicle position and speed estimation method based on binocular sequence images
Kong et al. Vanishing point detection for road detection
Shi et al. Fast and robust vanishing point detection for unstructured road following
US8446468B1 (en) Moving object detection using a mobile infrared camera
Liu et al. Rear vehicle detection and tracking for lane change assist
US8379928B2 (en) Obstacle detection procedure for motor vehicle
CN108052904B (en) Method and device for acquiring lane line
CN104282020A (en) Vehicle speed detection method based on target motion track
Mu et al. Multiscale edge fusion for vehicle detection based on difference of Gaussian
CN111340855A (en) Road moving target detection method based on track prediction
US11093778B2 (en) Method and system for selecting image region that facilitates blur kernel estimation
CN115578470B (en) Monocular vision positioning method and device, storage medium and electronic equipment
CN110992424B (en) Positioning method and system based on binocular vision
CN109815812B (en) Vehicle bottom edge positioning method based on horizontal edge information accumulation
Jang et al. Road lane semantic segmentation for high definition map
EP3633617A2 (en) Image processing device
Ponsa et al. On-board image-based vehicle detection and tracking
CN107506753B (en) Multi-vehicle tracking method for dynamic video monitoring
CN115187941A (en) Target detection positioning method, system, equipment and storage medium
CN114972427A (en) Target tracking method based on monocular vision, terminal equipment and storage medium
CN113269007A (en) Target tracking device and method for road monitoring video
Vajak et al. A rethinking of real-time computer vision-based lane detection
CN113221739B (en) Monocular vision-based vehicle distance measuring method
CN113361299B (en) Abnormal parking detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant