CN109815812B - Vehicle bottom edge positioning method based on horizontal edge information accumulation - Google Patents
Vehicle bottom edge positioning method based on horizontal edge information accumulation Download PDFInfo
- Publication number
- CN109815812B CN109815812B CN201811567600.0A CN201811567600A CN109815812B CN 109815812 B CN109815812 B CN 109815812B CN 201811567600 A CN201811567600 A CN 201811567600A CN 109815812 B CN109815812 B CN 109815812B
- Authority
- CN
- China
- Prior art keywords
- rectangular frame
- image
- horizontal edge
- vehicle
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a vehicle bottom edge positioning method based on horizontal edge information accumulation, which comprises the following steps: in an image frame of a video, acquiring a rectangular frame containing a vehicle; calculating a horizontal edge strength response function of each rectangular frame and a scaling parameter and a translation parameter corresponding to each rectangular frame; for each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function; and for each rectangular frame in the current image, calculating the position of the lower bottom edge of the target according to the horizontal edge information accumulation function of the rectangular frame. According to the invention, through calculating the image zooming parameter and the translation parameter, multi-frame accumulation is carried out on the single-frame horizontal edge information, along with the motion of the vehicle, the background edge information is continuously weakened in the accumulation process, and the vehicle edge information is continuously strengthened in the accumulation process, so that the background interference is effectively overcome, and the accurate position of the lower bottom edge of the vehicle is finally obtained. Compared with the existing lower bottom edge positioning method based on the vehicle bottom shadow characteristics, the method can effectively overcome the complex background interference.
Description
Technical Field
The invention relates to the technical field of vehicle detection, in particular to a vehicle bottom edge positioning method based on horizontal edge information accumulation.
Background
The vision-based vehicle detection system has important application in the fields of automobile auxiliary driving, automatic driving and the like. The vehicle detection system based on the monocular camera is favored by various large vehicle factories due to the advantages of low cost, flexible installation, easy integration with other hardware equipment and the like. In a vehicle detection system based on a monocular camera, the real spatial position of a detected vehicle is often calculated according to the position of the lower bottom edge of the vehicle in an image, and then the relative position relationship between the vehicle and the detected vehicle is determined, so that the functions of collision early warning, vehicle following and the like are realized. It becomes very important to accurately locate the position of the lower edge of the vehicle in the image.
The existing lower bottom edge positioning method is often used for positioning by utilizing information such as symmetry of a vehicle, shadow of the bottom edge of the vehicle and the like, and is easily influenced by background and illumination change, so that the position of the lower bottom edge is not accurately calculated.
Disclosure of Invention
Aiming at the defects in the prior art, the technical problem to be solved by the invention is to provide a vehicle bottom edge positioning method based on horizontal edge information accumulation, wherein edge information which is possibly the bottom edge of a vehicle is stored in a single-frame image, and then the final position of the bottom edge of the vehicle is determined by utilizing multi-frame image information.
The technical scheme adopted by the invention for realizing the purpose is as follows: a vehicle bottom edge positioning method based on horizontal edge information accumulation comprises the following steps:
in an image frame of a video, acquiring a rectangular frame containing a vehicle;
calculating a horizontal edge strength response function of each rectangular frame and a scaling parameter and a translation parameter corresponding to each rectangular frame;
for each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function;
and for each rectangular frame in the current image, calculating the position of the lower bottom edge of the target according to the horizontal edge information accumulation function of the rectangular frame.
The method for acquiring the position of a rectangular frame containing a vehicle in an image frame of a video comprises the following steps:
collecting a vehicle picture as a positive sample in an off-line manner, collecting a background picture as a negative sample, and training a classifier;
in the image frame, traversing and searching each position of the image in a sliding window mode, calling a trained classifier to perform online detection, and reserving a rectangular frame detected as a vehicle;
clustering a plurality of rectangular frames detected by a classifier in the image to obtain rectangular frames containing vehicles; and only one rectangular frame is contained for the same vehicle.
Said calculation of the horizontal edge intensity response function, i.e. in a rectangular frame containing the vehicle, for the current frame image ItThe ith rectangular frame region in (x, y)Calculating a horizontal edge intensity response function, comprising the steps of:
for the current frame image ItThe i-th ROI rectangular region in (x, y)Calculating the absolute function of the partial derivatives in the x-direction
Wherein the content of the first and second substances,x is an integer which is the number of atoms,is a rectangular frame areaThe height of the rectangular area isPixel of width ofA pixel.
The calculation of the scaling parameters and the translation parameters corresponding to each rectangular frame, i.e. for the current frame image I in the rectangular frame containing the vehicletThe ith rectangular frame region in (x, y)Computing rectangular box regionsIn the current frame image It(x, y) and the previous frame image It-1Scaling parameters in (x, y)And translation parametersThe method comprises the following steps:
step 1: in the area of the rectangular frameDetecting characteristic points in the image and obtaining the image I of the previous framet-1Matching feature points in (x, y) to form N pairs of matched feature points;
Wherein med is the median of the elements in the set, (x)c,m,yc,m) Is composed ofM characteristic point of (x)p,m,yp,m) Last frame image I matched with itt-1The characteristic points in (x, y), N is the total logarithm of the characteristic points;
Wherein the content of the first and second substances,as a translation parameterTwo components of (a);
and 4, step 4: determining the residual error rm,
At rmWhen the pixel is more than or equal to 0.5, (x)c,m,yc,m) As outliers, (x) are removedc,m,yc,m) And its matching point (x)p,m,yp,m);
And 5: after the outer points are removed, the step 2 and the step 3 are repeatedly executed for the remaining characteristic point pairs to obtain the final scaling parametersAnd translation parameters
The horizontal edge information accumulation function is created or updated for each rectangular frame in the current image, namely for the ith rectangular frame region and the scaling parameter in the current imageAnd translation parametersFinding a target rectangular frame area in the t-1 frame image matched with the target rectangular frame area, comprising the following steps of:
is provided withFor the ith rectangular frame area in the current imageThe height of the rectangular frame area isPixel of width ofPixel, calculating rectangular frame areaCorresponding to the rectangular region in the t-1 frameUpper left vertex coordinates ofAnd height of rectangular regionAnd width
Wherein the content of the first and second substances,as a translation parameterThe two components of (a) and (b),is a scaling parameter;
according toFinding the rectangular frame with the largest overlapping area at the time t-1 and meeting the predetermined rule as the rectangular frame regionUpdating the corresponding horizontal edge information accumulation function of the matched target rectangular frame area to be used as the horizontal edge information accumulation function of the current rectangular frame;
if there is no rectangular frame regionThe matched target rectangular frame area is the rectangular frame areaA horizontal edge information accumulation function is created.
The horizontal edge information accumulation functionThe creation and update process of (1) is as follows:
a creating process:
wherein the content of the first and second substances,for the purpose of the horizontal edge intensity response function,b belongs to (1,2,3,. eta., f), and f is a history accumulated length;
and (3) updating:
for each rectangular frame in the current image, the position of the lower bottom edge of the target is calculated according to the horizontal edge information accumulation function, namely for the ith rectangular frame area in the current image, the horizontal edge information accumulation function is set asThe horizontal edge detection process is as follows:
first, a horizontal edge information accumulation function L is calculatedtColumn Peak function of (a, b)Sum column peak mark functionMarking the row coordinate position of the possible horizontal edges in the image:
wherein the content of the first and second substances,b belongs to (1,2,3,. eta., f), and f is a history accumulated length;
Wherein Th1And Th2Respectively a threshold value of accumulated frame number and a threshold value of horizontal edge response intensity;
is selected to satisfyThe maximum value a of the lower edge is used as the line position of the lower edge, so that the image line where the lower edge is located is determined, and the lower edge detection is completed.
The invention has the following advantages and beneficial effects:
1. according to the invention, through calculating the image zooming parameter and the translation parameter, multi-frame accumulation is carried out on the single-frame horizontal edge information, along with the motion of the vehicle, the background edge information is continuously weakened in the accumulation process, and the vehicle edge information is continuously strengthened in the accumulation process, so that the background interference is effectively overcome, and the accurate position of the lower bottom edge of the vehicle is finally obtained. Compared with the existing lower bottom edge positioning method based on the vehicle bottom shadow characteristics, the method can effectively overcome the complex background interference.
2. The invention can realize the association of the vehicle target frames detected by a single frame, thereby determining the positions of the same vehicle in different frames of the video. The method can also be used for video tracking of the vehicle target, namely, the position of the vehicle in other frames is automatically tracked under the condition of only an initial target frame. The method is not only suitable for the vehicle target frame, but also suitable for the tracking of other rigid body targets or the association of the target frames.
3. The invention is generally applicable to rigid body targets, and during the running process of the self-vehicle, the same rigid body target is approximately positioned on a plane parallel to a vehicle-mounted camera and has the same zooming and translating parameters, while the background is positioned on different planes, and the zooming and translating parameters are different from the zooming and translating parameters. Based on the rule, the invention designs a method for removing the non-target area feature points based on the reverse calculation of the zooming and translating parameters, and can effectively distinguish whether the feature points in the target frame are the feature points positioned on the target object.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic illustration of three ROI of vehicles detected in a three lane situation from the front of a vehicle;
FIG. 3 is a schematic diagram of an image coordinate system of the present invention;
FIG. 4 is a schematic diagram of a parameter calculation process according to the present invention;
fig. 5 is a diagram illustrating a horizontal edge information accumulation function when f is 3.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The method is suitable for a vehicle detection system of a vehicle-mounted camera, the mounting position of the vehicle-mounted camera is positioned at the front windshield, the bumper and other positions of the self vehicle or corresponding positions behind the vehicle, and the vehicle-mounted camera is respectively used for monitoring other vehicles from the front or the back of the self vehicle in the running process of the self vehicle, and the functions of collision early warning and the like are realized by positioning the lower bottom edge of the vehicle-mounted camera and calculating the space position. When the camera is installed, the optical axis of the camera is required to be basically parallel to the vehicle body (namely parallel to the ground), if the camera is installed and has a pitch angle, the camera can be subjected to image correction through an off-line external reference calibration method, and the application of the method is not influenced.
As shown in fig. 1, a method for locating a bottom edge of a vehicle based on horizontal edge information accumulation includes the following steps:
in an image frame of a video, acquiring a rectangular frame containing a vehicle;
calculating each momentHorizontal edge intensity response function of the shape frame and scaling parameters and translation parameters corresponding to each rectangular frame, i.e. for the current frame image I in the rectangular frame containing the vehicletThe ith rectangular frame region in (x, y)Computing rectangular box regionsHorizontal edge intensity response function and rectangular frame areaScaling parameter ofAnd translation parameters
For each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function;
for each rectangular box in the current image, accumulate function L according to its horizontal edge informationtAnd (a, b) calculating the position of the lower bottom edge of the target.
The above steps are described in detail below.
Vehicle ROI position acquisition
Vehicle region of interest (ROI) position acquisition refers to acquiring a rectangular frame position containing a vehicle in an image frame of a video. In the prior art, there are many methods for obtaining the ROI position of the vehicle, such as a knowledge-based method, an optical flow-based method, a statistical learning-based method, and the like. According to the invention, a statistical learning-based method is adopted to obtain the position of the ROI of the vehicle, and the position detection of the ROI of the vehicle is completed by off-line training of an AdaBoost classifier. The method comprises the following steps: firstly, collecting a vehicle picture as a positive sample in an off-line manner, collecting a background picture as a negative sample, and training an AdaBoost classifier; then, in the image frame, traversing and searching each position of the image in a sliding window mode, calling a trained AdaBoost classifier for online detection, and reserving a rectangular frame detected as a vehicle; and finally, clustering a plurality of rectangular frames in the image to obtain a rectangular frame containing the vehicle ROI. Clustering is performed so that only one ROI rectangle is contained in the same vehicle.
For tasks such as target tracking, video annotation and the like, the position of a rectangular frame containing the vehicle can be manually and roughly drawn in the first frame where the vehicle appears to serve as the vehicle ROI. The vehicle bottom edge position output function in the invention can be accomplished as well.
The rectangular box in fig. 2 represents the detected vehicle ROI.
Horizontal edge intensity response function calculation
In the vehicle ROI, a horizontal edge intensity response function is calculated. The coordinate system is shown in FIG. 3 for the current frame image ItThe i-th ROI rectangular region in (x, y)Calculating the absolute function of the partial derivatives in the x-direction
WhereinTo representWithin the image area, the derivative in the vertical direction. Next, a horizontal edge intensity response function is calculated
Wherein the content of the first and second substances,x is an integer.Is a rectangular areaThe height of the rectangular area isPixel of width ofA pixel.
The above calculation process is repeated for each ROI in the current image. Obtaining a horizontal edge intensity response function H corresponding to each ROIt(x)。
Target zoom and pan parameter calculation
For the current frame image ItThe i-th ROI rectangular region in (x, y)ComputingIn picture It(x, y) and image It-1Scaling parameters in (x, y)And translation parametersThe calculation process is as follows:
step 1: in thatThe Harris angular point detection and calculation method is adoptedMethod (prior art, refer to wild goose, Lanmeihui, Wanjonqong, etc.. an improved Harris-based corner point detection method [ J]Computer technology and development, 2009,10(5): 130-. And adopts Lucas and Kanade's characteristic point tracking method (in the prior art, refer to Tomasi C, Kanade T.detection and tracking of point features R]School of Computer Science, Carnegie Mellon Univ.,1991.) to obtain images I in adjacent framest-1The matched feature points in (1) form N pairs of matched feature point pairs.
Wherein med is the median of the elements in the set.
And step 3: let (x)c,m,yc,m) Is composed ofM characteristic point of (x)p,m,yp,m) T-1 frame image I matched therewitht-1The characteristic points in (x, y),as a translation parameterThe following equation:
therefore, the method has the advantages that,
due to the fact thatHaving found that, from the equation (2), the target translation amount can be found by the following equation
Wherein N is the total logarithm of the feature points.
For rmSetting a threshold value, determining r by experimentmWhen the pixel is more than or equal to 0.5, (x)c,m,yc,m) As outliers, (x) are removedc,m,yc,m) And its matching point (x)p,m,yp,m)。
And 5: after the outer points are removed, the step 2 and the step 3 are repeatedly executed for the remaining characteristic point pairs to obtain the final scaling parametersAnd translation parameters
The above calculation process is repeated for each ROI in the current image. A scaling parameter and a translation parameter corresponding to each ROI are obtained.
The parameter calculation process of fig. 4 illustrates: (a) the rectangular box in the figure represents the current picture ItThe ith ROI rectangular region in (x, y), the dots in the figure representing the PIXOThe matched characteristic point pair is the position in the current frame, and the scaling parameterAnd target translation parametersThe initial value of (a) is calculated from the dots; (b) the triangular points in the graph represent points that are not on the target or points that are incorrectly matched, resulting in final scaling parameters, by performing inverse calculations on the initial values of the scaling and translation parametersAnd target translation parametersCalculated from the remaining dots in the figure.
Horizontal edge information accumulation
For each ROI in the current image, a horizontal edge information accumulation function is created or updated.
For the ith ROI area in the current image, and the calculated scaling parameterAnd translation parametersAnd finding a target ROI area in the t-1 frame image matched with the target ROI area. The search process is as follows:
is provided withFor the ith ROI area in the current imageThe height of the rectangular area isPixel of width ofA pixel. Calculates the corresponding rectangular area of the frame t-1Upper left vertex coordinates ofAnd height of rectangular regionAnd width
According toThe ROI rectangular frame with the maximum overlapping area at the time t-1 and meeting a certain rule is searched for and serves as a matched target ROI area, and the corresponding horizontal edge information accumulation function is updated and serves as the horizontal edge information accumulation function of the current ROI. The certain rule may specify, for example, that the overlapping area of the two overlapping rectangular frames occupies 50% or more of the area of each of the two overlapping rectangular frames. If there is no matching target ROI area, a horizontal edge information accumulation function is created for it. Horizontal edge information accumulation functionThe creation and update process of (1) is as follows:
a creating process:
wherein the content of the first and second substances,for the purpose of the horizontal edge intensity response function,b belongs to (1,2,3,. and f), wherein f is a history accumulated length, and f is 10 in the implementation process of the patent.
And (3) updating:
the above calculation process is repeated for each ROI in the current image. Obtaining a horizontal edge information accumulation function L corresponding to each ROIt(a,b)。
Bottom edge detection
For each ROI in the current image, a function L is accumulated according to its horizontal edge informationtAnd (a, b) calculating the position of the lower bottom edge of the target.
For the ith ROI in the current image, setting the horizontal edge information accumulation function asThe horizontal edge detection process is as follows:
first calculate LtColumn Peak function of (a, b)Sum column peak mark functionMarking possible horizontal edges in an imageThe row coordinate position.
Is selected to satisfyThe maximum value a of (a) is taken as the line position of the lower base. Thereby determining the image line where the lower bottom edge is located and finishing the lower bottom edge detection. Wherein Th1And Th2Respectively as threshold of accumulated frame number and threshold of horizontal edge response intensity, and setting Th1=0.7,Th2=5。
When f is 3 in fig. 5, the horizontal edge information accumulation functionSchematic representation of (a). Wherein, the series 1 isSeries 2 isSeries 3 isIt can be seen that after multi-frame accumulation, the waveforms of the horizontal edge information of the vehicle are overlapped, and the background horizontal edge below the ROI does not satisfy the scaling sum of the vehicleThe parameters are translated and thus show a misalignment in the cumulative function and do not overlap well.
Claims (4)
1. A vehicle bottom edge positioning method based on horizontal edge information accumulation is characterized by comprising the following steps:
in an image frame of a video, acquiring a rectangular frame containing a vehicle;
calculating a horizontal edge strength response function of each rectangular frame and a scaling parameter and a translation parameter corresponding to each rectangular frame;
for each rectangular frame in the current image, creating or updating a horizontal edge information accumulation function;
for each rectangular frame in the current image, calculating the position of the lower bottom edge of the target according to the horizontal edge information accumulation function of the rectangular frame;
said calculation of the horizontal edge intensity response function, i.e. in a rectangular frame containing the vehicle, for the current frame image ItThe ith rectangular frame region in (x, y)Calculating a horizontal edge intensity response function, comprising the steps of:
for the current frame image ItThe i-th ROI rectangular region in (x, y)Calculating the absolute function of the partial derivatives in the x-direction
Wherein the content of the first and second substances,x is an integer which is the number of atoms,is a rectangular frame areaThe height of the rectangular area isPixel of width ofA pixel; the calculation of the scaling parameters and the translation parameters corresponding to each rectangular frame, i.e. for the current frame image I in the rectangular frame containing the vehicletThe ith rectangular frame region in (x, y)Computing rectangular box regionsIn the current frame image It(x, y) and the previous frame image It-1Scaling parameters in (x, y)And a translation parameter Tt iThe method comprises the following steps:
step 1: in the area of the rectangular frameDetecting characteristic points in the image and obtaining the image I of the previous framet-1Matching feature points in (x, y) to form N pairs of matched feature points;
Wherein med is the median of the elements in the set, (x)c,m,yc,m) Is composed ofM characteristic point of (x)p,m,yp,m) Last frame image I matched with itt-1The characteristic points in (x, y), N is the total logarithm of the characteristic points;
Wherein the content of the first and second substances,for translation parameter Tt iTwo components of (a);
and 4, step 4:determining the residual error rm,
At rmWhen the pixel is more than or equal to 0.5, (x)c,m,yc,m) As outliers, (x) are removedc,m,yc,m) And its matching point (x)p,m,yp,m);
And 5: after the outer points are removed, the step 2 and the step 3 are repeatedly executed for the remaining characteristic point pairs to obtain the final scaling parametersAnd a translation parameter Tt i;
The horizontal edge information accumulation function is created or updated for each rectangular frame in the current image, namely for the ith rectangular frame region and the scaling parameter in the current imageAnd a translation parameter Tt iAnd searching a target rectangular frame area in the t-1 frame image matched with the target rectangular frame area, wherein the method comprises the following steps:
is provided withFor the ith rectangular frame area in the current imageThe height of the rectangular frame area isPixel of width ofPixel, calculating rectangular frame areaCorresponding to the rectangular region in the t-1 frameUpper left vertex coordinates ofAnd height of rectangular regionAnd width
Wherein the content of the first and second substances,for translation parameter Tt iThe two components of (a) and (b),is a scaling parameter;
according toFinding a rectangular frame with the maximum overlapping area at the time t-1 and meeting a predetermined rule as the position of (A)Rectangular frame areaUpdating the corresponding horizontal edge information accumulation function of the matched target rectangular frame area to be used as the horizontal edge information accumulation function of the current rectangular frame;
2. The method for locating the bottom edge of a vehicle based on the accumulation of horizontal edge information as claimed in claim 1, wherein the step of obtaining the position of a rectangular frame containing the vehicle in the image frame of the video comprises the following steps:
collecting a vehicle picture as a positive sample in an off-line manner, collecting a background picture as a negative sample, and training a classifier;
in the image frame, traversing and searching each position of the image in a sliding window mode, calling a trained classifier to perform online detection, and reserving a rectangular frame detected as a vehicle;
clustering a plurality of rectangular frames detected by a classifier in the image to obtain rectangular frames containing vehicles; and only one rectangular frame is contained for the same vehicle.
3. The method as claimed in claim 1, wherein the function of accumulating the horizontal edge information is a function of the accumulation of the horizontal edge informationThe creation and update process of (1) is as follows:
a creating process:
wherein the content of the first and second substances,for the purpose of the horizontal edge intensity response function,f is the history accumulated length;
and (3) updating:
4. the method as claimed in claim 1, wherein the target bottom edge position is calculated according to the horizontal edge information accumulation function for each rectangular frame in the current image, that is, for the ith rectangular frame region in the current image, the horizontal edge information accumulation function is set asThe horizontal edge detection process is as follows:
first, a horizontal edge information accumulation function L is calculatedtColumn Peak function of (a, b)Sum column peak mark functionMarking the row coordinate position of the possible horizontal edges in the image:
calculating a horizontal edge detection function Ft i(a):
Wherein Th1And Th2Respectively a threshold value of accumulated frame number and a threshold value of horizontal edge response intensity;
selecting a material satisfying Ft i(a)>Th2The maximum value a of the lower edge is used as the line position of the lower edge, so that the image line where the lower edge is located is determined, and the lower edge detection is completed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811567600.0A CN109815812B (en) | 2018-12-21 | 2018-12-21 | Vehicle bottom edge positioning method based on horizontal edge information accumulation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811567600.0A CN109815812B (en) | 2018-12-21 | 2018-12-21 | Vehicle bottom edge positioning method based on horizontal edge information accumulation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109815812A CN109815812A (en) | 2019-05-28 |
CN109815812B true CN109815812B (en) | 2020-12-04 |
Family
ID=66601808
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811567600.0A Active CN109815812B (en) | 2018-12-21 | 2018-12-21 | Vehicle bottom edge positioning method based on horizontal edge information accumulation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815812B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738181B (en) * | 2019-10-21 | 2022-08-05 | 东软睿驰汽车技术(沈阳)有限公司 | Method and device for determining vehicle orientation information |
CN111340877B (en) * | 2020-03-25 | 2023-10-27 | 北京爱笔科技有限公司 | Vehicle positioning method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101029824A (en) * | 2006-02-28 | 2007-09-05 | 沈阳东软软件股份有限公司 | Method and apparatus for positioning vehicle based on characteristics |
CN104036275A (en) * | 2014-05-22 | 2014-09-10 | 东软集团股份有限公司 | Method and device for detecting target objects in vehicle blind areas |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7720580B2 (en) * | 2004-12-23 | 2010-05-18 | Donnelly Corporation | Object detection system for vehicle |
CN103544487B (en) * | 2013-11-01 | 2019-11-22 | 扬州瑞控汽车电子有限公司 | Front truck recognition methods based on monocular vision |
CN105205459B (en) * | 2015-09-16 | 2019-02-05 | 东软集团股份有限公司 | A kind of recognition methods of characteristics of image vertex type and device |
CN105844222B (en) * | 2016-03-18 | 2019-07-30 | 上海欧菲智能车联科技有限公司 | The front vehicles collision warning systems and method of view-based access control model |
-
2018
- 2018-12-21 CN CN201811567600.0A patent/CN109815812B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101029824A (en) * | 2006-02-28 | 2007-09-05 | 沈阳东软软件股份有限公司 | Method and apparatus for positioning vehicle based on characteristics |
CN104036275A (en) * | 2014-05-22 | 2014-09-10 | 东软集团股份有限公司 | Method and device for detecting target objects in vehicle blind areas |
Also Published As
Publication number | Publication date |
---|---|
CN109815812A (en) | 2019-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111460926B (en) | Video pedestrian detection method fusing multi-target tracking clues | |
CN111693972B (en) | Vehicle position and speed estimation method based on binocular sequence images | |
Kong et al. | Vanishing point detection for road detection | |
Shi et al. | Fast and robust vanishing point detection for unstructured road following | |
US8446468B1 (en) | Moving object detection using a mobile infrared camera | |
Liu et al. | Rear vehicle detection and tracking for lane change assist | |
US8379928B2 (en) | Obstacle detection procedure for motor vehicle | |
CN108052904B (en) | Method and device for acquiring lane line | |
CN104282020A (en) | Vehicle speed detection method based on target motion track | |
Mu et al. | Multiscale edge fusion for vehicle detection based on difference of Gaussian | |
CN111340855A (en) | Road moving target detection method based on track prediction | |
US11093778B2 (en) | Method and system for selecting image region that facilitates blur kernel estimation | |
CN115578470B (en) | Monocular vision positioning method and device, storage medium and electronic equipment | |
CN110992424B (en) | Positioning method and system based on binocular vision | |
CN109815812B (en) | Vehicle bottom edge positioning method based on horizontal edge information accumulation | |
Jang et al. | Road lane semantic segmentation for high definition map | |
EP3633617A2 (en) | Image processing device | |
Ponsa et al. | On-board image-based vehicle detection and tracking | |
CN107506753B (en) | Multi-vehicle tracking method for dynamic video monitoring | |
CN115187941A (en) | Target detection positioning method, system, equipment and storage medium | |
CN114972427A (en) | Target tracking method based on monocular vision, terminal equipment and storage medium | |
CN113269007A (en) | Target tracking device and method for road monitoring video | |
Vajak et al. | A rethinking of real-time computer vision-based lane detection | |
CN113221739B (en) | Monocular vision-based vehicle distance measuring method | |
CN113361299B (en) | Abnormal parking detection method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |