CN114463660A - Vehicle type judging method based on video radar fusion perception

Vehicle type judging method based on video radar fusion perception

Info

Publication number
CN114463660A
CN114463660A
Authority
CN
China
Prior art keywords
video
radar
picture
vehicle
vehicle type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111525359.7A
Other languages
Chinese (zh)
Inventor
高超
何煜埕
张申浩
谢争明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Aerospace Dawei Technology Co Ltd
Original Assignee
Jiangsu Aerospace Dawei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Aerospace Dawei Technology Co Ltd filed Critical Jiangsu Aerospace Dawei Technology Co Ltd
Priority to CN202111525359.7A priority Critical patent/CN114463660A/en
Priority to PCT/CN2022/081188 priority patent/WO2023108931A1/en
Publication of CN114463660A publication Critical patent/CN114463660A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of target identification, and discloses a vehicle type judging method based on video radar fusion perception. A trapezoidal detection area is framed along the lane in the video picture, and coordinates inside the detection area of the video picture are mapped to the longitudinal and lateral distances of the radar; the length and width of the target detected by the radar are extracted, the RGB values of the picture inside the vehicle detection frame are extracted as a vector, and the target's length value, width value and RGB vector form a vector group representing the target; vector-group parameter samples of multiple videos and radars form a training set, which is trained with an SVM (support vector machine) trainer; data acquired in real time are input into the objective function of the SVM trainer to identify the vehicle type. The invention fuses the advantages of radar and video, so that the radar-video all-in-one unit can perform stable and accurate vehicle type detection in all weather and traffic conditions.

Description

Vehicle type judging method based on video radar fusion perception
Technical Field
The invention belongs to the technical field of target recognition, and particularly relates to a vehicle type judging method based on video radar fusion perception.
Background
In the face of increasingly complex road traffic conditions, and with the development of science and technology, radar and video sensing technologies are increasingly applied in intelligent transportation. A radar sensor measures the distance, speed and angle of surrounding objects by transmitting high-frequency electromagnetic waves and receiving their echoes, while a video sensor detects the type and angle of surrounding objects from the video image in the lens. However, both radar sensors and video sensors have limitations in practical applications. Radar technology, for one, offers low detail resolution of the environment and obstacles, particularly in angular resolution, and cannot identify the type of an object. Video technology, in turn, is strongly affected by illumination and by environments such as fog, rain and snow, and cannot accurately acquire the distance and speed of a target. How to effectively fuse video and radar data is therefore essential.
Disclosure of Invention
According to the method, the characteristic parameters with which the radar recognizes a vehicle and those with which the video recognizes a vehicle are extracted by combining radar and video. At the same time, abnormal situations in which a harsh environment degrades video recognition are handled by selecting different detection schemes. A support vector machine (SVM) is used for the computation, and the vehicle type is distinguished through multiple binary SVM decisions.
The invention discloses a vehicle type judging method based on video radar fusion perception, which comprises the following steps of:
framing a trapezoidal detection area along the lane in the video picture, and mapping coordinates inside the detection area of the video picture to the longitudinal and lateral distances of the radar;
extracting the length and width of the target detected by the radar, extracting the RGB values of the picture inside the vehicle detection frame as a vector, and forming a vector group representing the target from the target's length value, width value and RGB vector;
forming a training set from vector-group parameter samples of multiple videos and radars, and training with an SVM (support vector machine) trainer;
inputting the data acquired in real time into the objective function of the SVM trainer to identify the vehicle type.
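By way of illustration only, a minimal Python sketch of how such a fused vector group might be assembled; the function name, array shapes and dtype are assumptions, not taken from the disclosure:

```python
import numpy as np

def build_vector_group(c_l, c_w, rgb_patch=None):
    """Assemble the vector group for one target: the radar length/width,
    optionally preceded by the flattened RGB vector of the detection frame."""
    radar_part = np.array([c_l, c_w], dtype=np.float32)
    if rgb_patch is None:                 # video judged unreliable: [CL, CW] only
        return radar_part
    # rgb_patch: uniform H x W x 3 picture; flatten to the length W*H*3 vector
    return np.concatenate([rgb_patch.astype(np.float32).ravel(), radar_part])
```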
Further, the radar is installed at the perpendicular bisector of the detected lane.
Further, the step of framing the trapezoid detection area along the lane in the video frame includes:
in the video picture, a trapezoidal detection area is framed along the lane and the coordinates of its four vertices in the picture are recorded: (x1, y1), (x2, y2), (x3, y3), (x4, y4), where y1 = y2 and y3 = y4; the height of the trapezoidal detection area in the video picture is hv = y4 - y2;
calibrating the actual detection region width Dr and the number of lanes M;
dividing the height hv of the detection area in the video into n equal parts from top to bottom, and calibrating the actual distance dhi (i = 1, 2, ..., n, n+1) corresponding to each bisector;
calculating the actual distance Hk corresponding to the vertical coordinate yk (y2 ≤ yk ≤ y4) of any point in the detection area of the video picture;
calculating the functions of the left and right lane boundary lines L1 and L2 in the video picture:
[Equation images not reproduced: the boundary-line functions L1(y) and L2(y).]
calculating the actual distance Dk from the perpendicular bisector of the lane corresponding to any point (xk, yk) in the detection area, thereby mapping coordinates (xk, yk) in the detection area of the video picture to the radar's longitudinal distance Hk and lateral distance Dk.
Further, Hk is calculated by the following formula and steps:
[Equation images not reproduced: the interpolation formulas for Hk.]
Further, the actual distance Dk is calculated by the following formula and steps:
[Equation images not reproduced: the formulas for Dk.]
Further, extracting the length and width of the target detected by the radar and extracting the RGB value vector of the picture in the vehicle detection frame comprises:
using vehicle samples trained in advance at the video end, performing vehicle detection in the video detection area with a feature classifier, and extracting the detected vehicle coordinates (xk, yk) together with the detection frame's width WIDTHk and height HEIGHTk;
calculating the coordinates (Dk, Hk) in the radar coordinate system corresponding to the vehicle coordinates detected in the video;
finding the radar-detected target whose coordinates deviate least from (Dk, Hk), and extracting that target's length Cl and width Cw;
converting the picture inside the detected vehicle detection frame into a picture of width W and height H, extracting its RGB values, and recording them as a vector ωp of length W × H × 3.
Further, the training by the SVM trainer comprises:
constructing an SVM objective function:
[Equation image not reproduced: the SVM objective function.]
where yi ∈ {-1, 1}, δ is a positive integer, and xi is the vector formed from the parameters ωp, CL and Cw; when video feasibility is judged to be low the vector is [CL, CW], otherwise it is [ωp, CL, CW];
the vector-group parameter samples of n videos and radars form a training set, and training with an SVM (support vector machine) trainer yields the weight vector w (entering the objective as wᵀ) and the bias b that minimize ||w||.
Further, the method for determining that the video feasibility is low is as follows:
converting each frame of the picture to grey values and computing the mean picture brightness imgL; setting an upper limit imgL_max and a lower limit imgL_min for picture brightness, where brightness above the upper limit means the picture is too bright and brightness below the lower limit means it is too dark; if the accumulated number of frames below the lower brightness limit or above the upper brightness limit exceeds 3000, the condition is abnormal and the reliability of video detection is low.
Further, the method for determining that the video feasibility is low further comprises:
counting the number of vehicles C_num_v detected in the video detection area and the number C_num_r detected by the radar in the actual detection area; if |C_num_v - C_num_r| is greater than C_num_r * 0.2 and the accumulated time exceeds 120 seconds, video detection is abnormal and its feasibility is low.
Compared with the prior art, the invention has the beneficial effects that:
the method combines radar and video: when the video scene is suitable, multi-parameter fusion of video and radar gives high-accuracy vehicle type detection, and when video detection error is large in severe weather, detection uses the radar parameters alone, improving detection stability;
by fusing the advantages of radar and video, the radar-video all-in-one unit can perform stable and accurate vehicle type detection in all weather and traffic conditions.
Drawings
FIG. 1 is a schematic view of a video frame with a trapezoidal detection area defined along a lane;
FIG. 2 is a flow chart of the present invention for determining video confidence;
FIG. 3 is a flow chart of a method for determining vehicle type according to the present invention.
Detailed Description
The invention is further described with reference to the accompanying drawings, but the invention is not limited in any way, and any alterations or substitutions based on the teaching of the invention are within the scope of the invention.
Step one: unifying the positions of targets detected by the radar and the video.
1) A trapezoidal detection area is framed along the lane in the video frame as shown in FIG. 1, and the coordinates of its four vertices in the frame are recorded: (x1, y1), (x2, y2), (x3, y3), (x4, y4), where y1 = y2 and y3 = y4. The height of the trapezoidal detection area in the video picture is hv = y4 - y2 (in video pixels).
2) Calibrate the actual detection region width Dr and the number of lanes M.
3) Divide the height hv of the detection area in the video into n equal parts from top to bottom, and calibrate the actual distance dhi (i = 1, 2, ..., n, n+1) corresponding to each bisector.
4) Calculate the actual distance Hk corresponding to the vertical coordinate yk (y2 ≤ yk ≤ y4) of any point in the detection area of the video picture. The formula and steps are as follows:
[Equation images not reproduced: Hk is obtained from the calibrated distances dhi (i a positive integer), evidently by interpolation between adjacent bisectors.]
5) Calculate the functions of the left and right lane boundary lines L1 and L2 in the video picture:
[Equation images not reproduced: the boundary-line functions L1(y) and L2(y).]
6) The radar is installed on the perpendicular bisector of the detected lane. Calculate the actual distance Dk from the perpendicular bisector of the lane corresponding to any point (xk, yk) in the detection area of the video picture (negative to the left of the bisector, positive to the right).
[Equation images not reproduced: the formulas for Dk.]
Coordinates (xk, yk) in the detection area of the video picture can thereby be mapped to the radar's longitudinal distance Hk and lateral distance Dk.
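A minimal Python sketch of this pixel-to-radar mapping. The exact formulas are equation images not reproduced above, so the piecewise-linear interpolation for Hk and the linear cross-lane scaling for Dk below are an interpretation of the surrounding description, not the patent's verbatim formulas:

```python
import numpy as np

def pixel_to_radar(xk, yk, y2, y4, dh, x_left, x_right, Dr):
    """Map a pixel (xk, yk) inside the trapezoidal detection area to the
    radar's longitudinal distance Hk and lateral distance Dk.

    dh      : calibrated actual distances dh_1..dh_{n+1} at the bisector lines
    x_left  : callable y -> pixel x of the left boundary line L1 at row y
    x_right : callable y -> pixel x of the right boundary line L2 at row y
    Dr      : calibrated actual width of the detection region
    """
    n = len(dh) - 1
    hv = y4 - y2                       # trapezoid height in pixels
    # Longitudinal distance: locate yk's segment and interpolate linearly
    t = (yk - y2) / hv * n             # position measured in bisector segments
    i = min(int(t), n - 1)
    Hk = dh[i] + (dh[i + 1] - dh[i]) * (t - i)
    # Lateral distance: signed pixel offset from the lane's perpendicular
    # bisector, rescaled to the actual region width Dr (negative on the left)
    xl, xr = x_left(yk), x_right(yk)
    Dk = (xk - (xl + xr) / 2.0) / (xr - xl) * Dr
    return Hk, Dk
```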
Step two: radar parameter extraction and video parameter extraction
1) Using vehicle samples trained in advance at the video end, perform vehicle detection in the video detection area with a haar + cascades feature classifier, and extract the detected vehicle coordinates (xk, yk) together with the detection frame's width WIDTHk and height HEIGHTk.
2) Calculate, via the formulas of step one, the coordinates (Dk, Hk) in the radar coordinate system corresponding to the vehicle coordinates detected in the video. Find the radar-detected target whose coordinates deviate least from (Dk, Hk) and extract that target's length Cl and width Cw.
3) Set a uniform width W and height H, convert the picture inside the vehicle detection frame from step two 1) into a picture of width W and height H, extract its RGB values, and record them as a vector ωp of length W × H × 3.
4) Send ωp, CL and Cw to step four for vehicle type identification.
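A hedged sketch of the video-side extraction and the radar matching using OpenCV's Haar cascade detector; the cascade file name, uniform patch size and radar target format are illustrative assumptions:

```python
import cv2
import numpy as np

W, H = 64, 64                                      # assumed uniform patch size
car_cascade = cv2.CascadeClassifier("cars.xml")    # assumed pre-trained vehicle cascade

def extract_video_params(frame):
    """Detect vehicles and return, per vehicle, the frame coordinates,
    box size (WIDTHk, HEIGHTk) and the flattened RGB vector omega_p."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in car_cascade.detectMultiScale(gray, 1.1, 3):
        patch = cv2.resize(frame[y:y + h, x:x + w], (W, H))
        # OpenCV stores pixels in BGR order; reorder if strict RGB is required
        omega_p = patch.reshape(-1).astype(np.float32)   # length W*H*3
        results.append(((x, y), (w, h), omega_p))
    return results

def match_radar_target(Dk, Hk, radar_targets):
    """Pick the radar target whose (D, H) deviates least from the mapped
    coordinates; radar_targets is a list of (D, H, Cl, Cw) tuples."""
    D, Hm, Cl, Cw = min(radar_targets,
                        key=lambda t: (t[0] - Dk) ** 2 + (t[1] - Hk) ** 2)
    return Cl, Cw
```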
Step three: video processing exception condition troubleshooting
Because the influence of weather or light change is large during video detection, the accuracy of video vehicle detection is reduced when extreme weather such as heavy fog, heavy rain, strong light and the like or external factors interfere, and the accuracy of judging the vehicle type by combining the radar vision is influenced. Therefore, the abnormal condition of the video needs to be monitored in real time and checked, and the reliability of the video parameters is reduced when the video is abnormal. And when the video credibility is low, the vehicle type detection of the step four only uses radar data. As shown in fig. 2, the steps are as follows:
1) converting each frame of picture into a gray value, counting the mean value imgL of the picture brightness, setting the upper limit value imgL _ max and the lower limit value imgL _ min of the picture brightness, if the picture brightness exceeds the upper limit value, the picture is too bright, if the picture brightness is lower than the lower limit value, the picture is too dark, if the accumulated frame number is lower than the lower limit value of the brightness or higher than the upper limit value of the brightness, the frame number exceeds 3000 frames, and the abnormal condition is caused, and the video detection reliability is low at the moment.
2) Counting the number of vehicles C _ num _ v in the video detection area detected in the step two 1) and the number of vehicles C _ num _ r detected by the radar in the actual detection area. And calculating | C _ num _ v-C _ num _ r |, and if the value is greater than C _ num _ r 0.2 and the accumulated time is greater than 120 seconds, the video detection is abnormal, and the feasibility of the video detection is low.
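A minimal sketch of both checks, keeping the thresholds given in the text (3000 frames, 20 % of the radar count, 120 seconds); the brightness limits and the counter-reset behaviour are assumptions:

```python
import cv2

IMGL_MIN, IMGL_MAX = 40, 220   # assumed example brightness limits
FRAME_LIMIT = 3000             # from the text: abnormal beyond 3000 bad frames

def update_brightness_check(frame, bad_frames):
    """Check 1: accumulate frames whose mean grey level is out of range.
    (Resetting the counter on an in-range frame is an assumption.)"""
    imgL = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
    bad_frames = bad_frames + 1 if not (IMGL_MIN <= imgL <= IMGL_MAX) else 0
    return bad_frames, bad_frames > FRAME_LIMIT    # True => video unreliable

def count_check(c_num_v, c_num_r, mismatch_seconds):
    """Check 2: video and radar counts deviating by more than 20 % of the
    radar count for over 120 accumulated seconds marks video detection abnormal."""
    deviates = abs(c_num_v - c_num_r) > 0.2 * c_num_r
    return deviates and mismatch_seconds > 120     # True => video unreliable
```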
Step four: vehicle type recognition of radar and video parameters by using SVM (support vector machine)
1) Constructing an SVM objective function:
[Equation image not reproduced: the SVM objective function (formula 4).]
where yi ∈ {-1, 1}, δ is a positive integer, and xi is the vector composed of the parameters extracted in step two: [CL, CW] when step three judges video feasibility to be low, otherwise [ωp, CL, CW].
N video and radar parameter samples are prepared in advance, and training through an SVM trainer yields the weight vector w (entering the objective as wᵀ) and the bias b that minimize ||w||.
2) Vehicle type identification
Input the [ωp, CL, CW] collected in step two into the objective function; if the result is greater than or equal to δ the vehicle is a large vehicle, otherwise a small vehicle. If several vehicle types (large, medium and small) must be distinguished, several objective functions are constructed and the vehicle type is selected through repeated binary decisions. For example, to identify large, medium and small vehicles, two objective functions with different δ values (a first and a second objective function) are constructed according to formula 4. The training set of the first objective function consists of small-vehicle and medium-vehicle sample parameters, which are input into it for training; after training, the collected real-time vehicle image parameters are input into the first objective function, and a result greater than or equal to δ means a medium vehicle, otherwise a small vehicle. The training set of the second objective function consists of medium-vehicle and large-vehicle sample parameters, which are input into it for training; after training, the collected real-time vehicle image parameters are input into the second objective function, and a result greater than or equal to δ means a large vehicle, otherwise a medium vehicle.
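A hedged sketch of this cascaded binary decision using scikit-learn's LinearSVC as a stand-in for the unspecified SVM trainer (scikit-learn fixes the functional margin at 1 rather than the patent's δ, so the decision threshold here is 0; the labels and sample arrays are illustrative):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Stage 1 separates small from medium vehicles, stage 2 medium from large,
# mirroring the two objective functions with different delta values.
svm_small_mid = LinearSVC()
svm_mid_large = LinearSVC()

def train(samples_small, samples_mid, samples_large):
    """Each samples_* argument is an array of [omega_p, CL, CW] feature rows."""
    X1 = np.vstack([samples_small, samples_mid])
    y1 = np.array([-1] * len(samples_small) + [1] * len(samples_mid))
    svm_small_mid.fit(X1, y1)
    X2 = np.vstack([samples_mid, samples_large])
    y2 = np.array([-1] * len(samples_mid) + [1] * len(samples_large))
    svm_mid_large.fit(X2, y2)

def classify(x):
    """Cascaded binary decisions: small vs medium first, then medium vs large."""
    x = np.asarray(x).reshape(1, -1)
    if svm_small_mid.decision_function(x)[0] < 0:
        return "small"
    return "large" if svm_mid_large.decision_function(x)[0] >= 0 else "medium"
```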
Compared with the prior art, the invention has the beneficial effects that:
the method combines radar and video: when the video scene is suitable, multi-parameter fusion of video and radar gives high-accuracy vehicle type detection, and when video detection error is large in severe weather, detection uses the radar parameters alone, improving detection stability;
by fusing the advantages of radar and video, the radar-video all-in-one unit can perform stable and accurate vehicle type detection in all weather and traffic conditions.
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to include either of the permutations as a matter of course. That is, if X employs A; b is used as X; or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing examples.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, etc. Each apparatus or system described above may execute the method in the corresponding method embodiment.
In summary, the above embodiment is one implementation of the present invention, but the invention is not limited to it; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principles of the present invention should be regarded as an equivalent replacement within the protection scope of the present invention.

Claims (10)

1. A vehicle type judging method based on video radar fusion perception is characterized by comprising the following steps:
framing a trapezoidal detection area along the lane in the video picture, and mapping coordinates inside the detection area of the video picture to the longitudinal and lateral distances of the radar;
extracting the length and width of the target detected by the radar, extracting the RGB values of the picture inside the vehicle detection frame as a vector, and forming a vector group representing the target from the target's length value, width value and RGB vector;
forming a training set from vector-group parameter samples of multiple videos and radars, and training with an SVM (support vector machine) trainer;
inputting the data acquired in real time into the objective function of the SVM trainer to identify the vehicle type.
2. The method for judging the vehicle type based on the video radar fusion perception according to claim 1, wherein the radar is installed at a perpendicular bisector of a detected lane.
3. The method for judging the vehicle type based on the video radar fusion perception according to claim 1, wherein the step of defining a trapezoid detection area along a lane frame in a video picture comprises the following steps:
in the video picture, a trapezoidal detection area is framed along the lane and the coordinates of its four vertices in the picture are recorded: (x1, y1), (x2, y2), (x3, y3), (x4, y4), where y1 = y2 and y3 = y4; the height of the trapezoidal detection area in the video picture is hv = y4 - y2;
calibrating the actual detection region width Dr and the number of lanes M;
dividing the height hv of the detection area in the video into n equal parts from top to bottom, and calibrating the actual distance dhi (i = 1, 2, ..., n, n+1) corresponding to each bisector;
calculating the actual distance Hk corresponding to the vertical coordinate yk (y2 ≤ yk ≤ y4) of any point in the detection area of the video picture;
calculating the functions of the left and right lane boundary lines L1 and L2 in the video picture:
[Equation images not reproduced: the boundary-line functions L1(y) and L2(y).]
calculating the actual distance Dk from the perpendicular bisector of the lane corresponding to any point (xk, yk) in the detection area, thereby mapping coordinates (xk, yk) in the detection area of the video picture to the radar's longitudinal distance Hk and lateral distance Dk.
4. The method for vehicle type judgment based on video radar fusion perception according to claim 3, wherein Hk is calculated by the following formula and steps:
[Equation images not reproduced: the interpolation formulas for Hk.]
5. The method for vehicle type judgment based on video radar fusion perception according to claim 3, wherein the actual distance Dk is calculated by the following formula and steps:
[Equation images not reproduced: the formulas for Dk.]
6. The method for judging the vehicle type based on the video radar fusion perception according to claim 1, wherein extracting the length and width of the target detected by the radar and extracting the RGB value vector of the picture in the vehicle detection frame comprises:
using vehicle samples trained in advance at the video end, performing vehicle detection in the video detection area with a feature classifier, and extracting the detected vehicle coordinates (xk, yk) together with the detection frame's width WIDTHk and height HEIGHTk;
calculating the coordinates (Dk, Hk) in the radar coordinate system corresponding to the vehicle coordinates detected in the video;
finding the radar-detected target whose coordinates deviate least from (Dk, Hk), and extracting that target's length Cl and width Cw;
converting the picture inside the detected vehicle detection frame into a picture of width W and height H, extracting its RGB values, and recording them as a vector ωp of length W × H × 3.
7. The method for judging the vehicle type based on the video radar fusion perception according to claim 1, wherein the training through the SVM trainer comprises:
constructing an SVM objective function:
[Equation image not reproduced: the SVM objective function.]
where yi ∈ {-1, 1}, δ is a positive integer, and xi is the vector formed from the parameters ωp, CL and Cw; when video feasibility is judged to be low the vector is [CL, CW], otherwise it is [ωp, CL, CW];
the vector-group parameter samples of n videos and radars form a training set, and training with an SVM (support vector machine) trainer yields the weight vector w (entering the objective as wᵀ) and the bias b that minimize ||w||.
8. The method for judging the vehicle type based on the video radar fusion perception according to claim 1, wherein the method for determining that the video feasibility is low is as follows:
converting each frame of the picture to grey values and computing the mean picture brightness imgL; setting an upper limit imgL_max and a lower limit imgL_min for picture brightness, where brightness above the upper limit means the picture is too bright and brightness below the lower limit means it is too dark; if the accumulated number of frames below the lower brightness limit or above the upper brightness limit exceeds 3000, the condition is abnormal and the reliability of video detection is low.
9. The method for judging the vehicle type based on the video radar fusion perception according to claim 1, wherein the method for determining that the video feasibility is low further comprises:
counting the number of vehicles C_num_v detected in the video detection area and the number C_num_r detected by the radar in the actual detection area; if |C_num_v - C_num_r| is greater than C_num_r * 0.2 and the accumulated time exceeds 120 seconds, video detection is abnormal and its feasibility is low.
10. The method for judging the vehicle type based on the video radar fusion perception according to claim 7, wherein inputting the data collected in real time into the objective function of the SVM trainer to identify the vehicle type comprises: inputting the collected vehicle vector group parameters into the objective function; if the result is greater than or equal to δ the vehicle is a large vehicle, otherwise a small vehicle.
CN202111525359.7A 2021-12-14 2021-12-14 Vehicle type judging method based on video radar fusion perception Pending CN114463660A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111525359.7A CN114463660A (en) 2021-12-14 2021-12-14 Vehicle type judging method based on video radar fusion perception
PCT/CN2022/081188 WO2023108931A1 (en) 2021-12-14 2022-03-16 Vehicle model determining method based on video-radar fusion perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111525359.7A CN114463660A (en) 2021-12-14 2021-12-14 Vehicle type judging method based on video radar fusion perception

Publications (1)

Publication Number Publication Date
CN114463660A true CN114463660A (en) 2022-05-10

Family

ID=81405899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111525359.7A Pending CN114463660A (en) 2021-12-14 2021-12-14 Vehicle type judging method based on video radar fusion perception

Country Status (2)

Country Link
CN (1) CN114463660A (en)
WO (1) WO2023108931A1 (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810250B (en) * 2012-07-31 2014-07-02 长安大学 Video based multi-vehicle traffic information detection method
CN103559791B (en) * 2013-10-31 2015-11-18 北京联合大学 A kind of vehicle checking method merging radar and ccd video camera signal
US10037472B1 (en) * 2017-03-21 2018-07-31 Delphi Technologies, Inc. Automated vehicle object detection system with camera image and radar data fusion
CN106951879B (en) * 2017-03-29 2020-04-14 重庆大学 Multi-feature fusion vehicle detection method based on camera and millimeter wave radar
CN107895492A (en) * 2017-10-24 2018-04-10 河海大学 A kind of express highway intelligent analysis method based on conventional video
CN109948661B (en) * 2019-02-27 2023-04-07 江苏大学 3D vehicle detection method based on multi-sensor fusion
CN112162271A (en) * 2020-08-18 2021-01-01 河北省交通规划设计院 Vehicle type recognition method of microwave radar under multiple scenes
CN112541953B (en) * 2020-12-29 2023-04-14 江苏航天大为科技股份有限公司 Vehicle detection method based on radar signal and video synchronous coordinate mapping

Also Published As

Publication number Publication date
WO2023108931A1 (en) 2023-06-22

Similar Documents

Publication Publication Date Title
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
US9292750B2 (en) Method and apparatus for detecting traffic monitoring video
Goldbeck et al. Lane detection and tracking by video sensors
CN106951879B (en) Multi-feature fusion vehicle detection method based on camera and millimeter wave radar
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
JP4919036B2 (en) Moving object recognition device
US8670592B2 (en) Clear path detection using segmentation-based method
CN107463890B (en) A kind of Foregut fermenters and tracking based on monocular forward sight camera
JP5136504B2 (en) Object identification device
CN111563469A (en) Method and device for identifying irregular parking behaviors
CN112698302A (en) Sensor fusion target detection method under bumpy road condition
WO2003001473A1 (en) Vision-based collision threat detection system_
CN107389084A (en) Planning driving path planing method and storage medium
CN110197173B (en) Road edge detection method based on binocular vision
Lin et al. Lane departure and front collision warning using a single camera
EP2813973B1 (en) Method and system for processing video image
US20210350705A1 (en) Deep-learning-based driving assistance system and method thereof
CN112541953A (en) Vehicle detection method based on radar signal and video synchronous coordinate mapping
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN112927283A (en) Distance measuring method and device, storage medium and electronic equipment
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN105512641A (en) Method for using laser radar scanning method to calibrate dynamic pedestrians and vehicles in video in snowing or raining state
CN115327572A (en) Method for detecting obstacle in front of vehicle
CN112130153A (en) Method for realizing edge detection of unmanned vehicle based on millimeter wave radar and camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination