WO2023108931A1 - Vehicle model determining method based on video-radar fusion perception - Google Patents

Vehicle model determining method based on video-radar fusion perception

Info

Publication number
WO2023108931A1
WO2023108931A1 (PCT/CN2022/081188)
Authority
WO
WIPO (PCT)
Prior art keywords
video
radar
picture
vehicle
detection area
Prior art date
Application number
PCT/CN2022/081188
Other languages
French (fr)
Chinese (zh)
Inventor
高超
何煜埕
张申浩
谢争明
Original Assignee
江苏航天大为科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏航天大为科技股份有限公司
Publication of WO2023108931A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Definitions

  • The invention belongs to the technical field of target recognition, and in particular relates to a method for determining vehicle type based on video-radar fusion perception.
  • Radar sensors measure the distance, speed, and angle of surrounding objects by transmitting high-frequency electromagnetic waves and receiving their echoes.
  • Video sensors detect the type and angle of surrounding objects by analyzing the video image captured through the lens.
  • However, both radar sensors and video sensors have limitations in practical applications.
  • The limitations of radar technology are: first, its resolution of environmental and obstacle detail, especially its angular resolution, is low; second, it cannot identify the target type.
  • The limitations of video technology are: first, it is strongly affected by lighting and by environmental conditions such as fog, rain, and snow; second, it cannot accurately obtain a target's distance and speed. Effectively fusing video and radar data is therefore essential.
  • By combining radar and video, the present invention extracts the characteristic parameters of vehicles recognized by the radar and by the video, handles video-recognition anomalies caused by harsh environments, selects a detection scheme accordingly, and performs the computation with a support vector machine (SVM), distinguishing vehicle types through multiple SVM binary decisions.
  • The vehicle-type determination method based on video-radar fusion perception disclosed by the present invention includes the following steps: framing a trapezoidal detection area along the lane in the video picture, and mapping the coordinates within the detection area of the video picture to the longitudinal and lateral distances of the radar;
  • extracting the length and width of the target detected by the radar, extracting the RGB value vector of the picture inside the vehicle detection frame, and forming a vector group representing the target from the target's length value, width value, and RGB vector;
  • forming a training set from vector-group parameter samples of multiple video and radar detections and training it with an SVM trainer; and inputting data acquired in real time into the trained SVM objective function to identify the vehicle type.
  • The radar is installed on the mid-perpendicular line of the detected lane.
  • Framing the trapezoidal detection area along the lane in the video picture includes:
  • Extracting the length and width of the target detected by the radar and extracting the RGB value vector of the picture inside the vehicle detection frame include:
  • Training with the SVM trainer includes:
  • x_i is a vector group composed of the parameters ω_p, C_L, and C_W; when video reliability is judged to be low, the vector group is [C_L, C_W], otherwise it is [ω_p, C_L, C_W];
  • A training set is formed from n video and radar vector-group parameter samples and trained with the SVM trainer to obtain the minimum ||w||, that is, the minimal w^T and b.
  • Each frame is converted to grayscale and the mean picture brightness imgL is computed. Upper and lower brightness limits imgL_max and imgL_min are set: above the upper limit the picture is too bright, and below the lower limit it is too dark. If the cumulative number of frames below the lower brightness limit or above the upper brightness limit exceeds 3000, the situation is abnormal and the reliability of video detection is low.
  • The method for judging that video reliability is low further includes:
  • The present invention combines radar and video: when the video scene is suitable, it uses a video-radar multi-parameter fusion mode for high-accuracy vehicle-type detection, and when video detection error is large in bad weather it detects with radar parameters alone, thereby improving detection stability;
  • by fusing the advantages of radar and video, the integrated radar-video unit can perform stable and accurate vehicle-type detection around the clock in all weather conditions.
  • Fig. 1 is a schematic diagram of the trapezoidal detection area framed along the lane in the video picture of the present invention;
  • Fig. 2 is a flowchart of judging video reliability according to the present invention;
  • Fig. 3 is a flowchart of the vehicle-type determination method of the present invention.
  • Step 1: Unify the positions of radar and video detection targets.
  • The radar is installed on the mid-perpendicular line of the detected lane, and the actual distance D_k from the lane's mid-perpendicular line corresponding to the coordinates (x_k, y_k) of any point in the detection area of the video picture is calculated (negative to the left of the mid-perpendicular line, positive to the right).
  • The coordinates (x_k, y_k) within the detection area of the video picture can thus be mapped to the radar's longitudinal distance H_k and lateral distance D_k.
  • Step 2: Radar parameter extraction and video parameter extraction.
  • Using the formulas in Step 1, the coordinates (D_k, H_k) in the radar coordinate system corresponding to the detected vehicle's coordinates in the video are calculated. The radar-detected target whose coordinates deviate least from (D_k, H_k) is found, and that target's length C_L and width C_W are extracted.
  • ω_p, C_L, and C_W are sent to Step 4 for vehicle-type identification.
  • Step 3: Detect video-processing anomalies.
  • Step 4: Use an SVM on the radar and video parameters for vehicle-type identification.
  • y i ⁇ ⁇ -1,1 ⁇ , ⁇ is a positive integer
  • xi is a vector composed of parameters extracted in step 2.
  • the vector is [C L , C W ] , otherwise [ ⁇ p , CL ,C W ].
  • the vehicle is a large vehicle, otherwise it is a small vehicle. If it is necessary to distinguish multiple types of vehicles such as large, medium, and small, construct multiple objective functions and perform multiple binary selections to select vehicle types. Exemplarily, if it is necessary to identify large vehicles, medium vehicles and small vehicles, then construct two objective functions according to formula 4 (the first objective function and the second objective function, the ⁇ values in the first objective function and the second objective function are different in size ), the training set of the first objective function is composed of small car sample parameters and middle car sample parameters, which are input into the first objective function for training.
  • the collected real-time vehicle image parameters are input into the first objective function, If the result is greater than or equal to ⁇ , the vehicle is a medium vehicle, otherwise it is a small vehicle.
  • the training set of the second objective function is composed of the sample parameters of medium vehicles and large vehicles, which are input into the second objective function for training.
  • the collected real-time vehicle image parameters are input into the second objective function, if If the result is greater than or equal to ⁇ , the vehicle is a large vehicle, otherwise it is a medium vehicle.
  • The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs; rather, use of the word "preferred" is intended to present concepts in a concrete manner.
  • The term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless otherwise specified or clear from context, "X employs A or B" is meant to include any of the natural permutations: if X employs A, X employs B, or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing instances.
  • Each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may exist physically on its own, or two or more units may be integrated into one module.
  • The integrated modules may be implemented in the form of hardware or in the form of software functional modules. If the integrated modules are implemented as software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
  • Each of the above devices or systems may execute the storage method of the corresponding method embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to the technical field of target recognition. Disclosed is a vehicle model determining method based on video-radar fusion perception. The method comprises: framing a trapezoidal detection area along a lane in a video picture, and mapping the coordinates within the detection area of the video picture to the longitudinal and lateral distances of a radar; extracting the length and width of a target detected by the radar, extracting the RGB value vector of the picture in a vehicle detection frame, and forming a vector group representing the target from the target's length value, width value, and RGB vector; forming a training set from vector-group parameter samples of multiple video and radar detections and training it with an SVM trainer; and inputting data acquired in real time into the SVM trainer's objective function to identify the vehicle type. By combining the advantages of the radar and the video, the present invention enables an integrated radar-video unit to detect vehicle type stably and accurately around the clock in all weather conditions.

Description

A Method for Determining Vehicle Type Based on Video-Radar Fusion Perception

Technical Field

The invention belongs to the technical field of target recognition, and in particular relates to a method for determining vehicle type based on video-radar fusion perception.

Background Art

In the face of increasingly complex road traffic conditions, and with the development of science and technology, road-traffic management departments apply radar and video sensing technology ever more widely in intelligent transportation. Radar sensors measure the distance, speed, and angle of surrounding objects by transmitting high-frequency electromagnetic waves and receiving their echoes. Video sensors detect the type and angle of surrounding objects by analyzing the video image captured through the lens. However, both radar sensors and video sensors have limitations in practical applications. The limitations of radar technology are: first, its resolution of environmental and obstacle detail, especially its angular resolution, is low; second, it cannot identify the target type. The limitations of video technology are: first, it is strongly affected by lighting and by environmental conditions such as fog, rain, and snow; second, it cannot accurately obtain a target's distance and speed. Effectively fusing video and radar data is therefore essential.
Summary of the Invention

By combining radar and video, the present invention extracts the characteristic parameters of vehicles recognized by the radar and by the video, handles video-recognition anomalies caused by harsh environments, selects a detection scheme accordingly, and performs the computation with a support vector machine (SVM), distinguishing vehicle types through multiple SVM binary decisions.

The vehicle-type determination method based on video-radar fusion perception disclosed by the present invention includes the following steps: framing a trapezoidal detection area along the lane in the video picture, and mapping the coordinates within the detection area of the video picture to the longitudinal and lateral distances of the radar;

extracting the length and width of the target detected by the radar, extracting the RGB value vector of the picture inside the vehicle detection frame, and forming a vector group representing the target from the target's length value, width value, and RGB vector;

forming a training set from vector-group parameter samples of multiple video and radar detections and training it with an SVM trainer;

inputting data acquired in real time into the SVM trainer's objective function to identify the vehicle type.
Further, the radar is installed on the mid-perpendicular line of the detected lane.

Further, framing the trapezoidal detection area along the lane in the video picture includes:

framing the trapezoidal detection area along the lane in the video picture and recording the coordinates of its four vertices in the picture, (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), where y_1 = y_2 and y_3 = y_4, and calculating the height of the trapezoidal detection area in the video picture, h_v = y_4 - y_2;

calibrating the actual detection-area width D_r and the number of lanes M;

dividing the height h_v of the detection area in the video into n equal parts from top to bottom, and calibrating the actual distance dh_i (i = 1, 2, ..., n, n+1) corresponding to each division line of h_v;

calculating the actual distance H_k corresponding to the ordinate y_k (y_2 ≤ y_k ≤ y_4) of any point in the detection area of the video picture;

calculating the functions of the left and right lane boundary lines L_1 and L_2 in the video picture:
[Formula images PCTCN2022081188-appb-000001 and PCTCN2022081188-appb-000002 (the expressions for L_1 and L_2) are not reproduced in this text.]
calculating the actual distance D_k from the lane's mid-perpendicular line corresponding to the coordinates (x_k, y_k) of any point in the detection area of the video picture, and mapping the coordinates (x_k, y_k) within the detection area of the video picture to the radar's longitudinal distance H_k and lateral distance D_k.
Further, the calculation formula and steps for H_k are as follows:
[Formula images PCTCN2022081188-appb-000003 and PCTCN2022081188-appb-000004 are not reproduced in this text.]
Further, the calculation formula and steps for the actual distance D_k are as follows:
[Formula images PCTCN2022081188-appb-000005 through PCTCN2022081188-appb-000007 are not reproduced in this text.]
Further, extracting the length and width of the target detected by the radar and extracting the RGB value vector of the picture inside the vehicle detection frame include:

on the video side, using pre-trained vehicle samples, detecting vehicles within the video detection area with a feature classifier, and extracting the detected vehicle coordinates (x_k, y_k) together with the width WIDTH_k and height HEIGHT_k of the detected vehicle's detection frame;

calculating the coordinates (D_k, H_k) in the radar coordinate system corresponding to the detected vehicle's coordinates in the video;

finding the coordinates of the radar-detected target with the smallest deviation from (D_k, H_k), and extracting that target's length C_L and width C_W;

converting the picture inside the detected vehicle's detection frame into a picture of width W and height H, extracting the picture's RGB values, and recording them as a vector ω_p of length W*H*3.
Further, training with the SVM trainer includes:

constructing the SVM objective function:
[Formula image PCTCN2022081188-appb-000008 (the SVM objective function) is not reproduced in this text.]
where y_i ∈ {-1, 1}, δ is a positive integer, and x_i is a vector group composed of the parameters ω_p, C_L, and C_W; when video reliability is judged to be low, the vector group is [C_L, C_W], otherwise it is [ω_p, C_L, C_W];

forming a training set from n video and radar vector-group parameter samples and training with the SVM trainer to obtain the minimum ||w||, that is, the minimal w^T and b.
Further, the method for judging that video reliability is low is as follows:

each frame is converted to grayscale and the mean picture brightness imgL is computed; upper and lower brightness limits imgL_max and imgL_min are set, above which the picture is too bright and below which it is too dark; if the cumulative number of frames below the lower brightness limit or above the upper brightness limit exceeds 3000, the situation is abnormal and the reliability of video detection is low.

Further, the method for judging that video reliability is low also includes:

counting the number of vehicles C_num_v detected within the video detection area and the number of vehicles C_num_r detected by the radar within the actual detection area; if |C_num_v - C_num_r| is greater than C_num_r * 0.2 and the cumulative time exceeds 120 seconds, video detection is abnormal and its reliability is low.
Compared with the prior art, the beneficial effects of the present invention are:

the present invention combines radar and video, using a video-radar multi-parameter fusion mode for high-accuracy vehicle-type detection when the video scene is suitable, and using radar parameters alone when video detection error is large in bad weather, thereby improving detection stability;

by fusing the advantages of radar and video, the integrated radar-video unit can perform stable and accurate vehicle-type detection around the clock in all weather conditions.
Brief Description of the Drawings

Fig. 1 is a schematic diagram of the trapezoidal detection area framed along the lane in the video picture of the present invention;

Fig. 2 is a flowchart of judging video reliability according to the present invention;

Fig. 3 is a flowchart of the vehicle-type determination method of the present invention.
Detailed Description of Embodiments

The present invention is further described below with reference to the accompanying drawings, without limiting the invention in any way; any transformation or replacement based on the teaching of the present invention falls within the protection scope of the present invention.

Step 1: Unify the positions of radar and video detection targets.

1) As shown in Fig. 1, frame a trapezoidal detection area along the lane in the video picture and record the coordinates of its four vertices in the picture, (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), where y_1 = y_2 and y_3 = y_4. Calculate the height of the trapezoidal detection area in the video picture, h_v = y_4 - y_2 (in video pixels).

2) Calibrate the actual detection-area width D_r and the number of lanes M.

3) Divide the height h_v of the detection area in the video into n equal parts from top to bottom, and calibrate the actual distance dh_i (i = 1, 2, ..., n, n+1) corresponding to each division line of h_v.

4) Calculate the actual distance H_k corresponding to the ordinate y_k (y_2 ≤ y_k ≤ y_4) of any point in the detection area of the video picture. The calculation formula and steps are as follows:
[Formula images PCTCN2022081188-appb-000009 and PCTCN2022081188-appb-000010 are not reproduced in this text; i takes positive integer values.]
5) Calculate the functions of the left and right lane boundary lines L_1 and L_2 in the video picture:
[Formula images PCTCN2022081188-appb-000011 and PCTCN2022081188-appb-000012 (the expressions for L_1 and L_2) are not reproduced in this text.]
6) The radar is installed on the mid-perpendicular line of the detected lane. Calculate the actual distance D_k from the lane's mid-perpendicular line corresponding to the coordinates (x_k, y_k) of any point in the detection area of the video picture (negative to the left of the mid-perpendicular line, positive to the right):
[Formula images PCTCN2022081188-appb-000013 through PCTCN2022081188-appb-000015 are not reproduced in this text.]
The coordinates (x_k, y_k) within the detection area of the video picture can thus be mapped to the radar's longitudinal distance H_k and lateral distance D_k.
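The mapping formulas referenced above survive only as formula images in this text. The following is a minimal Python sketch of one plausible implementation, assuming piecewise-linear interpolation between the calibrated division-line distances dh_i and straight lane boundary lines through the trapezoid's vertices; the function name and the 0.5*D_r centerline offset are illustrative assumptions, not the patent's exact formulas.

```python
def map_video_to_radar(xk, yk, vertices, dh, D_r):
    """Map a video-pixel coordinate (xk, yk) inside the trapezoidal
    detection area to radar coordinates (Dk, Hk).

    vertices: [(x1, y1), (x2, y2), (x3, y3), (x4, y4)], y1 == y2, y3 == y4
    dh:       calibrated real distances dh_1 .. dh_{n+1} of the n+1
              equally spaced horizontal division lines, top to bottom
    D_r:      calibrated real width of the detection area
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = vertices
    hv = y4 - y2                      # trapezoid height in pixels
    n = len(dh) - 1                   # number of equal vertical segments

    # Longitudinal distance Hk: interpolate linearly between the two
    # calibrated division lines that bracket yk (assumption).
    t = (yk - y2) / hv * n
    i = min(int(t), n - 1)
    Hk = dh[i] + (t - i) * (dh[i + 1] - dh[i])

    # Lateral distance Dk: position of xk between the left boundary L1
    # (through vertices 1 and 4) and the right boundary L2 (through
    # vertices 2 and 3) at height yk, rescaled to the real width D_r;
    # negative to the left of the lane centerline, positive to the right.
    xl = x1 + (x4 - x1) * (yk - y1) / (y4 - y1)
    xr = x2 + (x3 - x2) * (yk - y2) / (y3 - y2)
    Dk = ((xk - xl) / (xr - xl) - 0.5) * D_r
    return Dk, Hk
```

Whether the patent interpolates linearly between division lines cannot be confirmed from the recovered text; the sketch treats that as an assumption.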
Step 2: Radar parameter extraction and video parameter extraction.

1) On the video side, using pre-trained vehicle samples, detect vehicles within the video detection area with a Haar-cascade feature classifier, and extract the detected vehicle coordinates (x_k, y_k) together with the width WIDTH_k and height HEIGHT_k of the detected vehicle's detection frame.

2) Using the formulas in Step 1, calculate the coordinates (D_k, H_k) in the radar coordinate system corresponding to the detected vehicle's coordinates in the video. Find the radar-detected target whose coordinates deviate least from (D_k, H_k), and extract that target's length C_L and width C_W.

3) Set a uniform width W and height H, convert the picture inside the detection frame from step 2-1) into a picture of width W and height H, extract the picture's RGB values, and record them as a vector ω_p of length W*H*3.

4) Send ω_p, C_L, and C_W to Step 4 for vehicle-type identification.
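As a rough, non-authoritative illustration of steps 1) to 4), the sketch below pairs an OpenCV Haar-cascade vehicle detector with a nearest-target search over radar tracks. The cascade file, the radar-track dictionary layout, the bottom-center reference point of the detection box, and the helper map_video_to_radar from the previous sketch are all assumptions; the patent does not specify them.

```python
import cv2

def extract_parameters(frame, cascade, radar_targets,
                       vertices, dh, D_r, W=64, H=64):
    """Return one (omega_p, C_L, C_W) tuple per vehicle detected in frame.

    radar_targets: list of dicts {'D': ..., 'H': ..., 'length': ..., 'width': ...}
    """
    results = []
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 1) Haar-cascade vehicle detection inside the video detection area
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 3):
        # 2) map the box's reference point into radar coordinates and
        #    take the radar target closest to (Dk, Hk)
        Dk, Hk = map_video_to_radar(x + w / 2, y + h, vertices, dh, D_r)
        best = min(radar_targets,
                   key=lambda t: (t['D'] - Dk) ** 2 + (t['H'] - Hk) ** 2)
        C_L, C_W = best['length'], best['width']
        # 3) normalize the crop to W x H and flatten its RGB values into
        #    a vector omega_p of length W * H * 3
        crop = cv2.resize(frame[y:y + h, x:x + w], (W, H))
        omega_p = crop.reshape(-1).astype(float)
        results.append((omega_p, C_L, C_W))
    return results

# usage sketch; the cascade file name is a placeholder:
# cascade = cv2.CascadeClassifier("vehicle_cascade.xml")
```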
Step 3: Detect video-processing anomalies.

Video detection is strongly affected by weather and changes in lighting. Under extreme weather such as heavy fog, heavy rain, or glare, or other external interference, the accuracy of video vehicle detection drops, which in turn affects the accuracy of the combined radar-video vehicle-type determination. The video must therefore be monitored for anomalies in real time, and the credibility of the video parameters lowered when the video is abnormal. When video reliability is low, vehicle-type detection in Step 4 uses only radar data. As shown in Fig. 2, the steps are as follows:

1) Convert each frame to grayscale and compute the mean picture brightness imgL. Set upper and lower brightness limits imgL_max and imgL_min: above the upper limit the picture is too bright, and below the lower limit it is too dark. If the cumulative number of frames below the lower brightness limit or above the upper brightness limit exceeds 3000, the situation is abnormal and the reliability of video detection is low.

2) Count the number of vehicles C_num_v detected within the video detection area in step 2-1) and the number of vehicles C_num_r detected by the radar within the actual detection area. Compute |C_num_v - C_num_r|; if this value is greater than C_num_r * 0.2 and the cumulative time exceeds 120 seconds, video detection is abnormal and its reliability is low.
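A minimal sketch of the two reliability checks, keeping the thresholds stated in the text (3000 over- or under-exposed frames; a 20% count deviation sustained for 120 seconds); the class layout, the names, and the reset behavior of the counters are illustrative assumptions.

```python
import cv2

class VideoReliabilityMonitor:
    """Tracks the two anomaly conditions of Step 3."""

    def __init__(self, imgL_min, imgL_max,
                 bad_frame_limit=3000, count_tolerance=0.2,
                 count_time_limit=120.0):
        self.imgL_min, self.imgL_max = imgL_min, imgL_max
        self.bad_frame_limit = bad_frame_limit
        self.count_tolerance = count_tolerance
        self.count_time_limit = count_time_limit
        self.bad_frames = 0        # cumulative too-bright / too-dark frames
        self.mismatch_time = 0.0   # seconds of video/radar count disagreement

    def update_brightness(self, frame):
        # 1) mean grayscale brightness against the configured limits
        imgL = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
        if imgL > self.imgL_max or imgL < self.imgL_min:
            self.bad_frames += 1

    def update_counts(self, c_num_v, c_num_r, dt):
        # 2) video vs. radar vehicle counts over a dt-second interval;
        #    resetting on agreement is an assumption
        if abs(c_num_v - c_num_r) > c_num_r * self.count_tolerance:
            self.mismatch_time += dt
        else:
            self.mismatch_time = 0.0

    def reliable(self):
        return (self.bad_frames <= self.bad_frame_limit
                and self.mismatch_time <= self.count_time_limit)
```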
Step 4: Use an SVM on the radar and video parameters for vehicle-type identification.

1) Construct the SVM objective function:

[Formula image PCTCN2022081188-appb-000016 (the SVM objective function, Formula 4) is not reproduced in this text.]

where y_i ∈ {-1, 1}, δ is a positive integer, and x_i is a vector composed of the parameters extracted in Step 2; when Step 3 judges video reliability to be low, the vector is [C_L, C_W], otherwise it is [ω_p, C_L, C_W].

Prepare n video and radar parameter samples in advance and train with the SVM trainer to obtain the minimum ||w||, that is, the minimal w^T and b.
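For reference only: since the objective function survives only as a formula image, a standard hard-margin SVM formulation with a margin constant δ, consistent with the surrounding text, would read as below; whether the patent uses exactly this form cannot be confirmed from the recovered text, so it is an assumption.

```latex
\min_{\mathbf{w},\,b} \; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
\quad \text{subject to} \quad
y_i\bigl(\mathbf{w}^{\mathsf{T}}\mathbf{x}_i + b\bigr) \ge \delta,
\qquad i = 1,\dots,n,\; y_i \in \{-1, 1\}.
```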
2) Vehicle-type identification

Input the [ω_p, C_L, C_W] collected in Step 2 into the objective function; if the result is greater than or equal to δ, the vehicle is a large vehicle, otherwise it is a small vehicle. If multiple vehicle types such as large, medium, and small must be distinguished, construct multiple objective functions and select the vehicle type through multiple binary decisions. For example, to identify large, medium, and small vehicles, construct two objective functions according to Formula 4 (a first objective function and a second objective function, whose δ values differ). The training set of the first objective function consists of small-vehicle and medium-vehicle sample parameters, which are input into the first objective function for training; after training, input the collected real-time vehicle image parameters into the first objective function, and if the result is greater than or equal to δ the vehicle is a medium vehicle, otherwise a small vehicle. The training set of the second objective function consists of medium-vehicle and large-vehicle sample parameters, which are input into the second objective function for training; after training, input the collected real-time vehicle image parameters into the second objective function, and if the result is greater than or equal to δ the vehicle is a large vehicle, otherwise a medium vehicle.
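A sketch of the cascaded binary decision, here using scikit-learn's LinearSVC in place of the patent's unspecified SVM trainer and modeling "result ≥ δ" with the signed decision function; the library choice, the labels, and the function names are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def make_vector(omega_p, C_L, C_W, video_reliable):
    """Build x_i: [C_L, C_W] when video reliability is low (Step 3),
    otherwise [omega_p, C_L, C_W]."""
    if not video_reliable:
        return np.array([C_L, C_W], dtype=float)
    return np.concatenate([np.asarray(omega_p, dtype=float), [C_L, C_W]])

def train_cascade(X_small, X_medium, X_large):
    """Train the two binary classifiers of the cascaded decision."""
    clf1 = LinearSVC()  # small (-1) vs. medium (+1)
    clf1.fit(np.vstack([X_small, X_medium]),
             [-1] * len(X_small) + [1] * len(X_medium))
    clf2 = LinearSVC()  # medium (-1) vs. large (+1)
    clf2.fit(np.vstack([X_medium, X_large]),
             [-1] * len(X_medium) + [1] * len(X_large))
    return clf1, clf2

def classify(clf1, clf2, x, delta=0.0):
    """Two successive binary decisions, mirroring 'result >= delta'."""
    if clf1.decision_function([x])[0] < delta:
        return "small"
    if clf2.decision_function([x])[0] < delta:
        return "medium"
    return "large"
```

As the text notes, the two stages may use different δ margins; the single delta parameter here is a simplification.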
Compared with the prior art, the beneficial effects of the present invention are:

the present invention combines radar and video, using a video-radar multi-parameter fusion mode for high-accuracy vehicle-type detection when the video scene is suitable, and using radar parameters alone when video detection error is large in bad weather, thereby improving detection stability;

by fusing the advantages of radar and video, the integrated radar-video unit can perform stable and accurate vehicle-type detection around the clock in all weather conditions.
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs; rather, use of the word "preferred" is intended to present concepts in a concrete manner. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless otherwise specified or clear from context, "X employs A or B" is meant to include any of the natural permutations: if X employs A, X employs B, or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing instances.

Moreover, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings. The present disclosure includes all such modifications and variations and is limited only by the scope of the appended claims. In particular, regarding the various functions performed by the above components (e.g., elements), the terms used to describe such components are intended to correspond to any component that performs the specified function of the described component (i.e., one that is functionally equivalent), even if it is not structurally equivalent to the disclosed structure that performs the function in the exemplary implementations shown herein, unless otherwise indicated. Furthermore, although a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Also, to the extent that the terms "includes", "has", "contains", or variants thereof are used in the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".

Each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules. If the integrated modules are implemented as software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Each of the above devices or systems may execute the storage method of the corresponding method embodiment.

In summary, the above embodiment is one implementation of the present invention, but the implementations of the present invention are not limited by this embodiment; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.

Claims (10)

  1. A method for determining vehicle type based on video-radar fusion perception, characterized in that it comprises the following steps:
    framing a trapezoidal detection area along the lane in the video picture, and mapping the coordinates within the detection area of the video picture to the longitudinal and lateral distances of the radar;
    extracting the length and width of the target detected by the radar, extracting the RGB value vector of the picture inside the vehicle detection frame, and forming a vector group representing the target from the target's length value, width value, and RGB vector;
    forming a training set from vector-group parameter samples of multiple video and radar detections and training it with an SVM trainer;
    inputting data acquired in real time into the SVM trainer's objective function to identify the vehicle type.
  2. The method for determining vehicle type based on video-radar fusion perception according to claim 1, characterized in that the radar is installed on the mid-perpendicular line of the detected lane.
  3. The method for determining vehicle type based on video-radar fusion perception according to claim 1, characterized in that framing the trapezoidal detection area along the lane in the video picture comprises:
    framing the trapezoidal detection area along the lane in the video picture and recording the coordinates of its four vertices in the picture, (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), where y_1 = y_2 and y_3 = y_4, and calculating the height of the trapezoidal detection area in the video picture, h_v = y_4 - y_2;
    calibrating the actual detection-area width D_r and the number of lanes M;
    dividing the height h_v of the detection area in the video into n equal parts from top to bottom, and calibrating the actual distance dh_i (i = 1, 2, ..., n, n+1) corresponding to each division line of h_v;
    calculating the actual distance H_k corresponding to the ordinate y_k (y_2 ≤ y_k ≤ y_4) of any point in the detection area of the video picture;
    calculating the functions of the left and right lane boundary lines L_1 and L_2 in the video picture:
    [Formula images PCTCN2022081188-appb-100001 and PCTCN2022081188-appb-100002 (the expressions for L_1 and L_2) are not reproduced in this text.]
    calculating the actual distance D_k from the lane's mid-perpendicular line corresponding to the coordinates (x_k, y_k) of any point in the detection area of the video picture, and mapping the coordinates (x_k, y_k) within the detection area of the video picture to the radar's longitudinal distance H_k and lateral distance D_k.
  4. The method for determining vehicle type based on video-radar fusion perception according to claim 3, characterized in that the calculation formula and steps for H_k are as follows:
    [Formula images PCTCN2022081188-appb-100003 and PCTCN2022081188-appb-100004 are not reproduced in this text.]
  5. The method for determining vehicle type based on video-radar fusion perception according to claim 3, characterized in that the calculation formula and steps for the actual distance D_k are as follows:
    [Formula images PCTCN2022081188-appb-100005 through PCTCN2022081188-appb-100007 are not reproduced in this text.]
  6. The method for determining vehicle type based on video-radar fusion perception according to claim 1, characterized in that extracting the length and width of the target detected by the radar and extracting the RGB value vector of the picture inside the vehicle detection frame comprise:
    on the video side, using pre-trained vehicle samples, detecting vehicles within the video detection area with a feature classifier, and extracting the detected vehicle coordinates (x_k, y_k) together with the width WIDTH_k and height HEIGHT_k of the detected vehicle's detection frame;
    calculating the coordinates (D_k, H_k) in the radar coordinate system corresponding to the detected vehicle's coordinates in the video;
    finding the coordinates of the radar-detected target with the smallest deviation from (D_k, H_k), and extracting that target's length C_L and width C_W;
    converting the picture inside the detected vehicle's detection frame into a picture of width W and height H, extracting the picture's RGB values, and recording them as a vector ω_p of length W*H*3.
  7. The method for determining vehicle type based on video-radar fusion perception according to claim 1, characterized in that training with the SVM trainer comprises:
    constructing the SVM objective function:
    [Formula image PCTCN2022081188-appb-100008 (the SVM objective function) is not reproduced in this text.]
    where y_i ∈ {-1, 1}, δ is a positive integer, and x_i is a vector group composed of the parameters ω_p, C_L, and C_W; when video reliability is judged to be low, the vector group is [C_L, C_W], otherwise it is [ω_p, C_L, C_W];
    forming a training set from n video and radar vector-group parameter samples and training with the SVM trainer to obtain the minimum ||w||, that is, the minimal w^T and b.
  8. The method for determining vehicle type based on video-radar fusion perception according to claim 1, characterized in that the method for judging that video reliability is low is as follows:
    each frame is converted to grayscale and the mean picture brightness imgL is computed; upper and lower brightness limits imgL_max and imgL_min are set, above which the picture is too bright and below which it is too dark; if the cumulative number of frames below the lower brightness limit or above the upper brightness limit exceeds 3000, the situation is abnormal and the reliability of video detection is low.
  9. The method for determining vehicle type based on video-radar fusion perception according to claim 1, characterized in that the method for judging that video reliability is low further comprises:
    counting the number of vehicles C_num_v detected within the video detection area and the number of vehicles C_num_r detected by the radar within the actual detection area; if |C_num_v - C_num_r| is greater than C_num_r * 0.2 and the cumulative time exceeds 120 seconds, video detection is abnormal and its reliability is low.
  10. The method for determining vehicle type based on video-radar fusion perception according to claim 7, characterized in that inputting data acquired in real time into the SVM trainer's objective function to identify the vehicle type comprises: inputting the collected vehicle vector-group parameters into the objective function; if the result is greater than or equal to δ, the vehicle is a large vehicle, otherwise it is a small vehicle.
PCT/CN2022/081188 2021-12-14 2022-03-16 Vehicle model determining method based on video-radar fusion perception WO2023108931A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111525359.7A CN114463660A (en) 2021-12-14 2021-12-14 Vehicle type judging method based on video radar fusion perception
CN202111525359.7 2021-12-14

Publications (1)

Publication Number Publication Date
WO2023108931A1

Family

ID=81405899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081188 WO2023108931A1 (en) 2021-12-14 2022-03-16 Vehicle model determining method based on video-radar fusion perception

Country Status (2)

Country Link
CN (1) CN114463660A (en)
WO (1) WO2023108931A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810250A (en) * 2012-07-31 2012-12-05 长安大学 Video based multi-vehicle traffic information detection method
CN103559791A (en) * 2013-10-31 2014-02-05 北京联合大学 Vehicle detection method fusing radar and CCD camera signals
CN106951879A (en) * 2017-03-29 2017-07-14 重庆大学 Multiple features fusion vehicle checking method based on camera and millimetre-wave radar
CN107895492A (en) * 2017-10-24 2018-04-10 河海大学 A kind of express highway intelligent analysis method based on conventional video
EP3379289A1 (en) * 2017-03-21 2018-09-26 Delphi Technologies LLC Automated vehicle object detection system with camera image and radar data fusion
CN109948661A (en) * 2019-02-27 2019-06-28 江苏大学 A kind of 3D vehicle checking method based on Multi-sensor Fusion
CN112162271A (en) * 2020-08-18 2021-01-01 河北省交通规划设计院 Vehicle type recognition method of microwave radar under multiple scenes
CN112541953A (en) * 2020-12-29 2021-03-23 江苏航天大为科技股份有限公司 Vehicle detection method based on radar signal and video synchronous coordinate mapping

Also Published As

Publication number Publication date
CN114463660A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
TWI722355B (en) Systems and methods for correcting a high-definition map based on detection of obstructing objects
US20230014874A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
CN107463890B (en) A kind of Foregut fermenters and tracking based on monocular forward sight camera
CN114842438B (en) Terrain detection method, system and readable storage medium for automatic driving automobile
WO2020000251A1 (en) Method for identifying video involving violation at intersection based on coordinated relay of video cameras
US20200041284A1 (en) Map road marking and road quality collecting apparatus and method based on adas system
US9292750B2 (en) Method and apparatus for detecting traffic monitoring video
Zheng et al. A novel vehicle detection method with high resolution highway aerial image
CN110197173B (en) Road edge detection method based on binocular vision
CN105718872A (en) Auxiliary method and system for rapid positioning of two-side lanes and detection of deflection angle of vehicle
CN107389084A (en) Planning driving path planing method and storage medium
WO2021253245A1 (en) Method and device for identifying vehicle lane changing tendency
EP2813973B1 (en) Method and system for processing video image
US10936920B2 (en) Determining geographical map features with multi-sensor input
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
Liu et al. Vehicle detection and ranging using two different focal length cameras
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN113449632A (en) Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
Ghahremannezhad et al. Robust road region extraction in video under various illumination and weather conditions
CN114648549A (en) Traffic scene target detection and positioning method fusing vision and laser radar
CN113029185B (en) Road marking change detection method and system in crowdsourcing type high-precision map updating
WO2023108931A1 (en) Vehicle model determining method based on video-radar fusion perception

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22905702

Country of ref document: EP

Kind code of ref document: A1