WO2017161544A1 - Vehicle speed measurement method and system based on single-camera video sequence matching - Google Patents


Info

Publication number
WO2017161544A1
WO2017161544A1 (PCT/CN2016/077292)
Authority
WO
WIPO (PCT)
Prior art keywords
target
video sequence
matching
vehicle
module
Prior art date
Application number
PCT/CN2016/077292
Other languages
English (en)
French (fr)
Inventor
裴继红
李伟洲
谢维信
Original Assignee
深圳大学 (Shenzhen University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 (Shenzhen University)
Priority to PCT/CN2016/077292 priority Critical patent/WO2017161544A1/zh
Publication of WO2017161544A1 publication Critical patent/WO2017161544A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/20 — Analysis of motion
    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G1/00 — Traffic control systems for road vehicles
    • G08G1/01 — Detecting movement of traffic to be counted or controlled
    • G08G1/052 — Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed

Definitions

  • the present invention relates to the field of image video processing technologies, and in particular, to a vehicle speed measurement method and system based on single camera video sequence matching.
  • Vehicle target matching is an important research topic in vehicle speed measurement: before the speed can be measured, vehicle targets usually need to be detected and identified. Target matching is the process of finding the target image within the video images under test by means of a specific algorithm.
  • The traditional target matching method is mainly image matching, that is, given two frames, the process of searching one frame for the target that also appears in the other.
  • The prior art holds that the most common method for matching targets between two frames is feature-based.
  • This method first extracts features from the images to be matched (color features, interest-point features, gradient features, edge features, and so on), then determines a geometric transformation using a similarity measure and certain constraints, and finally applies that transformation to the image to be matched.
  • To obtain an accurate match, a single feature is generally not used alone; more often, several of the above features are fused to complete the target matching.
  • Another approach is to match on interest-point features, such as the widely used SIFT features.
  • SIFT features are based on interest points of local appearance on the object and are invariant to image scale and rotation, so the matching precision is high.
  • The SIFT algorithm nevertheless has shortcomings: the feature vector has up to 128 dimensions, so matching involves a large amount of computation and is time-consuming; moreover, only gray-level information is used while color information is ignored, so the image information is not fully exploited.
  • In summary, the prior-art methods for matching a target between two frames suffer from high algorithmic complexity and low computational efficiency.
  • an object of the present invention is to provide a vehicle speed measurement method and system based on single camera video sequence matching, which aims to solve the problem of high algorithm complexity and low calculation efficiency of the vehicle target matching technology in the prior art.
  • the invention provides a vehicle speed measurement method based on single camera video sequence matching, the method comprising: establishing a data collection environment and starting to collect and read data; using a matching algorithm on the read data to find the same vehicle target appearing in different windows; and calculating the vehicle speed of that same vehicle target.
  • the step of establishing a data collection environment and starting to collect and read data specifically includes:
  • establishing an environment for recording video;
  • setting a window for capturing a vehicle target, and starting to collect video data;
  • reading the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
  • the step of using the matching algorithm to find the same vehicle target appearing in different windows according to the read data specifically includes:
  • preprocessing the vehicle target video sequence and the video sequence to be matched to achieve segmentation of the foreground target from the background and shadow removal;
  • extracting feature values and computing the color histogram of each frame of the foreground targets in the two video sequences, to obtain feature matrices M and N respectively;
  • matching the feature matrices M and N with the dynamic sliding-window matching theorem to obtain a set of correlation coefficient values, comparing the maximum correlation coefficient with a set threshold to obtain the matching result, and finding the same vehicle target appearing in different windows according to that result.
  • the step of calculating the vehicle speed of the same vehicle target specifically includes:
  • obtaining the respective frame numbers, in the vehicle target video sequence and in the video sequence to be matched, at which the target vehicle passes through the different windows;
  • calculating the actual distance between those frames;
  • calculating the vehicle speed of the target vehicle based on the actual distance.
  • the present invention also provides a vehicle speed measuring system based on single camera video sequence matching, the system comprising:
  • a pre-processing module for establishing a data collection environment and starting to collect and read data
  • a target matching module configured to use a matching algorithm to find the same vehicle target appearing in different windows according to the read data
  • a target speed measurement module for calculating the vehicle speed of the same vehicle target.
  • the pre-processing module comprises:
  • an environment establishing sub-module for establishing an environment for recording video;
  • a window setting sub-module for setting a window for capturing a vehicle target and starting to collect video data;
  • a video reading sub-module configured to read the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
  • the target matching module includes:
  • a foreground target sub-module configured to preprocess the two video sequences of the vehicle target video sequence and the video sequence to be matched, respectively, to implement segmentation and shadow removal of the foreground target and the background;
  • a feature extraction sub-module configured to extract feature values, and calculate a color histogram of each frame in the foreground object corresponding to the two video sequences to obtain a feature matrix M and a matrix N, respectively;
  • a feature comparison sub-module configured to perform matching according to the obtained feature matrix M and matrix N using the dynamic sliding-window matching theorem, obtain a set of correlation coefficient values, compare the maximum correlation coefficient with the set threshold to obtain a matching result, and find the same vehicle target appearing in different windows according to the matching result.
  • the target speed measuring module comprises:
  • a frame number obtaining sub-module configured to obtain a frame number of each of the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through different windows respectively;
  • a first calculation sub-module for calculating the actual distance between the frames;
  • a second calculating submodule configured to calculate a vehicle speed of the target vehicle according to the actual distance.
  • the technical solution provided by the invention greatly reduces the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
  • FIG. 1 is a flowchart of a vehicle speed measurement method based on single camera video sequence matching according to an embodiment of the present invention
  • FIG. 2 is a detailed flowchart of step S11 shown in FIG. 1 according to an embodiment of the present invention;
  • FIG. 3 is an environment diagram of video data collection in an embodiment of the present invention.
  • FIG. 4 is a comparison diagram of an actual road plan and a video image road map according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing a dynamic sliding window matching theorem according to an embodiment of the present invention.
  • FIG. 6 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention.
  • FIG. 7 is a detailed flowchart of step S123 shown in FIG. 6 according to an embodiment of the present invention.
  • FIG. 8 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing the internal structure of a vehicle speed measurement system 10 based on single camera video sequence matching according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing the internal structure of the preprocessing module 11 shown in FIG. 9 according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram showing the internal structure of the target matching module 12 shown in FIG. 9 according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram showing the internal structure of the target speed measuring module 13 shown in FIG. 9 according to an embodiment of the present invention.
  • a specific embodiment of the present invention provides a vehicle speed measurement method based on single camera video sequence matching, and the method mainly includes the following steps:
  • the vehicle speed measurement method based on single camera video sequence matching provided by the invention can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
  • a vehicle speed measurement method based on single camera video sequence matching provided by the present invention will be described in detail below.
  • FIG. 1 is a flowchart of a vehicle speed measurement method based on single camera video sequence matching according to an embodiment of the present invention.
  • step S11 a data collection environment is established and data acquisition and reading are started.
  • the step S11 of establishing a data collection environment and starting to collect and read data specifically includes S111-S113, as shown in FIG. 2.
  • FIG. 2 is a detailed flowchart of step S11 shown in FIG. 1 according to an embodiment of the present invention.
  • step S111 an environment for recording a video is established.
  • to prepare for data collection, an environment for recording video is established.
  • the camera is assumed to be fixed on a bridge, looking down the road along the direction in which vehicles travel.
  • step S112 a window for capturing a vehicle target is set, and acquisition of video data is started.
  • one lane is selected within the field of view, and two windows are set some distance apart. It is assumed that the vehicle travels straight in the same lane for the short period concerned, passing through window 1 and window 2 at different times.
  • the window is set to a rectangle, and the window is adaptively scaled according to distance in the field of view, as shown in FIG. 4.
  • the lane markings of the actual road plane are parallel and equally spaced; that is, the distance between P1 and P3 in the figure is equal to the distance between P2 and P4.
  • in the captured video, because the camera faces the direction of vehicle travel at a fixed downward angle, the road plane appears as in the right-hand image of FIG. 4: the distance between the points P2 and P4, close to the camera, is relatively large, while the distance between the points P1 and P3, farther away, is smaller; the farther away, the more the distance between the two lines shrinks in proportion.
  • since a moving vehicle also shrinks as it moves from near to far in the video, the window must likewise be scaled proportionally to ensure the stability and feasibility of the obtained data. The key is therefore to find the mapping between the actual road plane and the video image plane.
  • the P1 coordinate of the right figure of Fig. 4 is (x1, y1)
  • the P2 coordinate is (x2, y2)
  • the P3 coordinate is (x3, y1)
  • the P4 coordinate is (x4, y2). Since the scaling arises mainly from differences in the Y-axis position, the main factor determining the ratio is the y coordinate.
  • the window size should be scaled by a ratio γ relating the lane widths at the two rows, γ = d1/d2 with d1 = x3 - x1 at y1 and d2 = x4 - x2 at y2; this γ is the mapping we require, and it varies with the y coordinate.
  • the width of the window is first set to the distance between the two lane lines, so the width d differs at different y-coordinate values: at y1 the width is d1,
  • and at y2 the width is d2.
  • an adaptive method is then used to determine the height h of the window.
  • the window height h is tied to the width d in a fixed proportion, so as soon as d is determined, h is determined. Thus, once the width d of the window is drawn, h is generated automatically and a reasonable window is obtained.
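The adaptive window sizing described above can be sketched as follows. The linear interpolation of the lane width with the y coordinate and the height-to-width ratio `k` are illustrative assumptions, not values given in the patent.

```python
# Hypothetical sketch: perspective-scaled capture windows.
# Assumes the lane widths d1 (at far row y1) and d2 (at near row y2)
# are known, and that the image-plane lane width varies linearly in y.
# The height ratio k is an assumed parameter.

def lane_width_at(y, y1, y2, d1, d2):
    """Linearly interpolate the image-plane lane width d at row y."""
    t = (y - y1) / (y2 - y1)
    return d1 + t * (d2 - d1)

def make_window(y, y1, y2, d1, d2, k=0.5):
    """Return (width, height) of a capture window anchored at row y.
    The height h is tied to the width d, so h follows automatically."""
    d = lane_width_at(y, y1, y2, d1, d2)
    return d, k * d

# Example: lane is 40 px wide at far row y1=100 and 80 px at near
# row y2=400, so a window at y=250 sits halfway between the two.
w, h = make_window(250, 100, 400, 40.0, 80.0)
```

With these assumed numbers the window at y = 250 is 60 px wide and 30 px high, shrinking automatically toward the far end of the road.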
  • step S113 the collected video data is read to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
  • the vehicle target video sequence 1 is a sequence of consecutive frames captured by a certain vehicle through the window 1. In the present embodiment, the vehicle target video sequence 1 is only about 10-20 frames.
  • the video sequence 2 to be matched consists of many consecutive frames of cars passing through the window 2. In the present embodiment, the video sequence 2 to be matched is 1000 frames.
  • the target matching method proposed by the present invention is no longer based on a single frame image; rather, a continuous sequence of video frames serves as the basis for matching, completing one target match at a time.
  • the matching result is obtained from the relationship between the two sequences, which is called the dynamic sliding-window matching theorem.
  • two video sequences are acquired: one is the vehicle target video sequence 1, in which the target vehicle appears in the camera-area window; the other is the video sequence 2 to be matched. The aim is to determine whether the vehicle target of sequence 1 appears in the video sequence 2 to be matched.
  • suppose the vehicle target video sequence 1 has m frames,
  • and the video sequence 2 to be matched has n frames, with n > m.
  • a specific feature is selected as a representative value of each frame of the video sequence, so that the vehicle target video sequence 1 can form a matrix M of m columns, wherein each column of the matrix is a feature value of each frame of the video sequence.
  • the video sequence 2 to be matched can form a matrix N of n columns.
  • initially, the first column of the matrix M is aligned with the first column of the matrix N.
  • the matrix M then corresponds one-to-one with the first m columns of the matrix N, and a correlation metric is obtained by computing the correlation between M and those m columns of N.
  • the matrix M slides one column to the right at a time, and one correlation metric is computed after each slide.
  • this continues until M has slid n - m times, at which point n - m + 1 correlation metrics have been obtained.
  • step S12 the matching algorithm is used to find the same vehicle target appearing in different windows according to the read data.
  • the step S12 of searching for the same vehicle target appearing in different windows by using the matching algorithm according to the read data specifically includes S121-S123, as shown in FIG. 6.
  • FIG. 6 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention.
  • step S121 the two video sequences of the vehicle target video sequence and the video sequence to be matched are respectively preprocessed to realize segmentation and shadow removal of the foreground object and the background.
  • the original video sequences are subjected to background initialization, background update, and foreground target detection using mixture-of-Gaussians (MOG) background modeling. Because the foreground detected by the MOG background model contains motion shadows, the foreground target then undergoes shadow detection by the HSV ratio-space method, and the shadows are removed.
  • step S121 specifically includes the following two sub-steps (1), (2).
  • (1) MOG mixture-of-Gaussians background modeling extracts the foreground moving target in the video sequence.
  • (2) the foreground moving target extracted in (1) contains two moving components, namely the foreground target vehicle and the shadow of the moving vehicle. Therefore, to obtain the foreground target vehicle alone, in this example of the invention the foreground target undergoes shadow detection by the HSV ratio-space method, and the shadow is removed.
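As a rough illustration of the background-subtraction step, the sketch below uses a single running Gaussian per pixel, a deliberate simplification standing in for the mixture-of-Gaussians model the patent actually uses; the learning rate `alpha` and threshold `k` are assumed values.

```python
# Simplified stand-in for mixture-of-Gaussians (MOG) background modeling:
# one running Gaussian per pixel. A pixel is foreground when it lies more
# than k standard deviations from the background mean. alpha and k are
# illustrative assumptions, not parameters from the patent.

class RunningGaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=20.0):
        self.mean = [float(p) for p in first_frame]
        self.var = [init_var] * len(first_frame)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Update the model and return a foreground mask (1 = moving)."""
        mask = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            fg = d * d > (self.k ** 2) * self.var[i]
            mask.append(1 if fg else 0)
            # Background statistics absorb changes only slowly.
            self.mean[i] += self.alpha * d
            self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

# Tiny 4-pixel "video": a static background with one pixel jumping.
bg = RunningGaussianBackground([100, 100, 100, 100])
m1 = bg.apply([100, 100, 100, 100])   # no motion anywhere
m2 = bg.apply([100, 100, 200, 100])   # pixel 2 brightens sharply
```

A real MOG model keeps several Gaussians per pixel so that it can absorb multimodal backgrounds (swaying branches, flickering lights), but the detection test is the same in spirit.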
  • the basic idea of the HSV ratio-space shadow detection method is that, compared with the background pixel at the same position, a shadow pixel is darker in brightness, lower in saturation, and little changed in chromaticity. Accordingly, the brightness ratio, chromaticity difference, and saturation difference between each foreground moving-target pixel and the corresponding background pixel are compared against thresholds, and pixels satisfying the conditions are judged to be shadow.
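A minimal sketch of that HSV-ratio shadow test follows. The threshold values (`alpha`, `beta`, `tau_s`, `tau_h`) are illustrative assumptions; the patent does not state them.

```python
# Sketch of the HSV ratio-space shadow test described above: a foreground
# pixel is classified as shadow when it is darker than the background
# (bounded brightness ratio), not much more saturated, and close in hue.
# All thresholds here are assumed example values.

def is_shadow(fg_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_s=60, tau_h=30):
    fh, fs, fv = fg_hsv
    bh, bs, bv = bg_hsv
    ratio_ok = bv > 0 and alpha <= fv / bv <= beta     # darker, but not black
    sat_ok = fs - bs <= tau_s                          # saturation not much higher
    hue_ok = min(abs(fh - bh), 360 - abs(fh - bh)) <= tau_h  # hue barely changes
    return ratio_ok and sat_ok and hue_ok

# A pixel at 60% of the background brightness with similar hue/saturation
# should read as shadow; a vehicle pixel with a very different hue should not.
shadow = is_shadow((120, 40, 90), (118, 50, 150))
vehicle = is_shadow((10, 200, 140), (118, 50, 150))
```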
  • step S122 the feature values are extracted, and the color histogram of each frame in the foreground objects corresponding to the two video sequences is calculated to obtain the feature matrix M and the matrix N, respectively.
  • in image and video processing, and especially in recognition and matching, the most important questions are how to describe features and how to extract them.
  • feature selection is also an important factor in the quality of an algorithm, so choosing the right features plays a crucial role in video image processing.
  • Representative features for image recognition generally include the following: color, gradient, texture, shape, and the like.
  • Color histograms are the most commonly used statistical features in image and video processing. Each pixel of an image can be regarded as a point in a 3-dimensional space, and candidate color spaces include RGB, Munsell, CIE L*a*b*, CIE L*u*v*, HSV, and the like. To simplify verifying the feasibility of the dynamic sliding-window matching theorem, the embodiment of the present invention selects only the histogram of the RGB color space as the statistical feature.
  • the RGB three-dimensional space has three coordinate axes R, G, and B, each ranging from 0 to 255; the R, G, and B level histograms of each frame are concatenated into one column vector, which forms one column of the feature matrix.
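The per-frame feature construction just described can be sketched as follows. For brevity the example uses 8 bins per channel instead of the full 256 levels; that bin count is an illustrative choice, not the patent's.

```python
# Sketch of the per-frame feature: the concatenated R, G and B histograms
# of the foreground pixels form one column of the feature matrix. Using
# 8 bins per channel (rather than 256 levels) is an assumed simplification.

def frame_feature(pixels, bins=8):
    """pixels: list of (r, g, b) foreground pixels with values in 0..255.
    Returns the concatenated R|G|B histogram as one flat feature vector."""
    hist = [0] * (3 * bins)
    width = 256 // bins
    for r, g, b in pixels:
        hist[r // width] += 1                # R section of the vector
        hist[bins + g // width] += 1         # G section
        hist[2 * bins + b // width] += 1     # B section
    return hist

# Two red-ish pixels and one blue pixel from a hypothetical foreground mask.
f = frame_feature([(250, 10, 10), (240, 5, 15), (20, 30, 200)])
```

Stacking one such vector per frame column-by-column yields the matrices M and N used in the matching step.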
  • video sequence 1 has m frames;
  • video sequence 2 has n frames, with n > m.
  • the feature matrix M obtained from the vehicle target video sequence 1 is M = [f 1 f 2 ... f m-1 f m];
  • the feature matrix N of the video sequence 2 to be matched is N = [g 1 g 2 ... g n-1 g n].
  • step S123: according to the obtained feature matrix M and matrix N, matching is performed using the dynamic sliding-window matching theorem to obtain a set of correlation coefficient values; the maximum correlation coefficient is compared with the set threshold to obtain the matching result, and the same vehicle target appearing in different windows is found according to that result.
  • step S123 further includes four sub-steps S1231-S1234, as shown in FIG.
  • FIG. 7 is a detailed flowchart of step S123 shown in FIG. 6 according to an embodiment of the present invention.
  • step S1231: the first column of the obtained matrix M is aligned with the first column of the matrix N, and the correlation coefficient corr 1 between M and the corresponding m-column submatrix N(1) of N is calculated with the correlation coefficient formula and stored in an array.
  • step S1232: the matrix M slides one column to the right, so that its first column is aligned with the second column of the matrix N, and the correlation coefficient corr 2 between M and the submatrix N(2) is calculated and stored in the array.
  • step S1233: step S1232 is repeated, M sliding one column to the right each time and the correlation coefficient corr i between M and the submatrix N(i) being stored in the array, until M has slid n - m times.
  • step S1234: the largest correlation coefficient value corr max is found in the array saved in the preceding steps, and corr max is compared with a suitable threshold T; the T value is generally taken as 0.9. If corr max ≥ T, the matching succeeds: frames max through max + m - 1 of the video sequence 2 to be matched contain the target vehicle appearing in the vehicle target video sequence 1. If corr max < T, the matching fails, and the target vehicle of sequence 1 does not appear in the video sequence 2 to be matched.
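Steps S1231 through S1234 can be condensed into a short runnable sketch. The toy feature columns and the use of a Pearson correlation over the flattened matrices are assumptions for illustration; the patent only specifies a correlation coefficient and the threshold T of 0.9.

```python
# Runnable sketch of the dynamic sliding-window matching (S1231-S1234):
# slide the m-column target matrix M along the n-column matrix N, compute
# a Pearson correlation at each offset, and accept the best offset only
# if it reaches the threshold T (0.9 in the text). The tiny feature
# vectors below are toy data, not real color histograms.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def sliding_match(M, N, T=0.9):
    """M: list of m feature columns; N: list of n columns, n >= m.
    Returns (best_offset, best_corr, matched)."""
    m = len(M)
    flat_M = [v for col in M for v in col]
    corrs = []
    for i in range(len(N) - m + 1):
        flat_Ni = [v for col in N[i:i + m] for v in col]
        corrs.append(pearson(flat_M, flat_Ni))
    best = max(range(len(corrs)), key=corrs.__getitem__)
    return best, corrs[best], corrs[best] >= T

# The target's 2-frame feature sequence appears at offset 2 of N.
M = [[1.0, 2.0], [3.0, 1.0]]
N = [[0.0, 0.0], [5.0, 5.0], [1.0, 2.0], [3.0, 1.0], [0.0, 9.0]]
offset, corr, ok = sliding_match(M, N)
```

Because only one correlation per offset is computed over compact histogram columns, the whole match costs O((n - m + 1) · m) vector comparisons, which is the source of the complexity reduction the patent claims.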
  • step S13 the vehicle speed of the same vehicle target is calculated.
  • the step S13 of calculating the vehicle speed of the same vehicle target specifically includes S131-S133, as shown in FIG.
  • FIG. 8 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention.
  • step S131 the respective number of frames in the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through different windows respectively is acquired.
  • step S1234 having concluded that the vehicle target video sequence 1 and the video sequence 2 to be matched successfully matched the same car, the present embodiment selects one frame from the car's passage through window 1 and one frame from its passage through window 2 to calculate the speed. Let the frame selected from sequence 1 be frame f1 and the frame selected from sequence 2 be frame f2. As the vehicle target passes through a window, there is always one frame in which the car occupies the largest proportion of the window's area; in this embodiment, the ratio of the number of pixels in each frame's image mask to the total number of pixels in the window is computed, and the frame with the largest ratio is selected as f1 and f2, respectively.
  • step S132: the actual distance between the frames is calculated.
  • the window 1 and the window 2 in the image correspond to regions of the same size in the actual road plane.
  • the bottom edges of window 1 and window 2 are both placed, parallel to each other, at the lower end of a dash of the road's dashed lane line.
  • since the center of the vehicle body is roughly in the middle of the window, the middle point is used in the calculation, as shown in FIG. 4.
  • the distance L1 travelled by the vehicle is equal to the distance L2 between the bottom edge of window 1 and the bottom edge of window 2; it therefore suffices to know the distance L2 between the lower ends of the two road dashes.
  • L2 can be obtained from the road marking specifications.
  • step S133 the vehicle speed of the target vehicle is calculated based on the actual distance.
  • the actual distance between the car's positions at the two windows is thus obtained as L2, and the frames in which the car appears in the two windows are f1 and f2, respectively.
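The final speed computation then reduces to distance over elapsed time. The camera frame rate (25 fps below) is an assumed parameter, since the text does not state it.

```python
# Minimal sketch of step S133: the vehicle covers the real-world distance
# L2 between the two window baselines in (f2 - f1) frames. The 25 fps
# frame rate is an assumption; use the actual camera rate in practice.

def vehicle_speed(L2_m, f1, f2, fps=25.0):
    """Return speed in km/h given distance L2 (metres) and frame indices."""
    elapsed_s = (f2 - f1) / fps
    return (L2_m / elapsed_s) * 3.6

# A vehicle crossing L2 = 10 m between frame 100 and frame 130 (1.2 s).
v = vehicle_speed(10.0, 100, 130)
```

Here 10 m in 1.2 s gives 30 km/h; any error in L2 or in the chosen frames f1, f2 propagates linearly into the speed estimate.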
  • the vehicle speed measurement method based on single camera video sequence matching provided by the invention can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
  • the embodiment of the present invention further provides a vehicle speed measuring system 10 based on single camera video sequence matching, which mainly includes:
  • the pre-processing module 11 is configured to establish a data collection environment and start collecting and reading data
  • the target matching module 12 is configured to use a matching algorithm to find, according to the read data, the same vehicle target appearing in different windows.
  • the target speed measuring module 13 is configured to calculate the vehicle speed of the same vehicle target.
  • the invention provides a vehicle speed measuring system 10 based on single camera video sequence matching, which can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
  • the vehicle speed measuring system 10 based on the single camera video sequence matching mainly includes a preprocessing module 11 , a target matching module 12 , and a target speed measuring module 13 .
  • the pre-processing module 11 is configured to establish a data collection environment and start collecting and reading data.
  • the pre-processing module 11 specifically includes an environment establishing sub-module 111, a window setting sub-module 112, and a video reading sub-module 113, as shown in FIG.
  • FIG. 10 is a schematic diagram showing the internal structure of the preprocessing module 11 shown in FIG. 9 according to an embodiment of the present invention.
  • the environment creation sub-module 111 is used to establish an environment for recording video.
  • to prepare for data collection, an environment for recording video is established.
  • the camera is assumed to be fixed on a bridge, looking down the road along the direction in which vehicles travel.
  • the window setting sub-module 112 is configured to set a window for capturing a vehicle target and start collecting video data.
  • one lane is selected within the field of view, and two windows are set some distance apart.
  • the window is set to be a rectangle and is adaptively scaled according to distance in the field of view, as shown in FIG. 4; the related details are described above in step S112 and are not repeated here.
  • the video reading sub-module 113 is configured to read the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
  • the target matching module 12 is configured to use the matching algorithm to find the same vehicle target appearing in different windows according to the read data.
  • the target matching module 12 specifically includes a foreground target sub-module 121, a feature extraction sub-module 122, and a feature comparison sub-module 123, as shown in FIG.
  • FIG. 11 is a schematic diagram showing the internal structure of the target matching module 12 shown in FIG. 9 according to an embodiment of the present invention.
  • the foreground target sub-module 121 is configured to preprocess the two video sequences of the vehicle target video sequence and the video sequence to be matched, respectively, to implement segmentation and shadow removal of the foreground target and the background.
  • the specific pre-processing process is referred to the related description of the foregoing step S121, and the repeated description is not repeated here.
  • the feature extraction sub-module 122 is configured to extract feature values, and calculate a color histogram of each frame in the foreground object corresponding to the two video sequences to obtain a feature matrix M and a matrix N, respectively.
  • the feature comparison sub-module 123 is configured to perform matching according to the obtained feature matrix M and the matrix N by using a dynamic sliding window matching theorem, obtain a set of correlation coefficient values, and compare the maximum correlation coefficient value with the set threshold to obtain a matching result. And find the same vehicle target that appears in different windows based on the matching result.
  • the target speed measuring module 13 is configured to calculate the vehicle speed of the same vehicle target.
  • the target speed measurement module 13 specifically includes a frame number acquisition sub-module 131, a first calculation sub-module 132, and a second calculation sub-module 133, as shown in FIG.
  • FIG. 12 is a schematic diagram showing the internal structure of the target speed measuring module 13 shown in FIG. 9 according to an embodiment of the present invention.
  • a frame number obtaining sub-module 131, configured to acquire the respective frame numbers of the target vehicle in the vehicle target video sequence and in the video sequence to be matched when it passes through the different windows.
  • the specific frame number acquisition process is referred to the related description of the foregoing step S131, and the repeated description is not repeated here.
  • the first calculation sub-module 132 is configured to calculate the actual distance between the frames.
  • the calculation process of the actual distance between the frame and the frame is referred to the related description of the foregoing step S132, and the description thereof will not be repeated here.
  • the second calculation sub-module 133 is configured to calculate a vehicle speed of the target vehicle according to the actual distance.
  • the calculation process of the vehicle speed of the target vehicle is referred to the related description of the foregoing step S133, and the description thereof will not be repeated here.
  • the invention provides a vehicle speed measuring system 10 based on single camera video sequence matching, which can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
  • the units included are divided according to functional logic only, but the invention is not limited to this division so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are used only to distinguish them from one another and are not intended to limit the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A vehicle speed measurement method and system based on single-camera video sequence matching. The method comprises: establishing a data collection environment, and starting to collect and read data (S11); using a matching algorithm on the read data to find the same vehicle target appearing in different windows (S12); and calculating the vehicle speed of that same vehicle target (S13). The system greatly reduces the algorithmic complexity of vehicle target matching, thereby improving computational efficiency.

Description

A vehicle speed measurement method and system based on single-camera video sequence matching
Technical Field
The present invention relates to the field of image and video processing technologies, and in particular to a vehicle speed measurement method and system based on single-camera video sequence matching.
Background Art
With economic development and rising living standards, the number of automobiles has grown far faster than road infrastructure construction; the conflict between vehicles and roads has become increasingly acute, producing problems such as road congestion and frequent accidents. The main causes of accidents are speeding and drunk driving, so detecting whether vehicles are speeding brooks no delay. How to determine whether a vehicle is speeding and thereby safeguard traffic safety has become one of the popular research topics in intelligent transportation systems.
There are many methods for detecting and collecting real-time vehicle speed; the more mature ones at present include inductive-loop and radar speed measurement. Video-based vehicle speed detection has the advantages of simple installation, flexible configuration, and wide coverage. In recent years, video-based real-time speed measurement systems have developed considerably and have become a promising detection approach, with theoretical and practical value for the intelligent traffic management of urban roads and highways.
Research on video-based speed detection requires the displacement information and travel time of the target vehicle. The moving vehicle must therefore be detected and extracted, then tracked according to its features, and finally the vehicle speed is measured from the obtained position information. Vehicle target matching is an important research topic in vehicle speed measurement: before measuring speed, it is often necessary to detect and identify vehicle targets. Target matching is the process of finding the target image in the video images under test by means of a specific algorithm.
The traditional target matching method is mainly image matching, that is, given two frames, the process of searching one frame for the target that also appears in the other.
The prior art holds that the most common method for matching targets between two frames is feature-based matching. This method first extracts features from the images to be matched (color features, interest-point features, gradient features, edge features, and so on), determines a geometric transformation using a similarity measure and certain constraints, and finally applies the transformation to the image to be matched. To obtain a more accurate match, a single feature is generally not used alone; more often, several of the above features are fused to complete the target matching. Another practice is to match on interest-point features, for example the widely used SIFT features, which are based on interest points of local appearance on the object and are invariant to image scale and rotation, giving high matching precision. The SIFT algorithm nevertheless has shortcomings: the feature vector has up to 128 dimensions, matching involves a large amount of computation and is time-consuming, and only gray-level information is used while color information is ignored, so the image information is not fully exploited.
It can be seen that, of the prior-art methods for matching targets between two frames, one extracts multiple features and fuses them as the basis of matching; the intermediate step of extracting and fusing multiple features involves a large amount of computation, and the algorithm is rather complex. The other takes interest-point features as the basis of matching; it likewise suffers from large information volume and heavy computation.
In summary, prior-art methods for matching targets between two frames have the problems of high algorithmic complexity and low computational efficiency.
Summary of the Invention
In view of this, an object of the present invention is to provide a vehicle speed measurement method and system based on single-camera video sequence matching, aimed at solving the prior-art problems of high algorithmic complexity and low computational efficiency in vehicle target matching.
The present invention proposes a vehicle speed measurement method based on single-camera video sequence matching, the method comprising:
establishing a data acquisition environment and beginning to acquire and read data;
using a matching algorithm on the read data to find the same vehicle target appearing in different windows; and
calculating the speed of that vehicle target.
Preferably, the step of establishing a data acquisition environment and beginning to acquire and read data specifically comprises:
establishing a video recording environment;
setting windows for capturing vehicle targets and beginning to acquire video data; and
reading the acquired video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
Preferably, the step of using a matching algorithm on the read data to find the same vehicle target appearing in different windows specifically comprises:
preprocessing the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve foreground/background segmentation and shadow removal;
extracting feature values by computing the colour histogram of each frame of the foreground targets of the two video sequences, to obtain feature matrices M and N respectively; and
matching the obtained feature matrices M and N by the dynamic sliding-window matching principle to obtain a set of correlation coefficients, comparing the largest correlation coefficient with a set threshold to obtain a matching result, and finding the same vehicle target appearing in different windows according to the matching result.
Preferably, the step of calculating the speed of that vehicle target specifically comprises:
obtaining the respective frame numbers at which the target vehicle passes the different windows in the vehicle target video sequence and in the video sequence to be matched;
calculating the actual distance between the frames; and
calculating the speed of the target vehicle from the actual distance.
In another aspect, the present invention also provides a vehicle speed measurement system based on single-camera video sequence matching, the system comprising:
a preprocessing module for establishing a data acquisition environment and beginning to acquire and read data;
a target matching module for using a matching algorithm on the read data to find the same vehicle target appearing in different windows; and
a target speed measurement module for calculating the speed of that vehicle target.
Preferably, the preprocessing module comprises:
an environment establishment sub-module for establishing a video recording environment;
a window setting sub-module for setting windows for capturing vehicle targets and beginning to acquire video data; and
a video reading sub-module for reading the acquired video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
Preferably, the target matching module comprises:
a foreground target sub-module for preprocessing the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve foreground/background segmentation and shadow removal;
a feature extraction sub-module for extracting feature values by computing the colour histogram of each frame of the foreground targets of the two video sequences, to obtain feature matrices M and N respectively; and
a feature comparison sub-module for matching the obtained feature matrices M and N by the dynamic sliding-window matching principle to obtain a set of correlation coefficients, comparing the largest correlation coefficient with a set threshold to obtain a matching result, and finding the same vehicle target appearing in different windows according to the matching result.
Preferably, the target speed measurement module comprises:
a frame number acquisition sub-module for obtaining the respective frame numbers at which the target vehicle passes the different windows in the vehicle target video sequence and in the video sequence to be matched;
a first calculation sub-module for calculating the actual distance between the frames; and
a second calculation sub-module for calculating the speed of the target vehicle from the actual distance.
The technical solution provided by the present invention greatly reduces the algorithmic complexity of vehicle target matching and thereby improves computational efficiency.
Brief Description of the Drawings
Fig. 1 is a flowchart of a vehicle speed measurement method based on single-camera video sequence matching in an embodiment of the invention;
Fig. 2 is a detailed flowchart of step S11 of Fig. 1 in an embodiment of the invention;
Fig. 3 shows the video data acquisition environment in an embodiment of the invention;
Fig. 4 compares the actual road plane with the road in the video image in an embodiment of the invention;
Fig. 5 illustrates the dynamic sliding-window matching principle in an embodiment of the invention;
Fig. 6 is a detailed flowchart of step S12 of Fig. 1 in an embodiment of the invention;
Fig. 7 is a detailed flowchart of step S123 of Fig. 6 in an embodiment of the invention;
Fig. 8 is a detailed flowchart of step S13 of Fig. 1 in an embodiment of the invention;
Fig. 9 is a schematic diagram of the internal structure of a vehicle speed measurement system 10 based on single-camera video sequence matching in an embodiment of the invention;
Fig. 10 is a schematic diagram of the internal structure of the preprocessing module 11 of Fig. 9 in an embodiment of the invention;
Fig. 11 is a schematic diagram of the internal structure of the target matching module 12 of Fig. 9 in an embodiment of the invention;
Fig. 12 is a schematic diagram of the internal structure of the target speed measurement module 13 of Fig. 9 in an embodiment of the invention.
Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the invention and not to limit it.
An embodiment of the present invention provides a vehicle speed measurement method based on single-camera video sequence matching, the method mainly comprising the following steps:
S11, establishing a data acquisition environment and beginning to acquire and read data;
S12, using a matching algorithm on the read data to find the same vehicle target appearing in different windows;
S13, calculating the speed of that vehicle target.
The vehicle speed measurement method based on single-camera video sequence matching provided by the invention greatly reduces the algorithmic complexity of vehicle target matching and thereby improves computational efficiency.
The method is described in detail below.
Referring to Fig. 1, a flowchart of the vehicle speed measurement method based on single-camera video sequence matching in an embodiment of the invention.
In step S11, a data acquisition environment is established and the acquisition and reading of data begin.
In this embodiment, step S11 specifically comprises steps S111 to S113, as shown in Fig. 2.
Referring to Fig. 2, a detailed flowchart of step S11 of Fig. 1.
In step S111, a video recording environment is established.
In this embodiment, as preparation for data acquisition, a video recording environment is established. For example, as shown in Fig. 3, the camera is assumed to be fixed on a footbridge, looking obliquely down at the road along the direction of vehicle travel.
In step S112, windows for capturing vehicle targets are set and video acquisition begins.
In this embodiment, a lane is selected in the field of view and two windows are set a certain distance apart. It is assumed that, over a short time, vehicles travel in a straight line in the same lane and pass window 1 and window 2 at different moments. In this embodiment the windows are rectangular, and their proportions are adapted to the visual perspective, as shown in Fig. 4.
As can be seen in the left part of Fig. 4, the road lines in the actual road plane are parallel and equally spaced, i.e. the distance between P1 and P3 equals the distance between P2 and P4. In the actual video, however, because the camera faces the direction of vehicle travel at a fixed downward angle, the road plane appears as in the right part of Fig. 4: the distance between P2 and P4, which are near the camera, is larger, while the distance between the more distant P1 and P3 is smaller; and the farther away, the more the distance between the two lines shrinks, in proportion.
Since a moving vehicle also shrinks in the video as it moves from near to far, the windows must likewise be scaled in proportion to guarantee stable and usable data. The key is therefore to find the mapping between the actual road plane and the video image plane.
Suppose that in the right part of Fig. 4 P1 has coordinates (x1, y1), P2 (x2, y2), P3 (x3, y1) and P4 (x4, y2). Since the scaling varies mainly with position along the Y axis, the main factor determining the ratio is the y coordinate.
At y1, let the distance between P1 and P3 be d1;
at y2, let the distance between P2 and P4 be d2.
The ratio of the window sizes should then satisfy
φ = d1 / d2.
φ is precisely the mapping we require; φ varies with the y coordinate.
Since vehicles travel in a straight line within the lane, the window width can be set, when the window is defined, to the distance between the two lane lines, which fixes the width d at each y coordinate: for example d1 at y1 and d2 at y2.
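The fixed widths d1 at y1 and d2 at y2 suggest a simple way to realise the mapping in code. The sketch below linearly interpolates the lane width between the two measured rows; linearity is an illustrative assumption, since the source does not state the exact form of the mapping and true perspective scaling need not be linear in y.

```python
def lane_width_at(y, y1, d1, y2, d2):
    """Interpolate the lane width at image row y from two measured rows.

    (y1, d1) and (y2, d2) are rows where the width between the lane
    lines is known. Linear interpolation is an assumption here, not
    the mapping stated in the source.
    """
    t = (y - y1) / (y2 - y1)
    return d1 + t * (d2 - d1)
```

For example, with d1 = 40 pixels at row y1 = 100 and d2 = 80 pixels at row y2 = 300, the interpolated width halfway between the two rows is 60 pixels.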
Once the width is determined, the window height h is determined adaptively.
The window must be able to contain the vehicle body. Based on empirical values, h is set to a fixed fraction of the width d (the exact fraction is given only as an embedded formula image in the source), so that the window height h is always that fixed fraction of the width d.
Once d is determined, h is determined as well. Thus, as soon as the window width d is drawn, h is generated automatically and a reasonable window is obtained.
In step S113, the acquired video data are read to obtain the vehicle target video sequence and the video sequence to be matched, respectively.
Here, vehicle target video sequence 1 is the sequence of consecutive frames captured as a particular vehicle passes window 1; in this embodiment it contains only about 10 to 20 frames. The video sequence 2 to be matched contains the consecutive frames of many vehicles passing window 2; in this embodiment it is 1000 frames long.
To solve the problems of the prior art, the target matching method proposed by the invention no longer takes single frames as the basis of matching but instead the consecutive frame sequences of the video; that is, the match is obtained through the relation between two sequences, called here the dynamic sliding-window matching principle. First, two video sequences are obtained: vehicle target video sequence 1, in which the target vehicle appears in a camera window, and video sequence 2 to be matched. The aim is to determine whether the same target as in sequence 1 exists in sequence 2 and, if it does, to find the frames of sequence 2 in which the target vehicle appears. The basic idea of matching the two sequences is as follows.
As shown in Fig. 5, suppose one sequence has m frames and the other n frames, with m > n. First, a specific feature is chosen as the representative value of each frame, so that the m-frame sequence forms a matrix M of m columns, each column holding the feature value of one frame; likewise the n-frame sequence forms a matrix N of n columns. The first column of N is aligned with the first column of M, so that N corresponds one-to-one to the first n columns of M, and a correlation measure between N and those n columns of M is computed. N then slides to the right one column at a time, one correlation measure being computed per slide, and stops after sliding m − n times, yielding m − n + 1 correlation values in all, counting the initial alignment. Finally, these values form a distribution whose highest value is the most likely match, i.e. the same target.
Referring again to Fig. 1: in step S12, a matching algorithm is applied to the read data to find the same vehicle target appearing in different windows.
In this embodiment, step S12 specifically comprises steps S121 to S123, as shown in Fig. 6.
Referring to Fig. 6, a detailed flowchart of step S12 of Fig. 1.
In step S121, the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, are each preprocessed to achieve foreground/background segmentation and shadow removal.
In this embodiment, mixture-of-Gaussians (MOG) background modelling is used on the original video sequences for background initialisation, background updating and foreground target detection. Because the foreground targets detected by MOG carry moving shadows, the HSV ratio method is further used to detect and remove the shadows on the foreground targets.
In this embodiment, step S121 specifically comprises the following two sub-steps (1) and (2).
(1) Use mixture-of-Gaussians (MOG) background modelling on the original video sequences for background initialisation, background updating and foreground target detection.
MOG background modelling is a statistical model built for each individual pixel over the time series. It assumes that the probability density of the background value at a given pixel position over time can be expressed as a weighted sum of several Gaussian density functions; when the pixel of the next frame arrives, it is classified as a target pixel if the probability density computed under this model is small, and as a background pixel otherwise.
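As a rough illustration of this per-pixel statistical idea, the sketch below simplifies the mixture to a single Gaussian per pixel; the threshold k and learning rate alpha are illustrative values, not taken from the source.

```python
import numpy as np

def classify_pixels(frame, bg_mean, bg_var, k=2.5):
    """Label a pixel as foreground when it lies more than k standard
    deviations from the background Gaussian at that position
    (single-Gaussian simplification of the mixture model)."""
    deviation = np.abs(frame.astype(float) - bg_mean)
    return deviation > k * np.sqrt(bg_var)

def update_background(bg_mean, bg_var, frame, alpha=0.05):
    """Exponentially weighted running update of the background model."""
    f = frame.astype(float)
    mean = (1 - alpha) * bg_mean + alpha * f
    var = (1 - alpha) * bg_var + alpha * (f - mean) ** 2
    return mean, var
```

In a full MOG implementation each pixel keeps several weighted Gaussians and the best-matching component is updated per frame; the single-Gaussian version above only conveys the classify-then-update structure.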
(2) Use the HSV ratio method to detect shadows on the moving foreground targets and remove them.
In the embodiment of the invention, the moving foreground targets extracted in (1) contain two moving components, namely the foreground target vehicle and the shadow of the moving vehicle. To obtain the foreground target vehicle itself, the embodiment therefore uses the HSV ratio method to detect and remove shadows from the foreground targets.
The basic idea of HSV ratio shadow detection is that, compared with the background pixel at the same position, a shadow pixel is darker, less saturated, and little changed in hue. Accordingly, thresholds are applied to the luminance (value) ratio, hue difference and saturation difference between foreground pixels and background pixels, and pixels meeting the conditions are judged to be shadow.
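A minimal sketch of that ratio test is given below. The threshold values are illustrative assumptions (the source gives no concrete numbers), and the arrays are assumed to be HSV images with OpenCV-style 8-bit channel ranges.

```python
import numpy as np

def shadow_mask(fg_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_s=60.0, tau_h=50.0):
    """Mark a foreground pixel as shadow when its value (brightness)
    drops by a bounded ratio while saturation and hue change little
    relative to the background pixel at the same position.

    alpha/beta bound the V ratio; tau_s and tau_h bound the saturation
    and hue differences. All four are illustrative placeholders."""
    h_f, s_f, v_f = (fg_hsv[..., i].astype(float) for i in range(3))
    h_b, s_b, v_b = (bg_hsv[..., i].astype(float) for i in range(3))
    ratio = v_f / np.maximum(v_b, 1e-6)  # avoid division by zero
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(s_f - s_b) <= tau_s)
            & (np.abs(h_f - h_b) <= tau_h))
```

Pixels flagged by this mask are subtracted from the foreground mask, leaving only the vehicle body.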
In step S122, feature values are extracted: the colour histogram of each frame of the foreground targets of the two video sequences is computed to obtain the feature matrices M and N respectively.
In image and video processing, and in recognition and matching in particular, the key lies in the description of features and how to extract them; feature selection is also an important factor in how well an algorithm performs, so choosing the right features plays a crucial role in video image processing. Representative feature descriptions for image recognition include colour, gradient, texture and shape.
The colour histogram is the most commonly used statistical feature in image and video processing. Each pixel of an image can be regarded as a point in a three-dimensional space; colour spaces include RGB, Munsell, CIE L*a*b*, CIE L*u*v* and HSV. For ease of verifying the feasibility of the dynamic sliding-window matching principle, the embodiment uses only the RGB colour-space histogram as the statistical feature. RGB space has three axes R, G and B, each running from 0 to 255; the R, G and B histogram levels of each frame are concatenated into one column vector, which forms one column of the feature matrix.
If each frame is represented by such a vector, with video sequence 1 having m frames and video sequence 2 having n frames, m > n, then
for i = 1 : m
    fi = (Ri, Gi, Bi)′
end
The feature matrix M obtained from vehicle target video sequence 1 is
M = [f1 f2 … fn−1 fn … fm−1 fm];
and likewise the feature matrix N of video sequence 2 to be matched is
N = [g1 g2 … gn−1 gn].
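The per-frame feature vectors fi = (Ri, Gi, Bi)′ and their assembly into the matrices M and N can be sketched as follows; a minimal version assuming 8-bit RGB frames stored as NumPy arrays.

```python
import numpy as np

def frame_feature(frame_rgb, bins=256):
    """Concatenate the R, G and B histograms of one frame into a
    single column vector f_i = (R_i, G_i, B_i)'."""
    return np.concatenate([
        np.bincount(frame_rgb[..., c].ravel(), minlength=bins)
        for c in range(3)
    ])

def feature_matrix(frames):
    """Stack the per-frame feature vectors as columns, giving M or N."""
    return np.stack([frame_feature(f) for f in frames], axis=1)
```

Each column then has 3 × 256 = 768 entries, and each channel's 256 bins sum to the number of pixels in the frame.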
In step S123, the obtained feature matrices M and N are matched by the dynamic sliding-window matching principle to obtain a set of correlation coefficients; the largest correlation coefficient is compared with a set threshold to obtain the matching result, and the same vehicle target appearing in different windows is found according to that result.
Here matrix M has m columns and matrix N has n columns, m > n. In this embodiment, step S123 further comprises four sub-steps S1231 to S1234, as shown in Fig. 7.
Referring to Fig. 7, a detailed flowchart of step S123 of Fig. 6.
In step S1231, the first column of M is aligned with the first column of N, and the correlation coefficient corr1 between the aligned sub-matrix M(1) of M and the matrix N is computed and stored in an array.
In this embodiment, aligning the first columns of M and N means aligning the sub-matrix
M(1) = [f1 f2 … fn−1 fn]
of M with N.
Using the correlation-coefficient formula
corr(A, B) = Σᵢⱼ (Aᵢⱼ − Ā)(Bᵢⱼ − B̄) / √( Σᵢⱼ (Aᵢⱼ − Ā)² · Σᵢⱼ (Bᵢⱼ − B̄)² ),
where Ā and B̄ are the element means of A and B, the correlation coefficient corr1 between the aligned sub-matrix M(1) and the matrix N is computed and stored in an array.
In step S1232, N slides one column to the right, so that the second column of M is aligned with the first column of N; that is, the sub-matrix
M(2) = [f2 f3 … fn fn+1]
of M is aligned with N. The correlation coefficient corr2 between M(2) and N is computed with the same formula and stored in the array.
In step S1233, step S1232 is repeated until N has slid m − n times. Each time N slides one column to the right, the correlation coefficient corri between the aligned sub-matrix M(i) and N is computed and stored in the array, giving m − n + 1 values in all, counting the initial alignment.
In step S1234, the largest correlation value corrmax is found in the array and compared with a suitable threshold T; if corrmax ≥ T the match succeeds, otherwise it fails.
In this embodiment, the largest correlation value corrmax is found in the array saved in the steps above and compared with a suitable threshold T. If corrmax ≥ T the match succeeds, and frames max to max + n of the sequence to be matched contain the target vehicle of the vehicle target video sequence. If corrmax < T the match fails, and the sequence to be matched does not contain that target vehicle. T is generally taken as 0.9.
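Steps S1231 to S1234 can be sketched end to end as follows. This is a hedged illustration: `pearson` flattens each aligned sub-matrix and applies the standard correlation coefficient, and the 0.9 threshold follows the text.

```python
import numpy as np

def pearson(a, b):
    """Correlation coefficient between two equal-shaped matrices,
    treated as flat vectors."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def sliding_window_match(M, N, threshold=0.9):
    """Slide the n-column matrix N across the m-column matrix M one
    column at a time (steps S1231-S1233), collect the m - n + 1
    correlation values, and compare the largest with the threshold
    (step S1234)."""
    m, n = M.shape[1], N.shape[1]
    corrs = [pearson(M[:, i:i + n], N) for i in range(m - n + 1)]
    best = int(np.argmax(corrs))
    return corrs[best] >= threshold, best, corrs
```

If the match succeeds, the target occupies columns best to best + n − 1 of M, i.e. the corresponding frames of the longer sequence.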
Referring again to Fig. 1: in step S13, the speed of the matched vehicle target is calculated.
In this embodiment, step S13 specifically comprises steps S131 to S133, as shown in Fig. 8.
Referring to Fig. 8, a detailed flowchart of step S13 of Fig. 1.
In step S131, the respective frame numbers at which the target vehicle passes the different windows in the vehicle target video sequence and in the video sequence to be matched are obtained.
In this embodiment, once step S1234 has successfully matched the same vehicle between vehicle target video sequence 1 and video sequence 2 to be matched, one frame of the vehicle appearing in window 1 and one in window 2 are selected to compute the speed. Let the frame taken from sequence 1 be frame f1 and the frame taken from sequence 2 be frame f2. As a vehicle target passes a window, there is always one frame in which the vehicle occupies the largest proportion of the window area; in this embodiment that frame is selected as f1 and f2 respectively. The proportion is judged as the number of pixels whose mask value is 1 in each frame relative to the total number of pixels in the window, and the frame with the largest ratio is selected.
In step S132, the actual distance between the frames is calculated.
In this embodiment, the distance between the two windows can be obtained directly, so that the actual distance information is easily computed once the same target vehicle has been matched.
Because the window sizes in step S112 are scaled adaptively according to the actual proportions, windows 1 and 2 are identical in the actual road plane. When the windows are set in this embodiment, the bottom edges of windows 1 and 2 are both placed parallel to the lower end of a dashed road line. Since the centre of the vehicle body lies roughly in the middle of the window, the calculation uses the centre point, as shown in Fig. 4. The distance between the centres of window 2 and window 1 is then L1, and L1 equals the distance L2 between the bottom edge of window 1 and that of window 2; it therefore suffices to know the distance L2 between the lower ends of the two dashed road lines, which can be obtained from the road-marking specification.
In step S133, the speed of the target vehicle is calculated from the actual distance.
The actual distance between the two windows is L2 (in metres), and the frames in which the vehicle appears in the two windows are f1 and f2 respectively. The video is recorded at 25 frames per second, so one frame corresponds to 1/25 second.
The speed can therefore be computed as
v = L2 / ((f2 − f1) × (1/25))  (in metres per second).
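The speed computation reduces to a one-liner; a sketch with the 25 fps frame rate from the text, plus a km/h conversion added here for convenience.

```python
def vehicle_speed_mps(L2, f1, f2, fps=25):
    """Speed in metres per second: window distance L2 divided by the
    elapsed time of (f2 - f1) frames at 1/fps seconds per frame."""
    return L2 / ((f2 - f1) / fps)

def vehicle_speed_kmh(L2, f1, f2, fps=25):
    """The same speed expressed in km/h."""
    return vehicle_speed_mps(L2, f1, f2, fps) * 3.6
```

For example, a vehicle covering L2 = 20 m in 25 frames (one second of video) is travelling at 20 m/s, i.e. 72 km/h.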
The vehicle speed measurement method based on single-camera video sequence matching provided by the invention greatly reduces the algorithmic complexity of vehicle target matching and thereby improves computational efficiency.
An embodiment of the present invention also provides a vehicle speed measurement system 10 based on single-camera video sequence matching, mainly comprising:
a preprocessing module 11 for establishing a data acquisition environment and beginning to acquire and read data;
a target matching module 12 for using a matching algorithm on the read data to find the same vehicle target appearing in different windows; and
a target speed measurement module 13 for calculating the speed of that vehicle target.
The vehicle speed measurement system 10 based on single-camera video sequence matching provided by the invention greatly reduces the algorithmic complexity of vehicle target matching and thereby improves computational efficiency.
Referring to Fig. 9, a schematic structural diagram of the vehicle speed measurement system 10 based on single-camera video sequence matching in an embodiment of the invention. In this embodiment, the system 10 mainly comprises the preprocessing module 11, the target matching module 12 and the target speed measurement module 13.
The preprocessing module 11 establishes the data acquisition environment and begins acquiring and reading data.
In this embodiment, the preprocessing module 11 specifically comprises an environment establishment sub-module 111, a window setting sub-module 112 and a video reading sub-module 113, as shown in Fig. 10.
Referring to Fig. 10, a schematic diagram of the internal structure of the preprocessing module 11 of Fig. 9.
The environment establishment sub-module 111 establishes the video recording environment.
In this embodiment, as preparation for data acquisition, a video recording environment is established. For example, as shown in Fig. 3, the camera is assumed to be fixed on a footbridge, looking obliquely down at the road along the direction of vehicle travel.
The window setting sub-module 112 sets the windows for capturing vehicle targets and begins acquiring video data.
In this embodiment, a lane is selected in the field of view and two windows are set a certain distance apart. It is assumed that, over a short time, vehicles travel in a straight line in the same lane and pass window 1 and window 2 at different moments. The windows are rectangular and their proportions are adapted to the visual perspective, as shown in Fig. 4; for details see the description of step S112 above, which is not repeated here.
The video reading sub-module 113 reads the acquired video data to obtain the vehicle target video sequence and the video sequence to be matched, respectively.
Referring again to Fig. 9, the target matching module 12 applies a matching algorithm to the read data to find the same vehicle target appearing in different windows.
In this embodiment, the target matching module 12 specifically comprises a foreground target sub-module 121, a feature extraction sub-module 122 and a feature comparison sub-module 123, as shown in Fig. 11.
Referring to Fig. 11, a schematic diagram of the internal structure of the target matching module 12 of Fig. 9.
The foreground target sub-module 121 preprocesses the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve foreground/background segmentation and shadow removal.
For the specific preprocessing procedure see the description of step S121 above, which is not repeated here.
The feature extraction sub-module 122 extracts feature values by computing the colour histogram of each frame of the foreground targets of the two video sequences, to obtain the feature matrices M and N respectively.
For the specific extraction and computation procedure see the description of step S122 above, which is not repeated here.
The feature comparison sub-module 123 matches the obtained feature matrices M and N by the dynamic sliding-window matching principle, obtains a set of correlation coefficients, compares the largest with a set threshold to obtain the matching result, and finds the same vehicle target appearing in different windows according to that result.
For the specific feature matching procedure see the description of step S123 above, which is not repeated here.
Referring again to Fig. 9, the target speed measurement module 13 calculates the speed of the matched vehicle target.
In this embodiment, the target speed measurement module 13 specifically comprises a frame number acquisition sub-module 131, a first calculation sub-module 132 and a second calculation sub-module 133, as shown in Fig. 12.
Referring to Fig. 12, a schematic diagram of the internal structure of the target speed measurement module 13 of Fig. 9.
The frame number acquisition sub-module 131 obtains the respective frame numbers at which the target vehicle passes the different windows in the vehicle target video sequence and in the video sequence to be matched.
For the specific frame number acquisition procedure see the description of step S131 above, which is not repeated here.
The first calculation sub-module 132 calculates the actual distance between the frames.
For the calculation of the actual distance between the frames see the description of step S132 above, which is not repeated here.
The second calculation sub-module 133 calculates the speed of the target vehicle from the actual distance.
For the calculation of the speed of the target vehicle see the description of step S133 above, which is not repeated here.
The vehicle speed measurement system 10 based on single-camera video sequence matching provided by the invention greatly reduces the algorithmic complexity of vehicle target matching and thereby improves computational efficiency.
It should be noted that in the above embodiments the units are divided only according to functional logic; the division is not limited thereto as long as the corresponding functions can be implemented, and the specific names of the functional units serve only to distinguish them from one another and do not limit the scope of protection of the invention.
In addition, those of ordinary skill in the art will understand that all or part of the steps of the methods of the above embodiments can be carried out by a program instructing the relevant hardware, the program being stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (8)

  1. A vehicle speed measurement method based on single-camera video sequence matching, characterised in that the method comprises:
    establishing a data acquisition environment and beginning to acquire and read data;
    using a matching algorithm on the read data to find the same vehicle target appearing in different windows; and
    calculating the speed of that vehicle target.
  2. The vehicle speed measurement method based on single-camera video sequence matching of claim 1, characterised in that the step of establishing a data acquisition environment and beginning to acquire and read data specifically comprises:
    establishing a video recording environment;
    setting windows for capturing vehicle targets and beginning to acquire video data; and
    reading the acquired video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
  3. The vehicle speed measurement method based on single-camera video sequence matching of claim 2, characterised in that the step of using a matching algorithm on the read data to find the same vehicle target appearing in different windows specifically comprises:
    preprocessing the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve foreground/background segmentation and shadow removal;
    extracting feature values by computing the colour histogram of each frame of the foreground targets of the two video sequences, to obtain feature matrices M and N respectively; and
    matching the obtained feature matrices M and N by the dynamic sliding-window matching principle to obtain a set of correlation coefficients, comparing the largest correlation coefficient with a set threshold to obtain a matching result, and finding the same vehicle target appearing in different windows according to the matching result.
  4. The vehicle speed measurement method based on single-camera video sequence matching of claim 3, characterised in that the step of calculating the speed of that vehicle target specifically comprises:
    obtaining the respective frame numbers at which the target vehicle passes the different windows in the vehicle target video sequence and in the video sequence to be matched;
    calculating the actual distance between the frames; and
    calculating the speed of the target vehicle from the actual distance.
  5. A vehicle speed measurement system based on single-camera video sequence matching, characterised in that the system comprises:
    a preprocessing module for establishing a data acquisition environment and beginning to acquire and read data;
    a target matching module for using a matching algorithm on the read data to find the same vehicle target appearing in different windows; and
    a target speed measurement module for calculating the speed of that vehicle target.
  6. The vehicle speed measurement system based on single-camera video sequence matching of claim 5, characterised in that the preprocessing module comprises:
    an environment establishment sub-module for establishing a video recording environment;
    a window setting sub-module for setting windows for capturing vehicle targets and beginning to acquire video data; and
    a video reading sub-module for reading the acquired video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
  7. The vehicle speed measurement system based on single-camera video sequence matching of claim 6, characterised in that the target matching module comprises:
    a foreground target sub-module for preprocessing the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve foreground/background segmentation and shadow removal;
    a feature extraction sub-module for extracting feature values by computing the colour histogram of each frame of the foreground targets of the two video sequences, to obtain feature matrices M and N respectively; and
    a feature comparison sub-module for matching the obtained feature matrices M and N by the dynamic sliding-window matching principle to obtain a set of correlation coefficients, comparing the largest correlation coefficient with a set threshold to obtain a matching result, and finding the same vehicle target appearing in different windows according to the matching result.
  8. The vehicle speed measurement system based on single-camera video sequence matching of claim 7, characterised in that the target speed measurement module comprises:
    a frame number acquisition sub-module for obtaining the respective frame numbers at which the target vehicle passes the different windows in the vehicle target video sequence and in the video sequence to be matched;
    a first calculation sub-module for calculating the actual distance between the frames; and
    a second calculation sub-module for calculating the speed of the target vehicle from the actual distance.
PCT/CN2016/077292 2016-03-25 2016-03-25 Vehicle speed measurement method based on single-camera video sequence matching, and system therefor WO2017161544A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/077292 WO2017161544A1 (zh) 2016-03-25 2016-03-25 Vehicle speed measurement method based on single-camera video sequence matching, and system therefor


Publications (1)

Publication Number Publication Date
WO2017161544A1 true WO2017161544A1 (zh) 2017-09-28




Legal Events

Date Code Title Description
NENP Non-entry into the national phase. Ref country code: DE
121 Ep: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 16894899; Country of ref document: EP; Kind code of ref document: A1
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 03/12/2018)
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 20/08/2019)
122 Ep: PCT application non-entry in European phase. Ref document number: 16894899; Country of ref document: EP; Kind code of ref document: A1