WO2017161544A1 - Vehicle speed measurement method and system based on single-camera video sequence matching - Google Patents
Vehicle speed measurement method and system based on single-camera video sequence matching
- Publication number
- WO2017161544A1 (PCT/CN2016/077292, CN2016077292W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- video sequence
- matching
- vehicle
- module
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/052—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
Definitions
- the present invention relates to the field of image video processing technologies, and in particular, to a vehicle speed measurement method and system based on single camera video sequence matching.
- Vehicle target matching is an important research topic in vehicle speed measurement: the vehicle target must usually be detected and identified before its speed can be measured. Target matching is the process of finding a target image in the video images under test by means of a specific algorithm.
- The traditional target matching method is mainly image matching, that is, given two frames of images, the process of searching one frame for the same target that appears in the other frame.
- the prior art proposes that the most common method for matching targets between two frames of images is based on features.
- This method first extracts features from the images to be matched, which may be color features, interest-point features, gradient features, edge features, and so on; a geometric transformation is then determined from a similarity measure and certain constraints, and finally the transformation is applied to match the images.
- In general, target matching does not rely on a single feature; more often, several of the above features are combined to complete the matching.
- Another method is to find the feature of interest points for target matching, such as the commonly used SIFT features.
- SIFT features are based on interest points of local appearance on the object; they are invariant to image scale and rotation, and their matching precision is high.
- the SIFT algorithm also has shortcomings.
- The feature vector has as many as 128 dimensions, so the amount of data computed during matching is large and time-consuming; moreover, only gray-level information is used and color information is ignored, so the image information is not fully exploited.
- the method for matching a target between two frames of the prior art has a problem of high algorithm complexity and low computational efficiency.
- an object of the present invention is to provide a vehicle speed measurement method and system based on single camera video sequence matching, which aims to solve the problem of high algorithm complexity and low calculation efficiency of the vehicle target matching technology in the prior art.
- the invention provides a vehicle speed measurement method based on single camera video sequence matching, the method comprising:
- the step of establishing a data collection environment and starting to collect and read data specifically includes:
- the collected video data is read to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
- the step of using the matching algorithm to find the same vehicle target appearing in different windows according to the read data specifically includes:
- the dynamic sliding window matching theorem is used to match, and a set of correlation coefficient values is obtained.
- the maximum correlation coefficient value is compared with the set threshold to obtain a matching result, and different windows are searched according to the matching result.
- the step of calculating the vehicle speed of the same vehicle target specifically includes:
- the vehicle speed of the target vehicle is calculated based on the actual distance.
- the present invention also provides a vehicle speed measuring system based on single camera video sequence matching, the system comprising:
- a pre-processing module for establishing a data collection environment and starting to collect and read data
- a target matching module configured to use a matching algorithm to find the same vehicle target appearing in different windows according to the read data
- a target speed measurement module for calculating the vehicle speed of the same vehicle target.
- the pre-processing module comprises:
- a window setting sub-module for setting a window for capturing a vehicle target and starting to collect video data
- the video reading sub-module is configured to read the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
- the target matching module includes:
- a foreground target sub-module configured to preprocess the two video sequences of the vehicle target video sequence and the video sequence to be matched, respectively, to implement segmentation and shadow removal of the foreground target and the background;
- a feature extraction sub-module configured to extract feature values, and calculate a color histogram of each frame in the foreground object corresponding to the two video sequences to obtain a feature matrix M and a matrix N, respectively;
- the feature comparison sub-module is configured to perform matching with the dynamic sliding window matching theorem according to the obtained feature matrix M and matrix N, obtain a set of correlation coefficient values, compare the maximum correlation coefficient value with the set threshold to obtain a matching result, and search for the same vehicle target appearing in different windows based on the matching result.
- the target speed measuring module comprises:
- a frame number obtaining sub-module configured to obtain a frame number of each of the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through different windows respectively;
- a first calculation sub-module for calculating the actual distance between the two frames
- a second calculating submodule configured to calculate a vehicle speed of the target vehicle according to the actual distance.
- the technical solution provided by the invention greatly reduces the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
- FIG. 1 is a flowchart of a vehicle speed measurement method based on single camera video sequence matching according to an embodiment of the present invention
- FIG. 2 is a detailed flowchart of step S11 shown in FIG. 1 according to an embodiment of the present invention.
- FIG. 3 is an environment diagram of video data collection in an embodiment of the present invention.
- FIG. 4 is a comparison diagram of an actual road plan and a video image road map according to an embodiment of the present invention.
- FIG. 5 is a diagram showing a dynamic sliding window matching theorem according to an embodiment of the present invention.
- FIG. 6 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention.
- FIG. 7 is a detailed flowchart of step S123 shown in FIG. 6 according to an embodiment of the present invention.
- FIG. 8 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention.
- FIG. 9 is a schematic diagram showing the internal structure of a vehicle speed measurement system 10 based on single camera video sequence matching according to an embodiment of the present invention.
- FIG. 10 is a schematic diagram showing the internal structure of the preprocessing module 11 shown in FIG. 9 according to an embodiment of the present invention.
- FIG. 11 is a schematic diagram showing the internal structure of the target matching module 12 shown in FIG. 9 according to an embodiment of the present invention.
- FIG. 12 is a schematic diagram showing the internal structure of the target speed measuring module 13 shown in FIG. 9 according to an embodiment of the present invention.
- a specific embodiment of the present invention provides a vehicle speed measurement method based on single camera video sequence matching.
- the vehicle speed measurement method based on single camera video sequence matching provided by the invention can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
- a vehicle speed measurement method based on single camera video sequence matching provided by the present invention will be described in detail below.
- FIG. 1 is a flowchart of a vehicle speed measurement method based on single camera video sequence matching according to an embodiment of the present invention.
- step S11 a data collection environment is established and data acquisition and reading are started.
- the step S11 of establishing a data collection environment and starting to collect and read data specifically includes S111-S113, as shown in FIG. 2.
- FIG. 2 is a detailed flowchart of step S11 shown in FIG. 1 according to an embodiment of the present invention.
- step S111 an environment for recording a video is established.
- To prepare for data collection, an environment for recording video is established: the camera is fixed on a bridge, looking down the road along the direction of vehicle travel.
- step S112 a window for capturing a vehicle target is set, and acquisition of video data is started.
- One lane is selected in the field of view, and two windows are set a certain distance apart. It is assumed that the vehicle travels straight in the same lane for a short period of time, passing through window 1 and window 2 at different times.
- the window is set to a rectangle, and the window is adaptively adjusted according to the distance of the vision, as shown in FIG.
- The lane lines of the actual road plane are parallel and equally spaced; that is, the distance between P1 and P3 in the figure equals the distance between P2 and P4.
- In the video actually captured, because the camera faces the running direction of the vehicle at a fixed downward angle, the road plane appears as in the right image of FIG. 4. The distance between P2 and P4, which are close to the camera, is relatively large, while the distance between the more distant P1 and P3 is small; the farther away, the more the distance between the two lines shrinks in proportion.
- A moving vehicle likewise shrinks as it moves from near to far in the video. Therefore, the window must also be scaled in the same proportion to ensure that the collected data are stable and usable. The key is thus to find the mapping between the actual road plane and the video image plane.
- the P1 coordinate of the right figure of Fig. 4 is (x1, y1)
- the P2 coordinate is (x2, y2)
- the P3 coordinate is (x3, y1)
- the P4 coordinate is (x4, y2). Since the scaling arises mainly from differences in position along the Y axis, the main factor determining the ratio is the y coordinate.
- the proportion of the window size should satisfy:
- ρ is the mapping we require; its value varies with the y coordinate.
- The width of the window is first set to the distance between the two lane lines, so that the width d is fixed for a given y coordinate; for example, the width at y1 is d1 and the width at y2 is d2.
- the adaptive method is used to determine the height h of the window.
- The window height h is always a fixed multiple of the width d: as long as d is determined, h is determined. So once the width d of the window is drawn, h is generated automatically and a reasonable window is obtained.
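The adaptive window sizing described above can be sketched as follows. This is a minimal illustration, assuming the on-image lane width varies roughly linearly with the image row y between the far point pair (P1, P3) and the near pair (P2, P4), and that the height h is a fixed multiple k of the width d; the coordinate values and k below are illustrative, not taken from the patent.

```python
def lane_width(y, y_far, d_far, y_near, d_near):
    """Linearly interpolate the on-image lane width d at image row y."""
    t = (y - y_far) / (y_near - y_far)
    return d_far + t * (d_near - d_far)

def make_window(y, y_far=100, d_far=40, y_near=400, d_near=160, k=0.5):
    """Return (width, height) of the capture window whose bottom edge sits at row y."""
    d = lane_width(y, y_far, d_far, y_near, d_near)
    return d, k * d

# A far window is small, a near window proportionally larger:
w_far, h_far = make_window(100)    # (40.0, 20.0)
w_near, h_near = make_window(400)  # (160.0, 80.0)
```

Because h is tied to d, drawing only the width of each window is enough; the height follows automatically, which matches the adaptive behavior described in the text.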
- step S113 the collected video data is read to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
- The vehicle target video sequence 1 is a sequence of consecutive frames captured as a certain vehicle passes through window 1. In the present embodiment, the vehicle target video sequence 1 is only about 10-20 frames.
- The video sequence 2 to be matched consists of many consecutive frames of cars passing through window 2. In the present embodiment, the video sequence 2 to be matched is 1000 frames.
- The target matching method proposed by the present invention is no longer based on a single frame image; instead, a continuous sequence of video frames serves as the basis of matching, that is, one target matching is completed per sequence.
- The matching result is obtained from the relationship between the two sequences, which is called the dynamic sliding window matching theorem.
- Two video sequences are acquired: one is the vehicle target video sequence 1, in which the target vehicle appears in a camera-area window, and the other is the video sequence 2 to be matched; the goal is to find whether the vehicle target appears in the video sequence 2 to be matched.
- the vehicle target video sequence 1 has m frames
- the video sequence 2 to be matched has n frames, with m > n.
- A specific feature is selected as the representative value of each frame of a video sequence, so that the vehicle target video sequence 1 forms a matrix M of m columns, where each column of the matrix is the feature value of one frame of the video sequence.
- the video sequence 2 to be matched can form a matrix N of n columns.
- the first column of the matrix N corresponds to the first column of the matrix M.
- The matrix N then corresponds one-to-one with the first n columns of the matrix M, and a correlation metric is obtained by calculating the correlation between the matrix N and the n corresponding columns of the matrix M.
- The matrix N then starts to slide to the right, one column at a time; after each slide a correlation metric is calculated.
- Sliding stops after m-n slides, at which point the m-n correlation metrics have been obtained.
- step S12 the matching algorithm is used to find the same vehicle target appearing in different windows according to the read data.
- the step S12 of searching for the same vehicle target appearing in different windows by using the matching algorithm according to the read data specifically includes S121-S123, as shown in FIG. 6.
- FIG. 6 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention.
- step S121 the two video sequences of the vehicle target video sequence and the video sequence to be matched are respectively preprocessed to realize segmentation and shadow removal of the foreground object and the background.
- The original video sequences are subjected to background initialization, background update, and foreground target detection using mixed-Gaussian background modeling (MOG). Because the foreground targets detected by the mixed Gaussian background model contain motion shadows, the foreground targets are further subjected to shadow detection by the HSV ratio-space method, and the shadows are removed.
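To make the background-modeling step concrete, here is a deliberately simplified stand-in: one running Gaussian per pixel instead of the mixture of Gaussians that MOG actually maintains. A pixel is marked foreground when it deviates from the background mean by more than a fixed number of standard deviations; background pixels update the model. The learning rate and threshold below are illustrative assumptions.

```python
ALPHA = 0.05   # learning rate for the background update (assumed)
LAMBDA = 2.5   # deviation threshold in standard deviations (assumed)

def update_background(mean, var, frame):
    """Update per-pixel mean/variance in place and return the foreground mask."""
    mask = []
    for i, x in enumerate(frame):
        d = x - mean[i]
        fg = d * d > LAMBDA * LAMBDA * var[i]
        mask.append(1 if fg else 0)
        if not fg:  # only background pixels update the model
            mean[i] += ALPHA * d
            var[i] = (1 - ALPHA) * var[i] + ALPHA * d * d
    return mask

# Toy 4-pixel frames: pixel 2 suddenly brightens (a "vehicle" arriving).
mean = [10.0, 10.0, 10.0, 10.0]
var = [4.0, 4.0, 4.0, 4.0]
update_background(mean, var, [10, 11, 10, 10])   # all background
mask = update_background(mean, var, [10, 11, 200, 10])
# mask -> [0, 0, 1, 0]
```

Real MOG keeps several weighted Gaussians per pixel so that a multi-modal background (e.g. swaying foliage) is handled; the single-Gaussian sketch only conveys the update-and-threshold idea.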
- step S121 specifically includes the following two sub-steps (1), (2).
- (1) MOG mixed Gaussian background modeling.
- (2) The foreground moving target in the video sequence has been extracted in (1); it contains two moving components, namely the foreground target vehicle and the shadow of the moving vehicle. Therefore, in order to obtain the foreground target vehicle alone, in this example of the present invention the foreground target is subjected to shadow detection using the HSV ratio-space method, and the shadow is removed.
- The basic idea of the HSV ratio-space shadow detection method is that, compared with the background pixel at the same position, a pixel in a shadow region is darker in brightness, lower in saturation, and little changed in chromaticity. According to this property, the brightness ratio, chromaticity difference, and saturation difference between each foreground moving-target pixel and the corresponding background pixel are compared against thresholds, and pixels satisfying the conditions are judged to be shadow.
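The per-pixel shadow test just described can be sketched as follows. The threshold values are illustrative assumptions, not values from the patent: a shadow darkens the pixel (V ratio within a band below 1), lowers saturation only slightly, and leaves hue nearly unchanged.

```python
ALPHA_V, BETA_V = 0.4, 0.9   # shadow darkens, but not to black (assumed band)
TAU_S, TAU_H = 60, 30        # max saturation drop / hue change (assumed)

def is_shadow(fg_hsv, bg_hsv):
    """Return True when a foreground pixel looks like the shadow of the background pixel."""
    fh, fs, fv = fg_hsv
    bh, bs, bv = bg_hsv
    ratio_ok = bv > 0 and ALPHA_V <= fv / bv <= BETA_V   # brightness ratio in band
    sat_ok = (bs - fs) < TAU_S                           # saturation only slightly lower
    hue_ok = abs(fh - bh) < TAU_H                        # chromaticity nearly unchanged
    return ratio_ok and sat_ok and hue_ok

# A darkened road pixel (shadow) vs. a genuinely different vehicle pixel:
print(is_shadow((100, 40, 90), (105, 60, 150)))   # True  (shadow)
print(is_shadow((20, 200, 90), (105, 60, 150)))   # False (vehicle)
```

Pixels flagged as shadow are simply dropped from the foreground mask, leaving only the vehicle body for the subsequent feature extraction.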
- step S122 the feature values are extracted, and the color histogram of each frame in the foreground objects corresponding to the two video sequences is calculated to obtain the feature matrix M and the matrix N, respectively.
- In image and video processing, and especially in recognition and matching, the most important questions are how to describe features and how to extract them.
- Feature selection is also an important factor in the quality of an algorithm, so choosing the right features plays a crucial role in video image processing.
- Representative features of image recognition generally have the following: color, gradient, texture, shape, and the like.
- Color histograms are the most commonly used statistical features in image and video processing. Each pixel of an image can be regarded as a point in a 3-dimensional space; common color spaces include RGB, Munsell, CIE L*a*b*, CIE L*u*v*, and HSV. To simplify verification of the feasibility of the dynamic sliding window matching theorem, the embodiment of the present invention selects only the histogram of the RGB color space as the statistical feature.
- The RGB space comprises three coordinate axes R, G, and B, each ranging from 0 to 255; the R, G, and B histograms of each frame are concatenated into one column vector, which forms one column of the feature matrix.
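The feature extraction of step S122 can be sketched as follows: the R, G, and B histograms of one frame's foreground pixels are concatenated into a single column vector, and the columns of all frames form the feature matrix. Quantizing the 0-255 range into a small number of bins per channel is an assumption made here for readability; the patent does not state a bin count.

```python
BINS = 16  # 16 bins per channel -> a 48-dimensional column per frame (assumed)

def frame_feature(pixels):
    """pixels: list of (r, g, b) foreground pixels of one frame."""
    col = [0] * (3 * BINS)
    for r, g, b in pixels:
        col[r * BINS // 256] += 1                 # R histogram
        col[BINS + g * BINS // 256] += 1          # G histogram
        col[2 * BINS + b * BINS // 256] += 1      # B histogram
    return col

def feature_matrix(frames):
    """Each column of the matrix is the feature vector of one frame."""
    return [frame_feature(f) for f in frames]

M = feature_matrix([[(255, 0, 0), (250, 10, 5)],   # a reddish frame
                    [(0, 0, 255)]])                # a bluish frame
# M[0][15] == 2: both pixels of frame 0 fall in the top R bin
```

Here the "matrix" is stored as a list of columns, so M[i] is directly the feature column g_i used by the sliding-window comparison.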
- video sequence 1 has m frames
- video sequence 2 has n frames, with m > n.
- the feature matrix M obtained by the vehicle target video sequence 1 is:
- the feature matrix N of the video sequence 2 to be matched can be obtained:
- N = [g_1 g_2 ... g_{n-1} g_n].
- step S123 matching is performed with the dynamic sliding window matching theorem according to the obtained feature matrix M and matrix N; a set of correlation coefficient values is obtained, the maximum correlation coefficient value is compared with the set threshold to obtain a matching result, and the same vehicle target appearing in different windows is found according to the matching result.
- step S123 further includes four sub-steps S1231-S1234, as shown in FIG. 7.
- FIG. 7 is a detailed flowchart of step S123 shown in FIG. 6 according to an embodiment of the present invention.
- step S1231 the first column of the matrix M is aligned with the first column of the matrix N, and the correlation coefficient corr 1 between the aligned sub-matrix M(1) of M and the matrix N is calculated and stored in an array.
- That is, with the first column of the matrix M aligned to the first column of the matrix N, the correlation coefficient corr 1 between the aligned sub-matrix M(1) and the matrix N is calculated by the correlation coefficient formula and stored in the array.
- step S1232 the matrix N slides one column to the right, so that the second column of the matrix M is aligned with the first column of the matrix N; the correlation coefficient corr 2 between the aligned sub-matrix M(2) of M and the matrix N is calculated and saved in the array.
- That is, after the matrix N slides one column to the right, the second column of the matrix M is aligned with the first column of the matrix N; the correlation coefficient corr 2 between the aligned sub-matrix M(2) and the matrix N is calculated by the correlation coefficient formula and stored in the array.
- step S1233 step S1232 is repeated until the matrix N has slid m-n times.
- Each time the matrix N slides one column to the right, the correlation coefficient corr i between the aligned sub-matrix M(i) of M and the matrix N is calculated and stored in the array.
- step S1234 the largest correlation coefficient value corr max is found in the array, and corr max is compared with a suitable threshold T. If corr max >= T, the matching succeeds; otherwise the matching fails.
- The largest correlation coefficient value corr max is found in the array saved in the above steps, and corr max is compared with a suitable threshold T. If corr max >= T, the matching succeeds, and the frames aligned at the position of corr max (frames max to max+n) are matched to the target vehicle appearing in the other window's sequence. If corr max < T, the matching fails, and the target vehicle does not appear in the sequence to be matched. The value of T is generally taken as 0.9.
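The sliding procedure of steps S1231-S1234 can be sketched as follows. Two details are assumptions, since the patent only says "correlation coefficient formula": Pearson correlation is used here, computed over the flattened feature columns, and the toy matrices M and N are illustrative.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def sliding_match(M, N, T=0.9):
    """M, N: lists of feature columns, len(M) > len(N).
    Slide N across M, collect one correlation per offset, compare the max with T."""
    flat_N = [x for col in N for x in col]
    corrs = []
    for i in range(len(M) - len(N) + 1):
        flat_Mi = [x for col in M[i:i + len(N)] for x in col]  # sub-matrix M(i)
        corrs.append(pearson(flat_Mi, flat_N))
    corr_max = max(corrs)
    return corr_max >= T, corrs.index(corr_max), corr_max

# Toy example: N is a copy of columns 2..3 of M, so the match lands at offset 2.
M = [[0, 0], [1, 0], [8, 9], [7, 6], [0, 1], [0, 0]]
N = [[8, 9], [7, 6]]
matched, offset, c = sliding_match(M, N)
# matched -> True, offset -> 2
```

The returned offset plays the role of "max" in the text above: it marks where in the longer sequence the target vehicle's frames were found, which is what the speed-measurement step needs next.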
- step S13 the vehicle speed of the same vehicle target is calculated.
- the step S13 of calculating the vehicle speed of the same vehicle target specifically includes S131-S133, as shown in FIG. 8.
- FIG. 8 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention.
- step S131 the respective frame numbers in the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through the different windows are acquired.
- After step S1234 determines that the vehicle target video sequence 1 and the video sequence 2 to be matched successfully match the same car, the present embodiment selects one frame in which the car appears in window 1 and one frame in which it appears in window 2 to calculate the speed. Let the frame selected from sequence 1 be frame f1 and the frame selected from sequence 2 be frame f2. As the vehicle target passes through a window, there is always one frame in which the car occupies the largest proportion of the window area. In this embodiment, the ratio of the number of pixels of the foreground image mask in each frame to the total number of pixels in the window is computed, and the frame with the largest ratio is selected as f1 and f2, respectively.
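The frame selection just described reduces to an argmax over per-frame mask ratios; a minimal sketch, where the per-frame foreground pixel counts are illustrative:

```python
def best_frame(mask_counts, window_pixels):
    """mask_counts: per-frame count of foreground-mask pixels inside the window.
    Returns the index of the frame in which the car fills the window most."""
    ratios = [m / window_pixels for m in mask_counts]
    return max(range(len(ratios)), key=ratios.__getitem__)

# Toy sequence: the car enters, fills the window at frame 2, then leaves.
f1 = best_frame([120, 800, 1500, 900, 60], window_pixels=2000)
# f1 -> 2
```

Running the same selection on the other window's sequence yields f2, and the pair (f1, f2) feeds the distance and speed computations of steps S132 and S133.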
- step S132 the actual distance between the frame and the frame is calculated.
- Window 1 and window 2 correspond to regions of the same size in the actual road plane.
- The bottom edges of both window 1 and window 2 are placed, parallel to each other, at the lower end of a dashed lane line.
- Since the center of the vehicle body is roughly in the middle of the window, the middle point is used for the calculation, as shown in FIG. 4.
- The distance L1 traveled by the vehicle is equal to the distance L2 between the bottom edge of window 1 and the bottom edge of window 2; that is, it suffices to know the distance L2 between the lower ends of the two dashed road lines.
- L2 can be obtained from road-marking specifications.
- step S133 the vehicle speed of the target vehicle is calculated based on the actual distance.
- The actual distance traveled by the car between the two windows is thus L2 (in length units), and the frame numbers at which it appears in the two windows are f1 and f2, respectively.
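The speed computation of step S133 is then distance over elapsed time, with the elapsed time recovered from the frame difference. The frame rate, the unit of L2 (meters), and the km/h conversion below are assumptions added for the sketch; the passage itself does not state them.

```python
FPS = 25.0  # frames per second, an assumed camera setting

def vehicle_speed_kmh(L2_m, f1, f2, fps=FPS):
    """Speed = distance / elapsed time, converted from m/s to km/h."""
    dt = abs(f2 - f1) / fps          # seconds between the two selected frames
    return L2_m / dt * 3.6

v = vehicle_speed_kmh(L2_m=15.0, f1=100, f2=145)
# 15 m in 45/25 = 1.8 s  ->  8.333 m/s  ->  30.0 km/h
```

Note that f1 and f2 must be indices on a common timeline (the same recording) for the frame difference to be meaningful, which holds here since both windows come from one camera.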
- the vehicle speed measurement method based on single camera video sequence matching provided by the invention can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
- the embodiment of the present invention further provides a vehicle speed measuring system 10 based on single camera video sequence matching, which mainly includes:
- the pre-processing module 11 is configured to establish a data collection environment and start collecting and reading data
- the target matching module 12 is configured to use a matching algorithm to find the same vehicle target appearing in different windows according to the read data.
- the target speed measuring module 13 is configured to calculate the vehicle speed of the same vehicle target.
- the invention provides a vehicle speed measuring system 10 based on single camera video sequence matching, which can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
- the vehicle speed measuring system 10 based on the single camera video sequence matching mainly includes a preprocessing module 11 , a target matching module 12 , and a target speed measuring module 13 .
- the pre-processing module 11 is configured to establish a data collection environment and start collecting and reading data.
- the pre-processing module 11 specifically includes an environment establishing sub-module 111, a window setting sub-module 112, and a video reading sub-module 113, as shown in FIG. 10.
- FIG. 10 is a schematic diagram showing the internal structure of the preprocessing module 11 shown in FIG. 9 according to an embodiment of the present invention.
- the environment creation sub-module 111 is used to establish an environment for recording video.
- To prepare for data collection, an environment for recording video is established: the camera is fixed on a bridge, looking down the road along the direction of vehicle travel.
- the window setting sub-module 112 is configured to set a window for capturing a vehicle target and start collecting video data.
- One lane is selected in the field of view, and two windows are set a certain distance apart.
- The window is set to a rectangle and adaptively adjusted according to the viewing distance, as shown in FIG. 4; the details are described above in step S112 and are not repeated here.
- the video reading sub-module 113 is configured to read the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
- the target matching module 12 is configured to use the matching algorithm to find the same vehicle target appearing in different windows according to the read data.
- the target matching module 12 specifically includes a foreground target sub-module 121, a feature extraction sub-module 122, and a feature comparison sub-module 123, as shown in FIG. 11.
- FIG. 11 is a schematic diagram showing the internal structure of the target matching module 12 shown in FIG. 9 according to an embodiment of the present invention.
- the foreground target sub-module 121 is configured to preprocess the two video sequences of the vehicle target video sequence and the video sequence to be matched, respectively, to implement segmentation and shadow removal of the foreground target and the background.
- for the specific pre-processing process, refer to the related description of the foregoing step S121, which is not repeated here.
- the feature extraction sub-module 122 is configured to extract feature values, and calculate a color histogram of each frame in the foreground object corresponding to the two video sequences to obtain a feature matrix M and a matrix N, respectively.
- the feature comparison sub-module 123 is configured to perform matching according to the obtained feature matrix M and the matrix N by using a dynamic sliding window matching theorem, obtain a set of correlation coefficient values, and compare the maximum correlation coefficient value with the set threshold to obtain a matching result. And find the same vehicle target that appears in different windows based on the matching result.
- the target speed measuring module 13 is configured to calculate the vehicle speed of the same vehicle target.
- the target speed measurement module 13 specifically includes a frame number acquisition sub-module 131, a first calculation sub-module 132, and a second calculation sub-module 133, as shown in FIG. 12.
- FIG. 12 is a schematic diagram showing the internal structure of the target speed measuring module 13 shown in FIG. 9 according to an embodiment of the present invention.
- the frame number obtaining sub-module 131 is configured to acquire the respective frame numbers in the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through the different windows.
- for the specific frame number acquisition process, refer to the related description of the foregoing step S131, which is not repeated here.
- the first calculation sub-module 132 is configured to calculate the actual distance between the two frames.
- for the calculation process of the actual distance between the two frames, refer to the related description of the foregoing step S132, which is not repeated here.
- the second calculation sub-module 133 is configured to calculate a vehicle speed of the target vehicle according to the actual distance.
- for the calculation process of the vehicle speed of the target vehicle, refer to the related description of the foregoing step S133, which is not repeated here.
- the invention provides a vehicle speed measuring system 10 based on single camera video sequence matching, which can greatly reduce the algorithm complexity in the vehicle target matching technology, thereby improving the calculation efficiency.
- The units included are divided only according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are used only to distinguish them from one another and are not intended to limit the scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Claims (8)
- 1. A vehicle speed measurement method based on single-camera video sequence matching, characterized in that the method comprises: establishing a data collection environment and starting to collect and read data; using a matching algorithm to find, according to the read data, the same vehicle target appearing in different windows; and calculating the vehicle speed of the same vehicle target.
- 2. The vehicle speed measurement method based on single-camera video sequence matching according to claim 1, characterized in that the step of establishing a data collection environment and starting to collect and read data specifically comprises: establishing an environment for recording video; setting windows for capturing a vehicle target and starting to collect video data; and reading the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
- 3. The vehicle speed measurement method based on single-camera video sequence matching according to claim 2, characterized in that the step of using a matching algorithm to find, according to the read data, the same vehicle target appearing in different windows specifically comprises: preprocessing the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve segmentation of the foreground target from the background and shadow removal; extracting feature values and calculating the color histogram of each frame of the foreground targets corresponding to the two video sequences to obtain a feature matrix M and a matrix N, respectively; and, according to the obtained feature matrix M and matrix N, performing matching with the dynamic sliding window matching theorem to obtain a set of correlation coefficient values, comparing the maximum correlation coefficient value with a set threshold to obtain a matching result, and finding the same vehicle target appearing in different windows according to the matching result.
- 4. The vehicle speed measurement method based on single-camera video sequence matching according to claim 3, characterized in that the step of calculating the vehicle speed of the same vehicle target specifically comprises: acquiring the respective frame numbers in the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through the different windows; calculating the actual distance between the frames; and calculating the vehicle speed of the target vehicle according to the actual distance.
- 5. A vehicle speed measurement system based on single-camera video sequence matching, characterized in that the system comprises: a pre-processing module for establishing a data collection environment and starting to collect and read data; a target matching module for using a matching algorithm to find, according to the read data, the same vehicle target appearing in different windows; and a target speed measurement module for calculating the vehicle speed of the same vehicle target.
- 6. The vehicle speed measurement system based on single-camera video sequence matching according to claim 5, characterized in that the pre-processing module comprises: an environment establishing sub-module for establishing an environment for recording video; a window setting sub-module for setting windows for capturing a vehicle target and starting to collect video data; and a video reading sub-module for reading the collected video data to obtain a vehicle target video sequence and a video sequence to be matched, respectively.
- 7. The vehicle speed measurement system based on single-camera video sequence matching according to claim 6, characterized in that the target matching module comprises: a foreground target sub-module for preprocessing the two video sequences, namely the vehicle target video sequence and the video sequence to be matched, to achieve segmentation of the foreground target from the background and shadow removal; a feature extraction sub-module for extracting feature values and calculating the color histogram of each frame of the foreground targets corresponding to the two video sequences to obtain a feature matrix M and a matrix N, respectively; and a feature comparison sub-module for performing matching with the dynamic sliding window matching theorem according to the obtained feature matrix M and matrix N to obtain a set of correlation coefficient values, comparing the maximum correlation coefficient value with a set threshold to obtain a matching result, and finding the same vehicle target appearing in different windows according to the matching result.
- 8. The vehicle speed measurement system based on single-camera video sequence matching according to claim 7, characterized in that the target speed measurement module comprises: a frame number obtaining sub-module for acquiring the respective frame numbers in the vehicle target video sequence and the video sequence to be matched when the target vehicle passes through the different windows; a first calculation sub-module for calculating the actual distance between the frames; and a second calculation sub-module for calculating the vehicle speed of the target vehicle according to the actual distance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/077292 WO2017161544A1 (zh) | 2016-03-25 | 2016-03-25 | Vehicle speed measurement method and system based on single-camera video sequence matching |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017161544A1 true WO2017161544A1 (zh) | 2017-09-28 |
Family
ID=59900963
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/077292 WO2017161544A1 (zh) | Vehicle speed measurement method and system based on single-camera video sequence matching | 2016-03-25 | 2016-03-25 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017161544A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110766611A (zh) * | 2019-10-31 | 2020-02-07 | 北京沃东天骏信息技术有限公司 | Image processing method and apparatus, storage medium, and electronic device |
CN111862624A (zh) * | 2020-07-29 | 2020-10-30 | 浙江大华技术股份有限公司 | Vehicle matching method and apparatus, storage medium, and electronic apparatus |
CN114140461A (zh) * | 2021-12-09 | 2022-03-04 | 成都智元汇信息技术股份有限公司 | Image cropping method based on an edge image-recognition box, electronic device, and medium |
CN114241749A (zh) * | 2021-11-26 | 2022-03-25 | 深圳市戴升智能科技有限公司 | Video beacon data association method and system based on time series |
CN114241749B (zh) * | 2021-11-26 | 2022-12-13 | 深圳市戴升智能科技有限公司 | Video beacon data association method and system based on time series |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6917692B1 (en) * | 1999-05-25 | 2005-07-12 | Thomson Licensing S.A. | Kalman tracking of color objects |
CN101604448A (zh) * | 2009-03-16 | 2009-12-16 | 北京中星微电子有限公司 | Method and system for measuring the speed of a moving target |
CN102136196A (zh) * | 2011-03-10 | 2011-07-27 | 北京大学深圳研究生院 | Vehicle speed measurement method based on image features |
CN103473791A (zh) * | 2013-09-10 | 2013-12-25 | 惠州学院 | Automatic recognition method for abnormal-speed events in surveillance video |
CN104504913A (zh) * | 2014-12-25 | 2015-04-08 | 珠海高凌环境科技有限公司 | Video traffic flow detection method and device |
- 2016-03-25: PCT/CN2016/077292 filed as WO2017161544A1 (active, Application Filing)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12002225B2 (en) | System and method for transforming video data into directional object count | |
Zhou et al. | LIDAR and vision-based real-time traffic sign detection and recognition algorithm for intelligent vehicle | |
Kühnl et al. | Monocular road segmentation using slow feature analysis | |
Balali et al. | Multi-class US traffic signs 3D recognition and localization via image-based point cloud model using color candidate extraction and texture-based recognition | |
CN104978567B (zh) | Vehicle detection method based on scene classification | |
CN112825192B (zh) | Object recognition system and method based on machine learning | |
CN111915583B (zh) | Vehicle and pedestrian detection method based on a vehicle-mounted infrared thermal imager in complex scenes | |
CN110866430A (zh) | License plate recognition method and device | |
WO2017161544A1 (zh) | Vehicle speed measurement method and system based on single-camera video sequence matching | |
Shi et al. | A vision system for traffic sign detection and recognition | |
CN104851089A (zh) | Static-scene foreground segmentation method and device based on a three-dimensional light field | |
Rabiu | Vehicle detection and classification for cluttered urban intersection | |
CN108416798A (zh) | Vehicle distance estimation method based on optical flow | |
Poggenhans et al. | A universal approach to detect and classify road surface markings | |
Li et al. | Automatic passenger counting system for bus based on RGB-D video | |
CN110675442A (zh) | Local stereo matching method and system incorporating target recognition technology | |
CN105844666B (zh) | Vehicle speed measurement method and system based on single-camera video sequence matching | |
EP4287137A1 (en) | Method, device, equipment, storage media and system for detecting drivable space of road | |
CN109191473B (zh) | Vehicle adhesion segmentation method based on symmetry analysis | |
Han et al. | Accurate and robust vanishing point detection method in unstructured road scenes | |
Liu et al. | Obstacle recognition for ADAS using stereovision and snake models | |
Schomerus et al. | Camera-based lane border detection in arbitrarily structured environments | |
Chen et al. | Amobile system combining laser scanners and cameras for urban spatial objects extraction | |
Wu et al. | Camera-based clear path detection | |
Che et al. | Traffic light recognition for real scenes based on image processing and deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16894899 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 03/12/2018) |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 20/08/2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16894899 Country of ref document: EP Kind code of ref document: A1 |