CN110033479B - Traffic flow parameter real-time detection method based on traffic monitoring video - Google Patents


Info

Publication number
CN110033479B
CN110033479B · CN110033479A · CN201910299470.5A
Authority
CN
China
Prior art keywords
vehicle
time
detection
traffic
monitoring video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910299470.5A
Other languages
Chinese (zh)
Other versions
CN110033479A (en)
Inventor
王成中
朱刚
张东生
张华东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiuzhou Video Technology Co ltd
Original Assignee
Sichuan Jiuzhou Video Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Jiuzhou Video Technology Co ltd filed Critical Sichuan Jiuzhou Video Technology Co ltd
Priority to CN201910299470.5A
Publication of CN110033479A
Application granted
Publication of CN110033479B


Classifications

    • G06T7/13 Image analysis; Segmentation; Edge detection
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10016 Image acquisition modality: Video; Image sequence
    • G06T2207/20081 Special algorithmic details: Training; Learning
    • G06T2207/30232 Subject of image: Surveillance
    • Y02T10/40 Climate change mitigation in road transport: Engine management systems

Abstract

The invention discloses a real-time traffic flow parameter detection method based on traffic monitoring video, which comprises the following steps. Video pre-calibration: the type and position of each vehicle are calibrated. Target detection: a deep learning model for SSD-based vehicle target detection is trained with the pre-calibrated data. Coordinate mapping: the mapping between the monitoring-video image coordinate system and the world coordinate system is solved. Vehicle target tracking: vehicle travel is tracked in real time with a kernel correlation filter tracking algorithm combined with the vehicle target detection model. Index acquisition and calculation: a calibration-area timer is set to acquire time indexes; the vehicle detection results, the tracking results of the vehicle tracking algorithm and the timer readings are combined, and the real-time traffic flow parameters are obtained through coordinate mapping conversion. The invention solves the problem of obtaining traffic flow parameters directly from traffic monitoring video, and can complete accurate real-time detection of multiple traffic flow parameters at one time.

Description

Traffic flow parameter real-time detection method based on traffic monitoring video
Technical Field
The invention relates to the technical field of computer vision, in particular to a traffic flow parameter real-time detection method based on traffic monitoring video.
Background
Traffic parameters provide data support for intelligent transportation systems, so traffic video analysis is a very active research field. Direct acquisition of traffic flow, density, speed and other information from video data is important for the development of intelligent transportation systems, yet existing traffic parameter detection methods struggle to detect flow, density and speed simultaneously in real time. Traditional methods are mainly based on background modelling for traffic flow parameter video detection and are prone to misjudgement under interference from external conditions such as vehicle occlusion and lighting changes. Machine learning methods, by contrast, shift from analysing inter-frame pixel dynamics to recognising and extracting vehicle targets from a sample space, and are therefore more resistant to interference. A series of deep learning target detection base models capable of high-precision real-time detection have been proposed, providing a basis for traffic parameter detection technology to develop towards intelligent, networked and autonomous learning. Existing detection approaches include:
Magnetic-frequency detection: the most widely used form is electromagnetic loop detection, consisting of a loop coil sensor buried under the road surface, a signal detection and processing unit, and a feeder line. The detection principle is that the signal detection unit, the loop coil and the feeder form a tuned circuit; a passing vehicle changes the circuit's resonant frequency, from which parameters such as traffic volume, occupancy and approximate vehicle speed can be detected. However, this method requires additional equipment installed under the road surface, and the coil's effectiveness is greatly affected by road surface quality.
Wave-frequency detection: vehicles are sensed by emitting electromagnetic waves towards them, using microwaves, ultrasonic waves, infrared waves and the like. The ultrasonic detector, one type applied on expressways, consists of a probe and a controller mounted directly above or obliquely above the road; whether a vehicle passes is judged from the difference between the probe's emitted and returned waves. Overhead mounting has many advantages over in-road installation, but detection is easily affected by weather, pedestrians and traffic flow, and detection accuracy is poor.
Video-based detection: target-detection-based traffic flow methods include optical flow, inter-frame differencing, background differencing and the like; video-based speed detection includes sequence-image methods and motion-vector clustering; video-based density detection mainly combines an online support vector machine classifier with background modelling, or replaces the recorded vehicle count with the ratio of vehicle pixels to the whole image. These traditional video-based methods rely mainly on background modelling to detect traffic flow parameters and are prone to misjudgement under environmental interference, so their accuracy is limited; they are also computationally heavy, so their real-time performance still needs improvement.
Disclosure of Invention
In order to solve the problems in the prior art, the invention aims to provide a traffic flow parameter real-time detection method based on traffic monitoring video.
In order to achieve the above purpose, the invention adopts the following technical scheme: a traffic flow parameter real-time detection method based on traffic monitoring video comprises the following steps:
s10, video pre-calibration: calibrating the type and the position of a vehicle in the traffic monitoring video;
s20, target detection: training a deep learning model based on SSD (solid state drive) vehicle target detection by using pre-calibrated data through transfer learning and offline training, wherein the deep learning model is used for identifying various vehicle types in a traffic monitoring video and positions of the vehicle types in a traffic monitoring video picture;
s30, coordinate mapping: solving the mapping relation between the monitoring video image coordinate system and the world coordinate system by adopting an automatic parameter calibration method of the video camera based on vanishing point detection;
s40, vehicle target tracking: the vehicle running is tracked in real time by adopting a kernel correlation filter tracking algorithm and combining a deep learning model of vehicle target detection;
s50, index acquisition and calculation: setting a calibration area timer, acquiring a corresponding time index, combining a vehicle target detection result, a tracking result of a vehicle tracking algorithm and a timing result of the corresponding timer, and obtaining a real-time detection result of traffic flow parameters through coordinate mapping conversion of the monitoring video image coordinates and world coordinates.
As a preferred embodiment, the step S10 is specifically as follows:
Traffic monitoring videos covering multiple angles and multiple time periods are collected over a certain time, and one picture is saved every fixed number of frames to obtain a picture set; the type and position coordinates of each vehicle in the videos are annotated with the image labelling tool labelImg, and the picture set is divided into a training set, a validation set and a test set with an automatic splitting script.
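The automatic splitting script itself is not given in the source; the following is a minimal Python sketch of such a split. The 70/15/15 ratios, the fixed seed and the function name are assumptions introduced for illustration.

```python
# Hypothetical sketch of the "automatic dividing script": shuffles the picture
# set deterministically, then cuts it into training / validation / test subsets.
import random

def split_dataset(image_names, train=0.7, val=0.15, seed=0):
    names = list(image_names)
    random.Random(seed).shuffle(names)   # deterministic shuffle for repeatability
    n = len(names)
    n_train = int(n * train)
    n_val = int(n * val)
    return (names[:n_train],
            names[n_train:n_train + n_val],
            names[n_train + n_val:])

train_set, val_set, test_set = split_dataset(
    [f"frame_{i:05d}.jpg" for i in range(100)])
```

In practice the split would be run once over the saved picture set, with the three name lists written out for the training pipeline.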
As another preferred embodiment, the step S20 is specifically as follows:
A pre-trained SSD base model built on the VGG backbone is downloaded and the detection categories are customised to vehicle types; the base model is transfer-trained with the training set, its hyperparameters are tuned with the validation set, and its performance is observed on the test set until the requirements are met, completing offline learning of the model.
As another preferred embodiment, the step S30 is specifically as follows:
Let (u0, v0) be the vanishing point position of the road boundary; the monitoring-video image coordinate system of the vehicle is mapped into the world coordinate system:
In the above formula, x and z are the coordinates of any point in the road plane along the lateral direction and the forward direction of the road plane, and u and v are the coordinates of the same point in the two-dimensional image; θ and d are, respectively, the tilt angle between the monitoring camera and the road surface and the distance from the monitoring camera to the intersection of its optical axis with the road surface, and they determine the mapping between video coordinates and world coordinates; C is a translation constant and can be ignored. The tilt angle θ between the monitoring camera and the road surface is calibrated through automatic detection of the lane-line vanishing point and the lamp-post vanishing point, and the parameter d is calibrated by distance measurement using a standard road lane line as the reference object.
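The mapping formula itself is reproduced in the original only as a figure, so the sketch below shows one common single-vanishing-point ground-plane model consistent with the quantities named in the text, not the patent's exact equation. Assumptions: a pinhole camera viewing a flat road whose projected direction aligns with the image v-axis, so longitudinal distance varies as 1/(v − v0); the scale k absorbs θ, d and the focal length, and is fixed from one known distance such as a standard lane-line segment. All function names are hypothetical.

```python
def image_to_world(u, v, u0, v0, k, lateral_scale):
    """Map an image point (u, v) to approximate road-plane coordinates (x, z).

    (u0, v0) is the detected vanishing point; for a plane seen by a pinhole
    camera, distance along the road grows as 1 / (v - v0) toward the
    vanishing row.  k and lateral_scale are calibration constants.
    """
    if v <= v0:
        raise ValueError("point lies at or above the vanishing point")
    z = k / (v - v0)                    # distance along the road direction
    x = lateral_scale * z * (u - u0)    # lateral offset, scaled by depth
    return x, z

def calibrate_k(d, v_near, v_far, v0):
    """Fix k from a reference mark of known length d metres whose near and far
    ends sit at image rows v_near and v_far (v_near > v_far > v0)."""
    # d = k/(v_far - v0) - k/(v_near - v0)  =>  solve for k
    return d / (1.0 / (v_far - v0) - 1.0 / (v_near - v0))
```

Under this model, a 10 m lane-line mark spanning two known image rows is enough to fix k, after which any detection-box corner can be pushed into road-plane coordinates.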
As another preferred embodiment, the step S40 is specifically as follows:
The kernel correlation filter tracking algorithm uses the HOG features of the picture: a target detector is trained during tracking, the detector checks whether the predicted position in the next frame is the target, and the training set is updated with the new detection result so as to update the detector. Using the KCF tracker built into OpenCV, one KCF tracker is initialised for each detected vehicle object when it is instantiated; the tracker receives a frame and the target's coordinate position, and when the latest frame is loaded it computes the target's position in the new frame. From the vehicle tracking results, it is judged which vehicles in the current frame already appeared in the previous frame and which newly entered, completing real-time counting of the vehicles in the traffic monitoring picture.
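The source does not state how per-frame detections are reconciled with the existing trackers to decide which vehicles already appeared in the previous frame and which newly entered. A plausible greedy IoU association is sketched below; the 0.3 threshold and all function names are assumptions, not the patent's stated rule.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (u_min, v_min, u_max, v_max)."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracked_boxes, detections, thr=0.3):
    """Greedy match: each detection joins the best unmatched tracked box with
    IoU >= thr; unmatched detections are treated as newly entered vehicles,
    for which a fresh tracker would be initialised."""
    matched, new, used = [], [], set()
    for d in detections:
        best, best_iou = None, thr
        for i, t in enumerate(tracked_boxes):
            s = iou(t, d)
            if i not in used and s >= best_iou:
                best, best_iou = i, s
        if best is None:
            new.append(d)
        else:
            used.add(best)
            matched.append((best, d))
    return matched, new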
As another preferred embodiment, in the step S50, the primary indexes of road length, vehicle displacement and vehicle passing count are first obtained through the mapping between monitoring-video image coordinates and world coordinates; from these, vehicle density, space occupancy and traffic flow are calculated over the whole traffic monitoring picture, average speed is calculated per detected vehicle, and headway and time occupancy are calculated over the calibration area.
As another preferred embodiment, the step S50 is specifically as follows:
For each detected vehicle, its position coordinates on the image, ((u_min, v_min), (u_max, v_max)), are recorded in real time from the vehicle target detection result and mapped to world-coordinate-system coordinates ((x_min, z_min), (x_max, z_max)), where the x-axis is the lateral direction of the road surface and the z-axis is the vehicle's forward direction. At the same time a timer is set, and the time t from a vehicle entering the picture to leaving it is recorded in real time. For each of the m lanes in the picture, combining the vehicle detection and tracking results, the total number of vehicles in that lane within one picture, n_k (k = 1, 2, ..., m), is recorded in real time; the total number of vehicles N in the picture is computed, and at regular intervals the total number of vehicles N_p passing the front edge of the picture is computed. From the coordinate mapping result, the world coordinates z_s and z_e of the start and end of the road in the picture along the forward direction are obtained, giving the total length l = z_e − z_s. Per-lane density, space occupancy, traffic flow and average speed are then computed; in particular, traffic flow = 360 × N_p (vehicles/h) and average speed = (1/2 (z_i,max + z_i,min) − z_s) / t, where z_i,max and z_i,min are the z-coordinate values of the upper-right and lower-left corners of the i-th vehicle in the world coordinate system, z_i,max − z_i,min is the length of the i-th vehicle along its forward direction, and 1/2 (z_i,max + z_i,min) is the position coordinate of the vehicle's centre point along the forward direction.
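The flow and speed formulas stated in the text can be sketched directly. The density and space-occupancy formulas appear only as figures in the original, so the forms used here (vehicles per unit lane length; summed vehicle lengths over total lane length) are assumptions, as are all function names.

```python
def traffic_flow(n_p, interval_s=10.0):
    # vehicles counted passing the picture's front edge per interval,
    # scaled to vehicles per hour (with a 10 s interval this is 360 * N_p)
    return n_p * 3600.0 / interval_s

def average_speed(z_min, z_max, z_s, t):
    # source formula: centre-point travel from the picture start line z_s,
    # divided by the vehicle's on-screen time t
    return (0.5 * (z_max + z_min) - z_s) / t

def lane_density(n_k, l):
    # assumed form: vehicles currently in the lane per unit road length l
    return n_k / l

def space_occupancy(vehicle_lengths, l, m):
    # assumed form: summed vehicle lengths over total length of the m lanes
    return sum(vehicle_lengths) / (l * m)
```

For example, three vehicles crossing the edge in a 10 s window would correspond to a flow of 1080 vehicles/h, matching the 360 × N_p scaling in the text.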
As another preferred embodiment, the step S50 further includes:
Based on a calibration line in the traffic monitoring picture, calibration-area edge detection and a dedicated timer are set up to detect the number of vehicles crossing the calibration line and their passing times. For every 2 vehicles crossing the detection-area edge, the interval between the two vehicle heads passing is computed, i.e. the headway; for every M vehicles crossing the detection-area edge, the interval ΔT_i from each vehicle's head to its tail is computed and the total time T_s is recorded.
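A minimal sketch of the headway and time-occupancy bookkeeping at the calibration line. The ratio definition of time occupancy and the function names are assumptions, since the source gives its formula only as a figure.

```python
def headways(arrival_times):
    """Time gaps between consecutive vehicle heads crossing the calibration line."""
    return [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]

def time_occupancy(presence_intervals, total_time):
    """Fraction of total_time during which the line is occupied.

    presence_intervals: per-vehicle (head-arrival, tail-departure) times,
    i.e. the intervals ΔT_i of the text; total_time corresponds to T_s.
    """
    occupied = sum(t_out - t_in for t_in, t_out in presence_intervals)
    return occupied / total_time
```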
As another preferred embodiment, the method further comprises the steps of:
s60, the traffic flow parameter calculation result and the video real-time detection result are butted to an intelligent traffic monitoring interface, the traffic flow parameter detection result is displayed in real time, and real-time adjustment of traffic measures is assisted.
The beneficial effects of the invention are as follows. A vehicle detection deep learning model is trained on pre-calibrated multi-angle traffic video data using the SSD target detection method based on multi-scale feature maps, detecting vehicle types and their position coordinates in the video in real time. The conversion between video coordinates and real-world coordinates is computed by a camera self-calibration method based on vanishing point detection, so that road length, vehicle displacement and the like can be detected. Vehicles entering the picture are tracked by the kernel correlation filter tracking algorithm combined with the vehicle target detection algorithm. The tracking results are combined with a timer on a preset or manually calibrated area to time vehicles entering the picture and calculate the time occupancy; meanwhile, the time difference between one vehicle entering and the next vehicle entering is recorded to calculate the headway. The invention thus detects multiple traffic flow parameters accurately in real time at one time from traffic monitoring video, needs no additional magnetic-frequency or wave-frequency detection instruments to install and maintain, and has enhanced robustness to environmental change. In detection accuracy, traffic flow reaches >95%, time occupancy >97%, density >90%, average speed >90% and space occupancy >95%.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Examples:
as shown in fig. 1, in the embodiment, hardware may include a video acquisition module, an intelligent traffic parameter real-time detection module, a traffic parameter calculation module, and an intelligent traffic monitoring module, where the intelligent traffic parameter real-time detection module includes three main modules including a target detection model, a target tracking algorithm, and a coordinate mapping method.
The basic workflow of the real-time traffic flow parameter detection method is as follows. First, real-time traffic data acquisition is completed from the traffic monitoring video; compared with traditional detection methods based on magnetic frequency and wave frequency, this removes the cost of installing and maintaining additional equipment. The traffic video data are input to the intelligent traffic flow parameter real-time detection module, where the offline-trained target detection model directly identifies vehicle targets in the monitoring video; the results of the vehicle tracking algorithm and of the coordinate mapping method within the calibration area are then combined to calculate each traffic flow parameter in real time. Compared with other current video-based methods, detection based on a deep learning model reduces the influence of environmental change and is more accurate, and tracking based on the kernel correlation filter achieves better real-time speed than traditional optical-flow and similar methods. The traffic flow parameter calculation module then computes the various traffic flow parameters from the detection and tracking results, the auxiliary calibration area and the timer readings. Finally, the calculated traffic data are interfaced to the intelligent traffic monitoring module and displayed in real time on the monitoring interface, so that traffic measures can be scheduled promptly according to real-time traffic flow conditions.
The method specifically comprises the following steps:
step 1, video precalibration: calibrating the type and the position of a vehicle in the traffic monitoring video; and collecting traffic videos comprising a plurality of angles and a plurality of time periods for a plurality of hours, and storing the videos as a picture every 30 frames to obtain a picture set. The type of vehicle in the video was calibrated using a labelmg tool, the coordinates of the lower left and upper right corners of the vehicle target frame ((x) min ,y min ),(x max ,y max ) The xml file in the voc format is stored, and the data of the picture set is divided into a training set, a verification set and a test set by using an automatic division script.
Step 2, target detection: a deep learning model for SSD (Single Shot MultiBox Detector) based vehicle target detection is trained with the pre-calibrated data through transfer learning and offline training; the model identifies the vehicle types in the traffic monitoring video and their positions in the picture. Specifically, for transfer training of the SSD-based vehicle detection model: first, the VOC-format files are converted to TFRecord format, whose binary files read faster; then a VGG-based pre-trained SSD model is downloaded and the detection categories are customised to vehicle types such as car, bus and truck; finally, on the basis of the pre-trained model, the base model is transfer-trained with the training set data from step 1, the hyperparameters are tuned with the validation set data, and the performance is observed on the test set data until the requirements are met, completing offline learning of the model. The trained model then takes traffic video as input and detects the vehicle types and the position coordinates of their detection frames in real time.
Step 3, coordinate mapping: the mapping between the monitoring-video image coordinate system and the world coordinate system is solved by an automatic camera parameter calibration method based on vanishing point detection. For a typical road monitoring camera mounted above the road, the optical axis and the road's forward direction are approximately coplanar, with a certain pitch angle between the axis and the road, so a self-calibration method based on vanishing point detection is adopted. Let (u0, v0) be the vanishing point position of the road-surface boundary; the vehicle's video coordinate system is mapped to the world coordinate system as follows, where x and z are the coordinates of any point in the road plane along the lateral and forward directions of the road surface, and u and v are the coordinates of the same point in the two-dimensional image. θ and d are, respectively, the pitch angle between the camera and the road surface and the distance from the camera to the intersection of its optical axis with the road surface, and they determine the mapping between video coordinates and world coordinates. C is a translation constant and is negligible. The pitch angle θ is calibrated through automatic detection of the lane-line vanishing point and the lamp-post vanishing point, and the parameter d is calibrated by distance measurement using a standard highway lane line as the reference object:
step 4, tracking a vehicle target: the vehicle running is tracked in real time by adopting a kernel correlation filter tracking algorithm and combining a deep learning model of vehicle target detection; the vehicle travel is tracked using a Kernel Correlation Filter (KCF) based tracking algorithm. The KCF algorithm uses the HOG features of the picture to train a target detector during the tracking process, uses the target detector to detect whether the predicted position of the next frame is the target, and then uses the new detection result to update the training set to update the target detector. Using the self-contained KCF tracker in OpenCV, one KCF tracker is initialized for each instantiation of the vehicle object detected in step 2. The KCF tracker receives the coordinate positions of one frame and the target, and when the latest frame is loaded, the KCF tracker calculates the position of the target in the new frame. And judging which vehicles appear in the previous frame and which vehicles enter newly according to the vehicle tracking result, and accurately finishing the real-time quantity statistics of the vehicles in the video picture.
Step 5, index acquisition and calculation: a calibration-area timer is set to acquire the corresponding time indexes; the vehicle target detection results, the tracking results of the vehicle tracking algorithm and the timer readings are combined, and the real-time traffic flow parameters are obtained through the coordinate mapping between monitoring-video image coordinates and world coordinates. Synthesising the detection results of steps 2 to 4, the traffic flow indexes are calculated: vehicle density, space occupancy and traffic flow over the whole picture; average speed per detected vehicle; and headway and time occupancy over the calibration area. The specific calculation of each index is as follows:
For each detected vehicle, its position coordinates on the image, ((u_min, v_min), (u_max, v_max)), are recorded in real time from the vehicle target detection result and mapped to world-coordinate-system coordinates ((x_min, z_min), (x_max, z_max)), where the x-axis is the lateral direction of the road and the z-axis is the vehicle's forward direction. At the same time a timer is set, and the time t from a vehicle entering the picture to leaving it is recorded in real time. For each of the m lanes in the picture, combining the vehicle detection and tracking results, the total number of vehicles in that lane within the picture, n_k (k = 1, 2, ..., m), is recorded in real time; the total number of vehicles N in the picture is computed, and every 10 s the total number of vehicles N_p passing the front edge of the picture is computed. From the coordinate mapping result, the world coordinates z_s and z_e of the start and end of the road in the picture along the forward direction are obtained, giving the total length l = z_e − z_s. Per-lane density, space occupancy, traffic flow and the speed of each vehicle are then computed; in particular, traffic flow = 360 × N_p (vehicles/h) and average speed = (1/2 (z_i,max + z_i,min) − z_s) / t, where z_i,max and z_i,min are the z-coordinate values of the upper-right and lower-left corners of the i-th vehicle in the world coordinate system, z_i,max − z_i,min is the length of the i-th vehicle along its forward direction, and 1/2 (z_i,max + z_i,min) is the position coordinate of the vehicle's centre point along the forward direction.
In addition, based on the calibration line in the picture, calibration-area edge detection and a dedicated timer are set up to detect the number of vehicles crossing the calibration line and their passing times. For every 2 vehicles crossing the detection-area edge, the interval between the two vehicles passing is computed, i.e. the headway. For every 5 vehicles crossing the detection-area edge, the interval ΔT_i from each vehicle's head to its tail is computed and the total time T_s is recorded as follows:
Step 6, interfacing with the intelligent traffic monitoring interface: the traffic flow parameter calculation results and the real-time video detection results are interfaced to the Qt intelligent traffic monitoring interface, displaying the traffic flow parameter detection results in real time and assisting real-time adjustment of traffic measures.
The foregoing examples merely illustrate specific embodiments of the invention, which are described in greater detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention.

Claims (5)

1. The traffic flow parameter real-time detection method based on the traffic monitoring video is characterized by comprising the following steps of:
s10, video pre-calibration: calibrating the type and the position of a vehicle in the traffic monitoring video;
s20, target detection: training a deep learning model based on SSD (solid state drive) vehicle target detection by using pre-calibrated data through transfer learning and offline training, wherein the deep learning model is used for identifying various vehicle types in a traffic monitoring video and positions of the vehicle types in a traffic monitoring video picture;
s30, coordinate mapping: solving the mapping relation between the monitoring video image coordinate system and the world coordinate system by adopting an automatic parameter calibration method of the video camera based on vanishing point detection;
the step S30 specifically includes the following steps:
is provided withAs the vanishing point position of the road boundary, the monitoring video image coordinate system of the vehicle is mapped into a world coordinate system:
in the above formula, x and z are the coordinates of any point on the road plane along the lateral and forward directions of the road, and u and v are the coordinates of that point in the two-dimensional image; θ is the included angle between the monitoring camera and the road surface, and d is the distance from the monitoring camera to the intersection of its optical axis with the road surface; together these determine the mapping between video coordinates and world coordinates; C is a translation constant; the angle θ is calibrated by automatically detecting the lane-line vanishing point and the lamp-post vanishing point, and the parameter d is calibrated by using the standard road lane marking as a reference object for distance measurement;
s40, vehicle target tracking: vehicle movement is tracked in real time with a kernel correlation filter tracking algorithm combined with the deep learning model for vehicle target detection;
s50, index acquisition and calculation: setting a calibration area timer, acquiring a corresponding time index, combining a vehicle target detection result, a tracking result of a vehicle tracking algorithm and a timing result of the corresponding timer, and obtaining a real-time detection result of traffic flow parameters through coordinate mapping conversion of a monitoring video image coordinate and a world coordinate;
the step S50 specifically includes the following steps:
for each detected vehicle, according to the vehicle target detection result, the position coordinates ((u_min, v_min), (u_max, v_max)) of the vehicle in the image are recorded in real time and mapped to the world coordinates ((x_min, z_min), (x_max, z_max)), where the x-axis is the lateral direction of the road and the z-axis is the vehicle forward direction; at the same time a timer is set and the time t from the vehicle entering the picture to exiting the picture is recorded in real time; combining the detection and tracking results of the vehicles, for each of the m lanes in the picture the total number of vehicles n_k (k = 1, 2, ..., m) in that lane is recorded in real time, the total number of vehicles N in the picture is calculated, and the total number of vehicles N_p passing the front edge of the picture is counted once at regular intervals; according to the coordinate mapping result, the world coordinates z_s and z_e of the start and end points of the pictured road in the forward direction are obtained, giving the total length l = z_e - z_s; the per-lane density, space occupancy, vehicle flow and average vehicle speed are then calculated, with vehicle flow = 360 × N_p (vehicles/h) and average vehicle speed = [1/2 (z_i,max + z_i,min) - z_s] / t, where z_i,max and z_i,min are the z coordinates of the upper-right and lower-left corners of the i-th vehicle in the world coordinate system, z_i,max - z_i,min is the length of the i-th vehicle along its forward direction, and 1/2 (z_i,max + z_i,min) is the forward-direction position coordinate of the vehicle's centre point;
in the step S50, the primary indexes of road length, vehicle displacement and number of passing vehicles are obtained through the coordinate mapping between monitoring video image coordinates and world coordinates; from these primary indexes, vehicle density, space occupancy and vehicle flow are calculated over the whole traffic monitoring video picture, average speed is calculated for each detected vehicle, and headway and time occupancy are calculated over the calibration area;
the step S50 further includes:
based on a calibration line in the traffic monitoring video picture, a calibration-area edge detector and a timer are set separately to detect the number of vehicles crossing the calibration line and the vehicle passing times; each time 2 vehicles cross the detection-area edge, the interval between their front edges crossing the line is computed, which is the headway of the two vehicles; each time M vehicles cross the detection-area edge, the head-to-tail passing interval ΔT_i of each vehicle is computed, the total elapsed time T_s is recorded, and the time occupancy ΣΔT_i / T_s is calculated once.
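The S50 quantities above can be sketched as follows; the per-lane density expression and the 10-second counting interval implied by flow = 360 × N_p are our assumptions (the claim says only "at regular intervals"), and the data layout is hypothetical:

```python
# Sketch (assumed data layout): traffic-flow parameters from the quantities
# recorded in step S50.  Boxes are world-coordinate pairs
# ((x_min, z_min), (x_max, z_max)) for the vehicles currently in the picture.

def flow_params(boxes, lane_counts, z_s, z_e, n_passed):
    """lane_counts: per-lane counts n_k; n_passed: N_p per counting interval."""
    l = z_e - z_s                           # road length l in view
    N = sum(lane_counts)                    # total vehicles N in the picture
    density = N / (len(lane_counts) * l)    # per-lane density (assumed form)
    occupancy = sum(zmax - zmin            # space occupancy: vehicle lengths / l
                    for (_, zmin), (_, zmax) in boxes) / l
    flow = 360 * n_passed                   # vehicles/h, assuming a 10 s interval
    return density, occupancy, flow

def average_speed(z_i_min, z_i_max, z_s, t):
    """Displacement of the vehicle's centre point from z_s over time t."""
    return (0.5 * (z_i_max + z_i_min) - z_s) / t
```

For instance, two vehicles of world lengths 5 and 4 on a 100-unit stretch give a space occupancy of 0.09, and 2 vehicles per 10-second interval give a flow of 720 vehicles/h.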
2. The method for detecting traffic flow parameters in real time based on traffic monitoring video according to claim 1, wherein the step S10 is specifically as follows:
Traffic monitoring videos covering multiple angles and multiple time periods are collected over a certain duration; a frame is saved as an image every fixed number of frames to obtain a picture set; the types and position coordinates of the vehicles in the videos are calibrated with the image annotation tool labelImg; and the picture set is divided into a training set, a validation set and a test set with an automatic dividing script.
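The "automatic dividing script" of claim 2 is not specified; a minimal sketch of such a split, with assumed 70/15/15 ratios and a fixed shuffle seed for reproducibility, might look like:

```python
import random

# Hypothetical dividing script: shuffle the labelled images and split them
# into training, validation and test subsets by ratio.

def split_dataset(items, ratios=(0.7, 0.15, 0.15), seed=42):
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)      # deterministic shuffle
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],                # training set
            items[n_train:n_train + n_val], # validation set
            items[n_train + n_val:])        # test set
```

On 100 labelled images this yields 70/15/15 images with no overlap and no loss.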
3. The method for detecting traffic flow parameters in real time based on traffic monitoring video according to claim 2, wherein the step S20 is specifically as follows:
A pre-trained SSD base model built on the VGG backbone is downloaded, the detection categories are customized to the vehicle types, the base model is migration-trained with the training set, its hyperparameters are adjusted with the validation set, and its performance is observed on the test set until the requirements are met, completing offline learning of the model.
4. The method for detecting traffic flow parameters in real time based on traffic monitoring video according to claim 3, wherein the step S40 specifically comprises the following steps:
The kernel correlation filter tracking algorithm uses the HOG features of the pictures; a target detector is trained during tracking and used to check whether the predicted position in the next frame is the target, and the training set is updated with the new detection result so as to update the target detector. Using the KCF tracker in OpenCV, a KCF tracker is initialized when each detected vehicle object is instantiated; the tracker receives a frame and the coordinate position of the target, and when the latest frame is loaded it computes the target's position in the new frame. According to the vehicle tracking results, it is judged whether a vehicle in the current frame appeared in the previous frame or has newly entered the picture, completing the real-time counting of vehicles in the traffic monitoring video picture.
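The patent relies on OpenCV's KCF tracker for per-vehicle position prediction; the counting decision (did this detection appear in the previous frame, or is it a newly entered vehicle?) can be sketched with a simple IoU association, which is our illustrative assumption rather than the claimed mechanism:

```python
# Sketch (assumed association rule): a detection that overlaps no tracked
# box above a threshold is counted as a newly entered vehicle.

def iou(a, b):
    """Intersection-over-union of two (u_min, v_min, u_max, v_max) boxes."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def count_new_vehicles(tracks, detections, thresh=0.3):
    """Detections matching no existing track are counted as new vehicles."""
    return sum(1 for d in detections
               if all(iou(d, t) < thresh for t in tracks))
```

A detection overlapping a tracked box with IoU 0.64 is treated as the same vehicle, while a disjoint detection increments the count.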
5. The traffic flow parameter real-time detection method based on traffic monitoring video according to any one of claims 1 to 4, further comprising the steps of:
s60, the traffic flow parameter calculation results and the real-time video detection results are passed to an intelligent traffic monitoring interface, which displays the traffic flow parameter detection results in real time and assists real-time adjustment of traffic measures.
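As an aside on the coordinate mapping of step S30, whose closed-form vanishing-point expression is not reproduced in this translation: a common practical alternative is a planar homography fitted from reference points of known world position (for example, the corners of a standard lane marking of known length). The following sketch is an assumption, not the patent's formula:

```python
import numpy as np

# Illustrative alternative: map image points (u, v) to road-plane world
# coordinates (x, z) with a homography estimated from 4+ correspondences
# via the direct linear transform (DLT).

def fit_homography(img_pts, world_pts):
    """Solve H (up to scale) from point correspondences."""
    A = []
    for (u, v), (x, z) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -z * u, -z * v, -z])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)     # null-space vector as 3x3 homography

def image_to_world(H, u, v):
    x, z, w = H @ np.array([u, v, 1.0])
    return x / w, z / w             # dehomogenize
```

With exact correspondences the mapping is recovered exactly; for a unit image square mapped to a 2-by-2 world square, the image centre maps to (1, 1).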
CN201910299470.5A 2019-04-15 2019-04-15 Traffic flow parameter real-time detection method based on traffic monitoring video Active CN110033479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910299470.5A CN110033479B (en) 2019-04-15 2019-04-15 Traffic flow parameter real-time detection method based on traffic monitoring video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910299470.5A CN110033479B (en) 2019-04-15 2019-04-15 Traffic flow parameter real-time detection method based on traffic monitoring video

Publications (2)

Publication Number Publication Date
CN110033479A CN110033479A (en) 2019-07-19
CN110033479B true CN110033479B (en) 2023-10-27

Family

ID=67238407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910299470.5A Active CN110033479B (en) 2019-04-15 2019-04-15 Traffic flow parameter real-time detection method based on traffic monitoring video

Country Status (1)

Country Link
CN (1) CN110033479B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555423B (en) * 2019-09-09 2021-12-21 南京东控智能交通研究院有限公司 Multi-dimensional motion camera-based traffic parameter extraction method for aerial video
CN110807924A (en) * 2019-11-04 2020-02-18 吴钢 Multi-parameter fusion method and system based on full-scale full-sample real-time traffic data
CN111161545B (en) * 2019-12-24 2021-01-05 北京工业大学 Intersection region traffic parameter statistical method based on video
CN111310736B (en) * 2020-03-26 2023-06-13 上海同岩土木工程科技股份有限公司 Rapid identification method for unloading and stacking of vehicles in protection area
CN111429484B (en) * 2020-03-31 2022-03-15 电子科技大学 Multi-target vehicle track real-time construction method based on traffic monitoring video
CN111462249B (en) * 2020-04-02 2023-04-18 北京迈格威科技有限公司 Traffic camera calibration method and device
CN111599173A (en) * 2020-05-12 2020-08-28 杭州云视通互联网科技有限公司 Vehicle information automatic registration method, computer equipment and readable storage medium
CN111613061B (en) * 2020-06-03 2021-11-02 徐州工程学院 Traffic flow acquisition system and method based on crowdsourcing and block chain
CN111753797B (en) * 2020-07-02 2022-02-22 浙江工业大学 Vehicle speed measuring method based on video analysis
CN112632208B (en) * 2020-12-25 2022-12-16 际络科技(上海)有限公司 Traffic flow trajectory deformation method and device
CN112837541B (en) * 2020-12-31 2022-04-29 遵义师范学院 Intelligent traffic vehicle flow management method based on improved SSD
CN112907978A (en) * 2021-03-02 2021-06-04 江苏集萃深度感知技术研究所有限公司 Traffic flow monitoring method based on monitoring video
CN112991742B (en) * 2021-04-21 2021-08-20 四川见山科技有限责任公司 Visual simulation method and system for real-time traffic data
US11645906B2 (en) 2021-04-29 2023-05-09 Tetenav, Inc. Navigation system with traffic state detection mechanism and method of operation thereof
CN113139495A (en) * 2021-04-29 2021-07-20 姜冬阳 Tunnel side-mounted video traffic flow detection method and system based on deep learning
CN113380035B (en) * 2021-06-16 2022-11-11 山东省交通规划设计院集团有限公司 Road intersection traffic volume analysis method and system
CN113762139B (en) * 2021-09-03 2023-07-25 万申科技股份有限公司 Machine vision detection system and method for 5G+ industrial Internet

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009245042A (en) * 2008-03-31 2009-10-22 Hitachi Ltd Traffic flow measurement device and program
CN104200657A (en) * 2014-07-22 2014-12-10 杭州智诚惠通科技有限公司 Traffic flow parameter acquisition method based on video and sensor
CN107918765A (en) * 2017-11-17 2018-04-17 中国矿业大学 A kind of Moving target detection and tracing system and its method
CN108629973A (en) * 2018-05-11 2018-10-09 四川九洲视讯科技有限责任公司 Road section traffic volume congestion index computational methods based on fixed test equipment
CN108734959A (en) * 2018-04-28 2018-11-02 扬州远铭光电有限公司 A kind of embedded vision train flow analysis method and system
CN108831161A (en) * 2018-06-27 2018-11-16 深圳大学 A kind of traffic flow monitoring method, intelligence system and data set based on unmanned plane
CN109584558A (en) * 2018-12-17 2019-04-05 长安大学 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Real-Time Traffic Pattern Collection and Analysis Model for Intelligent Traffic Intersection; Unnikrishnan Kizhakkemadam Sreekumar et al.; 2018 IEEE International Conference on Edge Computing (EDGE); full text *
Feng Yingying et al.; Research on Moving Target Tracking Methods in Intelligent Surveillance Video; pp. 30-31. *
Traffic flow detection system based on video image processing; Zhang Jieying et al.; Video Engineering; 2008-06-17 (No. 06); full text *

Also Published As

Publication number Publication date
CN110033479A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN110033479B (en) Traffic flow parameter real-time detection method based on traffic monitoring video
CN110285793B (en) Intelligent vehicle track measuring method based on binocular stereo vision system
CN107424116B (en) Parking space detection method based on side surround view camera
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
CN104766058B (en) A kind of method and apparatus for obtaining lane line
CN110322702A (en) A kind of Vehicular intelligent speed-measuring method based on Binocular Stereo Vision System
CN111563469A (en) Method and device for identifying irregular parking behaviors
CN105608417B (en) Traffic lights detection method and device
CN111340855A (en) Road moving target detection method based on track prediction
CN106503636A (en) A kind of road sighting distance detection method of view-based access control model image and device
CN107315095A (en) Many vehicle automatic speed-measuring methods with illumination adaptability based on Video processing
CN106127145A (en) Pupil diameter and tracking
WO2023240805A1 (en) Connected vehicle overspeed early warning method and system based on filtering correction
CN107796373A (en) A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
Mahersatillah et al. Unstructured road detection and steering assist based on hsv color space segmentation for autonomous car
CN113449632B (en) Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN116794650A (en) Millimeter wave radar and camera data fusion target detection method and device
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment
US20220404170A1 (en) Apparatus, method, and computer program for updating map
CN110969875B (en) Method and system for road intersection traffic management
CN108615028A (en) The fine granularity detection recognition method of harbour heavy vehicle
CN115035251B (en) Bridge deck vehicle real-time tracking method based on field enhanced synthetic data set
CN113160299B (en) Vehicle video speed measurement method based on Kalman filtering and computer readable storage medium
Niu Object Detection and Tracking for Autonomous Driving by MATLAB toolbox

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant