CN117671972B - Vehicle speed detection method and device for slow traffic system


Info

Publication number
CN117671972B
Authority
CN
China
Prior art keywords
vehicle
target vehicle
video
actual
target
Prior art date
Legal status
Active
Application number
CN202410138524.0A
Other languages
Chinese (zh)
Other versions
CN117671972A (en)
Inventor
刘春杰
顾涛
王书灵
胡莹
荆禄波
初众甫
肖元轶
黄龙
曹宇
苏楦雯
金�一
李浥东
王涛
张哲宁
马洁
王强
Current Assignee
Beijing Transport Institute
Original Assignee
Beijing Transport Institute
Priority date
Filing date
Publication date
Application filed by Beijing Transport Institute filed Critical Beijing Transport Institute
Priority to CN202410138524.0A
Publication of CN117671972A
Application granted granted Critical
Publication of CN117671972B


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses a vehicle speed detection method and device for a slow traffic system. The vehicle speed detection method for the slow traffic system comprises the following steps: acquiring a video to be detected; analyzing the video to be detected by using a vehicle detection model to identify a target vehicle in the video to be detected, wherein the vehicle detection model is obtained through machine learning training on a plurality of sets of training data, and each set of training data comprises a video image sequence of a non-motor vehicle lane and a label identifying a travelling non-motor vehicle in the video image sequence; tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle; and analyzing the motion track to obtain the actual running speed of the target vehicle. The invention can improve the accuracy of vehicle speed detection.

Description

Vehicle speed detection method and device for slow traffic system
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a vehicle speed detection method and device for a slow traffic system.
Background
Bicycles and electric vehicles, as a zero-carbon and healthy mode of travel, can play a larger role in the urban traffic system and can reshape the travel structure by continuously improving travel quality. Therefore, within a slow traffic system, detecting the running speeds of bicycles and electric vehicles helps perfect the urban slow traffic network and ensures the safety and smoothness of the urban traffic system.
Conventional vehicle speed detection methods mainly include loop coil vehicle sensors, interval speed measurement, radar microwave speed measurement, laser detection and the like. However, these conventional approaches generally have difficulty recognizing category attributes of a vehicle such as the vehicle type; at the same time, installation, maintenance and operation of such equipment usually require expensive investment, and the measurements are susceptible to weather, multipath interference and other external factors, which reduces speed measurement accuracy. For example, when speed is measured with an inductive loop coil, the coil must be buried in the road, the cost of installing and maintaining the equipment is relatively high, and the coil is easily affected by weather, so that the speed measurement data become inaccurate.
Therefore, it is necessary to provide a vehicle speed detection method that is low in cost, stable in speed measurement, and high in accuracy.
Disclosure of Invention
In order to solve at least one technical problem in the background art, the invention provides a vehicle speed detection method and device for a slow traffic system, which are used for reducing detection cost and improving speed measurement stability and accuracy in the slow traffic system.
According to a first aspect of the present invention, there is provided a method for detecting vehicle speed in a slow traffic system, comprising:
Acquiring a video to be detected;
Analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a sequence of video images of the lane of the non-motor vehicle, and a tag identifying the non-motor vehicle in the sequence of video images;
Tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
And analyzing the motion trail to obtain the actual running speed of the target vehicle.
Further, the vehicle detection model uses multiple sets of training data to train through machine learning, including:
Acquiring an initial detection model, wherein the initial detection model comprises a Yolo model;
Collecting multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments;
Preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set;
labeling pictures in the video data set according to preset classification categories, and generating a corresponding label for each picture, wherein the labels comprise category information and position information; the preset classification categories comprise two main categories: a person riding a bicycle and a person riding an electric vehicle;
Training the initial detection model by using the marked video data set until a vehicle detection model meeting preset conditions is obtained.
Further, training the initial detection model by using the noted video data set until a vehicle detection model meeting a preset condition is obtained, including:
Inputting the marked video data set into the initial detection model, training the initial detection model according to preset parameters until a weight file with highest detection precision in training rounds is obtained, and deriving a trained vehicle detection model;
wherein the preset parameters comprise the batch size of training and training rounds.
Further, before the analyzing the video to be detected by using the vehicle detection model, the method includes:
determining a non-motor vehicle lane region in the video to be detected;
the analyzing the video to be detected by using the vehicle detection model to identify the target vehicle in the video to be detected comprises the following steps:
and detecting the type of the running vehicle in the non-motor lane area by using a vehicle detection model, and identifying the target vehicle.
Further, the analyzing the video to be detected by using the vehicle detection model, to identify the target vehicle in the video to be detected, includes:
Inputting the video to be detected into the vehicle detection model;
The vehicle detection model identifies a target vehicle in the video to be detected and marks the target vehicle through a boundary box; wherein the target vehicle comprises a traveling non-motor vehicle;
Acquiring the characteristic information of the target vehicle and storing the characteristic information in a detection list, wherein the detection list comprises the mapping relation between the characteristic information of the target vehicle and the tracking ID of the target vehicle; the characteristic information comprises pixel point position, width and height information, a confidence threshold value and a category to which the pixel point belongs.
Further, the tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion trail of the target vehicle includes:
Based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list;
And sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
Further, the analyzing the motion trail to obtain an actual running speed of the target vehicle includes:
Calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail;
Calculating the actual movement distance of the target vehicle based on the mapping relation from the image pixel point distance to the actual distance, wherein the mapping relation is as follows:
S = D * pixel_distance;
wherein S represents an actual motion distance, D represents a pixel distance, and pixel_distance represents an actual distance represented by each image pixel point;
Acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points;
and obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
Further, the actual distance pixel_distance represented by each image pixel point satisfies the following formula:
pixel_distance1 = a / bicycle_width; or,
pixel_distance2 = b / electric_width;
Wherein, pixel_distance1 represents the actual distance represented by each pixel point when the target vehicle is a person riding a bicycle, a is a constant representing the actual average width of an ordinary bicycle, and bicycle_width represents the width in pixels of the bicycle in the video image; pixel_distance2 represents the actual distance represented by each pixel point when the target vehicle is a person riding an electric vehicle, b is a constant representing the actual average width of an ordinary electric vehicle, and electric_width represents the width in pixels of the electric vehicle in the video image.
Further, the vehicle speed detection method includes:
The actual running speed of the target vehicle is corrected and verified based on the standard reference speed of the target vehicle.
According to a second aspect of the present invention, there is also provided a vehicle speed detection apparatus for a slow traffic system, comprising:
the data acquisition module is used for acquiring a video to be detected;
The target detection module is used for analyzing the video to be detected by utilizing a vehicle detection model and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a video image sequence of a lane of a non-motor vehicle, and a tag identifying a driving non-motor vehicle in the video image sequence;
The target tracking module is used for tracking the target vehicle in the video to be detected by utilizing a target tracking model, so as to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
And the data processing module is used for analyzing the motion trail to obtain the actual running speed of the target vehicle.
By the technical scheme of the invention, the following technical effects can be obtained:
The invention provides a vehicle speed detection method and device which use computer vision and image processing technology to detect and track bicycles and electric vehicles travelling in video frames, so as to identify and track the specific position and trajectory of each vehicle, and which calculate the actual running speed of the target vehicle by analyzing how the position of the bicycle and/or electric vehicle changes over time in the detection video.
Compared with traditional speed measuring methods, the speed measurement provided by the invention is video-based and therefore has a large monitoring range and rich detection information. The only external hardware required is a camera, so the equipment is simple and low in cost, operation is straightforward, and the complexity of detection is reduced. In addition, the detection method provides accurate, real-time, stable and reliable speed data for slow traffic vehicles, helps traffic management departments learn the speed situation of non-motor vehicles on slow traffic roads in time, and improves the reliability and convenience of slow traffic.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting vehicle speed for a slow traffic system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a training method of the vehicle detection model shown in FIG. 1;
FIG. 3 is a flowchart of a method for training the initial detection model using a labeled video data set until a vehicle detection model satisfying a preset condition is obtained, according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for tracking the target vehicle in the video to be detected and obtaining a motion trail of the target vehicle through the target tracking model according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for analyzing the motion trail and obtaining the actual running speed of the target vehicle according to the embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a vehicle speed detecting device for a slow traffic system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "first," "second," "third," and "fourth," etc. in the description and claims of the present application are used for distinguishing between different objects and not for describing a particular sequential order. The terms "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion.
In the related art, conventional vehicle speed detection methods require the purchase of expensive equipment, are costly, and are difficult to install and maintain; meanwhile, traditional speed measuring methods have difficulty identifying specific attributes of a vehicle, such as the license plate number or the vehicle type. The measuring range of radar and laser detection methods may be limited, and methods such as radar and infrared speed measurement are easily disturbed by external factors such as multipath interference, surrounding radio waves and weather, leading to inaccurate measurements. In addition, early moving object detection algorithms mainly comprise the inter-frame difference method, the optical flow method and the background difference method, but their performance is limited in complex and dynamic environments and they are easily disturbed by various factors, which limits their effect in practical applications: the inter-frame difference method is very sensitive to illumination change, camera vibration and similar factors and may cause false detections and missed detections, while the optical flow method depends on the optical flow relation between the camera and the object and therefore performs poorly in scenes with large camera motion or rapid movement.
However, with the development of the computer vision field, modern target detection algorithms have adopted deep learning techniques, such as convolutional neural networks and the Yolo family of networks, and have made significant progress in accuracy, robustness and adaptability. These deep learning methods cope better with illumination changes, complex backgrounds, occlusions and multi-target scenes. Therefore, the invention provides a vehicle speed detection method based on road monitoring video in a slow traffic system: a target detection algorithm (such as the Yolo algorithm) identifies and detects slow traffic vehicles, namely people riding bicycles and people riding electric vehicles; a target tracking algorithm (such as the Deepsort algorithm) then tracks the identified travelling non-motor vehicles and establishes the association of each moving target across adjacent frames of the video; the vehicle speed is calculated by analyzing the change of the target vehicle's position in the video over time together with the conversion from pixels to actual distance; and finally the measured speed data is corrected and verified to ensure accuracy.
Note that, slow traffic generally refers to a traffic mode in which slow travel is adopted. In urban traffic systems, both pedestrian traffic at a speed of 4-7 km per hour and non-motor vehicle (including bicycles and electric vehicles) traffic at a speed of less than 20 km per hour can be referred to as "slow traffic".
Fig. 1 is a flow chart of a method for detecting vehicle speed for a slow traffic system according to an embodiment of the present invention. As shown in fig. 1, the present invention provides a vehicle speed detection method for a slow traffic system, the method comprising:
S10, acquiring a video to be detected;
In the step, the video to be detected is obtained according to the actual demands of the user, and the video to be detected comprises the road traffic condition of the target road section. Optionally, the video to be detected may be collected by a road monitoring device. The vehicle speed is detected based on the video data, and the video data has wider coverage area and larger detection information quantity, so that the accuracy of vehicle speed detection is improved; in addition, the speed detection is carried out based on video data, external hardware equipment only depends on a camera, the equipment is simple, the equipment is not easy to be interfered by external factors, and the speed detection accuracy is further improved.
S20, analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a sequence of video images of a lane of a non-motor vehicle, and a tag identifying the non-motor vehicle in the sequence of video images in transit;
The purpose of step S20 is to identify the target vehicle present in the video to be detected; the target vehicle includes a travelling non-motor vehicle, for example the two types considered here, a person riding a bicycle and a person riding an electric vehicle.
As an alternative embodiment, the vehicle detection model is a Yolo model trained using multiple sets of training data. Specifically, as shown in fig. 2, the training of the vehicle detection model comprises:
s21, acquiring an initial detection model, wherein the initial detection model comprises a Yolo model;
In this step, the initial detection model may be any model in the Yolo series, for example Yolov4, Yolov6 or Yolov7. In this embodiment, a Yolov model with higher detection accuracy is adopted as the initial detection model.
S22, acquiring multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments;
Alternatively, a large amount of non-motor vehicle lane video data can be rapidly acquired by using video equipment arranged on the road or manually shooting and collecting the non-motor vehicle lane video data of the target road section. The target road section is selected according to actual requirements. The different time periods and environments comprise the driving conditions of road bicycles and electric vehicles in the morning and evening peak time periods, the low peak time period and the night time period.
Taking Beijing as a specific example, the high-position and low-pile video equipment of the Beijing parking management business center can be combined with manual shooting on the data-collection road sections, shooting at different time periods, from different angles and in different scenes, so that a large number of non-motor vehicle lane videos of slow traffic roads can be acquired rapidly. Step S22 collects video data of the non-motor vehicle lane in various scenes so as to provide rich training samples and thereby improve the accuracy of the vehicle detection model.
S23, preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set;
as an alternative embodiment, the preprocessing the non-motor vehicle lane video data to obtain a video image sequence and making a video data set includes:
Using the ffmpeg tool, a picture is extracted every 8-12 frames from the non-motor vehicle lane video data to obtain a video image sequence, and a video data set is created from the pictures in the video image sequence. In this embodiment, a picture is extracted every 10 frames, which reduces the number of images to be labeled and the similarity between labeled pictures, and prevents excessive redundancy when the initial detection model is trained subsequently.
Here the ffmpeg tool is used to convert a video file into a sequence of images. The video image sequence is a collection of a series of successive images; in this embodiment it comprises 15776 pictures, which together form the video data set.
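As an illustration of this preprocessing step, the following minimal Python sketch keeps one picture every 10 frames by invoking the ffmpeg command-line tool mentioned above; the file names, output directory and the use of the select filter are illustrative choices, not details taken from the patent.

```python
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n: int = 10) -> None:
    """Call ffmpeg to keep one frame out of every `every_n` frames."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    # select='not(mod(n,10))' keeps frames whose index is a multiple of 10;
    # -vsync vfr drops the timestamps of the discarded frames.
    subprocess.run(
        [
            "ffmpeg", "-i", video_path,
            "-vf", f"select='not(mod(n,{every_n}))'",
            "-vsync", "vfr",
            f"{out_dir}/img_%05d.jpg",
        ],
        check=True,
    )

extract_frames("nonmotor_lane.mp4", "dataset/images", every_n=10)
```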
S24, labeling the pictures in the video data set according to preset classification, and generating corresponding labels for each picture, wherein the labels comprise classification information and position information;
based on the video data set created in step S23, the step S24 performs high-quality labeling on each picture in the video data set, and labels category information and position information respectively. As an optional embodiment, the labeling the pictures in the video dataset according to the preset classification category, and generating a corresponding label for each picture includes:
Labeling each picture in the video data set according to the preset classification categories by using the Labelimg tool, and generating the category information and position information of each picture; the preset classification categories comprise the types of non-motor vehicles in motion, namely the two categories of a person riding a bicycle and a person riding an electric vehicle.
It can be understood that the target vehicles appearing in each picture are labeled according to the two categories, i.e. a person riding a bicycle and a person riding an electric vehicle, until all pictures in the video data set are labeled, yielding a data set in the specified format, that is, a data set whose pictures are labeled according to the preset classification categories. Optionally, during labeling, in addition to the vehicle type, the position information of the target vehicle is labeled at the same time; for example, a person riding a bicycle in a picture is labeled with the category 'person riding a bicycle' together with the position of the bicycle in the picture, for example its pixel position. A picture may carry one or more labels.
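For illustration, the sketch below parses one annotation in the normalized "class x_center y_center width height" text format that Labelimg can export for Yolo training; the class indices (0 for a person riding a bicycle, 1 for a person riding an electric vehicle) and the example values are assumptions, not specified in the patent.

```python
CLASS_NAMES = {0: "person riding a bicycle", 1: "person riding an electric vehicle"}

def parse_yolo_label(line: str) -> dict:
    """Parse one Yolo-format label line: class cx cy w h (coordinates normalized to [0, 1])."""
    cls, cx, cy, w, h = line.split()
    return {
        "category": CLASS_NAMES[int(cls)],   # category information
        "center": (float(cx), float(cy)),    # position information (box centre)
        "size": (float(w), float(h)),        # box width and height
    }

print(parse_yolo_label("0 0.512 0.634 0.081 0.152"))
```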
And S25, training the initial detection model by using the marked video data set until a vehicle detection model meeting the preset condition is obtained.
Specifically, the step S25 further includes:
Inputting the labeled video data set into the initial detection model, training the initial detection model according to preset parameters until the weight file with the highest detection precision over the training rounds is obtained, and exporting the trained vehicle detection model; wherein the preset parameters comprise the training batch size and the number of training rounds. In this embodiment, the training batch size is 32 and the number of training rounds is 200.
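The patent does not name a specific Yolo implementation or version; as one hedged possibility, the sketch below trains a model with the Ultralytics YOLO Python API using the batch size of 32 and the 200 training rounds given above. The checkpoint name, the data.yaml path and the best.pt location are conventions of that library, not details from the patent.

```python
from ultralytics import YOLO

# Start from a pretrained Yolo checkpoint (checkpoint name is an assumption).
model = YOLO("yolov8n.pt")

# Train with the preset parameters described above: batch size 32, 200 rounds.
model.train(data="data.yaml", epochs=200, batch=32, imgsz=640)

# By default Ultralytics saves the best-performing weights under
# runs/detect/train/weights/best.pt (exact path may vary per run);
# reload them as the trained vehicle detection model.
vehicle_detector = YOLO("runs/detect/train/weights/best.pt")
```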
As an alternative embodiment, fig. 3 shows a flowchart of a method for training the initial detection model by using the labeled video data set until a vehicle detection model satisfying a preset condition is obtained, and as shown in fig. 3, a method for training the initial detection model by using the labeled video data set includes:
S251, dividing the pictures in the labeled video data set to generate a training set, a verification set and a test set; in this embodiment, the pictures in the video data set are divided in the ratio 8:1:1, that is, the training set contains 8/10 of the pictures, the verification set contains 1/10 and the test set contains 1/10. During division, the training set, verification set and test set pictures are allocated randomly, after which the data set is ready (a minimal sketch of this split is given after step S254 below).
S252, inputting the training set into the initial detection model for training to obtain a first training model;
S253, inputting the verification set into the first training model for adjustment to obtain a second training model; in the step, the verification set picture can be repeatedly used for adjusting the first training model for multiple times, so that a more reliable and stable training model is obtained.
S254, inputting the test set into the second training model for testing, and obtaining the final vehicle detection model.
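A minimal sketch of the 8:1:1 random split described in step S251 follows; the file-name pattern and the use of random.shuffle are illustrative assumptions.

```python
import random

def split_dataset(picture_paths, seed=0):
    """Randomly split labelled pictures into training, verification and test sets (8:1:1)."""
    paths = list(picture_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

train_set, val_set, test_set = split_dataset(f"img_{i:05d}.jpg" for i in range(15776))
print(len(train_set), len(val_set), len(test_set))   # roughly 12620 / 1577 / 1579
```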
The vehicle detection model trained in this way for the slow traffic system can accurately, conveniently and efficiently detect the non-motor vehicles travelling in the non-motor vehicle lane, which improves the accuracy and practicability of the vehicle detection model. It should be noted that each Yolo model mentioned above is an existing target detection algorithm, and the specific detection algorithms are not described herein.
As an optional embodiment, before the analyzing the video to be detected using the vehicle detection model, the method includes: determining a non-motor vehicle lane region in the video to be detected; and analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected, including: and detecting the type of the running vehicle in the non-motor lane area by using a vehicle detection model, and identifying the target vehicle.
It can be understood that the alternative embodiment does not need to perform recognition detection on the whole video to be detected, and a non-motor vehicle lane area can be selected in the video to be detected before recognition, so that the driving vehicles on the non-motor vehicle lane are mainly detected, and the sidewalks and motor vehicle lanes on two sides of the road are ignored, thereby reducing the number of corresponding boundary frames of the target vehicle in the subsequent detection process and improving the detection and tracking speed of the target vehicle.
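To illustrate restricting detection to the non-motor vehicle lane, the sketch below keeps only detections whose bounding-box centre falls inside a manually specified lane polygon, using OpenCV's point-in-polygon test; the polygon coordinates, box format and function names are assumptions made for illustration.

```python
import numpy as np
import cv2

# Non-motor lane region in image coordinates (example corner points, chosen manually).
LANE_POLYGON = np.array([[120, 700], [560, 340], [760, 340], [860, 700]], dtype=np.int32)

def in_lane(box_xywh) -> bool:
    """True if the centre of an (x, y, w, h) bounding box lies inside the lane polygon."""
    x, y, w, h = box_xywh
    cx, cy = x + w / 2.0, y + h / 2.0
    return cv2.pointPolygonTest(LANE_POLYGON.reshape(-1, 1, 2), (float(cx), float(cy)), False) >= 0

detections = [(100, 650, 60, 90), (500, 400, 40, 70)]
lane_detections = [d for d in detections if in_lane(d)]
```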
In step S20, specifically, the video to be detected is input into the vehicle detection model; the vehicle detection model identifies a target vehicle in the video to be detected, and identifies the target vehicle through a boundary box; wherein the target vehicle comprises a traveling non-motor vehicle; and simultaneously acquiring the characteristic information of the target vehicle, and storing the mapping relation between the characteristic information of the target vehicle and the tracking ID of the target vehicle in a detection list. Optionally, the detection list uses video frame numbers as indexes, and stores mapping relations between tracking IDs and feature information of the target vehicles under each video frame number.
After the characteristic information of each target vehicle is acquired, a tracking ID is set for each target vehicle, the same target vehicle has the same tracking ID, and the tracking ID of each target vehicle is unique. The tracking ID is used for identifying own information so as to establish connection between adjacent frames in the video to be detected. The tracking ID may be provided in a unique symbol such as a number or a letter, and is not limited herein.
The characteristic information comprises pixel point position, width and height information, a confidence threshold value and a category to which the pixel point belongs. Optionally, the pixel point position is any image pixel point on the bounding box for identifying the target vehicle, and may be an upper left corner position, an upper right corner position, a lower left corner position, a lower right corner position, or a bounding box center position. In this embodiment, the pixel point is the center image pixel point of the bounding box, so that the measurement data can be more accurate. The width and height information is the width and height of the bounding box; the confidence threshold is a preset value, and the accuracy of the detection result can be limited by adjusting the confidence threshold; the belonging category is the category of the target vehicle currently identified, such as belonging to a person riding a bicycle or a person riding an electric vehicle.
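A hedged sketch of the detection list described here: each entry maps a tracking ID to the feature information (centre pixel position, box width/height, confidence and category) under its video frame number. The field and variable names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleFeature:
    center: tuple[float, float]   # centre image pixel of the bounding box
    width: float                  # bounding-box width in pixels
    height: float                 # bounding-box height in pixels
    confidence: float             # detection confidence (kept only if above the threshold)
    category: str                 # "person riding a bicycle" or "person riding an electric vehicle"

@dataclass
class DetectionList:
    # frame number -> {tracking ID -> feature information}
    frames: dict[int, dict[int, VehicleFeature]] = field(default_factory=dict)

    def add(self, frame_no: int, track_id: int, feat: VehicleFeature) -> None:
        self.frames.setdefault(frame_no, {})[track_id] = feat

detections = DetectionList()
detections.add(0, 3, VehicleFeature((415.0, 280.5), 42.0, 88.0, 0.91, "person riding a bicycle"))
```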
S30, tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
After the target vehicle is identified in step S20, it is further tracked in combination with the target tracking model, since the target vehicle keeps moving in the video. Optionally, the target tracking model adopts an existing Deepsort model; in this embodiment, the detection list detections may be passed into the DeepSort model to track the target vehicle.
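As a hedged illustration of passing the detections into a DeepSort tracker, the sketch below uses the open-source deep-sort-realtime package, which is one possible implementation not named in the patent; in that package, update_tracks expects ([left, top, width, height], confidence, class) tuples and returns track objects carrying a unique track_id.

```python
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30)  # drop a track after 30 frames without a matching detection

def track_frame(frame, detections):
    """detections: list of ([left, top, width, height], confidence, class_name) tuples."""
    tracks = tracker.update_tracks(detections, frame=frame)
    results = []
    for t in tracks:
        if not t.is_confirmed():
            continue
        left, top, right, bottom = t.to_ltrb()
        center = ((left + right) / 2.0, (top + bottom) / 2.0)  # centre pixel used for the motion track
        results.append((t.track_id, center))
    return results
```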
As an alternative embodiment, fig. 4 shows a flowchart of a method for the target tracking model to track the target vehicle in the video to be detected and obtain a motion track of the target vehicle, and as shown in fig. 4, the method for the target tracking model to track the target vehicle in the video to be detected and obtain the motion track of the target vehicle includes:
S31, based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list;
Specifically, the image pixel points with the same tracking ID are obtained every preset number of video frames, and the obtained image pixel points are arranged in sequence. For example, in the detection list, the target vehicle with tracking ID 3 is looked up every 10 frames and the center image pixel point of its bounding box in the video to be detected is obtained, so that a plurality of center image pixel points of the same target vehicle are obtained in sequence, numbered 1, 2, 3 and so on.
S32, sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
In this step, the plurality of image pixel points of the same target vehicle obtained in step S31 are connected in video order to obtain the motion track of the target vehicle. The motion track represents the trajectory of the target vehicle during driving; note that it represents a motion in pixels, not the actual motion on the road. The motion track can be understood as the change of the target vehicle's pixel point position as the video frame number changes.
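The following sketch accumulates, for each tracking ID, the sequence of centre pixel points sampled every preset number of frames, which is the motion track described above; the data layout, names and the 10-frame sampling interval are illustrative.

```python
from collections import defaultdict

SAMPLE_EVERY = 10  # preset video-frame interval between sampled pixel points

def build_tracks(detection_list):
    """detection_list: {frame_no: {track_id: (cx, cy)}} -> {track_id: [(frame_no, (cx, cy)), ...]}."""
    tracks = defaultdict(list)
    for frame_no in sorted(detection_list):
        if frame_no % SAMPLE_EVERY != 0:
            continue
        for track_id, center in detection_list[frame_no].items():
            tracks[track_id].append((frame_no, center))
    return tracks

example = {0: {3: (410.0, 275.0)}, 10: {3: (452.0, 281.0)}, 20: {3: (493.5, 288.0)}}
print(build_tracks(example)[3])   # the motion track of the vehicle with tracking ID 3
```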
S40, analyzing the motion trail to obtain the actual running speed of the target vehicle.
Specifically, fig. 5 shows a flowchart of a method for analyzing the motion trail to obtain the actual running speed of the target vehicle, and as shown in fig. 5, the method for analyzing the motion trail to obtain the actual running speed of the target vehicle includes:
s41, calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail;
Optionally, this step uses the first-and-last-frame calculation method to compute the pixel distance of the target vehicle. Assuming that the first image pixel position point and the last image pixel position point in the motion track are (x1, y1) and (x2, y2) respectively, the pixel distance D of the target vehicle's motion is expressed by the following formula:
D = sqrt((x2 - x1)^2 + (y2 - y1)^2) (1);
S42, calculating the actual movement distance of the target vehicle based on the mapping relation from the pixel point distance to the actual distance, wherein the mapping relation is as follows:
S = D * pixel_distance; (2)
wherein S represents the actual movement distance, D represents the pixel distance, and pixel_distance represents the actual distance represented by each pixel point.
Specifically, as the vehicle approaches or moves away from the camera during detection, its apparent size in the image changes. In order to obtain the actual displacement of the vehicle over a period of time, the mapping relationship between the pixel coordinates of the image and the actual coordinates of the corresponding points in space can be established through the ratio of the actual width of the vehicle to the pixel width of the vehicle in the video image. Optionally, the actual distance pixel_distance represented by each pixel point satisfies the following formula:
pixel_distance1 = a / bicycle_width (3); or,
pixel_distance2 = b / electric_width (4);
Wherein, pixel_distance1 represents the actual distance represented by each pixel point when the target vehicle is a person riding a bicycle, a is a constant representing the actual average width of an ordinary bicycle, and bicycle_width represents the width in pixels of the bicycle in the video image; pixel_distance2 represents the actual distance represented by each pixel point when the target vehicle is a person riding an electric vehicle, b is a constant representing the actual average width of an ordinary electric vehicle, and electric_width represents the width in pixels of the electric vehicle in the video image.
The actual average body width of an ordinary bicycle is 0.3 m and that of an ordinary electric vehicle is 0.5 m, so a = 0.3 and b = 0.5. For example, when the target vehicle is a person riding a bicycle, the actual distance represented by each pixel point may be taken as pixel_distance1 = 0.3 / bicycle_width, and when it is a person riding an electric vehicle, as pixel_distance2 = 0.5 / electric_width.
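A minimal sketch of formulas (3) and (4), assuming the constants a = 0.3 m and b = 0.5 m given above; the function name and the way the category string is matched are illustrative.

```python
def metres_per_pixel(category: str, box_width_px: float) -> float:
    """Actual distance represented by one image pixel, from the vehicle's pixel width."""
    if category == "person riding a bicycle":
        return 0.3 / box_width_px          # pixel_distance1 = a / bicycle_width, a = 0.3 m
    if category == "person riding an electric vehicle":
        return 0.5 / box_width_px          # pixel_distance2 = b / electric_width, b = 0.5 m
    raise ValueError(f"unknown category: {category}")

# A bicycle whose bounding box is 40 pixels wide -> each pixel represents 0.0075 m.
print(metres_per_pixel("person riding a bicycle", 40.0))
```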
S43, acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points;
In this step, the time interval between specified video frames can be obtained from the video image frame rate c. In this embodiment, the video image frame rate c is 30 fps, i.e. 30 frames per second. The motion time T may be calculated from the start frame StartFrame and the end frame EndFrame of the motion track, where the motion time T satisfies the following formula:
T = (EndFrame - StartFrame) / c (5);
Wherein any video frame in the motion track may be selected as the start frame StartFrame or the end frame EndFrame, provided that the end frame is later than the start frame; the video image frame rate c is a constant, typically c = 30.
And S44, obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
In this step, an actual running speed is calculated from an actual movement distance S and movement time T of the running bicycle or electric vehicle for a period of time, the actual running speed satisfying the following formula:
V = S / T (6);
The actual travel speed of the target vehicle can be obtained by substituting the actual travel distance S obtained in step S42 and the travel time obtained in step S43 into the above formula (6).
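Putting formulas (1), (2), (5) and (6) together, a hedged end-to-end sketch: it takes the first and last sampled points of a motion track, converts the pixel distance to metres with the mapping sketched above, and divides by the motion time derived from the frame numbers. The metre_per_px argument corresponds to the illustrative metres_per_pixel helper, and the frame rate of 30 is the constant c from the text; the example track values are made up.

```python
import math

def actual_speed(track, metre_per_px: float, fps: float = 30.0) -> float:
    """track: [(frame_no, (cx, cy)), ...] sampled points of one target vehicle.
    Returns the actual running speed V = S / T in metres per second."""
    (start_frame, (x1, y1)) = track[0]
    (end_frame, (x2, y2)) = track[-1]
    d_pixels = math.hypot(x2 - x1, y2 - y1)            # formula (1): pixel distance D
    s_metres = d_pixels * metre_per_px                 # formula (2): S = D * pixel_distance
    t_seconds = (end_frame - start_frame) / fps        # formula (5): T = (EndFrame - StartFrame) / c
    return s_metres / t_seconds                        # formula (6): V = S / T

track = [(0, (410.0, 275.0)), (60, (650.0, 300.0))]
print(actual_speed(track, metre_per_px=0.0075))        # about 0.9 m/s for this made-up track
```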
Further, the vehicle speed detection method further includes: and filtering and smoothing the actual running speed. In this embodiment, a gaussian filtering method is used to filter and smooth the actual running speed, so as to prevent the calculated vehicle speed from oscillating.
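One way to realize the Gaussian filtering and smoothing mentioned here is scipy.ndimage.gaussian_filter1d applied to the sequence of speed estimates; the sigma value and the sample data are illustrative choices, not specified by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Raw per-interval speed estimates (m/s) for one target vehicle, with some oscillation.
raw_speeds = np.array([3.1, 3.4, 2.8, 3.6, 3.0, 3.3, 2.9])

# Smooth with a 1-D Gaussian kernel to prevent the calculated vehicle speed from oscillating.
smoothed = gaussian_filter1d(raw_speeds, sigma=1.0)
print(smoothed)
```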
As an alternative embodiment, the vehicle speed detection method further includes:
S50, correcting and verifying the actual running speed of the target vehicle based on the standard reference speed of the target vehicle.
After the actual running speed of the target vehicle is calculated in step S40, the calculated actual running speed is corrected according to the movement direction and the offset angle of the target vehicle, giving a corrected speed. In this embodiment, the movement direction is obtained by judging whether the actual travel route of the target vehicle follows a straight line; if it does not, the actual movement distance is corrected according to the route deviation angle, and the corrected speed is then calculated. The route deviation angle refers to the angle by which the actual travel route deviates from straight-line travel.
The corrected speed is then verified and subjected to error analysis using the standard reference speed of the target vehicle, giving the final running speed of the vehicle. The standard reference speed may be set to the speed prescribed for non-motor vehicles; for example, non-motor vehicles are required to travel at less than 20 km per hour. By correcting and verifying the calculated actual running speed, step S50 can improve the accuracy and stability of vehicle speed detection.
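The patent describes correcting the distance by the route deviation angle and then verifying against the standard reference speed, but does not give the correction formula. The sketch below uses one plausible reading, S_corrected = S / cos(theta) for a route deviating by angle theta from straight-line travel, and rejects speeds above the 20 km/h reference; both the formula and the thresholds are assumptions.

```python
import math

REFERENCE_SPEED = 20.0 / 3.6   # standard reference speed for non-motor vehicles, 20 km/h in m/s

def corrected_speed(s_metres: float, t_seconds: float, deviation_deg: float) -> float:
    """Correct the straight-line distance for a route deviating from straight travel,
    then recompute the speed (one hedged interpretation of the correction step)."""
    s_corrected = s_metres / math.cos(math.radians(deviation_deg))
    return s_corrected / t_seconds

def verify(speed: float) -> bool:
    """Verification against the standard reference speed: accept plausible values only."""
    return 0.0 < speed <= REFERENCE_SPEED

v = corrected_speed(9.0, 2.0, deviation_deg=10.0)
print(v, verify(v))   # about 4.57 m/s, accepted (below 20 km/h)
```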
Further, the vehicle speed detection method further includes: and outputting the actual running speed video of the target vehicle. The actual running speed video is used for representing the real-time speed of the target vehicle, and can be displayed in the form of a video player and the like in electronic equipment such as a computer and the like so that a front-end user can monitor the vehicle speed in real time.
In the vehicle speed detection method provided by the embodiment of the invention, within a slow traffic system, travelling bicycles and electric vehicles are identified and tracked through the Yolov neural network and the DeepSort model. Compared with traditional target detection algorithms, the method copes better with illumination change, occlusion and complex multi-target scenes, and offers better accuracy, robustness and real-time performance. In addition, compared with traditional vehicle speed detection methods such as inductive loop coil and radar speed measurement, the video-based vehicle speed detection method provided by the invention, relying on computer vision and video image processing technology, is more cost-effective, more flexible and more comprehensive, thereby ensuring the accuracy and reliability of slow traffic vehicle speed detection.
Based on the vehicle speed detection method for the slow traffic system provided in the first embodiment, the embodiment provides a vehicle speed detection device corresponding to the vehicle speed detection method, and the same content refers to the above method embodiment, which is not repeated. Fig. 6 shows a schematic structural diagram of a vehicle speed detection device for a slow traffic system, as shown in fig. 6, the vehicle speed detection device includes:
the data acquisition module 10 is used for acquiring a video to be detected;
The target detection module 20 is configured to analyze the video to be detected by using a vehicle detection model, and identify a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a video image sequence of a lane of a non-motor vehicle, and a tag identifying a driving non-motor vehicle in the video image sequence;
the target tracking module 30 is configured to track the target vehicle in the video to be detected by using a target tracking model, so as to obtain a motion track of the target vehicle, where the motion track is generated based on movement of the pixel point positions of the corresponding images of the same target vehicle;
And the data processing module 40 is used for analyzing the motion trail to obtain the actual running speed of the target vehicle.
Further, the vehicle speed detection apparatus further includes a model training module 50, and the model training module 50 is configured to: acquiring an initial detection model, wherein the initial detection model comprises a Yolo model; collecting multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments; preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set; labeling pictures in the video data set according to preset classification categories, and generating corresponding labels for each picture, wherein the labels comprise category information and position information; the preset classification comprises two main categories of riding a bicycle and riding an electric bicycle; training the initial detection model by using the marked video data set until a vehicle detection model meeting preset conditions is obtained.
The model training module 50 is further configured to: dividing pictures in the marked video data set to generate a training set, a verification set and a test set; inputting the training set into the initial detection model for training to obtain a first training model; inputting the verification set into the first training model for adjustment to obtain a second training model; and inputting the test set into the second training model for testing to obtain the final vehicle detection model.
The target tracking module 30 is specifically configured to: based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list; and sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
The data processing module 40 is specifically configured to: calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail; calculating the actual movement distance of the target vehicle based on the mapping relation from the pixel point distance to the actual distance; acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points; and obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
Further, the vehicle speed detection device further includes a speed smoothing module 60, where the speed smoothing module 60 is configured to filter and smooth the actual running speed to prevent the calculated vehicle speed from oscillating.
Further, the vehicle speed detection device further comprises a correction and verification module 70, wherein the correction and verification module 70 is used for correcting and verifying the actual running speed of the target vehicle based on the standard reference speed of the target vehicle, so that the accuracy and stability of vehicle speed detection are improved.
Further, the vehicle speed detection apparatus further includes a data output module 80 for outputting an actual running speed video of the target vehicle for display to the front-end user.
In the vehicle speed detection device provided by the embodiment of the invention, within a slow traffic system, travelling bicycles and electric vehicles are identified and tracked through the Yolov neural network and the DeepSort model. Compared with traditional target detection algorithms, the device copes better with illumination change, occlusion and complex multi-target scenes, and offers better accuracy, robustness and real-time performance. In addition, compared with traditional vehicle speed detection methods such as inductive loop coil and radar speed measurement, the video-based vehicle speed detection method provided by the invention, relying on computer vision and video image processing technology, is more cost-effective, more flexible and more comprehensive, thereby ensuring the accuracy and reliability of slow traffic vehicle speed detection.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative.
It will be appreciated by persons skilled in the art that the scope of the invention referred to in the present invention is not limited to the specific combinations of the technical features described above, but also covers other technical features formed by any combination of the technical features described above or their equivalents without departing from the inventive concept. Such as the above-mentioned features and the technical features disclosed in the present invention (but not limited to) having similar functions are replaced with each other.
It should be understood that, the sequence numbers of the steps in the summary and the embodiments of the present invention do not necessarily mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.

Claims (9)

1. A method for detecting vehicle speed for a slow-moving traffic system, comprising:
Acquiring a video to be detected;
Analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a video image sequence of a non-motor vehicle lane, and a label identifying the travelling non-motor vehicle in the video image sequence; and the vehicle detection model is a model obtained based on the weight file with the highest detection precision over the training rounds;
Tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
Analyzing the motion trail to obtain the actual running speed of the target vehicle;
the analyzing the motion trail to obtain the actual running speed of the target vehicle specifically includes:
Determining a mapping relation from the pixel point distance to the actual distance based on the actual distance represented by each pixel point, calculating the actual movement distance of the target vehicle based on the mapping relation, and calculating the actual distance represented by each pixel point corresponding to the target vehicle based on the average width corresponding to the type of the target vehicle and the actual width of the target vehicle;
After the actual running speed of the target vehicle is obtained, the method further comprises the following steps:
Judging whether the actual running route of the target vehicle is a straight line or not, and acquiring the movement direction of the target vehicle;
Determining an offset angle of the actual driving route based on the judgment result;
calculating a corrected actual movement distance based on the offset angle;
recalculating the corrected correction speed based on the corrected actual movement distance;
And verifying the correction speed based on the standard reference speed of the target vehicle, wherein the correction speed after verification is passed is taken as the actual running speed of the target vehicle.
2. The vehicle speed detection method according to claim 1, wherein the vehicle detection model uses a process of training by machine learning using a plurality of sets of training data, comprising:
Acquiring an initial detection model, wherein the initial detection model comprises a Yolo model;
Collecting multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments;
Preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set;
labeling pictures in the video data set according to preset classification categories, and generating a corresponding label for each picture, wherein the labels comprise category information and position information; the preset classification categories comprise two main categories: a person riding a bicycle and a person riding an electric vehicle;
Training the initial detection model by using the marked video data set until a vehicle detection model meeting preset conditions is obtained.
3. The vehicle speed detection method according to claim 2, wherein training the initial detection model using the noted video data set until a vehicle detection model satisfying a preset condition is obtained, comprises:
Inputting the marked video data set into the initial detection model, training the initial detection model according to preset parameters until a weight file with highest detection precision in training rounds is obtained, and deriving a trained vehicle detection model;
wherein the preset parameters comprise the batch size of training and training rounds.
4. The vehicle speed detection method according to claim 1, characterized in that the analyzing the video to be detected using a vehicle detection model includes:
determining a non-motor vehicle lane region in the video to be detected;
the analyzing the video to be detected by using the vehicle detection model to identify the target vehicle in the video to be detected comprises the following steps:
and detecting the type of the running vehicle in the non-motor lane area by using a vehicle detection model, and identifying the target vehicle.
5. The vehicle speed detection method according to claim 1, wherein analyzing the video to be detected using a vehicle detection model, identifying a target vehicle in the video to be detected, comprises:
Inputting the video to be detected into the vehicle detection model;
The vehicle detection model identifies a target vehicle in the video to be detected and marks the target vehicle through a boundary box; wherein the target vehicle comprises a traveling non-motor vehicle;
Acquiring the characteristic information of the target vehicle and storing the characteristic information in a detection list, wherein the detection list comprises the mapping relation between the characteristic information of the target vehicle and the tracking ID of the target vehicle; the characteristic information comprises pixel point position, width and height information, a confidence threshold value and a category to which the pixel point belongs.
6. The vehicle speed detection method according to claim 5, wherein tracking the target vehicle in the video to be detected using a target tracking model to obtain a motion trajectory of the target vehicle, comprising:
Based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list;
And sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
7. The vehicle speed detection method according to claim 1, wherein the analyzing the motion trajectory to obtain an actual running speed of the target vehicle includes:
Calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail;
Calculating the actual movement distance of the target vehicle based on the mapping relation from the image pixel point distance to the actual distance, wherein the mapping relation is as follows:
S = D * pixel_distance;
wherein S represents an actual motion distance, D represents a pixel distance, and pixel_distance represents an actual distance represented by each image pixel point;
Acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points;
and obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
8. The vehicle speed detection method according to claim 7, wherein the actual distance pixel_distance represented by each image pixel point satisfies one of the following formulas:
pixel_distance1 = a / bicycle_width; or
pixel_distance2 = b / electric_width;
wherein pixel_distance1 represents the actual distance represented by each pixel point when the target vehicle is a person riding a bicycle, a is a constant representing the actual average width of a common bicycle, and bicycle_width represents the width of the bicycle in the image, in pixels; pixel_distance2 represents the actual distance represented by each pixel point when the target vehicle is a person riding an electric vehicle, b is a constant representing the actual average width of a common electric vehicle, and electric_width represents the width of the electric vehicle in the image, in pixels.
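An illustrative calibration sketch follows: the metres-per-pixel scale is obtained by dividing a known average vehicle width by the width of that vehicle's bounding box in pixels. The numeric constants standing in for a and b are hypothetical, not values taken from the patent.

```python
# Illustrative sketch only: per-target metres-per-pixel scale.
AVERAGE_WIDTH_M = {
    "bicycle": 0.60,           # assumed average width of a common bicycle (stand-in for a)
    "electric_vehicle": 0.70,  # assumed average width of a common electric vehicle (stand-in for b)
}

def pixel_distance_for(category: str, box_width_px: float) -> float:
    """Actual distance (metres) represented by one image pixel for this target."""
    return AVERAGE_WIDTH_M[category] / box_width_px

# A bicycle whose bounding box is 40 px wide -> each pixel spans about 0.015 m.
print(pixel_distance_for("bicycle", 40.0))
```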
9. A vehicle speed detection apparatus for a slow traffic system, comprising:
a data acquisition module, configured to acquire a video to be detected;
a target detection module, configured to analyze the video to be detected using a vehicle detection model and identify a target vehicle in the video to be detected, wherein the vehicle detection model is obtained through machine learning training using a plurality of sets of training data, each set of training data comprising a video image sequence of a non-motor vehicle lane and a label identifying a traveling non-motor vehicle in the video image sequence, and the vehicle detection model is obtained based on the weight file with the highest detection precision across the training rounds;
a target tracking module, configured to track the target vehicle in the video to be detected using a target tracking model to obtain a motion trajectory of the target vehicle, wherein the motion trajectory is generated based on the movement of the positions of the image pixel points corresponding to the same target vehicle; and
a data processing module, configured to analyze the motion trajectory to obtain an actual running speed of the target vehicle;
wherein analyzing the motion trajectory to obtain the actual running speed of the target vehicle specifically comprises:
determining a mapping relation from pixel distance to actual distance based on the actual distance represented by each pixel point, and calculating the actual movement distance of the target vehicle based on the mapping relation, wherein the actual distance represented by each pixel point corresponding to the target vehicle is calculated based on the average width corresponding to the type of the target vehicle and the width of the target vehicle in the image;
and wherein, after the actual running speed of the target vehicle is obtained, the apparatus is further configured to:
judge whether the actual driving route of the target vehicle is a straight line, and acquire the movement direction of the target vehicle;
determine an offset angle of the actual driving route based on the judgement result;
calculate a corrected actual movement distance based on the offset angle;
recalculate a corrected speed based on the corrected actual movement distance;
and verify the corrected speed against a standard reference speed of the target vehicle, wherein the corrected speed that passes verification is taken as the actual running speed of the target vehicle.
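For illustration only, the sketch below shows one possible form of this correction and verification step. The cosine projection by the offset angle and the 15 km/h reference value are hypothetical choices, not taken from the patent.

```python
# Illustrative sketch only: correct the movement distance for route deviation,
# recompute the speed, and verify it against a standard reference speed.
import math

def corrected_speed(distance_m: float, time_s: float, offset_angle_rad: float,
                    reference_speed_mps: float = 15 / 3.6):
    """Return the verified corrected speed in m/s, or None if verification fails."""
    corrected_distance = distance_m * math.cos(offset_angle_rad)  # corrected distance
    speed = corrected_distance / time_s                           # corrected speed
    if speed <= reference_speed_mps:   # verification against the reference speed
        return speed                    # accepted as the actual running speed
    return None                         # verification failed; result is discarded

# 4.2 m covered in 1 s along a route offset by about 10 degrees -> ~4.14 m/s.
print(corrected_speed(4.2, 1.0, math.radians(10)))
```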
CN202410138524.0A 2024-02-01 2024-02-01 Vehicle speed detection method and device for slow traffic system Active CN117671972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410138524.0A CN117671972B (en) 2024-02-01 2024-02-01 Vehicle speed detection method and device for slow traffic system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410138524.0A CN117671972B (en) 2024-02-01 2024-02-01 Vehicle speed detection method and device for slow traffic system

Publications (2)

Publication Number Publication Date
CN117671972A CN117671972A (en) 2024-03-08
CN117671972B true CN117671972B (en) 2024-05-14

Family

ID=90086611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410138524.0A Active CN117671972B (en) 2024-02-01 2024-02-01 Vehicle speed detection method and device for slow traffic system

Country Status (1)

Country Link
CN (1) CN117671972B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101433A (en) * 2020-09-04 2020-12-18 东南大学 Automatic lane-dividing vehicle counting method based on YOLO V4 and DeepsORT
CN112507935A (en) * 2020-12-17 2021-03-16 上海依图网络科技有限公司 Image detection method and device
CN115205794A (en) * 2022-03-15 2022-10-18 云粒智慧科技有限公司 Method, device, equipment and medium for identifying violation of regulations of non-motor vehicle
CN115331185A (en) * 2022-09-14 2022-11-11 摩尔线程智能科技(北京)有限责任公司 Image detection method and device, electronic equipment and storage medium
CN116343493A (en) * 2023-03-27 2023-06-27 北京博宏科元信息科技有限公司 Method and device for identifying violation of non-motor vehicle, electronic equipment and storage medium
CN116721552A (en) * 2023-06-12 2023-09-08 北京博宏科元信息科技有限公司 Non-motor vehicle overspeed identification recording method, device, equipment and storage medium
CN116824859A (en) * 2023-07-21 2023-09-29 佛山市新基建科技有限公司 Intelligent traffic big data analysis system based on Internet of things
CN116935281A (en) * 2023-07-28 2023-10-24 南京理工大学 Method and equipment for monitoring abnormal behavior of motor vehicle lane on line based on radar and video

Also Published As

Publication number Publication date
CN117671972A (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN110472496B (en) Traffic video intelligent analysis method based on target detection and tracking
CN108320510B (en) Traffic information statistical method and system based on aerial video shot by unmanned aerial vehicle
CN104200657B (en) A kind of traffic flow parameter acquisition method based on video and sensor
CN103617412B (en) Real-time lane line detection method
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
CN111753797B (en) Vehicle speed measuring method based on video analysis
CN103324913A (en) Pedestrian event detection method based on shape features and trajectory analysis
CN107315095B (en) More vehicle automatic speed-measuring methods with illumination adaptability based on video processing
CN109064495A (en) A kind of bridge floor vehicle space time information acquisition methods based on Faster R-CNN and video technique
CN107301776A (en) Track road conditions processing and dissemination method based on video detection technology
CN106781520A (en) A kind of traffic offence detection method and system based on vehicle tracking
CN104282020A (en) Vehicle speed detection method based on target motion track
CN104183127A (en) Traffic surveillance video detection method and device
CN103425764B (en) Vehicle matching method based on videos
CN102252859B (en) Road train straight-line running transverse stability automatic identification system
CN107389084A (en) Planning driving path planing method and storage medium
CN106250816A (en) A kind of Lane detection method and system based on dual camera
CN111967360A (en) Target vehicle attitude detection method based on wheels
CN111047879A (en) Vehicle overspeed detection method
CN114715168A (en) Vehicle yaw early warning method and system under road marking missing environment
CN107506753B (en) Multi-vehicle tracking method for dynamic video monitoring
CN113591643A (en) Underground vehicle station entering and exiting detection system and method based on computer vision
Espino et al. Rail and turnout detection using gradient information and template matching
CN117671972B (en) Vehicle speed detection method and device for slow traffic system
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant