CN117671972A - Vehicle speed detection method and device for slow traffic system - Google Patents
- Publication number
- CN117671972A (application number CN202410138524.0A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- video
- target vehicle
- target
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/052—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a vehicle speed detection method and device for a slow traffic system. The vehicle speed detection method for the slow traffic system comprises the following steps: acquiring a video to be detected; analyzing the video to be detected by using a vehicle detection model and identifying a target vehicle in the video to be detected, wherein the vehicle detection model is obtained through machine learning training using a plurality of sets of training data, and each set of training data comprises: a video image sequence of a non-motor-vehicle lane, and a label identifying the traveling non-motor vehicles in the video image sequence; tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle; and analyzing the motion track to obtain the actual running speed of the target vehicle. The invention can improve the accuracy of vehicle speed detection.
Description
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a vehicle speed detection method and device for a slow traffic system.
Background
As zero-carbon, healthy modes of travel, bicycles and electric bicycles can play a larger role in the urban traffic system and change the travel structure by continuously improving travel quality. Therefore, within a slow traffic system, detecting the running speeds of bicycles and electric bicycles helps to improve the urban slow traffic network and to ensure the safety and smoothness of the urban traffic system.
Conventional vehicle speed detection methods mainly include loop-coil vehicle sensors, interval (average-speed) measurement, radar/microwave measurement, laser detection, and the like. These conventional approaches generally have difficulty recognizing category attributes of a vehicle, such as the vehicle type. At the same time, installation, maintenance, and operation of such equipment usually require expensive investment, and the measurements are susceptible to weather, multipath interference, and other external factors, which reduces speed-measurement accuracy. For example, when speed is measured with an inductive loop coil, the coil must be buried in the road, installation and maintenance costs are relatively high, and the coil is easily affected by weather, making the speed data inaccurate.
Therefore, it is necessary to provide a vehicle speed detection method that is low in cost, stable in speed measurement, and high in accuracy.
Disclosure of Invention
In order to solve at least one technical problem in the background art, the invention provides a vehicle speed detection method and device for a slow traffic system, which are used for reducing detection cost and improving speed measurement stability and accuracy in the slow traffic system.
According to a first aspect of the present invention, there is provided a method for detecting vehicle speed in a slow traffic system, comprising:
acquiring a video to be detected;
analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a sequence of video images of the lane of the non-motor vehicle, and a tag identifying the non-motor vehicle in the sequence of video images;
tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
And analyzing the motion trail to obtain the actual running speed of the target vehicle.
Further, the vehicle detection model uses multiple sets of training data to train through machine learning, including:
acquiring an initial detection model, wherein the initial detection model comprises a Yolo model;
collecting multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments;
preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set;
labeling pictures in the video data set according to preset classification categories, and generating corresponding labels for each picture, wherein the labels comprise category information and position information; the preset classification comprises two main categories of riding a bicycle and riding an electric bicycle;
training the initial detection model by using the marked video data set until a vehicle detection model meeting preset conditions is obtained.
Further, training the initial detection model by using the noted video data set until a vehicle detection model meeting a preset condition is obtained, including:
inputting the marked video data set into the initial detection model, training the initial detection model according to preset parameters until a weight file with highest detection precision in training rounds is obtained, and deriving a trained vehicle detection model;
Wherein the preset parameters comprise the batch size of training and training rounds.
Further, before the analyzing the video to be detected by using the vehicle detection model, the method includes:
determining a non-motor vehicle lane region in the video to be detected;
the analyzing the video to be detected by using the vehicle detection model to identify the target vehicle in the video to be detected comprises the following steps:
and detecting the type of the running vehicle in the non-motor lane area by using a vehicle detection model, and identifying the target vehicle.
Further, the analyzing the video to be detected by using the vehicle detection model, to identify the target vehicle in the video to be detected, includes:
inputting the video to be detected into the vehicle detection model;
the vehicle detection model identifies a target vehicle in the video to be detected and marks the target vehicle through a boundary box; wherein the target vehicle comprises a traveling non-motor vehicle;
acquiring the characteristic information of the target vehicle and storing the characteristic information in a detection list, wherein the detection list comprises the mapping relation between the characteristic information of the target vehicle and the tracking ID of the target vehicle; the characteristic information comprises pixel point position, width and height information, a confidence threshold value and a category to which the pixel point belongs.
Further, the tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion trail of the target vehicle includes:
based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list;
and sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
Further, the analyzing the motion trail to obtain an actual running speed of the target vehicle includes:
calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail;
calculating the actual movement distance of the target vehicle based on the mapping relation from the image pixel point distance to the actual distance, wherein the mapping relation is as follows:
S = D * pixel_distance;
wherein S represents an actual motion distance, D represents a pixel distance, and pixel_distance represents an actual distance represented by each image pixel point;
acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points;
and obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
Further, the actual distance pixel_distance represented by each image pixel point satisfies the following formula:
pixel_distance1 = a / bicycle_width; or,
pixel_distance2 = b / electric_width;
wherein pixel_distance1 represents the actual distance represented by each pixel point when the target vehicle is a person riding a bicycle, a is a constant representing the actual average width of an ordinary bicycle, and bicycle_width represents the pixel width of the bicycle in the video image; pixel_distance2 represents the actual distance represented by each pixel point when the target vehicle is a person riding an electric bicycle, b is a constant representing the actual average width of an ordinary electric bicycle, and electric_width represents the pixel width of the electric bicycle in the video image.
Further, the vehicle speed detection method includes:
the actual running speed of the target vehicle is corrected and verified based on the standard reference speed of the target vehicle.
According to a second aspect of the present invention, there is also provided a vehicle speed detection apparatus for a slow traffic system, comprising:
the data acquisition module is used for acquiring a video to be detected;
the target detection module is used for analyzing the video to be detected by utilizing a vehicle detection model and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a video image sequence of a lane of a non-motor vehicle, and a tag identifying a driving non-motor vehicle in the video image sequence;
The target tracking module is used for tracking the target vehicle in the video to be detected by utilizing a target tracking model, so as to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
and the data processing module is used for analyzing the motion trail to obtain the actual running speed of the target vehicle.
By the technical scheme of the invention, the following technical effects can be obtained:
the invention provides a vehicle speed detection method and a vehicle speed detection device, which mainly detect and track a bicycle and an electric vehicle running in a video frame by adopting a computer vision and image processing technology so as to identify and track the specific position and track of the vehicle, and calculate the actual running speed of the target vehicle by analyzing the change of the position of the target vehicle of the bicycle and/or the electric vehicle in the detection video along with time.
Compared with the traditional speed measuring method, the speed measuring method provided by the invention is based on video speed measurement, and has the advantages of large monitoring range and rich detection information; the speed measurement only depends on an external hardware device camera, the device is simple and low in cost, the operation is further simple, and the complexity of detection is reduced; in addition, by the detection method, accurate, real-time, stable and reliable speed detection data of the slow-moving traffic vehicles can be provided, traffic management departments can be further helped to know the speed condition of the non-motor vehicles on the slow-moving traffic road in time, and the reliability and convenience of the slow-moving traffic are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting vehicle speed for a slow traffic system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a training method of the vehicle detection model shown in FIG. 1;
FIG. 3 is a flowchart of a method for training the initial detection model using a labeled video data set until a vehicle detection model satisfying a preset condition is obtained, according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for tracking the target vehicle in the video to be detected and obtaining a motion trail of the target vehicle through the target tracking model according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for analyzing the motion trail and obtaining the actual running speed of the target vehicle according to the embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a vehicle speed detecting device for a slow traffic system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "first," "second," "third," and "fourth," etc. in the description and claims of the present application are used for distinguishing between different objects and not for describing a particular sequential order. The terms "comprising" and "having" and any variations thereof, in embodiments of the present application, are intended to cover non-exclusive inclusions.
In the related art, conventional vehicle speed detection methods require the purchase of expensive equipment, have high costs, and are difficult to install and maintain. At the same time, conventional speed measurement methods have difficulty identifying specific attributes of a vehicle, such as the license plate number and vehicle type. The measurement range of radar and laser detection may be limited, and methods such as radar and infrared speed measurement are easily disturbed by external factors such as multipath interference, surrounding radio waves, and weather, leading to inaccurate measurements. In addition, early moving-object detection algorithms mainly include the inter-frame difference method, the optical flow method, and the background difference method, but their performance is limited in complex, dynamic environments and they are easily disturbed by many factors, so their effect in practical applications is limited. The inter-frame difference method is very sensitive to illumination changes, camera vibration, and similar factors, and may produce false or missed detections, while the optical flow method depends on the optical-flow relationship between the camera and the object and performs poorly in scenes with large camera motion or fast-moving objects.
However, with the development of the computer vision field, modern target detection algorithms have adopted deep learning techniques, such as convolutional neural networks and the Yolo family of neural networks, and have made significant progress in accuracy, robustness, and adaptability. These deep learning methods can better cope with illumination changes, complex backgrounds, occlusion, and multi-target scenes. The invention therefore provides a vehicle speed detection method based on road surveillance video in a slow traffic system: a target detection algorithm (such as a Yolo algorithm) identifies and detects slow-traffic vehicles, namely people riding bicycles and people riding electric bicycles; a target tracking algorithm (such as the DeepSort algorithm) then tracks the identified traveling non-motor vehicles and establishes the association of a moving target across adjacent frames of the video; the vehicle speed is calculated by analyzing the change of the target vehicle's position in the video over time together with the conversion from pixels to actual distance; and finally the measured speed data is corrected and verified to ensure accuracy.
Note that slow traffic generally refers to travel modes at low speed. In urban traffic systems, both pedestrian traffic at 4-7 km/h and non-motor-vehicle traffic (including bicycles and electric bicycles) at less than 20 km/h can be referred to as "slow traffic".
Fig. 1 is a flow chart of a method for detecting vehicle speed for a slow traffic system according to an embodiment of the present invention. As shown in fig. 1, the present invention provides a vehicle speed detection method for a slow traffic system, the method comprising:
s10, acquiring a video to be detected;
In this step, the video to be detected is obtained according to the actual needs of the user and contains the road traffic conditions of the target road section. Optionally, the video to be detected may be collected by a road monitoring device. Detecting vehicle speed from video data offers a wider coverage area and a larger amount of detection information, which improves the accuracy of vehicle speed detection. In addition, when speed detection is based on video data, the only external hardware required is a camera, so the equipment is simple and not easily disturbed by external factors, further improving the accuracy of speed detection.
S20, analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training using a plurality of sets of training data, and each set of training data comprises: a video image sequence of a non-motor-vehicle lane, and a label identifying the traveling non-motor vehicles in the video image sequence;
The purpose of step S20 is to identify the target vehicles present in the video to be detected, which include traveling non-motor vehicles, such as the two categories of people riding bicycles and people riding electric bicycles.
As an alternative embodiment, the vehicle detection model is a Yolo model trained using multiple sets of training data. Specifically, as shown in fig. 2, the vehicle speed detection method for the slow traffic system includes:
s21, acquiring an initial detection model, wherein the initial detection model comprises a Yolo model;
In this step, the initial detection model may be any one of the Yolov1, Yolov2, Yolov3, Yolov4, Yolov5, Yolov6, Yolov7, and Yolov8 models. In this embodiment, the initial detection model is the Yolov8 model, which gives higher detection accuracy.
S22, acquiring multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments;
Optionally, a large amount of non-motor-vehicle lane video data can be acquired quickly using video equipment installed along the road, or by manually filming the non-motor-vehicle lanes of the target road section. The target road section is selected according to actual requirements. The different time periods and environments include the traffic conditions of bicycles and electric bicycles on the road during the morning and evening peak periods, the off-peak period, and the night period.
Taking Beijing as a specific example, the high-mounted video and low-post video equipment of the Beijing parking management service center can be combined with manual filming of the data-collection road sections at different times, angles, and scenes, so that a large number of non-motor-vehicle lane videos of slow traffic roads can be acquired quickly. The purpose of step S22 is to collect video data of non-motor-vehicle lanes in a variety of scenes so as to provide rich training samples and thereby improve the accuracy of the vehicle detection model.
S23, preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set;
as an alternative embodiment, the preprocessing the non-motor vehicle lane video data to obtain a video image sequence and making a video data set includes:
Using the ffmpeg tool to capture a picture every 8-12 frames from the non-motor-vehicle lane video data to obtain a video image sequence, and creating a video dataset from the pictures in the video image sequence. In this embodiment, a picture is captured from the non-motor-vehicle lane video data every 10 frames, which reduces the number of annotations in the video dataset and the similarity between annotated pictures, and prevents excessive redundancy when the initial detection model is trained later.
Here the ffmpeg tool is used to convert the video file into a sequence of images. The video image sequence is a collection of consecutive images; in this embodiment it comprises 15,776 pictures, which together form the video dataset.
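As an illustrative sketch only (the embodiment above uses the ffmpeg tool; the file paths and the 10-frame interval here are assumptions), the same every-N-frame sampling can be written with OpenCV in Python:

```python
import cv2
import os

def extract_frames(video_path, out_dir, interval=10):
    """Save one picture every `interval` frames to build the video image sequence."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_no = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % interval == 0:
            cv2.imwrite(os.path.join(out_dir, f"img_{saved:05d}.jpg"), frame)
            saved += 1
        frame_no += 1
    cap.release()
    return saved

# Example (hypothetical paths): extract_frames("lane_video.mp4", "dataset/images", interval=10)
```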
S24, labeling the pictures in the video data set according to preset classification, and generating corresponding labels for each picture, wherein the labels comprise classification information and position information;
based on the video data set created in step S23, the step S24 performs high-quality labeling on each picture in the video data set, and labels category information and position information respectively. As an optional embodiment, the labeling the pictures in the video dataset according to the preset classification category, and generating a corresponding label for each picture includes:
labeling each picture in the video data set according to a preset classification by using a Labelimg tool, and generating classification information and position information of each picture; the preset classification categories comprise types of non-motor vehicles in running, such as two classification categories of riding bicycles and riding electric vehicles.
It can be understood that the target vehicles appearing in each picture are labeled according to the two categories, person riding a bicycle and person riding an electric bicycle, until all pictures in the video dataset are labeled and a dataset in the specified format is obtained; "specified format" means that the pictures in the dataset are labeled according to the preset classification categories. Optionally, in the labeling process, in addition to the vehicle category, the position information of the target vehicle is labeled at the same time: for example, a person riding a bicycle in a picture is labeled with the category "person riding a bicycle" together with the position of the bicycle in the picture, such as its pixel position. A picture may carry one or more labels.
And S25, training the initial detection model by using the marked video data set until a vehicle detection model meeting the preset condition is obtained.
Specifically, the step S25 further includes:
Inputting the labeled video dataset into the initial detection model, training the initial detection model according to preset parameters until the weight file with the highest detection precision among the training rounds is obtained, and exporting the trained vehicle detection model; wherein the preset parameters comprise the training batch size and the number of training rounds. In this embodiment, the training batch size is 32 and the number of training rounds is 200.
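The patent does not name a specific training implementation; as a hedged sketch under the assumption that the Ultralytics YOLOv8 package is used, training with the stated batch size of 32 and 200 rounds could look as follows (the dataset YAML and weight file names are hypothetical):

```python
from ultralytics import YOLO

# Start from a pretrained Yolov8 checkpoint as the initial detection model (assumed weights file).
model = YOLO("yolov8n.pt")

# Train with the preset parameters of this embodiment: batch size 32, 200 training rounds.
# "nonmotor.yaml" is a hypothetical dataset description listing the two categories
# (person riding a bicycle, person riding an electric bicycle) and the image/label paths.
model.train(data="nonmotor.yaml", epochs=200, batch=32, imgsz=640)

# Ultralytics saves the best-performing weights of the training rounds
# (e.g. runs/detect/train/weights/best.pt), which serve as the trained vehicle detection model.
```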
As an alternative embodiment, fig. 3 shows a flowchart of a method for training the initial detection model by using the labeled video data set until a vehicle detection model satisfying a preset condition is obtained, and as shown in fig. 3, a method for training the initial detection model by using the labeled video data set includes:
S251, dividing the pictures in the labeled video dataset into a training set, a verification set, and a test set. In this embodiment, the pictures in the video dataset are divided in a ratio of 8:1:1, i.e., the training set contains 8/10 of the pictures, the verification set 1/10, and the test set 1/10; during the division, pictures are randomly assigned to the training, verification, and test sets, after which the dataset is ready (a minimal split sketch is given after step S254 below).
S252, inputting the training set into the initial detection model for training to obtain a first training model;
s253, inputting the verification set into the first training model for adjustment to obtain a second training model; in the step, the verification set picture can be repeatedly used for adjusting the first training model for multiple times, so that a more reliable and stable training model is obtained.
S254, inputting the test set into the second training model for testing, and obtaining the final vehicle detection model.
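A minimal sketch of the random 8:1:1 division described in step S251, placed here after the full step sequence; it assumes the labeled pictures are available as a list of file names:

```python
import random

def split_dataset(image_files, seed=0):
    """Randomly assign labeled pictures to training/verification/test sets in an 8:1:1 ratio."""
    files = list(image_files)
    random.Random(seed).shuffle(files)
    n = len(files)
    train = files[: int(0.8 * n)]
    verify = files[int(0.8 * n): int(0.9 * n)]
    test = files[int(0.9 * n):]
    return train, verify, test

# Example: train_set, verify_set, test_set = split_dataset(all_labeled_pictures)
```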
The vehicle detection model trained in this way for the slow traffic system can accurately and conveniently detect non-motor vehicles traveling in the non-motor-vehicle lane, which improves the accuracy and practicality of the vehicle detection model. It should be noted that the Yolo models listed above are existing target detection algorithms, and the specific detection algorithms are not described here.
As an optional embodiment, before the analyzing the video to be detected using the vehicle detection model, the method includes: determining a non-motor vehicle lane region in the video to be detected; and analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected, including: and detecting the type of the running vehicle in the non-motor lane area by using a vehicle detection model, and identifying the target vehicle.
It can be understood that this alternative embodiment does not need to run recognition and detection over the entire video to be detected: a non-motor-vehicle lane region can be selected in the video before recognition, so that detection focuses on vehicles traveling in the non-motor-vehicle lane while the sidewalks and motor-vehicle lanes on both sides of the road are ignored. This reduces the number of boundary boxes for target vehicles in the subsequent detection process and speeds up the detection and tracking of target vehicles.
In step S20, specifically, the video to be detected is input into the vehicle detection model; the vehicle detection model identifies the target vehicle in the video to be detected and marks it with a boundary box, wherein the target vehicle comprises a traveling non-motor vehicle; at the same time, the characteristic information of the target vehicle is acquired, and the mapping relation between the characteristic information of the target vehicle and the tracking ID of the target vehicle is stored in a detection list. Optionally, the detection list uses video frame numbers as indexes and, under each video frame number, stores the mapping relations between the tracking IDs and the feature information of the target vehicles.
After the characteristic information of each target vehicle is acquired, a tracking ID is set for each target vehicle, the same target vehicle has the same tracking ID, and the tracking ID of each target vehicle is unique. The tracking ID is used for identifying own information so as to establish connection between adjacent frames in the video to be detected. The tracking ID may be provided in a unique symbol such as a number or a letter, and is not limited herein.
The characteristic information comprises pixel point position, width and height information, a confidence threshold value and a category to which the pixel point belongs. Optionally, the pixel point position is any image pixel point on the bounding box for identifying the target vehicle, and may be an upper left corner position, an upper right corner position, a lower left corner position, a lower right corner position, or a bounding box center position. In this embodiment, the pixel point is the center image pixel point of the bounding box, so that the measurement data can be more accurate. The width and height information is the width and height of the bounding box; the confidence threshold is a preset value, and the accuracy of the detection result can be limited by adjusting the confidence threshold; the belonging category is the category of the target vehicle currently identified, such as belonging to a person riding a bicycle or a person riding an electric vehicle.
S30, tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
After the target vehicle is identified in step S20, it is further tracked in combination with the target tracking model, since the target vehicle is moving in the video. Optionally, the target tracking model adopts the existing DeepSort model; in this embodiment, the detection list may be passed to the DeepSort model to track the target vehicle.
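The patent names the DeepSort algorithm but no particular library; the sketch below assumes the third-party deep-sort-realtime package and an Ultralytics YOLOv8 detector, and the frame-indexed detection list mirrors the structure described above (tracking ID mapped to centre pixel, width/height, and category). Treat the exact APIs, attribute names, and file paths as assumptions.

```python
import cv2
from ultralytics import YOLO
from deep_sort_realtime.deepsort_tracker import DeepSort

detector = YOLO("best.pt")          # trained vehicle detection model (hypothetical weights path)
tracker = DeepSort(max_age=30)      # target tracking model

detection_list = {}                 # video frame number -> {tracking ID: feature information}

cap = cv2.VideoCapture("lane_video.mp4")   # video to be detected (hypothetical path)
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people riding bicycles / electric bicycles in this frame.
    result = detector(frame, verbose=False)[0]
    detections = []
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append(([x1, y1, x2 - x1, y2 - y1], float(box.conf[0]), int(box.cls[0])))

    # Pass this frame's detections to DeepSort to obtain per-target tracking IDs.
    tracks = tracker.update_tracks(detections, frame=frame)
    detection_list[frame_no] = {}
    for t in tracks:
        if not t.is_confirmed():
            continue
        left, top, right, bottom = t.to_ltrb()
        detection_list[frame_no][t.track_id] = {
            "center": ((left + right) / 2.0, (top + bottom) / 2.0),  # boundary-box centre pixel
            "size": (right - left, bottom - top),                    # width and height
            "category": t.det_class,                                 # class index (assumed attribute)
        }
    frame_no += 1
cap.release()
```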
As an alternative embodiment, fig. 4 shows a flowchart of a method for the target tracking model to track the target vehicle in the video to be detected and obtain a motion track of the target vehicle, and as shown in fig. 4, the method for the target tracking model to track the target vehicle in the video to be detected and obtain the motion track of the target vehicle includes:
s31, based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list;
Specifically, the image pixel points with the same tracking ID are obtained every preset number of video frames, and the obtained image pixel points are arranged in order. For example, in the detection list, the target vehicle with tracking ID 3 is looked up every 10 frames and the center image pixel point of its boundary box in the video to be detected is obtained, so that a plurality of center image pixel points of the same target vehicle can be obtained in sequence and numbered 1, 2, 3, and so on.
S32, sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
In this step, the plurality of image pixel points of the same target vehicle obtained in step S31 are connected in the temporal order of the video to obtain the motion track of the target vehicle. The motion track represents the path of the target vehicle during driving; note that it represents motion in pixel coordinates, not motion on the actual road. The motion track can be understood as the change of the target vehicle's pixel point position as the video frame number changes.
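A sketch of steps S31-S32 under the detection-list structure assumed in the previous sketch: for each tracking ID, collect the boundary-box centre pixel every 10 frames and connect the points in frame order.

```python
def build_trajectories(detection_list, frame_step=10):
    """Return {tracking ID: [(frame number, (cx, cy)), ...]} in frame order.

    detection_list is assumed to map frame number -> {tracking ID: feature info}
    as in the previous sketch; each trajectory is a pixel-coordinate path, not a road path.
    """
    trajectories = {}
    for frame_no in sorted(detection_list):
        if frame_no % frame_step != 0:
            continue
        for track_id, feat in detection_list[frame_no].items():
            trajectories.setdefault(track_id, []).append((frame_no, feat["center"]))
    return trajectories
```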
S40, analyzing the motion trail to obtain the actual running speed of the target vehicle.
Specifically, fig. 5 shows a flowchart of a method for analyzing the motion trail to obtain the actual running speed of the target vehicle, and as shown in fig. 5, the method for analyzing the motion trail to obtain the actual running speed of the target vehicle includes:
s41, calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail;
Optionally, this step calculates the pixel distance of the target vehicle using a first-and-last-frame calculation method. Assuming that the first and last image pixel position points in the motion track are (x1, y1) and (x2, y2) respectively, the pixel distance D of the target vehicle's motion is expressed by the following formula:
D = sqrt((x2 - x1)^2 + (y2 - y1)^2) (1);
s42, calculating the actual movement distance of the target vehicle based on the mapping relation from the pixel point distance to the actual distance, wherein the mapping relation is as follows:
S = D * pixel_distance; (2)
wherein S represents the actual movement distance, D represents the pixel distance, and pixel_distance represents the actual distance represented by each pixel point.
Specifically, as the vehicle approaches or moves away from the camera during detection, its apparent size in the video changes. To obtain the actual displacement of the vehicle over a period of time, the mapping relationship between the pixel coordinates of the image and the actual coordinates of the corresponding points in space can be established through the ratio of the actual width of the vehicle to the pixel width of the vehicle in the video image. Optionally, the actual distance pixel_distance represented by each pixel point satisfies one of the following formulas:
pixel_distance1 = a / bicycle_width (3); or,
pixel_distance2 = b / electric_width (4);
wherein pixel_distance1 represents the actual distance represented by each pixel point when the target vehicle is a person riding a bicycle, a is a constant representing the actual average width of an ordinary bicycle, and bicycle_width represents the pixel width of the bicycle in the video image; pixel_distance2 represents the actual distance represented by each pixel point when the target vehicle is a person riding an electric bicycle, b is a constant representing the actual average width of an ordinary electric bicycle, and electric_width represents the pixel width of the electric bicycle in the video image.
The actual average width of an ordinary bicycle body is 0.3 m and that of an ordinary electric bicycle body is 0.5 m, so a = 0.3 and b = 0.5. For example, the actual distance represented by each pixel point can be taken as pixel_distance1 = 0.3 / bicycle_width when the target is a person riding a bicycle, and as pixel_distance2 = 0.5 / electric_width when the target is a person riding an electric bicycle.
S43, acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points;
In this step, the time interval between specified video frames can be obtained from the video image frame rate c. In this embodiment, the video image frame rate c is 30 fps, i.e., 30 frames per second. The motion time T may be calculated from the start frame StartFrame and the end frame EndFrame of the motion track, and T satisfies the following formula:
T = (EndFrame - StartFrame) / c (5);
Here, StartFrame and EndFrame may be any video frames in the motion track chosen as the start and the end respectively, provided that the end frame is later than the start frame; the video image frame rate c is a constant, typically c = 30.
And S44, obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
In this step, an actual running speed is calculated from an actual movement distance S and movement time T of the running bicycle or electric vehicle for a period of time, the actual running speed satisfying the following formula:
V = S / T (6);
the actual travel speed of the target vehicle can be obtained by substituting the actual travel distance S obtained in step S42 and the travel time obtained in step S43 into the above formula (6).
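Putting formulas (1)-(6) together, the following is a sketch of steps S41-S44 for one trajectory. The category names, the constants a = 0.3 m and b = 0.5 m, and the frame rate c = 30 come from the embodiment above; passing in the boundary-box pixel width as bicycle_width/electric_width is an assumption.

```python
import math

def estimate_speed(trajectory, box_width_px, category, fps=30.0):
    """Estimate the actual running speed (m/s) of one target vehicle.

    trajectory   : [(frame number, (cx, cy)), ...] in frame order (pixel coordinates)
    box_width_px : pixel width of the vehicle's boundary box (bicycle_width / electric_width)
    category     : "bicycle" or "electric" (person riding a bicycle / electric bicycle)
    """
    start_frame, (x1, y1) = trajectory[0]
    end_frame, (x2, y2) = trajectory[-1]

    # (1) pixel distance D between the first and last trajectory points
    D = math.hypot(x2 - x1, y2 - y1)

    # (3)/(4) actual distance represented by one pixel: a = 0.3 m, b = 0.5 m
    pixel_distance = (0.3 if category == "bicycle" else 0.5) / box_width_px

    # (2) actual movement distance, (5) movement time, (6) actual running speed
    S = D * pixel_distance
    T = (end_frame - start_frame) / fps
    return S / T if T > 0 else 0.0

# Example: v = estimate_speed(traj, box_width_px=55.0, category="electric")  # m/s; v * 3.6 gives km/h
```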
Further, the vehicle speed detection method further includes: filtering and smoothing the actual running speed. In this embodiment, a Gaussian filtering method is used to filter and smooth the actual running speed, which prevents the calculated vehicle speed from oscillating.
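A hedged sketch of this Gaussian smoothing, assuming SciPy's one-dimensional Gaussian filter is applied to a sequence of per-interval speed estimates (the sample values and sigma are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Per-interval speed estimates for one target vehicle in m/s (illustrative values).
speeds = np.array([4.8, 5.3, 4.1, 6.0, 5.2, 5.5])

# Smooth the sequence so the reported vehicle speed does not oscillate.
smoothed = gaussian_filter1d(speeds, sigma=1.0)
```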
As an alternative embodiment, the vehicle speed detection method further includes:
s50, correcting and verifying the actual running speed of the target vehicle based on the standard reference speed of the target vehicle.
After the actual running speed of the target vehicle is calculated in step S40, the calculated actual running speed is corrected according to the movement direction and the offset angle of the target vehicle, and a corrected speed is obtained. In this embodiment, the movement direction is determined by determining whether the actual travel route of the target vehicle travels along a straight line, and if not, the actual movement distance is corrected according to the route deviation angle, thereby calculating the correction speed. The route deviation angle refers to an angle at which an actual travel route deviates from straight travel.
The corrected speed is then verified and error-analyzed against the standard reference speed of the target vehicle to obtain the final running speed of the vehicle. The standard reference speed may be set to the speed specified for non-motor vehicles, for example a required speed of below 20 km/h. By correcting and verifying the calculated actual running speed, step S50 improves the accuracy and stability of vehicle speed detection.
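As an illustration only, a sketch of step S50; the cosine-based correction for the route deviation angle and the tolerance applied around the 20 km/h reference are interpretations of the text above, not a procedure stated verbatim in the patent.

```python
import math

REFERENCE_SPEED_KMH = 20.0  # standard reference speed for non-motor vehicles

def correct_and_verify(speed_ms, deviation_angle_deg=0.0, tolerance=1.5):
    """Correct the measured speed for route deviation and check it against the reference.

    Assumption: if the actual route deviates from straight-line travel by
    deviation_angle_deg, the straight-line distance underestimates the true path,
    so the speed is divided by cos(angle). The tolerance factor is illustrative.
    """
    corrected_ms = speed_ms / max(math.cos(math.radians(deviation_angle_deg)), 1e-6)
    corrected_kmh = corrected_ms * 3.6
    plausible = corrected_kmh <= REFERENCE_SPEED_KMH * tolerance  # simple error check
    return corrected_kmh, plausible

# Example: speed_kmh, ok = correct_and_verify(4.9, deviation_angle_deg=8.0)
```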
Further, the vehicle speed detection method further includes: outputting a video showing the actual running speed of the target vehicle. This video represents the real-time speed of the target vehicle and can be displayed, for example in a video player on a computer or other electronic device, so that a front-end user can monitor the vehicle speed in real time.
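A minimal sketch of overlaying the real-time speed on the output video with OpenCV; the speed lookup callable and file paths are assumptions:

```python
import cv2

def write_speed_video(in_path, out_path, speed_of_frame):
    """Render each frame with its current speed estimate so a front-end user can watch it."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # speed_of_frame is a hypothetical callable returning km/h for this frame (or None).
        v = speed_of_frame(frame_no)
        if v is not None:
            cv2.putText(frame, f"{v:.1f} km/h", (30, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        writer.write(frame)
        frame_no += 1
    cap.release()
    writer.release()
```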
In the vehicle speed detection method provided by the embodiment of the invention, traveling bicycles and electric bicycles in a slow traffic system are identified and tracked by the Yolov8 neural network and the DeepSort model. Compared with conventional target detection algorithms, the method copes better with illumination changes, occlusion, and complex multi-target scenes, and offers better accuracy, robustness, and real-time performance. In addition, compared with conventional vehicle speed detection methods such as inductive-loop and radar speed measurement, the video-based vehicle speed detection method provided by the invention, through computer vision and video image processing technology, is more cost-effective, more flexible, and more comprehensive, ensuring the accuracy and reliability of speed detection for slow-traffic vehicles.
Based on the vehicle speed detection method for the slow traffic system provided in the first embodiment, the embodiment provides a vehicle speed detection device corresponding to the vehicle speed detection method, and the same content refers to the above method embodiment, which is not repeated. Fig. 6 shows a schematic structural diagram of a vehicle speed detection device for a slow traffic system, as shown in fig. 6, the vehicle speed detection device includes:
The data acquisition module 10 is used for acquiring a video to be detected;
the target detection module 20 is configured to analyze the video to be detected by using a vehicle detection model, and identify a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a video image sequence of a lane of a non-motor vehicle, and a tag identifying a driving non-motor vehicle in the video image sequence;
the target tracking module 30 is configured to track the target vehicle in the video to be detected by using a target tracking model, so as to obtain a motion track of the target vehicle, where the motion track is generated based on movement of the pixel point positions of the corresponding images of the same target vehicle;
and the data processing module 40 is used for analyzing the motion trail to obtain the actual running speed of the target vehicle.
Further, the vehicle speed detection apparatus further includes a model training module 50, and the model training module 50 is configured to: acquiring an initial detection model, wherein the initial detection model comprises a Yolo model; collecting multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments; preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set; labeling pictures in the video data set according to preset classification categories, and generating corresponding labels for each picture, wherein the labels comprise category information and position information; the preset classification comprises two main categories of riding a bicycle and riding an electric bicycle; training the initial detection model by using the marked video data set until a vehicle detection model meeting preset conditions is obtained.
The model training module 50 is further configured to: dividing pictures in the marked video data set to generate a training set, a verification set and a test set; inputting the training set into the initial detection model for training to obtain a first training model; inputting the verification set into the first training model for adjustment to obtain a second training model; and inputting the test set into the second training model for testing to obtain the final vehicle detection model.
The target tracking module 30 is specifically configured to: based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list; and sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
The data processing module 40 is specifically configured to: calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail; calculating the actual movement distance of the target vehicle based on the mapping relation from the pixel point distance to the actual distance; acquiring the motion time of the target vehicle based on the frame numbers and the video image frame rates of the two video frames corresponding to the two different image pixel points; and obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
Further, the vehicle speed detection device further includes a speed smoothing module 60, where the speed smoothing module 60 is configured to filter and smooth the actual running speed to prevent the calculated vehicle speed from oscillating.
Further, the vehicle speed detection device further comprises a correction and verification module 70, wherein the correction and verification module 70 is used for correcting and verifying the actual running speed of the target vehicle based on the standard reference speed of the target vehicle, so that the accuracy and stability of vehicle speed detection are improved.
Further, the vehicle speed detection apparatus further includes a data output module 80 for outputting an actual running speed video of the target vehicle for display to the front-end user.
In the vehicle speed detection device provided by the embodiment of the invention, traveling bicycles and electric bicycles in a slow traffic system are identified and tracked by the Yolov8 neural network and the DeepSort model. Compared with conventional target detection algorithms, the device copes better with illumination changes, occlusion, and complex multi-target scenes, and offers better accuracy, robustness, and real-time performance. In addition, compared with conventional vehicle speed detection methods such as inductive-loop and radar speed measurement, the video-based approach provided by the invention, through computer vision and video image processing technology, is more cost-effective, more flexible, and more comprehensive, ensuring the accuracy and reliability of speed detection for slow-traffic vehicles.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative.
It will be appreciated by persons skilled in the art that the scope of the invention is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present invention.
It should be understood that, the sequence numbers of the steps in the summary and the embodiments of the present invention do not necessarily mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.
Claims (10)
1. A method for detecting vehicle speed for a slow-moving traffic system, comprising:
acquiring a video to be detected;
analyzing the video to be detected by using a vehicle detection model, and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a sequence of video images of the lane of the non-motor vehicle, and a tag identifying the non-motor vehicle in the sequence of video images;
tracking the target vehicle in the video to be detected by using a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the position of the pixel point of the image corresponding to the same target vehicle;
and analyzing the motion trail to obtain the actual running speed of the target vehicle.
2. The vehicle speed detection method according to claim 1, wherein the process of training the vehicle detection model by machine learning using a plurality of sets of training data comprises:
acquiring an initial detection model, wherein the initial detection model comprises a Yolo model;
collecting multiple groups of non-motor vehicle lane video data of a target road section in different time periods and environments;
preprocessing the non-motor vehicle lane video data to obtain a video image sequence, and preparing a video data set;
labeling pictures in the video data set according to preset classification categories, and generating a corresponding label for each picture, wherein the label comprises category information and position information; the preset classification categories comprise two main categories: a person riding a bicycle and a person riding an electric bicycle;
training the initial detection model by using the marked video data set until a vehicle detection model meeting preset conditions is obtained.
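As a hedged illustration only (claim 2 does not prescribe a label file format), a label carrying the category information and normalized position information could be written in the common YOLO text format as follows; the class indices, class names and frame size are assumptions.

```python
# Illustrative sketch: writing one YOLO-format label line per annotated object.
# The class mapping and frame size below are assumptions, not part of the claim.
CLASSES = {"bicycle_rider": 0, "ebike_rider": 1}   # two assumed main categories

def yolo_label_line(cls_name: str, x1: float, y1: float, x2: float, y2: float,
                    img_w: int, img_h: int) -> str:
    """Convert a pixel bounding box into 'class cx cy w h' with normalized coordinates."""
    cx = (x1 + x2) / 2.0 / img_w
    cy = (y1 + y2) / 2.0 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{CLASSES[cls_name]} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a rider annotated at pixels (120, 200)-(220, 420) in a 1920x1080 frame.
print(yolo_label_line("ebike_rider", 120, 200, 220, 420, 1920, 1080))
```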
3. The vehicle speed detection method according to claim 2, wherein training the initial detection model using the marked video data set until a vehicle detection model satisfying a preset condition is obtained comprises:
inputting the marked video data set into the initial detection model, training the initial detection model according to preset parameters until the weight file with the highest detection precision among the training rounds is obtained, and exporting the trained vehicle detection model;
wherein the preset parameters comprise the batch size of training and training rounds.
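A minimal training sketch consistent with claim 3, assuming the ultralytics package; the dataset description file, batch size and epoch count are placeholders, and the best-weights path reflects that library's default output layout rather than anything stated in the claim.

```python
# Illustrative sketch only: training an initial YOLO model with a preset batch size
# and number of training rounds, then keeping the best-performing weights.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # assumed initial detection model
model.train(
    data="non_motor_lane.yaml",       # hypothetical dataset description file
    epochs=100,                       # preset number of training rounds (assumed)
    batch=16,                         # preset batch size (assumed)
    imgsz=640,
)
# ultralytics keeps the highest-scoring checkpoint as weights/best.pt inside the run
# directory; that file would serve as the exported, trained vehicle detection model.
best_model = YOLO("runs/detect/train/weights/best.pt")
```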
4. The vehicle speed detection method according to claim 1, characterized in that, before the analyzing the video to be detected using a vehicle detection model, the method further comprises:
determining a non-motor vehicle lane region in the video to be detected;
the analyzing the video to be detected by using the vehicle detection model to identify the target vehicle in the video to be detected comprises the following steps:
and detecting the type of the running vehicle in the non-motor lane area by using a vehicle detection model, and identifying the target vehicle.
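To illustrate the lane restriction of claim 4, one straightforward option (not mandated by the claim) is to test each detection centre against a polygon describing the non-motor vehicle lane region; the polygon coordinates below are placeholders.

```python
# Illustrative sketch: keep only detections whose centre falls inside an assumed
# non-motor vehicle lane polygon (the coordinates are placeholders, in pixels).
import numpy as np
import cv2

LANE_POLYGON = np.array([[100, 700], [600, 400], [900, 400], [1200, 700]], dtype=np.int32)

def in_lane(cx: float, cy: float) -> bool:
    """True if the point lies inside, or on the edge of, the lane region."""
    return cv2.pointPolygonTest(LANE_POLYGON, (float(cx), float(cy)), False) >= 0
```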
5. The vehicle speed detection method according to claim 1, wherein analyzing the video to be detected using a vehicle detection model, identifying a target vehicle in the video to be detected, comprises:
inputting the video to be detected into the vehicle detection model;
the vehicle detection model identifies a target vehicle in the video to be detected and marks the target vehicle through a boundary box; wherein the target vehicle comprises a traveling non-motor vehicle;
acquiring the characteristic information of the target vehicle and storing the characteristic information in a detection list, wherein the detection list comprises the mapping relation between the characteristic information of the target vehicle and the tracking ID of the target vehicle; the characteristic information comprises pixel point position, width and height information, a confidence threshold value and the category to which the target vehicle belongs.
6. The vehicle speed detection method according to claim 5, wherein tracking the target vehicle in the video to be detected using a target tracking model to obtain a motion trajectory of the target vehicle, comprising:
based on the tracking ID, respectively acquiring image pixel points of a boundary box for identifying the same target vehicle in the detection list;
and sequentially connecting a plurality of image pixel points to generate a motion track of the target vehicle.
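The following sketch, using assumed data structures and field names, combines the detection list of claim 5 with the trajectory construction of claim 6: feature information is stored per tracking ID, and the recorded pixel points are connected in frame order to form the motion track.

```python
# Illustrative sketch only: a detection list keyed by tracking ID (claim 5) and a
# motion track obtained by connecting its pixel points in frame order (claim 6).
from dataclasses import dataclass, field

@dataclass
class Detection:
    frame_idx: int
    cx: float            # pixel point position (bounding-box centre)
    cy: float
    width: float         # bounding-box width in pixels
    height: float        # bounding-box height in pixels
    confidence: float
    category: str        # e.g. "bicycle_rider" or "ebike_rider" (assumed names)

@dataclass
class DetectionList:
    by_track_id: dict[int, list[Detection]] = field(default_factory=dict)

    def add(self, track_id: int, det: Detection) -> None:
        self.by_track_id.setdefault(track_id, []).append(det)

    def trajectory(self, track_id: int) -> list[tuple[float, float]]:
        """Pixel points of one target vehicle, connected in frame order."""
        dets = sorted(self.by_track_id.get(track_id, []), key=lambda d: d.frame_idx)
        return [(d.cx, d.cy) for d in dets]
```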
7. The vehicle speed detection method according to claim 1, wherein the analyzing the motion trajectory to obtain an actual running speed of the target vehicle includes:
calculating the pixel distance of the target vehicle based on the position relation of two different image pixel points in the motion trail;
calculating the actual movement distance of the target vehicle based on the mapping relation from the image pixel point distance to the actual distance, wherein the mapping relation is as follows:
S = D * pixel_distance;
wherein S represents an actual motion distance, D represents a pixel distance, and pixel_distance represents an actual distance represented by each image pixel point;
acquiring the motion time of the target vehicle based on the frame numbers of the two video frames corresponding to the two different image pixel points and the video image frame rate;
and obtaining the actual running speed of the target vehicle based on the relation between the actual movement distance and the movement time.
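A worked sketch of the relations in claim 7: the pixel distance between two trajectory points is converted to metres through the metres-per-pixel mapping, the movement time is derived from the frame index difference and the frame rate, and their quotient gives the running speed. The function and variable names are illustrative.

```python
# Illustrative sketch of claim 7: S = D * pixel_distance, t = frame gap / fps, v = S / t.
import math

def running_speed(p1: tuple[int, float, float],
                  p2: tuple[int, float, float],
                  metres_per_pixel: float,
                  fps: float) -> float:
    """Each point is (frame_idx, cx, cy); returns the speed in metres per second."""
    f1, x1, y1 = p1
    f2, x2, y2 = p2
    D = math.hypot(x2 - x1, y2 - y1)      # pixel distance between the two points
    S = D * metres_per_pixel              # actual movement distance in metres
    t = abs(f2 - f1) / fps                # movement time from frame numbers and frame rate
    return S / t

# Example: 150 px apart, 0.012 m per pixel, 30 frames apart at 25 fps -> 1.5 m/s.
print(running_speed((100, 400.0, 500.0), (130, 400.0, 350.0), 0.012, 25.0))
```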
8. The vehicle speed detection method according to claim 7, wherein the actual distance pixel_distance represented by each image pixel satisfies the following formula:
pixel_distance1 = a / bicycle_width; or,
pixel_distance2 = b / electric_width;
wherein pixel_distance1 represents the actual distance represented by each pixel point when the target vehicle is a person riding a bicycle, a is a constant representing the actual average width of a common bicycle, and bicycle_width represents the detected width of the bicycle in the image, in pixels; pixel_distance2 represents the actual distance represented by each pixel point when the target vehicle is a person riding an electric bicycle, b is a constant representing the actual average width of a common electric bicycle, and electric_width represents the detected width of the electric bicycle in the image, in pixels.
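Claim 8 derives the metres-per-pixel scale from a known average vehicle width divided by the detected width; in the minimal sketch below, the constants a and b are set to assumed average widths in metres and the width argument is the detected width in pixels.

```python
# Illustrative sketch of claim 8: metres-per-pixel from an assumed average real width
# (constant a for bicycles, constant b for electric bicycles) divided by the width
# of the detected target in pixels. The width values in metres are assumptions.
A_BICYCLE_WIDTH_M = 0.60   # assumed average real width of a common bicycle (constant a)
B_EBIKE_WIDTH_M = 0.70     # assumed average real width of a common electric bicycle (constant b)

def metres_per_pixel(category: str, box_width_px: float) -> float:
    if category == "bicycle_rider":
        return A_BICYCLE_WIDTH_M / box_width_px   # pixel_distance1 = a / bicycle_width
    return B_EBIKE_WIDTH_M / box_width_px         # pixel_distance2 = b / electric_width
```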
9. The vehicle speed detection method according to claim 1, characterized by further comprising:
the actual running speed of the target vehicle is corrected and verified based on the standard reference speed of the target vehicle.
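Claim 9 states only that the computed speed is corrected and verified against a standard reference speed; one possible realization (not prescribed by the claim) is a plausibility check against an assumed reference range followed by simple moving-average smoothing.

```python
# Illustrative sketch only: claim 9 does not specify the correction method. This
# version discards speeds outside an assumed reference range and then applies a
# short moving average to the remaining samples.
def correct_and_verify(speeds_mps: list[float],
                       ref_min: float = 1.0,
                       ref_max: float = 12.0,
                       window: int = 3) -> list[float]:
    plausible = [v for v in speeds_mps if ref_min <= v <= ref_max]
    smoothed = []
    for i in range(len(plausible)):
        chunk = plausible[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```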
10. A vehicle speed detection apparatus for a slow-moving traffic system, comprising:
the data acquisition module is used for acquiring a video to be detected;
the target detection module is used for analyzing the video to be detected by utilizing a vehicle detection model and identifying a target vehicle in the video to be detected; the vehicle detection model is obtained through machine learning training by using a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: a video image sequence of a lane of a non-motor vehicle, and a tag identifying a driving non-motor vehicle in the video image sequence;
the target tracking module is used for tracking the target vehicle in the video to be detected by utilizing a target tracking model to obtain a motion track of the target vehicle, wherein the motion track is generated based on the movement of the image pixel point positions corresponding to the same target vehicle;
and the data processing module is used for analyzing the motion trail to obtain the actual running speed of the target vehicle.
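For orientation only, the four modules of claim 10 could map onto a program structure such as the following skeleton; the class and method names are invented for illustration and do not appear in the patent.

```python
# Illustrative skeleton only: the class and method names are invented and merely
# mirror the four claimed modules; they are not the patented device.
class DataAcquisitionModule:
    def get_video(self, source: str):
        """Acquire the video to be detected, e.g. a file path or camera stream."""
        ...

class TargetDetectionModule:
    def detect(self, frame):
        """Identify target vehicles in a frame with the trained vehicle detection model."""
        ...

class TargetTrackingModule:
    def track(self, detections, frame):
        """Associate detections across frames and extend each target's motion track."""
        ...

class DataProcessingModule:
    def speed(self, trajectory, fps, metres_per_pixel):
        """Analyse a motion track to obtain the actual running speed."""
        ...
```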
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410138524.0A CN117671972B (en) | 2024-02-01 | 2024-02-01 | Vehicle speed detection method and device for slow traffic system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117671972A true CN117671972A (en) | 2024-03-08 |
CN117671972B CN117671972B (en) | 2024-05-14 |
Family
ID=90086611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410138524.0A Active CN117671972B (en) | 2024-02-01 | 2024-02-01 | Vehicle speed detection method and device for slow traffic system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117671972B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112101433A (en) * | 2020-09-04 | 2020-12-18 | 东南大学 | Automatic lane-dividing vehicle counting method based on YOLO V4 and DeepsORT |
CN112507935A (en) * | 2020-12-17 | 2021-03-16 | 上海依图网络科技有限公司 | Image detection method and device |
CN115205794A (en) * | 2022-03-15 | 2022-10-18 | 云粒智慧科技有限公司 | Method, device, equipment and medium for identifying violation of regulations of non-motor vehicle |
CN115331185A (en) * | 2022-09-14 | 2022-11-11 | 摩尔线程智能科技(北京)有限责任公司 | Image detection method and device, electronic equipment and storage medium |
CN116343493A (en) * | 2023-03-27 | 2023-06-27 | 北京博宏科元信息科技有限公司 | Method and device for identifying violation of non-motor vehicle, electronic equipment and storage medium |
CN116721552A (en) * | 2023-06-12 | 2023-09-08 | 北京博宏科元信息科技有限公司 | Non-motor vehicle overspeed identification recording method, device, equipment and storage medium |
CN116824859A (en) * | 2023-07-21 | 2023-09-29 | 佛山市新基建科技有限公司 | Intelligent traffic big data analysis system based on Internet of things |
CN116935281A (en) * | 2023-07-28 | 2023-10-24 | 南京理工大学 | Method and equipment for monitoring abnormal behavior of motor vehicle lane on line based on radar and video |
Also Published As
Publication number | Publication date |
---|---|
CN117671972B (en) | 2024-05-14 |
Similar Documents
Publication | Title |
---|---|
CN110472496B (en) | Traffic video intelligent analysis method based on target detection and tracking | |
CN108320510B (en) | Traffic information statistical method and system based on aerial video shot by unmanned aerial vehicle | |
CN104200657B (en) | A kind of traffic flow parameter acquisition method based on video and sensor | |
CN110379168B (en) | Traffic vehicle information acquisition method based on Mask R-CNN | |
CN103617412B (en) | Real-time lane line detection method | |
CN107315095B (en) | More vehicle automatic speed-measuring methods with illumination adaptability based on video processing | |
CN111753797B (en) | Vehicle speed measuring method based on video analysis | |
CN103324913A (en) | Pedestrian event detection method based on shape features and trajectory analysis | |
CN103366154B (en) | Reconfigurable clear path detection system | |
CN107301776A (en) | Track road conditions processing and dissemination method based on video detection technology | |
CN114357019B (en) | Method for monitoring data quality of road side sensing unit in intelligent networking environment | |
CN106781520A (en) | A kind of traffic offence detection method and system based on vehicle tracking | |
CN104282020A (en) | Vehicle speed detection method based on target motion track | |
CN104183127A (en) | Traffic surveillance video detection method and device | |
CN103425764B (en) | Vehicle matching method based on videos | |
CN107389084A (en) | Planning driving path planing method and storage medium | |
CN106250816A (en) | A kind of Lane detection method and system based on dual camera | |
CN109190483A (en) | A kind of method for detecting lane lines of view-based access control model | |
CN114715168A (en) | Vehicle yaw early warning method and system under road marking missing environment | |
CN111047879A (en) | Vehicle overspeed detection method | |
CN107506753B (en) | Multi-vehicle tracking method for dynamic video monitoring | |
CN115565157A (en) | Multi-camera multi-target vehicle tracking method and system | |
CN114419485A (en) | Camera-based intelligent vehicle speed measuring method and system, storage medium and computer equipment | |
CN116631187B (en) | Intelligent acquisition and analysis system for case on-site investigation information | |
CN114530042A (en) | Urban traffic brain monitoring system based on internet of things technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||