CN117710843A - Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video - Google Patents

Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video

Info

Publication number
CN117710843A
CN117710843A (application CN202311846739.XA)
Authority
CN
China
Prior art keywords: intersection, video, unmanned aerial vehicle, signal timing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311846739.XA
Other languages
Chinese (zh)
Inventor
周竹萍 (Zhou Zhuping)
王子旭 (Wang Zixu)
傅涵冰 (Fu Hanbing)
袁昌吉 (Yuan Changji)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202311846739.XA
Publication of CN117710843A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method for detecting the dynamic signal timing scheme of an intersection based on unmanned aerial vehicle video. Video of the intersection is captured by a camera carried on the unmanned aerial vehicle, and the vehicle trajectory data and entrance/exit lane position data of the intersection are detected using neural network, deep learning, computer vision, and image processing techniques. The dynamic signal timing scheme of the intersection is then derived from the vehicle trajectories. The method can detect traffic signals over a large area, offers automated processing, improved efficiency, and high accuracy, and provides basic data for subsequent traffic management optimization work, which is of practical significance.

Description

Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video
Technical Field
The invention relates to the technical fields of computer vision and intelligent transportation, and in particular to a method for detecting the dynamic signal timing scheme of an intersection based on unmanned aerial vehicle video.
Background
With the rapid development of modern society, traffic congestion has become one of the problems that seriously affect urban quality of life, and it is especially prominent at intersections. Intersections are key components of the urban traffic network, and their signal timing schemes have a decisive influence on the capacity and efficiency of urban roads. How to scientifically set and optimize the signal timing schemes of urban intersections has therefore become an important research direction in the traffic field.
Obtaining the timing scheme is particularly critical when studying intersection signal timing. In large cities, intersection signal controllers are often incorporated into a centrally managed intelligent transportation system, so the signal timing scheme can be acquired and updated in real time. In some small and medium-sized cities, however, the intersection controllers are not connected to such a system, and the timing scheme is not easy to obtain. For detecting the timing schemes of these intersections, manual detection methods and historical-trajectory estimation algorithms are generally used. Manual detection is accurate and flexible, but detecting traffic signals over a large area this way is inefficient, costly, and labor-intensive. Historical-trajectory estimation is also fairly accurate, but it applies only to fixed-time intersections and requires high-precision data and large sample sizes. There is therefore currently no good method for obtaining dynamic signal timing schemes over a large area.
In recent years, unmanned aerial vehicles have been widely adopted in intelligent transportation thanks to their flexibility, wide coverage, and high detection precision. However, because of the limitations of the aerial viewing angle, UAV video cannot directly capture the signal timing information of an intersection. Moreover, for actuated and adaptive signal-controlled intersections, the dynamic nature of the timing scheme means there is still no ideal detection approach. To address these problems, this invention proposes a method for detecting the dynamic signal timing scheme of an intersection based on UAV video, aiming to solve the problem of acquiring dynamic traffic signals over a large area.
Disclosure of Invention
The invention provides a method for detecting the dynamic signal timing scheme of an intersection based on unmanned aerial vehicle video. Video of the intersection is captured by a camera carried on the UAV; the vehicle trajectory information and entrance/exit lane position information of the intersection are detected and inferred using computer vision and image processing techniques; and the intersection's dynamic signal timing scheme is then computed from these data.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
S1: capture aerial video of intersections over a large area with an unmanned aerial vehicle, and process the video into the image data required by the subsequent steps;
S2: preprocess the unmanned aerial vehicle aerial video image data to construct a motor vehicle data set for the intersection;
S3: construct a neural network model, input the intersection motor vehicle data set constructed in step S2 into the model for training, and optimize the model according to the training results to obtain the trained weights of the detection model;
S4: obtain the vehicle trajectory data in the intersection video and the position data of each entrance/exit lane (including any waiting areas) using the trained model, a multi-target tracking model, and computer vision techniques;
S5: preprocess the vehicle trajectory data obtained by detection and tracking in step S4 to extract valid trajectory data;
S6: compute the signal timing and output the corresponding result.
Preferably, processing the video into the required image data specifically means: capturing frames from the video at a fixed time interval to obtain the required video image data.
Preferably, the preprocessing of the unmanned aerial vehicle aerial video images specifically includes: randomly flipping, rotating, scaling, and cropping the captured video images, adding noise, and changing brightness and contrast, so as to achieve image enhancement and augmentation.
Preferably, the annotation content includes the type of motor vehicle to be detected and the position of the target's bounding rectangle.
Preferably, the neural network model is a YOLOv5s model, and the vehicle trajectory tracking model is a DeepSORT model.
Preferably, extracting the valid data from the vehicle trajectory data includes the following steps:
S51: determining whether the vehicle trajectory meets the continuous-linearity and curvature requirements;
S52: determining whether the vehicle trajectory meets the direction and turn-back requirements.
Preferably, the signal timing computation includes the following steps:
S61: reading the video information in time order and determining whether the running state of any vehicle changes at a given moment (i.e., it starts to leave the entrance lane);
S62: if a state change occurs, recording the vehicles leaving the entrance lane at that moment and within several seconds afterward, converting this information into phase information, and storing it; otherwise, reading the video image at the next moment;
S63: continuing to read the video and determining whether a vehicle leaves the entrance lane and whether the phase corresponding to its trajectory already exists in the phase information stored in step S62; if the state changes and the trajectory's phase information is not among the previous phase information, returning to S62; otherwise, reading the video at the next moment and repeating S63 until the video ends.
Compared with the prior art, the invention has the following notable advantages:
1. Intersection information is acquired from unmanned aerial vehicle video, avoiding the laborious manual data collection of traditional methods.
2. By combining computer vision and image processing, the extraction of intersection traffic parameters and the detection of the signal timing scheme are automated, improving efficiency and accuracy and making up for the inability to observe signal timing directly from the UAV's aerial viewpoint.
3. For actuated and adaptive signal-controlled intersections, the dynamic signal timing scheme can be obtained.
4. Real-time performance is strong: with real-time video transmission, intersection information can be obtained promptly during flight, enabling real-time monitoring and optimization of the signal timing scheme.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a workflow diagram of the present invention;
FIG. 2 is a schematic view of the entrance/exit lane outline areas of the present invention;
FIG. 3 is a flow chart of valid vehicle trajectory data extraction in the present invention;
FIG. 4 is a schematic flow chart of signal timing estimation according to the present invention;
FIG. 5 is an auxiliary diagram for signal timing scheme estimation of the present invention.
Detailed Description
In order to make the technical solution, objects, and advantages of the present invention clearer, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the described embodiments without creative effort fall within the protection scope of the present invention.
The invention relates to a method for detecting the dynamic signal timing scheme of an intersection based on unmanned aerial vehicle video; its flow chart is shown in FIG. 1, and it comprises the following steps:
step 1: and acquiring aerial videos of intersections in a large range, and converting the videos into image data. The video acquisition mode of the unmanned aerial vehicle intersection is determined according to the performance of the unmanned aerial vehicle, and comprises, but is not limited to, single unmanned aerial vehicle constant line cruise shooting, multi-unmanned aerial vehicle formation flying shooting and the like. In order to obtain unified, clear and comprehensive aerial video, the embodiment of the invention requires the unmanned aerial vehicle to carry out aerial photography at the flying height of 80-120 m. Meanwhile, aerial photographing is carried out by taking the upper part of the video as the north so as to meet the consistency of the video picture direction. In addition, the embodiment of the invention requires that the video definition reach 1080P and above so as to ensure the quality and definition of the video picture.
Further, the video can be converted into image data either manually or by automatic real-time collection. In the manual method, after the UAV completes its shooting task, staff read the video from the UAV's storage and capture frames from it as required. In the automatic real-time method, while the UAV is shooting, the high-resolution video stream is transmitted to a ground station using the H.264 or H.265 codec standard; the ground station processes the stream frame by frame with a corresponding program, extracts the video images, and stores them in JPG format.
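A minimal sketch of the frame-extraction step, assuming OpenCV and a locally stored video file; the one-second interval and file-naming scheme are illustrative, and a live H.264/H.265 stream would be opened by its URL instead of a file path:

```python
import os

import cv2


def extract_frames(video_path: str, out_dir: str, interval_s: float = 1.0) -> int:
    """Sample frames from an aerial video at a fixed time interval and save them as JPGs."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(1, round(fps * interval_s))    # frames between two saved images
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```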
Step 2: preprocess the video images and construct the corresponding data set. The JPG images obtained in step 1 are randomly flipped, rotated, scaled, and cropped; noise is added; and brightness and contrast are changed, so as to achieve image enhancement and augmentation. The images are then annotated with a semi-automatic annotation method. After annotation, the images are divided into a training set and a validation set in a fixed ratio.
Further, for image enhancement and augmentation, this embodiment flips images horizontally or vertically to better simulate symmetric intersections (e.g., cross-shaped and T-shaped); rotates them by 30°, 45°, and 60° to better simulate irregular intersections (e.g., X-shaped); scales and crops them to simulate the different sizes and proportions produced by different UAV shooting angles; adds Gaussian noise to better simulate real-world imperfections; and changes brightness and contrast to better simulate intersection aerial video captured at different times of day.
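A minimal sketch of these augmentations using OpenCV and NumPy; the parameter ranges are illustrative, and for detection data the bounding-box labels would need the same geometric transforms, which are omitted here:

```python
import cv2
import numpy as np


def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one random combination of the augmentations described above."""
    if rng.random() < 0.5:                         # horizontal or vertical flip
        img = cv2.flip(img, int(rng.choice([0, 1])))
    angle = int(rng.choice([0, 30, 45, 60]))       # rotations for irregular layouts
    if angle:
        h, w = img.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        img = cv2.warpAffine(img, m, (w, h))
    alpha = rng.uniform(0.7, 1.3)                  # contrast factor
    beta = rng.uniform(-30, 30)                    # brightness offset
    img = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
    noise = rng.normal(0.0, 8.0, img.shape)        # Gaussian noise
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```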
Further, the enhanced and augmented image samples are annotated; the annotation content comprises the type of each target to be detected and the position of its bounding rectangle. This embodiment uses a semi-automatic annotation method: a model is first trained on high-quality manually annotated data, then applied to the unlabeled data for automatic annotation, and the results are finally corrected by hand to obtain the required data set. In this embodiment, the video image samples are divided between manually annotated data and program-annotated data in a ratio of 2:8. Manual annotation uses the LabelImg tool: targets are framed with the rectangle tool, class labels are added, and the annotations are saved as YOLO-format txt files for subsequent training. The manually annotated image data set is input into YOLOv5s for training to obtain preliminary weights. Based on these weights and a detection-annotation program, the remaining data are annotated automatically to produce the label files. After automatic annotation, the labels are manually fine-tuned with LabelImg to obtain the required annotated data set. Finally, the annotated samples are divided into a training set and a test set in a ratio of 7:3 to complete the construction of the data set.
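A small sketch of the final 7:3 split, assuming JPG images with same-named YOLO-format .txt label files; the directory layout and seed are illustrative:

```python
import random
import shutil
from pathlib import Path


def split_dataset(img_dir: str, out_dir: str, train_ratio: float = 0.7, seed: int = 0):
    """Copy images and their YOLO .txt labels into train/test folders (7:3 by default)."""
    imgs = sorted(Path(img_dir).glob("*.jpg"))
    random.Random(seed).shuffle(imgs)
    n_train = int(len(imgs) * train_ratio)
    for subset, files in (("train", imgs[:n_train]), ("test", imgs[n_train:])):
        for kind in ("images", "labels"):
            (Path(out_dir) / subset / kind).mkdir(parents=True, exist_ok=True)
        for img in files:
            shutil.copy(img, Path(out_dir) / subset / "images" / img.name)
            label = img.with_suffix(".txt")
            if label.exists():                     # background images may lack labels
                shutil.copy(label, Path(out_dir) / subset / "labels" / label.name)
```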
Step 3: configure the YOLOv5s neural network model and train its weights. The parameters of the YOLOv5s detection model are configured according to the requirements of the task, and the training set is then input into the model for training. Finally, model performance is optimized against the validation set, yielding the model weights for the target detection task.
Step 4: obtain the vehicle trajectory coordinates and the entrance/exit lane positions in the intersection aerial video, using the neural network obtained in step 3 together with multi-target tracking and computer vision techniques.
Further, the motor vehicles in the intersection aerial video are detected and tracked using the neural network model trained in step 3 and the DeepSORT multi-target tracking model, yielding the trajectory coordinate data of the vehicles, which are output as a CSV file in the following format:

R(i) = [id_n  c_n  x_n  y_n  f_n]

where i denotes the i-th intersection video and R(i) the vehicle trajectory CSV data output after detecting and tracking that video. In each row, id_n is the ID number of the n-th vehicle target in the video, c_n is the class of the n-th vehicle target, x_n and y_n are the center abscissa and center ordinate of its detection rectangle, and f_n is the frame index of the observation.
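As an illustration of this step, the sketch below chains a custom-trained YOLOv5s model loaded through the torch.hub interface with the third-party deep_sort_realtime package standing in for DeepSORT; the file names and parameters are assumptions, and the CSV columns mirror R(i) above:

```python
import csv

import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")  # weights from step 3
tracker = DeepSort(max_age=30)

cap = cv2.VideoCapture("intersection.mp4")
with open("trajectories.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "class", "x", "y", "frame"])   # columns of R(i)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)       # YOLOv5 hub models expect RGB
        det = model(rgb).xyxy[0]                           # rows: x1, y1, x2, y2, conf, cls
        raw = [([x1, y1, x2 - x1, y2 - y1], conf, int(cls))
               for x1, y1, x2, y2, conf, cls in det.tolist()]
        for trk in tracker.update_tracks(raw, frame=frame):
            if not trk.is_confirmed():
                continue
            left, top, right, bottom = trk.to_ltrb()
            writer.writerow([trk.track_id, trk.det_class,
                             (left + right) / 2, (top + bottom) / 2, frame_idx])
        frame_idx += 1
cap.release()
```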
Further, the OpenCV function library is used to obtain the coordinate positions of the intersection's entrance/exit lanes. The specific method is as follows. First, the first frame of the intersection video is read. Then, mouse events are bound through the OpenCV library, and the outline of each entrance/exit lane is framed in turn, starting from the entrance/exit lane in one chosen direction. The outline of an entrance lane must include the area extending two passenger-car lengths upstream of the stop line, while the outline of an exit lane must include the area extending two passenger-car lengths downstream of the extension of the entrance lane's stop line; the specific areas are shown in FIG. 2. After one outline is drawn, the same operation is performed for the next entrance/exit lane until all lanes are done. Finally, the outline information is output in TXT format as follows:
G(i) = [d_j  c_j  x_1 y_1  x_2 y_2  …  x_n y_n]

where G(i) denotes the coordinate TXT data output for the j-th entrance/exit lane of the i-th intersection video after visual processing. In each row, d_j indicates the direction of the j-th entrance/exit lane, c_j indicates its type (1 for an entrance lane, 0 for an exit lane), and x_n and y_n are the abscissa and ordinate of the n-th vertex of the drawn outline.
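A minimal OpenCV sketch of the mouse-driven outlining, assuming the operator left-clicks vertices and right-clicks to close each lane outline; the d_j and c_j fields of G(i) would be entered separately by the operator, and the file names are illustrative:

```python
import cv2
import numpy as np

points, zones = [], []   # vertices of the zone being drawn; finished zones


def on_mouse(event, x, y, flags, param):
    """Left click adds a vertex; right click closes the current lane outline."""
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append((x, y))
    elif event == cv2.EVENT_RBUTTONDOWN and len(points) >= 3:
        zones.append(points.copy())
        points.clear()


cap = cv2.VideoCapture("intersection.mp4")
ok, first_frame = cap.read()                     # first frame of the intersection video
cap.release()

cv2.namedWindow("lanes")
cv2.setMouseCallback("lanes", on_mouse)
while True:
    vis = first_frame.copy()
    for p in points:
        cv2.circle(vis, p, 4, (0, 0, 255), -1)
    for poly in zones:
        cv2.polylines(vis, [np.array(poly, dtype=np.int32)], True, (0, 255, 0), 2)
    cv2.imshow("lanes", vis)
    if cv2.waitKey(30) & 0xFF == 27:             # press Esc once all lanes are outlined
        break
cv2.destroyAllWindows()

# d_j (direction) and c_j (1 = entrance, 0 = exit) would be entered by the operator;
# only the vertex coordinates of each outline are written here.
with open("lanes.txt", "w") as f:
    for poly in zones:
        f.write(" ".join(f"{x} {y}" for x, y in poly) + "\n")
```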
Step 5: extract the valid data from the vehicle trajectory data. When detecting and tracking motor vehicles at an intersection, roadside parked vehicles and U-turning vehicles are also detected and tracked, but these data are of no use for computing the signal timing scheme. In addition, the tracking program may lose some trajectory points owing to various uncertainties in the acquisition process, so the resulting trajectory may not fully express the vehicle's original route, producing abnormal trajectories. Valid data therefore need to be extracted. The extraction flow is shown in FIG. 3 and is implemented as follows:
(1) Determine whether the vehicle trajectory meets the continuous-linearity and radius-of-curvature requirements. First, continuous linearity requires the trajectory to be a single continuous line without missing parts. To this end, thresholds on the time interval and the distance interval are used for screening: if the time difference between two adjacent trajectory points is too large, or the distance between them is too large, the trajectory is treated as invalid data. The specific formulas are:

S_t = 1[t ≤ t_thr]
S_d = 1[d ≤ d_thr]
S_line = S_t + S_d

where t is the time interval between two adjacent trajectory points, d is the distance between two adjacent trajectory points, t_thr is the time-interval threshold, and d_thr is the distance-interval threshold; 1[·] is the indicator function (1 if the bracketed condition is true, 0 otherwise); S_t, S_d, and S_line are all 0/1 variables (1 for yes, 0 for no), where S_t indicates whether the time-interval condition is satisfied, S_d whether the distance-interval condition is satisfied, and S_line is the final linear-continuity result; "+" denotes logical OR. That is, if either S_t or S_d holds, then S_line = 1 and the vehicle trajectory meets the linear-continuity requirement.
Second, the vehicle trajectory must also meet a certain radius-of-curvature requirement. To check this, the trajectory is fitted by least squares and the curvature at each point is calculated. If the curvature at every point is smaller than a threshold R_thr, the vehicle trajectory is considered to meet the continuous-linearity requirement.
(2) Determine whether the vehicle trajectory meets the direction and turn-back requirements. A valid vehicle trajectory should contain exactly one entrance lane and one exit lane, and their directions should differ. If the trajectory contains several entrance and exit lanes, the vehicle has doubled back; if the entrance and exit lanes are the same, the vehicle has made a U-turn. Both are invalid data and must be deleted, as in the sketch below.
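A sketch of both trajectory checks under the formulas above, assuming each trajectory is a sequence of (x, y, t) points; the thresholds and the three-point circumcircle curvature estimate are illustrative choices rather than values fixed by the patent:

```python
import numpy as np


def is_valid_track(pts, t_thr=1.0, d_thr=15.0, k_thr=0.05):
    """Check (1): continuous linearity and curvature. pts is a sequence of (x, y, t)."""
    pts = np.asarray(pts, dtype=float)
    dt = np.diff(pts[:, 2])
    dd = np.hypot(np.diff(pts[:, 0]), np.diff(pts[:, 1]))
    s_t = bool(np.all(dt <= t_thr))              # S_t = 1[t <= t_thr]
    s_d = bool(np.all(dd <= d_thr))              # S_d = 1[d <= d_thr]
    if not (s_t or s_d):                         # S_line = S_t OR S_d
        return False
    for i in range(1, len(pts) - 1):             # curvature from 3 consecutive points
        a, b, c = pts[i - 1, :2], pts[i, :2], pts[i + 1, :2]
        area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        denom = (np.linalg.norm(a - b) * np.linalg.norm(b - c) * np.linalg.norm(c - a))
        if denom > 0 and 2.0 * area2 / denom > k_thr:   # k = 4 * Area / (|ab||bc||ca|)
            return False
    return True


def passes_direction_check(entry_lanes, exit_lanes):
    """Check (2): exactly one entrance, one exit, and different directions."""
    return (len(entry_lanes) == 1 and len(exit_lanes) == 1
            and entry_lanes[0] != exit_lanes[0])
```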
Step 6: compute the signal timing from the relevant data and output the corresponding result. Steps 1 to 5 yield the valid vehicle trajectory data and the position coordinates of each entrance/exit lane. The signal timing scheme can be computed from these data; the flow is shown in FIG. 4 and is implemented as follows:
(1) Check the video images in time order for any vehicle in a starting state, i.e., whose speed changes from 0 to non-zero. The speed is computed with a segmentation method: the trajectory is divided into n short segments, each approximated as a straight line, and the ratio of each segment's Euclidean length to its time span is taken as that segment's virtual (pixel-space) speed. The virtual speed is converted into the real speed through a scale factor:

v = P · sqrt( (x_i^(t+1) - x_i^t)^2 + (y_i^(t+1) - y_i^t)^2 ) / t_gap

where v is the real speed (m/s) of the motor vehicle at the intersection, (x_i^t, y_i^t) and (x_i^(t+1), y_i^(t+1)) are the trajectory coordinates of the i-th vehicle at time frames t and t+1, t_gap is the interval between two adjacent frames, and P is the scale between real distance and virtual pixel distance.
Since the detected and tracked coordinates carry an offset error, a vehicle state change is defined by the following criterion:

S = 1[v ≥ v_thr]

where S is a 0/1 variable indicating whether the vehicle is in a starting state (1 for yes, 0 for no), and the starting-speed threshold v_thr serves to suppress the offset error of detection and tracking; its value is chosen according to the conditions of each intersection and is usually 1 m/s.
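A sketch of the segmented speed computation and the start-state test, assuming each trajectory is available as pixel coordinates with frame indices; fps, the scale P, and the 1 m/s threshold follow the text above, while the function names are illustrative:

```python
import numpy as np


def segment_speeds(xs, ys, frames, fps, scale):
    """Per-segment real speed: pixel displacement over elapsed time, times the scale P."""
    d_pix = np.hypot(np.diff(xs), np.diff(ys))   # Euclidean pixel distance per segment
    dt = np.diff(frames) / fps                   # t_gap between samples, in seconds
    return d_pix / dt * scale                    # m/s


def start_indices(xs, ys, frames, fps, scale, v_thr=1.0):
    """Segments where the vehicle passes from stopped to moving, i.e. S = 1[v >= v_thr]."""
    moving = segment_speeds(xs, ys, frames, fps, scale) >= v_thr
    return [i for i in range(1, len(moving)) if moving[i] and not moving[i - 1]]
```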
(2) When a vehicle first enters the state of starting to leave an entrance lane, record the valid vehicle information of vehicles leaving that entrance lane at the moment t and within t_rec seconds afterward. t_rec is subject to the following constraint:

t_veh ≤ t_rec ≤ t_gmin

where t_veh denotes the time taken by the state-changing vehicle from starting to leaving the entrance lane, and t_gmin denotes the minimum green time within the signal cycle.
(3) Convert the trajectory information of the state-changing vehicles into phase information. Each extracted valid trajectory has exactly one entrance lane and one exit lane, so lane information and phase information can be converted into each other. For example, a vehicle entering from the west entrance and exiting from the north exit indicates that the west-entrance left-turn phase is open. The vehicle information recorded within t_rec is converted into phase information and stored as the current phase state.
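A hypothetical lookup table for this lane-to-phase conversion, assuming a standard four-leg intersection with right-hand traffic and the compass directions used in the example above; none of these names come from the patent itself:

```python
# Entrance direction paired with exit direction identifies the turning movement,
# which in turn identifies the phase; e.g. west entrance -> north exit = west left turn.
TURN = {
    ("W", "N"): "W-left", ("W", "E"): "W-through", ("W", "S"): "W-right",
    ("N", "E"): "N-left", ("N", "S"): "N-through", ("N", "W"): "N-right",
    ("E", "S"): "E-left", ("E", "W"): "E-through", ("E", "N"): "E-right",
    ("S", "W"): "S-left", ("S", "N"): "S-through", ("S", "E"): "S-right",
}


def track_to_movement(entry_dir: str, exit_dir: str) -> str:
    """Map a valid trajectory's entrance/exit directions to its turning movement."""
    return TURN[(entry_dir, exit_dir)]
```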
(4) Read the subsequent video images in time order and detect whether any vehicle starts to leave an entrance lane while the phase information converted from its trajectory is absent from the last phase state. If so, repeat (3) and (4); otherwise, read the next frame until such a state appears, then repeat (3) and (4). When the video ends, stop repeating.
(5) Arrange the phase states to obtain the signal timing scheme. Through the preceding steps, the time of each phase change is obtained, which can be taken as approximately equal to the detected green-light start time t_green_n of each phase. At the same time, the phase information I_n of each phase is obtained from the vehicle trajectory information. Combining this information yields the signal timing scheme. This embodiment uses 4 phases, and the resulting signal timing scheme is shown in FIG. 5. Subtracting the green start times of two adjacent phases gives the green duration of the earlier phase; for example, the green duration of phase I_1 is T_G = t_green_2 - t_green_1. If a phase appears again (i.e., the same phase information recurs), its red duration is obtained by subtracting its last green end time from its new green start time; for example, the red duration of phase I_1 is T_R = t_green_5 - t_green_2. Combining these data yields the signal timing scheme.
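A minimal sketch of this bookkeeping, assuming the phase-change events from steps (1)-(4) are available as chronological (green start time, phase) pairs; the function and field names are illustrative:

```python
def timing_from_phase_starts(events):
    """events: chronological (green_start_time, phase_id) pairs from steps (1)-(4)."""
    timing = {}
    green_end = {}                               # phase_id -> end time of its last green
    for k, (t, phase) in enumerate(events):
        if k > 0:
            prev_t, prev_phase = events[k - 1]
            # green duration of the previous phase, e.g. T_G = t_green_2 - t_green_1
            timing.setdefault(prev_phase, {})["green"] = t - prev_t
            green_end[prev_phase] = t
        if phase in green_end:
            # red duration when a phase recurs, e.g. T_R = t_green_5 - t_green_2
            timing.setdefault(phase, {})["red"] = t - green_end[phase]
    return timing
```

For the four-phase example above, events such as [(t1, "I1"), (t2, "I2"), (t3, "I3"), (t4, "I4"), (t5, "I1")] would give phase I1 a green time of t2 - t1 and a red time of t5 - t2.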

Claims (10)

1. A method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video, characterized by comprising the following steps:
S1: capturing aerial video of intersections over a large area with an unmanned aerial vehicle, and processing the video into the image data required by the subsequent steps;
S2: preprocessing the unmanned aerial vehicle aerial video image data to construct a motor vehicle data set for the intersection;
S3: constructing a neural network model, inputting the intersection motor vehicle data set constructed in step S2 into the model for training, and optimizing the model according to the training results to obtain the trained weights of the detection model;
S4: obtaining the vehicle trajectory data in the intersection video and the position data of each entrance/exit lane (including any waiting areas) using the trained model, a multi-target tracking model, and computer vision techniques;
S5: preprocessing the vehicle trajectory data obtained by detection and tracking in step S4 to extract valid trajectory data;
S6: computing the signal timing and outputting the corresponding result.
2. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S1, shooting aerial video of intersections over a large area includes fixed-route cruise shooting by a single UAV, formation-flight shooting by multiple UAVs, and other methods, and the methods for processing the video into images include a manual acquisition method and an automatic real-time collection method.
3. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S2, the preprocessing of the video images is image enhancement and augmentation, specifically random flipping, rotation, scaling and cropping, noise addition, and brightness and contrast changes; samples are annotated with a combination of high-quality manual annotation and a semi-automatic method in which automatic machine annotations are corrected manually.
4. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S4, the YOLOv5s and DeepSORT models are used to detect and track the vehicle trajectories, visual techniques are used to extract the entrance/exit lane positions, and mouse events are bound for semi-automatic outlining.
5. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S5, extracting the valid vehicle trajectory data includes determining whether each trajectory meets the linear-continuity, radius-of-curvature, direction, and turn-back requirements.
6. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S5, a formula for the linear-continuity determination of the vehicle trajectory and a method for determining the radius of curvature are provided.
7. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S6, the signal timing estimation includes estimating from the acquired intersection entrance/exit lane position information and the extracted valid vehicle trajectory data, and outputting the corresponding result.
8. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S6, vehicle state changes are determined from speed, and the speed is computed with a segmentation method.
9. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S6, a method for converting between vehicle trajectory information and phase information is provided.
10. The method for detecting an intersection dynamic signal timing scheme based on unmanned aerial vehicle video according to claim 1, wherein: in step S6, the green-light duration and red-light duration of each signal phase are calculated.
CN202311846739.XA 2023-12-29 2023-12-29 Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video Pending CN117710843A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311846739.XA CN117710843A (en) 2023-12-29 2023-12-29 Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311846739.XA CN117710843A (en) 2023-12-29 2023-12-29 Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video

Publications (1)

Publication Number Publication Date
CN117710843A 2024-03-15

Family

ID=90160779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311846739.XA Pending CN117710843A (en) 2023-12-29 2023-12-29 Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video

Country Status (1)

Country Link
CN (1) CN117710843A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118247986A (en) * 2024-05-23 2024-06-25 清华大学 Vehicle cooperative control method for single signal intersection under mixed traffic flow



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination