CN116128360A - Road traffic congestion level evaluation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116128360A
Authority
CN
China
Prior art keywords
vehicle
road
detected
parameter information
traffic
Prior art date
Legal status
Pending
Application number
CN202310108336.9A
Other languages
Chinese (zh)
Inventor
王杨谨
张政平
支野
刘榴
田勇
刘丰磊
徐炳欧
Current Assignee
Road Traffic Safety Research Center Ministry Of Public Security Of People's Republic Of China
Original Assignee
Road Traffic Safety Research Center Ministry Of Public Security Of People's Republic Of China
Priority date
Filing date
Publication date
Application filed by Road Traffic Safety Research Center Ministry Of Public Security Of People's Republic Of China filed Critical Road Traffic Safety Research Center Ministry Of Public Security Of People's Republic Of China
Priority to CN202310108336.9A priority Critical patent/CN116128360A/en
Publication of CN116128360A publication Critical patent/CN116128360A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q50/40
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiment of the invention relates to a road traffic congestion level assessment method, a device, electronic equipment and a storage medium. The method comprises the following steps: acquiring multi-frame images of a region to be detected within a preset time period; performing lane range identification on at least one of the images to obtain the lane range of the region to be detected; performing road scene recognition on at least two of the images to obtain the road scene type of the region to be detected and the road parameter information within the lane range; performing a vehicle tracking operation on the vehicles in each frame of image to obtain the traffic parameter information corresponding to all vehicles within the lane range of the region to be detected; and evaluating the road traffic congestion degree of the region to be detected according to the road scene type, the road parameter information and the traffic parameter information. In this way, the road traffic congestion degree can be evaluated accurately, providing a reliable safeguard for traffic safety.

Description

Road traffic congestion level evaluation method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of traffic management, in particular to a road traffic congestion level assessment method, a device, electronic equipment and a storage medium.
Background
As road traffic conditions grow more complex and the number of motor vehicles in use keeps rising, higher demands are placed on the road traffic safety work of the relevant departments. Although new technologies and new equipment are now applied to traffic management, problems such as unbalanced road traffic conditions and weak traffic safety facilities persist, and traffic congestion still occurs on roads, particularly urban roads, affecting both people's travel and the public's evaluation of traffic management departments.
At present, video-based traffic congestion detection mainly uses fixed cameras to collect video images, computes traffic flow, average speed and similar quantities with intelligent algorithms, and feeds them into a preset congestion monitoring model for evaluation. However, the image acquisition range of a fixed camera is limited, its mounting height and rotation angle are constrained by the installation conditions, wide-area traffic situation perception cannot be realized accurately, and lag and monitoring blind spots exist. Unmanned aerial vehicles can detect road congestion with the help of airborne and vehicle-mounted platform systems, but the following problems remain. First, scene-specific application is missing: existing schemes do not distinguish between application scenes, which degrades their practical effect. Second, traffic flow statistics are incomplete: existing schemes only consider vehicle tracking, require lane lines to be marked manually, and do not combine other parameters for comprehensive analysis; some schemes use overly simple calculations, describing traffic flow merely by the number and average speed of vehicles on the road. Third, the traffic congestion detection method is not sufficiently standardized: existing schemes use a single detection mode and cannot describe the traffic congestion situation accurately.
Disclosure of Invention
The application provides a road traffic congestion level assessment method, a device, electronic equipment and a storage medium, to address the inaccuracy of road traffic congestion level assessment in the prior art.
In a first aspect, the present application provides a method for estimating a road traffic congestion level, the method including:
acquiring multi-frame images of a region to be detected within a preset time period;
carrying out lane range identification on at least one frame of image in the multi-frame images to obtain a lane range of a region to be detected;
carrying out road scene recognition on at least two frames of images in the multi-frame images, and acquiring the road scene type of the area to be detected and the road parameter information in the lane range, wherein the road parameter information indicates the road parameters corresponding to the road scene type;
carrying out vehicle tracking operation on vehicles in each frame of image, and acquiring traffic parameter information corresponding to all vehicles in a lane range of a region to be detected;
and evaluating the road traffic jam degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information.
In this way, multi-frame images of the region to be detected within a preset time period are acquired; lane range identification is performed on at least one of the images to obtain the lane range of the region to be detected; road scene recognition is performed on at least two of the images to obtain the road scene type of the region to be detected and the road parameter information within the lane range, the road parameter information indicating the road parameters corresponding to the road scene type; a vehicle tracking operation is performed on the vehicles in each frame of image to obtain the traffic parameter information corresponding to all vehicles within the lane range; and the road traffic congestion degree of the region to be detected is evaluated according to the road scene type, the road parameter information and the traffic parameter information. Because the road scene type of the region to be detected is identified, the specific scene type is known, and road parameter information and traffic parameter information matched to that scene type can be used according to the characteristics of different scene types, yielding an evaluation index result tailored to the scene. The resulting evaluation is more accurate than a single, undifferentiated evaluation mode and better matches actual road driving conditions. Meanwhile, because the lane range is identified, vehicles within it can be tracked and recognized accurately, and the lane parameter information and traffic parameter information within it can be counted precisely, making the congestion evaluation result more accurate. Based on an accurate evaluation result, corresponding measures for optimizing the road traffic organization can be taken to relieve road traffic congestion, providing a reliable safeguard for traffic safety.
With reference to the first aspect, in a first embodiment of the first aspect of the invention, performing road scene recognition on at least two frames of the multi-frame images to obtain the road scene type of the region to be detected and the road parameter information within the lane range comprises:
inputting at least two frames of images in the multi-frame images into a pre-constructed road scene recognition model, and obtaining a scene type recognition result of each frame of image;
taking the scene type that occurs most frequently among all scene type recognition results as the road scene type of the region to be detected;
road parameter information corresponding to the road scene type is determined based on the road scene type and the recognition result.
In this way, the scene type recognition results for the multi-frame images are obtained from a pre-constructed road scene recognition model, and the most frequent scene type among those results is taken as the road scene type of the region to be detected, so that occasional model recognition errors are tolerated and the road scene type can be determined accurately.
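A minimal sketch of this majority-vote step, assuming the per-frame recognition results are already available as plain labels (the recognition model itself and the label names are illustrative assumptions):

```python
from collections import Counter

def majority_scene_type(per_frame_results):
    """Return the most frequent scene type among per-frame predictions.

    per_frame_results: list of scene-type labels, one per analysed frame,
    e.g. produced by a road-scene recognition model.
    """
    if not per_frame_results:
        raise ValueError("at least one frame result is required")
    label, _count = Counter(per_frame_results).most_common(1)[0]
    return label

# e.g. 4 of 5 frames are classified as a signalized intersection
print(majority_scene_type(
    ["intersection", "intersection", "road_section",
     "intersection", "intersection"]))  # -> intersection
```

Taking the mode over several frames means a single misclassified frame does not change the scene type assigned to the region.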
With reference to the first aspect or the first embodiment of the first aspect, in a second embodiment of the first aspect of the present invention, a vehicle tracking operation is performed on vehicles in each frame of image, and traffic parameter information corresponding to all vehicles in a lane range in a to-be-detected area is obtained, which specifically includes:
identifying vehicles in a lane range in a first frame image by adopting a pre-constructed target detection model, and generating a vehicle identification, a vehicle type and an actual vehicle position of the first vehicle in the first frame image, wherein the first frame image is any frame image except the last frame image;
predicting a predicted vehicle position of the first vehicle in a second frame image in the first frame image, wherein the second frame image is a next frame image of the first frame image;
when the actual vehicle position of the second vehicle in the second frame image is matched with the predicted vehicle position, determining that the second vehicle is the first vehicle;
alternatively,
when a third vehicle exists in the second frame image and the actual vehicle position of the third vehicle in the second frame image cannot be matched with any predicted vehicle position in the first frame image, generating a vehicle identification of the third vehicle, and acquiring the vehicle type of the third vehicle and the actual vehicle position of the third vehicle;
and generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles, wherein the first vehicle is any vehicle in the first frame image, the second vehicle is any vehicle in the next frame image, and the third vehicle is any vehicle except the second vehicle in the next frame image.
In this way, vehicles in each frame of image are identified and tracked, and vehicles are matched by their actual and predicted positions. When a match succeeds, the vehicles in the two frames are confirmed to be the same vehicle, so that a vehicle appearing in different frames is not counted twice; when a match fails, new vehicle information is generated, so that no vehicle in the images is missed from the statistics. This well guarantees the accuracy of the traffic parameter information.
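The matching logic above can be sketched as a simple nearest-neighbour association between predicted and detected positions. The distance threshold and the flat (x, y) position format are illustrative assumptions standing in for whatever prediction model and detector output the described scheme actually uses:

```python
import math

def match_detections(predicted, detections, max_dist=30.0):
    """Associate detections in the next frame with predicted positions.

    predicted:  {vehicle_id: (x, y)} positions predicted from the previous frame
    detections: list of (x, y) actual positions found in the next frame
    Returns {vehicle_id: (x, y)}; unmatched detections receive new IDs.
    """
    assigned = {}
    free_ids = set(predicted)
    next_id = max(predicted, default=0) + 1
    for pos in detections:
        # nearest still-unassigned predicted position, if any
        best = min(free_ids,
                   key=lambda vid: math.dist(predicted[vid], pos),
                   default=None)
        if best is not None and math.dist(predicted[best], pos) <= max_dist:
            assigned[best] = pos          # same vehicle as in the previous frame
            free_ids.remove(best)
        else:
            assigned[next_id] = pos       # newly appeared vehicle: fresh ID
            next_id += 1
    return assigned

tracks = match_detections({1: (100.0, 50.0)}, [(105.0, 52.0), (300.0, 300.0)])
print(tracks)  # -> {1: (105.0, 52.0), 2: (300.0, 300.0)}
```

Matched detections keep the identifier carried over from the previous frame (avoiding double counting); the detection at (300, 300) matches no prediction and is treated as a newly appeared vehicle (avoiding missed counting).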
With reference to the second embodiment of the first aspect, in a third embodiment of the first aspect of the present invention, when the scene type is the first scene type, traffic parameter information of the area to be detected is generated according to the vehicle identifications of all vehicles, the vehicle types, and the actual vehicle positions, including:
determining an entrance way for the vehicle to travel according to the actual vehicle position;
counting the vehicle identifications in the area to be detected in each preset time interval of the preset time period in the nth entrance way, and taking the number of the vehicle identifications as the number of vehicles;
determining the traffic volume of the nth entrance lane according to each vehicle type, the number of vehicles of each vehicle type in the nth entrance lane within the preset time period, and the traffic weight corresponding to each vehicle type;
and constructing the traffic parameter information from the number of vehicles in each entrance way and the traffic volume of each entrance way.
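A sketch of the weighted traffic volume computation for one entrance way. The vehicle types and the per-type traffic weights below are illustrative assumptions (the description does not specify them); the structure — count per type times type weight, summed — follows the step above:

```python
# Illustrative passenger-car-equivalent-style weights; the actual weights
# per vehicle type are not specified in this description.
TRAFFIC_WEIGHTS = {"car": 1.0, "bus": 2.0, "truck": 2.5}

def approach_traffic_volume(counts_by_type, weights=TRAFFIC_WEIGHTS):
    """Weighted traffic volume of one entrance way over the preset time
    period: sum of (vehicle count of each type x that type's weight)."""
    return sum(n * weights[vtype] for vtype, n in counts_by_type.items())

# 30 cars, 4 buses and 2 trucks observed on the n-th entrance way
print(approach_traffic_volume({"car": 30, "bus": 4, "truck": 2}))  # -> 43.0
```

Weighting by vehicle type lets a bus or truck count for more than a passenger car, so the volume reflects road occupancy rather than a bare vehicle count.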
With reference to the second embodiment of the first aspect, in a fourth embodiment of the first aspect of the present invention, when the scene type is the second scene type, traffic parameter information of the area to be detected is generated according to the vehicle identifications of all vehicles, the vehicle types, and the actual vehicle positions, including:
acquiring a first moment when an ith vehicle enters a lane range and a second moment when the ith vehicle exits the lane range, and taking a time interval between the first moment and the second moment as a passing time of the ith vehicle, wherein the ith vehicle is any vehicle in the lane range, and i is a positive integer;
counting the vehicle identifications of the area to be detected, and taking the number of the vehicle identifications as the number of the vehicles;
and constructing traffic parameter information according to the passing time of each vehicle and the number of vehicles.
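The passing-time statistics for the second scene type can be sketched directly from the entry and exit moments; the timestamp layout (seconds, keyed by vehicle identification) is an assumption:

```python
def passing_times(entry_exit_times):
    """Per-vehicle passing time: the interval between the first moment a
    vehicle enters the lane range and the second moment it exits it.

    entry_exit_times: {vehicle_id: (t_enter, t_exit)} in seconds.
    Returns ({vehicle_id: passing_time}, vehicle_count).
    """
    times = {vid: t_exit - t_enter
             for vid, (t_enter, t_exit) in entry_exit_times.items()}
    # the number of distinct vehicle identifications is the vehicle count
    return times, len(times)

times, n = passing_times({7: (0.0, 12.5), 8: (3.0, 18.0)})
print(times, n)  # -> {7: 12.5, 8: 15.0} 2
```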
With reference to the third embodiment of the first aspect, in a fifth embodiment of the first aspect of the invention, when the road scene type is the first scene type, the road parameter information comprises the number of entrance ways in the area to be detected, and the traffic parameter information comprises the number of vehicles parked in the area to be detected within each preset time interval of the preset time period in the nth entrance way, the total traffic volume of the nth entrance way, and the number of preset time intervals. Evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information comprises:
determining the average delay time of the mth vehicle of the nth entrance way according to the number of entrance ways, the number of vehicles parked in the area to be detected within each preset time interval of the preset time period in the nth entrance way, the total traffic volume of the nth entrance way, and the number of preset time intervals;
determining the maximum vehicle average delay time of the area to be detected according to the vehicle average delay time of all the entrance ways;
and evaluating the road traffic congestion degree of the area to be detected according to the maximum vehicle average delay time and the first corresponding relation between the maximum vehicle average delay time and the road traffic congestion degree, wherein the nth entrance road is any entrance road in the area to be detected, the mth vehicle is any vehicle in the nth entrance road, and m and n are positive integers.
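One plausible reading of these steps, using a standard point-sample delay estimate. Both the exact formula and the delay-to-level thresholds are assumptions here, since the first corresponding relation is not spelled out in this description:

```python
# Illustrative thresholds (seconds of average delay); the description only
# states that a first correspondence maps maximum average delay to a level.
DELAY_LEVELS = [(30.0, "free-flow"), (60.0, "slow"), (90.0, "congested")]

def approach_avg_delay(stopped_counts, total_volume, interval_s):
    """Point-sample estimate of average vehicle delay for one entrance way:
    (sum of stopped-vehicle counts over the preset time intervals x interval
    length) divided by the entrance way's total traffic volume."""
    return sum(stopped_counts) * interval_s / total_volume

def congestion_level(avg_delays):
    """Map the maximum per-entrance-way average delay to a congestion level."""
    worst = max(avg_delays)
    for threshold, level in DELAY_LEVELS:
        if worst < threshold:
            return level
    return "severely congested"

# 3, 5 and 4 stopped vehicles observed in three 15 s intervals, 20 vehicles total
d = approach_avg_delay([3, 5, 4], total_volume=20, interval_s=15)
print(d, congestion_level([d]))  # -> 9.0 free-flow
```

The area's level is driven by its worst entrance way, matching the "maximum vehicle average delay time" criterion above.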
With reference to the fourth embodiment of the first aspect, in a sixth embodiment of the first aspect of the invention, when the road scene type is the second scene type, the road parameter information comprises the number of same-direction lanes in the area to be detected and the same-direction lane length of the area to be detected, and the traffic parameter information comprises the passing time of each vehicle in the second scene type and the total number of passing vehicles. Evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information comprises:
determining the total running length of the area to be detected according to the number of same-direction lanes and the same-direction lane length;
determining the regional average speed of the region to be detected according to the total running length, the passing time of each vehicle in the second scene type and the total number of the passing vehicles;
and evaluating the road traffic congestion degree of the area to be detected according to the area average speed and the second corresponding relation between the area average speed and the road traffic congestion degree.
In this way, in the fifth and sixth embodiments, an evaluation index is determined for each scene type: the maximum average vehicle delay time for the first scene type and the area average vehicle speed for the second scene type. These evaluation indexes can be taken from existing standard road traffic congestion evaluation methods, which are both authoritative and practical.
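One plausible reading of the area-average-speed evaluation for the second scene type. The space-mean-speed formula (each vehicle is taken to traverse the same-direction lane length) and the speed thresholds of the second corresponding relation are assumptions, as the description does not state them:

```python
# Illustrative km/h thresholds for the speed-to-congestion-level mapping.
SPEED_LEVELS = [(15.0, "severely congested"), (30.0, "congested"), (40.0, "slow")]

def area_average_speed(lane_length_m, passing_times_s):
    """Space-mean speed of the section in km/h: total distance covered by
    all passing vehicles divided by their total passing time."""
    total_dist = lane_length_m * len(passing_times_s)   # each vehicle covers the lane length
    total_time = sum(passing_times_s)
    return total_dist / total_time * 3.6                # m/s -> km/h

def congestion_level_from_speed(v_kmh):
    """Map the area average speed to a congestion level."""
    for threshold, level in SPEED_LEVELS:
        if v_kmh < threshold:
            return level
    return "free-flow"

# two vehicles traverse a 500 m section in 60 s and 72 s respectively
v = area_average_speed(lane_length_m=500.0, passing_times_s=[60.0, 72.0])
print(round(v, 1), congestion_level_from_speed(v))  # -> 27.3 congested
```

Using the space-mean (distance over total time) rather than an average of instantaneous speeds weights slow vehicles properly, which is the usual convention for section-based speed indicators.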
In a second aspect, the present application provides a road traffic congestion level assessment apparatus, the apparatus comprising: the system comprises an acquisition module, a lane range identification module, a road scene identification module, a vehicle tracking module and a congestion degree evaluation module.
The acquisition module is used for acquiring multi-frame images of the region to be detected in a preset time period;
the lane range identification module is used for carrying out lane range identification on at least one frame of image in the multi-frame images and obtaining the lane range of the area to be detected;
the road scene recognition module is used for carrying out road scene recognition on at least two frames of images in the multi-frame images, and obtaining the road scene type of the area to be detected and the road parameter information in the lane range, wherein the road parameter information indicates the road parameters corresponding to the road scene type;
the vehicle tracking module is used for carrying out vehicle tracking operation on the vehicles in each frame of image and acquiring traffic parameter information corresponding to all the vehicles in the lane range of the area to be detected;
and the congestion degree evaluation module is used for evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information.
Optionally, the road scene recognition module is specifically configured to input at least two frames of images in the multi-frame images into a pre-constructed road scene recognition model, and obtain a scene type recognition result of each frame of image;
taking the scene type that occurs most frequently among all scene type recognition results as the road scene type of the area to be detected;
road parameter information corresponding to the road scene type is determined based on the road scene type and the recognition result.
Optionally, the vehicle tracking module is specifically configured to identify a vehicle within a lane range in a first frame image by using a pre-constructed target detection model, and generate a vehicle identifier, a vehicle type and an actual vehicle position of the first vehicle in the first frame image, where the first frame image is any frame image except for a last frame image;
predicting a predicted vehicle position of the first vehicle in a second frame image in the first frame image, wherein the second frame image is a next frame image of the first frame image;
when the actual vehicle position of the second vehicle in the second frame image is matched with the predicted vehicle position, determining that the second vehicle is the first vehicle;
alternatively,
when a third vehicle exists in the second frame image and the actual vehicle position of the third vehicle in the second frame image cannot be matched with any predicted vehicle position in the first frame image, generating a vehicle identification of the third vehicle, and acquiring the vehicle type of the third vehicle and the actual vehicle position of the third vehicle;
and generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles, wherein the first vehicle is any vehicle in the first frame image, the second vehicle is any vehicle in the next frame image, and the third vehicle is any vehicle except the second vehicle in the next frame image.
Optionally, the apparatus further comprises: a determining module and a counting module;
the determining module is used for determining an entrance way of the vehicle according to the actual vehicle position;
the statistics module is used for counting the vehicle identifications in the area to be detected in each preset time interval of the preset time period in the nth entrance way, and the number of the vehicle identifications is used as the number of vehicles;
the determining module is further configured to determine a total traffic volume of the nth entrance according to each vehicle type and a number of vehicles corresponding to the vehicle type in the nth entrance within a preset time period.
Optionally, the device comprises:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is also used for acquiring a first moment when an ith vehicle enters a lane range and a second moment when the ith vehicle exits the lane range, and taking a time interval between the first moment and the second moment as the passing time of the ith vehicle, wherein the ith vehicle is any vehicle in the lane range, and i is a positive integer;
the statistics module is also used for counting the vehicle identifications of the to-be-detected area, and the number of the vehicle identifications is used as the number of the vehicles.
Optionally, the device comprises:
the determining module is further used for determining the average delay time of the mth vehicle of the nth entrance according to the number of the entrance, the number of vehicles parked in the area to be detected in each preset time interval of the preset time period in the nth entrance, the total traffic volume of the nth entrance and the number of the preset time intervals; determining the maximum vehicle average delay time of the area to be detected according to the vehicle average delay time of all the entrance ways;
The congestion degree evaluation module is specifically configured to evaluate the road traffic congestion degree of the area to be detected according to the maximum vehicle average delay time and the first corresponding relation between the maximum vehicle average delay time and the road traffic congestion degree, where the mth vehicle is any vehicle in the nth entrance lane, and m and n are both positive integers.
Optionally, the device comprises:
the determining module is also used for determining the total running length of the area to be detected according to the number of the same-direction lanes and the length of the same-direction lanes; determining the regional average speed of the region to be detected according to the total running length, the passing time of each vehicle in the second scene type and the total number of the passing vehicles;
the congestion degree evaluation module is specifically further configured to evaluate the road traffic congestion degree of the area to be detected according to the area average speed and the second corresponding relationship between the area average speed and the road traffic congestion degree.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and a processor, configured to implement the steps of the road traffic congestion level assessment method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the road traffic congestion level assessment method as in any of the embodiments of the first aspect.
Drawings
Fig. 1 is a schematic flow chart of a road traffic congestion level assessment method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a road scene type recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic view of a traffic light controlled intersection according to the present invention;
FIG. 4 is a schematic view of a traffic light controlled intersection according to the present invention;
FIG. 5 is a schematic diagram of a section of an urban arterial road or secondary arterial road provided by the invention;
FIG. 6 is a schematic diagram of a highway or urban expressway section provided by the invention;
FIG. 7 is a schematic flow chart of a vehicle tracking operation method according to an embodiment of the present invention;
FIG. 8 is a flowchart of another vehicle tracking operation method according to an embodiment of the present invention;
Fig. 9 is a schematic flow chart of a road traffic congestion level assessment method of a first scene type provided by the embodiment of the invention;
fig. 10 is a schematic flow chart of a road traffic congestion level assessment method of a second scene type according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a road traffic congestion level assessment device according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
For the purpose of facilitating an understanding of the embodiments of the present invention, reference will now be made to the following description of specific embodiments, taken in conjunction with the accompanying drawings, which are not intended to limit the embodiments of the invention.
Aiming at the technical problems mentioned in the background art, the embodiment of the application provides a road traffic congestion level assessment method, specifically referring to fig. 1, fig. 1 is a flow chart of a road traffic congestion level assessment method provided by the embodiment of the invention, and the method comprises the following steps:
step 110, acquiring multi-frame images of the region to be detected in a preset time period.
Specifically, the video acquisition device may be used to acquire the video of the region to be detected, and extract the multi-frame image in the preset time period from the video, or the image shooting device may be used to acquire the multi-frame image of the region to be detected in the preset time period.
In a preferred embodiment, the unmanned aerial vehicle, as a device that has developed rapidly in recent years, provides great assistance for road traffic management. Traffic management with an unmanned aerial vehicle generally involves a pilot flying the unmanned aerial vehicle above a road and using the onboard gimbal camera for live video broadcasting and recording, collecting road channelization, abnormal road surface events and road traffic flow. The unmanned aerial vehicle ground station or background processing software can intelligently identify the videos and images acquired by the unmanned aerial vehicle, discover congestion points in time, and assist staff in analysis and handling. For example, an unmanned aerial vehicle equipped with a video acquisition device can record video or shoot continuous multi-frame images of the area to be detected and upload the image data, so that the video images shot by the unmanned aerial vehicle are obtained. Firstly, the unmanned aerial vehicle shall conduct flight activities only with the approval of the relevant departments, and fly above the area to be detected according to the aerial photography scheme and the principle of safe flight; secondly, the position and shooting angle of the unmanned aerial vehicle are adjusted to ensure that the area to be detected is displayed completely, clearly and stably in the picture; thirdly, the shooting angle is fixed, and images are recorded according to the set shooting duration and time interval; next, the image data is transmitted from the unmanned aerial vehicle end to the ground end through a downlink transmission link; finally, after the flight task is completed, the unmanned aerial vehicle returns and lands in a safe area, completing the image acquisition process.
And 120, carrying out lane range identification on at least one frame of image in the multi-frame images, and acquiring the lane range of the region to be detected.
Specifically, the lane range in each of the multi-frame images can be identified, and the lane range of the area to be detected determined from all the identification results; alternatively, if the lane range is the same in all images, one frame can be selected for identification, and the lane range of that frame is the lane range of the area to be detected.
In an alternative example, for example in a stable image acquisition scene where the lane range in each frame of image is the same, the first image may be selected for lane range recognition to determine the lane range of the area to be detected. The lane range may be recognized by an object detection model based on the YOLOv5 (You Only Look Once version 5) object detection algorithm, and the training process of the lane range recognition model may include: firstly, obtaining road scene images, and marking the positions of the lane ranges of different road scene types in each scene image with anchor frames, so as to establish lane range data sets for the different road scene types; secondly, training lane range detectors for the different road scene types with the YOLOv5 algorithm.
In an alternative example, the different road scene types may be an intersection road scene type and a road section scene type, the lane range of the intersection scene being the entrance lanes and the lane range of the road section scene being the same-direction (concurrent) lanes. The entrance lanes are the collection of all lanes entering the intersection along the vehicle travel direction; the same-direction lanes are the collection of motor vehicle travel paths in the same direction on an interval road section. In an intersection scene, the entrance lanes have the following image features: 1. road traffic marking features such as stop lines, guide arrows and white solid lines; 2. when vehicles cover the road, the vehicle heads are closely and densely spaced. In a road section scene, the same-direction lanes have road traffic marking features such as no-crossing opposing-lane demarcation lines (double yellow solid line, yellow dashed-solid line, and single yellow solid line) that separate opposing traffic flows. Therefore, a target detection model can be established with the YOLOv5 algorithm to realize the identification of entrance lanes and same-direction lanes.
And 130, carrying out road scene recognition on at least two frames of images in the multi-frame images, and acquiring the road scene type of the region to be detected and the road parameter information in the lane range.
Specifically, the road parameter information is used to indicate the road parameter information corresponding to the road scene type, and the image classification technology may be used to identify the road scene of at least two frames of images in the multi-frame images, obtain the road scene type of the area to be detected, and determine the road parameter information in the lane range according to the identification result, for example, the number of roads in the area to be detected, the length of the roads, and the like.
Optionally, performing road scene recognition on at least two frames among the multi-frame images to obtain the road scene type of the area to be detected and the road parameter information within the lane range specifically comprises the following steps, as shown in fig. 2:
step 1301, inputting at least two frames of images in the multi-frame images into a pre-constructed road scene recognition model, and obtaining a scene type recognition result of each frame of image.
Specifically, a plurality of frame image samples can be randomly extracted from the video image to serve as road scene images to be identified, and all the image samples are put into a road scene identification model to obtain classification results of the images.
In step 1302, the scene type that occurs most frequently among all the scene type recognition results is taken as the road scene type of the area to be detected.
Specifically, the category that appears most often among the scene type recognition results, that is, the modal road scene type, can be taken as the final recognition result.
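The majority-vote step described above can be sketched as follows (a minimal illustration; the function name and scene-type strings are assumptions for demonstration, not taken from the patent):

```python
from collections import Counter

def majority_scene_type(per_frame_results):
    """Pick the scene type that occurs most often among the
    per-frame classification results."""
    counts = Counter(per_frame_results)
    return counts.most_common(1)[0][0]

# Example: five sampled frames, four classified as an intersection scene.
frames = ["intersection", "intersection", "road_section",
          "intersection", "intersection"]
print(majority_scene_type(frames))  # -> intersection
```

Using the mode across many sampled frames makes the final scene type robust to occasional per-frame misclassifications, which is exactly the error-avoidance argument the patent makes.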
In step 1303, road parameter information corresponding to the road scene type is determined based on the road scene type and the recognition result.
Specifically, the road scene recognition model may further obtain related information about the road, for example the number of entrance lanes or same-direction lanes in the area to be detected, the length of the lanes, and similar information.
In an alternative example, the road scene recognition may use a road scene recognition method of convolutional neural network (Convolutional Neural Network, CNN) to construct a road scene type recognition model, which mainly includes the following steps: (1) Acquiring road scene images, and classifying and labeling the road scene type of each image so as to establish a road scene type data set; (2) Constructing an image classification algorithm model based on CNN, and formulating a model training strategy; (3) And (3) training the model based on the road scene data set, optimizing model parameters, and obtaining a classifier for road scene recognition, namely a road scene type recognition model.
The images to be identified are input into the road scene type identification model, each frame of image yields its own identification result, and the scene type with the largest count among the classifications is taken as the road scene type of the area to be detected.
In an alternative example, the road scene types may include an intersection road scene type and an interval road scene type. The intersection road scene type may include a traffic-light-controlled intersection as shown in fig. 3 and an intersection without traffic light control as shown in fig. 4; the interval road scene type may include a city main road or sub-main road interval section as shown in fig. 5, a highway or city expressway interval section as shown in fig. 6, and the like. The scene types may be increased or decreased according to actual situations, which is not limited here.
It should be noted that there is no required order between step 120 and step 130: the lane range identification may be performed first, or the road scene identification may be performed first. Specifically, the order may be chosen according to the characteristics of the constructed road scene identification model and lane range identification model, which is not limited here.
In this way, the road scene type recognition result of each frame of image is obtained through the pre-constructed road scene recognition model, and the scene type occurring most often among the recognition results is used as the road scene type of the area to be detected, so that individual model recognition errors can be avoided and the road scene type of the area to be detected can be accurately determined.
And 140, carrying out vehicle tracking operation on the vehicles in each frame of image, and acquiring traffic parameter information corresponding to all vehicles in the to-be-detected area within the lane range.
Specifically, each frame of image is input into a trained target detection model, a vehicle identification result of each vehicle in a lane range of each frame of image is obtained, the vehicle identification result of each vehicle comprises vehicle information such as the type of the vehicle, and statistics is carried out on the vehicle identification result of each vehicle so as to obtain traffic parameter information corresponding to all vehicles, such as the number of vehicles, the type of the vehicle corresponding to each vehicle and the like.
Optionally, vehicle tracking operation is performed on the vehicles in each frame of image, so as to obtain traffic parameter information corresponding to all vehicles in the area to be detected within the lane range, which specifically includes the steps of the method as shown in fig. 7:
in step 1401, a pre-constructed target detection model is adopted to identify vehicles within a lane range in the first frame image, and a vehicle identification, a vehicle type and an actual vehicle position of the first vehicle in the first frame image are generated.
Specifically, the first frame image is any frame image except the last frame image, and the first vehicle is any vehicle in the first frame image.
In an alternative example, the pre-constructed target detection model may be a target detection model trained according to YOLOv5 algorithm, and the model is used to identify the vehicle in the first frame image, and generate the vehicle identifier, the vehicle type and the actual vehicle position of the first vehicle in the first frame image. The actual position of the vehicle can be determined by the position of a vehicle detection frame generated by the target detection model, and when the vehicle is detected, a vehicle track tracker chain is newly built, wherein the tracker chain is used for tracking track information of the vehicle, such as the actual vehicle position (detection frame) and the predicted vehicle position (prediction frame) of the vehicle, and the identification of the vehicle and the type of the vehicle are saved, and the identification of the vehicle can be, for example, a vehicle ID (identity), and the tracker chain is in one-to-one correspondence with the vehicle.
Step 1402 predicts a predicted vehicle position of a first vehicle in a second frame image in a first frame image.
Specifically, the second frame image is the next frame image of the first frame image. And predicting the predicted vehicle position of the first vehicle in the second frame image by adopting a target prediction algorithm, and generating a prediction frame of the first vehicle in the first frame image according to the predicted vehicle position.
In an alternative example, a kalman filtering algorithm may be used, for example, to predict the predicted vehicle position of the first vehicle in the first frame image in the second frame image, and to generate a predicted frame of the first vehicle, i.e. the predicted vehicle position of the first vehicle, in the first frame image.
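The prediction step can be sketched as below. This is only the deterministic state-propagation half of a constant-velocity Kalman filter; a full implementation would also propagate the covariance and fuse the matched detection in an update step. The function name, state layout and numbers are illustrative assumptions:

```python
def kalman_predict(state, dt=1.0):
    """Constant-velocity prediction step. The state is
    (cx, cy, vx, vy): box centre plus per-frame velocity.
    Returns the predicted state for the next frame."""
    cx, cy, vx, vy = state
    return (cx + vx * dt, cy + vy * dt, vx, vy)

# A vehicle centred at (100, 50) moving 5 px per frame to the right:
print(kalman_predict((100.0, 50.0, 5.0, 0.0)))  # -> (105.0, 50.0, 5.0, 0.0)
```

The predicted centre gives the prediction frame that the next frame's detections are matched against.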
Step 1403, determining that the second vehicle is the first vehicle when the actual vehicle position of the second vehicle in the second frame image matches the predicted vehicle position.
Alternatively,
in step 1404, when the third vehicle exists in the second frame image and the actual vehicle position of the third vehicle in the second frame image cannot be matched with any one of the predicted vehicle positions in the first frame image, a vehicle identification of the third vehicle is generated, and the vehicle type of the third vehicle and the actual vehicle position of the third vehicle are obtained.
Specifically, when the actual vehicle position of the vehicle in the second frame image and the predicted vehicle position of the first vehicle in the first frame image can be matched, the second vehicle in the second frame image and the first vehicle in the first frame image are determined to be the same vehicle.
Alternatively, when there is no position in the second frame image matching the predicted vehicle position of the first vehicle, the moment corresponding to the first frame image is the last moment of the first vehicle within the preset time period in the area to be detected; the first vehicle does not exist in the next frame image, so the prediction frame of the first vehicle has no matching value, and the prediction frame of the first vehicle, i.e., the predicted vehicle position information, can be deleted from the tracker chain of the first vehicle. If a third vehicle exists in the second frame image and the actual vehicle position of the third vehicle cannot be matched with any of the predicted vehicle positions in the first frame image, the third vehicle is determined to be a vehicle newly appearing in the area to be detected within the preset time period; a tracker chain for the third vehicle is newly built, containing the actual position information and predicted position information of the third vehicle, the vehicle identifier of the third vehicle is generated, and the vehicle type of the third vehicle is acquired.
In an alternative example, consider the flowchart of a vehicle tracking method based on unmanned aerial vehicle video collection shown in fig. 8. Images within the preset time period are extracted from the unmanned aerial vehicle video; the vehicles in the current frame image are identified with the trained YOLOv5 target detection model, yielding the vehicle identifications, vehicle types and actual vehicle positions of all vehicles in the frame, and a vehicle track tracker chain is generated for each vehicle; for each vehicle, a prediction frame (predicted vehicle position) in the next frame image is obtained from the current frame image with a Kalman filtering algorithm. A detection frame for each vehicle in the next frame image is then generated with the trained YOLOv5 target detection model, and the detection frames in the next frame image are matched to the prediction frames in the current frame image with the Hungarian algorithm using an intersection-over-union (IoU) based cost. If the detection frame of vehicle B in the next frame image is successfully matched with the prediction frame of vehicle A in the current frame image, vehicle B is judged to be the same vehicle as vehicle A in the previous frame image, and the prediction of vehicle A is updated according to the detection frame (actual vehicle position) of vehicle B in the next frame image, i.e., the target information of vehicle A is corrected, so that the predicted position becomes more accurate and the tracker chain information of vehicle A likewise becomes more accurate.
Alternatively, when there is no position in the second frame image matching the predicted vehicle position of vehicle A in the first frame image, the vehicle prediction frame of vehicle A is deleted from the tracker chain of vehicle A; and if a vehicle C appears in the next frame video image, vehicle C is determined to be a new vehicle in that frame, at which point the tracker chain of vehicle C is created, the vehicle ID of vehicle C is generated, and vehicle information such as the vehicle type and actual vehicle position of vehicle C is acquired.
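The predict-match loop described above can be sketched as follows, with greedy IoU matching used as a simplified stand-in for the Hungarian assignment (all function names and the 0.3 threshold are illustrative assumptions, not from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_tracks(predictions, detections, threshold=0.3):
    """Greedily match prediction frames to detection frames by IoU.
    Unmatched detections become new tracker chains; predictions left
    unmatched would have their prediction frames deleted."""
    matches, used = {}, set()
    for pid, pbox in predictions.items():
        best, best_iou = None, threshold
        for did, dbox in detections.items():
            score = iou(pbox, dbox)
            if did not in used and score >= best_iou:
                best, best_iou = did, score
        if best is not None:
            matches[pid] = best
            used.add(best)
    unmatched = [d for d in detections if d not in used]
    return matches, unmatched

preds = {"A": (0, 0, 10, 10)}                       # predicted for next frame
dets = {"B": (1, 1, 11, 11), "C": (50, 50, 60, 60)}  # detected in next frame
print(match_tracks(preds, dets))  # -> ({'A': 'B'}, ['C'])
```

Here vehicle B overlaps A's prediction strongly and is confirmed as the same vehicle, while C has no matching prediction and is treated as a newly appeared vehicle.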
It should be noted that the algorithm for the target detection model may be chosen according to the actual situation; for example, other YOLO-series algorithms, region-based convolutional neural network algorithms (Region-CNN, R-CNN) and the like can all be used to train the target detection model.
In step 1405, traffic parameter information of the area to be detected is generated according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles.
Specifically, the second vehicle is any vehicle in the next frame image, and the third vehicle is any vehicle in the next frame image except the second vehicle.
Road parameter information required for evaluating the degree of road traffic congestion is related to the road scene type, and traffic parameter information corresponding to the road scene type is determined according to the determined road scene type, the vehicle identification of the vehicle, the vehicle type and the actual position of the vehicle.
In this way, the vehicles in each frame of image are identified and tracked, and vehicles are matched according to their actual vehicle positions and predicted vehicle positions. When the matching succeeds, vehicles in different frame images can be confirmed to be the same vehicle, avoiding repeated counting of vehicles across frames; when the matching fails, new vehicle information is generated, so that no vehicle in the images is missed in the statistics, and the accuracy of the traffic parameter information is well guaranteed.
And step 150, evaluating the road traffic jam degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information.
Specifically, according to the road scene type, an evaluation index corresponding to the road scene type is determined, according to the road parameter information and the traffic parameter information corresponding to the road scene type, an evaluation index value of the evaluation index is determined, and according to the evaluation index value, the road traffic congestion degree of the area to be detected is evaluated.
In this way, multi-frame images of the area to be detected within a preset time period are obtained; lane range identification is performed on at least one frame of the multi-frame images to obtain the lane range of the area to be detected; road scene recognition is performed on at least two frames of the multi-frame images to obtain the road scene type of the area to be detected and the road parameter information within the lane range, where the road parameter information indicates the road parameter information corresponding to the road scene type; a vehicle tracking operation is performed on the vehicles in each frame of image to obtain the traffic parameter information corresponding to all vehicles within the lane range of the area to be detected; and the road traffic congestion degree of the area to be detected is evaluated according to the road scene type, the road parameter information and the traffic parameter information. The road scene type of the area to be detected can thus be identified and the specific road scene type made clear. According to the characteristics of the different scene types, the road parameter information and traffic parameter information corresponding to the road scene type can be adopted, and the evaluation index result corresponding to the road scene type used to evaluate the road traffic congestion degree; the evaluation result is more accurate than a single evaluation mode and better matches actual road driving conditions. At the same time, because the lane range is identified, vehicles within the lane range can be accurately tracked and identified, and the lane parameter information and traffic parameter information within the lane range accurately counted, making the evaluation result of the road traffic congestion degree more accurate. Based on an accurate evaluation result, corresponding measures for optimizing the road traffic organization can be taken to relieve road traffic congestion, providing a reliable guarantee for traffic safety.
For different road scene types, the traffic parameter information can be determined in the following ways.
Optionally, when the road scene type is a first type scene type, generating traffic parameter information of the area to be detected according to vehicle identifications, vehicle types and actual vehicle positions of all vehicles, including:
determining an entrance way for the vehicle to travel according to the actual vehicle position;
counting the vehicle identifications in the area to be detected in each preset time interval of the preset time period in the nth entrance way, and taking the number of the vehicle identifications as the number of vehicles;
according to each vehicle type, the number of vehicles corresponding to each vehicle type respectively and the traffic weight corresponding to each vehicle type respectively in a preset time period in the nth entrance lane, determining the traffic of the nth entrance lane;
and constructing traffic parameter information according to the number of vehicles in each entrance way and the traffic volume of each entrance way.
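The per-entrance-lane counting step can be sketched as follows: distinct vehicle identifications are counted per entrance lane, so a vehicle observed in several frames is counted only once (the function name and lane labels are illustrative assumptions):

```python
from collections import defaultdict

def count_per_entrance(observations):
    """Count distinct vehicle identifications per entrance lane from
    (entrance_lane, vehicle_id) observations; repeated sightings of
    the same vehicle identification are counted once."""
    seen = defaultdict(set)
    for lane, vid in observations:
        seen[lane].add(vid)
    return {lane: len(vids) for lane, vids in seen.items()}

# Vehicle v1 appears in two frames but is counted once on the north lane.
obs = [("north", "v1"), ("north", "v1"), ("north", "v2"), ("east", "v3")]
print(count_per_entrance(obs))  # -> {'north': 2, 'east': 1}
```

Counting identifications rather than raw detections is what makes the tracker chain useful here: it deduplicates the same physical vehicle across frames.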
Specifically, in an optional example, the first scene type may be an intersection scene type, including traffic-light-controlled road intersections and road intersections without traffic light control. The traffic parameter information further includes traffic flow parameters of the vehicles on the road section to be detected, for example the traffic volume of vehicles on the road section to be detected. When the road scene type is the intersection type, the entrance lane on which a vehicle travels may be determined according to the actual vehicle position in the vehicle track tracker chain; after the entrance lane of each vehicle is determined, the number of vehicles and the vehicle types in each entrance lane may be counted, and thus the natural traffic volume of the entrance lane and the ratio of the natural traffic volume of the first vehicle type to the total natural traffic volume determined, where the first vehicle type is any vehicle type and the natural traffic volume is the number of vehicles passing a certain place or section of road within the selected time period. For convenience of calculation, the natural traffic volume may be converted into the standard traffic volume of a preset vehicle type (for example, the small passenger car type), as shown in the following formula:
$$V_e = V \sum_{i=1}^{k} P_i E_i$$

where $V_e$ represents the converted standard traffic volume, $V$ represents the total natural traffic volume obtained by statistics, $P_i$ represents the percentage of the natural traffic volume of the $i$-th vehicle type in the total natural traffic volume, $E_i$ represents the vehicle conversion coefficient of the $i$-th vehicle type, and $k$ is the number of vehicle types.
The traffic volume conversion relation between other vehicle types and the small bus vehicle type is shown in a table one:
(Table 1: traffic volume conversion coefficients between other vehicle types and the small passenger car type; the table image is not reproduced in this text.)
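The conversion formula above can be sketched numerically as follows. The conversion coefficients in the example are hypothetical placeholders, since the patent's Table 1 values are not reproduced in this text:

```python
def standard_traffic(total_volume, shares, coefficients):
    """Convert natural traffic volume to standard (passenger-car
    equivalent) traffic volume: V_e = V * sum_i(P_i * E_i)."""
    return total_volume * sum(p * e for p, e in zip(shares, coefficients))

# Hypothetical mix: 50% small passenger cars (coefficient 1.0) and
# 50% heavy vehicles (assumed coefficient 3.0), 100 vehicles observed.
print(standard_traffic(100, [0.5, 0.5], [1.0, 3.0]))  # -> 200.0
```

The heavier vehicle class inflates the passenger-car-equivalent volume, which is the point of the conversion: it makes traffic volumes comparable across entrance lanes with different vehicle mixes.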
The traffic parameter information is formed from the number of vehicles in each entrance lane and the traffic volume of each entrance lane among all entrance lanes of the road section to be detected, so that the congestion degree of the first scene type can be evaluated using the traffic parameter information and the lane information.
Optionally, when the road scene type is the first scene type, the road parameter information includes the number of entrance lanes in the area to be detected, and the traffic parameter information includes the number of vehicles parked in the area to be detected in each preset time interval of the preset time period for the nth entrance lane, the total traffic volume of the nth entrance lane, and the number of preset time intervals. Evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information comprises the following steps:
Step 910, determining an average delay time of the mth vehicle of the nth entrance according to the number of entrance ways, the number of vehicles parked in the area to be detected in each preset time interval of the preset time period in the nth entrance way, the total traffic volume of the nth entrance way and the number of preset time intervals.
Specifically, the nth entrance way is any entrance way in the area to be detected, the mth vehicle is any vehicle in the nth entrance way, and m and n are positive integers.
The first scene type may be an intersection scene type. According to the characteristics of the intersection scene lanes, the entrance lanes are the entrance lanes in the four directions (up, down, left and right) in fig. 3, i.e., the number of entrance lanes is 4. When evaluating the road traffic congestion degree of the first scene type, the preset time period may be divided into a plurality of time intervals; the length of a time interval may be set as required, for example the time interval between frames, or a fixed number of frames taken as one time interval, according to actual requirements. The average delay time of the nth entrance lane may be determined by the following formula:
$$\bar{d}_j = \frac{\Delta t \sum_{k=1}^{m} S_{k,j}}{V_{e,j}}$$

where $\bar{d}_j$ represents the average delay time of vehicles at the $j$-th entrance lane within a certain observation time, $S_{k,j}$ is the number of vehicles parked in the $j$-th entrance lane in the $k$-th time interval, $m$ is the number of consecutive time intervals, $\Delta t$ is the time interval, and $V_{e,j}$ is the standard traffic volume of the $j$-th entrance lane.
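The per-lane average delay computation can be sketched as follows (the function name is an assumption; the numbers are an illustrative example, not from the patent):

```python
def average_delay(stopped_counts, dt, standard_volume):
    """Average per-vehicle delay at one entrance lane:
    d_j = dt * sum_k(S_kj) / V_ej, where stopped_counts holds the
    number of parked vehicles observed in each time interval."""
    return dt * sum(stopped_counts) / standard_volume

# Four 5-second intervals with 3, 5, 4 and 2 stopped vehicles,
# and a standard traffic volume of 7 vehicles in the observation window.
print(average_delay([3, 5, 4, 2], 5.0, 7))  # -> 10.0
```

Each interval contributes (stopped vehicles x interval length) of accumulated waiting, and dividing by the standard traffic volume spreads that total delay over the vehicles that used the lane.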
Based on the vehicle tracking operation, the positions of vehicle A in the current frame image and the previous frame image can be mapped into a world coordinate system to obtain the actual displacement of the vehicle, and the vehicle speed calculated from the interval time between the two frames; when the vehicle speed is below a certain speed threshold, vehicle A is considered to be in a parked state.
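The parked-state test can be sketched as follows. The 0.5 m/s threshold is a hypothetical value for illustration; the patent does not specify the threshold:

```python
import math

def is_parked(p_prev, p_curr, frame_dt, speed_threshold=0.5):
    """Estimate speed (m/s) from two world-coordinate positions taken
    frame_dt seconds apart; below the threshold counts as parked."""
    dist = math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    return dist / frame_dt < speed_threshold

# 0.1 m of movement over 0.5 s is 0.2 m/s, under the 0.5 m/s threshold.
print(is_parked((0.0, 0.0), (0.1, 0.0), 0.5))  # -> True
```

Thresholding the estimated speed rather than requiring exactly zero displacement makes the test robust to small detection-frame jitter between frames.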
Step 920, determining the maximum vehicle average delay time of the area to be detected according to the vehicle average delay time of all the entrance ways.
Specifically, the maximum vehicle average delay time is the largest of the vehicle average delay times over all entrance lanes, taken as the maximum vehicle average delay time of the road section to be detected within the preset time period. The number of entrance lanes may be identified by the road scene type identification model, or the number of entrance lanes of the intersection scene may be obtained in advance; for example, the number of entrance lanes in fig. 3 and fig. 4 is 4, and the way the number of entrance lanes is obtained is not limited here. The maximum vehicle average delay time may be determined by the following formula:
$$d_{\max} = \max_{1 \le j \le n} \bar{d}_j$$

where $d_{\max}$ represents the maximum vehicle delay (unit: s), $n$ represents the total number of entrance lanes, and $\bar{d}_j$ represents the average delay time of vehicles at the $j$-th entrance lane within a certain observation time.
And step 930, evaluating the road traffic congestion degree of the area to be detected according to the maximum vehicle average delay time and the first corresponding relation between the maximum vehicle average delay time and the road traffic congestion degree.
Specifically, the maximum vehicle average delay time is taken as the evaluation index of the road traffic congestion degree under the first scene type. According to the correspondence between the maximum vehicle average delay time and the road traffic congestion degree levels, the congestion level of the road in the area to be detected can be obtained, so that the road congestion degree is accurately evaluated. The road traffic congestion level can be determined by the following formula:
$$\mathrm{Level} = F(x), \qquad x = d_{\max}$$

where Level represents the traffic congestion level, $F$ represents the mapping between the index value and the traffic congestion degree, $x$ represents the index value under the scene, and $d_{\max}$ represents the maximum vehicle delay (unit: s).
In an alternative example, the correspondence between the maximum vehicle delay and the traffic congestion degree at the signal control intersection under the first scene type is as shown in table two:
(Table 2: correspondence between the maximum vehicle delay and the traffic congestion degree at signal-controlled intersections; the table image is not reproduced in this text.)
The corresponding relation between the maximum vehicle delay and the traffic jam degree in the signal-free control intersection under the first scene type is as shown in a table III:
(Table 3: correspondence between the maximum vehicle delay and the traffic congestion degree at intersections without signal control; the table image is not reproduced in this text.)
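A piecewise mapping F of this kind can be sketched as follows. The threshold boundaries and level names here are purely hypothetical, since the actual values of the patent's Table 2 and Table 3 are not reproduced in this text:

```python
import bisect

# Hypothetical delay boundaries (seconds) and level names for illustration.
THRESHOLDS = [15.0, 30.0, 60.0]
LEVELS = ["free-flowing", "light congestion",
          "moderate congestion", "heavy congestion"]

def congestion_level(max_avg_delay):
    """Piecewise mapping F from the index value (maximum vehicle
    average delay, seconds) to a congestion level."""
    return LEVELS[bisect.bisect_right(THRESHOLDS, max_avg_delay)]

print(congestion_level(20.0))  # -> light congestion
```

In practice the two intersection types would each carry their own threshold table, matching the patent's use of separate correspondences for signal-controlled and uncontrolled intersections.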
It should be noted that the first scene type may be an intersection type, or a scene type similar to an intersection, for example a T-shaped intersection; any road scene type whose congestion degree can be evaluated using the maximum vehicle average delay over all entrance lanes as the evaluation index may serve as a road scene of the first scene type.
For the second scene type, the determination of the traffic parameter information and the evaluation of the road congestion level can be carried out as follows, so that the evaluation result takes the characteristics of the second scene type into account and its accuracy is ensured.
Optionally, when the scene type is the second type scene type, generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles, including:
acquiring a first moment when an ith vehicle enters a lane range and a second moment when the ith vehicle exits the lane range, and taking a time interval between the first moment and the second moment as a passing time of the ith vehicle, wherein the ith vehicle is any vehicle in the lane range, and i is a positive integer;
Counting the vehicle identifications of the area to be detected, and taking the number of the vehicle identifications as the number of the vehicles;
and constructing traffic parameter information according to the passing time of each vehicle and the number of vehicles.
Specifically, once the lane range is identified, its detection frame can be used to bound vehicle detection. The first moment at which the ith vehicle drives into the lane range and the second moment at which it drives out are recorded, and the time interval between the two is taken as the passing time of the ith vehicle. Through the vehicle tracking operation, the vehicle identifications generated within the preset time period in the area to be detected can be counted, and the number of vehicle identifications is taken as the number of vehicles in the preset time period. The traffic parameter information is then formed from the passing time of each vehicle and the number of vehicles.
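The passing-time and vehicle-count bookkeeping described above can be sketched as follows; function and identifier names are illustrative, not from the patent.

```python
# Track, per vehicle identification, the first and last timestamps at which
# the vehicle is observed inside the lane range; their difference is the
# vehicle's passing time, and the number of distinct identifications within
# the preset time period is the number of vehicles.
def update_records(records, observations):
    """observations: iterable of (vehicle_id, timestamp, inside_lane)."""
    for vid, ts, inside in observations:
        if not inside:
            continue  # only frames where the vehicle is within the lane range count
        first, last = records.get(vid, (ts, ts))
        records[vid] = (min(first, ts), max(last, ts))
    return records

def passing_time(records, vid):
    first, last = records[vid]
    return last - first

records = update_records({}, [
    ("veh-7", 10.0, True),   # first moment: drives into the lane range
    ("veh-7", 14.5, True),
    ("veh-7", 18.0, True),   # second moment: last seen before driving out
    ("veh-9", 11.0, False),  # never inside the lane range, so not counted
])
```

Here the passing time of "veh-7" is 8.0 s and the vehicle count is 1; in the patent's terms, these per-vehicle passing times together with the identification count form the traffic parameter information.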
Optionally, when the road scene type is the second scene type, the road parameter information includes: the number of same-direction lanes in the area to be detected and the same-direction lane length of the area to be detected; the traffic parameter information includes: the passing time of each vehicle in the second scene type and the total number of passing vehicles. Evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information then comprises the following steps, as shown in fig. 10:
Step 1010, determining the total driving length of the area to be detected according to the same-directional lane number and the same-directional lane length.
Step 1020, determining the regional average vehicle speed of the region to be detected according to the total driving length, the passing time of each vehicle in the second scene type and the total number of the passing vehicles.
Specifically, the number of same-direction lanes may be obtained in advance, or obtained as part of the road scene type recognition: for example, the number of same-direction lanes is 2 in fig. 5 and likewise 2 in fig. 6, while a single-lane one-way road would have 1. The lane length is the length of the lane within the lane range. The area average vehicle speed of the area to be detected can then be determined by the following formula:
$$\bar{v} = \frac{nLp}{\sum_{l=1}^{p} t_l}$$

wherein $\bar{v}$ represents the average travel speed (unit: km/h) of the road section within the lane range of the area to be detected, L represents the section length, $t_l$ represents the time taken by vehicle l to pass through the section, p represents the number of vehicles, n is the number of same-direction lanes, nL is the total travel length of the section of the area to be detected, and $\sum_{l=1}^{p} t_l$ is the total travel time of all vehicles.
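The speed formula is given only as an image in the original; under the stated definitions (n same-direction lanes of length L, p vehicles with passing times t_l, nL the total driving length), one consistent reading is that the area average speed equals the total driving length divided by the mean passing time, i.e. nLp / sum(t_l). That grouping is an assumption, sketched here:

```python
def area_average_speed(n_lanes, lane_length_km, passing_times_h):
    """Area average vehicle speed in km/h.

    n_lanes:         number of same-direction lanes (n)
    lane_length_km:  same-direction lane length L, in km
    passing_times_h: passing time t_l of each of the p vehicles, in hours

    Computes n * L * p / sum(t_l), i.e. the total driving length n*L divided
    by the mean passing time of the p vehicles.
    """
    p = len(passing_times_h)
    return n_lanes * lane_length_km * p / sum(passing_times_h)

# Two same-direction lanes of 0.5 km; two vehicles, each needing 0.02 h
speed = area_average_speed(2, 0.5, [0.02, 0.02])
```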
And step 1030, evaluating the road traffic congestion degree of the area to be detected according to the area average speed and the second corresponding relation between the area average speed and the road traffic congestion degree.
Specifically, the road congestion level may be determined by the following formula:
level = F(x)

wherein level represents the traffic congestion grade, F represents the mapping between the index value and the traffic congestion degree, and x represents the index value under this scene, here taken as the area average vehicle speed $\bar{v}$ within the lane range of the road section to be detected.
In an alternative example, the correspondence between the average travel speed of urban arterial and secondary arterial road sections under the second scene type and the traffic congestion degree is shown in Table 4:

[Table 4: average travel speed versus traffic congestion degree for urban arterial and secondary arterial road sections, reproduced as an image in the original]
The correspondence between the average travel speed of urban arterial and secondary arterial road sections under the second scene type and the traffic congestion degree is shown in Table 5:

[Table 5: average travel speed versus traffic congestion degree, reproduced as an image in the original]
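Tables 2 to 5 appear only as images, so the speed bands below are purely illustrative placeholders; only the lookup mechanism, mapping an index value to a congestion grade through fixed bands, reflects the text.

```python
# Placeholder speed bands (km/h, lower bound of each band); the patent's
# actual Table 4/5 threshold values are in image form and are NOT reproduced.
SPEED_BANDS = [
    (40.0, "unblocked"),
    (30.0, "basically unblocked"),
    (20.0, "lightly congested"),
    (10.0, "moderately congested"),
    (0.0,  "severely congested"),
]

def congestion_grade(avg_speed_kmh, bands=SPEED_BANDS):
    # F: walk the bands from the highest lower bound downwards and return
    # the grade of the first band that the index value falls into.
    for lower_bound, grade in bands:
        if avg_speed_kmh >= lower_bound:
            return grade
    return bands[-1][1]
```

The same lookup shape applies to the delay-based Tables 2 and 3, with delay thresholds ordered low-to-high instead of speeds high-to-low.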
The evaluation indexes and criteria in Tables 2 to 5 follow existing industry standards, which are authoritative and practicable; they provide a reliable basis for traffic management departments to formulate corresponding congestion-mitigation strategies according to the road congestion degree, and thereby indirectly safeguard road driving safety.
Embodiments of the road traffic congestion level assessment apparatus provided in the present application are described below, to explain other embodiments of the road traffic congestion level assessment provided in the present application; see the following for details.
Fig. 11 is a schematic diagram of a road traffic congestion level assessment apparatus according to an embodiment of the present invention. The apparatus comprises: an acquisition module 1101, a lane range identification module 1102, a road scene identification module 1103, a vehicle tracking module 1104, and a congestion degree evaluation module 1105;
an acquisition module 1101, configured to acquire a multi-frame image of a region to be detected within a preset time period;
the lane range recognition module 1102 is configured to perform lane range recognition on at least one frame of image in the multiple frames of images, and obtain a lane range of the region to be detected;
the road scene recognition module 1103 is configured to perform road scene recognition on at least two frames of images in the multiple frames of images, and obtain a road scene type of a region to be detected and road parameter information within a lane range, where the road parameter information is used to indicate road parameter information corresponding to the road scene type;
the vehicle tracking module 1104 is configured to perform a vehicle tracking operation on the vehicles in each frame of image, and obtain traffic parameter information corresponding to all vehicles in the to-be-detected area within the lane range;
the congestion degree evaluation module 1105 is configured to evaluate the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information.
Optionally, the road scene recognition module 1103 is specifically configured to input at least two frames of images in the multiple frames of images into a pre-constructed road scene recognition model, and obtain a scene type recognition result of each frame of image;
taking the scene type that appears most frequently among all the scene type recognition results as the road scene type of the area to be detected;
road parameter information corresponding to the road scene type is determined based on the road scene type and the recognition result.
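The majority vote over per-frame recognition results can be sketched as follows, reading the selection rule as a simple frequency count over all recognition results:

```python
from collections import Counter

def vote_scene_type(per_frame_types):
    # Take the scene type occurring most often across all per-frame
    # recognition results as the road scene type of the area to be detected.
    return Counter(per_frame_types).most_common(1)[0][0]

frames = ["intersection", "road_section", "intersection", "intersection"]
```

Using more than one frame this way makes the scene type robust to an occasional misclassified frame.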
Optionally, the vehicle tracking module 1104 is specifically configured to identify a vehicle within a lane range in a first frame image by using a pre-constructed target detection model, and generate a vehicle identifier, a vehicle type and an actual vehicle position of the first vehicle in the first frame image, where the first frame image is any frame image except for a last frame image;
predicting a predicted vehicle position of the first vehicle in a second frame image in the first frame image, wherein the second frame image is a next frame image of the first frame image;
when the actual vehicle position of the second vehicle in the second frame image is matched with the predicted vehicle position, determining that the second vehicle is the first vehicle;
or,
when a third vehicle exists in the second frame image and the actual vehicle position of the third vehicle in the second frame image cannot be matched with any predicted vehicle position in the first frame image, generating a vehicle identification of the third vehicle, and acquiring the vehicle type of the third vehicle and the actual vehicle position of the third vehicle;
And generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles, wherein the first vehicle is any vehicle in the first frame image, the second vehicle is any vehicle in the next frame image, and the third vehicle is any vehicle except the second vehicle in the next frame image.
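The patent does not specify how an actual vehicle position is "matched" with a predicted one; intersection-over-union between bounding boxes, as in common tracking-by-detection pipelines, is one plausible choice and is used in this sketch with illustrative names.

```python
def iou(a, b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detections(predicted, detections, next_id, threshold=0.3):
    """predicted: {vehicle_id: predicted box}; detections: actual boxes.
    Reuses an existing identifier when a detection overlaps a predicted
    position strongly enough, and generates a fresh identifier otherwise
    (the 'third vehicle' case in the text)."""
    assigned, free = {}, dict(predicted)
    for box in detections:
        best_id, best = None, threshold
        for vid, pbox in free.items():
            score = iou(box, pbox)
            if score >= best:
                best_id, best = vid, score
        if best_id is None:
            best_id, next_id = f"veh-{next_id}", next_id + 1
        else:
            free.pop(best_id)
        assigned[best_id] = box
    return assigned, next_id
```

Greedy matching is used here for brevity; a production tracker would typically solve the assignment jointly (e.g. Hungarian algorithm) and smooth predicted positions with a motion model.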
Optionally, the apparatus further comprises: a determination module 1106 and a statistics module 1107;
a determination module 1106 for determining an entry way for travel of the vehicle based on the actual vehicle position;
a statistics module 1107, configured to count vehicle identifications in a to-be-detected area within each preset time interval of the preset time period in the nth entrance lane, and take the number of the vehicle identifications as the number of vehicles;
the determining module 1106 is further configured to determine a total traffic volume of the nth entrance according to each vehicle type and a number of vehicles corresponding to the vehicle type in the nth entrance within the preset time period.
Optionally, the device comprises:
the obtaining module 1101 is further configured to obtain a first time when the ith vehicle enters the lane range and a second time when the ith vehicle exits the lane range, and take a time interval between the first time and the second time as a passing time of the ith vehicle, where the ith vehicle is any vehicle in the lane range, and i is a positive integer;
The statistics module 1107 is further configured to count vehicle identifications of the to-be-detected area, and take the number of the vehicle identifications as the number of vehicles.
Optionally, the apparatus further comprises:
the determining module 1106 is further configured to determine an average delay time of the mth vehicle of the nth entrance according to the number of entrance ways, the number of vehicles parked in the to-be-detected area within each preset time interval of the preset time periods in the nth entrance way, the total traffic volume of the nth entrance way, and the number of preset time intervals; determining the maximum vehicle average delay time of the area to be detected according to the vehicle average delay time of all the entrance ways;
the congestion degree evaluation module 1105 is specifically configured to evaluate the road traffic congestion degree of the area to be detected according to the maximum vehicle average delay time and the first corresponding relation between the maximum vehicle average delay time and the road traffic congestion degree, where the nth entrance lane is any entrance lane in the area to be detected, the mth vehicle is any vehicle in the nth entrance lane, and m and n are both positive integers.
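The delay formula itself appears only as an image in the original, so the sketch below substitutes the classical stopped-delay survey estimate, which is built from exactly the inputs listed above (interval length, per-interval stopped-vehicle counts, and the total traffic volume of the entrance lane); treat both the function names and the formula choice as assumptions.

```python
def avg_vehicle_delay_s(interval_s, stopped_counts, total_volume):
    # Each vehicle counted as stopped during a preset time interval is assumed
    # to wait roughly interval_s seconds; summing over all intervals and
    # dividing by the entrance lane's total traffic volume gives the average
    # delay per vehicle for that entrance lane.
    return interval_s * sum(stopped_counts) / total_volume

def max_avg_delay_s(per_entrance_delays):
    # The evaluation index for the first scene type is the maximum of the
    # average delays over all entrance lanes.
    return max(per_entrance_delays)

# One entrance lane observed over three 5 s intervals with 2, 3 and 1
# stopped vehicles and a total traffic volume of 30 vehicles
d = avg_vehicle_delay_s(5.0, [2, 3, 1], 30)
```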
Optionally, the device comprises:
the determining module 1106 is further configured to determine a total driving length of the area to be detected according to the number of the same-directional lanes and the length of the same-directional lanes; determining the regional average speed of the region to be detected according to the total running length, the passing time of each vehicle in the second scene type and the total number of the passing vehicles;
The congestion degree evaluation module 1105 is specifically further configured to evaluate the road traffic congestion degree of the area to be detected according to the area average speed and the second corresponding relationship between the area average speed and the road traffic congestion degree.
The functions executed by each component in the road traffic congestion level assessment apparatus provided in the embodiment of the present invention are described in detail in any of the above method embodiments, so that details are not repeated here.
With the road traffic congestion level assessment device provided by the embodiment of the invention, multi-frame images of an area to be detected within a preset time period are obtained in the above manner; lane range identification is performed on at least one of those frames to obtain the lane range of the area to be detected; road scene recognition is performed on at least two of the frames to obtain the road scene type of the area to be detected and the road parameter information within the lane range, the road parameter information indicating the road parameters corresponding to the road scene type; a vehicle tracking operation is performed on the vehicles in each frame to obtain the traffic parameter information corresponding to all vehicles within the lane range of the area to be detected; and the road traffic congestion degree of the area to be detected is evaluated according to the road scene type, the road parameter information and the traffic parameter information.

Because the road scene type of the area to be detected is identified, the specific scene type is known, and road parameter information and traffic parameter information matched to the characteristics of that scene type can be adopted, yielding an evaluation index suited to the scene. The resulting evaluation is more accurate than a single, undifferentiated evaluation mode and better matches actual road driving conditions. At the same time, because the lane range is identified, vehicles within it can be accurately tracked and recognized, and the lane parameter information and traffic parameter information within the lane range can be accurately counted, making the evaluation of the road traffic congestion degree more accurate. Based on an accurate evaluation result, corresponding measures for optimizing road traffic organization can be taken to relieve road traffic congestion, providing a reliable guarantee for traffic safety.
As shown in fig. 12, the embodiment of the present application provides an electronic device, which includes a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 complete communication with each other through the communication bus 114.
A memory 113 for storing a computer program;
in one embodiment of the present application, the processor 111 is configured to implement the road traffic congestion level assessment method provided in any one of the foregoing method embodiments when executing the program stored in the memory 113, where the method includes:
acquiring multi-frame images of a region to be detected within a preset time period;
carrying out lane range identification on at least one frame of image in the multi-frame images to obtain a lane range of a region to be detected;
carrying out road scene recognition on at least two frames of images in the multi-frame images, and acquiring the road scene type of the area to be detected and the road parameter information in the lane range, wherein the road parameter information is used for indicating the road parameter information corresponding to the road scene type;
carrying out vehicle tracking operation on vehicles in each frame of image, and acquiring traffic parameter information corresponding to all vehicles in a lane range of a region to be detected;
And evaluating the road traffic jam degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information.
Optionally, the step of performing road scene recognition on at least two frames of images in the multi-frame images to obtain the road scene type of the region to be detected and the road parameter information in the lane range includes:
inputting at least two frames of images in the multi-frame images into a pre-constructed road scene recognition model, and obtaining a scene type recognition result of each frame of image;
taking the scene type that appears most frequently among all the scene type recognition results as the road scene type of the area to be detected;
road parameter information corresponding to the road scene type is determined based on the road scene type and the recognition result.
Optionally, vehicle tracking operation is performed on vehicles in each frame of image, and traffic parameter information corresponding to all vehicles in the to-be-detected area within the lane range is obtained, which specifically includes:
identifying vehicles in a lane range in a first frame image by adopting a pre-constructed target detection model, and generating a vehicle identification, a vehicle type and an actual vehicle position of the first vehicle in the first frame image, wherein the first frame image is any frame image except the last frame image;
Predicting a predicted vehicle position of the first vehicle in a second frame image in the first frame image, wherein the second frame image is a next frame image of the first frame image;
when the actual vehicle position of the second vehicle in the second frame image is matched with the predicted vehicle position, determining that the second vehicle is the first vehicle;
or,
when a third vehicle exists in the second frame image and the actual vehicle position of the third vehicle in the second frame image cannot be matched with any predicted vehicle position in the first frame image, generating a vehicle identification of the third vehicle, and acquiring the vehicle type of the third vehicle and the actual vehicle position of the third vehicle;
and generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles, wherein the first vehicle is any vehicle in the first frame image, the second vehicle is any vehicle in the next frame image, and the third vehicle is any vehicle except the second vehicle in the next frame image.
Optionally, when the scene type is a first type scene type, generating traffic parameter information of the area to be detected according to vehicle identifications, vehicle types and actual vehicle positions of all vehicles, including:
Determining an entrance way for the vehicle to travel according to the actual vehicle position;
counting the vehicle identifications in the area to be detected in each preset time interval of the preset time period in the nth entrance way, and taking the number of the vehicle identifications as the number of vehicles;
determining the traffic volume of the nth entrance lane according to each vehicle type, the number of vehicles corresponding to each vehicle type, and the traffic-volume weight corresponding to each vehicle type within the preset time period in the nth entrance lane;
and constructing traffic parameter information according to the number of vehicles in each entrance way and the traffic volume of each entrance way.
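The traffic-volume weights for each vehicle type are not given numerically in this text, so the weight values below are placeholders; only the construction, a weighted count per vehicle type over the preset time period, follows the step described above.

```python
# Hypothetical passenger-car-equivalent style weights per vehicle type;
# the patent only states that each type has its own traffic-volume weight.
TRAFFIC_WEIGHTS = {"car": 1.0, "bus": 2.0, "truck": 1.5}

def entrance_traffic_volume(type_counts, weights=TRAFFIC_WEIGHTS):
    # Weighted sum of the vehicle counts of each type observed in one
    # entrance lane within the preset time period.
    return sum(weights[t] * n for t, n in type_counts.items())
```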
Optionally, when the scene type is the second type scene type, generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all the vehicles, including:
acquiring a first moment when an ith vehicle enters a lane range and a second moment when the ith vehicle exits the lane range, and taking a time interval between the first moment and the second moment as a passing time of the ith vehicle, wherein the ith vehicle is any vehicle in the lane range, and i is a positive integer;
counting the vehicle identifications of the area to be detected, and taking the number of the vehicle identifications as the number of the vehicles;
And constructing traffic parameter information according to the passing time of each vehicle and the number of vehicles.
Optionally, when the road scene type is the first scene type, the road parameter information includes: the number of entrance lanes in the area to be detected; the traffic parameter information includes: the number of vehicles parked in the area to be detected within each preset time interval of the preset time period in the nth entrance lane, the total traffic volume of the nth entrance lane, and the number of preset time intervals. Evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information then includes the following steps:
determining the average delay time of the mth vehicle of the nth entrance according to the number of the entrance, the number of vehicles parked in the area to be detected in each preset time interval of the preset time period in the nth entrance, the total traffic volume of the nth entrance and the number of the preset time intervals;
determining the maximum vehicle average delay time of the area to be detected according to the vehicle average delay time of all the entrance ways;
and evaluating the road traffic congestion degree of the area to be detected according to the maximum vehicle average delay time and the first corresponding relation between the maximum vehicle average delay time and the road traffic congestion degree, wherein the nth entrance road is any entrance road in the area to be detected, the mth vehicle is any vehicle in the nth entrance road, and m and n are positive integers.
Optionally, when the road scene type is the second scene type, the road parameter information includes: the number of same-direction lanes in the area to be detected and the same-direction lane length of the area to be detected; the traffic parameter information includes: the passing time of each vehicle in the second scene type and the total number of passing vehicles. Evaluating the road traffic congestion degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information then includes the following steps:
determining the total running length of the area to be detected according to the number of the homodromous lanes and the homodromous lane length;
determining the regional average speed of the region to be detected according to the total running length, the passing time of each vehicle in the second scene type and the total number of the passing vehicles;
and evaluating the road traffic congestion degree of the area to be detected according to the area average speed and the second corresponding relation between the area average speed and the road traffic congestion degree.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the road traffic congestion level assessment method provided by any one of the method embodiments described above.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of embodiments of the present invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for estimating road traffic congestion level, the method comprising:
acquiring multi-frame images of a region to be detected within a preset time period;
carrying out lane range identification on at least one frame of image in the multi-frame images to obtain the lane range of the region to be detected;
carrying out road scene recognition on at least two frames of images in the multi-frame images, and acquiring the road scene type of the region to be detected and the road parameter information in the lane range, wherein the road parameter information is used for indicating the road parameter information corresponding to the road scene type;
carrying out vehicle tracking operation on vehicles in each frame of image, and acquiring traffic parameter information corresponding to all vehicles in the lane range of the region to be detected;
and evaluating the road traffic jam degree of the area to be detected according to the road scene type, the road parameter information and the traffic parameter information.
2. The method according to claim 1, wherein the performing road scene recognition on at least two frames of images in the multi-frame image to obtain the road scene type of the region to be detected and the road parameter information in the lane range includes:
Inputting at least two frames of images in the multi-frame images into a pre-constructed road scene recognition model, and obtaining a scene type recognition result of each frame of image;
taking the scene type that appears most frequently among all the scene type identification results as the road scene type of the region to be detected;
and determining road parameter information corresponding to the road scene type based on the road scene type and the identification result.
3. The method according to claim 1 or 2, wherein the performing a vehicle tracking operation on the vehicles in each frame of image, and obtaining traffic parameter information corresponding to all vehicles in the lane range in the to-be-detected area, specifically includes:
identifying vehicles in a lane range in a first frame image by adopting a pre-constructed target detection model, and generating a vehicle identification, a vehicle type and an actual vehicle position of a first vehicle in the first frame image, wherein the first frame image is any frame image except the last frame image;
predicting a predicted vehicle position of the first vehicle in a second frame image in a first frame image, wherein the second frame image is a next frame image of the first frame image;
When the actual vehicle position of the second vehicle in the second frame image is matched with the predicted vehicle position, determining that the second vehicle is the first vehicle;
or,
when a third vehicle exists in the second frame image and the actual vehicle position of the third vehicle in the second frame image cannot be matched with any predicted vehicle position in the first frame image, generating a vehicle identification of the third vehicle, and acquiring the vehicle type of the third vehicle and the actual vehicle position of the third vehicle;
and generating traffic parameter information of the area to be detected according to the vehicle identifications, the vehicle types and the actual vehicle positions of all vehicles, wherein the first vehicle is any vehicle in the first frame image, the second vehicle is any vehicle in the next frame image, and the third vehicle is any vehicle except the second vehicle in the next frame image.
4. The method according to claim 3, wherein when the scene type is a first type of scene type, the generating traffic parameter information of the area to be detected according to the vehicle identifications of all vehicles, the vehicle types, and the actual vehicle positions includes:
Determining an entrance way for the vehicle to travel according to the actual vehicle position;
counting the vehicle identifications in the to-be-detected area in each preset time interval of the preset time period in the nth entrance way, and taking the number of the vehicle identifications as the number of vehicles;
determining the traffic volume of the nth entrance according to each vehicle type, the number of vehicles corresponding to each vehicle type respectively and the traffic volume weight corresponding to each vehicle type respectively in the preset time period in the nth entrance;
and constructing the traffic parameter information according to the number of vehicles in each entrance way and the traffic volume of each entrance way.
5. The method according to claim 3, wherein when the scene type is a second type scene type, the generating traffic parameter information of the area to be detected according to the vehicle identifications of all vehicles, the vehicle types, and the actual vehicle positions includes:
acquiring a first moment when an ith vehicle drives into the lane range and a second moment when the ith vehicle drives out of the lane range, and taking a time interval between the first moment and the second moment as a passing time of the ith vehicle, wherein the ith vehicle is any vehicle in the lane range, and i is a positive integer;
Counting the vehicle identifications of the areas to be detected, and taking the number of the vehicle identifications as the number of vehicles;
and constructing the traffic parameter information according to the passing time and the number of vehicles of each vehicle.
6. The method of claim 4, wherein when the road scene type is a first type of scene type, the road parameter information comprises: the number of entrance lanes in the area to be detected; the traffic parameter information comprises: the number of vehicles parked in the to-be-detected area within each preset time interval of the preset time period in the nth entrance lane, the total traffic volume of the nth entrance lane, and the number of preset time intervals; and the estimating the road traffic congestion degree of the to-be-detected area according to the road scene type, the road parameter information and the traffic parameter information comprises:
determining the average delay time of the mth vehicle of the nth entrance according to the number of the entrance, the number of vehicles parked in the to-be-detected area in each preset time interval of the preset time period in the nth entrance, the total traffic volume of the nth entrance and the number of the preset time intervals;
Determining the maximum vehicle average delay time of the to-be-detected area according to the vehicle average delay time of all the entrance ways;
and evaluating the road traffic congestion degree of the area to be detected according to the maximum vehicle average delay time and the first corresponding relation between the maximum vehicle average delay time and the road traffic congestion degree, wherein the nth entrance road is any entrance road in the area to be detected, the mth vehicle is any vehicle in the nth entrance road, and m and n are positive integers.
7. The method of claim 5, wherein when the road scene type is a second category scene type, the road parameter information comprises: the number of same-direction lanes in the area to be detected and the length of the same-direction lanes in the area to be detected; the traffic parameter information comprises: the passing time of each vehicle in the second scene type and the total number of passing vehicles; and the estimating the road traffic congestion degree of the to-be-detected area according to the road scene type, the road parameter information and the traffic parameter information comprises:
determining the total running length of the area to be detected according to the number of the same-direction lanes and the length of the same-direction lanes;
determining the regional average vehicle speed of the region to be detected according to the total running length, the passing time of each vehicle in the second type of scene, and the total number of passing vehicles;
and evaluating the road traffic congestion degree of the region to be detected according to the regional average vehicle speed and a second correspondence between the regional average vehicle speed and the road traffic congestion degree.
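Claim 7's speed computation can likewise be sketched. The excerpt does not spell out how the total running length enters the formula; the sketch below assumes the standard space-mean-speed reading, where each tracked vehicle traverses the segment once, so the average speed is total distance traveled divided by total passing time. All names and the speed bands are illustrative assumptions.

```python
def total_running_length(n_lanes, lane_length_m):
    """Claim 7: total running length of the same-direction segment."""
    return n_lanes * lane_length_m

def area_average_speed(lane_length_m, passing_times_s):
    """Space-mean speed (m/s) of the region to be detected.

    Assumed reading: each vehicle covers the segment length once, so
    average speed = (number of vehicles * segment length) / total time.
    """
    total_time = sum(passing_times_s)
    if total_time == 0:
        return 0.0
    return len(passing_times_s) * lane_length_m / total_time

def congestion_from_speed(v_mps):
    """Illustrative (non-patent) speed bands, in m/s."""
    if v_mps > 11.1:   # above roughly 40 km/h
        return "free-flowing"
    if v_mps > 5.6:    # above roughly 20 km/h
        return "slow"
    return "congested"

# A 500 m segment with two same-direction lanes; three vehicles take
# 40 s, 50 s and 60 s to pass through the region.
length = total_running_length(2, 500)
v = area_average_speed(500, [40, 50, 60])   # 1500 / 150 = 10 m/s
level = congestion_from_speed(v)
```

Under this reading the lane count affects capacity bookkeeping (the total running length) but not the space-mean speed itself, which depends only on distance per vehicle and passing times.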
8. A road traffic congestion level assessment apparatus, the apparatus comprising:
the acquisition module is used for acquiring multi-frame images of the region to be detected in a preset time period;
the lane range identification module is used for performing lane range identification on at least one frame of the multi-frame images to obtain the lane range of the region to be detected;
the road scene recognition module is used for performing road scene recognition on at least two frames of the multi-frame images to obtain the road scene type of the region to be detected and the road parameter information within the lane range, wherein the road parameter information indicates the road parameters corresponding to the road scene type;
the vehicle tracking module is used for performing a vehicle tracking operation on the vehicles in each frame of image to obtain the traffic parameter information corresponding to all vehicles within the lane range of the region to be detected;
and the congestion degree evaluation module is used for evaluating the road traffic congestion degree of the region to be detected according to the road scene type, the road parameter information and the traffic parameter information.
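The five modules of claim 8 form one evaluation pipeline: acquire frames, identify the lane range, recognize the road scene, track vehicles, then evaluate congestion. A minimal structural sketch, with stand-in callables (the patent prescribes no concrete detector or tracker implementation):

```python
class CongestionEvaluator:
    """Hypothetical wiring of the five modules in claim 8."""

    def __init__(self, acquire, find_lanes, classify_scene, track_vehicles, evaluate):
        self.acquire = acquire                # -> frames over a preset period
        self.find_lanes = find_lanes          # frame -> lane range
        self.classify_scene = classify_scene  # frames, lanes -> (scene type, road params)
        self.track_vehicles = track_vehicles  # frames, lanes -> traffic params
        self.evaluate = evaluate              # scene, road, traffic -> congestion level

    def run(self):
        frames = self.acquire()
        # Lane range from at least one frame; scene type from at least two.
        lanes = self.find_lanes(frames[0])
        scene, road_params = self.classify_scene(frames[:2], lanes)
        traffic = self.track_vehicles(frames, lanes)
        return self.evaluate(scene, road_params, traffic)

# Wiring with placeholder callables, for illustration only.
evaluator = CongestionEvaluator(
    acquire=lambda: ["frame0", "frame1", "frame2"],
    find_lanes=lambda frame: "lane-range",
    classify_scene=lambda frames, lanes: ("intersection", {"n_entrances": 4}),
    track_vehicles=lambda frames, lanes: {"volume": 120},
    evaluate=lambda scene, road, traffic: "slow",
)
level = evaluator.run()
```

Each module is independently replaceable, which matches the device claim's module-by-module structure.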
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the steps of the road traffic congestion level assessment method according to any one of claims 1 to 7 when executing the program stored in the memory.
10. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the steps of the road traffic congestion level assessment method according to any one of claims 1 to 7.
CN202310108336.9A 2023-02-01 2023-02-01 Road traffic congestion level evaluation method and device, electronic equipment and storage medium Pending CN116128360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310108336.9A CN116128360A (en) 2023-02-01 2023-02-01 Road traffic congestion level evaluation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116128360A true CN116128360A (en) 2023-05-16

Family

ID=86307912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310108336.9A Pending CN116128360A (en) 2023-02-01 2023-02-01 Road traffic congestion level evaluation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116128360A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117198058A (en) * 2023-11-01 2023-12-08 合肥师范学院 Road traffic intelligent supervision system based on remote sensing image
CN117198058B (en) * 2023-11-01 2024-01-30 合肥师范学院 Road traffic intelligent supervision system based on remote sensing image
CN117612140A (en) * 2024-01-19 2024-02-27 福思(杭州)智能科技有限公司 Road scene identification method and device, storage medium and electronic equipment
CN117612140B (en) * 2024-01-19 2024-04-19 福思(杭州)智能科技有限公司 Road scene identification method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN109670376B (en) Lane line identification method and system
CN116128360A (en) Road traffic congestion level evaluation method and device, electronic equipment and storage medium
US11380105B2 (en) Identification and classification of traffic conflicts
KR102167291B1 (en) System and method for providing road status information
CN111325978B (en) Whole-process monitoring and warning system and method for abnormal behaviors of vehicles on expressway
CN107301776A (en) Track road conditions processing and dissemination method based on video detection technology
CN105608431A (en) Vehicle number and traffic flow speed based highway congestion detection method
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
KR102167292B1 (en) Apparatus and method for providing road status information
CN112462774A (en) Urban road supervision method and system based on unmanned aerial vehicle navigation following and readable storage medium
CN112883936A (en) Method and system for detecting vehicle violation
CN113743469A (en) Automatic driving decision-making method fusing multi-source data and comprehensive multi-dimensional indexes
JP4292250B2 (en) Road environment recognition method and road environment recognition device
CN115618932A (en) Traffic incident prediction method and device based on internet automatic driving and electronic equipment
CN114973659A (en) Method, device and system for detecting indirect event of expressway
US20210397187A1 (en) Method and system for operating a mobile robot
CN111524350A (en) Method, system, terminal device and medium for detecting abnormal driving condition of vehicle and road cooperation
CN111383248A (en) Method and device for judging red light running of pedestrian and electronic equipment
CN116631187B (en) Intelligent acquisition and analysis system for case on-site investigation information
CN115440071B (en) Automatic driving illegal parking detection method
Zhang et al. Machine learning and computer vision-enabled traffic sensing data analysis and quality enhancement
CN112418000B (en) Bad driving behavior detection method and system based on monocular camera
CN113128847A (en) Entrance ramp real-time risk early warning system and method based on laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination