CN112132071A - Processing method, device and equipment for identifying traffic jam and storage medium - Google Patents

Processing method, device and equipment for identifying traffic jam and storage medium

Info

Publication number
CN112132071A
CN112132071A (application CN202011034400.6A)
Authority
CN
China
Prior art keywords
frame
frame data
current
data
target
Prior art date
Legal status
Pending
Application number
CN202011034400.6A
Other languages
Chinese (zh)
Inventor
侯凯
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN202011034400.6A
Publication of CN112132071A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • G08G 1/0133 Traffic data processing for classifying traffic situation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Abstract

The embodiment of the invention provides a processing method, a device, equipment and a storage medium for identifying traffic jam, wherein the method comprises the following steps: acquiring current frame image data and current template frame data of a target road, wherein the current template frame data comprises image frame data corresponding to a target vehicle to be tracked; determining frame data to be detected corresponding to the current frame according to the current frame image data; determining a tracking result of a target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected; and if the target road is determined to be congested according to the tracking result corresponding to each frame, corresponding handling processing is carried out. The real-time detection of road traffic jam is effectively realized, and the detection accuracy is improved.

Description

Processing method, device and equipment for identifying traffic jam and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a processing method, an apparatus, a device, and a storage medium for identifying traffic congestion.
Background
With the rapid development of the economy and technology, people's living standards have gradually improved, and more and more people own automobiles for convenient travel. The continuous increase of private vehicles places great pressure on urban traffic; traffic congestion has become the norm, troubling people's travel and hindering the rapid development of cities.
In order to alleviate traffic congestion and better serve people's travel, congestion detection technology has gradually developed. Congestion detection methods in the prior art generally fall into two categories: counting vehicles with ground induction coils, microwave detectors, radar, laser detectors and the like, or detecting congestion with conventional image processing, for example calculating the number of vehicles with a Gaussian mixture model (GMM) clustering algorithm and acquiring the traffic flow speed with the optical flow method (OPM).
However, the congestion detection methods in the prior art often yield inaccurate detection results.
Disclosure of Invention
The embodiment of the invention provides a processing method, a processing device, processing equipment and a storage medium for identifying traffic jam, so as to overcome the defect in the prior art that congestion detection is not accurate enough.
In a first aspect, an embodiment of the present invention provides a processing method for identifying traffic congestion, including:
acquiring current frame image data and current template frame data of a target road, wherein the current template frame data comprises image frame data corresponding to a target vehicle to be tracked;
determining frame data to be detected corresponding to the current frame according to the current frame image data;
determining a tracking result of a target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected;
and if the target road is determined to be congested according to the tracking result corresponding to each frame, corresponding handling processing is carried out.
Optionally, determining, according to the current frame image data, frame data to be detected corresponding to the current frame, including:
preprocessing the current frame image data to obtain first image data in a first preset format;
carrying out vehicle detection on the first image data to obtain a detected first vehicle;
and taking the image frame data corresponding to the first vehicle as the frame data to be detected corresponding to the current frame.
Optionally, after determining a tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected, the method further includes:
supplementing image frame data corresponding to a first vehicle which does not belong to the target vehicle into the current template frame data according to the tracking result of the target vehicle corresponding to the current frame to obtain supplemented template frame data;
and taking the supplemented template frame data as template frame data of the next frame.
Optionally, the method further comprises:
preprocessing image frame data corresponding to a first vehicle which does not belong to a target vehicle to obtain second image frame data in a second preset format;
the supplementing image frame data corresponding to the first vehicle not belonging to the target vehicle into the current template frame data to obtain supplemented template frame data includes:
and supplementing the second image frame data into the current template frame data to obtain supplemented template frame data.
Optionally, performing vehicle detection on the first image data to obtain a detected first vehicle, including:
detecting the first image data by adopting a preset CenterNet algorithm to obtain each detected first target frame region and corresponding confidence;
and determining the detected first vehicle according to the corresponding confidence degree of each first target frame region.
Optionally, the tracking result includes a position area of each tracked target vehicle in the current frame and an untracked target vehicle;
judging whether the target road is congested according to the tracking result corresponding to each frame, including:
determining the average speed of the target road vehicle according to the tracking result corresponding to each frame;
and determining whether the target road is congested or not according to the average speed.
Optionally, the determining whether the target road is congested according to the average speed includes:
and if the average speed is less than a preset speed threshold value, determining that the target road is congested.
Optionally, before determining whether the target road is congested according to the tracking result corresponding to each frame, the method further includes:
determining whether each frame is congested or not according to the number of vehicles detected in each frame;
the judging whether the target road is congested according to the tracking result corresponding to each frame includes:
and judging whether the target road is congested or not according to the number of congested frames and the tracking result corresponding to each frame.
Optionally, the determining, according to the current template frame data and the frame data to be detected, a tracking result of the target vehicle corresponding to the current frame includes:
and determining a tracking result of the target vehicle corresponding to the current frame by adopting a preset SiamRPN++ target tracking algorithm according to the current template frame data and the frame data to be detected.
In a second aspect, an embodiment of the present invention provides a processing apparatus for identifying traffic congestion, including:
the system comprises an acquisition module, a tracking module and a tracking module, wherein the acquisition module is used for acquiring current frame image data and current template frame data of a target road, and the current template frame data comprises image frame data corresponding to a target vehicle to be tracked;
the detection module is used for determining frame data to be detected corresponding to the current frame according to the current frame image data;
the tracking module is used for determining a tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected;
and the processing module is used for carrying out corresponding handling processing if the target road congestion is determined according to the tracking result corresponding to each frame.
Optionally, the detection module is specifically configured to:
preprocessing the current frame image data to obtain first image data in a first preset format;
carrying out vehicle detection on the first image data to obtain a detected first vehicle;
and taking the image frame data corresponding to the first vehicle as the frame data to be detected corresponding to the current frame.
Optionally, the processing module is further configured to:
supplementing image frame data corresponding to a first vehicle which does not belong to the target vehicle into the current template frame data according to the tracking result of the target vehicle corresponding to the current frame to obtain supplemented template frame data;
and taking the supplemented template frame data as template frame data of the next frame.
Optionally, the processing module is further configured to pre-process image frame data corresponding to a first vehicle that does not belong to the target vehicle, so as to obtain second image frame data in a second preset format;
the processing module is specifically configured to: and supplementing the second image frame data into the current template frame data to obtain supplemented template frame data.
Optionally, the detection module is specifically configured to:
detecting the first image data by adopting a preset CenterNet algorithm to obtain each detected first target frame region and corresponding confidence;
and determining the detected first vehicle according to the corresponding confidence degree of each first target frame region.
Optionally, the tracking result includes a position area of each tracked target vehicle in the current frame and an untracked target vehicle; the processing module is specifically configured to:
determining the average speed of the target road vehicle according to the tracking result corresponding to each frame;
and determining whether the target road is congested or not according to the average speed.
Optionally, the processing module is specifically configured to: and if the average speed is less than a preset speed threshold value, determining that the target road is congested.
Optionally, the processing module is further configured to determine whether each frame is congested according to the number of vehicles detected in each frame;
the processing module is specifically configured to: and judging whether the target road is congested or not according to the number of congested frames and the tracking result corresponding to each frame.
Optionally, the tracking module is specifically configured to:
and determining a tracking result of the target vehicle corresponding to the current frame by adopting a preset SiamRPN++ target tracking algorithm according to the current template frame data and the frame data to be detected.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the method as set forth in the first aspect above and in various possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the method according to the first aspect and various possible designs of the first aspect is implemented.
According to the processing method, device, equipment and storage medium for identifying traffic jam provided by the embodiment of the invention, vehicle detection is carried out according to the current frame image data of the target road to obtain the frame data to be detected corresponding to the current frame, the target vehicle is tracked according to the current template frame data and the frame data to be detected to obtain the tracking result of the target vehicle corresponding to the current frame, and congestion of the target road is determined according to the tracking result corresponding to each frame so that corresponding handling is carried out; real-time detection of road traffic congestion is thus effectively realized, and the detection accuracy is improved. Vehicle detection and counting are carried out in the ROI region based on target detection, so small target objects are detected more accurately and the number of vehicles is reflected more accurately than in the prior art. The vehicle speed is calculated by target tracking, which, compared with the prior art, can balance the overall traffic flow speed, involves a small calculation amount and has high real-time performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of a processing system according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a processing method for identifying traffic congestion according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a processing method for identifying traffic congestion according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a SiamRPN++ network according to an embodiment of the present invention;
fig. 5 is a block diagram of an RPN block according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an exemplary overall process flow provided by an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a processing device for identifying traffic congestion according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
With the above figures, certain embodiments of the invention have been illustrated and are described in more detail below. The drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the invention to those skilled in the art with reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms to which the present invention relates will be explained first:
GMM: Gaussian Mixture Model, an adaptive background mixture model for real-time tracking; a Gaussian clustering algorithm.
OPM: optical flow method.
CenterNet algorithm: an anchor-free target detection algorithm. The input image is 618x618; after feature extraction by the network, the output is divided into three branches: one branch regresses the height and width of the bounding box, another regresses the center point offset, and the last predicts the category heatmap.
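As an illustration of the above description, the following is a minimal Python sketch of how the three output branches of such an anchor-free detector could be decoded into boxes; the tensor layout, the score threshold and the omission of the usual heatmap max-pooling step are simplifying assumptions, not details taken from the patent.

```python
import numpy as np

def decode_centernet_outputs(heatmap, wh, offset, score_thresh=0.3):
    """Decode anchor-free center-point detector outputs into boxes.

    heatmap: (H, W, C) per-class center-point scores
    wh:      (H, W, 2) predicted box width/height at each location
    offset:  (H, W, 2) sub-pixel center offsets
    Returns a list of (x1, y1, x2, y2, class_id, score).
    """
    boxes = []
    for c in range(heatmap.shape[2]):
        ys, xs = np.where(heatmap[:, :, c] >= score_thresh)
        for y, x in zip(ys, xs):
            cx = x + offset[y, x, 0]          # refine the center with the offset branch
            cy = y + offset[y, x, 1]
            w, h = wh[y, x]                    # box size from the width/height branch
            boxes.append((cx - w / 2, cy - h / 2,
                          cx + w / 2, cy + h / 2,
                          c, float(heatmap[y, x, c])))
    return boxes
```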
SiamRPN++ algorithm: a target tracking algorithm based on a twin (Siamese) network, comprising two feature extraction branches. The first branch is the input of the picture to be detected, with an input size of 255×255; ResNet-50 feature extraction is then carried out, and the feature information of three key layers of the picture is extracted using the features obtained by the deep network. The second branch is the input of the template frame and is used for feature fusion and matching with the first branch; its structure is also a ResNet-50 network. Finally the two branches are convolved: specifically, the outputs of the three feature layers extracted from the template frame are used as convolution kernels over the image input branch, and the two resulting branches are respectively used for the coordinate regression of the target frame and the classification of the corresponding categories. The specific network architecture is described below.
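To make the template-as-convolution-kernel idea concrete, here is a minimal Python sketch of a naive cross-correlation between template features and search-region features; it collapses the multi-channel, multi-layer depth-wise correlation of SiamRPN++ into a single response map, so it illustrates the matching principle only, not the actual network.

```python
import numpy as np

def cross_correlate(search_feat, template_feat):
    """Naive cross-correlation: slide the template feature map over the
    search feature map and sum the channel-wise products.

    search_feat:   (C, Hs, Ws) features of the frame data to be detected
    template_feat: (C, Ht, Wt) features of the template frame (the kernel)
    Returns a (Hs-Ht+1, Ws-Wt+1) response map; its peak indicates where
    the tracked target most likely lies in the search region.
    """
    C, Hs, Ws = search_feat.shape
    _, Ht, Wt = template_feat.shape
    out = np.zeros((Hs - Ht + 1, Ws - Wt + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search_feat[:, i:i + Ht, j:j + Wt] * template_feat)
    return out
```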
Bconfidence: the confidence, which specifically represents the degree of confidence that a target detection box belongs to a certain class.
With the rapid development of the economy and technology, people's living standards have gradually improved, and more and more people own automobiles for convenient travel. The continuous increase of private vehicles places great pressure on urban traffic; traffic congestion has become the norm, troubling people's travel and hindering the rapid development of cities. In order to relieve traffic congestion and better serve people's travel, congestion detection technology has gradually developed. One congestion detection method in the prior art uses ground induction coils, microwave detectors, radar, laser detectors and the like to count vehicles; another uses conventional image processing, for example calculating the number of vehicles with a Gaussian mixture model (GMM) and acquiring the traffic flow speed with the optical flow method (OPM). However, the detection results are often not accurate enough.
In order to solve the problems, the inventor creatively discovers that the deep learning-based method can more accurately detect the congestion degree of all the practical scenes such as express ways and ground roads, can accurately provide the direction of vehicle congestion, can perform real-time congestion alarm, and can be used for subsequent intelligent traffic signal lamp control and reducing the congestion condition. Therefore, the embodiment of the invention provides a processing method for identifying traffic jam, which combines the target detection and target tracking technologies to realize rapid identification of traffic jam conditions and solve the problem of inaccurate jam detection in the prior art.
The processing method for identifying traffic jam provided by the embodiment of the invention is suitable for identifying and handling the traffic congestion condition of any road, and for any application scene in which congestion needs to be identified, such as a scene in which a map displays the congestion condition of each road in real time, a scene in which traffic management controls the duration of a signal lamp according to the congestion condition, or a scene in which related departments need to monitor the traffic congestion of each road. Fig. 1 is a schematic diagram of the architecture of a processing system according to an embodiment of the present invention. The processing system can comprise road monitoring equipment and electronic equipment (such as a server), and can also comprise control equipment of the target road traffic signal lamp, terminal equipment of related personnel, and map service equipment. The road monitoring equipment is arranged at a corresponding position of the target road and used for shooting real-time video stream data of the target road and sending it to the electronic equipment; the electronic equipment identifies whether the target road is congested according to the video stream data, and if so, corresponding handling can be carried out according to a preset rule. For example, the congestion condition may be sent to the control equipment of the traffic signal lamp of the target road so that the control equipment controls the duration of the signal lamp accordingly; the congestion condition may also be sent to the terminal equipment of related personnel so that it is displayed to them and staff can be assigned to the target road to direct traffic; the congestion condition may also be displayed on a map so that users of the map learn of it in time. The specific congestion handling mode can be set according to actual demands. The video stream data captured by the road monitoring device may be sent to the electronic device in real time or at preset time intervals (for example a short time, such as several seconds, which may be set according to actual requirements). After the electronic device acquires the video stream data of the target road, processing may be performed frame by frame, the frame currently to be processed being referred to as the current frame. Specifically, the electronic device can obtain the current frame image data of the target road and the current template frame data, where the current template frame data comprises the image frame data corresponding to the target vehicles to be tracked. The template frame data is a dynamically changing data set: it is continuously supplemented as each frame is processed and can also shrink as target vehicles leave the monitoring range; for the current frame, the corresponding current template frame data may be the data obtained after supplementation during the processing of the previous frame. Specifically, the initial template frame data may be determined according to the first frame of image data and used as the current template frame data corresponding to the second frame of image data.
Specifically, the image data of the first frame may be preprocessed to obtain image data of a preset format, the image data of the preset format is subjected to vehicle detection to obtain detected vehicles, the image frame data corresponding to the detected vehicles is used as initial template frame data to track a target vehicle in the initial template frame data, the image frame data of a vehicle newly appearing in the second frame is supplemented into the template frame data, the supplemented template frame data is used as current template frame data corresponding to the third frame, and so on, and description is omitted. After current frame image data and current template frame data of a target road are obtained, preprocessing is carried out on the current frame image data, vehicle detection is carried out on the preprocessed image data, a detected first vehicle is obtained, image frame data corresponding to the first vehicle is used as frame data to be detected, the image frame data corresponding to the first vehicle refers to area image data of the first vehicle in the current frame image data, and the detected area frame corresponding to the first vehicle is image data segmented from the current frame image data. And determining a tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected. After the tracking results corresponding to the frames in the preset number or the preset duration are obtained, congestion judgment can be performed once, and if the congestion of the target road is determined according to the tracking results corresponding to the frames, corresponding coping processing is performed. The preset number and the preset duration can be set according to actual requirements.
Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the following examples, "plurality" means two or more unless specifically limited otherwise.
The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
An embodiment of the present invention provides a processing method for identifying traffic congestion, which is used for identifying a traffic congestion condition of a road. The execution subject of the embodiment is a processing device for identifying traffic congestion, and the device may be disposed in an electronic device, and the electronic device may be any device having a corresponding processing function, such as a server, a desktop computer, a notebook computer, and the like.
As shown in fig. 2, a schematic flow chart of a processing method for identifying traffic congestion provided in this embodiment is provided, where the method includes:
step 101, obtaining current frame image data of a target road and current template frame data, wherein the current template frame data comprises image frame data corresponding to a target vehicle to be tracked.
Specifically, the real-time video stream data of the target road may be captured by the road monitoring device and sent to the electronic device, the video stream data captured by the road monitoring device may be sent to the electronic device in real time, or sent to the electronic device at preset time intervals (for example, short time, for example, several seconds, which may be set according to actual requirements), and may be sent continuously frame by frame, or sent one frame at certain frame intervals. The specific acquisition mode of the current frame image data is not limited. Taking sending a segment of video stream data as an example, after the electronic device acquires the video stream data of the target road, the electronic device may process the video stream data frame by frame, and a frame to be processed currently is referred to as a current frame. The current frame image data may be original frame image data in the video stream data, or may be an interested area in the original frame image data, and may be specifically set according to actual requirements. The region of interest can be preset, for example, one frame of original image data is shot to cover the vehicle driving road and the sidewalk, the image of the vehicle driving road is set as the region of interest, and only the region of interest is subjected to subsequent processing, so that the calculation amount is effectively reduced, and the processing efficiency and accuracy are improved.
The template frame data is a dynamically changing data set, and is continuously supplemented with each frame, and can also be reduced with the disappearance of the target vehicle in the monitoring range, and for the current frame, the corresponding current template frame data can be obtained after being supplemented in the previous frame processing process. Specifically, the initial template frame data may be determined according to the first frame of image data, and is used as the current template frame data corresponding to the second frame of image data. Specifically, the image data of the first frame may be preprocessed to obtain image data of a preset format, the image data of the preset format is subjected to vehicle detection to obtain detected vehicles, the image frame data corresponding to the detected vehicles is used as initial template frame data to track a target vehicle in the initial template frame data, the image frame data of a vehicle newly appearing in the second frame is supplemented into the template frame data, the supplemented template frame data is used as current template frame data corresponding to the third frame, and so on, and description is omitted. The image frame data corresponding to the vehicle is area image data obtained when the vehicle is detected in the original image data, namely a small image obtained by scratching the vehicle area from the original image.
And step 102, determining frame data to be detected corresponding to the current frame according to the current frame image data.
Specifically, the frame data to be detected includes image frame data corresponding to each vehicle detected in the current frame image data. The method comprises the steps of detecting vehicles of current frame image data to obtain detected vehicles, and using image frame data corresponding to the detected vehicles as to-be-detected frame data.
Alternatively, any practicable object detection algorithm may be employed to detect the vehicle in the current frame image data.
Illustratively, the CenterNet algorithm is employed for vehicle detection.
Alternatively, the number of vehicles detected in the current frame image data may be acquired and stored as a factor for judging congestion. For example, if the number of detected vehicles in one frame of image data exceeds a preset threshold, the target road is considered to be in a congestion state in that frame; and if the number of detected vehicles in multiple frames of image data all exceeds the preset threshold, that is, multiple frames are in the congestion state, it can be determined that the target road is currently in a congestion state.
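A minimal Python sketch of this per-frame count criterion; the threshold value is an illustrative placeholder, not a value from the embodiment.

```python
def frame_congested(num_detected_vehicles, vehicle_threshold=20):
    # A single frame is flagged as congested when the number of detected
    # vehicles exceeds the threshold; 20 is an illustrative value only.
    return num_detected_vehicles > vehicle_threshold
```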
And 103, determining a tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected.
Specifically, target tracking is performed on the target vehicle in the frame data to be detected based on the current template frame data, so that a tracking result of the target vehicle corresponding to the current frame is determined. The tracking result of the target vehicle may specifically include the target vehicles tracked in the current frame, and the position areas of the tracked target vehicles in the current frame, and may further include the target vehicles not tracked.
For example, the target vehicle may be tracked using the SiamRPN++ target tracking algorithm. Specifically, the current template frame data is input into a template branch of the SiamRPN++ network, and the frame data to be detected is input into a detection branch of the SiamRPN++ network, so as to obtain the tracking result of the target vehicle.
And 104, if the target road is determined to be congested according to the tracking result corresponding to each frame, performing corresponding handling processing.
For example, the determining of the congestion of the target road according to the tracking result corresponding to each frame may specifically be that, according to the condition (position in each frame) of the target vehicle appearing in each frame and the time interval between two frames, the traveling speed of each target vehicle may be determined, and then the average speed of the target road vehicle may be determined, and if the average speed is too slow, the current target road may be considered to be in a congestion state.
For example, the average speed may be combined with the number of vehicles detected at each frame to determine whether the target link is congested.
For example, if the average speed is less than the preset speed threshold and the number of the vehicles detected in consecutive multiple frames is greater than the preset number threshold, it is determined that the target road is congested.
The congestion judgment rule may be set according to an actual demand, and this embodiment is not limited.
Coping processes may include, but are not limited to, including: the congestion condition is sent to the control equipment of the traffic signal lamp of the target road, so that the control equipment can control the time length of the signal lamp according to the congestion condition, the congestion condition can also be sent to the terminal equipment of related personnel, so that the terminal equipment is displayed to the related personnel, the related personnel can assign the staff to go to the target road to command traffic, the congestion condition can also be displayed on a map, and a user using the map can know the congestion condition in time, and the like. The specific congestion handling mode can be set according to actual demands.
Alternatively, during the processing of each frame, tracking information such as the detected position information of the area frame and the current frame information of each vehicle may be recorded.
According to the processing method for identifying traffic jam, vehicle detection is performed according to the current frame image data of the target road, the frame data to be detected corresponding to the current frame is obtained, tracking processing is performed on the target vehicle according to the current template frame data and the frame data to be detected, the tracking result of the target vehicle corresponding to the current frame is obtained, the target road jam is determined according to the tracking result corresponding to each frame, corresponding coping processing is performed, real-time detection of the road traffic jam is effectively achieved, and the detection accuracy is improved.
The method provided by the above embodiment is further described in an additional embodiment of the present invention.
As shown in fig. 3, a flow chart of the processing method for identifying traffic congestion provided in this embodiment is schematically illustrated.
As an implementable manner, on the basis of the foregoing embodiment, optionally, determining, according to the current frame image data, frame data to be detected corresponding to the current frame specifically includes:
in step 2011, the current frame image data is preprocessed to obtain first image data in a first preset format.
Step 2012, a vehicle detection is performed on the first image data to obtain a detected first vehicle.
And 2013, taking the image frame data corresponding to the first vehicle as the frame data to be detected corresponding to the current frame.
Specifically, the preprocessing may include normalization and scaling. The normalization includes: performing mean subtraction on the current frame image data (127.5 is subtracted from each of the three RGB channels) and variance normalization (each of the three RGB channels is divided by 255.0) to obtain normalized image data. The scaling includes: scaling the normalized image data to an image of a preset size, for example resizing it to 608 × 608. There may be one or more first vehicles.
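The following Python sketch illustrates the normalization and scaling described above, assuming OpenCV is available and the frame is an 8-bit three-channel image; the channel order and the target size are configurable assumptions.

```python
import cv2
import numpy as np

def preprocess(frame, size=(608, 608)):
    """Normalize and scale a frame as described above: subtract 127.5 from
    each channel, divide by 255.0, then resize to the preset size
    (608x608 for detection, 127x127 for template frames)."""
    img = frame.astype(np.float32)
    img = (img - 127.5) / 255.0      # mean subtraction and variance normalization
    img = cv2.resize(img, size)      # resize to the preset input size
    return img
```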
Alternatively, any practicable object detection algorithm may be employed to detect the vehicle in the current frame image data, obtaining the detected first vehicle.
Optionally, after determining the tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected, the method further includes:
step 2021, according to the tracking result of the target vehicle corresponding to the current frame, supplementing image frame data corresponding to a first vehicle not belonging to the target vehicle to the current template frame data, and obtaining supplemented template frame data;
step 2022, the supplemented template frame data is used as template frame data of the next frame.
Specifically, if the first vehicle detected in the current frame image data does not belong to the target vehicle, it indicates that the first vehicle is a vehicle that newly appears in the current frame, and then the newly appearing vehicles need to be tracked in the next frame, so that the first vehicle that does not belong to the target vehicle needs to be supplemented into the template frame data as the target vehicle to be tracked in the next frame.
It is understood that the target vehicle that is not tracked in the current frame may be marked and no tracking process may be performed subsequently.
Optionally, the method further comprises:
step 2031, preprocessing image frame data corresponding to a first vehicle not belonging to the target vehicle to obtain second image frame data in a second preset format;
correspondingly, supplementing image frame data corresponding to a first vehicle which does not belong to the target vehicle into the current template frame data to obtain supplemented template frame data, and the method comprises the following steps:
step 2032, the second image frame data is supplemented to the current template frame data, and the supplemented template frame data is obtained.
Specifically, in order to facilitate feature extraction, the image frame data in the template frame data may be preprocessed image data, and therefore, after the image frame data corresponding to each first vehicle that does not belong to the target vehicle is extracted, the image frame data needs to be preprocessed to obtain second image frame data in a second preset format, and the second image frame data is supplemented to the current template frame data to obtain supplemented template frame data. The specific operation of the preprocessing is the same as above, and is not described herein again.
For example, the image frame data included in the template frame data may be normalized 127 × 127-sized image data.
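A minimal Python sketch of how the template set could be supplemented with newly appearing vehicles as described above; the dictionary layout, the id-based matching and the 8-bit vehicle crops are illustrative assumptions.

```python
import cv2
import numpy as np

def update_templates(templates, tracked_ids, new_detections):
    """Supplement the template frame set with vehicles that were detected in
    the current frame but not matched to any tracked target vehicle.

    templates:      dict vehicle_id -> 127x127 preprocessed template patch
    tracked_ids:    set of ids of target vehicles tracked in the current frame
    new_detections: dict detection_id -> cropped vehicle image (uint8)
    """
    for det_id, patch in new_detections.items():
        if det_id in tracked_ids:
            continue  # already a tracked target vehicle
        img = (patch.astype(np.float32) - 127.5) / 255.0  # normalize the crop
        templates[det_id] = cv2.resize(img, (127, 127))   # template frame size
    return templates
```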
Optionally, performing vehicle detection on the first image data to obtain a detected first vehicle, including:
step 2041, detecting the first image data by using a preset centrnet algorithm, and obtaining each detected first target frame region and a corresponding confidence coefficient.
Step 2042, determining the detected first vehicle according to the confidence degree corresponding to each first target frame region.
Specifically, the CenterNet algorithm is an existing target detection algorithm and is not described again here. The detection result is one or more detected target frame regions (referred to as first target frame regions for distinction) and the confidence corresponding to each target frame region. The confidence indicates the likelihood that each first target frame region belongs to a vehicle; a confidence threshold may be set, and when the confidence of a certain first target frame region is higher than the threshold, that region is considered to belong to a vehicle, thereby determining each detected first vehicle.
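For illustration, a small Python sketch of the confidence-threshold filtering described above; the threshold value is a placeholder, not a value from the patent.

```python
def filter_detections(boxes, confidences, conf_thresh=0.5):
    """Keep only the target frame regions whose confidence exceeds the
    threshold; these are taken as the detected first vehicles."""
    return [box for box, conf in zip(boxes, confidences) if conf > conf_thresh]
```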
As another practicable manner, on the basis of the foregoing embodiment, optionally, the tracking result includes a position area of each tracked target vehicle in the current frame and an untracked target vehicle; judging whether the target road is congested according to the tracking result corresponding to each frame, which specifically comprises the following steps:
and step 2051, determining the average speed of the target road vehicle according to the tracking result corresponding to each frame.
And step 2052, determining whether the target road is congested or not according to the average speed.
Optionally, determining whether the target road is congested according to the average speed specifically includes: and if the average speed is less than the preset speed threshold value, determining that the target road is congested.
Specifically, the running speed of each target vehicle may be determined according to the occurrence of the target vehicle in each frame and the time interval between two frames, and further, the average speed of the target road vehicle may be determined, and if the average speed is too slow, the current target road may be considered to be in a congestion state.
For example, if a target vehicle appears at a first position in a first frame, appears in the 5 subsequent frames, and its position in each frame is recorded, the distance traveled by the target vehicle may be determined from its position in each frame. If the time interval between capturing the first frame and capturing the 6th frame is T seconds, the travel speed of the target vehicle may be determined from the distance traveled and the time interval T, and the average speed may then be determined from the travel speed of each target vehicle.
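The following Python sketch illustrates this average-speed estimate; the pixel-to-meter calibration factor is a hypothetical parameter, since the patent does not specify how image distances map to road distances.

```python
import numpy as np

def average_speed(tracks, fps, meters_per_pixel=0.05):
    """Estimate the average speed of the tracked target vehicles.

    tracks: dict vehicle_id -> list of (x, y) center positions, one entry
            per frame in which the vehicle was tracked.
    fps:    frame rate of the video stream.
    meters_per_pixel is an assumed calibration factor for illustration.
    """
    speeds = []
    for positions in tracks.values():
        if len(positions) < 2:
            continue
        dist = sum(np.hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(positions, positions[1:]))
        duration = (len(positions) - 1) / fps          # seconds between first and last frame
        speeds.append(dist * meters_per_pixel / duration)
    return float(np.mean(speeds)) if speeds else 0.0
```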
As another implementable manner, on the basis of the foregoing embodiment, optionally before determining whether the target road is congested according to the tracking result corresponding to each frame, the method further includes:
step 2061, determining whether each frame is congested or not according to the number of the vehicles detected in each frame;
correspondingly, judging whether the target road is congested according to the tracking result corresponding to each frame comprises the following steps:
step 2062, judging whether the target road is congested according to the number of congested frames and the tracking result corresponding to each frame.
Specifically, the number of detected vehicles may be recorded during each frame processing as a factor in determining congestion.
For example, if the number of vehicles detected in a certain frame exceeds a preset threshold, that frame is considered congested, and the number of vehicles and the frame information of that frame are recorded. When traffic congestion is judged, the number of vehicles and the frame information of each congested frame may be acquired and the number of congested frames counted; if the number of vehicles detected in multiple frames of image data exceeds the preset threshold, that is, multiple frames are congested, and the average speed is less than a preset speed threshold, it is determined that the target road is currently congested.
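A minimal Python sketch of this combined judgment; the numeric thresholds (analogues of the frame threshold and the speed threshold) are illustrative only.

```python
def road_congested(congested_frame_count, avg_speed,
                   frame_threshold=30, speed_threshold=2.0):
    """Combined judgment described above: the road is congested when the
    number of congested frames exceeds the frame threshold and the average
    speed is below the speed threshold. The values here are placeholders."""
    return congested_frame_count > frame_threshold and avg_speed < speed_threshold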
As another implementable manner, on the basis of the foregoing embodiment, optionally, determining, according to the current template frame data and the frame data to be detected, a tracking result of the target vehicle corresponding to the current frame includes:
step 2071, determining a tracking result of the target vehicle corresponding to the current frame by using a preset SiamRPN + + target tracking algorithm according to the current template frame data and the frame data to be detected.
Specifically, the current template frame data is input into the template branch of the SiamRPN + + network, and the frame data to be detected is input into the detection branch of the SiamRPN + + network, so as to obtain the tracking result of the target vehicle.
It should be noted that the SiamRPN + + network needs to be obtained by training and learning in advance, and specifically, may be obtained by obtaining a large amount of training data in advance for training. The specific training process is the prior art and is not described herein again.
Exemplarily, as shown in fig. 4, a schematic diagram of the SiamRPN++ network provided in this embodiment is shown. As shown in fig. 5, a structure diagram of an RPN block provided in this embodiment is shown; the RPN block is the Siamese RPN block in fig. 4. Target represents the template frame data and Search represents the frame data to be detected. In this example the template frame data is a 127 × 127 image and the frame data to be detected is a 255 × 255 image; the specific sizes may be set according to actual requirements and are not limited to the above. The two branches of the SiamRPN++ network adopt a ResNet-50 network: the target image (namely the template frame data) and the search image (namely the frame data to be detected) are respectively input into the two ends of the twin network, features are respectively extracted through the ResNet-50 network, the conv3, conv4 and conv5 outputs are fed into the RPN network (namely the RPN blocks) for target detection, and the three results are then fused to output the search image with the target framed. The specific network architecture is an existing architecture and is not described in detail here.
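For illustration, a small Python sketch of fusing the outputs of the three RPN blocks; in SiamRPN++ the fusion weights are learned during training, so the fixed weights here are purely an assumption for the example.

```python
import numpy as np

def fuse_rpn_outputs(cls_maps, reg_maps, weights=(0.3, 0.3, 0.4)):
    """Weighted fusion of the classification and regression outputs of the
    three RPN blocks fed by conv3, conv4 and conv5 (see fig. 4); the weights
    are illustrative placeholders."""
    cls = sum(w * m for w, m in zip(weights, cls_maps))   # fused classification map
    reg = sum(w * m for w, m in zip(weights, reg_maps))   # fused box regression map
    return cls, reg
```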
In an exemplary embodiment, the overall process flow is described in detail, and as shown in fig. 6, an exemplary overall process flow diagram is provided for this embodiment. In this example, it can be considered that the definition of congestion is that the intersection traffic flow queuing overflow time is 150 seconds, and the overflow time can be set according to the actual demand in practical application. The method specifically comprises the following steps:
1. video stream data is acquired.
2. The image data of each frame is subjected to mean subtraction (127.5 is subtracted from each of the three RGB channels) and variance normalization (each of the three RGB channels is divided by 255.0), and the image is then resized to 608 × 608 for the next step.
3. Vehicle detection is carried out on the set region of interest (ROI) to obtain the target frame bbox of each vehicle, where each bbox may comprise the center point (x, y) and the width and height (w, h) of the vehicle.
4. The detected target frames are screened and filtered, and the target frames whose confidence is higher than Bconfidence (the confidence threshold) are kept for the vehicle statistics of the next step.
5. The target frames filtered in step 4 are counted; if the number of target frames is greater than the set threshold Mbox (namely the preset threshold), the number of target frames of the current frame is stored into set A; otherwise, sets A and B are emptied and step 2 is executed again.
6. The detected vehicle regions (namely the image frame data corresponding to the first vehicles detected in the current frame) are extracted from the frame and stored into set B, which stores the tracking information of each vehicle, such as the position of its target frame and the frame number. The target region of each vehicle (namely the image frame data corresponding to each vehicle) is then preprocessed (mean subtraction, variance normalization) and resized to 127x127 to serve as the template frame input (namely the template frame data) of the SiamRPN++ target tracking algorithm for vehicle tracking. If the vehicle still exists in the next frame, the tracking result is updated and stored; if a new vehicle that does not belong to the target vehicles appears in the next frame, it is added as a new target vehicle to be tracked.
7. If the number of entries stored in set A is larger than the set threshold M (namely the number of frames in which the number of target frames is larger than the set threshold Mbox), the running distance of each vehicle is calculated from the motion tracks of all the vehicles in set B and divided by the total number of tracked frames to obtain the average speed of the vehicles; otherwise, step 2 continues to be executed.
8. If the average speed Vmean obtained in step 7 is less than Vmax (a threshold set in advance), a congestion alarm is raised, and the driving direction is judged from the initial position of the vehicle to the final tracked position so as to determine the specific congestion direction (a sketch of this direction estimate follows below); otherwise, sets A and B are emptied and the processing of the next frame starts again from step 2.
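A minimal Python sketch of the direction estimate mentioned in step 8; mapping the angle to a concrete road direction would require camera-specific calibration, which is assumed here.

```python
import numpy as np

def congestion_direction(first_pos, last_pos):
    """Rough driving-direction estimate from the first tracked position to
    the last tracked position of a vehicle, as in step 8."""
    dx = last_pos[0] - first_pos[0]
    dy = last_pos[1] - first_pos[1]
    angle = np.degrees(np.arctan2(dy, dx))
    return angle  # e.g. bucket the angle into the lanes/directions of the road
```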
Vehicle detection and counting are carried out in the ROI region based on target detection, so small target objects are detected more accurately and the number of vehicles is reflected more accurately than in the prior art. The vehicle speed is calculated by target tracking, which, compared with the prior art, can balance the overall traffic flow speed, involves a small calculation amount and has high real-time performance.
It should be noted that the respective implementable modes in the embodiment may be implemented individually, or may be implemented in combination in any combination without conflict, and the present invention is not limited thereto.
According to the processing method for identifying traffic jam provided by this embodiment, vehicle detection is performed according to the current frame image data of the target road to obtain the frame data to be detected corresponding to the current frame, the target vehicle is tracked according to the current template frame data and the frame data to be detected to obtain the tracking result of the target vehicle corresponding to the current frame, and congestion of the target road is determined according to the tracking result corresponding to each frame so that corresponding handling is performed; real-time detection of road traffic congestion is thus effectively realized, and the detection accuracy is improved. Vehicle detection and counting are carried out in the ROI region based on target detection, so small target objects are detected more accurately and the number of vehicles is reflected more accurately than in the prior art. The vehicle speed is calculated by target tracking, which, compared with the prior art, can balance the overall traffic flow speed, involves a small calculation amount and has high real-time performance.
Still another embodiment of the present invention provides a processing apparatus for identifying traffic congestion, which is configured to perform the method of the above embodiment.
As shown in fig. 7, a schematic structural diagram of a processing device for identifying traffic congestion according to this embodiment is provided. The processing device 30 for identifying traffic congestion comprises an acquisition module 31, a detection module 32, a tracking module 33 and a processing module 34.
The system comprises an acquisition module, a tracking module and a tracking module, wherein the acquisition module is used for acquiring current frame image data and current template frame data of a target road, and the current template frame data comprises image frame data corresponding to a target vehicle to be tracked; the detection module is used for determining frame data to be detected corresponding to the current frame according to the current frame image data; the tracking module is used for determining a tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected; and the processing module is used for carrying out corresponding handling processing if the target road congestion is determined according to the tracking result corresponding to each frame.
The specific manner in which the respective modules perform operations has been described in detail in relation to the apparatus in this embodiment, and will not be elaborated upon here.
According to the processing device for identifying traffic jam provided by the embodiment, vehicle detection is performed according to the current frame image data of the target road to obtain the frame data to be detected corresponding to the current frame, tracking processing is performed on the target vehicle according to the current template frame data and the frame data to be detected to obtain the tracking result of the target vehicle corresponding to the current frame, and the target road jam is determined according to the tracking result corresponding to each frame, so that corresponding coping processing is performed, real-time detection of the road traffic jam is effectively realized, and the detection accuracy is improved.
The device provided by the above embodiment is further described in an additional embodiment of the present invention.
As a practical manner, on the basis of the foregoing embodiment, optionally, the detection module is specifically configured to:
preprocessing current frame image data to obtain first image data in a first preset format;
carrying out vehicle detection on the first image data to obtain a detected first vehicle;
and taking the image frame data corresponding to the first vehicle as the frame data to be detected corresponding to the current frame.
Optionally, the processing module is further configured to:
supplementing image frame data corresponding to a first vehicle which does not belong to the target vehicle into the current template frame data according to the tracking result of the target vehicle corresponding to the current frame to obtain supplemented template frame data;
and taking the supplemented template frame data as template frame data of the next frame.
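A minimal sketch of this template supplementation step is given below; it assumes the tracking step has already reported which detections were matched to existing target vehicles, and the data layout shown is an assumption rather than part of the disclosure.

```python
def supplement_templates(template_frames: dict, matched_detections: set,
                         detections: list, next_target_id: int):
    """Add each first vehicle that does not belong to an already tracked target vehicle
    as new template frame data, so that it is tracked from the next frame onwards."""
    for det_index, vehicle_patch in enumerate(detections):
        if det_index not in matched_detections:
            template_frames[next_target_id] = {"image": vehicle_patch}
            next_target_id += 1
    return template_frames, next_target_id
```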
Optionally, the processing module is further configured to preprocess the image frame data corresponding to a first vehicle that does not belong to the target vehicle, to obtain second image frame data in a second preset format;
the processing module is specifically configured to: and supplementing the second image frame data into the current template frame data to obtain the supplemented template frame data.
Optionally, the detection module is specifically configured to:
detecting the first image data by adopting a preset CenterNet algorithm to obtain each detected first target frame region and corresponding confidence;
and determining the detected first vehicle according to the confidence corresponding to each first target frame region.
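For illustration, the confidence-based selection of first vehicles could look like the following sketch; `centernet_detect` stands for any CenterNet-style detector returning boxes with scores, and the threshold value is an assumption rather than a value fixed by this embodiment.

```python
CONF_THRESHOLD = 0.3  # assumed value; the embodiment only requires a confidence-based selection


def select_first_vehicles(first_image, centernet_detect):
    """Keep only the first target frame regions whose confidence passes the threshold."""
    detections = centernet_detect(first_image)  # hypothetical: [(x1, y1, x2, y2, score), ...]
    return [(x1, y1, x2, y2) for (x1, y1, x2, y2, score) in detections
            if score >= CONF_THRESHOLD]
```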
As another implementable manner, on the basis of the foregoing embodiment, optionally, the tracking result includes the position area of each tracked target vehicle in the current frame and the untracked target vehicles; the processing module is specifically configured to:
determining the average speed of the target road vehicle according to the tracking result corresponding to each frame;
and determining whether the target road is congested or not according to the average speed.
Optionally, the processing module is specifically configured to: determine that the target road is congested if the average speed is less than a preset speed threshold.
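The average-speed criterion can be sketched as follows, assuming each frame's tracking result is a mapping from target id to the centre of its position area; speed is expressed in pixels per frame for simplicity, converting it to a physical speed would require camera calibration that this illustration omits, and the threshold value is an assumption.

```python
import math


def average_speed(tracking_results: list) -> float:
    """Estimate the average vehicle speed (pixels per frame) from consecutive tracking results."""
    displacements = []
    for prev, curr in zip(tracking_results, tracking_results[1:]):
        for target_id, pos in curr.items():
            if pos is None or prev.get(target_id) is None:
                continue  # skip untracked target vehicles
            px, py = prev[target_id]
            cx, cy = pos
            displacements.append(math.hypot(cx - px, cy - py))
    return sum(displacements) / len(displacements) if displacements else 0.0


def is_congested(tracking_results: list, speed_threshold: float = 2.0) -> bool:
    """Determine congestion when the average speed falls below the (assumed) threshold."""
    return average_speed(tracking_results) < speed_threshold
```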
As another implementable manner, on the basis of the foregoing embodiment, optionally, the processing module is further configured to determine whether each frame is congested according to the number of vehicles detected in that frame;
the processing module is specifically configured to: and judging whether the target road is congested or not according to the number of congested frames and the tracking result corresponding to each frame.
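Combining the per-frame vehicle-count check with the tracking-based speed criterion might look like the sketch below, reusing the `is_congested` helper from the previous sketch; the count threshold and frame ratio are assumptions, not values fixed by this embodiment.

```python
def per_frame_congested(vehicle_counts: list, count_threshold: int = 20) -> list:
    """Mark each frame as congested when the number of vehicles detected in the ROI
    reaches the (assumed) count threshold."""
    return [n >= count_threshold for n in vehicle_counts]


def road_congested(vehicle_counts: list, tracking_results: list,
                   frame_ratio: float = 0.8, speed_threshold: float = 2.0) -> bool:
    """Judge the target road congested when enough frames are individually congested
    and the tracking-based average speed is below the threshold."""
    flags = per_frame_congested(vehicle_counts)
    enough_congested_frames = sum(flags) >= frame_ratio * max(len(flags), 1)
    return enough_congested_frames and is_congested(tracking_results, speed_threshold)
```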
As another implementable manner, on the basis of the foregoing embodiment, optionally, the tracking module is specifically configured to:
and determining the tracking result of the target vehicle corresponding to the current frame by adopting a preset SiamRPN++ target tracking algorithm according to the current template frame data and the frame data to be detected.
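The tracking step itself can be sketched as below; `tracker_factory` stands for any SiamRPN++-style single-object tracker, and the `init`/`update` interface is an assumed placeholder, since the exact API depends on the implementation used and is not specified by this embodiment.

```python
def track_targets(tracker_factory, template_frames: dict, search_frame):
    """One tracking step: each template frame is matched against the frame data to be
    detected of the current frame. Returns the position area of each tracked target
    vehicle, or None for an untracked target vehicle."""
    results = {}
    for target_id, template in template_frames.items():
        tracker = tracker_factory()
        tracker.init(template["image"], template.get("box"))  # assumed interface
        found, box = tracker.update(search_frame)              # assumed interface
        results[target_id] = box if found else None
    return results
```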
The specific manner in which the respective modules perform their operations has been described in detail in the embodiments relating to the method, and will not be elaborated upon here.
It should be noted that the respective implementable manners in this embodiment may be implemented individually or in any combination without conflict, and the present invention is not limited in this respect.
According to the processing device for identifying traffic congestion provided by this embodiment, vehicle detection is performed on the current frame image data of the target road to obtain the frame data to be detected corresponding to the current frame; the target vehicle is tracked according to the current template frame data and the frame data to be detected to obtain the tracking result of the target vehicle corresponding to the current frame; and if the target road is determined to be congested according to the tracking result corresponding to each frame, corresponding handling processing is performed. Real-time detection of road traffic congestion is thereby achieved, and the detection accuracy is improved. Because vehicle detection and counting are performed within the ROI area on the basis of target detection, small targets are detected more accurately and the number of vehicles is reflected more faithfully than in the prior art. In addition, because the vehicle speed is calculated by target tracking, the overall traffic flow speed can be evaluated in a more balanced manner than in the prior art, with a small amount of computation and high real-time performance.
Still another embodiment of the present invention provides an electronic device, configured to perform the method provided by the foregoing embodiment.
Fig. 8 is a schematic structural diagram of the electronic device provided by this embodiment. The electronic device 50 includes: at least one processor 51 and a memory 52;
the memory stores computer-executable instructions; the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform a method as provided by any of the embodiments above.
According to the electronic device provided by this embodiment, vehicle detection is performed on the current frame image data of the target road to obtain the frame data to be detected corresponding to the current frame; the target vehicle is tracked according to the current template frame data and the frame data to be detected to obtain the tracking result of the target vehicle corresponding to the current frame; and if the target road is determined to be congested according to the tracking result corresponding to each frame, corresponding handling processing is performed. Real-time detection of road traffic congestion is thereby achieved, and the detection accuracy is improved. Because vehicle detection and counting are performed within the ROI area on the basis of target detection, small targets are detected more accurately and the number of vehicles is reflected more faithfully than in the prior art. In addition, because the vehicle speed is calculated by target tracking, the overall traffic flow speed can be evaluated in a more balanced manner than in the prior art, with a small amount of computation and high real-time performance.
Yet another embodiment of the present invention provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the method provided by any one of the above embodiments is implemented.
According to the computer-readable storage medium provided by this embodiment, vehicle detection is performed on the current frame image data of the target road to obtain the frame data to be detected corresponding to the current frame; the target vehicle is tracked according to the current template frame data and the frame data to be detected to obtain the tracking result of the target vehicle corresponding to the current frame; and if the target road is determined to be congested according to the tracking result corresponding to each frame, corresponding handling processing is performed. Real-time detection of road traffic congestion is thereby achieved, and the detection accuracy is improved. Because vehicle detection and counting are performed within the ROI area on the basis of target detection, small targets are detected more accurately and the number of vehicles is reflected more faithfully than in the prior art. In addition, because the vehicle speed is calculated by target tracking, the overall traffic flow speed can be evaluated in a more balanced manner than in the prior art, with a small amount of computation and high real-time performance.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A processing method for identifying traffic congestion, comprising:
acquiring current frame image data and current template frame data of a target road, wherein the current template frame data comprises image frame data corresponding to a target vehicle to be tracked;
determining frame data to be detected corresponding to the current frame according to the current frame image data;
determining a tracking result of a target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected;
and if the target road is determined to be congested according to the tracking result corresponding to each frame, corresponding handling processing is carried out.
2. The method according to claim 1, wherein determining the frame data to be detected corresponding to the current frame according to the current frame image data comprises:
preprocessing the current frame image data to obtain first image data in a first preset format;
carrying out vehicle detection on the first image data to obtain a detected first vehicle;
and taking the image frame data corresponding to the first vehicle as the frame data to be detected corresponding to the current frame.
3. The method according to claim 2, wherein after determining the tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected, the method further comprises:
supplementing image frame data corresponding to a first vehicle which does not belong to the target vehicle into the current template frame data according to the tracking result of the target vehicle corresponding to the current frame to obtain supplemented template frame data;
and taking the supplemented template frame data as template frame data of the next frame.
4. The method of claim 3, further comprising:
preprocessing image frame data corresponding to a first vehicle which does not belong to a target vehicle to obtain second image frame data in a second preset format;
the supplementing image frame data corresponding to the first vehicle not belonging to the target vehicle into the current template frame data to obtain supplemented template frame data includes:
and supplementing the second image frame data into the current template frame data to obtain supplemented template frame data.
5. The method according to claim 1, wherein the tracking result comprises the position area of each tracked target vehicle in the current frame and the untracked target vehicles;
judging whether the target road is congested according to the tracking result corresponding to each frame, including:
determining the average speed of the target road vehicle according to the tracking result corresponding to each frame;
and determining whether the target road is congested or not according to the average speed.
6. The method according to claim 1, wherein before determining whether the target road is congested according to the tracking result corresponding to each frame, the method further comprises:
determining whether each frame is congested or not according to the number of vehicles detected in each frame;
the judging whether the target road is congested according to the tracking result corresponding to each frame includes:
and judging whether the target road is congested or not according to the number of congested frames and the tracking result corresponding to each frame.
7. The method according to any one of claims 1 to 6, wherein the determining, according to the current template frame data and the frame data to be detected, the tracking result of the target vehicle corresponding to the current frame comprises:
and determining the tracking result of the target vehicle corresponding to the current frame by adopting a preset SiamRPN++ target tracking algorithm according to the current template frame data and the frame data to be detected.
8. A processing apparatus for identifying traffic congestion, comprising:
the system comprises an acquisition module, a tracking module and a tracking module, wherein the acquisition module is used for acquiring current frame image data and current template frame data of a target road, and the current template frame data comprises image frame data corresponding to a target vehicle to be tracked;
the detection module is used for determining frame data to be detected corresponding to the current frame according to the current frame image data;
the tracking module is used for determining a tracking result of the target vehicle corresponding to the current frame according to the current template frame data and the frame data to be detected;
and the processing module is used for carrying out corresponding handling processing if the target road congestion is determined according to the tracking result corresponding to each frame.
9. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.