CN111383455A - Traffic intersection object flow statistical method, device, computer equipment and medium - Google Patents

Traffic intersection object flow statistical method, device, computer equipment and medium

Info

Publication number
CN111383455A
CN111383455A (application CN202010165396.0A)
Authority
CN
China
Prior art keywords
image frame
target object
frame
processed
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010165396.0A
Other languages
Chinese (zh)
Inventor
周康明
侯凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010165396.0A priority Critical patent/CN111383455A/en
Publication of CN111383455A publication Critical patent/CN111383455A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/07 Controlling traffic signals
    • G08G 1/081 Plural intersections under common control

Abstract

The application relates to a method, an apparatus, computer equipment and a storage medium for counting object flow at a traffic intersection. The method comprises the following steps: acquiring a video to be processed corresponding to a traffic intersection, and extracting the current image frame from the video to be processed; acquiring the target object determined in the previous image frame; judging whether a tracking object corresponding to the target object exists in the current image frame; when no tracking object corresponding to the target object exists in the current image frame, acquiring direction determination frames corresponding to the target object from the already processed image frames and deriving the movement direction of the target object from the direction determination frames; and counting the number of target objects for each movement direction within a preset time period to obtain the object flow of the traffic intersection. By adopting this method, the intelligence of intersection object flow counting can be improved.

Description

Traffic intersection object flow statistical method, device, computer equipment and medium
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a traffic intersection object flow statistical method, apparatus, computer device and medium.
Background
As vehicle ownership in China continues to rise, traffic congestion has become a common problem. To help address it, this application provides a method for intersection vehicle statistics based on target tracking, so that municipal traffic departments can reasonably deploy traffic police to relieve intersection congestion, and so that the daily traffic volume, annual traffic volume, daily peak flow and the like of urban roads can be comprehensively understood, helping governments and traffic departments design road traffic and improve traffic throughput. A common existing technique models the background with a Gaussian mixture model (e.g. OpenCV's BackgroundSubtractorMOG2) and extracts the foreground to detect vehicles, then performs flow statistics with KCF tracking: a region is configured at the intersection, each vehicle is judged to have entered or exited the region, and the accumulated counts of entering and exiting vehicles are obtained. The main drawbacks of this approach are that it handles only a single scene, can only count a bidirectional intersection, and requires manual configuration of the region, which is somewhat cumbersome.
Disclosure of Invention
In view of the above, it is necessary to provide a traffic intersection object flow statistical method, apparatus, computer device and medium capable of improving the intelligence of intersection object flow counting.
A traffic intersection object flow statistics method, the method comprising:
acquiring a video to be processed corresponding to a traffic intersection, and extracting a current image frame from the video to be processed;
acquiring a target object determined by the previous image frame;
judging whether a tracking object corresponding to the target object exists in the current image frame;
when the tracking object corresponding to the target object does not exist in the current image frame, acquiring a direction judgment frame corresponding to the target object from the processed image frame, and obtaining the motion direction of the target object according to the direction judgment frame;
and counting the number of the target objects corresponding to each movement direction in a preset time period to obtain the object flow of the traffic intersection.
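The per-frame loop implied by the steps above can be sketched as follows. This is an illustrative toy, not the patented implementation: frames are modeled as lists of detections `{"id", "pos"}`, and the detection and tracking models (YOLOv3, SiamRPN) are replaced by trivial stand-ins (`match_by_id`, a coarse displacement rule for `infer_direction`).

```python
# Minimal runnable sketch of the claimed counting loop. All helper names and
# the data format are assumptions for illustration only.
from collections import Counter

def match_by_id(obj, frame):
    """Stand-in for the tracker: find the same object id in the next frame."""
    return next((d for d in frame if d["id"] == obj["id"]), None)

def infer_direction(first_det, last_det):
    """Stand-in direction rule: coarse compass direction of net displacement."""
    dx = last_det["pos"][0] - first_det["pos"][0]
    dy = last_det["pos"][1] - first_det["pos"][1]
    if abs(dx) >= abs(dy):
        return "east" if dx >= 0 else "west"
    return "south" if dy >= 0 else "north"

def count_intersection_flow(frames):
    tracked = {}      # id -> (first detection, latest detection)
    counts = Counter()
    for frame in frames:
        for obj_id, (first, last) in list(tracked.items()):
            m = match_by_id(last, frame)
            if m is not None:
                tracked[obj_id] = (first, m)
            else:         # object left the scene: count its direction
                counts[infer_direction(first, last)] += 1
                del tracked[obj_id]
        for det in frame:  # newly appearing objects start being tracked
            if det["id"] not in tracked:
                tracked[det["id"]] = (det, det)
    for first, last in tracked.values():  # objects still present at the end
        counts[infer_direction(first, last)] += 1
    return counts
```

Counting only happens when an object disappears, mirroring the claim that the direction is decided once no tracking object is found in the current frame.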
In one embodiment, the acquiring a direction determination frame corresponding to the target object from the processed image frames and obtaining the motion direction of the target object according to the direction determination frame includes:
acquiring the initial image frame in which the target object was first detected and the previous image frame;
determining the position and orientation of the target object in the initial image frame and in the previous image frame;
and obtaining the movement direction of the target object according to the determined positions and orientations.
In one embodiment, after counting the number of target objects corresponding to each moving direction within a preset time period and obtaining the object traffic of the traffic intersection, the method further includes:
generating a traffic light control instruction corresponding to the traffic intersection according to the object flow;
and sending the traffic light control instruction to a traffic light control terminal corresponding to the traffic intersection.
In one embodiment, after acquiring the target object determined in the previous image frame, the method further includes:
acquiring the object frames corresponding to the target objects, and performing mean-subtraction processing on the target objects in the object frames;
and performing a proportional image resizing operation on the mean-subtracted object frames to obtain target objects of a preset size.
In one embodiment, the determining whether a tracking object corresponding to the target object exists in the current image frame includes:
acquiring a preset position point and the preset size of the target object, and determining a tracking area in the current image frame according to the preset position point and the preset size;
inputting the target object and the determined tracking area into a pre-generated object tracking model to determine whether a tracking object corresponding to the target object exists in the current image frame.
In one embodiment, after the determining whether the tracking object corresponding to the target object exists in the current image frame, the method further includes:
when a tracking object corresponding to the target object exists in the current image frame, updating the target object through the tracking object corresponding to the target object, and continuously extracting a next image frame from the video to be processed as the current image frame until the image frame in the video to be processed is traversed.
In one embodiment, the method further comprises:
carrying out object identification on the current image frame to obtain a plurality of objects to be processed;
after the target object is updated by the tracking object corresponding to the target object, the method further includes:
matching the object to be processed and the tracking object; acquiring the object to be processed which is not matched with the tracking object in the current image frame;
and adding the acquired object to be processed into a target object of the current image frame, and recording the current image frame as an initial image frame of the added target object.
In one embodiment, after the extracting the current image frame from the video to be processed, the method further includes:
when the current image frame is the first frame image, performing object identification on the first frame image to obtain initial objects, normalizing the initial objects to serve as the target objects of the first frame image, and then continuing to extract the next image frame from the video to be processed as the current image frame;
and when the current image frame is not the first frame image, continuing to acquire the target object determined in the previous image frame.
A traffic intersection object flow statistics apparatus, the apparatus comprising:
the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for acquiring a video to be processed corresponding to a traffic intersection and extracting a current image frame from the video to be processed;
the target object acquisition module is used for acquiring the target object determined by the previous image frame;
the judging module is used for judging whether a tracking object corresponding to the target object exists in the current image frame;
the tracking module is used for acquiring a direction judgment frame corresponding to the target object from the processed image frame when the tracking object corresponding to the target object does not exist in the current image frame, and acquiring the motion direction of the target object according to the direction judgment frame;
and the flow counting module is used for counting the number of the target objects corresponding to each movement direction in a preset time period to obtain the object flow of the traffic intersection.
A computer device comprising a memory storing a computer program and a processor implementing the steps of any of the methods described above when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any of the above.
According to the above traffic intersection object flow counting method, apparatus, computer device and medium, whether a corresponding tracking object exists in the current image frame is judged from the target object determined in the previous image frame, so object tracking does not rely on a single fixed template and its accuracy is improved. When counting the object flow of the traffic intersection, direction determination frames corresponding to each target object are acquired from the already processed image frames and the target object's movement direction is derived from them, so the number of target objects in any movement direction, including a single direction, can be counted and an accurate flow obtained, improving the intelligence of intersection object flow counting.
Drawings
FIG. 1 is a diagram of an embodiment of a traffic intersection object traffic statistics method;
FIG. 2 is a schematic flow chart of a traffic intersection object traffic statistics method in one embodiment;
FIG. 3 is a schematic diagram of the SiamRPN target tracking network in one embodiment;
FIG. 4 is a network diagram of an optimized version of YOLOv3 in one embodiment;
FIG. 5 is a diagram illustrating the structure of the ResNet residual module in one embodiment;
FIG. 6 is a schematic flow chart of a traffic intersection object traffic statistics method in another embodiment;
FIG. 7 is a block diagram of an embodiment of a traffic intersection object flow statistics apparatus;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The traffic intersection object flow statistical method provided by the application can be applied in the environment shown in fig. 1, where a terminal 102 communicates with a server 104 via a network. The terminal 102 may send a collected or received video to be processed, corresponding to a traffic intersection, to the server 104. The server 104 processes the video frame by frame: it extracts the current image frame and acquires the target object determined in the previous image frame; judges whether a tracking object corresponding to the target object exists in the current image frame; and when no such tracking object exists, acquires direction determination frames corresponding to the target object from the already processed image frames and derives the target object's movement direction from them. Because the movement direction is determined from the dynamic image frames, any movement direction can be determined, not just two fixed directions. The server then counts the number of target objects for each movement direction within a preset time period to obtain the object flow of the traffic intersection. In this way the number of target objects in any movement direction, including a single direction, can be counted, an accurate flow is obtained, and the intelligence of intersection object flow counting is improved.
The terminal 102 may be, but not limited to, a camera installed at a traffic intersection for collecting traffic intersection videos, or various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices that receive to-be-processed videos collected by the camera installed at the traffic intersection for collecting traffic intersection videos, and the server 104 may be implemented by an independent server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a traffic intersection object flow statistical method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
s202: and acquiring a video to be processed corresponding to the traffic intersection, and extracting a current image frame from the video to be processed.
Specifically, the video to be processed is a video segment received by the server that corresponds to the traffic intersection whose object flow is to be calculated; it may be a video sequence specified in advance by the user. For example, the user can send the server an instruction selecting a traffic intersection and a corresponding time period, and the server queries the collected video, or the video currently being collected, according to the instruction to obtain the video to be processed for that intersection and time period. If the video is a real-time stream, the server can collect and process it simultaneously, improving efficiency.
The current image frame is the image frame, extracted from the video to be processed, that the server is currently processing. The video to be processed consists of multiple image frames, and the server can extract them as the current image frame in chronological order. If the server has just started processing the video, the current image frame may be the first frame image of the video, which may be the first frame after imaging has stabilized.
S204: and acquiring the target object determined by the previous image frame.
Specifically, the target object refers to an object that needs to calculate the flow rate, which may refer to a vehicle, a pedestrian, or the like, or a certain type of vehicle including, but not limited to, a motorcycle, a bicycle, an electric vehicle, a car, a train, a taxi, or the like.
The target object determined in the previous image frame refers to an object tracked in the previous image frame. This includes objects from earlier frames that were still being tracked in the previous image frame, and objects newly identified in the previous image frame, i.e. all objects recognized in the previous image frame other than those already being tracked from earlier frames.
Specifically, when determining the target objects of the previous image frame, the server first performs object recognition on the previous image frame to obtain all objects in it. It then determines, from the preceding image frames, which of these objects were already being tracked, establishes the tracking relationships, and marks them as target objects of the previous image frame. Finally, the server derives the new target objects from all recognized objects and the already determined target objects, obtaining all target objects in the previous image frame, and caches them for use by the current image frame.
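One way to realize the "new target object" step above is to treat any detection that overlaps no already-tracked box as newly appeared. The patent does not specify the matching criterion; intersection-over-union (IoU) with a threshold is a common heuristic used here purely as an assumed example. Boxes are `(x1, y1, x2, y2)`.

```python
# Hedged sketch: find detections that match no tracked object by IoU.
# The 0.5 threshold is an assumed example value.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def new_targets(detections, tracked, thresh=0.5):
    """Detections that overlap no tracked box become new target objects."""
    return [d for d in detections
            if all(iou(d, t) < thresh for t in tracked)]
```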
S206: and judging whether a tracking object corresponding to the target object exists in the current image frame.
Specifically, the server may determine whether a tracking object corresponding to the target object exists in the current image frame through a pre-trained model, for example a SiamRPN target tracking network. The SiamRPN network can make accurate judgments and perform long-term tracking even at complex intersections and under object occlusion, which effectively improves counting accuracy.
Taking the target object as the template, the server determines a tracking area in the current image frame according to the position of the target object, and inputs the template and the determined tracking area together into the SiamRPN target tracking network, which determines whether a tracking object corresponding to the target object exists in the current image frame.
S208: when the tracking object corresponding to the target object does not exist in the current image frame, a direction judgment frame corresponding to the target object is acquired from the processed image frame, and the motion direction of the target object is obtained according to the direction judgment frame.
Specifically, the direction determination frame refers to an image frame for determining a movement direction of a target object, in which the target object needs to be identified and the movement direction of the target object is given.
Specifically, if no tracking object corresponding to the target object exists in the current image frame, that is, no tracking object is found in the determined tracking area, the server concludes that the previous image frame was the last image frame in which the target object appeared. The server therefore acquires direction determination frames corresponding to the target object from the already processed image frames and derives the movement direction of the target object from them. For example, the server may take the first and last frames among the image frames in which the target object was identified as its direction determination frames, and determine the movement direction from the target object's orientation in the first frame and its orientation in the last frame.
In addition, to improve tracking accuracy, when the server determines that no tracking object corresponding to the target object exists in the current image frame, it may further check the target object's position in the previous image frame and judge, from that position, the field-of-view region of the video to be processed, and the typical driving speed of the target object, whether the object could have moved out of the field of view. If so, the server proceeds to acquire the direction determination frames from the processed image frames and derive the movement direction of the target object. Otherwise, it marks the object and continues to determine a tracking area in subsequent image frames; when no corresponding tracking object exists in a subsequent frame, or by the frame in which the object could have moved out of the field of view, the server then acquires the direction determination frames from the processed image frames and derives the movement direction of the target object.
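The "could the object have left the field of view" check above is left open by the text; one plausible stand-in rule, shown purely as an assumption, treats a lost object as having exited when its last known box lies within a margin of the image border, with the margin standing in for speed times frame interval.

```python
# Hedged sketch of an exit check; `margin` is an assumed example value.
def likely_exited(box, frame_w, frame_h, margin=40):
    """box = (x1, y1, x2, y2) in pixel coordinates; True if near the border."""
    x1, y1, x2, y2 = box
    return (x1 <= margin or y1 <= margin
            or x2 >= frame_w - margin or y2 >= frame_h - margin)
```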
S210: and counting the number of the target objects corresponding to each movement direction in a preset time period to obtain the object flow of the traffic intersection.
Specifically, the server may cache the movement direction of each target object and count the number of target objects for each movement direction once the video to be processed has been fully processed, yielding the object flow of the traffic intersection. Alternatively, each time the server obtains a target object's movement direction, it can add the object to the set of target objects for that movement direction, so that the object flow of the traffic intersection is counted in real time.
In the above traffic intersection object flow counting method, the target object determined in the previous image frame is used to judge whether a corresponding tracking object exists in the current image frame, so that object tracking does not rely on a single fixed template and its accuracy is improved. When counting the object flow of the traffic intersection, direction determination frames corresponding to each target object are acquired from the already processed image frames and the movement direction is derived from them, so the number of target objects in any movement direction, including a single direction, can be counted and an accurate flow obtained, improving the intelligence of intersection object flow counting.
In one embodiment, acquiring a direction determination frame corresponding to a target object from an already processed image frame, and obtaining a moving direction of the target object according to the direction determination frame includes: acquiring an initial image frame and a previous image frame of a target object detected for the first time; determining the position and the direction of a target object in an initial image frame and a previous image frame; and obtaining the movement direction of the target object according to the determined position and direction.
Specifically, in this embodiment, the initial image frame in which the target object was first detected and the previous image frame are used as the direction determination frames: the initial image frame is the frame in which the target object was detected for the first time, and the previous image frame is the frame in which it was detected for the last time.
The server therefore determines the position and orientation of the target object in the initial image frame and in the previous image frame. From these, the server can draw line segments, for example extensions of the target object's orientation at the corresponding positions, and determine the movement direction from the angle between the extensions. For example, if the angle is smaller than a first preset value, e.g. 30 degrees, the motion is judged to be straight driving and the corresponding direction is recorded; if the angle is larger than a second preset value, e.g. 80 degrees, the motion is judged to be a turn and the turning direction is recorded as the movement direction.
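The angle rule above can be sketched as follows. The 30 and 80 degree thresholds come from the example in the text; representing orientations as heading vectors `(dx, dy)`, and using the cross-product sign to tell left from right turns (with y pointing up; image coordinates would flip the sign), are assumptions for illustration.

```python
# Hedged sketch of straight-vs-turn classification from two heading vectors.
import math

def classify_motion(heading_first, heading_last,
                    straight_deg=30.0, turn_deg=80.0):
    ax, ay = heading_first
    bx, by = heading_last
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # clamp guards against floating-point drift outside [-1, 1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle < straight_deg:
        return "straight"
    if angle > turn_deg:
        # cross-product sign distinguishes left from right (y-up convention)
        return "left turn" if ax * by - ay * bx > 0 else "right turn"
    return "undetermined"
```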
In the above embodiment, because the SiamRPN network ("High Performance Visual Tracking with Siamese Region Proposal Network", a high-performance target tracking model) is used, the driving direction, i.e. from which direction of the intersection to which other direction, can be determined from the tracked video sequence of each vehicle. This effectively breaks through the limitation of a single scene and suits the requirements of real application scenarios.
In one embodiment, after counting the number of target objects corresponding to each moving direction within a preset time period and obtaining the object traffic of the traffic intersection, the method further includes: generating a traffic light control instruction corresponding to the traffic intersection according to the object flow; and sending the traffic light control instruction to a traffic light control terminal corresponding to the traffic intersection.
Specifically, the traffic light control instruction corresponds to the object flow. For example, when the object flow is large, e.g. greater than a certain value, the green time of the traffic light for the corresponding objects (for example, vehicles) can be lengthened; otherwise it can be shortened. The traffic light control instruction is sent to the traffic light control terminal corresponding to the traffic intersection, realizing real-time control of the traffic lights.
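The control rule described above can be illustrated with a small sketch. All thresholds and timings here are made-up example values, not part of the patent: flow above a high threshold lengthens the green phase, flow below a low threshold shortens it, within clamped bounds.

```python
# Hedged sketch of flow-dependent green-time adjustment; every number is an
# assumed example.
def green_time(flow_count, base_s=30, step_s=10,
               high=200, low=50, min_s=15, max_s=90):
    """Return the green-phase duration in seconds for a counted flow."""
    if flow_count > high:
        return min(base_s + step_s, max_s)
    if flow_count < low:
        return max(base_s - step_s, min_s)
    return base_s
```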
This also serves smart-city planning: after the traffic flow is counted, the traffic light timing of the intersection can be adjusted, for example by shifting red light time from intersections with heavier flow to surrounding intersections, reducing the congestion at the busy intersection and relieving traffic jams to a certain extent.
In one embodiment, after acquiring the target object determined in the previous image frame, the method further includes: acquiring object frames corresponding to a plurality of target objects, and carrying out mean value reduction processing on the target objects in the object frames; and carrying out image size change operation in equal proportion on the object frame subjected to the average value reduction processing to obtain a target object with a preset size.
In one embodiment, determining whether a tracking object corresponding to the target object exists in the current image frame includes: acquiring a preset position point and a preset size of a target object, and determining a tracking area in a current frame image according to the preset position point and the preset size; and inputting the target object and the determined tracking area into a pre-generated object tracking model to judge whether a tracking object corresponding to the target object exists in the current image frame.
In one embodiment, after determining whether a tracking object corresponding to the target object exists in the current image frame, the method further includes: when the tracking object corresponding to the target object exists in the current image frame, the target object is updated through the tracking object corresponding to the target object, and the next image frame is continuously extracted from the video to be processed to serve as the current image frame until the image frame in the video to be processed is traversed.
In one embodiment, after extracting the current image frame from the video to be processed, the method further includes: when the current image frame is the first frame image, performing object identification on the first frame image to obtain initial objects, normalizing the initial objects to serve as the target objects of the first frame image, and then continuing to extract the next image frame from the video to be processed as the current image frame; and when the current image frame is not the first frame image, continuing to acquire the target object determined in the previous image frame.
Specifically, referring to fig. 3, fig. 3 is a schematic structural diagram of the SiamRPN target tracking network in an embodiment.
Specifically, if the current image frame is the first frame image, object recognition is performed on it to obtain the initial objects, which are normalized as the target objects of the first frame image; for example, target detection is performed on the first frame image using the YOLOv3 algorithm (a target detection network). After acquiring all target frames that need to be tracked, the server performs a mean-subtraction preprocessing operation and proportionally resizes each target frame to produce a 127x127 template frame; if a target frame is smaller than 127 pixels, it is padded with the mean pixel value of the picture. The template frame is then used as the input to the template branch of the SiamRPN algorithm for matching in subsequent frames.
In this embodiment, the server first obtains the object frame of the target object in the previous image frame, performs mean subtraction on the target object in the object frame, and then resizes the mean-subtracted object frame in equal proportion to obtain a target object of a preset size. For example, the object frame is resized in equal proportion to produce a 127×127 template frame; if the object frame is smaller than 127 pixels, it is padded with the average value of the picture pixels. The result is the template frame of the SiamRPN target tracking network, that is, the Template Frame in fig. 3.
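The template-frame preparation described above (mean subtraction, equal-proportion resize to 127×127, padding with the picture's mean pixel value when the target frame is too small) can be sketched as follows. This is an illustrative NumPy-only approximation, not the patent's implementation: the function name, the square-crop rule, and the nearest-neighbour resize are all assumptions made for self-containedness.

```python
import numpy as np

def make_template(image, box, out_size=127):
    """Sketch of SiamRPN-style template preparation (illustrative names).

    image: H x W x 3 uint8 array; box: (x, y, w, h) target frame.
    Crops a square patch centred on the box, pads any out-of-bounds
    area with the mean pixel value, mean-subtracts, and resizes
    (nearest neighbour, to stay self-contained) to out_size x out_size.
    """
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    side = int(round(max(w, h)))          # simplified square-crop rule
    mean = image.mean(axis=(0, 1))        # per-channel mean, used for padding
    patch = np.tile(mean, (side, side, 1)).astype(image.dtype)
    # copy the part of the crop that lies inside the image bounds
    x0, y0 = int(round(cx - side / 2)), int(round(cy - side / 2))
    sx0, sy0 = max(0, x0), max(0, y0)
    sx1 = min(image.shape[1], x0 + side)
    sy1 = min(image.shape[0], y0 + side)
    patch[sy0 - y0:sy1 - y0, sx0 - x0:sx1 - x0] = image[sy0:sy1, sx0:sx1]
    # mean-subtraction preprocessing, then proportional resize to 127 x 127
    patch = patch.astype(np.float32) - mean.astype(np.float32)
    idx = np.arange(out_size) * side // out_size   # nearest-neighbour index map
    return patch[idx][:, idx]
```

In a real pipeline the resize would use proper interpolation (e.g. `cv2.resize`); the nearest-neighbour map above only keeps the example dependency-free.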
In addition, the server needs to determine a tracking area corresponding to the current image frame. For example, the server may first obtain a preset position point and a preset size of the target object, where the preset position point may refer to the central position point of the target object: the server obtains the position of the target object in the previous image frame, determines the corresponding position in the current image frame, and from it determines the corresponding central point (the preset position point) and the corresponding preset size. The server then determines the tracking area in the current image frame according to the preset position point and the preset size; for example, taking the central point of the position where the target object appeared in the previous image frame as the sampling center, the area expanded by 4 times is taken as the tracking area. The tracking area is subjected to the mean-subtraction preprocessing operation and resized in equal proportion to 255×255; the resized picture is then input into the detection branch of SiamRPN (the Detection Frame in fig. 3), which tracks the target object and predicts its position in the current frame.
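The tracking-area rule described above — sampling around the previous frame's centre point and taking a region expanded by 4 times — can be expressed as a small helper. The function and parameter names are illustrative; the patent itself does not specify this interface.

```python
def search_region(prev_box, scale=2.0):
    """Tracking area for the current frame (illustrative sketch).

    Takes the centre of the target's (x, y, w, h) box in the previous
    frame as the sampling centre and scales each side by `scale`,
    giving a region whose area is scale**2 (= 4 for scale=2) times
    the original box, matching the "4x expanded area" described above.
    """
    x, y, w, h = prev_box
    cx, cy = x + w / 2.0, y + h / 2.0        # sampling centre
    sw, sh = w * scale, h * scale            # expanded side lengths
    return (cx - sw / 2.0, cy - sh / 2.0, sw, sh)
```

The returned region would then be cropped (with mean-value padding where it falls outside the image), mean-subtracted, and resized to 255×255 before entering the detection branch.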
Specifically, the server tracks the target object through the SiamRPN target tracking network shown in fig. 3; the result is either an untracked state or a trackable state, where the untracked state indicates that the previous image frame is the last frame image corresponding to the target object. If the target object can be tracked, the server updates the target object with its corresponding tracking object and continues to extract the next image frame from the video to be processed as the current image frame until all image frames in the video to be processed have been traversed, thereby completing the processing of the entire video to be processed.
In this embodiment, the SiamRPN target tracking network can make accurate judgments and maintain long-term tracking even at complex intersections and under object occlusion, which effectively improves the counting accuracy. Moreover, because the SiamRPN network is used, the driving direction of each tracked vehicle, including from which direction it enters the intersection and in which direction it leaves, can be accurately judged from its tracked video sequence, effectively breaking through the limitation of a single scene and meeting the application requirements of real scenes.
In one embodiment, the object traffic statistical method further includes: and carrying out object identification on the current image frame to obtain a plurality of objects to be processed. After the target object is updated by the tracking object corresponding to the target object, the method further comprises the following steps: matching the object to be processed and the tracking object; acquiring an object to be processed which is not matched with the tracking object in the current image frame; and adding the acquired object to be processed into a target object of the current image frame, and recording the current image frame as an initial image frame of the added target object.
Specifically, referring to fig. 4 and fig. 5, fig. 4 is a schematic diagram of the network of the optimized version of YOLOv3 in an embodiment, and fig. 5 is a schematic structural diagram of a ResNet residual module in an embodiment. In this embodiment, the server performs object recognition on the current image frame to obtain a plurality of objects to be processed, which may be done by using a previously trained YOLO-v3 algorithm. The YOLO-v3 algorithm used in this embodiment is based on YOLOv3; to meet the real-time requirement, and because only a single category needs to be detected, the lightweight YOLOv3-Tiny model is used, which performs target detection efficiently and with sufficient accuracy compared with the native YOLOv3 algorithm, SSD, and the like, thereby improving both the accuracy and the efficiency of target detection. That is, the optimized version of YOLOv3 used in this embodiment has fewer network parameters and a lighter weight, is suitable for the detection of small tasks, and runs faster.
In addition, in this embodiment, after detecting the objects to be processed, the server matches the objects to be processed against the tracking objects according to their positions. For example, the similarity between an object to be processed and a tracking object may be determined; if the similarity is greater than a preset value, the object to be processed is determined to be the object tracked from the target object in the previous image frame, and if the similarity is not greater than the preset value for any tracking object, the object to be processed is determined to be a newly appearing target object. In the latter case, the server records the current image frame as the initial image frame of the added target object, that is, the first frame image corresponding to that target object, caching it in advance to facilitate the subsequent determination of the direction of the target object.
In this embodiment, the optimized version of YOLOv3 is used, which has fewer network parameters and a lighter weight, is suitable for the detection of small tasks, and runs faster. In addition, newly added target objects are determined by matching the objects to be processed against the tracking objects, which makes the judgment simpler, and recording the current image frame as the initial image frame of an added target object lays the foundation for judging the movement direction of that target object later.
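The matching of detected objects against tracked objects, with unmatched detections becoming newly added target objects, can be sketched with an IoU-based similarity test. This is a minimal illustration; the patent does not specify the similarity measure or the preset value, so IoU and the 0.5 threshold here are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1 = a[0] + a[2], a[1] + a[3]
    bx1, by1 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax1, bx1) - max(a[0], b[0]))
    ih = max(0.0, min(ay1, by1) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def new_targets(detections, tracked, thresh=0.5):
    """Detections matching no tracked object become new target objects.

    A detection whose IoU with every tracked box stays below `thresh`
    is treated as a newly appearing object; its frame of first
    appearance would then be recorded as its initial image frame.
    """
    return [d for d in detections
            if all(iou(d, t) < thresh for t in tracked)]
```

A production tracker would typically use Hungarian assignment rather than this greedy any-match test, but the unmatched-detection logic is the same.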
Specifically, referring to fig. 6, fig. 6 is a flowchart of a traffic intersection object flow statistical method in another embodiment, in which the object is a vehicle and the lightweight YOLO-v3 model is used in combination with the SiamRPN target tracking network. The method specifically includes the following steps:
First, the server obtains a given video sequence. The server then performs vehicle target detection on the motor vehicles in the picture of frame 1 of the video sequence using the YOLO-v3 algorithm.
After obtaining all target frames that need to be tracked from the vehicle target detection, the server performs the mean-subtraction preprocessing operation, resizes the target frames in equal proportion to produce 127×127 template frames (padding with the average value of the picture pixels if a target frame is smaller than 127 pixels), and then uses the template frames as the input of the template branch of the SiamRPN algorithm for matching against subsequent frames.
The server then acquires the frame-2 picture, takes the central point of the position where the target vehicle appeared in frame 1 as the sampling center in the frame-2 picture, takes the area expanded by 4 times as the tracking area, performs the mean-subtraction preprocessing on the tracking area, resizes it in equal proportion to 255×255, and inputs the resized picture into the detection branch of SiamRPN for vehicle tracking, so as to predict the position of the target vehicle in frame 2.
The server thus obtains the position in frame 2 of the target vehicle from frame 1. Similarly, the central point of the position of the target vehicle detected in frame 2 is taken as the sampling center in frame 3, the area expanded by 4 times is taken, the mean-subtraction preprocessing operation is performed, and the picture resized in equal proportion to 255×255 is input into the SiamRPN network as the input of the detection branch, and so on, until there are no more subsequent frames.
Finally, the server obtains the running track of each target vehicle starting from the frame in which it first appears; taking that first frame and the last frame of the target vehicle yields the direction in which the vehicle entered and its final direction, so as to determine in which direction the vehicle should be counted. It should be noted that the 1st frame in which a target vehicle appears refers to the first frame in which that newly appearing vehicle is detected, not the 1st frame of the video sequence. The server executes the above processes in a loop to complete the counting statistics of all vehicles in the entire video sequence and obtain the traffic flow.
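Taking the first and last frames of each tracked vehicle to decide its movement direction, and then counting vehicles per direction, can be sketched as follows. The four compass labels and the axis convention (image y growing downwards) are illustrative assumptions; the patent only requires that a direction be derived from the two positions and counted.

```python
from collections import Counter

def movement_direction(first_box, last_box):
    """Coarse movement direction from a vehicle's first and last
    (x, y, w, h) boxes — a simplified stand-in for the patent's
    direction-judgement frames."""
    fx = first_box[0] + first_box[2] / 2.0
    fy = first_box[1] + first_box[3] / 2.0
    lx = last_box[0] + last_box[2] / 2.0
    ly = last_box[1] + last_box[3] / 2.0
    dx, dy = lx - fx, ly - fy
    if abs(dx) >= abs(dy):
        return "east" if dx > 0 else "west"
    return "south" if dy > 0 else "north"   # image y grows downwards

def count_flows(tracks):
    """tracks: list of (first_box, last_box) pairs gathered within the
    preset time period; returns the vehicle count per direction."""
    return Counter(movement_direction(f, l) for f, l in tracks)
```

The resulting per-direction counts are the object flow that, per the embodiments above, can drive the generation of traffic light control instructions.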
The YOLOv3-Tiny algorithm network provided in this embodiment can effectively improve the mAP (mean average precision) of the target vehicle detection model, and the FPS of the running model allows images to be processed in real time, preparing the method for engineering use. In addition, the SiamRPN tracking algorithm provided in the above embodiment handles well the problem of losing the object when its size grows or when the variation between consecutive frames is large, effectively avoids the complex pipeline of conventional methods, improves the performance of object tracking, and counts the traffic flow more accurately.
It should be understood that although the steps in the flowcharts of fig. 2 and fig. 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a traffic intersection object flow statistics apparatus, including: the system comprises a receiving module 100, a target object obtaining module 200, a judging module 300, a tracking module 400 and a flow counting module 500, wherein:
the receiving module 100 is configured to acquire a to-be-processed video corresponding to a traffic intersection, and extract a current image frame from the to-be-processed video;
a target object obtaining module 200, configured to obtain a target object determined in a previous image frame;
a judging module 300, configured to judge whether a tracking object corresponding to the target object exists in the current image frame;
a tracking module 400, configured to, when a tracking object corresponding to the target object does not exist in the current image frame, obtain a direction determination frame corresponding to the target object from the processed image frame, and obtain a motion direction of the target object according to the direction determination frame;
the traffic flow counting module 500 is configured to count the number of target objects corresponding to each moving direction within a preset time period, so as to obtain the object flow at the traffic intersection.
In one embodiment, the tracking module 400 may include:
the frame determining unit is used for acquiring an initial image frame and a previous image frame of a target object detected for the first time;
the orientation determining unit is used for determining the positions and the directions of the target object in the initial image frame and the previous image frame;
and the movement direction determining unit is used for obtaining the movement direction of the target object according to the determined position and direction.
In one embodiment, the traffic intersection object flow statistic device further includes:
the instruction generating module is used for generating a traffic light control instruction corresponding to the traffic intersection according to the object flow;
and the instruction sending module is used for sending the traffic light control instruction to the traffic light control terminal corresponding to the traffic intersection.
In one embodiment, the traffic intersection object flow statistic device further includes:
the preprocessing module is used for acquiring object frames corresponding to a plurality of target objects and carrying out mean value reduction processing on the target objects in the object frames;
and the size unifying module is used for carrying out image size change operation with equal proportion on the object frame subjected to the average value reducing processing to obtain a target object with a preset size.
In one embodiment, the determining module 300 may include:
the tracking area determining unit is used for acquiring a preset position point and a preset size of the target object and determining a tracking area in the current frame image according to the preset position point and the preset size;
and the judging unit is used for inputting the target object and the determined tracking area into a pre-generated object tracking model so as to judge whether a tracking object corresponding to the target object exists in the current image frame.
In one embodiment, the traffic intersection object flow statistic device further includes:
and the updating module is used for updating the target object through the tracking object corresponding to the target object when the tracking object corresponding to the target object exists in the current image frame, and continuously extracting the next image frame from the video to be processed as the current image frame until the image frame in the video to be processed is traversed.
In one embodiment, the traffic intersection object flow statistic device further includes:
the object identification module is used for carrying out object identification on the current image frame to obtain a plurality of objects to be processed;
the matching module is used for matching the object to be processed and the tracking object; acquiring an object to be processed which is not matched with the tracking object in the current image frame;
and the newly adding module is used for adding the acquired object to be processed into the target object of the current image frame and recording the current image frame as the initial image frame of the added target object.
In one embodiment, the traffic intersection object flow statistic device further includes:
the first frame image determining module is used for performing object identification on the first frame image to obtain an initial object when the current image frame is the first frame image, normalizing the initial object as the target object of the first frame image, and then continuing to extract the current image frame from the video to be processed;
and the non-first frame image processing module is used for continuously acquiring the target object determined by the previous image frame when the current image frame is not the first frame image.
For the specific limitation of the traffic intersection object flow statistic device, reference may be made to the above limitation on the traffic intersection object flow statistic method, and details are not described herein again. All modules in the traffic intersection object flow statistical device can be completely or partially realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store the target object. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a traffic intersection object flow statistics method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: acquiring a video to be processed corresponding to a traffic intersection, and extracting a current image frame from the video to be processed; acquiring a target object determined by the previous image frame; judging whether a tracking object corresponding to the target object exists in the current image frame; when the tracking object corresponding to the target object does not exist in the current image frame, acquiring a direction judgment frame corresponding to the target object from the processed image frame, and obtaining the motion direction of the target object according to the direction judgment frame; and counting the number of the target objects corresponding to each movement direction in a preset time period to obtain the object flow of the traffic intersection.
In one embodiment, the obtaining of a direction determination frame corresponding to a target object from an already processed image frame and the obtaining of a moving direction of the target object from the direction determination frame, which is implemented when the processor executes the computer program, includes: acquiring an initial image frame and a previous image frame of a target object detected for the first time; determining the position and the direction of a target object in an initial image frame and a previous image frame; and obtaining the movement direction of the target object according to the determined position and direction.
In one embodiment, the counting of the number of target objects corresponding to each moving direction in a preset time period, which is implemented when the processor executes the computer program, to obtain the object traffic at the traffic intersection further includes: generating a traffic light control instruction corresponding to the traffic intersection according to the object flow; and sending the traffic light control instruction to a traffic light control terminal corresponding to the traffic intersection.
In one embodiment, after the processor, when executing the computer program, acquires the target object determined in the previous image frame, the method further includes: acquiring object frames corresponding to a plurality of target objects, and carrying out mean value reduction processing on the target objects in the object frames; and carrying out image size change operation in equal proportion on the object frame subjected to the average value reduction processing to obtain a target object with a preset size.
In one embodiment, the determining whether a tracking object corresponding to the target object exists in the current image frame, which is implemented when the processor executes the computer program, includes: acquiring a preset position point and a preset size of a target object, and determining a tracking area in a current frame image according to the preset position point and the preset size; and inputting the target object and the determined tracking area into a pre-generated object tracking model to judge whether a tracking object corresponding to the target object exists in the current image frame.
In one embodiment, after the processor, when executing the computer program, determines whether a tracking object corresponding to the target object exists in the current image frame, the method further includes: when the tracking object corresponding to the target object exists in the current image frame, the target object is updated through the tracking object corresponding to the target object, and the next image frame is continuously extracted from the video to be processed to serve as the current image frame until the image frame in the video to be processed is traversed.
In one embodiment, the processor, when executing the computer program, further performs the steps of: carrying out object identification on a current image frame to obtain a plurality of objects to be processed; after the target object is updated by the tracking object corresponding to the target object, the processor is implemented when executing the computer program, the method further includes: matching the object to be processed and the tracking object; acquiring an object to be processed which is not matched with the tracking object in the current image frame; and adding the acquired object to be processed into a target object of the current image frame, and recording the current image frame as an initial image frame of the added target object.
In one embodiment, when executing the computer program, the processor further implements the following after extracting the current image frame from the video to be processed: when the current image frame is the first frame image, performing object identification on the first frame image to obtain an initial object, normalizing the initial object as the target object of the first frame image, and then continuing to extract the current image frame from the video to be processed; and when the current image frame is not the first frame image, continuing to acquire the target object determined in the previous image frame.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a video to be processed corresponding to a traffic intersection, and extracting a current image frame from the video to be processed; acquiring a target object determined by the previous image frame; judging whether a tracking object corresponding to the target object exists in the current image frame; when the tracking object corresponding to the target object does not exist in the current image frame, acquiring a direction judgment frame corresponding to the target object from the processed image frame, and obtaining the motion direction of the target object according to the direction judgment frame; and counting the number of the target objects corresponding to each movement direction in a preset time period to obtain the object flow of the traffic intersection.
In one embodiment, the acquiring of a direction determination frame corresponding to the target object from the processed image frames and the obtaining of the moving direction of the target object according to the direction determination frame, implemented when the computer program is executed by the processor, include: acquiring an initial image frame and a previous image frame of a target object detected for the first time; determining the position and the direction of the target object in the initial image frame and the previous image frame; and obtaining the movement direction of the target object according to the determined position and direction.
In one embodiment, the counting of the number of target objects corresponding to each moving direction within a preset time period, which is implemented when the computer program is executed by the processor, to obtain the object traffic of the traffic intersection further includes: generating a traffic light control instruction corresponding to the traffic intersection according to the object flow; and sending the traffic light control instruction to a traffic light control terminal corresponding to the traffic intersection.
In one embodiment, the computer program, when executed by the processor, further comprises, after acquiring the target object determined in the previous image frame: acquiring object frames corresponding to a plurality of target objects, and carrying out mean value reduction processing on the target objects in the object frames; and carrying out image size change operation in equal proportion on the object frame subjected to the average value reduction processing to obtain a target object with a preset size.
In one embodiment, the determining whether a tracking object corresponding to the target object exists in the current image frame, implemented when the computer program is executed by the processor, includes: acquiring a preset position point and a preset size of a target object, and determining a tracking area in a current frame image according to the preset position point and the preset size; and inputting the target object and the determined tracking area into a pre-generated object tracking model to judge whether a tracking object corresponding to the target object exists in the current image frame.
In one embodiment, the determining whether a tracking object corresponding to the target object exists in the current image frame when the computer program is executed by the processor further comprises: when the tracking object corresponding to the target object exists in the current image frame, the target object is updated through the tracking object corresponding to the target object, and the next image frame is continuously extracted from the video to be processed to serve as the current image frame until the image frame in the video to be processed is traversed.
In one embodiment, the computer program when executed by the processor further performs the steps of: carrying out object identification on a current image frame to obtain a plurality of objects to be processed; after the target object is updated by the tracking object corresponding to the target object, the computer program, when executed by the processor, further includes: matching the object to be processed and the tracking object; acquiring an object to be processed which is not matched with the tracking object in the current image frame; and adding the acquired object to be processed into a target object of the current image frame, and recording the current image frame as an initial image frame of the added target object.
In one embodiment, the computer program, when executed by the processor, further implements the following after extracting the current image frame from the video to be processed: when the current image frame is the first frame image, performing object identification on the first frame image to obtain an initial object, normalizing the initial object as the target object of the first frame image, and then continuing to extract the current image frame from the video to be processed; and when the current image frame is not the first frame image, continuing to acquire the target object determined in the previous image frame.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. A traffic intersection object flow statistics method, the method comprising:
acquiring a video to be processed corresponding to a traffic intersection, and extracting a current image frame from the video to be processed;
acquiring a target object determined by the previous image frame;
judging whether a tracking object corresponding to the target object exists in the current image frame;
when the tracking object corresponding to the target object does not exist in the current image frame, acquiring a direction judgment frame corresponding to the target object from the processed image frame, and obtaining the motion direction of the target object according to the direction judgment frame;
and counting the number of the target objects corresponding to each movement direction in a preset time period to obtain the object flow of the traffic intersection.
2. The method according to claim 1, wherein the acquiring a direction determination frame corresponding to the target object from the processed image frames and obtaining the motion direction of the target object according to the direction determination frame comprises:
acquiring an initial image frame in which the target object is detected for the first time and a last image frame in which the target object appears;
determining the position and orientation of the target object in the initial image frame and the last image frame;
and obtaining the motion direction of the target object according to the determined positions and orientations.
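One plausible realization of the direction determination in claim 2 is to compare the object's bounding-box centers in the initial and last image frames. The coordinate convention (y axis pointing down, as is common in image processing) and the four coarse direction labels are assumptions for illustration; the patent does not fix them.

```python
def motion_direction(initial_box, last_box):
    """Infer a coarse motion direction from two bounding boxes.

    Boxes are (x1, y1, x2, y2) in image coordinates, y axis pointing down.
    """
    cx0 = (initial_box[0] + initial_box[2]) / 2
    cy0 = (initial_box[1] + initial_box[3]) / 2
    cx1 = (last_box[0] + last_box[2]) / 2
    cy1 = (last_box[1] + last_box[3]) / 2
    dx, dy = cx1 - cx0, cy1 - cy0
    if abs(dx) >= abs(dy):          # dominant horizontal displacement
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"
```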
3. The method according to claim 1, wherein after the counting of the number of target objects corresponding to each motion direction within a preset time period to obtain the object flow of the traffic intersection, the method further comprises:
generating a traffic light control instruction corresponding to the traffic intersection according to the object flow;
and sending the traffic light control instruction to a traffic light control terminal corresponding to the traffic intersection.
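Claim 3 leaves the mapping from object flow to a traffic light control instruction unspecified. One simple rule, shown here purely as an assumed example, is to allocate green time proportionally to the counted flow per direction, with a minimum green per approach.

```python
def green_splits(flow, cycle=120, min_green=15):
    """Allocate green seconds per approach proportionally to its counted flow.

    `flow` maps direction -> object count.  Due to rounding, the splits may
    sum to slightly more or less than `cycle`.
    """
    total = sum(flow.values()) or 1
    spare = cycle - min_green * len(flow)  # seconds left after minimum greens
    return {d: min_green + round(spare * c / total) for d, c in flow.items()}
```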
4. The method of claim 1, wherein after the acquiring of the target object determined from the previous image frame, the method further comprises:
acquiring an object frame corresponding to the target object, and performing mean subtraction on the target object within the object frame;
and performing a proportional image resizing operation on the mean-subtracted object frame to obtain a target object of a preset size.
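The preprocessing of claim 4 (mean subtraction followed by proportional resizing to a preset size) is a common normalization step for tracking networks. A dependency-free NumPy sketch follows; the output size, zero padding, and nearest-neighbour resampling are assumptions, not details from the patent.

```python
import numpy as np


def normalize_patch(patch, out_size=127):
    """Subtract the per-channel mean, then resize proportionally to
    out_size x out_size, zero-padding to preserve the aspect ratio.
    (Nearest-neighbour indexing keeps the sketch dependency-free.)"""
    patch = patch.astype(np.float32) - patch.mean(axis=(0, 1))  # mean subtraction
    h, w = patch.shape[:2]
    scale = out_size / max(h, w)                                # proportional scale
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = patch[ys][:, xs]
    out = np.zeros((out_size, out_size, patch.shape[2]), np.float32)
    out[:nh, :nw] = resized                                     # pad the remainder
    return out
```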
5. The method according to any one of claims 1 to 4, wherein the determining whether a tracking object corresponding to the target object exists in the current image frame comprises:
acquiring a preset position point and the preset size of the target object, and determining a tracking area in the current image frame according to the preset position point and the preset size;
inputting the target object and the determined tracking area into a pre-generated object tracking model to determine whether a tracking object corresponding to the target object exists in the current image frame.
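The tracking-area computation of claim 5 resembles the search-region crop used by Siamese-style trackers: a region centered on the preset position point, somewhat larger than the preset size, clamped to the image bounds. The 2x enlargement factor below is an assumption, not stated in the claims.

```python
def tracking_area(center, template_size, image_w, image_h, factor=2.0):
    """Return the (x1, y1, x2, y2) search region around a preset position
    point, sized `factor` times the template and clamped to the image."""
    cx, cy = center
    half = template_size * factor / 2
    x1 = max(0, int(cx - half))
    y1 = max(0, int(cy - half))
    x2 = min(image_w, int(cx + half))
    y2 = min(image_h, int(cy + half))
    return x1, y1, x2, y2
```

The crop and the target template would then be passed to the pre-generated object tracking model mentioned in the claim.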
6. The method of claim 5, wherein after the determining whether a tracking object corresponding to the target object exists in the current image frame, the method further comprises:
when a tracking object corresponding to the target object exists in the current image frame, updating the target object with the tracking object corresponding to the target object, and continuing to extract a next image frame from the video to be processed as the current image frame until all image frames in the video to be processed have been traversed.
7. The method of claim 6, further comprising:
performing object recognition on the current image frame to obtain a plurality of objects to be processed;
after the target object is updated with the tracking object corresponding to the target object, the method further comprises:
matching the objects to be processed against the tracking objects, and acquiring any object to be processed in the current image frame that is not matched with a tracking object;
and adding the acquired object to be processed as a target object of the current image frame, and recording the current image frame as the initial image frame of the added target object.
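Claim 7 does not specify how objects to be processed are matched against tracking objects; a common choice, shown here as an assumed example, is intersection-over-union (IoU) matching, where any detection with no sufficiently overlapping track becomes a new target object.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0


def unmatched_detections(detections, tracks, thresh=0.5):
    """Return detections with no tracked box overlapping above `thresh`;
    per claim 7, these become new target objects of the current frame."""
    return [d for d in detections
            if all(iou(d, t) < thresh for t in tracks)]
```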
8. The method according to any one of claims 1 to 4, wherein after the extracting the current image frame from the video to be processed, the method further comprises:
when the current image frame is the first frame image, performing object recognition on the first frame image to obtain an initial object, performing standardization processing on the initial object to serve as the target object of the first frame image, and then continuing to extract a next image frame from the video to be processed as the current image frame;
and when the current image frame is not the first frame image, continuing to acquire the target object determined from the previous image frame.
9. A traffic intersection object flow statistics apparatus, the apparatus comprising:
the receiving module is used for acquiring a video to be processed corresponding to a traffic intersection and extracting a current image frame from the video to be processed;
the target object acquisition module is used for acquiring the target object determined from the previous image frame;
the judging module is used for determining whether a tracking object corresponding to the target object exists in the current image frame;
the tracking module is used for acquiring, when no tracking object corresponding to the target object exists in the current image frame, a direction determination frame corresponding to the target object from the processed image frames, and obtaining the motion direction of the target object according to the direction determination frame;
and the flow counting module is used for counting the number of target objects corresponding to each motion direction within a preset time period to obtain the object flow of the traffic intersection.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202010165396.0A 2020-03-11 2020-03-11 Traffic intersection object flow statistical method, device, computer equipment and medium Pending CN111383455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010165396.0A CN111383455A (en) 2020-03-11 2020-03-11 Traffic intersection object flow statistical method, device, computer equipment and medium

Publications (1)

Publication Number Publication Date
CN111383455A true CN111383455A (en) 2020-07-07

Family

ID=71222682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010165396.0A Pending CN111383455A (en) 2020-03-11 2020-03-11 Traffic intersection object flow statistical method, device, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN111383455A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797940A (en) * 2020-07-20 2020-10-20 中国科学院长春光学精密机械与物理研究所 Image identification method based on ocean search and rescue and related device
CN113112828A (en) * 2021-04-15 2021-07-13 北京航迹科技有限公司 Intersection monitoring method, device, equipment, storage medium and program product
US20220108607A1 (en) * 2020-12-21 2022-04-07 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method of controlling traffic, electronic device, roadside device, cloud control platform, and storage medium

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004171289A (en) * 2002-11-20 2004-06-17 Hitachi Ltd Road traffic measuring device
CN101320427A (en) * 2008-07-01 2008-12-10 北京中星微电子有限公司 Video monitoring method and system with auxiliary objective monitoring function
CN101329815A (en) * 2008-07-07 2008-12-24 山东省计算中心 Novel system and method for detecting four-phase vehicle flow of a traffic road junction
CN101587539A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Method and system of population flow statistics based on intelligent video identification technology
CN101847206A (en) * 2010-04-21 2010-09-29 北京交通大学 Pedestrian traffic statistical method and system based on traffic monitoring facilities
CN102819764A (en) * 2012-07-18 2012-12-12 郑州金惠计算机系统工程有限公司 Method for counting pedestrian flow from multiple views under complex scene of traffic junction
CN103383816A (en) * 2013-07-01 2013-11-06 青岛海信网络科技股份有限公司 Method and device for controlling traffic signals of multipurpose electronic police mixed traffic flow detection
CN104318263A (en) * 2014-09-24 2015-01-28 南京邮电大学 Real-time high-precision people stream counting method
CN104599502A (en) * 2015-02-13 2015-05-06 重庆邮电大学 Method for traffic flow statistics based on video monitoring
CN105069429A (en) * 2015-07-29 2015-11-18 中国科学技术大学先进技术研究院 People flow analysis statistics method based on big data platform and people flow analysis statistics system based on big data platform
CN107316462A (en) * 2017-08-30 2017-11-03 济南浪潮高新科技投资发展有限公司 A kind of flow statistical method and device
CN108021848A (en) * 2016-11-03 2018-05-11 浙江宇视科技有限公司 Passenger flow volume statistical method and device
CN108205660A (en) * 2017-12-12 2018-06-26 浙江浙大中控信息技术有限公司 Infrared image population flow detection device and detection method based on top view angle
CN108648463A (en) * 2018-05-14 2018-10-12 三峡大学 Vehicle checking method and system in a kind of crossing traffic video
CN108986465A (en) * 2018-07-27 2018-12-11 深圳大学 A kind of method of vehicle Flow Detection, system and terminal device
CN109658688A (en) * 2017-10-11 2019-04-19 深圳市哈工大交通电子技术有限公司 The detection method and device of access connection traffic flow based on deep learning
CN109815936A (en) * 2019-02-21 2019-05-28 深圳市商汤科技有限公司 A kind of target object analysis method and device, computer equipment and storage medium
CN110718061A (en) * 2019-10-17 2020-01-21 长沙理工大学 Traffic intersection vehicle flow statistical method and device, storage medium and electronic equipment
CN110838134A (en) * 2019-10-10 2020-02-25 北京海益同展信息科技有限公司 Target object statistical method and device, computer equipment and storage medium



Similar Documents

Publication Publication Date Title
KR102155182B1 (en) Video recording method, server, system and storage medium
CN112288770A (en) Video real-time multi-target detection and tracking method and device based on deep learning
CN111383455A (en) Traffic intersection object flow statistical method, device, computer equipment and medium
CN110672111B (en) Vehicle driving path planning method, device, system, medium and equipment
CN110634153A (en) Target tracking template updating method and device, computer equipment and storage medium
CN110163188B (en) Video processing and method, device and equipment for embedding target object in video
CN111178245A (en) Lane line detection method, lane line detection device, computer device, and storage medium
CN111860147A (en) Pedestrian re-identification model optimization processing method and device and computer equipment
CN109543691A (en) Ponding recognition methods, device and storage medium
CN113678136A (en) Obstacle detection method and device based on unmanned technology and computer equipment
WO2023030182A1 (en) Image generation method and apparatus
CN111444798A (en) Method and device for identifying driving behavior of electric bicycle and computer equipment
CN112434566A (en) Passenger flow statistical method and device, electronic equipment and storage medium
KR20220076398A (en) Object recognition processing apparatus and method for ar device
CN108764017B (en) Bus passenger flow statistical method, device and system
CN113160272B (en) Target tracking method and device, electronic equipment and storage medium
CN111008621A (en) Object tracking method and device, computer equipment and storage medium
CN113256683A (en) Target tracking method and related equipment
CN113112479A (en) Progressive target detection method and device based on key block extraction
Zhang et al. High Resolution Feature Recovering for Accelerating Urban Scene Parsing.
CN113515980B (en) Model training method, device, equipment and storage medium
CN114419018A (en) Image sampling method, system, device and medium
CN114494977A (en) Abnormal parking detection method, electronic equipment and storage medium
CN114119757A (en) Image processing method, apparatus, device, medium, and computer program product
CN113160406A (en) Road three-dimensional reconstruction method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20221018