CN113763425A - Road area calibration method and electronic equipment


Info

Publication number
CN113763425A
CN113763425A (application CN202111006532.2A)
Authority
CN
China
Prior art keywords
target vehicle
target
vehicle
detection
road
Prior art date
Legal status
Pending
Application number
CN202111006532.2A
Other languages
Chinese (zh)
Inventor
姜东昕
赵建龙
臧海洋
Current Assignee
Hisense TransTech Co Ltd
Original Assignee
Hisense TransTech Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense TransTech Co Ltd filed Critical Hisense TransTech Co Ltd
Priority to CN202111006532.2A priority Critical patent/CN113763425A/en
Publication of CN113763425A publication Critical patent/CN113763425A/en
Pending legal-status Critical Current


Classifications

    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06F 18/23 Pattern recognition: clustering techniques
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30236 Traffic on road, railway or crossing
    • G06T 2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a road area calibration method and electronic equipment. The method comprises the following steps: for any frame of video image within a specified duration of a road video, performing vehicle target detection on the video image with a preset target detection neural network model to obtain the detected position of each target vehicle; identifying each target vehicle based on its detected position to obtain the identifier of each target vehicle, wherein the same target vehicle has the same identifier in different video images; for any target vehicle, determining the motion trajectory of the target vehicle in the road from the identifier and the detected position of the target vehicle in each frame of video image within the specified duration; and determining the vehicle-dense area in the road according to the motion trajectories of the target vehicles in the road. The method can therefore determine the vehicle-dense area in the road within the specified duration, allowing timely intervention in that area and improving vehicle traveling efficiency.

Description

Road area calibration method and electronic equipment
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a road area calibration method and electronic equipment.
Background
With the rapid development of the economy, the number of vehicles on urban roads keeps growing and urban traffic becomes increasingly congested. Vehicle-dense areas in urban traffic roads therefore need to be calibrated, allowing advance intervention in such areas and improving vehicle traveling efficiency.
However, the prior art provides no method for calibrating vehicle-dense areas in a road, which results in inefficient travel.
Disclosure of Invention
The exemplary embodiments of the disclosure provide a road area calibration method and an electronic device for calibrating the vehicle-dense area in a road, allowing advance intervention in that area and improving travel efficiency.
A first aspect of the present disclosure provides a road area calibration method, including:
for any frame of video image within a specified duration of a road video, performing vehicle target detection on the video image with a preset target detection neural network model to obtain the detected position of each target vehicle;
identifying each target vehicle based on its detected position to obtain the identifier of each target vehicle, wherein the same target vehicle has the same identifier in different video images;
for any target vehicle, determining the motion trajectory of the target vehicle in the road from the identifier and the detected position of the target vehicle in each frame of video image within the specified duration;
and determining the vehicle-dense area in the road according to the motion trajectories of the target vehicles in the road.
In this embodiment, vehicle target detection is performed on any frame of video image within a specified duration of a road video with a preset target detection neural network model to obtain the detected position of each target vehicle. Each target vehicle is then identified based on its detected position to obtain its identifier, and for any target vehicle, its motion trajectory in the road is determined from its identifier and detected position in each frame of video image within the specified duration. Finally, the vehicle-dense area in the road is determined according to the motion trajectories of the target vehicles in the road. This embodiment can therefore determine the vehicle-dense area in the road within the specified duration, allowing timely intervention in that area and improving vehicle traveling efficiency.
In one embodiment, at least one target convolution in the backbone network of the target detection neural network model is replaced by a hole convolution whose expansion rate is a set value, the target convolution being a convolution of a specified size.
In this embodiment, at least one convolution of a specified size in the backbone network of the target detection neural network model is replaced by a hole convolution whose expansion rate is a set value. The hole convolution enlarges the receptive field of the network so that more abstract features and spatial information can be acquired from the video image, making the vehicle target detection result more accurate.
In one embodiment, identifying each target vehicle based on the detected position of each target vehicle to obtain the identifier of each target vehicle includes:
for any target vehicle, matching the detected position of the target vehicle against the predicted positions of the target vehicles through the Hungarian algorithm, and determining the predicted position corresponding to the detected position of the target vehicle, wherein the predicted position of each target vehicle is obtained by predicting from the detected position of that target vehicle in the previous frame of video image with a Kalman filtering algorithm;
if it is determined that one predicted position corresponds to the detected position of the target vehicle, determining the identifier of the target vehicle in the previous frame of video image corresponding to that predicted position as the identifier of the target vehicle;
if it is determined that multiple predicted positions correspond to the detected position of the target vehicle, determining the predicted position with the highest priority based on the cascade matching parameters corresponding to the predicted positions, and determining the identifier of the target vehicle in the previous frame of video image corresponding to the highest-priority predicted position as the identifier of the target vehicle, wherein the cascade matching parameters represent the priority of the predicted positions.
In this embodiment, the detected position of a target vehicle is matched against the predicted positions of the target vehicles through the Hungarian algorithm to determine the corresponding predicted position. When multiple predicted positions correspond to the detected position, the predicted position with the highest priority is selected based on the cascade matching parameters, and the identifier of the target vehicle in the previous frame of video image corresponding to that predicted position is taken as the identifier of the target vehicle, so that the determined identifier is more accurate.
In one embodiment, before matching, through the Hungarian algorithm, the detected position of any target vehicle against the predicted positions of the target vehicles and determining the corresponding predicted position, the method further includes:
for each target vehicle with multiple detected positions, screening the detected positions of the target vehicle based on a preset algorithm to obtain the screened detected position of each target vehicle.
In this embodiment, before the detected positions are matched against the predicted positions through the Hungarian algorithm, the multiple detected positions of any target vehicle that has them are screened first, which ensures the accuracy of the matching result.
In one embodiment, screening the detected position of each target vehicle based on a preset algorithm to obtain the screened detected position of each target vehicle includes:
screening the detected positions of each target vehicle with a non-maximum suppression algorithm, and then screening the remaining detected positions again according to the confidence of each detected position to obtain the screened detected position of each target vehicle, wherein the confidence of the detected position of each target vehicle is output by the preset target detection neural network model.
In this embodiment, the detected positions of each target vehicle are screened twice: first with the non-maximum suppression algorithm and then by confidence. This double screening makes the detected position of each target vehicle more accurate.
In one embodiment, determining the vehicle-dense area in the road according to the motion trajectory of each target vehicle in the road includes:
dividing the motion trajectory of each target vehicle into a specified number of sub-trajectories, wherein the sub-trajectories of the same motion trajectory are of equal length;
clustering the sub-trajectories with a preset clustering algorithm to obtain a plurality of trajectory clustering areas;
and for any trajectory clustering area, if it is determined that the number of sub-trajectories in the trajectory clustering area is greater than a specified threshold, determining the trajectory clustering area as a vehicle-dense area in the road.
In this embodiment, the motion trajectory of each target vehicle is divided into a specified number of equal-length sub-trajectories, and the sub-trajectories are clustered with a preset clustering algorithm to obtain a plurality of trajectory clustering areas; any trajectory clustering area containing more sub-trajectories than the specified threshold is determined to be a vehicle-dense area in the road. Dividing each motion trajectory into multiple sub-trajectories makes the clustering finer-grained, so the resulting trajectory clustering areas are more accurate.
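The split-and-count logic above can be sketched in a few lines of Python. This is a minimal illustration only: the patent does not name its preset clustering algorithm, so the sketch bins sub-trajectory midpoints into grid cells as a hypothetical stand-in for clustering, and all function names and parameters (`n_segments`, `cell_size`, `threshold`) are invented for illustration.

```python
def split_track(track, n_segments):
    """Divide a trajectory (a list of (x, y) points) into n_segments
    sub-trajectories with an equal number of points each."""
    seg_len = len(track) // n_segments
    return [track[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]

def dense_regions(tracks, n_segments, cell_size, threshold):
    """Count sub-trajectories per grid cell (keyed by the cell containing
    each sub-trajectory's midpoint) and flag cells whose count exceeds
    the threshold as vehicle-dense areas."""
    counts = {}
    for track in tracks:
        for seg in split_track(track, n_segments):
            mid_x = sum(p[0] for p in seg) / len(seg)
            mid_y = sum(p[1] for p in seg) / len(seg)
            cell = (int(mid_x // cell_size), int(mid_y // cell_size))
            counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, count in counts.items() if count > threshold}
```

For example, six trajectories concentrated near the origin plus one far away yield a single dense cell around the origin; the lone distant trajectory never crosses the threshold.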
A second aspect of the present disclosure provides an electronic device comprising a storage unit and a processor, wherein:
the storage unit is configured to store road videos;
the processor configured to:
for any frame of video image within a specified duration of a road video, perform vehicle target detection on the video image with a preset target detection neural network model to obtain the detected position of each target vehicle;
identify each target vehicle based on its detected position to obtain the identifier of each target vehicle, wherein the same target vehicle has the same identifier in different video images;
for any target vehicle, determine the motion trajectory of the target vehicle in the road from the identifier and the detected position of the target vehicle in each frame of video image within the specified duration;
and determine the vehicle-dense area in the road according to the motion trajectories of the target vehicles in the road.
In one embodiment, at least one target convolution in the backbone network of the target detection neural network model is replaced by a hole convolution whose expansion rate is a set value, the target convolution being a convolution of a specified size.
In one embodiment, when identifying each target vehicle based on the detected position of each target vehicle to obtain the identifier of each target vehicle, the processor is specifically configured to:
for any target vehicle, match the detected position of the target vehicle against the predicted positions of the target vehicles through the Hungarian algorithm, and determine the predicted position corresponding to the detected position of the target vehicle, wherein the predicted position of each target vehicle is obtained by predicting from the detected position of that target vehicle in the previous frame of video image with a Kalman filtering algorithm;
if it is determined that one predicted position corresponds to the detected position of the target vehicle, determine the identifier of the target vehicle in the previous frame of video image corresponding to that predicted position as the identifier of the target vehicle;
if it is determined that multiple predicted positions correspond to the detected position of the target vehicle, determine the predicted position with the highest priority based on the cascade matching parameters corresponding to the predicted positions, and determine the identifier of the target vehicle in the previous frame of video image corresponding to the highest-priority predicted position as the identifier of the target vehicle, wherein the cascade matching parameters represent the priority of the predicted positions.
In one embodiment, the processor is further configured to:
before matching, through the Hungarian algorithm, the detected position of any target vehicle against the predicted positions of the target vehicles and determining the predicted position corresponding to the detected position, screen, for each target vehicle with multiple detected positions, the detected positions of the target vehicle based on a preset algorithm to obtain the screened detected position of each target vehicle.
In one embodiment, when screening the detected position of each target vehicle based on a preset algorithm to obtain the screened detected position of each target vehicle, the processor is specifically configured to:
screen the detected positions of each target vehicle with a non-maximum suppression algorithm, and then screen the remaining detected positions again according to the confidence of each detected position to obtain the screened detected position of each target vehicle, wherein the confidence of the detected position of each target vehicle is output by the preset target detection neural network model.
In one embodiment, when determining the vehicle-dense area in the road according to the motion trajectory of each target vehicle in the road, the processor is specifically configured to:
divide the motion trajectory of each target vehicle into a specified number of sub-trajectories, wherein the sub-trajectories of the same motion trajectory are of equal length;
cluster the sub-trajectories with a preset clustering algorithm to obtain a plurality of trajectory clustering areas;
and for any trajectory clustering area, if it is determined that the number of sub-trajectories in the trajectory clustering area is greater than a specified threshold, determine the trajectory clustering area as a vehicle-dense area in the road.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a suitable scenario in accordance with an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a road region calibration method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of hole convolution according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of the target detection neural network model RefineDet in a road region calibration method according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of vehicle target detection in a road region calibration method according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram illustrating a target vehicle position screening in a road region calibration method according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of a target vehicle identification in a road region calibration method according to an embodiment of the disclosure;
FIG. 8 is a schematic diagram of a target vehicle trajectory in a road region calibration method according to an embodiment of the present disclosure;
FIG. 9 is a schematic view illustrating a process of determining a dense vehicle area in a road area calibration method according to an embodiment of the disclosure;
FIG. 10 is a schematic flow chart of vehicle dense area determination according to one embodiment of the present disclosure;
FIG. 11 is a second flowchart of a road region calibration method according to an embodiment of the disclosure;
FIG. 12 is a schematic diagram of a road area calibration apparatus according to one embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to one embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of associated objects, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application scenario described in the embodiment of the present disclosure is for more clearly illustrating the technical solution of the embodiment of the present disclosure, and does not form a limitation on the technical solution provided in the embodiment of the present disclosure, and as a person having ordinary skill in the art knows, with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present disclosure is also applicable to similar technical problems. In the description of the present disclosure, the term "plurality" means two or more unless otherwise specified.
The prior art provides no method for calibrating the vehicle-dense area in a road, which results in inefficient travel.
Therefore, the present disclosure provides a road area calibration method. Vehicle target detection is performed on any frame of video image within a specified duration of a road video with a preset target detection neural network model to obtain the detected position of each target vehicle. Each target vehicle is then identified based on its detected position to obtain its identifier, and for any target vehicle, its motion trajectory in the road is determined from its identifier and detected position in each frame of video image within the specified duration. Finally, the vehicle-dense area in the road is determined according to the motion trajectories of the target vehicles in the road. The method can therefore determine the vehicle-dense area in the road within the specified duration, allowing timely intervention in that area and improving vehicle traveling efficiency. The embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an application scenario of the road area calibration method includes a camera 110, a server 120, and a terminal device 130, where in fig. 1, one camera 110 and one terminal device 130 are taken as examples, and the number of cameras 110 and terminal devices 130 is not limited in practice. The terminal device 130 may be a mobile phone, a tablet computer, a personal computer, and the like. The server 120 may be implemented by a single server or may be implemented by a plurality of servers. The server 120 may be implemented by a physical server or may be implemented by a virtual server.
In a possible application scenario, the camera 110 sends the acquired road video to the server 120, and the server 120 performs vehicle target detection on any one frame of video image within a specified duration in the received road video by using a preset target detection neural network model to obtain a detection position of each target vehicle; identifying each target vehicle based on the detection position of each target vehicle to obtain the identification of each target vehicle, wherein the identification of the same target vehicle in different video images is the same; then, the server 120 determines the movement track of the target vehicle in the road according to the identification and the detection position of the target vehicle in each frame of video image within the specified duration for any one target vehicle; finally, the server 120 determines the vehicle dense area in the road according to the motion track of each target vehicle in the road, and sends the vehicle dense area in the road to the terminal device 130 for display.
Fig. 2 is a schematic flow chart of a road area calibration method of the present disclosure, which may include the following steps:
step 201: aiming at any frame of video image within a specified time length in a road video, carrying out vehicle target detection on the video image by using a preset target detection neural network model to obtain the detection position of each target vehicle;
in order to make the vehicle target detection result more accurate, in one embodiment, the expansion rate of at least one target convolution in the backbone network of the target detection neural network model is a hole convolution with a set value, and the target convolution is a convolution with a specified size.
Note that, the expansion ratio in this example is greater than 1.
The convolution with a convolution of 3 × 3 of a given size will be described as an example. As shown in fig. 3, an image 3a in fig. 3 is a schematic diagram of convolution operation performed by convolution with a size of 3 × 3 in the target detection neural network model in the prior art, and as can be seen from the image 3a, a field of perception of convolution operation performed by convolution with a size of 3 × 3 in the prior art is 3 × 3. As shown in image 3b of fig. 3, the field of the convolution operation with the same size of 3 x 3 and an expansion rate of 2 is 5 x 5. Therefore, the improved target detection neural network in the embodiment expands the global visual field, so as to acquire more abstract features and spatial information in the video image. The accuracy of the vehicle target detection result is further improved.
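The 3 x 3 versus 5 x 5 receptive fields quoted above follow from the standard formula for the window covered by a dilated kernel, k + (k - 1)(d - 1) for kernel size k and expansion rate d. A one-line check, purely illustrative:

```python
def effective_kernel_size(k, d):
    """Side length of the input window covered by a k x k convolution
    with expansion (dilation) rate d: k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

# An ordinary 3 x 3 convolution (d = 1) covers a 3 x 3 window, while the
# same kernel with expansion rate 2 covers 5 x 5, matching images 3a
# and 3b of fig. 3.
```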
In the following, taking the RefineDet object detection neural network model as an example, the vehicle target detection process of the present disclosure is described in detail. Fig. 4 is a schematic structural diagram of the improved RefineDet, which includes an ARM (Anchor Refinement Module), a TCB (Transfer Connection Block), and an ODM (Object Detection Module). In this embodiment, the case where the second convolution in the ARM is replaced by a hole convolution with an expansion rate of 2 is taken as an example:
First, the video image passes through multiple convolution layers in the ARM to obtain feature maps of different sizes. The ARM roughly estimates the positions and scores of vehicles and filters out invalid positions, which reduces the search space of the classifier and coarsely adjusts the positions of the remaining vehicles. Meanwhile, the feature maps produced by the ARM are fed into the ODM through the TCB, where high-level and low-level feature maps are fused to enhance the semantic information of the low-level feature maps so that smaller vehicle targets can be detected. The position of each target vehicle is thus obtained. As shown in fig. 5, image 5a is the video image to be detected and image 5b is the video image after vehicle target detection, which contains the position of each target vehicle.
As shown in fig. 5, each target vehicle may have multiple detected positions. To ensure the accuracy of the identification result, in one embodiment, for each target vehicle with multiple detected positions, the detected positions of the target vehicle are screened based on a preset algorithm to obtain the screened detected position of each target vehicle.
Image 5b in fig. 5 is the image after vehicle target detection. As shown in image 5b, two of the vehicles have multiple bounding boxes, i.e., multiple detected positions. The bounding boxes of these two vehicles need to be screened; the result after screening is shown in fig. 6.
In one embodiment, the detected positions of the respective target vehicles are screened by:
screening the detected positions of each target vehicle with a non-maximum suppression algorithm, and then screening the remaining detected positions again according to the confidence of each detected position to obtain the screened detected position of each target vehicle, wherein the confidence of the detected position of each target vehicle is output by the preset target detection neural network model.
The overall process of screening the detected positions of each target vehicle with the non-maximum suppression algorithm is as follows: for any target vehicle, sort its detected positions by confidence and determine the detected position with the highest confidence; perform IOU (Intersection over Union) calculation between each of the other detected positions and the highest-confidence detected position, and delete the detected positions whose IOU value is greater than a specified threshold; then judge whether the number of remaining detected positions is within a specified range. If not, return to the step of sorting the detected positions by confidence and determining the highest-confidence detected position, and repeat until the number of remaining detected positions is within the specified range, at which point the process ends.
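The loop described above is the standard non-maximum suppression procedure. A minimal self-contained sketch follows; as a simplification it keeps one box per cluster of overlapping boxes rather than iterating until the remaining count falls within a range, as the embodiment does:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(boxes, scores, iou_threshold):
    """Keep the highest-confidence box, drop boxes overlapping it beyond
    iou_threshold, repeat on the remainder; returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

Two heavily overlapping boxes on the same vehicle collapse to the higher-confidence one, while a box on a distant vehicle survives untouched.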
Since the target in this embodiment is a vehicle, some objects in the image may be similar in shape to a vehicle and may be mistaken for vehicles by the target detection neural network model. Therefore, detection positions whose confidence degrees are smaller than the specified threshold value also need to be deleted.
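The two-stage screening described above — non-maximum suppression followed by a confidence-threshold filter — can be sketched as follows. The box format (x1, y1, x2, y2), the threshold values, and the function names are illustrative assumptions, not part of the disclosure:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def screen_detections(boxes, scores, iou_thresh=0.5, conf_thresh=0.3):
    """NMS on (box, score) pairs, then drop low-confidence survivors.

    Returns indices of the kept boxes, best-first.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Delete every remaining box whose IOU with the best box is too high.
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    # Second screening pass: remove boxes the model was not confident about.
    return [i for i in keep if scores[i] >= conf_thresh]
```

Here a heavily overlapping duplicate box is removed by NMS, and a low-confidence box (e.g. a vehicle-shaped non-vehicle object) is removed by the confidence filter.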
Step 202: identifying each target vehicle based on the detection position of each target vehicle to obtain the identification of each target vehicle, wherein the identifications of the same target vehicle in different video images are the same;
in one implementation, the identity of each target vehicle may be determined by:
for any one target vehicle, respectively matching the detected position of the target vehicle with the predicted position of each target vehicle through a Hungarian algorithm, and determining the predicted position corresponding to the detected position of the target vehicle; the predicted position of each target vehicle is obtained by predicting the detection position of each target vehicle in the previous frame of video image by using a Kalman filtering algorithm; if the number of the predicted positions corresponding to the detected positions of the target vehicles is determined to be one, determining the identification of the target vehicles in the last frame of video image corresponding to the predicted positions as the identification of the target vehicles; if the number of the predicted positions corresponding to the detected positions of the target vehicle is determined to be multiple, determining the predicted position with the highest priority based on the cascade matching parameters corresponding to the predicted positions, and determining the mark of the target vehicle in the last frame of video image corresponding to the predicted position with the highest priority as the mark of the target vehicle, wherein the cascade matching parameters are used for representing the priority of the predicted positions.
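The matching step above can be sketched as an assignment that maximises total IOU between detected positions and the Kalman-predicted positions of the previous frame. A brute-force search over permutations stands in here for the Hungarian algorithm (it yields the same optimum for the small per-frame target counts, and keeps the sketch dependency-free); the box format is an assumption:

```python
from itertools import permutations

def box_iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_detections(detected, predicted):
    """Return m where m[i] is the predicted-box index matched to detected
    box i under the optimal total-IOU assignment, or None for no overlap.
    Assumes len(predicted) >= len(detected)."""
    best, best_score = None, -1.0
    for perm in permutations(range(len(predicted)), len(detected)):
        score = sum(box_iou(d, predicted[j]) for d, j in zip(detected, perm))
        if score > best_score:
            best, best_score = perm, score
    return [j if box_iou(detected[i], predicted[j]) > 0 else None
            for i, j in enumerate(best)]
```

In the full method, each matched predicted position then carries over the vehicle identifier from the previous frame.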
For example, suppose a certain vehicle has three predicted positions (bounding boxes), namely bounding box 1, bounding box 2, and bounding box 3, with an initial priority order of bounding box 1 > bounding box 2 > bounding box 3, and with cascade matching parameters of 1, 0, and 2 respectively. The larger the cascade matching parameter, the lower the priority of the corresponding bounding box. Since the cascade matching parameter of bounding box 2 is smaller than that of bounding box 1, the initial priorities need to be adjusted; the adjusted order is: bounding box 2 > bounding box 1 > bounding box 3. Bounding box 2 therefore has the highest priority, so the identifier of the target vehicle in the previous frame of video image corresponding to bounding box 2 is determined as the identifier of the target vehicle; if that identifier is vehicle 2, the identifier of the vehicle is set to vehicle 2.
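The priority resolution in the example above reduces to picking the candidate with the smallest cascade matching parameter. A minimal sketch, with the parameter values mirroring the worked example (the names are illustrative):

```python
def pick_highest_priority(candidates):
    """candidates: list of (bounding_box_name, cascade_matching_parameter).
    A smaller parameter means a higher priority; return the winner's name."""
    return min(candidates, key=lambda c: c[1])[0]

# The worked example: boxes 1, 2, 3 carry parameters 1, 0, 2, so the
# adjusted priority order is box 2 > box 1 > box 3.
order = sorted([("box1", 1), ("box2", 0), ("box3", 2)], key=lambda c: c[1])
```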
As shown in fig. 7, the current frame of video image is the nth frame, which is the video image obtained after each target vehicle is identified based on its detected position; as can be seen from fig. 7, the nth frame of video image includes the identifier of each target vehicle. The identifier of each target vehicle in the nth frame is consistent with its identifier in the 1st through (n-1)th frames.
Step 203: for any target vehicle, determining the motion track of the target vehicle in the road through the identification and the detection position of the target vehicle in each frame of video image within the specified duration;
Each frame of video image within the specified time duration includes the detected position and the identifier of each target vehicle, so that the motion trail of each target vehicle can be obtained; as shown in fig. 8, the motion trails of vehicle 1, vehicle 2, vehicle 3, ..., and vehicle n are obtained.
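Assembling the per-frame (identifier, detected position) pairs into one trail per vehicle can be sketched as below. The frame representation — a list of (vehicle_id, (x, y)) centre points per frame — is an assumption for illustration:

```python
from collections import defaultdict

def build_trajectories(frames):
    """frames: iterable of per-frame lists of (vehicle_id, position).
    Returns {vehicle_id: [positions in frame order]} — one motion
    trail per identifier across the specified duration."""
    tracks = defaultdict(list)
    for frame in frames:
        for vid, pos in frame:
            tracks[vid].append(pos)
    return dict(tracks)
```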
Step 204: and determining a vehicle dense area in the road according to the motion trail of each target vehicle in the road.
In one embodiment, as shown in fig. 9, the process of determining the dense area of the vehicle may include the following steps:
step 901: dividing the motion trail of each target vehicle into a specified number of sub-motion trails, wherein the lengths of the sub-motion trails in the same motion trail are equal;
step 902: clustering each sub-motion track by using a preset clustering algorithm to obtain a plurality of motion track clustering areas;
step 903: and aiming at any one motion track clustering area, if the number of the sub motion tracks in the motion track clustering area is determined to be larger than a specified threshold value, determining the motion track clustering area as a vehicle dense area in the road.
For example, image 10a in fig. 10 shows the sub-motion trajectories obtained by dividing each motion trajectory in fig. 8, and each sub-motion trajectory is represented by its central point, as in image 10b. The points corresponding to the sub-motion trajectories are then clustered by a preset clustering algorithm to obtain a plurality of motion track clustering areas, as shown in image 10c, namely area 1, area 2 and area 3. Since the number of sub-motion trajectories in area 2 is greater than the specified threshold value, area 2 can be determined to be a vehicle dense area. Image 10c is the final displayed result.
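Steps 901 to 903 can be sketched as follows: split each trajectory into equal-length sub-trajectories, represent each by its centre point, cluster the centres, and flag any cluster whose count exceeds the threshold as a dense area. The disclosure does not name the preset clustering algorithm, so a simple grid-cell clustering stands in here; the cell size and threshold are illustrative assumptions:

```python
from collections import defaultdict

def split_trajectory(points, n_parts):
    """Divide a trajectory (list of points) into n_parts equal segments."""
    size = len(points) // n_parts
    return [points[i * size:(i + 1) * size] for i in range(n_parts)]

def centre(segment):
    """Central point of a sub-motion trajectory."""
    xs = [p[0] for p in segment]
    ys = [p[1] for p in segment]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def dense_areas(trajectories, n_parts=2, cell=10.0, threshold=3):
    """Return grid cells whose sub-trajectory count exceeds the threshold,
    i.e. the vehicle dense areas under this stand-in clustering."""
    cells = defaultdict(int)
    for traj in trajectories:
        for seg in split_trajectory(traj, n_parts):
            cx, cy = centre(seg)
            cells[(int(cx // cell), int(cy // cell))] += 1
    return [c for c, n in cells.items() if n > threshold]
```

A density-based method such as DBSCAN would be a natural concrete choice for the preset clustering algorithm, since it needs no fixed cluster count.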
For a further understanding of the technical solution of the present disclosure, a detailed description is given below with reference to fig. 11, and may include the following steps:
step 1101: aiming at any frame of video image within a specified time length in a road video, carrying out vehicle target detection on the video image by using a preset target detection neural network model to obtain the detection position of each target vehicle;
step 1102: screening the detection positions of the target vehicles by using a non-maximum suppression algorithm, and then screening the detection positions of the target vehicles again according to the confidence degrees of the detection positions of the target vehicles to obtain the screened detection positions of the target vehicles;
step 1103: for any one target vehicle, respectively matching the detected position of the target vehicle with the predicted position of each target vehicle through a Hungarian algorithm, and determining the predicted position corresponding to the detected position of the target vehicle; the predicted position of each target vehicle is obtained by predicting the detection position of each target vehicle in the previous frame of video image by using a Kalman filtering algorithm;
step 1104: judging whether the number of predicted positions corresponding to the detected position of the target vehicle is multiple, if so, executing step 1105, otherwise, executing step 1106;
step 1105: determining a predicted position with the highest priority based on the cascade matching parameters corresponding to the predicted positions, and determining the identifier of a target vehicle in a previous frame of video image corresponding to the predicted position with the highest priority as the identifier of the target vehicle, wherein the cascade matching parameters are used for representing the priority of the predicted position;
step 1106: determining the identification of the target vehicle in the last frame of video image corresponding to the predicted position as the identification of the target vehicle;
step 1107: for any target vehicle, determining the motion track of the target vehicle in the road through the identification and the detection position of the target vehicle in each frame of video image within the specified duration;
step 1108: dividing the motion trail of each target vehicle into a specified number of sub-motion trails, wherein the lengths of the sub-motion trails in the same motion trail are equal;
step 1109: clustering each sub-motion track by using a preset clustering algorithm to obtain a plurality of motion track clustering areas;
step 1110: and aiming at any one motion track clustering area, if the number of the sub-motion tracks in the motion track clustering area is determined to be larger than a specified threshold value, determining the motion track clustering area as a vehicle dense area in the road.
Based on the same inventive concept, the road area calibration method can be realized by a road area calibration device. The effect of the road area calibration device is similar to that of the method, and is not repeated herein.
Fig. 12 is a schematic structural diagram of a road area calibration device according to an embodiment of the disclosure.
As shown in fig. 12, the road region calibration apparatus 1200 of the present disclosure may include a vehicle target detection module 1210, a target vehicle identification module 1220, a motion trajectory determination module 1230, and a vehicle dense region determination module 1240.
The vehicle target detection module 1210 is used for performing vehicle target detection on any frame of video image within a specified time length in a road video by using a preset target detection neural network model to obtain a detection position of each target vehicle;
the target vehicle identification module 1220 is configured to identify each target vehicle based on the detected position of each target vehicle to obtain an identifier of each target vehicle, where the identifiers of the same target vehicle in different video images are the same;
a motion trajectory determining module 1230, configured to determine, for any one target vehicle, a motion trajectory of the target vehicle in the road according to the identifier and the detected position of the target vehicle in each frame of video image within the specified time period;
and the vehicle dense region determining module 1240 is used for determining the vehicle dense region in the road according to the motion track of each target vehicle in the road.
In one embodiment, at least one target convolution in the backbone network of the target detection neural network model is a hole (dilated) convolution with a set dilation rate, and the target convolution is a convolution of a specified size.
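The benefit of such a hole convolution is a larger receptive field at no extra parameter cost: a k-wide kernel with dilation d covers an effective extent of d*(k-1)+1. A minimal sketch of that relation (the formula is standard for dilated convolutions, not specific to this disclosure):

```python
def effective_kernel_size(k, dilation):
    """Effective spatial extent of a k-wide kernel with the given
    dilation rate: dilation * (k - 1) + 1."""
    return dilation * (k - 1) + 1

# e.g. a 3x3 kernel with dilation 2 covers a 5x5 region while keeping
# only 3x3 = 9 weights.
```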
In one embodiment, the target vehicle identification module 1220 is specifically configured to:
for any one target vehicle, respectively matching the detected position of the target vehicle with the predicted position of each target vehicle through a Hungarian algorithm, and determining the predicted position corresponding to the detected position of the target vehicle; the predicted position of each target vehicle is obtained by predicting the detection position of each target vehicle in the previous frame of video image by using a Kalman filtering algorithm;
if the number of the predicted positions corresponding to the detected positions of the target vehicles is determined to be one, determining the identification of the target vehicles in the last frame of video image corresponding to the predicted positions as the identification of the target vehicles;
if the number of the predicted positions corresponding to the detected positions of the target vehicle is determined to be multiple, determining the predicted position with the highest priority based on the cascade matching parameters corresponding to the predicted positions, and determining the mark of the target vehicle in the last frame of video image corresponding to the predicted position with the highest priority as the mark of the target vehicle, wherein the cascade matching parameters are used for representing the priority of the predicted positions.
In one embodiment, the apparatus further comprises:
and the screening module 1250 is configured to: before the detected position of each target vehicle is matched with the predicted position of each target vehicle through the Hungarian algorithm and the predicted position corresponding to the detected position of the target vehicle is determined, screen, for any target vehicle having a plurality of detection positions, the detection positions of the target vehicle based on a preset algorithm to obtain the screened detection positions of each target vehicle.
In one embodiment, the screening module 1250 is specifically configured to:
screening the detection positions of the target vehicles by using a non-maximum suppression algorithm, and then screening the detection positions of the target vehicles again according to the confidence degrees of the detection positions of the target vehicles to obtain the screened detection positions of the target vehicles; and obtaining the confidence coefficient of the detection position of each target vehicle based on a preset target detection neural network model.
In one embodiment, the vehicle dense area determination module 1240 is specifically configured to:
dividing the motion trail of each target vehicle into a specified number of sub-motion trails, wherein the lengths of the sub-motion trails in the same motion trail are equal;
clustering each sub-motion track by using a preset clustering algorithm to obtain a plurality of motion track clustering areas;
and aiming at any one motion track clustering area, if the number of the sub motion tracks in the motion track clustering area is determined to be larger than a specified threshold value, determining the motion track clustering area as a vehicle dense area in the road.
After introducing a road area calibration method and apparatus according to an exemplary embodiment of the present disclosure, an electronic apparatus according to another exemplary embodiment of the present disclosure is introduced next.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device in accordance with the present disclosure may include at least one processor, and at least one computer storage medium. The computer storage medium has stored therein program code, which, when executed by a processor, causes the processor to perform the steps of the road region calibration method according to various exemplary embodiments of the present disclosure described above in this specification. For example, the processor may perform steps 201 to 204 as shown in fig. 2.
An electronic device 1300 according to this embodiment of the disclosure is described below with reference to fig. 13. The electronic device 1300 shown in fig. 13 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 13, the electronic device 1300 is represented in the form of a general electronic device. The components of the electronic device 1300 may include, but are not limited to: the at least one processor 1301, the at least one computer storage medium 1302, and the bus 1303 that connects the various system components (including the computer storage medium 1302 and the processor 1301).
Bus 1303 represents one or more of any of several types of bus structures, including a computer storage media bus or computer storage media controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The computer storage media 1302 may include readable media in the form of volatile computer storage media, such as random access computer storage media (RAM) 1321 and/or cache storage media 1322, and may further include read-only computer storage media (ROM) 1323.
Computer storage media 1302 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 1300 may also communicate with one or more external devices 1304 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 1300, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1300 to communicate with one or more other electronic devices. Such communication may occur via an input/output (I/O) interface 1305. Also, the electronic device 1300 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 1306. As shown, the network adapter 1306 communicates with other modules for the electronic device 1300 over the bus 1303. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 1300, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, various aspects of a road region calibration method provided by the present disclosure may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps of the road region calibration method according to various exemplary embodiments of the present disclosure described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a random access computer storage media (RAM), a read-only computer storage media (ROM), an erasable programmable read-only computer storage media (EPROM or flash memory), an optical fiber, a portable compact disc read-only computer storage media (CD-ROM), an optical computer storage media piece, a magnetic computer storage media piece, or any suitable combination of the foregoing.
The program product for road zone calibration of embodiments of the present disclosure may employ a portable compact disc read-only computer storage medium (CD-ROM) and include program code, and may be executable on an electronic device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although several modules of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the modules described above may be embodied in one module, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into embodiments by a plurality of modules.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk computer storage media, CD-ROMs, optical computer storage media, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the present disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable computer storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable computer storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A road area calibration method is characterized by comprising the following steps:
aiming at any frame of video image within a specified time length in a road video, carrying out vehicle target detection on the video image by using a preset target detection neural network model to obtain the detection position of each target vehicle;
identifying each target vehicle based on the detection position of each target vehicle to obtain the identification of each target vehicle, wherein the identifications of the same target vehicle in different video images are the same;
for any target vehicle, determining the motion track of the target vehicle in the road through the identification and the detection position of the target vehicle in each frame of video image within the specified duration;
and determining a vehicle dense area in the road according to the motion trail of each target vehicle in the road.
2. The method of claim 1, wherein at least one target convolution in the backbone network of the target detection neural network model is a hole convolution with a set dilation rate, and the target convolution is a convolution of a specified size.
3. The method of claim 1, wherein identifying each target vehicle based on its detected location to obtain an identification of each target vehicle comprises:
for any one target vehicle, respectively matching the detected position of the target vehicle with the predicted position of each target vehicle through a Hungarian algorithm, and determining the predicted position corresponding to the detected position of the target vehicle; the predicted position of each target vehicle is obtained by predicting the detection position of each target vehicle in the previous frame of video image by using a Kalman filtering algorithm;
if the number of the predicted positions corresponding to the detected positions of the target vehicles is determined to be one, determining the identification of the target vehicles in the last frame of video image corresponding to the predicted positions as the identification of the target vehicles;
if the number of the predicted positions corresponding to the detected positions of the target vehicle is determined to be multiple, determining the predicted position with the highest priority based on the cascade matching parameters corresponding to the predicted positions, and determining the mark of the target vehicle in the last frame of video image corresponding to the predicted position with the highest priority as the mark of the target vehicle, wherein the cascade matching parameters are used for representing the priority of the predicted positions.
4. The method as claimed in claim 3, wherein before the step of matching the detected positions of the target vehicles with the predicted positions of the target vehicles respectively by Hungarian algorithm for any one of the target vehicles and determining the predicted position corresponding to the detected position of the target vehicle, the method further comprises:
for a target vehicle having a plurality of detection positions, screening the detection positions of the target vehicle based on a preset algorithm to obtain the screened detection positions of each target vehicle.
5. The method according to claim 4, wherein the screening the detection position of each target vehicle based on a preset algorithm to obtain the screened detection position of each target vehicle comprises:
screening the detection positions of the target vehicles by using a non-maximum suppression algorithm, and then screening the detection positions of the target vehicles again according to the confidence degrees of the detection positions of the target vehicles to obtain the screened detection positions of the target vehicles; and obtaining the confidence coefficient of the detection position of each target vehicle based on a preset target detection neural network model.
6. The method according to any one of claims 1 to 5, wherein the determining the vehicle dense region in the road according to the motion track of each target vehicle in the road comprises:
dividing the motion trail of each target vehicle into a specified number of sub-motion trails, wherein the lengths of the sub-motion trails in the same motion trail are equal;
clustering each sub-motion track by using a preset clustering algorithm to obtain a plurality of motion track clustering areas;
and aiming at any one motion track clustering area, if the number of the sub motion tracks in the motion track clustering area is determined to be larger than a specified threshold value, determining the motion track clustering area as a vehicle dense area in the road.
7. An electronic device, comprising a memory unit and a processor, wherein:
the storage unit is configured to store road videos;
the processor configured to:
aiming at any frame of video image within a specified time length in a road video, carrying out vehicle target detection on the video image by using a preset target detection neural network model to obtain the detection position of each target vehicle;
identifying each target vehicle based on the detection position of each target vehicle to obtain the identification of each target vehicle, wherein the identifications of the same target vehicle in different video images are the same;
for any target vehicle, determining the motion track of the target vehicle in the road through the identification and the detection position of the target vehicle in each frame of video image within the specified duration;
and determining a vehicle dense area in the road according to the motion trail of each target vehicle in the road.
8. The electronic device of claim 7, wherein at least one target convolution in the backbone network of the target detection neural network model is a hole convolution with a set dilation rate, and the target convolution is a convolution of a specified size.
9. The electronic device of claim 7, wherein the processor performs the identifying of each target vehicle based on the detection position of each target vehicle to obtain an identifier of each target vehicle, and is specifically configured to:
for any target vehicle, match the detection position of the target vehicle against the predicted position of each target vehicle through the Hungarian algorithm, and determine the predicted position corresponding to the detection position of the target vehicle, wherein the predicted position of each target vehicle is obtained by predicting from the detection position of that target vehicle in the previous frame of video image using a Kalman filtering algorithm;
if it is determined that the detection position of the target vehicle corresponds to one predicted position, determine the identifier of the target vehicle in the previous frame of video image corresponding to that predicted position as the identifier of the target vehicle; and
if it is determined that the detection position of the target vehicle corresponds to multiple predicted positions, determine the predicted position with the highest priority based on cascade matching parameters corresponding to the predicted positions, and determine the identifier of the target vehicle in the previous frame of video image corresponding to the predicted position with the highest priority as the identifier of the target vehicle, wherein the cascade matching parameters represent the priority of each predicted position.
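The predict-then-match hand-off in claim 9 can be sketched as below, with two simplifications labelled plainly: a constant-velocity predictor stands in for the full Kalman filter, and a greedy nearest-neighbour match stands in for the Hungarian algorithm. All names here are hypothetical.

```python
# Sketch only, not the patented method: constant-velocity prediction in place
# of Kalman filtering, greedy nearest-neighbour in place of Hungarian matching.
def predict(position, velocity):
    """Predict next-frame position from the previous frame's state."""
    return (position[0] + velocity[0], position[1] + velocity[1])

def assign_ids(detections, tracks):
    """tracks: {vehicle_id: (position, velocity)}; detections: [(x, y), ...].
    Returns {detection: vehicle_id} carrying each identifier forward."""
    predictions = {vid: predict(p, v) for vid, (p, v) in tracks.items()}
    assigned, used = {}, set()
    for det in detections:
        best = min((vid for vid in predictions if vid not in used),
                   key=lambda vid: (predictions[vid][0] - det[0]) ** 2
                                 + (predictions[vid][1] - det[1]) ** 2,
                   default=None)
        if best is not None:
            assigned[det] = best  # detection inherits the matched identifier
            used.add(best)
    return assigned

tracks = {1: ((0.0, 0.0), (1.0, 0.0)), 2: ((10.0, 0.0), (0.0, 1.0))}
print(assign_ids([(0.9, 0.1), (10.1, 1.0)], tracks))
```

In a full tracker the squared-distance cost matrix would be fed to the Hungarian algorithm (e.g. a linear-sum-assignment solver) for a globally optimal match, and the cascade matching parameters of the claim would break ties among competing predicted positions.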
10. The electronic device according to any one of claims 7 to 9, wherein the processor performs the determining of the vehicle-dense area in the road according to the motion track of each target vehicle in the road, and is specifically configured to:
divide the motion track of each target vehicle into a specified number of sub-motion tracks, wherein the sub-motion tracks of the same motion track are of equal length;
cluster the sub-motion tracks by using a preset clustering algorithm to obtain a plurality of motion track clustering areas; and
for any motion track clustering area, if the number of sub-motion tracks in the motion track clustering area is determined to be greater than a specified threshold, determine that clustering area as a vehicle-dense area in the road.
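The segment-cluster-threshold procedure of claim 10 can be sketched as follows. This is a hypothetical illustration: the claim leaves the clustering algorithm unspecified, so a simple grid-cell grouping of sub-track midpoints stands in for it here.

```python
# Sketch only: equal-length sub-track splitting, then grid-cell grouping of
# sub-track midpoints as a stand-in for the unspecified clustering algorithm.
from collections import Counter

def split_track(track, parts):
    """Split a point list into `parts` equal-length sub-tracks
    (any remainder points at the tail are dropped)."""
    step = len(track) // parts
    return [track[i * step:(i + 1) * step] for i in range(parts)]

def dense_cells(tracks, parts, cell_size, threshold):
    """Return grid cells holding more sub-tracks than `threshold`."""
    counts = Counter()
    for track in tracks:
        for sub in split_track(track, parts):
            mx = sum(p[0] for p in sub) / len(sub)  # sub-track midpoint
            my = sum(p[1] for p in sub) / len(sub)
            counts[(int(mx // cell_size), int(my // cell_size))] += 1
    return {cell for cell, n in counts.items() if n > threshold}

tracks = [
    [(0, 0), (1, 0), (2, 0), (3, 0)],      # two vehicles sharing one area
    [(0, 1), (1, 1), (2, 1), (3, 1)],
    [(50, 50), (51, 50), (52, 50), (53, 50)],  # lone vehicle elsewhere
]
print(dense_cells(tracks, parts=2, cell_size=10, threshold=3))  # {(0, 0)}
```

Only the cell traversed by four sub-tracks exceeds the threshold of three and is flagged as a vehicle-dense area; the lone vehicle's two sub-tracks do not qualify.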
CN202111006532.2A 2021-08-30 2021-08-30 Road area calibration method and electronic equipment Pending CN113763425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111006532.2A CN113763425A (en) 2021-08-30 2021-08-30 Road area calibration method and electronic equipment

Publications (1)

Publication Number Publication Date
CN113763425A true CN113763425A (en) 2021-12-07

Family

ID=78791866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111006532.2A Pending CN113763425A (en) 2021-08-30 2021-08-30 Road area calibration method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113763425A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680389A (en) * 2017-10-26 2018-02-09 江苏云光智慧信息科技有限公司 Vehicle flow counting method
CN107766821A (en) * 2017-10-23 2018-03-06 江苏鸿信系统集成有限公司 Full-time vehicle detection and tracking method and system in video based on Kalman filtering and deep learning
CN109919072A (en) * 2019-02-28 2019-06-21 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 Intelligent traffic video analysis method based on object detection and tracking
WO2020011014A1 (en) * 2018-07-13 2020-01-16 腾讯科技(深圳)有限公司 Method and system for detecting and recognizing object in real-time video, storage medium and device
CN110853353A (en) * 2019-11-18 2020-02-28 山东大学 Vision-based dense traffic vehicle counting and traffic flow calculating method and system
CN111145545A (en) * 2019-12-25 2020-05-12 西安交通大学 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
CN111275960A (en) * 2018-12-05 2020-06-12 杭州海康威视系统技术有限公司 Traffic road condition analysis method, system and camera
CN111667512A (en) * 2020-05-28 2020-09-15 浙江树人学院(浙江树人大学) Multi-target vehicle track prediction method based on improved Kalman filtering
CN111695545A (en) * 2020-06-24 2020-09-22 浪潮卓数大数据产业发展有限公司 Single-lane reverse driving detection method based on multi-target tracking
CN112200830A (en) * 2020-09-11 2021-01-08 山东信通电子股份有限公司 Target tracking method and device
CN112463911A (en) * 2021-02-02 2021-03-09 智道网联科技(北京)有限公司 Road activity determination method and device and storage medium
CN112750150A (en) * 2021-01-18 2021-05-04 西安电子科技大学 Vehicle flow statistics method based on vehicle detection and multi-target tracking
CN112785625A (en) * 2021-01-20 2021-05-11 北京百度网讯科技有限公司 Target tracking method and device, electronic equipment and storage medium
US20210188263A1 (en) * 2019-12-23 2021-06-24 Baidu International Technology (Shenzhen) Co., Ltd. Collision detection method, and device, as well as electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Hui; Gao Shangbing; Zhou Jun; Zhou Jian; Zhang Liwen: "Multi-lane traffic flow statistics and vehicle tracking method based on YOLOv3", Foreign Electronic Measurement Technology, no. 02, 15 February 2020 (2020-02-15) *
Zhao Li; Chen Quanlin: "Implementation of a vehicle detection and tracking system based on a Kalman filter", Electronic Measurement Technology, no. 02, 22 April 2007 (2007-04-22) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202733A (en) * 2022-02-18 2022-03-18 青岛海信网络科技股份有限公司 Video-based traffic fault detection method and device
CN114639037A (en) * 2022-03-03 2022-06-17 青岛海信网络科技股份有限公司 Method for determining vehicle saturation of high-speed service area and electronic equipment
CN114639037B (en) * 2022-03-03 2024-04-09 青岛海信网络科技股份有限公司 Method for determining vehicle saturation of high-speed service area and electronic equipment

Similar Documents

Publication Publication Date Title
KR102378859B1 (en) Method of determining quality of map trajectory matching data, device, server and medium
CN111709975B (en) Multi-target tracking method, device, electronic equipment and storage medium
CN106651901B (en) Object tracking method and device
CN114415628A (en) Automatic driving test method and device, electronic equipment and storage medium
CN113763425A (en) Road area calibration method and electronic equipment
CN113012176B (en) Sample image processing method and device, electronic equipment and storage medium
CN111292531A (en) Tracking method, device and equipment of traffic signal lamp and storage medium
CN111222509B (en) Target detection method and device and electronic equipment
CN112528927B (en) Confidence determining method based on track analysis, road side equipment and cloud control platform
CN113859264B (en) Vehicle control method, device, electronic equipment and storage medium
JP2022023910A (en) Method for acquiring traffic state and apparatus thereof, roadside device, and cloud control platform
JP2022502750A (en) Methods and devices for analyzing sensor data flows, as well as methods for guiding vehicles.
CN113657299A (en) Traffic accident determination method and electronic equipment
CN110688873A (en) Multi-target tracking method and face recognition method
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN113674317A (en) Vehicle tracking method and device of high-order video
CN116718197B (en) Track processing method and device, electronic equipment and storage medium
CN109800684A (en) The determination method and device of object in a kind of video
CN117611795A (en) Target detection method and model training method based on multi-task AI large model
CN115019242B (en) Abnormal event detection method and device for traffic scene and processing equipment
WO2023066080A1 (en) Forward target determination method and apparatus, electronic device and storage medium
CN112860821A (en) Human-vehicle trajectory analysis method and related product
CN115973190A (en) Decision-making method and device for automatically driving vehicle and electronic equipment
CN109800685A (en) The determination method and device of object in a kind of video
CN115171185A (en) Cross-camera face tracking method, device and medium based on time-space correlation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination