CN112069969A - Method and system for tracking highway monitoring video mirror-crossing vehicle - Google Patents


Info

Publication number
CN112069969A
CN112069969A (application number CN202010897531.0A)
Authority
CN
China
Prior art keywords
vehicle
image
target
tracking
detection image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010897531.0A
Other languages
Chinese (zh)
Other versions
CN112069969B (en)
Inventor
李春杰
赵建东
韩明敏
郭玉彬
侯晓青
严华
高海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE
Beijing Jiaotong University
Original Assignee
HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE, Beijing Jiaotong University filed Critical HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE
Priority to CN202010897531.0A priority Critical patent/CN112069969B/en
Publication of CN112069969A publication Critical patent/CN112069969A/en
Application granted granted Critical
Publication of CN112069969B publication Critical patent/CN112069969B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention relates to a method and a system for tracking a mirror-crossing (cross-camera) vehicle in highway monitoring video, belongs to the technical field of computer vision images, and solves the problems that existing mirror-crossing vehicle tracking methods are difficult to implement in actual scenes and have low applicability. Frame images are acquired from the video files of a plurality of cameras on the highway to be monitored, and vehicle detection is carried out on each frame image based on an improved YOLO target detection model to obtain vehicle detection images containing complete vehicle rectangular frames; the vehicle detection images are input into a multi-target tracking model to obtain vehicle tracking results; a vehicle information database is established according to the vehicle detection images and the vehicle tracking results; a target vehicle image is intercepted from a vehicle detection image corresponding to any camera number in the vehicle information database, and the motion track of the corresponding target vehicle is matched against the vehicle information database, thereby realizing cross-mirror tracking, reducing the implementation difficulty of the tracking method and improving its applicability.

Description

Method and system for tracking highway monitoring video mirror-crossing vehicle
Technical Field
The invention relates to the technical field of computer vision images, in particular to a method and a system for tracking a mirror-crossing vehicle in highway monitoring video.
Background
At present, highway monitoring systems are increasingly complete and cameras are increasingly densely deployed, yet highways remain the places where accidents most easily occur, because vehicle speeds are high, traffic volumes are large, and the functions of the monitoring cameras are limited.
The existing cross-mirror vehicle re-identification technology is based on labeled data sets. Although vehicle re-identification has advanced with the update and release of high-quality data sets, the data set scenes are limited and the algorithm models are highly constrained, so the technology is difficult to implement in actual scenes and its applicability is low.
Disclosure of Invention
In view of the foregoing analysis, embodiments of the present invention are directed to providing a method and a system for tracking a mirror-crossing vehicle in highway monitoring video, so as to solve the problems that existing mirror-crossing vehicle tracking methods are difficult to implement in actual scenes and have low applicability.
On one hand, the embodiment of the invention provides a method for tracking a monitoring video mirror-crossing vehicle on a highway, which comprises the following steps:
acquiring frame images in a plurality of camera video files of a highway to be monitored, and carrying out vehicle detection on each frame image based on an improved YOLO target detection model to obtain a vehicle detection image containing a complete vehicle rectangular frame;
inputting the vehicle detection image into a multi-target tracking model to obtain a vehicle tracking result; the vehicle tracking result comprises a vehicle ID and a vehicle track;
establishing a vehicle information database according to the vehicle detection image and the vehicle tracking result;
and intercepting a target vehicle image based on a certain vehicle detection image corresponding to any camera number in the vehicle information database, and matching the motion track of the target vehicle corresponding to the target vehicle image according to the vehicle information database to realize cross-mirror tracking.
Further, the improved YOLO target detection model comprises a feature extraction network layer and a YOLO detection layer; wherein the feature extraction network layer comprises a stem unit and an OSA unit;
a stem unit, configured to down-sample the frame images in the multiple-camera video files of the highway to obtain a feature map of size 304 × 304 × 128;
an OSA unit, configured to convolve the input 304 × 304 × 128 feature map to obtain a feature map of size 19 × 19 × 512;
and the YOLO detection layer, used for obtaining a vehicle detection image containing a complete vehicle rectangular frame according to the 19 × 19 × 512 feature map output by the OSA unit.
Further, the multi-target tracking model comprises a motion prediction unit and a depth appearance characteristic extraction unit:
the motion prediction unit is used for predicting to obtain a current frame vehicle predicted image according to a previous frame vehicle detection image;
the depth appearance feature extraction unit comprises a re-identification network, obtains a vehicle track based on a vehicle detection image and a vehicle prediction image which are input into the re-identification network, and numbers the vehicle track to obtain a vehicle ID corresponding to the vehicle track.
Further, establishing a vehicle information database according to the vehicle tracking result specifically comprises: and storing the vehicle detection image and the vehicle track to a database based on the camera number and the vehicle ID to obtain a vehicle information database.
Further, a target vehicle image is intercepted based on a certain vehicle detection image corresponding to any camera number in the vehicle information database, and the motion track of a target vehicle corresponding to the target vehicle image is matched according to the vehicle information database, so that cross-mirror tracking is realized, and the method comprises the following steps:
acquiring a certain vehicle detection image corresponding to any camera number in the vehicle information database, and intercepting a target vehicle image;
respectively obtaining a depth feature matrix of the target vehicle image and depth feature matrices of the vehicle detection images corresponding to the other camera numbers, based on the target vehicle image and those vehicle detection images;
based on the depth feature matrix of the target vehicle image and the depth feature matrix of a certain vehicle detection image corresponding to the other camera numbers, the cosine similarity distance between the target vehicle and the vehicle in the certain vehicle detection image corresponding to the other camera numbers is obtained by utilizing the re-identification network;
classifying and sequencing the cosine similarity distances according to the camera numbers to obtain the minimum cosine similarity distance corresponding to the same camera number;
judging whether the minimum cosine similarity distance is smaller than a similarity threshold value, if so, judging that the vehicle in the corresponding vehicle detection image is a target vehicle, and if not, judging that the target vehicle drives away from the highway to be monitored;
and matching the vehicle information database based on the camera number and the vehicle ID corresponding to the vehicle detection image to obtain the motion track of the target vehicle, thereby realizing cross-mirror tracking.
In another aspect, an embodiment of the present invention provides a highway monitoring video mirror-crossing vehicle tracking system, including:
the detection module is used for acquiring frame images in a plurality of camera video files of the highway to be monitored, and carrying out vehicle detection on each frame image based on an improved YOLO target detection model to obtain a vehicle detection image containing a complete vehicle rectangular frame;
the tracking module is used for inputting the vehicle detection image into the multi-target tracking model to obtain a vehicle tracking result; the vehicle tracking result comprises a vehicle ID and a vehicle track;
the vehicle information database acquisition module is used for establishing a vehicle information database according to the vehicle detection image and the vehicle tracking result;
and the motion track obtaining module is used for intercepting a target vehicle image according to a certain vehicle detection image corresponding to any camera number in the vehicle information database, and matching the motion track of the target vehicle corresponding to the target vehicle image according to the vehicle information database to realize cross-mirror tracking.
Further, the detection module comprises a feature extraction network layer and a YOLO detection layer; wherein the feature extraction network layer comprises a stem unit and an OSA unit;
a stem unit, configured to down-sample the frame images in the multiple-camera video files of the highway to obtain a feature map of size 304 × 304 × 128;
an OSA unit, configured to convolve the input 304 × 304 × 128 feature map to obtain a feature map of size 19 × 19 × 512;
and the YOLO detection layer, used for obtaining a vehicle detection image containing a complete vehicle rectangular frame according to the 19 × 19 × 512 feature map output by the OSA unit.
Further, the tracking module includes a motion prediction unit and a depth appearance feature extraction unit:
the motion prediction unit is used for predicting to obtain a current frame vehicle predicted image according to a previous frame vehicle detection image;
the depth appearance feature extraction unit comprises a re-identification network, obtains a vehicle track based on a vehicle detection image and a vehicle prediction image which are input into the re-identification network, and numbers the vehicle track to obtain a vehicle ID corresponding to the vehicle track.
Further, the vehicle information database obtaining module stores the vehicle detection image and the vehicle track into a database according to the camera number and the vehicle ID to obtain a vehicle information database.
Further, the motion trajectory obtaining module executes the following process:
acquiring a certain vehicle detection image corresponding to any camera number in the vehicle information database, and intercepting a target vehicle image;
respectively obtaining a depth feature matrix of the target vehicle image and depth feature matrices of the vehicle detection images corresponding to the other camera numbers, based on the target vehicle image and those vehicle detection images;
based on the depth feature matrix of the target vehicle image and the depth feature matrix of a certain vehicle detection image corresponding to the other camera numbers, the cosine similarity distance between the target vehicle and the vehicle in the certain vehicle detection image corresponding to the other camera numbers is obtained by utilizing the re-identification network;
classifying and sequencing the cosine similarity distances according to the camera numbers to obtain the minimum cosine similarity distance corresponding to the same camera number;
judging whether the minimum cosine similarity distance is smaller than a similarity threshold value, if so, judging that the vehicle in the corresponding vehicle detection image is a target vehicle, and if not, judging that the target vehicle drives away from the highway to be monitored;
and matching the vehicle information database based on the camera number and the vehicle ID corresponding to the vehicle detection image to obtain the motion track of the target vehicle, thereby realizing cross-mirror tracking.
Compared with the prior art, the invention can realize at least one of the following beneficial effects:
1. In the method for tracking vehicles across mirrors in highway monitoring video, vehicles are detected with an improved YOLO target detection model to obtain vehicle detection images; the vehicles are tracked with a multi-target tracking model to obtain vehicle IDs and vehicle tracks; cosine similarity is computed with a re-identification network; and the complete motion track of the target vehicle is then obtained by splicing. This provides highway management departments with an efficient, fast and high-precision video analysis technology for safety monitoring and searching of target vehicles.
2. The backbone network of YOLOv3 is replaced with VoVNet, a DenseNet variant with better learning ability, to obtain the feature extraction network layer. Meanwhile, according to the size of vehicles captured in highway monitoring video images, the three detection scales (large, medium and small) of the YOLOv3 detection layer are reduced to two (medium and small). The resulting improved YOLO target detection model is smaller, computes faster and requires less computation, so a vehicle detection image containing a complete vehicle rectangular frame can be obtained more quickly.
3. The vehicle information database is obtained by storing the vehicle detection images and vehicle tracks in the database according to the "camera number-vehicle ID" naming rule, providing data support and a basis for vehicle re-identification and complete splicing of the target vehicle track.
4. Using the depth appearance feature extraction unit, a target vehicle image is intercepted from a vehicle detection image corresponding to any camera in the vehicle information database; the cosine similarity between the target vehicle and the vehicles in the vehicle detection images of the other cameras is calculated; the target vehicle is matched through this cosine similarity, and its motion track is obtained by splicing, realizing cross-mirror tracking with high matching efficiency and high precision.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is an overall structure diagram of a highway surveillance video mirror-crossing vehicle tracking method;
FIG. 2 is a schematic flow chart of a method for tracking a vehicle by crossing mirrors in a monitoring video of a highway;
FIG. 3 is a schematic structural diagram of an improved YOLO target detection model;
FIG. 4 is a schematic structural diagram of a Deepsort multi-target tracking model;
FIG. 5 is a schematic structural diagram of a highway surveillance video mirror-crossing vehicle tracking system;
reference numerals:
100-a detection module, 200-a tracking module, 300-a vehicle information database obtaining module and 400-a motion trail obtaining module.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
The cross-mirror tracking refers to tracking the motion track of the target vehicle based on video files shot by a plurality of cameras, and splicing the motion tracks of the target vehicle corresponding to each camera to obtain the complete motion track of the target vehicle. The existing cross-mirror vehicle re-identification technology is based on a labeled data set, although the vehicle re-identification technology is further developed along with the update and release of a high-quality data set, the data set scene is limited, the algorithm model is highly limited, the implementation is difficult in an actual scene, and the applicability of the algorithm is low. For this reason, the application provides a method and a system for tracking a highway surveillance video mirror-crossing vehicle, as shown in fig. 1, each frame image in a plurality of camera video files of a highway to be monitored is subjected to vehicle detection through an improved YOLO target detection model to obtain a vehicle detection image, the vehicle detection image is input into a multi-target tracking model to obtain a vehicle track and a vehicle ID corresponding to the vehicle track, and the vehicle ID is the number of the vehicle track; and storing the vehicle detection image and the vehicle track into a database according to the camera number and the vehicle ID, and finally matching the motion track of the target vehicle corresponding to the target vehicle image according to the vehicle information database to realize cross-mirror tracking. 
By extracting the vehicle track information from the video shot by each monitoring camera and connecting the target vehicle's tracks across cameras, the method obtains the complete motion track of the target vehicle on the highway to be monitored and realizes mirror-crossing tracking of vehicles. It solves the problems that existing mirror-crossing vehicle tracking methods are difficult to implement in actual scenes and have low applicability, provides highway management departments with an efficient, fast and high-precision video analysis technology for safety monitoring and searching of target vehicles, and has high practical value.
An embodiment of the invention discloses a method for tracking a highway surveillance video mirror-crossing vehicle, which is shown in fig. 2. The method comprises the following steps:
and step S1, frame images in a plurality of camera video files of the highway to be monitored are obtained, and vehicle detection is carried out on each frame image based on the improved YOLO target detection model to obtain a vehicle detection image containing a complete vehicle rectangular frame. The method comprises the steps of obtaining a plurality of video files of a highway to be monitored by a highway monitoring camera, and obtaining a vehicle detection image which comprises a complete vehicle rectangular frame. The improved YOLO target detection model is that the backbone network in the original YOLOv3 is replaced by a DenseNet variant VoVNet (feature extraction network layer) with better learning capacity, the model is smaller in size and higher in speed, and meanwhile, the large, medium and small scales of the YOLOv3 detection layer are reduced into the medium, small and medium scales according to the size of a vehicle shot by a monitoring video image of a highway, so that the calculation amount is further reduced.
Preferably, the improved YOLO target detection model comprises a feature extraction network layer and a YOLO detection layer, the feature extraction network layer comprising a stem unit and an OSA unit. The stem unit down-samples the frame images in the multiple-camera video files of the highway to obtain a feature map of size 304 × 304 × 128; the OSA unit convolves the 304 × 304 × 128 feature map to obtain a feature map of size 19 × 19 × 512. The YOLO detection layer obtains a vehicle detection image containing a complete vehicle rectangular frame from the 19 × 19 × 512 feature map output by the OSA unit, the vehicle detection image comprising images of several vehicles framed by complete vehicle rectangular frames.
Specifically, as shown in fig. 3, the improved YOLO target detection model includes a feature extraction network layer and a YOLO detection layer. The feature extraction network layer comprises a stem unit and an OSA unit; the stem unit down-samples the frame images in the multiple-camera video files of the highway to obtain a feature map of size 304 × 304 × 128. The OSA unit comprises four OSA subunits: the first OSA subunit convolves the 304 × 304 × 128 feature map output by the stem unit to obtain a 152 × 152 × 128 feature map; the second OSA subunit convolves the 152 × 152 × 128 feature map output by the first to obtain a 76 × 76 × 256 feature map; the third OSA subunit convolves the 76 × 76 × 256 feature map output by the second to obtain a 38 × 38 × 384 feature map; and the fourth OSA subunit convolves the 38 × 38 × 384 feature map output by the third to obtain a 19 × 19 × 512 feature map. Meanwhile, the YOLO detection layer in the improved YOLO target detection model reduces the three scales (large, medium and small) of the original YOLOv3 detection layer to two (medium and small), and obtains a vehicle detection image containing a complete vehicle rectangular frame from the 19 × 19 × 512 feature map output by the fourth OSA subunit.
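A minimal Python sketch of the size progression described above — the stem and each of the four OSA subunits halve the spatial resolution, and the channel widths grow 128, 128, 256, 384, 512. The 608 × 608 input size is an assumption inferred from the stated 304 output of the stem.

```python
# Hypothetical sketch of the feature-map shapes implied by the description.
# Each unit (stem, then four OSA subunits) halves the spatial resolution;
# the 608x608 input resolution is an inferred assumption.
def feature_map_shapes(input_size=608, channels=(128, 128, 256, 384, 512)):
    """Return (spatial_size, channels) after the stem and each OSA subunit."""
    shapes = []
    size = input_size
    for c in channels:
        size //= 2                 # each unit downsamples by a factor of 2
        shapes.append((size, c))
    return shapes

shapes = feature_map_shapes()
# stem -> (304, 128); OSA1..OSA4 -> (152, 128), (76, 256), (38, 384), (19, 512)
```

The final 19 × 19 × 512 map is the one consumed by the YOLO detection layer.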
Based on the improved YOLO target detection model, the model is trained with a road monitoring data set and a hyper-parameter search is carried out: for each new generation of hyper-parameters, the fittest individual among all previous generations is selected for mutation, and all hyper-parameters are mutated simultaneously according to a normal distribution with a 1-sigma of about 20%, yielding suitable hyper-parameters such as the learning rate and the weights of the loss-function terms. Multi-scale training is then performed, and the generation of hyper-parameters with the best network performance is finally saved as the hyper-parameters for formal training.
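The hyper-parameter evolution described above can be sketched as follows. The population format (fitness paired with a dict of hyper-parameters) and the multiplicative noise model are assumptions; the patent only states that the fittest previous generation is mutated with a normal distribution whose 1-sigma is about 20%.

```python
import random

def mutate_hyperparams(population, sigma_frac=0.20, seed=None):
    """Pick the fittest individual among all previous generations and mutate
    every hyper-parameter simultaneously with Gaussian noise whose 1-sigma
    is ~20% of the value. `population` is a list of (fitness, params) tuples
    (assumed format, for illustration only)."""
    rng = random.Random(seed)
    _, best = max(population, key=lambda g: g[0])   # highest-fitness parent
    return {k: v * (1.0 + rng.gauss(0.0, sigma_frac)) for k, v in best.items()}

parents = [(0.61, {"lr": 0.01, "obj_loss_w": 1.0}),
           (0.74, {"lr": 0.02, "obj_loss_w": 0.8})]
child = mutate_hyperparams(parents, seed=0)   # mutated copy of the 0.74 parent
```

With `sigma_frac=0.0` the function degenerates to copying the fittest parent, which makes the selection step easy to verify.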
The backbone network in the YOLOv3 is replaced by a DenseNet variant VoVNet with better learning ability to obtain a feature extraction network layer, and meanwhile, according to the size of a vehicle shot by a highway monitoring video image, the large, medium and small three dimensions of the YOLOv3 detection layer are reduced to the medium, medium and small two dimensions, so that an improved YOLO target detection model is obtained, the model is smaller in size, the calculation speed is higher, the calculation amount is reduced, and a vehicle detection image containing a complete vehicle rectangular frame can be obtained more quickly.
Step S2: the vehicle detection images are input into the multi-target tracking model to obtain vehicle tracking results, each comprising a vehicle ID and a vehicle track. Preferably, the multi-target tracking model includes a motion prediction unit and a depth appearance feature extraction unit. The motion prediction unit predicts the current-frame vehicle image from the previous-frame vehicle detection image. The depth appearance feature extraction unit comprises a re-identification network; it obtains vehicle tracks from the vehicle detection images and vehicle predicted images input into the re-identification network, and numbers each vehicle track to obtain the vehicle ID corresponding to that track.
Specifically, as shown in fig. 4, the DeepSort multi-target tracking model includes a motion prediction unit and a depth appearance feature extraction unit. The motion prediction unit predicts the current-frame vehicle image from the previous-frame vehicle detection image. The depth appearance feature extraction unit mainly computes the depth feature information of the vehicle detection image and the vehicle predicted image through a re-identification network; a vehicle predicted image that is successfully associated and matched is taken as the vehicle's position in the current frame, the centre-point coordinates of the vehicle positions over multiple frames are connected to obtain the vehicle track, and each vehicle track is numbered to obtain its corresponding vehicle ID.
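As a rough stand-in for the two units just described, the sketch below predicts the next box with a constant-velocity model (DeepSort's actual motion prediction uses a Kalman filter) and connects box centre points into a track. The (x, y, w, h) box format is an assumption.

```python
def predict_next_box(prev_box, velocity):
    """Constant-velocity stand-in for the motion prediction unit: shift the
    previous frame's detection box (x, y, w, h) by a per-frame velocity.
    The real DeepSort model uses a Kalman filter instead."""
    x, y, w, h = prev_box
    dx, dy = velocity
    return (x + dx, y + dy, w, h)

def track_from_boxes(boxes):
    """Connect the centre points of per-frame vehicle boxes into a track."""
    return [(x + w / 2.0, y + h / 2.0) for x, y, w, h in boxes]

boxes = [(10, 20, 40, 30), (14, 22, 40, 30), (18, 24, 40, 30)]
track = track_from_boxes(boxes)   # [(30.0, 35.0), (34.0, 37.0), (38.0, 39.0)]
```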
And step S3, establishing a vehicle information database according to the vehicle detection image and the vehicle tracking result.
Preferably, establishing the vehicle information database according to the vehicle detection image and the vehicle tracking result specifically comprises: storing the vehicle detection image and the vehicle track in a database based on the camera number and the vehicle ID. Specifically, based on the vehicle detection images obtained in step S1 and the vehicle tracks and corresponding vehicle IDs obtained in step S2, the vehicle detection images and vehicle tracks are stored in the database according to the "camera number-vehicle ID" naming rule to obtain the vehicle information database.
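A minimal sketch of the "camera number-vehicle ID" naming rule, using a plain dict in place of the actual database; the exact key format and record fields are assumptions.

```python
def store_record(db, camera_no, vehicle_id, detection_image, trajectory):
    """Store a detection image and vehicle track under the
    'cameraNo-vehicleID' naming rule (key format is an assumption)."""
    db[f"{camera_no}-{vehicle_id}"] = {"image": detection_image,
                                       "trajectory": trajectory}

db = {}
store_record(db, "cam03", "veh17", "frame_0042.jpg",
             [(30.0, 35.0), (34.0, 37.0)])
# db["cam03-veh17"] now holds both the image reference and the track
```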
Step S4: a target vehicle image is intercepted from a vehicle detection image corresponding to any camera number in the vehicle information database, and the motion track of the target vehicle corresponding to that image is matched against the vehicle information database, realizing cross-mirror tracking. Specifically, after the vehicle detection images and vehicle tracks have been stored under camera number-vehicle ID in step S3, a vehicle detection image corresponding to any camera number is selected and the target vehicle image is intercepted from it; the target vehicle is then searched for in the vehicle detection images corresponding to the other cameras, and the target vehicle's motion tracks from each camera are spliced to obtain its complete motion track. Preferably, intercepting the target vehicle image and matching the corresponding motion track comprises the following steps:
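The final splicing of per-camera track segments can be sketched as below. Ordering the segments by a caller-supplied camera sequence is an assumption; the patent states only that the per-camera tracks are spliced into the complete motion track.

```python
def splice_trajectory(per_camera_tracks, camera_order):
    """Concatenate the target vehicle's per-camera track segments.
    `camera_order` (the sequence in which the vehicle passed the cameras)
    is assumed to be known to the caller."""
    full = []
    for cam in camera_order:
        full.extend(per_camera_tracks.get(cam, []))
    return full

tracks = {"cam01": [(0, 0), (5, 2)], "cam02": [(100, 3), (106, 5)]}
full = splice_trajectory(tracks, ["cam01", "cam02"])
# -> [(0, 0), (5, 2), (100, 3), (106, 5)]
```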
step S401, a certain vehicle detection image corresponding to any camera in the vehicle information database is obtained, and a target vehicle image is intercepted. Specifically, after the vehicle detection image and the vehicle track are stored in the database based on the camera number-vehicle ID in step S3, the target vehicle image may be captured from a certain vehicle detection image corresponding to any camera number in the vehicle information database.
Step S402: depth feature matrices are obtained for the target vehicle image and for a vehicle detection image corresponding to each of the other camera numbers. Specifically, the target vehicle image and the vehicle detection images corresponding to the other camera numbers are input into the depth appearance feature extraction unit to obtain their depth feature matrices. Because a depth feature matrix is computed from only one vehicle detection image per camera, the amount of calculation is reduced compared with processing every real-time frame image as in the track-building of step S2, and the accuracy is higher.
And S403, based on the depth feature matrix of the target vehicle image and the depth feature matrix of a certain vehicle detection image corresponding to the number of other cameras, obtaining cosine similarity distances of a plurality of vehicles in the target vehicle and the certain vehicle detection image corresponding to the other cameras by using a re-identification network. Specifically, the formula for calculating the cosine similarity distance is as follows:
cos(θ) = (a · b) / (‖a‖ ‖b‖) = Σ_{i=1}^{n} x_i y_i / ( √(Σ_{i=1}^{n} x_i²) · √(Σ_{i=1}^{n} y_i²) )

wherein cos(θ) is the cosine similarity distance, a is the depth feature matrix corresponding to the target vehicle image, b is the depth feature matrix corresponding to the vehicle detection image, x_i is the i-th dimension element of the depth feature matrix corresponding to the target vehicle image, y_i is the i-th dimension element of the depth feature matrix corresponding to the vehicle detection image, n is the dimension of the depth feature matrix, and 1 ≤ i ≤ n.
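The formula above can be sketched as a short Python function. This is a minimal illustration of the cosine similarity computation as stated in the description; the function name and the use of flat Python lists for the depth feature matrices are assumptions for the example, not part of the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity cos(theta) between two depth feature vectors a and b,
    computed as sum(x_i * y_i) / (sqrt(sum(x_i^2)) * sqrt(sum(y_i^2)))."""
    if len(a) != len(b):
        raise ValueError("feature vectors must have the same dimension n")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical directions give cos(θ) = 1 and orthogonal features give cos(θ) = 0, which is why a smaller "cosine similarity distance" in the sense used here corresponds to a closer appearance match.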
Step S404, classifying and sorting the cosine similarity distances according to the camera numbers to obtain the minimum cosine similarity distance corresponding to each camera number. Specifically, one vehicle detection image is selected for each camera number, and each vehicle detection image comprises images of a plurality of vehicles. Based on step S403, the cosine similarity distances between the target vehicle and the plurality of vehicles in the vehicle detection images corresponding to the other cameras can be calculated; these distances are then classified and sorted according to camera number to obtain the minimum cosine similarity distance corresponding to each camera number.
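The classify-and-sort step can be sketched as a per-camera minimum over the computed distances. The triple layout (camera number, vehicle ID, distance) is an illustrative assumption for the example:

```python
def min_distance_per_camera(distances):
    """Group cosine similarity distances by camera number and keep the
    smallest distance (best candidate vehicle) for each camera.

    distances: iterable of (camera_no, vehicle_id, dist) triples.
    Returns {camera_no: (vehicle_id, dist)}.
    """
    best = {}
    for cam, vid, dist in distances:
        if cam not in best or dist < best[cam][1]:
            best[cam] = (vid, dist)
    return best
```

Each camera then contributes exactly one candidate (its minimum-distance vehicle), which is what step S405 compares against the similarity threshold.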
Step S405, judging whether the minimum cosine similarity distance is smaller than a similarity threshold; if so, the vehicle corresponding to the minimum cosine similarity distance in the corresponding vehicle detection image is determined to be the target vehicle, and if not, the target vehicle is judged to have driven away from the expressway to be monitored. Specifically, one minimum cosine similarity distance is obtained from the vehicle detection image corresponding to each of the other cameras, and whether the target vehicle appears in that image is judged on this basis. The similarity threshold is obtained by averaging over a large number of experiments; different vehicle types correspond to different similarity thresholds under different external conditions, such as illumination and rainy weather, that affect surveillance imaging.
Step S406, matching the vehicle information database based on the camera numbers and the vehicle IDs corresponding to the vehicle detection images to obtain the motion track of the target vehicle, thereby realizing cross-mirror tracking. Based on the previous step, when a vehicle in a vehicle detection image corresponding to another camera is judged to be the target vehicle, the vehicle information database can be queried by camera number and vehicle ID to obtain the corresponding vehicle track of the target vehicle under that camera, and the corresponding vehicle tracks are spliced to obtain the complete motion track of the target vehicle, so that cross-mirror tracking is realized.
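The track-splicing step can be sketched as a lookup-and-concatenate over the database. The dictionary keyed by (camera number, vehicle ID) and the (x, y) point format are illustrative assumptions standing in for the patent's database:

```python
def stitch_trajectory(track_db, matches):
    """Splice per-camera tracks of the matched target vehicle into one
    complete motion track.

    track_db: {(camera_no, vehicle_id): [(x, y), ...]} per-camera tracks.
    matches:  ordered list of (camera_no, vehicle_id) keys identified as
              the target vehicle, in the order the cameras were passed.
    """
    full_track = []
    for key in matches:
        full_track.extend(track_db.get(key, []))
    return full_track
```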
In this way, a target vehicle image is intercepted from a vehicle detection image corresponding to any camera in the vehicle information database by utilizing the depth appearance feature extraction unit, the cosine similarity between the target vehicle image and a plurality of vehicles in the vehicle detection images of the other cameras is calculated, and the target vehicle is matched against those vehicles through the cosine similarity, so that the motion track of the target vehicle is obtained and cross-mirror tracking is realized with high matching efficiency and high precision.
Compared with the prior art, the method for tracking the expressway monitoring video across-mirror vehicles provided by the embodiment comprises the steps of detecting the vehicles through the improved YOLO target detection model to obtain vehicle detection images, tracking the vehicles through the multi-target tracking model to obtain vehicle IDs and vehicle tracks, finally obtaining cosine similarity based on the re-recognition network, further splicing to obtain the complete motion track of the target vehicles, and providing an efficient, rapid and high-precision video analysis technology for safety monitoring and searching of the target vehicles for the expressway management department.
In another embodiment of the present invention, a highway surveillance video mirror-crossing vehicle tracking system is disclosed, as shown in fig. 5, including a detection module 100, configured to obtain frame images in a plurality of camera video files of a highway to be monitored, and perform vehicle detection on each frame image based on an improved YOLO target detection model to obtain a vehicle detection image including a complete vehicle rectangular frame; the tracking module 200 is used for inputting the vehicle detection image into the multi-target tracking model to obtain a vehicle tracking result, wherein the vehicle tracking result comprises a vehicle ID and a vehicle track; a vehicle information database obtaining module 300, configured to establish a vehicle information database according to the vehicle detection image and the vehicle tracking result; and the motion track obtaining module 400 is configured to intercept a target vehicle image according to the vehicle detection image in the vehicle information database, and match a motion track of a target vehicle corresponding to the target vehicle image according to the vehicle information database, so as to implement cross-mirror tracking. Specifically, the system allows a video file to be selected, performs tracking, and displays the tracking result, namely an image of each vehicle; the coordinates of the center point of each vehicle are displayed in a table, from which the vehicle track and the vehicle ID corresponding to the vehicle track are obtained, and the system also provides pause and resume tracking functions.
A highway monitoring video mirror-crossing vehicle tracking system detects vehicles through an improved YOLO target detection model to obtain vehicle detection images, tracks the vehicles through a multi-target tracking model to obtain vehicle IDs and vehicle tracks, finally obtains cosine similarity based on a re-recognition network, and further splices to obtain complete motion tracks of the target vehicles, and provides an efficient, rapid and high-precision video analysis technology for a highway management department to safely monitor and search the target vehicles.
Preferably, the detection module comprises a feature extraction network layer and a YOLO detection layer; the feature extraction network layer comprises a stem unit and an OSA unit; the stem unit is used for down-sampling frame images in a plurality of camera video files of the expressway to obtain images with the size of 304 x 128; the OSA unit is used for convolving the input image of size 304 x 128 to obtain an image of size 19 x 512; and the YOLO detection layer is used for obtaining a vehicle detection image containing a complete vehicle rectangular frame according to the image of size 19 x 512 output by the OSA unit.
The detection module replaces the backbone network in YOLOv3 with VoVNet, a variant of DenseNet with better learning capacity, to obtain the feature extraction network layer. Meanwhile, considering the size of vehicles captured in expressway surveillance video images, the three detection scales (large, medium and small) of the YOLOv3 detection layer are reduced to two (medium and small) to obtain the improved YOLO target detection model, so that the model is smaller, the calculation speed is higher, the calculation amount is reduced, and a vehicle detection image containing a complete vehicle rectangular frame can be obtained more quickly.
Preferably, the tracking module comprises a motion prediction unit and a depth appearance feature extraction unit, wherein the motion prediction unit is used for predicting to obtain a current frame vehicle predicted image according to a previous frame vehicle detection image; the depth appearance characteristic extraction unit comprises a re-identification network, obtains a vehicle track based on a vehicle detection image and a vehicle prediction image which are input into the re-identification network, and numbers the vehicle track to obtain a vehicle ID corresponding to the vehicle track.
Preferably, the vehicle information database obtaining module stores the vehicle detection image and the vehicle track into the database according to the camera number and the vehicle ID to obtain the vehicle information database.
The vehicle information database can be obtained by storing the vehicle detection image and the vehicle track into the database through the vehicle information database acquisition module by using the naming rule of camera number-vehicle ID, and data support and basis are provided for re-identification of the vehicle track and complete splicing of the target vehicle track.
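The camera number-vehicle ID naming rule can be sketched as follows. The key format, field names, and in-memory dictionary standing in for the database are illustrative assumptions, not the patent's actual storage schema:

```python
def store_record(db, camera_no, vehicle_id, detection_image, track):
    """Store a vehicle detection image and its track in the database under
    a 'cameraNo-vehicleID' key, per the naming rule described above.

    db: dict standing in for the vehicle information database.
    Returns the key used, so the caller can later query by camera and ID.
    """
    key = f"{camera_no}-{vehicle_id}"
    db[key] = {"image": detection_image, "track": track}
    return key
```

Querying the database during re-identification then reduces to rebuilding the same key from a matched camera number and vehicle ID.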
Preferably, the motion trajectory obtaining module executes the following process:
acquiring a certain vehicle detection image corresponding to any camera number in a vehicle information database, and intercepting a target vehicle image by using a depth appearance characteristic extraction unit;
respectively obtaining a depth characteristic matrix of the target vehicle image and a depth characteristic matrix of a certain vehicle detection image corresponding to the other camera number based on the target vehicle image and the certain vehicle detection image corresponding to the other camera number;
based on the depth feature matrix of the target vehicle image and the depth feature matrix of a certain vehicle detection image corresponding to the other camera numbers, a re-identification network is utilized to obtain the cosine similarity distance between the target vehicle and the vehicle in the certain vehicle detection image corresponding to the other camera numbers;
classifying and sequencing the cosine similarity distances according to the camera numbers to obtain the minimum cosine similarity distance corresponding to the same camera number;
judging whether the minimum cosine similarity distance is smaller than a similarity threshold value, if so, judging that the vehicle in the corresponding vehicle detection image is a target vehicle, and if not, judging that the target vehicle drives away from the highway to be monitored;
and matching the vehicle information database based on the camera number and the vehicle ID corresponding to the vehicle detection image to obtain the motion track of the target vehicle, thereby realizing cross-mirror tracking.
The motion track obtaining module can calculate the similarity between the target vehicle and all tracked vehicles and display them on the computer interface sorted by similarity: the higher a vehicle ranks, the more similar it is to the target vehicle. After matching succeeds, the successfully matched vehicle ID is output on the right side of the interface and the tracks are spliced to obtain the complete track of the target vehicle. The module intercepts a target vehicle image from a vehicle detection image corresponding to any camera in the vehicle information database by using the depth appearance feature extraction unit, calculates the cosine similarity between the target vehicle image and a plurality of vehicles in the vehicle detection images of the other cameras, and matches the target vehicle against those vehicles through the cosine similarity, so as to obtain the motion track of the target vehicle and realize cross-mirror tracking with high matching efficiency and high precision.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. A method for tracking a vehicle crossing mirrors by monitoring videos on a highway is characterized by comprising the following steps:
acquiring frame images in a plurality of camera video files of a highway to be monitored, and carrying out vehicle detection on each frame image based on an improved YOLO target detection model to obtain a vehicle detection image containing a complete vehicle rectangular frame;
inputting the vehicle detection image into a multi-target tracking model to obtain a vehicle tracking result; the vehicle tracking result comprises a vehicle ID and a vehicle track;
establishing a vehicle information database according to the vehicle detection image and the vehicle tracking result;
and intercepting a target vehicle image based on a certain vehicle detection image corresponding to any camera number in the vehicle information database, and matching the motion track of the target vehicle corresponding to the target vehicle image according to the vehicle information database to realize cross-mirror tracking.
2. The method of claim 1, wherein the improved YOLO target detection model comprises a feature extraction network layer and a YOLO detection layer; wherein the feature extraction network layer comprises a stem unit and an OSA unit;
a stem unit, configured to perform downsampling on frame images in the multiple-camera video files of the highway, to obtain an image with a size of 304 × 128;
an OSA unit for convolving an input image of size 304 x 128 to obtain an image of size 19 x 512;
and the YOLO detection layer is used for obtaining a vehicle detection image containing a complete vehicle rectangular frame according to the image with the size of 19 x 512 output by the OSA unit.
3. The highway surveillance video cross-mirror vehicle tracking method according to claim 1, wherein the multi-target tracking model comprises a motion prediction unit and a depth appearance feature extraction unit:
the motion prediction unit is used for predicting to obtain a current frame vehicle predicted image according to a previous frame vehicle detection image;
the depth appearance feature extraction unit comprises a re-identification network, obtains a vehicle track based on a vehicle detection image and a vehicle prediction image which are input into the re-identification network, and numbers the vehicle track to obtain a vehicle ID corresponding to the vehicle track.
4. The method for cross-mirror vehicle tracking of highway surveillance videos according to claim 3, wherein building a vehicle information database according to the vehicle tracking result specifically comprises: and storing the vehicle detection image and the vehicle track to a database based on the camera number and the vehicle ID to obtain a vehicle information database.
5. The method for tracking the highway surveillance video mirror-crossing vehicle according to claim 4, wherein a target vehicle image is intercepted based on a certain vehicle detection image corresponding to any camera number in the vehicle information database, and the mirror-crossing tracking is realized by matching a motion track of a target vehicle corresponding to the target vehicle image according to the vehicle information database, and the method comprises the following steps:
acquiring a certain vehicle detection image corresponding to any camera number in the vehicle information database, and intercepting a target vehicle image;
respectively obtaining a depth characteristic matrix of the target vehicle image and a depth characteristic matrix of a certain vehicle detection image corresponding to the other camera serial numbers based on the target vehicle image and the certain vehicle detection image corresponding to the other camera serial numbers;
based on the depth feature matrix of the target vehicle image and the depth feature matrix of a certain vehicle detection image corresponding to other camera numbers, the cosine similarity distance between the target vehicle and a plurality of vehicles in the certain vehicle detection image corresponding to other camera numbers is obtained by utilizing the re-identification network;
classifying and sequencing the cosine similarity distances according to the camera numbers to obtain the minimum cosine similarity distance corresponding to the same camera number;
judging whether the minimum cosine similarity distance corresponding to the numbers of other cameras is smaller than a similarity threshold value, if so, judging that the vehicle in the corresponding vehicle detection image is a target vehicle, and if not, judging that the target vehicle drives away from the highway to be monitored;
matching the vehicle information database based on the camera number and the vehicle ID corresponding to the vehicle detection image, splicing to obtain the motion track of the target vehicle, and realizing cross-mirror tracking.
6. A highway surveillance video mirror-spanning vehicle tracking system, comprising:
the detection module is used for acquiring frame images in a plurality of camera video files of the highway to be monitored, and carrying out vehicle detection on each frame image based on an improved YOLO target detection model to obtain a vehicle detection image containing a complete vehicle rectangular frame;
the tracking module is used for inputting the vehicle detection image into the multi-target tracking model to obtain a vehicle tracking result; the vehicle tracking result comprises a vehicle ID and a vehicle track;
the vehicle information database acquisition module is used for establishing a vehicle information database according to the vehicle detection image and the vehicle tracking result;
and the motion track obtaining module is used for intercepting a target vehicle image according to a certain vehicle detection image corresponding to any camera number in the vehicle information database, and matching the motion track of the target vehicle corresponding to the target vehicle image according to the vehicle information database to realize cross-mirror tracking.
7. The highway surveillance video cross-mirror vehicle tracking system of claim 6, wherein the detection module comprises a feature extraction network layer and a YOLO detection layer; wherein the feature extraction network layer comprises a stem unit and an OSA unit;
a stem unit, configured to perform downsampling on frame images in the multiple-camera video files of the highway, to obtain an image with a size of 304 × 128;
an OSA unit for convolving an input image of size 304 x 128 to obtain an image of size 19 x 512;
and the YOLO detection layer is used for obtaining a vehicle detection image containing a complete vehicle rectangular frame according to the image with the size of 19 x 512 output by the OSA unit.
8. The highway surveillance video cross-mirror vehicle tracking system of claim 7, wherein the tracking module comprises a motion prediction unit and a depth appearance feature extraction unit:
the motion prediction unit is used for predicting to obtain a current frame vehicle predicted image according to a previous frame vehicle detection image;
the depth appearance feature extraction unit comprises a re-identification network, obtains a vehicle track based on a vehicle detection image and a vehicle prediction image which are input into the re-identification network, and numbers the vehicle track to obtain a vehicle ID corresponding to the vehicle track.
9. The system of claim 8, wherein the vehicle information database acquisition module stores the vehicle detection image and the vehicle track into a database according to the camera number and the vehicle ID to obtain the vehicle information database.
10. The system of claim 9, wherein the motion trajectory acquisition module performs the following process:
acquiring a certain vehicle detection image corresponding to any camera number in the vehicle information database, and intercepting a target vehicle image;
respectively obtaining a depth characteristic matrix of the target vehicle image and a depth characteristic matrix of a certain vehicle detection image corresponding to the other camera serial numbers based on the target vehicle image and the certain vehicle detection image corresponding to the other camera serial numbers;
based on the depth feature matrix of the target vehicle image and the depth feature matrix of a certain vehicle detection image corresponding to the other camera numbers, the cosine similarity distance between the target vehicle and the vehicle in the certain vehicle detection image corresponding to the other camera numbers is obtained by utilizing the re-identification network;
classifying and sequencing the cosine similarity distances according to the camera numbers to obtain the minimum cosine similarity distance corresponding to the same camera number;
judging whether the minimum cosine similarity distance corresponding to the numbers of other cameras is smaller than a similarity threshold value, if so, judging that the vehicle in the corresponding vehicle detection image is a target vehicle, and if not, judging that the target vehicle drives away from the highway to be monitored;
matching the vehicle information database based on the camera number and the vehicle ID corresponding to the vehicle detection image, splicing to obtain the motion track of the target vehicle, and realizing cross-mirror tracking.
CN202010897531.0A 2020-08-31 2020-08-31 Expressway monitoring video cross-mirror vehicle tracking method and system Active CN112069969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010897531.0A CN112069969B (en) 2020-08-31 2020-08-31 Expressway monitoring video cross-mirror vehicle tracking method and system


Publications (2)

Publication Number Publication Date
CN112069969A true CN112069969A (en) 2020-12-11
CN112069969B CN112069969B (en) 2023-07-25

Family

ID=73665803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010897531.0A Active CN112069969B (en) 2020-08-31 2020-08-31 Expressway monitoring video cross-mirror vehicle tracking method and system

Country Status (1)

Country Link
CN (1) CN112069969B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846374A (en) * 2016-12-21 2017-06-13 大连海事大学 The track calculating method of vehicle under multi-cam scene
CN109873990A (en) * 2019-03-13 2019-06-11 武汉大学 A kind of illegal mining method for early warning in mine based on computer vision
CN110866473A (en) * 2019-11-04 2020-03-06 浙江大华技术股份有限公司 Target object tracking detection method and device, storage medium and electronic device
CN111127520A (en) * 2019-12-26 2020-05-08 华中科技大学 Vehicle tracking method and system based on video analysis
CN111257957A (en) * 2020-02-25 2020-06-09 西安交通大学 Identification tracking system and method based on passive terahertz imaging


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YOUNGWAN LEE ET AL.: "An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW)》, pages 3 - 4 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991134A (en) * 2021-05-11 2021-06-18 交通运输部科学研究院 Driving path reduction measuring and calculating method and device and electronic equipment
CN113516054A (en) * 2021-06-03 2021-10-19 三峡大学 Wood-carrying vehicle detection, identification and tracking method
CN113657378A (en) * 2021-07-28 2021-11-16 讯飞智元信息科技有限公司 Vehicle tracking method, vehicle tracking system and computing device
CN113657378B (en) * 2021-07-28 2024-04-26 讯飞智元信息科技有限公司 Vehicle tracking method, vehicle tracking system and computing device
CN113673395A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Vehicle track processing method and device
CN114067270A (en) * 2021-11-18 2022-02-18 华南理工大学 Vehicle tracking method and device, computer equipment and storage medium
CN114399714A (en) * 2022-01-12 2022-04-26 福州大学 Vehicle-mounted camera video-based vehicle illegal parking detection method
CN114092820A (en) * 2022-01-20 2022-02-25 城云科技(中国)有限公司 Target detection method and moving target tracking method applying same
CN115761616A (en) * 2022-10-13 2023-03-07 深圳市芯存科技有限公司 Control method and system based on storage space self-adaption
CN115761616B (en) * 2022-10-13 2024-01-26 深圳市芯存科技有限公司 Control method and system based on storage space self-adaption
CN117274927A (en) * 2023-09-19 2023-12-22 盐城工学院 Traffic flow monitoring method based on improved multi-target tracking
CN117274927B (en) * 2023-09-19 2024-05-17 盐城工学院 Traffic flow monitoring method based on improved multi-target tracking

Also Published As

Publication number Publication date
CN112069969B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN112069969A (en) Method and system for tracking highway monitoring video mirror-crossing vehicle
Tang et al. Cityflow: A city-scale benchmark for multi-target multi-camera vehicle tracking and re-identification
CN112750150B (en) Vehicle flow statistical method based on vehicle detection and multi-target tracking
CN111932580A (en) Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN103208008B (en) Based on the quick adaptive method of traffic video monitoring target detection of machine vision
CN113139620A (en) End-to-end multi-target detection and tracking joint method based on target association learning
CN111429484A (en) Multi-target vehicle track real-time construction method based on traffic monitoring video
JP4874607B2 (en) Object positioning device
Diego et al. Video alignment for change detection
Zhang et al. Monocular visual traffic surveillance: A review
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
Chen et al. Multi-camera Vehicle Tracking and Re-identification on AI City Challenge 2019.
CN112132873A (en) Multi-lens pedestrian recognition and tracking based on computer vision
CN116434159A (en) Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
CN115205559A (en) Cross-domain vehicle weight recognition and continuous track construction method
CN117115412A (en) Small target detection method based on weighted score label distribution
CN114757977A (en) Moving object track extraction method fusing improved optical flow and target detection network
Pi et al. Very low-resolution moving vehicle detection in satellite videos
Mokayed et al. Nordic Vehicle Dataset (NVD): Performance of vehicle detectors using newly captured NVD from UAV in different snowy weather conditions.
Yang et al. Sea you later: Metadata-guided long-term re-identification for uav-based multi-object tracking
Jiang et al. Surveillance from above: A detection-and-prediction based multiple target tracking method on aerial videos
Castellano et al. Crowd flow detection from drones with fully convolutional networks and clustering
CN115100565B (en) Multi-target tracking method based on spatial correlation and optical flow registration
CN113703015B (en) Data processing method, device, equipment and medium
Zaman et al. Deep Learning Approaches for Vehicle and Pedestrian Detection in Adverse Weather

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 050011 No. 36, Jianshe South Street, Qiaoxi District, Shijiazhuang City, Hebei Province

Applicant after: Hebei transportation planning and Design Institute Co.,Ltd.

Applicant after: Beijing Jiaotong University

Address before: 050011 No. 36, Jianshe South Street, Shijiazhuang City, Hebei Province

Applicant before: HEBEI PROVINCIAL COMMUNICATIONS PLANNING AND DESIGN INSTITUTE

Applicant before: Beijing Jiaotong University

GR01 Patent grant