CN110503663B - Random multi-target automatic detection tracking method based on frame extraction detection - Google Patents
- Publication number
- CN110503663B (application CN201910659013.2A)
- Authority
- CN
- China
- Prior art keywords
- target
- frame
- detection
- tracking
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses a random multi-target automatic detection and tracking method based on frame-extraction detection. It belongs to the fields of digital image processing and machine learning, and in particular relates to a random multi-target automatic detection and tracking method that combines target detection with target tracking. The invention integrates target detection and target tracking into a single system, combining the advantages of both. The proposed initial frame search method can determine when each object first appears in the video sequence, so that objects of different classes appearing in any frame of the video sequence can be automatically detected and tracked. With the updater, the target state is updated by weighing the current detection and tracking results against each other, enabling timely error correction.
Description
Technical Field
The invention belongs to the fields of digital image processing and machine learning, and in particular relates to a random multi-target automatic detection and tracking method that combines target detection with target tracking.
Background
Target detection and tracking has wide application in both military and civilian scenarios. It is an important component of image processing technology and comprises two subtasks: target detection and target tracking. Target detection is the process of locating and classifying target objects in an image. Target tracking is the process of continuously estimating the motion state of a target in subsequent frames, starting from a certain frame of a video sequence in which the tracked target is selected manually or given by a detector.
Although detection alone can locate all targets and label their categories, its processing speed is slow. Tracking alone is fast, but it requires the initial position of each target to be given manually and cannot handle newly appearing targets, so it cannot cope with real scenes on its own. A method combining detection and tracking is therefore needed, one that retains the advantages of both and can be applied to complex tasks.
Many patents have investigated detection-and-tracking methods. Patents such as "Intelligent multi-target detection tracking method" (CN108664930A), "Target detection tracking method in video" (CN108986143A), and "Multi-target detection tracking method, electronic device and storage medium" (CN108121945A) all adopt tracking based on single-frame detection plus matching; no real tracking method is used and inter-frame information is wasted, so the overall speed is slow. The patent "Integrated water-surface target detection and tracking method for unmanned boats" (CN106960446A) does combine detection and tracking, but its fixed-interval detection cannot guarantee that a target appearing in an arbitrary frame is detected.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a random multi-target automatic detection tracking method based on frame extraction detection.
The technical scheme adopted by the invention to solve the problem is as follows: a random multi-target automatic detection and tracking method based on frame-extraction detection, comprising the following steps:
Step 1: the video is equally divided into n segments, and one frame is randomly sampled in each segment to obtain a sampled frame sequence f_1, f_2, …, f_k, …, f_n;
Step 2: for each sampled frame, target detection is carried out using a pre-trained target detection neural network model; the position and category of each target detected in each sampled frame are recorded, and the target set of each frame is compiled;
Step 3: starting from the first sampled frame, the target sets of the current frame f_k and the previous frame f_{k-1} are compared; if a new target appears, the first frame in which it appears, located between this frame and the previous sampled frame, is found using the initial frame search method; following this process, the first frames of all targets in the video sequence are found and recorded in turn;
Step 4: starting from the first frame on which target detection has been performed, a tracker is initialized for every detected target in the current frame, and the targets are tracked by these trackers until the next frame on which target detection is performed; the tracking results of the trackers and the detection results of that frame are input into an updater, which outputs the states of all targets of the current frame, after which the trackers are re-initialized and tracking continues; following this flow, tracking of the whole video sequence is completed up to the last frame of the video.
Further, the pre-trained target detection neural network model in step 2 is established as follows:
Step 2.1: collecting a large number of images containing the targets to be tracked, labeling all targets in the images, and assembling them into a data set divided into a training set, a verification set, and a test set;
Step 2.2: selecting a target detection neural network structure suited to detecting the selected targets, and inputting the prepared training and verification sets into the network for training;
Step 2.3: testing the trained network with the test set to finally obtain a target detection network whose detection accuracy meets the requirement, used for detection before subsequent tracking.
Further, the method in step 3 of judging, by analyzing the current frame f_k and the previous frame f_{k-1}, whether a new target appears is:
assume the target sets obtained by detection for the current frame f_k and the previous frame f_{k-1} are P = {p_1, p_2, …, p_m} and Q = {q_1, q_2, …, q_l}, where the elements of each set are ordered by the coordinates of the targets; for each element p_i of the current frame f_k, the previous-frame target set Q is searched for a corresponding element q_j; if none exists, the target p_i is a newly added target.
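A sketch of this new-target judgment; the `(x, y, label)` tuples and the distance threshold used as the correspondence test are illustrative assumptions standing in for the patent's similarity distance:

```python
def new_targets(curr, prev, max_dist=50.0):
    """Return the elements of the current-frame target set that have no
    corresponding element in the previous-frame set, i.e. newly appeared
    targets. Targets are (x, y, label) tuples; correspondence here means
    the same label and a center distance below max_dist."""
    def corresponds(p, q):
        return p[2] == q[2] and ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 < max_dist
    return [p for p in curr if not any(corresponds(p, q) for q in prev)]
```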
Further, the specific procedure of the initial frame search method in step 3 is:
let the target set obtained by target detection in the current frame f_k be P, and the target set obtained by target detection in the previous frame f_{k-1} be Q; suppose the current frame f_k, compared with the previous frame f_{k-1}, contains a newly added target p_n, so that the first frame f_m in which p_n appears must be found. Take the median of f_{k-1} and f_k, i.e. frame f_a = (f_{k-1} + f_k)/2, and detect it with the target detection network; in the resulting target set, search as described above for an element corresponding to the target p_n. If such an element exists, the frame to be searched satisfies f_{k-1} < f_m ≤ f_a; otherwise f_a < f_m ≤ f_k. Supposing no corresponding element exists, take the median of f_a and f_k, i.e. f_b = (f_a + f_k)/2, and detect frame f_b; likewise, if the detection result of f_b contains an element corresponding to p_n, then f_a < f_m ≤ f_b, otherwise f_b < f_m ≤ f_k. Continuing in this way, medians are taken in turn for detection until a frame is found in which an element corresponding to p_n is present while no such element is present in the previously detected frame; this frame f_m is judged to be the first frame in which the target p_n appears.
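The median-taking search above is an interval-halving (binary) search; a sketch, with `detect` and `target_present` as hypothetical stand-ins for the detection network and the correspondence test, and with the halving run to completion rather than using the patent's exact termination wording:

```python
def find_first_frame(detect, target_present, lo, hi):
    """Median (binary) search for the first frame containing a new target:
    the target is absent in sampled frame lo and present in sampled frame
    hi. detect(frame) runs the detector on that frame; target_present(result)
    checks whether the result contains an element corresponding to the new
    target. Returns the smallest frame index where the target is present."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if target_present(detect(mid)):
            hi = mid  # target already present: first frame lies in (lo, mid]
        else:
            lo = mid  # target still absent: first frame lies in (mid, hi]
    return hi
```

Each iteration halves the interval, so only O(log(f_k − f_{k-1})) extra detector runs are needed per new target.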
Further, the method for updating the tracking state in step 4 is:
suppose the current k-th frame has undergone target detection in the preceding steps, the detected target set is D = {d_1, d_2, …, d_m}, and the tracking results of all trackers in the current frame form the set T = {t_1, t_2, …, t_l}; for each element d_i in D, the corresponding element t_j is searched for in T; if no such element exists, d_i is added directly to the result set T_r; otherwise, letting t_j be the element corresponding to d_i with similarity distance s, a selection coefficient b is calculated according to the following formula:
b = con(t_j) × r − con(d_i) × (1 − r)
where con() is the confidence of the target in detection or tracking, and r ∈ (0, 1) is a set coefficient expressing whether the detection or the tracking result is trusted more;
if b > 0, the tracking result is more reliable, and the update result is t_j;
if b < 0, the detection result is more reliable, and the update result is d_i;
the update result is finally added to the set T_r; traversing the set D in this way completes the update of all target states of the current frame.
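The selection rule can be sketched as follows for one matched tracking/detection pair; the `(state, confidence)` tuple representation is an assumption:

```python
def update_state(t, d, r):
    """Updater of step 4 for one matched pair: t and d are
    (state, confidence) pairs from the tracker and the detector.
    The selection coefficient b = con(t)*r - con(d)*(1 - r) picks the
    tracking state when b > 0 and the detection state otherwise;
    r in (0, 1) expresses how much the tracker is trusted."""
    b = t[1] * r - d[1] * (1 - r)
    return t[0] if b > 0 else d[0]
```

With r = 0.5 the rule reduces to simply keeping whichever result has the higher confidence.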
Further, the method of searching, for each element p_i of the current frame f_k, whether a corresponding element q_j exists in the previous-frame target set is:
let a certain element p_i of the current frame f_k be a, and let the target set of the previous frame be T; for the element a, whether a corresponding element exists in the set T is determined as follows:
assume the set T = {t_1, t_2, …, t_i, …, t_m}; for an element t_i of the set, its coordinates in the corresponding image, obtained from the detection result, are (x_i, y_i), and the target coordinates corresponding to element a are (x_a, y_a); the similarity distance s(t_i) between a and t_i is then defined from the distance between these coordinates together with the categories given by label();
where label() is the category to which a target belongs; the smaller the final result s, the more similar the two elements; if there exists an element t_i in the set T such that s(t_i) < S_a, the corresponding element of a is considered to exist in T, and t_i is that corresponding element; S_a is a set threshold, taken as 3.2 times the size of the target area.
The technical effects of the invention are as follows:
Target detection and target tracking are integrated into a single system, combining the advantages of both. The proposed initial frame search method can determine when each object first appears in the video sequence, so that objects of different classes appearing in any frame can be automatically detected and tracked. With the updater, the target state is updated by weighing the current detection and tracking results, enabling timely error correction.
Drawings
Figure 1 is a flow chart of the automatic target detection and tracking method,
Figure 2 is a detailed flow chart of the automatic target detection and tracking method,
Figure 3 is a schematic diagram of target frame-extraction detection,
Figure 4 is a schematic diagram of the initial frame search method,
Figure 5 is a schematic diagram of the tracking and updating process.
Detailed Description
In order to more clearly illustrate the technical process of the present invention, the present invention is further described below with reference to the accompanying drawings.
As shown in fig. 1 and 2, the method is divided into four steps:
Step 1: the video is sampled in segments to obtain a number of sampled frames f_1, f_2, …, f_k, …, f_n;
Step 2: as shown in fig. 3, for each sampled frame, target detection is performed using a pre-trained target detection neural network model; the position and category of each target detected in each sampled frame are recorded, and the target set of each frame is compiled;
Step 3: starting from the first sampled frame, the target sets of the current frame f_k and the previous sampled frame f_{k-1} are compared; if a new target appears, the first frame in which it appears, located between this frame and the previous sampled frame, is found using the initial frame search method. Following this process, the first frames of all targets in the video sequence are found and recorded in turn;
Step 4: as shown in fig. 5, starting from the first frame on which target detection has been performed, a tracker is initialized for every detected target in the current frame, and the targets are tracked by these trackers until the next frame on which target detection is performed. The tracking results of the trackers and the detection results of that frame are input into an updater, which outputs the states of all targets of the current frame, after which the trackers are re-initialized and tracking continues. Following this flow, tracking of the whole video sequence is completed up to the last frame of the video.
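The overall flow of steps 1-4 can be sketched as a driver loop; `detector`, `make_tracker`, and the omission of the updater are simplifying assumptions:

```python
def detect_and_track(frames, detector, make_tracker, sample_idx):
    """High-level sketch of the four-step flow: run the detector on the
    sampled frames, track between consecutive detection frames, and
    re-initialize the trackers after each detection frame. detector(frame)
    returns a list of target states; make_tracker(frame, state) returns an
    object with an update(frame) -> state method. The updater step is
    elided: detections simply replace the tracked states here."""
    results, trackers = {}, []
    for k, frame in enumerate(frames):
        if k in sample_idx:
            detections = detector(frame)
            results[k] = detections
            trackers = [make_tracker(frame, s) for s in detections]
        else:
            results[k] = [tr.update(frame) for tr in trackers]
    return results
```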
The pre-trained target detection neural network model in step 2 is established as follows:
Step 1: a large number of images containing the target to be tracked are collected; the images should be diverse, i.e. contain the target to be tracked in a variety of states. All targets in the images are labeled to produce a data set;
Step 2: a target detection network structure suitable for detecting the selected target is chosen, for example the SSD or YOLO methods, which have good detection performance, and the prepared training and verification sets are input into the network for training;
Step 3: the trained network is tested with the test set to finally obtain a target detection network whose detection accuracy meets the requirement, generally no lower than 80%. This detector is used for detection before subsequent tracking.
Further, the process in step 3 of analyzing whether a new target appears between frame f_k and frame f_{k-1} is as follows:
suppose the target sets obtained by detection for frames f_k and f_{k-1} are P = {p_1, p_2, …, p_m} and Q = {q_1, q_2, …, q_l}, where the elements of each set are ordered by the coordinates of the targets. For each element p_i of the current frame f_k, search the previous-frame set Q for a corresponding element q_j; if none exists, the target p_i is a new target.
The initial frame search method described in step 3 is similar to binary (median) search, as shown in fig. 4. The specific process is: let the target set obtained by target detection in the current frame f_k be P, and the target set obtained by target detection in the previous frame f_{k-1} be Q; suppose the current frame f_k, compared with the previous frame f_{k-1}, contains a newly added target p_n, so that the first frame f_m in which p_n appears must be found. Take the median of f_{k-1} and f_k, i.e. frame f_a = (f_{k-1} + f_k)/2, and detect it with the target detection network; in the resulting target set, search as described above for an element corresponding to the target p_n. If such an element exists, the frame to be searched satisfies f_{k-1} < f_m ≤ f_a; otherwise f_a < f_m ≤ f_k. Supposing no corresponding element exists, take the median of f_a and f_k, i.e. f_b = (f_a + f_k)/2, and detect frame f_b; likewise, if the detection result of f_b contains an element corresponding to p_n, then f_a < f_m ≤ f_b, otherwise f_b < f_m ≤ f_k. Continuing in this way, medians are taken in turn for detection until a frame is found in which an element corresponding to p_n is present while no such element is present in the previously detected frame; this frame f_m is judged to be the first frame in which the target p_n appears.
The method for establishing the tracker in the step 4 comprises the following steps:
In terms of both speed and accuracy, the tracker can be a traditional method or a deep learning method; for example, a correlation-filter tracker or the SiamFC tracker can be adopted.
Taking the SiamFC tracker as an example, a network structure is first constructed according to the principle of the SiamFC tracking method. A tracking data set can be prepared to train the network, or a publicly available pre-trained tracking network can be used directly. The initial frame and the initial target state are input into the network, then the next frame is input, and tracking can begin.
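As a toy illustration of the correlation principle shared by SiamFC and correlation-filter trackers (not the SiamFC network itself), the following slides a template over a search region and returns the normalized-cross-correlation peak:

```python
import numpy as np

def ncc_track(template, search):
    """Slide the zero-mean template over the search region and return the
    top-left (row, col) of the normalized-cross-correlation peak. Real
    trackers correlate learned deep or filter features and evaluate the
    score map with one FFT/convolution rather than a double loop."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            w = search[r:r + th, c:c + tw]
            wz = w - w.mean()
            score = float(np.sum(wz * t)) / (np.linalg.norm(wz) * tn + 1e-9)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```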
Claims (6)
1. A random multi-target automatic detection tracking method based on frame extraction detection comprises the following steps:
step 1: the video is equally divided into n segments, and one frame is randomly sampled in each segment to obtain a sampled frame sequence f_1, f_2, …, f_k, …, f_n;
step 2: for each sampled frame, target detection is carried out using a pre-trained target detection neural network model; the position and category of each target detected in each sampled frame are recorded, and the target set of each frame is compiled;
step 3: starting from the first sampled frame, the target sets of the current frame f_k and the previous frame f_{k-1} are compared; if a new target appears, the first frame in which it appears, located between this frame and the previous sampled frame, is found using the initial frame search method; following this process, the first frames of all targets in the video sequence are found and recorded in turn;
step 4: starting from the first frame on which target detection has been performed, a tracker is initialized for every detected target in the current frame, and the targets are tracked by these trackers until the next frame on which target detection is performed; the tracking results of the trackers and the detection results of that frame are input into an updater, which outputs the states of all targets of the current frame, after which the trackers are re-initialized and tracking continues; following this flow, tracking of the whole video sequence is completed up to the last frame of the video.
2. The method according to claim 1, wherein the pre-trained target detection neural network model in step 2 is established as follows:
step 2.1: collecting a large number of images containing the targets to be tracked, labeling all targets in the images to produce a data set, wherein the data set is divided into a training set, a verification set and a test set;
step 2.2: selecting a target detection neural network structure for detecting the selected target, and inputting the manufactured training set and the verification set into a network for training;
step 2.3: and testing the trained network by using the test set to finally obtain a target detection network with the detection accuracy meeting the requirement for detection before subsequent tracking.
3. The method as claimed in claim 1, wherein the method in step 3 of judging, by analyzing the current frame f_k and the previous frame f_{k-1}, whether a new target appears is:
assuming the target sets obtained by detection for the current frame f_k and the previous frame f_{k-1} are P = {p_1, p_2, …, p_m} and Q = {q_1, q_2, …, q_l}, where the elements of each set are ordered by the coordinates of the targets; for each element p_i of the current frame f_k, the previous-frame target set Q is searched for a corresponding element q_j; if none exists, the target p_i is a new target.
4. The random multi-target automatic detection tracking method based on frame extraction detection as claimed in claim 1, wherein the specific procedure of the initial frame search method in step 3 is:
let the target set obtained by target detection in the current frame f_k be P, and the target set obtained by target detection in the previous frame f_{k-1} be Q; suppose the current frame f_k, compared with the previous frame f_{k-1}, contains a newly added target p_n, so that the first frame f_m in which p_n appears must be found; take the median of f_{k-1} and f_k, i.e. frame f_a = (f_{k-1} + f_k)/2, and detect it with the target detection network; in the resulting target set, search as described above for an element corresponding to the target p_n; if such an element exists, the frame to be searched satisfies f_{k-1} < f_m ≤ f_a, otherwise f_a < f_m ≤ f_k; supposing no corresponding element exists, take the median of f_a and f_k, i.e. f_b = (f_a + f_k)/2, and detect frame f_b; likewise, if the detection result of f_b contains an element corresponding to p_n, then f_a < f_m ≤ f_b, otherwise f_b < f_m ≤ f_k; continuing in this way, medians are taken in turn for detection until a frame is found in which an element corresponding to p_n is present while no such element is present in the previously detected frame; this frame f_m is judged to be the first frame in which the target p_n appears.
5. The random multi-target automatic detection tracking method based on frame extraction detection as claimed in claim 1, wherein the method for updating the tracking state in step 4 is:
supposing the current k-th frame has undergone target detection in the preceding steps, the detected target set is D = {d_1, d_2, …, d_m}, and the tracking results of all trackers in the current frame form the set T = {t_1, t_2, …, t_l}; for each element d_i in D, the corresponding element t_j is searched for in T; if no such element exists, d_i is added directly to the result set T_r; otherwise, letting t_j be the element corresponding to d_i with similarity distance s, a selection coefficient b is calculated according to the following formula:
b = con(t_j) × r − con(d_i) × (1 − r)
where con() is the confidence of the target in detection or tracking, and r ∈ (0, 1) is a set coefficient expressing whether the detection or the tracking result is trusted more;
if b > 0, the tracking result is more reliable, and the update result is t_j;
if b < 0, the detection result is more reliable, and the update result is d_i.
6. The method as claimed in claim 1, wherein the method of searching, for each element p_i of the current frame f_k, whether a corresponding element q_j exists in the previous-frame target set is:
letting a certain element p_i of the current frame f_k be a and the target set of the previous frame be T, for the element a, whether a corresponding element exists in the set T is determined as follows:
assuming the set T = {t_1, t_2, …, t_i, …, t_m}, for an element t_i of the set, its coordinates in the corresponding image, obtained from the detection result, are (x_i, y_i), and the target coordinates corresponding to element a are (x_a, y_a); the similarity distance s(t_i) between a and t_i is then defined from the distance between these coordinates together with the categories given by label();
where label() is the category to which a target belongs; the smaller the final result s, the more similar the two elements; if there exists an element t_i in the set T such that s(t_i) < S_a, the corresponding element of a is considered to exist in T, and t_i is that corresponding element; S_a is a set threshold, taken as 3.2 times the size of the target area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910659013.2A CN110503663B (en) | 2019-07-22 | 2019-07-22 | Random multi-target automatic detection tracking method based on frame extraction detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110503663A CN110503663A (en) | 2019-11-26 |
CN110503663B true CN110503663B (en) | 2022-10-14 |
Family
ID=68586679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910659013.2A Active CN110503663B (en) | 2019-07-22 | 2019-07-22 | Random multi-target automatic detection tracking method based on frame extraction detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110503663B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103347167A (en) * | 2013-06-20 | 2013-10-09 | 上海交通大学 | Surveillance video content description method based on fragments |
CN106778503A (en) * | 2016-11-11 | 2017-05-31 | 深圳云天励飞技术有限公司 | A kind of detection based on circulation frame buffer zone and the method and system for tracking |
CN106960446A (en) * | 2017-04-01 | 2017-07-18 | 广东华中科技大学工业技术研究院 | A kind of waterborne target detecting and tracking integral method applied towards unmanned boat |
CN108108697A (en) * | 2017-12-25 | 2018-06-01 | 中国电子科技集团公司第五十四研究所 | A kind of real-time UAV Video object detecting and tracking method |
CN108564069A (en) * | 2018-05-04 | 2018-09-21 | 中国石油大学(华东) | A kind of industry safe wearing cap video detecting method |
CN108986143A (en) * | 2018-08-17 | 2018-12-11 | 浙江捷尚视觉科技股份有限公司 | Target detection tracking method in a kind of video |
Non-Patent Citations (2)
Title |
---|
Neo-Angiogenesis Metabolic Biomarker of Tumor-genesis Tracking by Infrared Joystick Contact Imaging in Personalized Homecare System; Szu, H.; Bailian (百链); 2014-12-31; vol. 9118; pp. 1-13 *
Research on real-time target tracking algorithms in complex environments (复杂环境下的实时目标跟踪算法研究); Feng Chenglong; CNKI; 2018-03-15; I138-2015 *
Also Published As
Publication number | Publication date |
---|---|
CN110503663A (en) | 2019-11-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816689B (en) | Moving target tracking method based on adaptive fusion of multilayer convolution characteristics | |
Feichtenhofer et al. | Detect to track and track to detect | |
CN110070074B (en) | Method for constructing pedestrian detection model | |
CN110660083B (en) | Multi-target tracking method combined with video scene feature perception | |
CN110889324A (en) | Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance | |
CN111476817A (en) | Multi-target pedestrian detection tracking method based on yolov3 | |
CN110796074B (en) | Pedestrian re-identification method based on space-time data fusion | |
CN111582349B (en) | Improved target tracking algorithm based on YOLOv3 and kernel correlation filtering | |
CN110796679B (en) | Target tracking method for aerial image | |
CN113256690B (en) | Pedestrian multi-target tracking method based on video monitoring | |
CN111524164B (en) | Target tracking method and device and electronic equipment | |
CN110555868A (en) | method for detecting small moving target under complex ground background | |
CN107622507B (en) | Air target tracking method based on deep learning | |
CN110728694A (en) | Long-term visual target tracking method based on continuous learning | |
CN114708300A (en) | Anti-blocking self-adaptive target tracking method and system | |
CN111768429A (en) | Pedestrian target tracking method in tunnel environment based on Kalman filtering and pedestrian re-identification algorithm | |
CN114926859A (en) | Pedestrian multi-target tracking method in dense scene combined with head tracking | |
CN114283355A (en) | Multi-target endangered animal tracking method based on small sample learning | |
CN113129336A (en) | End-to-end multi-vehicle tracking method, system and computer readable medium | |
CN106934339B (en) | Target tracking and tracking target identification feature extraction method and device | |
CN116109975B (en) | Power grid safety operation monitoring image processing method and intelligent video monitoring system | |
CN110503663B (en) | Random multi-target automatic detection tracking method based on frame extraction detection | |
Wang et al. | Low-slow-small target tracking using relocalization module | |
Pillai et al. | Fine-Tuned EfficientNetB4 Transfer Learning Model for Weather Classification | |
CN115588149A (en) | Cross-camera multi-target cascade matching method based on matching priority |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||