CN113392777A - Real-time target detection method based on online learning strategy - Google Patents

Real-time target detection method based on online learning strategy

Info

Publication number
CN113392777A
Authority
CN
China
Prior art keywords
sample
window
detector
scanning
posterior probability
Prior art date
Legal status
Pending
Application number
CN202110672668.0A
Other languages
Chinese (zh)
Inventor
王洁
卢晓燕
姜文涛
王娇颖
钱钧
李良福
张莹
王超
何曦
刘轩
李璐阳
Current Assignee
Xi'an Institute of Applied Optics
Original Assignee
Xi'an Institute of Applied Optics
Priority date
Filing date
Publication date
Application filed by Xi'an Institute of Applied Optics
Priority to CN202110672668.0A
Publication of CN113392777A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time target detection method based on an online learning strategy. First, an input target window and scanning windows are acquired from a sensor image, each scanning window is labelled as a positive or negative sample according to its overlap parameter with the target window, the posterior probabilities are calculated, and a random forest detector is established. All scanning windows are then traversed in a new frame of image to build a training sample set, and the random forest feature vector and posterior probability of each sample are calculated. Target tracking is executed to automatically obtain the latest target window information; the training sample set is divided into a positive sample set and a negative sample set according to the overlap parameter between each scanning window and the new target window, and the random forest detector is trained with the training sample set. Finally, iterative training is continued until a trained detector is obtained; a frame of image is input to the detector and the target detection result is output. The invention achieves refined detection of small targets, has a small computational load, and can run in real time on an embedded platform.

Description

Real-time target detection method based on online learning strategy
Technical Field
The invention belongs to the technical field of automatic target detection, and relates to a real-time target detection method based on an online learning strategy.
Background
Automatic target detection is an important function of modern weapon systems. It provides reliable target type and position information, offers strong support for tasks such as battlefield reconnaissance, border patrol and precision strike, and is a key factor in making weapon systems intelligent. Target detection based on machine learning is a key direction in the technical field of automatic target detection.
Machine-learning-based target detection methods mainly include support vector machines and deep learning. The support vector machine takes statistics as its theoretical basis and has an intuitive geometric interpretation and good generalization capability, but suffers from high computational complexity, poor adaptability to changes in target appearance and a low detection rate. Deep learning methods, by constructing large data sets and optimizing the training strategy, adapt well to changes in target appearance, scale and illumination and greatly improve recognition accuracy; however, to achieve a good recognition effect they require collecting a large number of training samples and formulating different training strategies to optimize the network parameters. Moreover, because feature extraction relies on a multilayer convolutional architecture, the feature dimension is high, the computational complexity is large, real-time performance is unsatisfactory, long offline training is required, the response speed is slow and the flexibility is poor.
In summary, the problems of the prior art are as follows: constructing a data set and training a model consume a large amount of time and can only be done offline, so that when a target detection task is executed only the specific target classes covered by the offline-trained model can be detected, and it is difficult to detect an arbitrary target of interest in the scene; the computational complexity is high and the operating efficiency is low; the trained model is overly general, so the fine-grained recognition of small targets is unsatisfactory.
Disclosure of Invention
(I) Technical problem to be solved
The technical problem to be solved by the invention is: how to respond quickly and in real time to the requirement of detecting any target of interest in a scene, and how to provide an accurate target detection result.
(II) Technical solution
In order to solve the technical problem, the invention provides a real-time target detection method based on an online learning strategy, which comprises the following steps:
step 1: acquiring a frame of scene image from the sensor, selecting a target of interest in the scene, and acquiring the input target window information, including the window size and position;
step 2: acquiring scanning windows whose size is based on that of the input target window, together with expanded scanning window information including window size and position, and calculating the overlap parameter between every scanning window and the input target window;
step 3: establishing a random forest detector, the detector data comprising a positive sample library, a negative sample library and a posterior probability value for each sample, and initializing the sample libraries and the posterior probabilities;
step 4: reading a new frame of image, traversing all the scanning windows obtained in step 2, obtaining all scanning window pictures of the current frame according to the scanning window position and size information to form a training sample set, and calculating the random forest feature vector and posterior probability of each sample;
step 5: executing target tracking, updating the target window information of the current frame image according to the tracking result, calculating the overlap parameter between the scanning windows obtained in step 2 and the target window of the current frame image, and dividing the samples in the training sample set into a positive sample library and a negative sample library according to the overlap, specifically following step 31 to step 32;
step 6: training the random forest detector;
step 7: repeating step 4 to step 6 to iteratively train the random forest detector until the posterior probability update rate ε of the detector (the ratio of the number of posterior probability updates to the number of training passes) is less than a set threshold, at which point the iteration terminates and training is complete;
step 8: executing target detection: a scene image is input to the random forest detector and the target detection result is output.
Wherein, in the step 2, acquiring the scanning window and the expanded scanning window includes the following steps:
step 21: establishing an initial scanning window whose size is consistent with that of the input target window; starting from the upper left corner (0,0) of the current frame image, the window slides over the whole image to obtain n scanning windows of standard size;
step 22: performing a random scale transformation with a scaling coefficient in the range 1-1.2 on each standard scanning window to obtain m expanded scanning windows;
step 23: calculating the overlap parameter between all (n + m) scanning windows and the input target window, where the overlap is defined as the ratio of the intersection of the two windows to their union.
Wherein, in step 3, initializing the random forest detector comprises the following steps:
step 31: establishing a positive sample library in the random forest detector: according to the overlap parameters obtained in step 23, selecting the n scanning windows with the highest overlap with the target window, performing m affine transformations on each scanning window picture to obtain n x m scanning windows in total, obtaining the scanning window image patches of the current frame image according to the n x m scanning window information, and marking them as positive samples;
step 32: establishing a negative sample library in the random forest detector: according to the overlap parameters obtained in step 23, selecting k scanning window pictures whose overlap parameter is larger than a set threshold and marking them as negative samples;
step 33: calculating the random forest feature vectors of the positive and negative samples obtained in step 31 and step 32, and counting the posterior probability corresponding to each feature vector;
step 34: forming the random forest detector from the data obtained in step 31 to step 33; the detector calculates the posterior probability of each input picture and judges whether a target is detected according to the posterior probability value;
wherein, the step 4 of calculating the posterior probability of the sample comprises the following steps:
step 41: according to the random forest principle, each sample yields n binary codes, and these n binary codes form an n-dimensional feature vector called the random forest feature vector of the sample; a posterior probability value is calculated for each feature vector, so that each sample corresponds to one feature vector and one posterior probability statistic;
wherein, in step 6, training the random forest detector comprises the following steps:
step 61: inputting the training sample set into a detector, and calculating the posterior probability of each sample to obtain the classification result of the training sample by the detector;
step 62: comparing the posterior probability of each sample with the set positive and negative sample thresholds and judging whether the detector has made a classification error; if so, adding the sample to the corresponding sample library and correcting its posterior probability. For example, if a sample is labelled positive but the posterior probability calculated by the detector is less than the positive sample threshold, the detector has misclassified the sample as negative; the sample is then added to the positive sample library, its posterior probability is updated, and one training pass is completed.
(III) Advantageous effects
The real-time target detection method based on the online learning strategy provided by the technical scheme has the following beneficial effects:
(1) With the online training and learning strategy provided by the invention, the characteristics of the target can be learned while the electro-optical system performs target tracking; the training period is short, no offline training on a large database is needed, and the response speed is therefore high;
(2) The method can detect any target of interest in a scene. For unconventional target types in particular, a detection method based on deep learning would need to collect a large amount of new data and perform lengthy offline training, whereas the present method only needs a short period of online training and therefore offers high flexibility;
(3) the invention has good detection effect on small targets and can realize refined detection;
(4) The method compensates for the shortcomings of deep learning based on big data, namely its long training and modeling cycle and its low detection rate, particularly for small targets, and is complementary to deep learning;
(5) The invention adopts random forest theory, has a small computational load, and can run in real time on an embedded platform.
Drawings
Fig. 1 is a flowchart of a real-time target detection method based on an online learning strategy according to the present invention.
Detailed Description
In order to make the objects, contents and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The hardware platform of the embodiment of the invention is an in-house developed video tracker circuit board based on a Texas Instruments (TI) TMS320C6455 fixed-point digital signal processor; the real-time target detection method based on the online learning strategy is implemented in the image processing software package loaded on this hardware platform. As shown in Fig. 1, the method of the invention specifically comprises the following steps:
step 1: acquiring the first frame of scene image from the sensor, selecting a target of interest in the scene, acquiring the input target window information and storing it in a target window information structure tBox, where tBox contains a window size parameter tBox.size and a window position parameter tBox.point.
step 2: obtaining the scanning window and expanded scanning window information, including window size and position, with the input target window size parameter tBox.size as the standard size, and calculating the overlap parameter between all scanning windows and the input target window;
2.1): establishing a scanning window structure box containing a window size parameter box.size, a window position parameter box.point and a window overlap parameter box.overlap; setting box.size = tBox.size; starting from the upper left corner (0,0) of the current frame image and sliding the window over the whole image with step size x, n scanning windows of standard size are obtained, and their information is stored in an n-dimensional array of scanning window structures denoted arrayBox1[n];
2.2): performing a random scale transformation with a scaling coefficient in the range 1-1.2 on each scanning window in arrayBox1[n] to obtain m expanded scanning windows, whose information structures are recorded as arrayBox2[m];
2.3): establishing a scanning window structure array arrayBox[k] that stores all the scanning window information obtained in 2.1) and 2.2), i.e. k = n + m; calculating the overlap parameter between all k scanning windows and the input target window and storing it in arrayBox[k].overlap, where the overlap is defined as the ratio of the intersection of the two windows to their union.
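For illustration only, the following Python sketch shows one possible realisation of 2.1) to 2.3): generating the standard and expanded scanning windows and computing the overlap (intersection over union) of a window with the target window. The (x, y, w, h) window representation, the function names and the default parameters are assumptions made for the sketch and are not taken from the patent; the actual embodiment runs as fixed-point code on the TMS320C6455.

```python
import random

def overlap(win_a, win_b):
    """Overlap of two windows (x, y, w, h): intersection area divided by union area."""
    ax, ay, aw, ah = win_a
    bx, by, bw, bh = win_b
    iw = min(ax + aw, bx + bw) - max(ax, bx)   # width of the intersection rectangle
    ih = min(ay + ah, by + bh) - max(ay, by)   # height of the intersection rectangle
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    return inter / (aw * ah + bw * bh - inter)

def make_scan_windows(img_w, img_h, target_size, step=4, n_expanded=100, seed=0):
    """Standard windows from a sliding window of the target's size (2.1),
    plus windows re-scaled by a random coefficient in [1.0, 1.2] (2.2)."""
    rng = random.Random(seed)
    w0, h0 = target_size
    standard = [(x, y, w0, h0)
                for y in range(0, img_h - h0 + 1, step)
                for x in range(0, img_w - w0 + 1, step)]
    expanded = []
    for _ in range(n_expanded):
        x, y, w, h = rng.choice(standard)
        s = rng.uniform(1.0, 1.2)
        expanded.append((x, y, int(w * s), int(h * s)))
    return standard + expanded          # k = n + m windows, as in 2.3)
```

Each returned window would then be stored together with its overlap value against the target window, playing the role of arrayBox[k].overlap in the text above.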
step 3: establishing a random forest detector, the detector data comprising a positive sample library, a negative sample library and the posterior probability value of each sample, and initializing the sample libraries and the posterior probabilities;
3.1): establishing a positive sample library in the random forest detector: setting a threshold of 0.6 and selecting the windows with arrayBox[n + m].overlap > 0.6; performing k affine transformations on each scanning window that passes the threshold and marking these windows as goodboxes; the window pictures corresponding to all goodboxes are marked as positive samples Px;
3.2): establishing a negative sample library in the random forest detector: setting a threshold of 0.2 and selecting the windows with 0.2 < arrayBox[n + m].overlap < 0.6, marking them as badboxes; the window pictures corresponding to all badboxes are marked as negative samples Nx;
3.3): calculating the random forest feature vectors of the positive and negative samples obtained in 3.1) and 3.2), and counting the posterior probability corresponding to each feature vector;
3.4): forming the random forest detector from the data obtained in 3.1) to 3.3);
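A minimal sketch of the sample-library initialisation in 3.1) and 3.2), reusing the overlap helper from the sketch above, is given below. The thresholds 0.6 and 0.2 come from the text; the affine jitter applied to the goodbox patches is only indicated by a comment, since it depends on the image-processing primitives available on the platform.

```python
def partition_windows(windows, target_win, low_thr=0.2, high_thr=0.6):
    """Split scanning windows into goodboxes (positive samples Px) and
    badboxes (negative samples Nx) by their overlap with the target window."""
    goodboxes, badboxes = [], []
    for win in windows:
        ov = overlap(win, target_win)
        if ov > high_thr:
            goodboxes.append(win)      # each goodbox patch would additionally be
                                       # warped by k random affine transforms
        elif low_thr < ov < high_thr:
            badboxes.append(win)
    return goodboxes, badboxes
```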
step 4: reading the image of frame T1, obtaining all scanning window pictures of frame T1 according to the position and size information recorded in arrayBox[k], forming a training sample set train_PNx, and calculating the random forest feature vector and posterior probability of each sample;
4.1): according to the random forest principle, each sample yields n binary codes, and these n binary codes form an n-dimensional feature vector called the random forest feature vector of the sample; a posterior probability value is calculated for each feature vector, so that each sample corresponds to one feature vector and one posterior probability statistic;
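The random forest feature of 4.1) can be pictured with the following fern-style sketch: each of n trees maps an image patch to one binary code built from random pixel comparisons, the n codes form the feature vector, and each code indexes a posterior table p / (p + q) of positive and negative counts. The tree count, code length, patch size and the pixel-pair test are illustrative assumptions; the patent only states that n binary codes and one posterior value are produced per sample.

```python
import numpy as np

class FernForest:
    def __init__(self, n_trees=10, n_bits=8, patch_size=(15, 15), seed=0):
        rng = np.random.default_rng(seed)
        n_pix = patch_size[0] * patch_size[1]
        # Random pixel-comparison pairs: n_bits comparisons per tree
        self.pairs = rng.integers(0, n_pix, size=(n_trees, n_bits, 2))
        self.pos = np.zeros((n_trees, 2 ** n_bits))   # positive counts per code
        self.neg = np.zeros((n_trees, 2 ** n_bits))   # negative counts per code

    def codes(self, patch):
        """The n binary codes of a sample: one integer code per tree."""
        flat = np.asarray(patch, dtype=float).reshape(-1)   # patch of size patch_size
        bits = flat[self.pairs[..., 0]] > flat[self.pairs[..., 1]]
        return bits.astype(np.int64) @ (1 << np.arange(bits.shape[1]))

    def posterior(self, patch):
        """Posterior probability of a sample, averaged over the trees."""
        c = self.codes(patch)
        t = np.arange(len(c))
        p, q = self.pos[t, c], self.neg[t, c]
        return float(np.mean(np.where(p + q > 0, p / np.maximum(p + q, 1), 0.0)))
```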
step 5: executing target tracking to obtain the position information track_point of the target in the image of frame T1, updating the tBox parameter by setting tBox.point = track_point, calculating the overlap parameter between the scanning windows obtained in step 2 and the target window of frame T1 and storing it in arrayBox[k].overlap, and dividing the samples in the training sample set into a positive sample library and a negative sample library according to the overlap, specifically following 3.1) and 3.2);
step 6: training a random forest detector;
6.1): inputting the training sample set train_PNx into the detector and calculating the posterior probability posteriors[i] of each sample to obtain the detector's classification result for the training samples;
6.2): setting the positive sample threshold to 0.75 and the negative sample threshold to 0.5, comparing the posterior probability of each sample with the corresponding threshold, and judging whether the detector has made a classification error. For example, if a sample is labelled positive and its posterior probability posteriors[i] < 0.75, the detector has misclassified the sample as negative; the sample is added to the positive sample library and its posterior probability is updated. If a sample is labelled negative and its posterior probability is greater than 0.5, the sample has been misclassified as positive; it is added to the negative sample library and its posterior probability is updated. This completes one training pass.
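One training pass of 6.1) and 6.2) could then look like the sketch below, building on the FernForest sketch above (again illustrative, not the DSP implementation): every labelled sample is pushed through the detector, and whenever the posterior contradicts the label the sample's codes are added to the corresponding library, which corrects the posterior. The 0.75 and 0.5 thresholds follow the text; samples is assumed to be a list of (patch, is_positive) pairs built in steps 4 and 5.

```python
def train_step(forest, samples, pos_thr=0.75, neg_thr=0.5):
    """One pass over train_PNx; returns the posterior update rate used in step 7."""
    updates = 0
    for patch, is_positive in samples:
        post = forest.posterior(patch)
        c = forest.codes(patch)
        t = np.arange(len(c))
        if is_positive and post < pos_thr:        # positive misclassified as negative
            forest.pos[t, c] += 1
            updates += 1
        elif not is_positive and post > neg_thr:  # negative misclassified as positive
            forest.neg[t, c] += 1
            updates += 1
    return updates / max(len(samples), 1)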
step 7: repeating step 4 to step 6 to iteratively train the random forest detector until the posterior probability update rate ε of the detector (the ratio of the number of posterior probability updates to the number of training passes) is less than a set threshold, at which point the iteration terminates and training is complete;
step 8: executing target detection: a scene image is input to the random forest detector and the target detection result is output.
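Putting steps 4 to 8 together, the online training loop of step 7 can be sketched as follows. Here build_samples stands in for steps 4 and 5 (cropping the scanning windows of the new frame and labelling them with the tracking result), and the stopping threshold on the update rate ε is an assumed value, since the patent leaves it as a design parameter.

```python
def train_online(forest, frames, build_samples, eps_threshold=0.05, max_frames=100):
    """Iterate steps 4-6 on successive frames until the update rate falls below
    the threshold (step 7); the trained detector is then used for detection (step 8)."""
    for frame in frames[:max_frames]:
        samples = build_samples(frame)        # labelled (patch, is_positive) pairs
        eps = train_step(forest, samples)     # one training pass
        if eps < eps_threshold:
            break                             # detector considered trained
    return forest
```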
The above description is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A real-time target detection method based on an online learning strategy is characterized by comprising the following steps:
step 1: acquiring a frame of scene image from a sensor, selecting a target of interest in the scene, and acquiring the input target window information, including the window size and position;
step 2: acquiring scanning windows whose size is based on that of the input target window, together with expanded scanning window information including window size and position, and calculating the overlap parameter between every scanning window and the input target window;
step 3: establishing a random forest detector, the detector data comprising a positive sample library, a negative sample library and a posterior probability value for each sample, and initializing the sample libraries and the posterior probabilities;
step 4: reading a new frame of image, traversing all the scanning windows obtained in step 2, obtaining all scanning window pictures of the current frame according to the scanning window position and size information to form a training sample set, and calculating the random forest feature vector and posterior probability of each sample;
step 5: executing target tracking, updating the target window information of the current frame image according to the tracking result, calculating the overlap parameter between the scanning windows obtained in step 2 and the target window of the current frame image, and dividing the samples in the training sample set into a positive sample library and a negative sample library according to the overlap;
step 6: training the random forest detector;
step 7: repeating step 4 to step 6 to iteratively train the random forest detector until the posterior probability update rate ε of the detector is less than a set threshold, at which point the iteration terminates and training is complete;
step 8: executing target detection: a scene image is input to the random forest detector and the target detection result is output.
2. The method for detecting the real-time target based on the online learning strategy as claimed in claim 1, wherein the step 2 of obtaining the scanning window and the extended scanning window comprises the following steps:
step 21: establishing an initial scanning window whose size is consistent with that of the input target window; starting from the upper left corner (0,0) of the current frame image, the window slides over the whole image to obtain n scanning windows of standard size;
step 22: performing a random scale transformation with a scaling coefficient in the range 1-1.2 on each standard scanning window to obtain m expanded scanning windows;
step 23: calculating the overlap parameter between all (n + m) scanning windows and the input target window, wherein the overlap is defined as the ratio of the intersection of the two windows to their union.
3. The method for detecting the real-time target based on the online learning strategy as claimed in claim 2, wherein the step 3 of initializing the random forest detector comprises the following steps:
step 31: establishing a positive sample library in the random forest detector: according to the overlap parameters obtained in step 23, selecting the n scanning windows with the highest overlap with the target window, performing m affine transformations on each scanning window picture to obtain n x m scanning windows in total, obtaining the scanning window image patches of the current frame image according to the n x m scanning window information, and marking them as positive samples;
step 32: establishing a negative sample library in the random forest detector: according to the overlap parameters obtained in step 23, selecting k scanning window pictures whose overlap parameter is larger than a set threshold and marking them as negative samples;
step 33: calculating the random forest feature vectors of the positive and negative samples obtained in step 31 and step 32, and counting the posterior probability corresponding to each feature vector;
step 34: forming the random forest detector from the data obtained in step 31 to step 33, wherein the detector calculates the posterior probability of each input picture and judges whether a target is detected according to the posterior probability value.
4. The method for detecting the real-time target based on the online learning strategy as claimed in claim 3, wherein the step 4 of calculating the posterior probability of the sample comprises the following steps:
step 41: according to the random forest principle, each sample yields n binary codes, and these n binary codes form an n-dimensional feature vector called the random forest feature vector of the sample; a posterior probability value is calculated for each feature vector, so that each sample corresponds to one feature vector and one posterior probability statistic.
5. The method for detecting the real-time target based on the online learning strategy as claimed in claim 4, wherein in the step 5, the step of dividing the samples in the training sample set into the positive sample library and the negative sample library comprises the following steps:
establishing a positive sample library in the random forest detector: according to the overlap parameters obtained in step 23, selecting the n scanning windows with the highest overlap with the target window, performing m affine transformations on each scanning window picture to obtain n x m scanning windows in total, obtaining the scanning window image patches of the current frame image according to the n x m scanning window information, and marking them as positive samples;
establishing a negative sample library in the random forest detector: according to the overlap parameters obtained in step 23, selecting k scanning window pictures whose overlap parameter with the target window is larger than a set threshold and marking them as negative samples.
6. The method for detecting the real-time target based on the online learning strategy as claimed in claim 5, wherein in the step 6, the training of the random forest detector comprises the following steps:
step 61: inputting the training sample set into a detector, and calculating the posterior probability of each sample to obtain the classification result of the training sample by the detector;
step 62: comparing the posterior probability of each sample with the set positive and negative sample thresholds, judging whether the detector has made a classification error, and if so, adding the sample to the corresponding sample library and correcting its posterior probability, thereby completing one training pass.
7. The method for real-time target detection based on online learning strategy as claimed in claim 6, wherein in step 7, the posterior probability update rate ε is the ratio of the posterior probability update times to the training times.
8. The method for real-time target detection based on online learning strategy as claimed in claim 6, wherein in the step 62, a classification error of the detector comprises: the sample is labelled as a positive sample but the posterior probability calculated by the detector is smaller than the positive sample threshold, indicating that the detector has misclassified the sample as a negative sample; the sample is then added to the positive sample library, the posterior probability of the sample is updated, and one training pass is completed.
9. The method for real-time target detection based on online learning strategy as claimed in claim 7, wherein the detection method is implemented on a video tracker circuit board based on a Texas Instruments (TI) TMS320C6455 fixed-point digital signal processor.
10. Use of a real-time object detection method based on an online learning strategy according to any of claims 1-9 in the field of automatic object detection technology.
CN202110672668.0A 2021-06-17 2021-06-17 Real-time target detection method based on online learning strategy Pending CN113392777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672668.0A CN113392777A (en) 2021-06-17 2021-06-17 Real-time target detection method based on online learning strategy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110672668.0A CN113392777A (en) 2021-06-17 2021-06-17 Real-time target detection method based on online learning strategy

Publications (1)

Publication Number Publication Date
CN113392777A true CN113392777A (en) 2021-09-14

Family

ID=77621800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672668.0A Pending CN113392777A (en) 2021-06-17 2021-06-17 Real-time target detection method based on online learning strategy

Country Status (1)

Country Link
CN (1) CN113392777A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400391A (en) * 2013-08-09 2013-11-20 北京博思廷科技有限公司 Multiple-target tracking method and device based on improved random forest
CN107105159A (en) * 2017-04-13 2017-08-29 山东万腾电子科技有限公司 The real-time detecting and tracking system and method for embedded moving target based on SoC
CN108427960A (en) * 2018-02-10 2018-08-21 南京航空航天大学 Based on improvement Online Boosting and the improved TLD trackings of Kalman filter
US20200302187A1 (en) * 2015-07-17 2020-09-24 Origin Wireless, Inc. Method, apparatus, and system for people counting and recognition based on rhythmic motion monitoring
WO2021022970A1 (en) * 2019-08-05 2021-02-11 青岛理工大学 Multi-layer random forest-based part recognition method and system


Similar Documents

Publication Publication Date Title
CN108470354B (en) Video target tracking method and device and implementation device
CN108921873B (en) Markov decision-making online multi-target tracking method based on kernel correlation filtering optimization
CN111127513A (en) Multi-target tracking method
CN110781262B (en) Semantic map construction method based on visual SLAM
CN110766041B (en) Deep learning-based pest detection method
CN112884742B (en) Multi-target real-time detection, identification and tracking method based on multi-algorithm fusion
CN108320306B (en) Video target tracking method fusing TLD and KCF
CN111882586B (en) Multi-actor target tracking method oriented to theater environment
CN111931864B (en) Method and system for multiple optimization of target detector based on vertex distance and cross-over ratio
CN113592911B (en) Apparent enhanced depth target tracking method
CN111242985B (en) Video multi-pedestrian tracking method based on Markov model
CN114049383B (en) Multi-target tracking method and device and readable storage medium
CN111931571B (en) Video character target tracking method based on online enhanced detection and electronic equipment
CN111639570A (en) Online multi-target tracking method based on motion model and single-target clue
CN114676756A (en) Image recognition method, image recognition device and computer storage medium
CN117036397A (en) Multi-target tracking method based on fusion information association and camera motion compensation
CN111223126A (en) Cross-view-angle trajectory model construction method based on transfer learning
CN113392777A (en) Real-time target detection method based on online learning strategy
Hu et al. Reliability verification‐based convolutional neural networks for object tracking
TWI736063B (en) Object detection method for static scene and associated electronic device
CN112199539A (en) Automatic labeling method, system and equipment for contents of unmanned aerial vehicle three-dimensional map photographic image
Chen et al. A Modified MDP Algorithm for Multi-Pedestrians Tracking
Zhao et al. Adaptive visual tracking based on key frame selection and reinforcement learning
CN116580066B (en) Pedestrian target tracking method under low frame rate scene and readable storage medium
Zhou et al. An Improved TLD Tracking Algorithm for Fast-moving Object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination