CN111104948A - Target tracking method based on adaptive fusion of double models - Google Patents

Target tracking method based on adaptive fusion of double models

Info

Publication number
CN111104948A
CN111104948A (application CN201811259843.8A)
Authority
CN
China
Prior art keywords
target
response
color
adaptive fusion
filter
Prior art date
Legal status
Pending
Application number
CN201811259843.8A
Other languages
Chinese (zh)
Inventor
戴伟聪
金龙旭
李国宁
Current Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN201811259843.8A priority Critical patent/CN111104948A/en
Publication of CN111104948A publication Critical patent/CN111104948A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/254: Fusion techniques of classification results, e.g. of results related to same input data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a target tracking method based on dual-model adaptive fusion. The method uses an adaptive fusion coefficient based on relative confidence so that the response of the correlation filter and the response of the color classifier are fused optimally and the tracking advantages of each model are fully exhibited. This effectively solves the problem that, in the existing Staple target tracking method, the fusion coefficient of the correlation filter and the color classifier is a constant, so that the advantages of the correlation filter tracking model and the color classifier tracking model are not fully exhibited.

Description

Target tracking method based on adaptive fusion of double models
Technical Field
The invention relates to the technical field of computer vision, in particular to a target tracking method based on dual-model adaptive fusion.
Background
Target tracking is one of the main research directions in the field of computer vision. It draws on digital image processing, machine learning, pattern recognition, neural networks, deep learning and related fields, and has broad application prospects in areas such as video surveillance and intelligent robotics.
In recent years, tracking-by-detection methods have developed rapidly, and one of the most mainstream research directions among them is target tracking based on correlation filters. In 2014, Henriques et al. proposed the KCF algorithm, which extended the single-channel grayscale features used by MOSSE and CSK to multi-channel histogram of oriented gradients (HOG) features and mapped the features to a high-dimensional space with the kernel trick. The introduction of KCF caused correlation-filter-based tracking methods to develop rapidly. The SRDCF proposed by Danelljan et al. in 2015 alleviates the inherent boundary effect of correlation filters through spatial regularization and placed at the top of the VOT2015 tracking challenge rankings, but its excessive computational cost limits its practicality. In 2016, Bertinetto et al. proposed the Staple algorithm based on DCF, the linear-kernel version of KCF; Staple improves tracking performance by solving two ridge regression equations and combining a correlation filter with a color classifier, obtaining quite excellent results. However, the fusion coefficient of the correlation filter and the color classifier in the Staple algorithm is a constant, and as a result Staple does not fully exhibit the advantages of the correlation filter tracking model and the color classifier tracking model.
Therefore, to solve the problem that the existing Staple algorithm does not fully exhibit the advantages of the correlation filter tracking model and the color classifier tracking model, a target tracking method that can fully exhibit the advantages of both models is needed.
Disclosure of Invention
Aiming at the problem that the existing Staple algorithm does not fully exhibit the advantages of the correlation filter tracking model and the color classifier tracking model, the embodiment of the invention provides a target tracking method based on dual-model adaptive fusion. The method uses an adaptive fusion coefficient based on relative confidence so that the correlation filter and the color classifier are fused optimally, thereby fully exhibiting the tracking advantages of each model.
The specific scheme of the target tracking method based on dual-model adaptive fusion is as follows. The method comprises the following steps. Step S1: acquiring target initial information from the initial frame. Step S2: extracting color histograms from the foreground region and the background region respectively, and solving and training a color classifier with a ridge regression equation. Step S3: extracting features from the correlation filtering area and training a correlation filter. Step S4: initializing a scale filter, and extracting image blocks of different scales to train the scale filter. Step S5: detecting the target with the color classifier to obtain the response of the color classifier. Step S6: detecting the target in the correlation filtering area with the correlation filter to obtain the correlation filter response. Step S7: calculating a relative confidence from the correlation filter response, calculating an adaptive fusion coefficient based on the relative confidence, and fusing the response of the correlation filter and the response of the color classifier with the adaptive fusion coefficient to obtain the position of the detected target. Step S8: extracting features of the target and updating the correlation filter and the color classifier. Step S9: detecting scale change, and updating the target, the foreground region, the background region and the scale filter. Step S10: repeating steps S5 to S9 until the video ends.
Preferably, the target initial information includes a target position, a length of the target, and a width of the target.
Preferably, the process of extracting the color histogram in step S2 is: dividing the color space equally into a number of color intervals, defining each color interval as a bin of the histogram, and counting the number of pixels of the foreground region or the background region falling in each bin.
Preferably, the bin width of the color histogram is 8.
Preferably, the expression of the ridge regression equation is:
$$\min_{\beta}\ \sum_{t} L_{hist}(\beta;\chi_{t}) + \lambda_{hist}\left\|\beta\right\|^{2}$$
wherein $\chi_{t}$ represents the training samples and their corresponding regression values, $\beta$ is the color classifier to be solved, $L_{hist}$ represents the loss function of the classifier, and $\lambda_{hist}$ is a regularization coefficient.
Preferably, the method of training the correlation filter is to minimize the following equation:
$$\varepsilon = \left\|\sum_{l=1}^{d} h^{l} * f^{l} - g\right\|^{2} + \lambda\sum_{l=1}^{d}\left\|h^{l}\right\|^{2}$$
wherein $f$ denotes a sample, $d$ is the number of feature channels of $f$, $h$ is the correlation filter, $g$ is the desired output of the correlation filter (a Gaussian function), $*$ denotes convolution, and $\lambda$ is a regularization coefficient.
Preferably, the expression of the relative confidence is:
$$r_{t} = \frac{APCE_{t}}{\frac{1}{t}\sum_{i=1}^{t} APCE_{i}}$$
wherein $r_{t}$ is the relative confidence of the correlation filter's detection result at the t-th frame relative to the global history, and $APCE_{t}$ is the average correlation peak energy of the t-th frame response $y_{t}$, computed as:
$$APCE_{t} = \frac{\left|y_{\max}-y_{\min}\right|^{2}}{\operatorname{mean}\left(\sum_{w,h}\left(y_{w,h}-y_{\min}\right)^{2}\right)}$$
preferably, the expression of the adaptive fusion coefficient is:
$$\alpha_{t} = \frac{\alpha}{r_{t}^{\rho}}$$
wherein $\alpha_{t}$ is the adaptive fusion coefficient at the t-th frame, $\rho$ is the influence factor of the relative confidence, $r_{t}$ is the relative confidence of the correlation filter's detection result at the t-th frame relative to the global history, and $\alpha$ is a constant weighting coefficient.
Preferably, the adaptive fusion coefficient is adopted to fuse the response of the correlation filter and the response of the color classifier, and a specific calculation expression is as follows:
$$response = (1-\alpha_{t})\cdot response\_cf + \alpha_{t}\cdot response\_p$$
wherein $response\_cf$ is the response of the correlation filter, $response\_p$ is the response of the color classifier, $\alpha_{t}$ is the adaptive fusion coefficient at the t-th frame, and $response$ is the final fused response.
Preferably, the influence factor ρ is used to adjust the weights of the correlation filter discrimination result and the color classifier discrimination result.
According to the technical scheme, the embodiment of the invention has the following advantages:
the embodiment of the invention provides a target tracking method based on dual-model adaptive fusion, which enables a relevant filter and a color classifier to be optimally fused through an adaptive fusion coefficient based on relative confidence coefficient, and further fully shows the tracking advantages of respective models.
Drawings
FIG. 1 is a schematic flowchart of a target tracking method based on dual-model adaptive fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram comparing experimental results on the OTB2013 benchmark between the target tracking method based on dual-model adaptive fusion according to an embodiment of the present invention and other target tracking methods;
FIG. 3 is a schematic diagram qualitatively comparing the target tracking method based on dual-model adaptive fusion according to an embodiment of the present invention with the DSST and KCF trackers on different images;
FIG. 4 is another schematic step flowchart of the embodiment shown in FIG. 1.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, an embodiment of the present invention provides a target tracking method based on adaptive fusion of dual models. The method comprises ten steps, and the specific content of each step is as follows.
Step S1: acquiring target initial information from the initial frame. The target initial information comprises the target position, the length of the target, and the width of the target. Step S1 further includes initializing parameters and performing the usual initialization of the regions.
Step S2: color histograms are extracted from the foreground region and the background region, respectively, and a color classifier is solved and trained using a ridge regression equation.
The specific process of extracting the color histogram from the foreground region or from the background region is as follows: divide the color space equally into a number of color intervals, define each color interval as a bin of the histogram, and count the number of pixels of the foreground region or background region falling in each bin. In one embodiment, the bin width of the color histogram is 8.
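As a concrete illustration, the following minimal numpy sketch counts the pixels of an RGB region per bin, with 32 bins of width 8 per channel; the function name and interface are assumptions for illustration, not part of the invention:

```python
import numpy as np

def color_histogram(region, n_bins=32):
    """Count the pixels of an RGB region falling in each color bin.

    Each 8-bit channel is divided equally into n_bins intervals
    (bin width 256 / 32 = 8), so every pixel maps to one (r, g, b)
    bin triple and the histogram has n_bins**3 entries in total.
    """
    region = np.asarray(region, dtype=np.uint32)   # shape (H, W, 3)
    idx = region * n_bins // 256                   # per-channel bin index
    flat = (idx[..., 0] * n_bins + idx[..., 1]) * n_bins + idx[..., 2]
    return np.bincount(flat.ravel(), minlength=n_bins ** 3)
```

One histogram is computed over the foreground region O and another over the background region B; both feed the ridge regression below.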
The response of the color classifier is obtained by solving a ridge regression equation, the specific expression of which is shown in equation 1:
$$\min_{\beta}\ \sum_{t} L_{hist}(\beta;\chi_{t}) + \lambda_{hist}\left\|\beta\right\|^{2}\quad\text{(Equation 1)}$$
wherein $\chi_{t}$ represents the training samples and their corresponding regression values, $\beta$ is the color classifier to be solved, $L_{hist}$ represents the loss function of the classifier, and $\lambda_{hist}$ is a regularization coefficient.
Let $(q, y) \in W$ denote a set of rectangular sample boxes $q$ and their corresponding regression labels $y \in \{0, 1\}$, including the positive sample $(p, 1)$, and let $x$ denote an image. The loss over all sampled images in Equation 1 can then be written as Equation 2:
$$L_{hist}(x,p,\beta) = \sum_{(q,y)\in W}\left(\beta^{T}\left[\sum_{u\in H}\psi_{T(x,q)}[u]\right] - y\right)^{2}\quad\text{(Equation 2)}$$
$\psi_{T(x,q)}$ denotes an M-channel feature transformation. $\beta$ is applied as a linear regression model such that, on each pixel $u$, the regression value is 0 for pixels belonging to the background region B and 1 for pixels belonging to the foreground region O. Abbreviating $\psi_{T(x,q)}$ as $\psi$, the loss function for a single image can be written as Equation 3:
$$L(x,p,\beta) = \frac{1}{|O|}\sum_{u\in O}\left(\beta^{T}\psi[u]-1\right)^{2} + \frac{1}{|B|}\sum_{u\in B}\left(\beta^{T}\psi[u]\right)^{2}\quad\text{(Equation 3)}$$
in the above equation, O denotes a rectangular foreground region immediately surrounding the target, and B denotes a rectangular background region containing the target.
In the embodiment of the present invention, color images use the RGB color space, and therefore RGB color histograms are used as features. The loss function decomposes into a sum over the bins of the histogram; the preferred number of bins per channel M in this embodiment is 32.
$\beta^{T}\psi[u]$ can be computed quickly by constructing a look-up table $k$ that maps a pixel value $u$ to the index of the bin it belongs to, i.e., by back-projecting the color histogram. Letting $\beta^{T}\psi[u]=\beta^{k(u)}$, Equation 4 is obtained:
$$L(x,p,\beta) = \frac{1}{|O|}\sum_{j} N^{j}(O)\left(\beta^{j}-1\right)^{2} + \frac{1}{|B|}\sum_{j} N^{j}(B)\left(\beta^{j}\right)^{2}\quad\text{(Equation 4)}$$
wherein $N^{j}(A)=\left|\{u \in A : k(u)=j\}\right|$ is the number of pixels of region A falling in the j-th bin.
Thus, the solution of the ridge regression problem of Equation 1 is given by Equation 5:
$$\beta^{j} = \frac{\rho^{j}(O)}{\rho^{j}(O)+\rho^{j}(B)+\lambda_{hist}}\quad\text{(Equation 5)}$$
wherein $\rho^{j}(A)=N^{j}(A)/|A|$ denotes the proportion of the pixels of region A that fall in the j-th bin. Applying $\beta$ to each pixel by table look-up and averaging over the candidate box with an integral image yields the response of the color classifier.
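A minimal sketch of the per-bin solution of Equation 5 and of the back-projection look-up follows; the function names are assumptions, and color_histogram refers to the sketch given above:

```python
import numpy as np

def train_color_classifier(hist_fg, hist_bg, n_fg, n_bg, lam=1e-3):
    """Per-bin ridge regression solution beta^j of Equation 5."""
    rho_fg = hist_fg / max(n_fg, 1)   # rho^j(O): fraction of O's pixels in bin j
    rho_bg = hist_bg / max(n_bg, 1)   # rho^j(B): fraction of B's pixels in bin j
    return rho_fg / (rho_fg + rho_bg + lam)

def color_score_map(image, beta, n_bins=32):
    """Back-projection: every pixel receives the score of its own bin."""
    idx = np.asarray(image, dtype=np.uint32) * n_bins // 256
    flat = (idx[..., 0] * n_bins + idx[..., 1]) * n_bins + idx[..., 2]
    return beta[flat]                 # per-pixel foreground likelihood map
```

Averaging this per-pixel map over each candidate box with an integral image then gives the response of the color classifier.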
Step S3: extracting features from the correlation filtering area and training the correlation filter. A sample template x is extracted around the target center, and a large number of training samples x_i are constructed by cyclic shifts of x. Multi-channel histogram of oriented gradients (HOG) features are extracted to train the correlation filter.
The correlation filter can be solved by a ridge regression equation, and for a sample f consisting of d-dimensional features, a d-dimensional correlation filter h can be trained by minimizing equation 6:
$$\varepsilon = \left\|\sum_{l=1}^{d} h^{l} * f^{l} - g\right\|^{2} + \lambda\sum_{l=1}^{d}\left\|h^{l}\right\|^{2}\quad\text{(Equation 6)}$$
wherein $f$ denotes a sample, $d$ is the number of feature channels of $f$, $h$ is the correlation filter, $g$ is the desired output of the correlation filter (a Gaussian function), $*$ denotes convolution, and $\lambda$ is a regularization coefficient used to prevent overfitting.
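For illustration, a small sketch of the desired output g as a 2-D Gaussian; the bandwidth sigma is an assumed parameter, as the text does not specify it:

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Desired correlation output g: a 2-D Gaussian peaked on the target center."""
    ys = np.arange(h).reshape(-1, 1) - h // 2
    xs = np.arange(w).reshape(1, -1) - w // 2
    g = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    # Move the peak to the top-left corner, the usual convention for
    # circular correlation computed with FFTs.
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))
```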
Step S4: initializing the scale filter, and extracting image blocks of different scales to train it. Taking the target position determined in the current frame as the center, a series of image block features at different scales is extracted to construct a feature pyramid. With H × W the target size, S image blocks of size $a^{n}H \times a^{n}W$ are extracted around the target position, where $a$ denotes the scale factor and $n$ is given by Equation 7:
$$n \in \left\{\left\lfloor -\frac{S-1}{2}\right\rfloor, \ldots, \left\lfloor \frac{S-1}{2}\right\rfloor\right\}\quad\text{(Equation 7)}$$
in this embodiment, S is 33. Of course, in other embodiments, the specific value of S may be other numbers.
Step S5: detecting the target with the color classifier to obtain the response of the color classifier.
Minimizing Equation 6 and converting it into the frequency domain yields the filter $H^{l}$, whose expression is given by Equation 8:
$$H^{l} = \frac{\bar{G}F^{l}}{\sum_{k=1}^{d}\bar{F}^{k}F^{k} + \lambda}\quad\text{(Equation 8)}$$
wherein capital letters in Equation 8 denote the corresponding discrete Fourier transforms, and $\bar{F}^{k}$ denotes the complex conjugate of $F^{k}$.
Step S6: detecting the target in the correlation filtering area with the correlation filter to obtain the correlation filter response. The response of the correlation filter is obtained by applying Equation 8 and performing an inverse Fourier transform.
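A minimal numpy sketch of Equation 8 and of the detection step; the function names are assumptions, and the conjugation convention follows one common formulation of the DCF, which may differ in detail from the patent's:

```python
import numpy as np

def train_filter(features, g, lam=1e-2):
    """Closed-form multi-channel filter H^l of Equation 8, in the frequency domain.

    features : training sample of shape (H, W, d)
    g        : desired Gaussian output of shape (H, W)
    """
    F = np.fft.fft2(features, axes=(0, 1))                   # F^l, per channel
    G = np.fft.fft2(g)
    numerator = np.conj(G)[..., None] * F                    # conj(G) * F^l
    denominator = np.sum(np.conj(F) * F, axis=2).real + lam  # sum_k conj(F^k) F^k + lambda
    return numerator / denominator[..., None]

def filter_response(H_f, features):
    """Correlation filter response: apply the filter and invert the DFT."""
    Z = np.fft.fft2(features, axes=(0, 1))
    return np.real(np.fft.ifft2(np.sum(np.conj(H_f) * Z, axis=2)))
```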
Step S7: calculating a relative confidence according to the correlation filter response, calculating an adaptive fusion coefficient based on the relative confidence, and fusing the response of the correlation filter and the response of the color classifier with the adaptive fusion coefficient to obtain the position of the detected target.
To combine the two tracking models so that their advantages complement each other, the Staple target tracking method merges the response response_cf of the correlation filter and the response response_p of the color classifier by a weighted average with a constant coefficient $\alpha$, as shown in Equation 9:
$$response = (1-\alpha)\cdot response\_cf + \alpha\cdot response\_p\quad\text{(Equation 9)}$$
Although this weighted merging effectively fuses the two complementary models, a single fixed fusion coefficient prevents the correlation filter and the color classifier from being combined optimally.
The embodiment of the invention adopts the average correlation peak energy (APCE) to adapt the fusion coefficient. APCE is an index for evaluating the confidence of the detection result of a correlation filter: the larger the APCE, the higher the confidence of the detection result. The APCE of the t-th frame response $y_{t}$ is given by Equation 10:
$$APCE_{t} = \frac{\left|y_{\max}-y_{\min}\right|^{2}}{\operatorname{mean}\left(\sum_{w,h}\left(y_{w,h}-y_{\min}\right)^{2}\right)}\quad\text{(Equation 10)}$$
wherein $y_{\max}$ and $y_{\min}$ denote the maximum and the minimum of the response $y_{t}$, and mean denotes the mean value over all elements $y_{w,h}$ of the response.
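A direct sketch of Equation 10:

```python
import numpy as np

def apce(y):
    """Average correlation peak energy of a response map y (Equation 10)."""
    y_max, y_min = float(y.max()), float(y.min())
    return (y_max - y_min) ** 2 / np.mean((y - y_min) ** 2)
```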
The expression of the relative confidence of the detection result of the correlation filter with respect to the global at the t-th frame is shown in equation 11:
$$r_{t} = \frac{APCE_{t}}{\frac{1}{t}\sum_{i=1}^{t} APCE_{i}}\quad\text{(Equation 11)}$$
wherein $r_{t}$ denotes the relative confidence of the correlation filter's detection result at the t-th frame relative to the global history of frames.
Therefore, the constant coefficient $\alpha$ in Equation 9 can be replaced by an adaptive fusion coefficient $\alpha_{t}$, shown in Equation 12:
$$\alpha_{t} = \frac{\alpha}{r_{t}^{\rho}}\quad\text{(Equation 12)}$$
wherein $\alpha_{t}$ is the adaptive fusion coefficient at the t-th frame, $\rho$ is the influence factor of the relative confidence, $r_{t}$ is the relative confidence of the correlation filter's detection result at the t-th frame relative to the global history, and $\alpha$ is a constant weighting coefficient. The influence factor $\rho$ adjusts the relative weight of the correlation filter's and the color classifier's discrimination results: when the relative confidence of the correlation filter's detection result is larger than 1, the correlation filter's result is the more credible; otherwise, the color classifier's result is the more credible.
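The following sketch puts Equations 10 to 12 together; alpha = 0.3 and rho = 1.0 are assumed values, and the power form used for alpha_t is the reconstruction of Equation 12 adopted above, not a formula confirmed by the source:

```python
import numpy as np

def apce(y):
    """Average correlation peak energy (Equation 10)."""
    y_max, y_min = float(y.max()), float(y.min())
    return (y_max - y_min) ** 2 / np.mean((y - y_min) ** 2)

def fused_response(response_cf, response_p, apce_history, alpha=0.3, rho=1.0):
    """Fuse the two responses with the APCE-driven adaptive coefficient."""
    apce_history.append(apce(response_cf))
    r_t = apce_history[-1] / np.mean(apce_history)  # relative confidence, Equation 11
    alpha_t = min(1.0, alpha / (r_t ** rho))        # assumed form of Equation 12
    return (1.0 - alpha_t) * response_cf + alpha_t * response_p
```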
Step S8: extracting features of the target and updating the correlation filter and color classifier.
Step S9: detecting the scale change, and updating the target, the foreground region, the background region and the scale filter.
At the new position, 33 image blocks of different scales are extracted and resized to the same size, and candidate scale images are generated through cyclic shift. The scale correlation filter is applied to the candidate scale images, and the scale with the maximum response is selected as the new scale. The target, the foreground region and the background region are then updated.
Step S10: repeating steps S5 to S9 until the video ends.
The embodiment of the invention provides a target tracking method based on dual-model adaptive fusion, which fuses the correlation filter and the color classifier optimally through an adaptive fusion coefficient based on relative confidence, thereby fully exhibiting the tracking advantages of each model.
The target tracking method based on dual-model adaptive fusion provided by the embodiment of the invention, implemented in Matlab R2016a on a computer with an Intel i7-4710HQ 2.5 GHz processor and 8 GB of memory, runs at up to 28 frames per second.
As shown in fig. 2, the embodiment of the present invention provides a schematic comparison of experimental results on OTB2013 between the target tracking method based on dual-model adaptive fusion and other target tracking methods. As the comparison in fig. 2 shows, the proposed method (labeled "Our algorithm" in the figure) improves accuracy and success rate by 1.9% and 2% respectively over the original Staple algorithm. As shown in fig. 3, the embodiment of the present invention provides a qualitative comparison between the target tracking method based on dual-model adaptive fusion and the DSST and KCF trackers on different images. As can be seen from fig. 3, the target tracking method based on dual-model adaptive fusion provided by the embodiment of the invention tracks the target more accurately.
Fig. 4 illustrates another step flow of the embodiment shown in fig. 1. After tracking starts, initialization is performed; after initialization, three models are trained separately: the scale filter, the color classifier and the correlation filter. The trained correlation filter is used to detect the target and obtain the correlation filter response; the trained color classifier is used to detect the target and obtain the classifier response. The adaptive fusion coefficient is then calculated, and the correlation filter response and the classifier response are fused with it to obtain the fused target position. The trained scale filter detects the scale change, after which the series of models is updated. Finally, it is judged whether the video has ended; if not, tracking continues, otherwise it finishes.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A target tracking method based on dual-model adaptive fusion is characterized by comprising the following steps:
step S1: acquiring target initial information according to the initial frame;
step S2: extracting color histograms from the foreground region and the background region respectively, and solving and training a color classifier by using a ridge regression equation;
step S3: extracting features from the correlation filtering area and training a correlation filter;
step S4: initializing a scale filter, and extracting image blocks with different scales to train the scale filter;
step S5: detecting a target by using the color classifier to obtain the response of the color classifier;
step S6: detecting the target in the correlation filtering area by using the correlation filter to obtain the correlation filter response;
step S7: calculating a relative confidence according to the correlation filter response, calculating an adaptive fusion coefficient based on the relative confidence, and fusing the response of the correlation filter and the response of the color classifier by means of the adaptive fusion coefficient to obtain the position of the detected target;
step S8: extracting the features of the target and updating the correlation filter and the color classifier;
step S9: detecting scale change, and updating the target, the foreground area, the background area and the scale filter;
step S10: and repeating the steps S5 to S9 until the video is finished.
2. The method of claim 1, wherein the target initial information includes a target position, a target length, and a target width.
3. The target tracking method based on dual-model adaptive fusion according to claim 1, wherein the process of extracting the color histogram in step S2 is as follows: dividing the color space equally into a number of color intervals, defining each color interval as a bin of the histogram, and counting the number of pixels of the foreground region or the background region falling in each bin.
4. The target tracking method based on dual-model adaptive fusion according to claim 3, wherein the bin width of the color histogram is 8.
5. The target tracking method based on dual-model adaptive fusion according to claim 1, wherein the expression of the ridge regression equation is:
$$\min_{\beta}\ \sum_{t} L_{hist}(\beta;\chi_{t}) + \lambda_{hist}\left\|\beta\right\|^{2}$$
wherein $\chi_{t}$ represents the training samples and their corresponding regression values, $\beta$ is the color classifier to be solved, $L_{hist}$ represents the loss function of the classifier, and $\lambda_{hist}$ is a regularization coefficient.
6. The target tracking method based on dual-model adaptive fusion according to claim 1, wherein the correlation filter is trained by minimizing the following equation:
$$\varepsilon = \left\|\sum_{l=1}^{d} h^{l} * f^{l} - g\right\|^{2} + \lambda\sum_{l=1}^{d}\left\|h^{l}\right\|^{2}$$
wherein $f$ denotes a sample, $d$ is the number of feature channels of $f$, $h$ is the correlation filter, $g$ is the desired output of the correlation filter (a Gaussian function), $*$ denotes convolution, and $\lambda$ is a regularization coefficient.
7. The target tracking method based on dual-model adaptive fusion according to claim 1, wherein the expression of the relative confidence is:
$$r_{t} = \frac{APCE_{t}}{\frac{1}{t}\sum_{i=1}^{t} APCE_{i}}$$
wherein $r_{t}$ is the relative confidence of the correlation filter's detection result at the t-th frame relative to the global history, and $APCE_{t}$ is the average correlation peak energy of the t-th frame response $y_{t}$, computed as:
$$APCE_{t} = \frac{\left|y_{\max}-y_{\min}\right|^{2}}{\operatorname{mean}\left(\sum_{w,h}\left(y_{w,h}-y_{\min}\right)^{2}\right)}$$
8. The target tracking method based on dual-model adaptive fusion according to claim 7, wherein the expression of the adaptive fusion coefficient is:
$$\alpha_{t} = \frac{\alpha}{r_{t}^{\rho}}$$
wherein $\alpha_{t}$ is the adaptive fusion coefficient at the t-th frame, $\rho$ is the influence factor of the relative confidence, $r_{t}$ is the relative confidence of the correlation filter's detection result at the t-th frame relative to the global history, and $\alpha$ is a constant weighting coefficient.
9. The target tracking method based on dual-model adaptive fusion according to claim 8, wherein the adaptive fusion coefficient is used to fuse the response of the correlation filter and the response of the color classifier, with the specific calculation expression:
$$response = (1-\alpha_{t})\cdot response\_cf + \alpha_{t}\cdot response\_p$$
wherein $response\_cf$ is the response of the correlation filter, $response\_p$ is the response of the color classifier, $\alpha_{t}$ is the adaptive fusion coefficient at the t-th frame, and $response$ is the final fused response.
10. The target tracking method based on dual-model adaptive fusion according to claim 8, wherein the influence factor $\rho$ is used to adjust the weights of the correlation filter's discrimination result and the color classifier's discrimination result.
CN201811259843.8A 2018-10-26 2018-10-26 Target tracking method based on adaptive fusion of double models Pending CN111104948A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811259843.8A CN111104948A (en) 2018-10-26 2018-10-26 Target tracking method based on adaptive fusion of double models

Publications (1)

Publication Number Publication Date
CN111104948A true CN111104948A (en) 2020-05-05

Family

ID=70419143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811259843.8A Pending CN111104948A (en) 2018-10-26 2018-10-26 Target tracking method based on adaptive fusion of double models

Country Status (1)

Country Link
CN (1) CN111104948A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888586A (en) * 2021-09-01 2022-01-04 河北汉光重工有限责任公司 Target tracking method and device based on correlation filtering
CN113888586B (en) * 2021-09-01 2024-10-29 河北汉光重工有限责任公司 Target tracking method and device based on correlation filtering

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030200065A1 (en) * 2001-04-20 2003-10-23 Li Luo Wen Maneuvering target tracking method via modifying the interacting multiple model (IMM) and the interacting acceleration compensation (IAC) algorithms
US20130051613A1 (en) * 2011-08-29 2013-02-28 International Business Machines Corporation Modeling of temporarily static objects in surveillance video data
US20130084006A1 (en) * 2011-09-29 2013-04-04 Mediatek Singapore Pte. Ltd. Method and Apparatus for Foreground Object Detection
CN103116896A (en) * 2013-03-07 2013-05-22 中国科学院光电技术研究所 Automatic detection tracking method based on visual saliency model
US20130156299A1 (en) * 2011-12-17 2013-06-20 Motorola Solutions, Inc. Method and apparatus for detecting people within video frames based upon multiple colors within their clothing
CN103186230A (en) * 2011-12-30 2013-07-03 北京朝歌数码科技股份有限公司 Man-machine interaction method based on color identification and tracking
CN104833357A (en) * 2015-04-16 2015-08-12 中国科学院光电研究院 Multisystem multi-model mixing interactive information fusion positioning method
CN108646725A (en) * 2018-07-31 2018-10-12 河北工业大学 Dual model method for diagnosing faults based on dynamic weighting


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
熊昌镇 et al., "Robust dual-model adaptive switching real-time tracking algorithm", Acta Optica Sinica (《光学学报》) *
王艳川 et al., "Adaptive target tracking algorithm based on dual-model fusion", Application Research of Computers (《计算机应用研究》) *


Similar Documents

Publication Publication Date Title
CN111723860B (en) Target detection method and device
CN110363182B (en) Deep learning-based lane line detection method
CN108986140B (en) Target scale self-adaptive tracking method based on correlation filtering and color detection
EP3819859B1 (en) Sky filter method for panoramic images and portable terminal
CN112966691B (en) Multi-scale text detection method and device based on semantic segmentation and electronic equipment
CN104915972A (en) Image processing apparatus, image processing method and program
CN107767405A (en) A kind of nuclear phase for merging convolutional neural networks closes filtered target tracking
CN108960260B (en) Classification model generation method, medical image classification method and medical image classification device
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN111260738A (en) Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion
CN105608456A (en) Multi-directional text detection method based on full convolution network
CN112395442B (en) Automatic identification and content filtering method for popular pictures on mobile internet
CN110472577B (en) Long-term video tracking method based on adaptive correlation filtering
CN103136504A (en) Face recognition method and device
CN108596951A (en) A kind of method for tracking target of fusion feature
CN110334703B (en) Ship detection and identification method in day and night image
EP2613294A1 (en) System and method for synthesizing portrait sketch from photo
CN107169994A (en) Correlation filtering tracking based on multi-feature fusion
CN106780727B (en) Vehicle head detection model reconstruction method and device
CN107944403A (en) Pedestrian's attribute detection method and device in a kind of image
CN112785622B (en) Method and device for tracking unmanned captain on water surface and storage medium
CN110555870A (en) DCF tracking confidence evaluation and classifier updating method based on neural network
CN109002463A (en) A kind of Method for text detection based on depth measure model
CN106157330A (en) A kind of visual tracking method based on target associating display model
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200505