CN112233143B - Target tracking method, device and computer readable storage medium - Google Patents
Target tracking method, device and computer readable storage medium
- Publication number
- CN112233143B (application CN202011468540.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- tracked
- frame image
- final position
- current frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/20: Image analysis; Analysis of motion
- G06T7/262: Analysis of motion using transform domain methods, e.g. Fourier domain methods
- G06T7/70: Determining position or orientation of objects or cameras
- G06T2207/10016: Video; Image sequence
- G06T2207/20024: Filtering details
- G06T2207/20081: Training; Learning
Abstract
The application discloses a target tracking method, a target tracking device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring a current frame image; obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image; processing the predicted position by using a correlation filtering template to obtain the final position of the target to be tracked in the current frame image; updating parameters of the correlation filtering template based on the final position of the target to be tracked in the current frame image, the final position of the target to be tracked in the previous frame image, and the similarity between the two final positions; and taking the next frame image as the current frame image, then obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image and performing the subsequent steps again. In this way, the stability of updating the parameters of the correlation filtering template can be enhanced.
Description
Technical Field
The present application relates to the field of target tracking technologies, and in particular, to a target tracking method and apparatus, and a computer-readable storage medium.
Background
Target tracking technology has long been a research hotspot in computer vision and is widely applied to tasks such as video structuring and human-computer interaction. The target tracking task is to give the initial-frame position of a target O and then determine the position of the target O in subsequent frames through a model. A typical correlation filtering-based target tracking procedure is: calculating a filtering template according to the position of the target to be tracked in the initial frame image; after the position of the target to be tracked in the next frame image is predicted, updating the filtering template according to that position; and continuously using the filtering template for target tracking in subsequent frames.
Disclosure of Invention
The application mainly provides a target tracking method, a target tracking device and a computer readable storage medium, which can solve the problem in the prior art that the updating of correlation filtering template parameters is unstable.
In order to solve the above technical problem, a first aspect of the present application provides a target tracking method, which comprises the following steps: acquiring a current frame image; obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image; processing the predicted position by using the correlation filtering template to obtain the final position of the target to be tracked in the current frame image; updating parameters of the correlation filtering template based on the final position of the target to be tracked in the current frame image, the final position of the target to be tracked in the previous frame image, and the similarity between the two final positions; and taking the next frame image as the current frame image, obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image, and executing the subsequent steps again.
In order to solve the above technical problem, a second aspect of the present application provides a target tracking apparatus, including a processor and a memory coupled to each other, where the memory stores a computer program, and the processor is configured to execute the computer program to implement the target tracking method provided by the first aspect.
In order to solve the above technical problem, a third aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the target tracking method provided by the first aspect.
The beneficial effects of this application are: different from the prior art, the predicted position of the target to be tracked in the current frame image is obtained according to the final position of the target to be tracked in the previous frame image, the predicted position is processed by using the correlation filtering template to obtain the final position of the target to be tracked in the current frame image, and the parameters of the correlation filtering template are then updated based on the final position of the target to be tracked in the current frame image, the final position of the target to be tracked in the previous frame image, and the similarity between the two final positions, so that the position of the target to be tracked is predicted frame by frame and target tracking is achieved. In the updating process of the correlation filtering template, by continuously learning the similarity between adjacent frames, the correlation filtering template's perception of position changes of the target to be tracked in adjacent frame images is enhanced, and the stability of updating the correlation filtering template is thereby enhanced.
Drawings
FIG. 1 is a schematic block flow diagram of an embodiment of a target tracking method of the present application;
FIG. 2 is a schematic diagram of a detection frame in an image to be tracked according to the present application;
FIG. 3 is a schematic block flow diagram of one embodiment of a process for building an optimized parameter update model according to the present application;
FIG. 4 is a schematic block flow diagram illustrating one embodiment of updating correlation filtering template parameters according to the present application;
FIG. 5 is a schematic block diagram of a circuit structure of an embodiment of the target tracking device of the present application;
FIG. 6 is a schematic block diagram of a circuit structure of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first" and "second" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features shown. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to FIG. 1, FIG. 1 is a schematic block flow diagram of an embodiment of the target tracking method of the present application. It should be noted that this embodiment is not limited to the flow sequence shown in FIG. 1 if substantially the same result is achieved. The target tracking method of this embodiment comprises the following steps:
S11: acquiring a current frame image.
The current frame image may be a frame image obtained from an image sequence, where the image sequence may be a complete video stream sequence captured by the camera device, or may be a part of a video stream sequence cut from the complete video stream sequence.
Optionally, the image sequence includes a first frame image and several frames of images to be tracked located after the first frame image. Specifically, each frame image in the image sequence has a corresponding frame number, so the images in the image sequence can be divided, based on the frame numbers, into the first frame image (the one with the smallest frame number) and several frames of images to be tracked after it. The position of the target in the first frame image of the image sequence is known (either specified manually or calculated by a related algorithm), while the position of the target in every other frame image is unknown.
S12: obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image.
The target to be tracked may be simply referred to as the above-mentioned target, which may be a vehicle, a pedestrian, or various signboards, warning boards, or the like. The target to be tracked is a target in the first frame image, and therefore the target tracking method provided by the application is essentially used for tracking the position of the target appearing in the first frame image in each image to be tracked. The number of the objects in the first frame image may be one or more.
In the present application, the position of the target to be tracked is also referred to as a detection frame. Referring to FIG. 2, frame 1 in FIG. 2 is the detection frame of the target to be tracked. In the tracking process, each target has a corresponding detection frame.
The previous frame image is the previous frame image of the current frame image in the image sequence. The previous frame image of the current frame image may or may not be the first frame image of the image sequence. In the target tracking process, the positions of the targets in the images to be tracked of the image sequence are sequentially tracked according to the frame number. When the current frame image is the second frame image, the previous frame image is the first frame image; when the current frame image is an image to be tracked of other frames, the previous frame image is not the first frame image.
Since the position of the object in the first frame image is known, when the previous frame image is the first frame image in the image sequence, the final position of the object in the previous frame image refers to the known position of the object in the first frame image.
When the previous frame image is not the first frame image in the image sequence, the final position of the target in the previous frame image is obtained by performing correlation filtering calculation on a plurality of predicted positions of the target to be tracked in the previous frame image, where the predicted positions in the previous frame image are acquired in the same way as those in the current frame image.
S13: processing the predicted position by using a correlation filtering template to obtain the final position of the target to be tracked in the current frame image.
In an embodiment, in step S12, a plurality of predicted positions of the target to be tracked in the current frame image are obtained from the final position of the target to be tracked in the previous frame image. To determine which of the plurality of predicted positions is the final position of the target to be tracked in the current frame image, each predicted position is input into the correlation filtering template for calculation, so as to obtain the final position of the target to be tracked in the current frame image.
Specifically, the correlation filtering template is used to fit each predicted position respectively, and the predicted position with the best fitting result is taken as the final position of the target to be tracked in the current frame image.
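By way of illustration only, the following is a minimal numpy sketch of this fitting step, interpreting "best fitting result" as the highest filter response; it assumes single-channel patch features, a learned frequency-domain template (x_hat_model, z_hat) and candidate boxes that share one size, and the helper names and the grayscale feature extractor are hypothetical simplifications, not the implementation of the present application.

```python
import numpy as np

def extract_patch_features(frame: np.ndarray, box) -> np.ndarray:
    """Hypothetical feature extractor: crop the box and mean-normalize.
    (The description mentions e.g. HOG features; a grayscale crop is
    used here only to keep the sketch self-contained.)"""
    x0, y0, w, h = box
    patch = frame[y0:y0 + h, x0:x0 + w].astype(np.float64)
    return patch - patch.mean()

def best_candidate(frame, candidates, z_hat, x_hat_model):
    """Pick the candidate box with the highest correlation filter response.

    z_hat       : FFT of the dual variable z (the template parameters).
    x_hat_model : FFT of the template's base-sample features.
    All candidate boxes are assumed to share one size so the FFTs align.
    """
    best_box, best_score = None, -np.inf
    for box in candidates:
        f_hat = np.fft.fft2(extract_patch_features(frame, box))
        # Linear-kernel dual-form response: ifft(conj(x_model) * f_patch * z)
        response = np.real(np.fft.ifft2(np.conj(x_hat_model) * f_hat * z_hat))
        if response.max() > best_score:
            best_box, best_score = box, response.max()
    return best_box
```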
S14: updating parameters of the correlation filtering template based on the final position of the target to be tracked in the current frame image, the final position of the target to be tracked in the previous frame image, and the similarity between the final position of the target to be tracked in the current frame image and the final position of the target to be tracked in the previous frame image.
Because the current frame image and the previous frame image are two adjacent frames in the image sequence, the final position determined for the target to be tracked in the current frame image deviates only slightly from the final position of the target to be tracked in the previous frame image, and the appearance of the target at the final position in the current frame image is highly similar to its appearance at the final position in the previous frame image. Therefore, the appearance similarity of the target to be tracked in every two adjacent frame images can be applied to the updating process of the correlation filtering template parameters, so as to enhance the stability of updating the parameters of the correlation filtering template.
After this step is executed, the method may jump back to step S11, so as to obtain the next frame image of the current frame image from the image sequence as the new current frame image, and re-execute the step of obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image, together with the subsequent steps, thereby completing the prediction of the target position in each frame image.
Through the above embodiment, the target position in the previous frame image can be used to predict the position of the target to be tracked in the current frame image, and the predicted positions are processed by the correlation filtering template to obtain a more accurate final position as the position of the target to be tracked, so that the position of the target to be tracked is predicted frame by frame and target tracking is achieved. In addition, each time the tracking of the target to be tracked in the current frame image is finished, online learning of the correlation filtering template is performed based on the image features of the final positions of the target to be tracked in the previous frame image and the current frame image, so as to update the parameters of the correlation filtering template; the learning of the similarity between adjacent frame images is thereby enhanced, making the tracking of the kernel correlation filtering model in the next frame more effective.
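Putting steps S11 to S14 together, the frame-by-frame loop can be sketched as follows. The helpers init_template, propose_candidates and update_template are hypothetical placeholders for the procedures described in this application (template initialization from the known first-frame position, sampling predicted positions around the previous final position, and the parameter update of S14); best_candidate is the response-scoring sketch given above.

```python
# A schematic outer loop for steps S11-S14; the helper functions named
# here are hypothetical placeholders, not functions defined by this application.
def track(image_sequence, init_box):
    template = init_template(image_sequence[0], init_box)  # known first-frame position
    prev_box = init_box
    boxes = [init_box]
    for frame in image_sequence[1:]:                       # S11: current frame image
        candidates = propose_candidates(prev_box)          # S12: predicted positions
        box = best_candidate(frame, candidates,
                             template.z_hat, template.x_hat)         # S13: filter response
        template = update_template(template, frame, box, prev_box)   # S14: update parameters
        prev_box = box
        boxes.append(box)
    return boxes
```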
The parameter updating of the correlation filtering template of the present application depends on the following optimized parameter updating model:

$$\min_{z_1,z_2}\; \sum_{k=1}^{2}\Big(\big\|y - C(x)_k C(x)_k^{T} z_k\big\|^2 + \lambda_1\, z_k^{T} C(x)_k C(x)_k^{T} z_k\Big) + \lambda_2 \big\|z_2 - z_1\big\|_1 \qquad (1)$$

wherein λ1 and λ2 are constant parameters, y is the Gaussian true value, z2 is the parameter of the correlation filtering template corresponding to the current frame image, z1 is the parameter of the correlation filtering template corresponding to the previous frame image, C(x)2 is the cyclic shift conversion result of the final-position image features of the target to be tracked in the current frame, C(x)1 is the cyclic shift conversion result of the final-position image features of the target to be tracked in the previous frame, A^T represents the transpose of a matrix A, and ||A - B||_1 represents computing the L1 norm of the difference between A and B.

z2 obtained by solving the optimized parameter updating model is taken as the updated parameters of the correlation filtering template.
Referring to FIG. 3, FIG. 3 is a schematic block flow diagram of one embodiment of a process for establishing the optimized parameter updating model according to the present application. It should be noted that this embodiment is not limited to the flow sequence shown in FIG. 3 if substantially the same result is achieved. The embodiment comprises the following steps:

S141: constructing a parameter updating model by using the final position of the target to be tracked in the current frame and the final position of the target to be tracked in the previous frame.

The ridge regression modeling process is as follows. A ridge regression model is trained by solving

$$\min_{w}\; \sum_i \big(w^{T} x_i - y_i\big)^2 + \lambda \|w\|^2 \qquad (2)$$

The first term in the formula brings the regression output closer to the true value y_i, and the second term is a regularization term that prevents overfitting of the model. Equation (2) is a convex optimization problem with the closed solution

$$w = \big(X^{T}X + \lambda I\big)^{-1} X^{T} y$$

wherein X is the matrix generated by cyclic shifting of a base sample x, X^T is the transpose of the matrix X, and I is the identity matrix. When the modeling range is extended to the complex domain, the closed solution can be converted to equation (3):

$$w = \big(X^{H}X + \lambda I\big)^{-1} X^{H} y \qquad (3)$$

wherein X^H = (X^*)^T represents the conjugate transpose of the matrix X and X^* represents the conjugate matrix of X. The circulant matrix can be converted to diagonal form by Fourier transform, as shown in equation (4):

$$X = F\, \mathrm{diag}(\hat{x})\, F^{H} \qquad (4)$$

wherein x̂ denotes x converted into the Fourier domain, x̂ = √n F x, F is a constant matrix independent of the training samples, n represents the length of the base sample, and diag denotes diagonalization of a vector into a matrix.

To facilitate applying the kernel function to the cyclic shift matrix of base samples, equation (2) is generally transformed into its dual form, as shown in equation (5):

$$\min_{z}\; \big\|C(x)C(x)^{T} z - y\big\|^2 + \lambda\, z^{T} C(x)\, C(x)^{T} z \qquad (5)$$

wherein z is the dual variable, which is also the parameter of the correlation filtering template of the present application; z and w can be related by equation (6):

$$w = C(x)^{T} z \qquad (6)$$

C(x) is the circulant matrix generated from the base sample x, i.e., X = C(x). Combining equations (3), (4) and (6), the dual variable can be represented in the Fourier domain as equation (7):

$$\hat{z} = \frac{\hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda} \qquad (7)$$
In this embodiment, the image feature of the final position of the target to be tracked in the current frame image and the image feature of the final position of the target to be tracked in the previous frame image are respectively used as the base samples x, the ridge regression model of the current frame image is established by using the final position of the target to be tracked in the current frame image, the ridge regression model of the previous frame image is established by using the final position of the target to be tracked in the previous frame image, and then the ridge regression model of the current frame image and the ridge regression model of the previous frame image are subjected to dual transformation to obtain the dual variable of the current frame image and the dual variable of the previous frame image.
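For illustration, a minimal numpy sketch of equation (7) follows: it computes the Fourier-domain dual variable of one frame from the base-sample features x and the Gaussian regression target y. The regularization constant lam and the function name are illustrative assumptions.

```python
import numpy as np

def dual_variable(x: np.ndarray, y: np.ndarray, lam: float = 1e-4) -> np.ndarray:
    """Equation (7): z_hat = y_hat / (conj(x_hat) * x_hat + lam),
    evaluated elementwise in the Fourier domain."""
    x_hat = np.fft.fft2(x)
    y_hat = np.fft.fft2(y)
    return y_hat / (np.conj(x_hat) * x_hat + lam)

# e.g. z1 from the previous frame's final-position features and z2 from
# the current frame's: z1_hat = dual_variable(x_prev, y),
# z2_hat = dual_variable(x_curr, y)
```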
Constructing a model as shown in equation (8):

$$\min_{z_1,z_2}\; \sum_{k=1}^{2}\Big(\big\|y - C(x)_k C(x)_k^{T} z_k\big\|^2 + \lambda_1\, z_k^{T} C(x)_k C(x)_k^{T} z_k\Big) \qquad (8)$$

The model shown in equation (8) is the parameter updating model. The definitions of the parameters are the same as explained for equation (1) and are not repeated here.

S142: performing sparse constraint on the parameter updating model by using the similarity measurement between the final position of the target to be tracked in the current frame image and the final position of the target to be tracked in the previous frame image, so as to obtain the optimized parameter updating model.

Specifically, the L1 norm of the difference between the parameters of the correlation filtering template corresponding to the current frame image and the parameters of the correlation filtering template corresponding to the previous frame image is taken as the similarity measurement, and sparse constraint is performed on the parameter updating model. That is, ||z2 - z1||_1 is used as the similarity measurement index to impose a sparsity constraint on equation (8), which yields the optimized parameter updating model shown in equation (1).

S143: updating the parameters of the correlation filtering template based on the optimized parameter updating model.

The optimized parameter updating model is solved for z2, and z2 is taken as the parameters of the correlation filtering template, thereby obtaining the updated parameters of the correlation filtering template.
For the solution of the optimized parameter updating model, two auxiliary variables P and q can be introduced to convert equation (1) into the optimization problem of equation (9):

$$\min_{Z,\,P,\,q_1,q_2}\; \sum_{k=1}^{2}\Big(\big\|y - G_k q_k\big\|^2 + \lambda_1\, q_k^{T} G_k q_k\Big) + \lambda_2 \|P\|_1 \quad \text{s.t.}\;\; q_k = z_k,\;\; P = RZ \qquad (9)$$

wherein G_k = C(x)_k C(x)_k^T, R = [-I; I], Z = [z1; z2]. Equation (9) can be optimized with the ADMM (Alternating Direction Method of Multipliers), a fast first-order method. Incorporating the equality constraints into the objective by introducing Lagrange multipliers results in the augmented Lagrangian of equation (10):

$$L = \sum_{k=1}^{2}\Big(\big\|y - G_k q_k\big\|^2 + \lambda_1\, q_k^{T} G_k q_k\Big) + \lambda_2\|P\|_1 + \sum_{k=1}^{2}\big\langle Y_{1,k},\, q_k - z_k\big\rangle + \big\langle Y_2,\, P - RZ\big\rangle + \frac{\mu}{2}\sum_{k=1}^{2}\big\|q_k - z_k\big\|^2 + \frac{\mu}{2}\big\|P - RZ\big\|^2 \qquad (10)$$

wherein <A,B> = Tr(A^T B) represents the matrix inner product, Y_{1,k} and Y_2 represent the Lagrange multipliers, and μ is the penalty parameter. The variables in equation (10) may be optimized alternately.

Alternating optimization of q, P and z is performed using equations (11), (12) and (13):

$$q_k \leftarrow \arg\min_{q_k}\; \big\|y - G_k q_k\big\|^2 + \lambda_1\, q_k^{T} G_k q_k + \frac{\mu}{2}\Big\|q_k - z_k + \frac{Y_{1,k}}{\mu}\Big\|^2 \qquad (11)$$

$$P \leftarrow \arg\min_{P}\; \lambda_2 \|P\|_1 + \frac{\mu}{2}\Big\|P - RZ + \frac{Y_2}{\mu}\Big\|^2 \qquad (12)$$

$$Z \leftarrow \arg\min_{Z}\; \frac{\mu}{2}\sum_{k=1}^{2}\Big\|q_k - z_k + \frac{Y_{1,k}}{\mu}\Big\|^2 + \frac{\mu}{2}\Big\|P - RZ + \frac{Y_2}{\mu}\Big\|^2 \qquad (13)$$

wherein Q = [q1; q2].
The optimization solution is carried out cyclically until the objective converges, and the solved q2 is taken as the new parameters of the correlation filtering template.

Since each G_k is circulant and hence diagonalized by the Fourier transform, q_k in equation (11) can further be solved elementwise in the Fourier domain through equation (14); equation (12) is further represented by the soft-thresholding operation of equation (15), and equation (13) by the closed form of equation (16):

$$\hat{q}_k = \frac{2\, d_k \odot \hat{y} + \mu\, \hat{z}_k - \hat{Y}_{1,k}}{2\, d_k \odot d_k + 2\lambda_1 d_k + \mu} \qquad (14)$$

$$P = \operatorname{sign}\!\Big(RZ - \frac{Y_2}{\mu}\Big) \odot \max\!\Big(\Big|RZ - \frac{Y_2}{\mu}\Big| - \frac{\lambda_2}{\mu},\; 0\Big) \qquad (15)$$

$$Z = \big(I + R^{T}R\big)^{-1}\Big(Q + \frac{Y_1}{\mu} + R^{T}\Big(P + \frac{Y_2}{\mu}\Big)\Big) \qquad (16)$$

wherein d_k = x̂_k* ⊙ x̂_k denotes the vector of Fourier-domain eigenvalues of G_k, ẑ_k and Ŷ_{1,k} denote Fourier-domain quantities, and Y_1 = [Y_{1,1}; Y_{1,2}]. Equations (14), (15) and (16) are solved cyclically, and the obtained q2 is taken as the parameters of the correlation filtering template.
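The alternating updates can be sketched in numpy as follows, working elementwise in the Fourier domain where each G_k is diagonal with entries d_k. This is a schematic sketch under the reconstructed equations (14) to (16) above; in particular, the complex soft-thresholding used for the P-step and the fixed penalty mu are simplifying assumptions, not the verbatim solver of the present application.

```python
import numpy as np

def soft_threshold(v, tau):
    """Complex soft-thresholding, the proximal operator of tau * ||.||_1
    applied to magnitudes (cf. equation (15)); an assumed simplification."""
    mag = np.maximum(np.abs(v), 1e-12)
    return v * np.maximum(1.0 - tau / mag, 0.0)

def admm_solve(x1_hat, x2_hat, y_hat, lam1, lam2, mu=1.0, iters=20):
    """Alternating q-, P-, Z- and multiplier updates for equation (1).

    x1_hat, x2_hat : FFTs of previous/current final-position features.
    y_hat          : FFT of the Gaussian regression target.
    """
    d = [np.conj(x1_hat) * x1_hat, np.conj(x2_hat) * x2_hat]  # eigenvalues of G_k
    z = [np.zeros_like(y_hat), np.zeros_like(y_hat)]
    Y1 = [np.zeros_like(y_hat), np.zeros_like(y_hat)]
    P = np.zeros_like(y_hat)
    Y2 = np.zeros_like(y_hat)
    for _ in range(iters):
        # q-step, cf. equation (14)
        q = [(2 * d[k] * y_hat + mu * z[k] - Y1[k])
             / (2 * d[k] ** 2 + 2 * lam1 * d[k] + mu) for k in range(2)]
        # P-step, cf. equation (15); here R*Z = z2 - z1
        P = soft_threshold(z[1] - z[0] - Y2 / mu, lam2 / mu)
        # Z-step, cf. equation (16), solved elementwise
        a = [q[k] + Y1[k] / mu for k in range(2)]
        t = (a[1] - a[0] - (P + Y2 / mu)) / 3.0
        z = [a[0] + t, a[1] - t]
        # gradient ascent on the Lagrange multipliers
        Y1 = [Y1[k] + mu * (q[k] - z[k]) for k in range(2)]
        Y2 = Y2 + mu * (P - (z[1] - z[0]))
    return q[1]  # q2 is taken as the new template parameters (see above)
```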
Referring to FIG. 4, in the present embodiment, when the parameters of the correlation filtering template are solved by using equations (14), (15) and (16), the following steps are performed:

S21: performing Fourier transform on the final position of the target to be tracked in the current frame image and on the final position of the target to be tracked in the previous frame image, respectively, to obtain a Fourier transform result.

Here, the final positions of the target to be tracked in the current frame image and in the previous frame image refer to the image features obtained by performing feature extraction on the detection frame at the final position of the target to be tracked in each of the two images; the obtained image features may be, for example, HOG features or scale features of the detection frame.

Fourier transform is performed on the extracted image features x to obtain the Fourier transform result.

S22: applying the Fourier transform result to the optimized parameter updating model to obtain the parameters of the correlation filtering template.

The Fourier transform result is applied to equations (14), (15) and (16), and the solved q2 is taken as the new parameters of the correlation filtering template.

Alternatively, in another embodiment, the extracted image features may be subjected to cyclic shift transformation, the cyclic shift transformation result may be applied to equation (1), and the obtained z2 taken as the new parameters of the correlation filtering template.
Referring to fig. 5, fig. 5 is a schematic block diagram of a circuit structure of an embodiment of the target tracking device of the present application. The object tracking device 11 comprises a processor 111 and a memory 112 coupled to each other, the memory 112 stores a computer program, and the processor 111 is configured to execute the computer program to implement the steps of the embodiments of the object tracking method of the present application as described above.
For a description of the steps executed by the processor 111, refer to the description of the steps in the embodiments of the target tracking method of the present application, which is not repeated here.
In the embodiments of the present application, the disclosed target tracking method and target tracking apparatus may be implemented in other ways. For example, the above-described embodiments of the target tracking apparatus are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium.
Referring to fig. 6, fig. 6 is a schematic block diagram of a circuit structure of an embodiment of a computer readable storage medium of the present application, a computer program 1001 is stored in a computer storage medium 1000, and when the computer program 1001 is executed, the steps of the embodiments of the target tracking method of the present application are implemented.
The computer storage medium 1000 may be various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.
Claims (5)
1. A target tracking method, the method comprising:

acquiring a current frame image;

obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image;

processing the predicted position by using a correlation filtering template to obtain the final position of the target to be tracked in the current frame image;

updating parameters of the correlation filtering template based on the final position of the target to be tracked in the current frame image, the final position of the target to be tracked in the previous frame image and the similarity between the final position of the target to be tracked in the current frame image and the final position of the target to be tracked in the previous frame image, which comprises the following steps:
updating the parameters of the correlation filtering template based on an optimized parameter updating model, wherein the expression of the optimized parameter updating model is as follows:

$$\min_{z_1,z_2}\; \sum_{k=1}^{2}\Big(\big\|y - C(x)_k C(x)_k^{T} z_k\big\|^2 + \lambda_1\, z_k^{T} C(x)_k C(x)_k^{T} z_k\Big) + \lambda_2 \big\|z_2 - z_1\big\|_1$$

wherein λ1 and λ2 are constant parameters, y is the Gaussian true value, z2 is the parameter of the correlation filtering template corresponding to the current frame image, z1 is the parameter of the correlation filtering template corresponding to the previous frame image, C(x)2 is the cyclic shift conversion result of the final-position image features of the target to be tracked in the current frame, C(x)1 is the cyclic shift conversion result of the final-position image features of the target to be tracked in the previous frame, C(x)k^T represents the transpose of the matrix C(x)k, and ||z2 - z1||_1 represents computing the L1 norm of the difference between z2 and z1;

the model is solved to obtain z2, and the parameters of the correlation filtering template are updated with z2;
and taking the next frame image as the current frame image, obtaining the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image, and executing the subsequent steps again.
2. The method of claim 1,
the obtaining of the predicted position of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image includes:

acquiring a plurality of predicted positions of the target to be tracked in the current frame image according to the final position of the target to be tracked in the previous frame image;

the processing of the predicted position by using the correlation filtering template to obtain the final position of the target to be tracked in the current frame image includes:

inputting the plurality of predicted positions respectively into the correlation filtering template for calculation, so as to obtain the final position of the target to be tracked in the current frame image.
3. The method according to claim 2, wherein the processing of the predicted position by using the correlation filtering template to obtain the final position of the target to be tracked in the current frame image comprises:

fitting each predicted position respectively by using the correlation filtering template, and taking the predicted position with the best fitting result as the final position of the target to be tracked in the current frame image.
4. An object tracking apparatus, characterized in that the apparatus comprises a processor and a memory coupled to each other; the memory has stored therein a computer program for execution by the processor to implement the steps of the method according to any one of claims 1-3.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011468540.4A CN112233143B (en) | 2020-12-14 | 2020-12-14 | Target tracking method, device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011468540.4A CN112233143B (en) | 2020-12-14 | 2020-12-14 | Target tracking method, device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112233143A CN112233143A (en) | 2021-01-15 |
CN112233143B true CN112233143B (en) | 2021-05-11 |
Family
ID=74124048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011468540.4A Active CN112233143B (en) | 2020-12-14 | 2020-12-14 | Target tracking method, device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112233143B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113393493B (en) * | 2021-05-28 | 2024-04-05 | 京东科技信息技术有限公司 | Target object tracking method and device |
CN113364150B (en) * | 2021-06-10 | 2022-08-26 | 内蒙古工业大学 | Automatic tracking unmanned aerial vehicle laser charging device and tracking method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107403175A (en) * | 2017-09-21 | 2017-11-28 | 昆明理工大学 | Visual tracking method and Visual Tracking System under a kind of movement background |
CN107481264A (en) * | 2017-08-11 | 2017-12-15 | 江南大学 | A kind of video target tracking method of adaptive scale |
CN108898621A (en) * | 2018-06-25 | 2018-11-27 | 厦门大学 | A kind of Case-based Reasoning perception target suggests the correlation filtering tracking of window |
CN110349190A (en) * | 2019-06-10 | 2019-10-18 | 广州视源电子科技股份有限公司 | Target tracking method, device and equipment for adaptive learning and readable storage medium |
CN111815668A (en) * | 2020-06-23 | 2020-10-23 | 浙江大华技术股份有限公司 | Target tracking method, electronic device and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097579B (en) * | 2019-06-14 | 2021-08-13 | 中国科学院合肥物质科学研究院 | Multi-scale vehicle tracking method and device based on pavement texture context information |
CN111080675B (en) * | 2019-12-20 | 2023-06-27 | 电子科技大学 | Target tracking method based on space-time constraint correlation filtering |
- 2020-12-14: application CN202011468540.4A filed in China; granted as CN112233143B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481264A (en) * | 2017-08-11 | 2017-12-15 | 江南大学 | A kind of video target tracking method of adaptive scale |
CN107403175A (en) * | 2017-09-21 | 2017-11-28 | 昆明理工大学 | Visual tracking method and Visual Tracking System under a kind of movement background |
CN108898621A (en) * | 2018-06-25 | 2018-11-27 | 厦门大学 | A kind of Case-based Reasoning perception target suggests the correlation filtering tracking of window |
CN110349190A (en) * | 2019-06-10 | 2019-10-18 | 广州视源电子科技股份有限公司 | Target tracking method, device and equipment for adaptive learning and readable storage medium |
CN111815668A (en) * | 2020-06-23 | 2020-10-23 | 浙江大华技术股份有限公司 | Target tracking method, electronic device and storage medium |
Non-Patent Citations (1)
Title |
---|
A hybrid tracking framework based on kernel correlation filtering and particle filtering; Zhiqian Zhao et al.; Neurocomputing; 2018-07-31; pp. 40-49 *
Also Published As
Publication number | Publication date |
---|---|
CN112233143A (en) | 2021-01-15 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| EE01 | Entry into force of recordation of patent licensing contract | Application publication date: 2021-01-15; Assignee: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.; Assignor: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.; Contract record no.: X2021330000117; Denomination of invention: Target tracking method, device and computer-readable storage medium; Granted publication date: 2021-05-11; License type: Common License; Record date: 2021-08-23