CN113313739A - Target tracking method, device and storage medium

Target tracking method, device and storage medium

Info

Publication number: CN113313739A
Application number: CN202110701150.5A
Authority: CN (China)
Prior art keywords: target, tracking, image, frame image, frame
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 刘雪飞
Assignee (original and current): Agricultural Bank of China
Application filed by Agricultural Bank of China

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence


Abstract

The embodiment of the application provides a target tracking method, a target tracking device and a storage medium. A video to be processed is obtained, and a target to be tracked is determined from the first frame image of the video to be processed. Starting from the first frame image, the target is detected by the detection module of a TLD image tracking algorithm and tracked by the tracking module to obtain a first tracking result. Starting from the second frame image, the position of the target in each image is predicted by a Kalman filter to obtain a second tracking result. For the first frame image, the position of the target is determined according to the first tracking result; from the second frame image onward, the position of the target in the image is determined according to the first tracking result and the second tracking result corresponding to the same frame image. Because the target is tracked by the TLD image tracking algorithm while its position is also predicted by Kalman filtering, and the two results jointly determine the position of the target in the image, the accuracy of determining the position of the target in the image is improved.

Description

Target tracking method, device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a target tracking method, an apparatus, and a storage medium.
Background
In fields such as intelligent monitoring and aerospace, a target tracking technology may be required to track an object, in particular to detect and track the object in a video. The Tracking-Learning-Detection (TLD) algorithm can realize the detection and tracking of a target. However, when tracking a target in a complex scene, the TLD algorithm is prone to tracking failure.
Currently, tracking of a target in a complex scene is realized by combining the TLD algorithm and the Kernelized Correlation Filter (KCF) algorithm. Specifically, the method comprises the following steps: the position and size of the target area are determined in the initial target frame, and the initial frame image is input into the TLD algorithm module and the KCF algorithm module for initialization; the TLD algorithm module and the KCF algorithm module run in parallel, and if, for the current frame image, only one module outputs the position of the tracked target, the output position is used as the tracking result of the current frame image; if both tracking modules output positions for the target, the similarity between each output position and the target model M is calculated, and the position with the larger similarity is selected as the target tracking result. The target in the next frame image is tracked by the same method, and the position of the target in each frame image is determined.
However, the TLD algorithm and the KCF algorithm are both tracking algorithms, and there may be cases where neither can track the target, for example when the target is occluded. At that point the position of the target in the image cannot be determined, so the accuracy of determining the position of the target in the image is reduced.
Disclosure of Invention
The embodiment of the application provides a target tracking method, a target tracking device and a storage medium, in which a TLD algorithm tracks the target while a Kalman filter predicts the position of the target in the image, so that the accuracy of determining the position of the target in the image is improved.
In a first aspect, an embodiment of the present application provides a target tracking method, where the target tracking method includes:
the method comprises the steps of obtaining a video to be processed, and determining a target to be tracked from a first frame image of the video to be processed.
And starting from the first frame image, detecting the target by adopting a detection module in a TLD image tracking algorithm, and tracking the target by a tracking module in the TLD image tracking algorithm to obtain a first tracking result of the target.
Starting from a second frame image, initializing a Kalman filter by using the position and the size of a target frame output by a previous frame image of the second frame image, predicting the position of the target in the second frame image through the Kalman filter, and obtaining a second tracking result of the target.
For the first frame image, determining the position of the target according to the first tracking result;
and determining the position of the target in the image according to the first tracking result and the second tracking result corresponding to the same frame image from the second frame image.
In a possible implementation manner, the determining, according to a first tracking result and a second tracking result corresponding to the same frame of image, a position of the target in the image includes:
and for the same frame of image, if the first tracking result is successful tracking, respectively acquiring respective corresponding weight values of the TLD image tracking algorithm and the Kalman filter.
And determining the position of the target in the image according to the respective weight values corresponding to the TLD image tracking algorithm and the Kalman filter, and the position corresponding to the first tracking result and the position corresponding to the second tracking result.
In one possible implementation, the method further includes:
and for the same frame of image, if the first tracking result is tracking failure, determining the position corresponding to the second tracking result as the position of the target in the image.
In a possible implementation manner, before the starting from the first frame image, detecting the target by using a detection module in a TLD image tracking algorithm and tracking the target by using a tracking module in the TLD image tracking algorithm to obtain a first tracking result, the method further includes:
and initializing the detection module and the tracking module by adopting the coordinate position of the target in the first frame image.
The detecting, by using a detecting module in a TLD image tracking algorithm, the target from the first frame image, and tracking the target by using a tracking module in the TLD image tracking algorithm to obtain a first tracking result, including:
and starting from the first frame image, detecting the target by adopting an initialized detection module, and tracking the target by the initialized tracking module to obtain the first tracking result.
In one possible implementation, the method further includes:
and taking the position of the target in the image as an observed value of the Kalman filter, and updating the Kalman filter.
In one possible implementation, the updating the kalman filter includes:
according to the formula: p(k|k)=(I-Kg(k)H)P(k|k-1)The error covariance of the kalman filter is updated.
Wherein, P(k|k)Representing the error covariance, P, of the image of a frame subsequent to said image(k|k-1)Representing the error covariance, Kg, of the image(k)Representing Kalman filtering gain, H representing a parameter of an observation system, k representing the time corresponding to the next frame of image of the image, and k-1 representing the time corresponding to the image.
In one possible implementation, the predicting, by the kalman filter, the position of the target in the second frame image includes:
according to the formula: x(k|k)=X(k|k-1)+Kg(k)(Zk-HX(k|k-1)) And determining the position of the target in the second frame image.
Wherein, X(k|k)Representing the position, X, of the object in the second frame image(k|k-1)Denotes the position, Kg, of the object in the first frame image(k)Representing the Kalman Filter gain, ZkThe value of the observed value is represented,h represents a parameter of an observation system, k represents a time corresponding to the second frame image, and k-1 represents a time corresponding to the first frame image.
In a second aspect, an embodiment of the present application provides a target tracking apparatus, including:
the device comprises an acquisition unit, a tracking unit and a tracking unit, wherein the acquisition unit is used for acquiring a video to be processed and determining a target to be tracked from a first frame image of the video to be processed.
And the processing unit is used for detecting the target by adopting a detection module in a TLD image tracking algorithm from the first frame image, tracking the target by a tracking module in the TLD image tracking algorithm and obtaining a first tracking result of the target.
The processing unit is further configured to initialize a kalman filter from a second frame image by using the position and the size of a target frame output by a previous frame image of the second frame image, predict the position of the target in the second frame image through the kalman filter, and obtain a second tracking result of the target.
And the determining unit is used for determining the position of the target according to the first tracking result aiming at the first frame image.
And the determining unit is further used for determining the position of the target in the image from the second frame image according to the first tracking result and the second tracking result corresponding to the same frame image.
In a possible implementation manner, the determining unit is specifically configured to, for a same frame of image, respectively obtain weight values corresponding to the TLD image tracking algorithm and the kalman filter if the first tracking result is that tracking is successful; and determining the position of the target in the image according to the respective weight values corresponding to the TLD image tracking algorithm and the Kalman filter, and the position corresponding to the first tracking result and the position corresponding to the second tracking result.
In a possible implementation manner, the determining unit is further configured to determine, for the same frame of image, when the first tracking result is a tracking failure, a position corresponding to the second tracking result as a position of the target in the image.
In a possible implementation manner, the processing unit is further configured to initialize the detection module and the tracking module by using a coordinate position of the target in the first frame image.
The processing unit is specifically configured to detect the target by using an initialized detection module from the first frame image, and track the target by using the initialized tracking module to obtain the first tracking result.
In a possible implementation manner, the processing unit is further configured to update the kalman filter by using the position of the target in the image as an observed value of the kalman filter.
In a possible implementation manner, the processing unit is specifically configured to: p(k|k)=(I-Kg(k)H)P(k|k-1)Updating the error covariance of the Kalman filter; wherein, P(k|k)Representing the error covariance, P, of the image of a frame subsequent to said image(k|k-1)Representing the error covariance, Kg, of the image(k)Representing Kalman filtering gain, H representing a parameter of an observation system, k representing the time corresponding to the next frame of image of the image, and k-1 representing the time corresponding to the image.
In a possible implementation manner, the processing unit is specifically configured to: x(k|k)=X(k|k-1)+Kg(k)(Zk-HX(k|k-1)) Determining the position of the target in the second frame image; wherein, X(k|k)Representing the position, X, of the object in the second frame image(k|k-1)Denotes the position, Kg, of the object in the first frame image(k)Representing the Kalman Filter gain, ZkAnd representing an observation value, H representing a parameter of an observation system, k representing a time corresponding to the second frame image, and k-1 representing a time corresponding to the first frame image.
In a third aspect, embodiments of the present application further provide a target tracking apparatus, which may include a memory and a processor; wherein,
the memory is used for storing the computer program.
The processor is configured to read the computer program stored in the memory, and execute the target tracking method in any one of the possible implementation manners of the first aspect according to the computer program in the memory.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a computer-executable instruction is stored in the computer-readable storage medium, and when a processor executes the computer-executable instruction, the target tracking method described in any one of the foregoing possible implementation manners of the first aspect is implemented.
In a fifth aspect, an embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method for tracking a target is implemented as described in any one of the possible implementation manners of the first aspect.
Therefore, the embodiment of the application provides a target tracking method, a target tracking device and a storage medium. A video to be processed is obtained, and a target to be tracked is determined from the first frame image of the video to be processed. Starting from the first frame image, the target is detected by the detection module of a TLD image tracking algorithm and tracked by the tracking module of the TLD image tracking algorithm to obtain a first tracking result of the target. Starting from the second frame image, the Kalman filter is initialized with the position and size of the target frame output for the previous frame image, and the position of the target in the second frame image is predicted by the Kalman filter to obtain a second tracking result of the target. For the first frame image, the position of the target is determined according to the first tracking result; from the second frame image onward, the position of the target in the image is determined according to the first tracking result and the second tracking result corresponding to the same frame image. In this technical scheme, determining the position of the target in the first frame image with the TLD image tracking algorithm allows the detection module and the tracking module of the algorithm to be initialized, so that the position of the target in the next frame image can be determined more accurately. In addition, starting from the second frame image, the target is tracked by the TLD image tracking algorithm while the position of the target in the image is predicted by the Kalman filtering method, and the two results jointly determine the position of the target in the image, which improves the accuracy of determining the position of the target in the image.
Drawings
Fig. 1 is a schematic view of an application scenario of a target tracking method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a target tracking method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for predicting a target location by using a kalman filter according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a framework of a target tracking method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a target tracking apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another object tracking device according to an embodiment of the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure are shown and described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the embodiments of the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the description of the text of the present application, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The technical scheme provided by the embodiment of the application can be applied to target tracking scenarios, in particular to tracking a target in a video, for example monitoring the driving process of a vehicle, monitoring the flight path of an airplane, or monitoring the position of a user within a specific activity area. Video tracking typically uses visual tracking algorithms, which can be classified into generative and discriminative methods. A generative method directly models the target: it can be summarized as searching for the candidate with the maximum likelihood or posterior probability and using that candidate to estimate the state of the real target. A discriminative method, by contrast, avoids building such a complex generative model of the target and instead uses a classifier to optimally separate the target from the background; discriminative methods are the ones commonly used at present.
In the prior art, when tracking an object in a video, a method combining the TLD algorithm and the KCF algorithm is generally used. The specific process is as follows: first, the first frame image of the video is analyzed and processed, the position and size of the target in the first frame image are determined, and the TLD algorithm module and the KCF algorithm module are initialized. The remaining images of the video are then input into the TLD algorithm module and the KCF algorithm module, which run in parallel and process each frame image in turn. If only one of the TLD algorithm module and the KCF algorithm module outputs the position of the tracked target, the output position is used as the tracking result of the current frame image; if both tracking modules output positions for the target, the similarity between each output position and the target model is calculated, and the result with the larger similarity is used as the final tracking result. The target is tracked in each frame image by this method, and the position of the target in each frame image is determined.
However, when tracking a target in a video, the target may be occluded in a certain frame image; at this time, neither the TLD algorithm nor the KCF algorithm may be able to track the target, that is, the position of the target in the image cannot be determined, which reduces the accuracy of determining the position of the target in the image.
In order to solve the problem that the accuracy of the determined target position is low when neither the TLD algorithm nor the KCF algorithm can track the target, the position of the target in the image can be predicted by a Kalman filter. The Kalman filter can predict the position of the target in the next frame image simply from the position input for the previous frame image, without needing to track the target itself; this avoids the problem that the TLD algorithm cannot track the target, and thereby improves the accuracy of tracking the target in the image.
Fig. 1 is a schematic view of an application scenario of a target tracking method according to an embodiment of the present application. According to the method shown in fig. 1, a video to be processed is input into a target tracking module, a TLD image tracking algorithm and a kalman filter in the target tracking module are used to track a target in the input video frame by frame, the position of the target in each frame image is determined, and the target position is output. The output target position may be a target position corresponding to each frame of image, or may also be a final position of the target, which is not limited in this embodiment of the present application.
The TLD image tracking algorithm comprises a tracking module, a detection module and a learning module. The tracking module in the TLD image tracking algorithm is a tracking algorithm improved based on optical flow tracking, and is used for tracking the feature points generated by the target. The detection module is formed by cascading three classifiers, a target frame is sequentially input into the variance classifier, the random forest classifier and the nearest neighbor classifier, each classifier can reject some image blocks which do not meet the classification condition, and only the target frame which passes through the three classifiers is considered to contain a detection target. The TLD learning module is used for improving the performance of the classifier through an online method by using samples generated in the tracking process.
Hereinafter, the target tracking method provided by the present application will be described in detail by specific embodiments. It is to be understood that the following detailed description may be combined with other embodiments, and that the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of a target tracking method according to an embodiment of the present application. The target tracking method may be performed by software and/or hardware means, for example, the hardware means may be a target tracking means, which may be a terminal or a processing chip in the terminal. For example, referring to fig. 2, the target tracking method may include:
s201, obtaining a video to be processed, and determining a target to be tracked from a first frame image of the video to be processed.
For example, when the video to be processed is acquired, it may be decomposed into a sequence of frame images, with the same time interval between consecutive frames. In addition, when the target to be tracked is determined from the first frame image of the video to be processed, the target may be framed, and the position and state of the target in the first frame image are determined, for example the coordinate position of the target in the coordinate system corresponding to the first frame image, the current speed of the target, and the like.
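For ease of understanding, the following is a minimal Python sketch of this step, assuming OpenCV is available; the file name video.mp4 and the window title are illustrative assumptions:

```python
import cv2

# Read the video to be processed frame by frame; the capture's frame rate
# fixes the (constant) time interval dt between consecutive frame images.
cap = cv2.VideoCapture("video.mp4")  # hypothetical input path
ok, first_frame = cap.read()
assert ok, "failed to read the first frame image"
dt = 1.0 / cap.get(cv2.CAP_PROP_FPS)

# Frame the target to be tracked in the first frame image; selectROI
# returns the target box as (x, y, width, height).
x, y, w, h = cv2.selectROI("select target", first_frame)
cx, cy = x + w / 2.0, y + h / 2.0  # initial centre coordinate of the target
```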
S202, starting from the first frame image, detecting the target by adopting a detection module in the TLD image tracking algorithm, and tracking the target by a tracking module in the TLD image tracking algorithm to obtain a first tracking result of the target.
Before the target is tracked by the TLD image tracking algorithm starting from the first frame image, the coordinate position of the target in the first frame image may be used to initialize the detection module and the tracking module, so that the two modules can track the coordinate position of the target in the second frame image based on the coordinate position of the target in the first frame image.
After the detection module and the tracking module are initialized, starting from a first frame image, the initialized detection module is adopted to detect a target, and the initialized tracking module is used for tracking the target to obtain a first tracking result. For example, the first tracking result may include a position of the target in the image, whether tracking of the tracking target is successful, a speed of the target in the image, and the like.
It can be understood that when the tracking module tracks the target, it can use forward-backward error tracking on the basis of pyramid LK optical flow, i.e. the median flow tracking method. Specifically, the tracker tracks forward from the previous frame image to the current frame image, generating the trajectory of each tracking point between the two frames; another trajectory is then generated by tracking backward, i.e. each pixel point tracked in the current frame image is tracked back to its corresponding pixel point in the previous frame image. The embodiments of the present application are described by way of example only, and this does not mean that the embodiments of the present application are limited thereto.
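As an illustration of the forward-backward check described above, the following sketch uses OpenCV's pyramid LK optical flow; the error threshold of 2.0 pixels is an illustrative assumption, not a value given by the application:

```python
import cv2
import numpy as np

def forward_backward_track(prev_gray, gray, pts, fb_thresh=2.0):
    # Forward pass: track points from the previous frame to the current one.
    pts = pts.reshape(-1, 1, 2).astype(np.float32)
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    # Backward pass: track the forward results back to the previous frame.
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, fwd, None)
    # Forward-backward error: distance between each original point and its
    # back-tracked counterpart; median flow keeps only consistent points.
    fb_err = np.linalg.norm(pts - bwd, axis=2).ravel()
    good = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
    return fwd.reshape(-1, 2)[good], good
```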
It will be appreciated that the detection module in the TLD image tracking algorithm is a detector consisting of a cascade of classifiers, where the first classifier of the cascade is a variance classifier, the second is an ensemble classifier, and the third is a nearest-neighbour classifier. The variance classifier operates on pixel values: it filters out image patches whose grey-value variance is less than 50% of the overall variance of all pixel points in the bounding box of the tracked target, thereby screening candidate targets. For example, the variance classifier can discard backgrounds such as sky, buildings and streets that are not useful for tracking the target. The ensemble classifier is based on a random forest and determines the approximate position of the target by a probability calculation. The nearest-neighbour classifier then calculates the correlation similarity between the candidates passed by the first two classifiers and the online model, takes the candidate whose similarity is larger than a certain threshold as the target, determines the position of the target in the image, and generates the first tracking result of the target.
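A minimal sketch of the first stage of this cascade, the variance classifier, is given below; the 50% threshold follows the description above, while the candidate-box generation and the ensemble and nearest-neighbour stages are omitted:

```python
import numpy as np

def variance_filter(frame_gray, candidate_boxes, target_box):
    # Variance of the tracked target patch is the reference value.
    tx, ty, tw, th = target_box
    target_var = frame_gray[ty:ty + th, tx:tx + tw].var()
    kept = []
    for (x, y, w, h) in candidate_boxes:
        patch = frame_gray[y:y + h, x:x + w]
        # Reject patches whose grey-value variance is below 50% of the
        # target's variance (cheaply discards sky/street-like backgrounds).
        if patch.size > 0 and patch.var() >= 0.5 * target_var:
            kept.append((x, y, w, h))
    return kept  # survivors go to the ensemble and nearest-neighbour stages
```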
Furthermore, in order to further increase the accuracy of the coordinate position of the target in the image determined by the TLD image tracking algorithm, after the target sample is determined, the image where the target sample is located may be processed by a learning module in the TLD image tracking algorithm, so as to ensure the accuracy of the output first tracking result of the target.
For example, after each frame of image is processed, the detection module and the tracking module may be continuously updated by the learning module in the TLD image tracking algorithm. Namely, after the coordinate position of the target in each frame of image is obtained, the detection module and the tracking module are updated through the learning module, so that the detection module and the tracking module can always determine the coordinate position of the target in the current image according to the position of the target in the previous frame of image, and a first tracking result of the target is obtained. The embodiment of the present application does not set any limit to the specific updating process of the learning module.
In the embodiment of the application, the detection module and the tracking module in the TLD image tracking algorithm are initialized, so that the TLD image tracking algorithm can determine the positions of the target in other frame images according to the position of the target in the first frame image, and obtain the first tracking result, and the accuracy of determining the position of the target in the image can be improved.
And S203, initializing a Kalman filter by using the position and the size of a target frame output by a previous frame image of the second frame image from the second frame image, predicting the position of the target in the second frame image through the Kalman filter, and obtaining a second tracking result of the target.
For example, when the position of the target in the image is predicted by the Kalman filter, the Kalman filter predicts the position of the target in the next frame image from the current position of the target. Therefore, the Kalman filter needs to be initialized with the position and size of the target frame output for the previous frame image of the second frame image, so that the position of the target in the previous frame image exists in the Kalman filter; the position of the target in the second frame image is then predicted according to the state transition relationship and the time interval between the two frame images, so as to obtain the second tracking result of the target.
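A minimal initialization sketch follows, assuming OpenCV's cv2.KalmanFilter and the constant-velocity state [Px, Py, Vx, Vy] described in steps S301 to S303 below (the matrices correspond to formulas (1) and (2) there):

```python
import cv2
import numpy as np

def init_kalman(cx, cy, dt):
    # Initialise a constant-velocity Kalman filter with the centre (cx, cy)
    # of the target frame output for the previous frame image; velocities
    # start at 0, matching the initial state vector [Px(1), Py(1), 0, 0]^T.
    kf = cv2.KalmanFilter(4, 2)  # 4 state variables, 2 observed variables
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

# Calling kf.predict() then yields the second tracking result:
# the predicted centre is (pred[0, 0], pred[1, 0]).
```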
It is understood that the second tracking result may further include a position of the target in the image, whether tracking of the tracking target is successful, a speed of the target in the image, and the like, and the second tracking result is not specifically limited in this embodiment of the application.
And S204, aiming at the first frame image, determining the position of the target according to the first tracking result.
Illustratively, for the first frame image, the position of the target may be directly determined according to the first tracking result. Specifically, the position of the target may be determined by the detection module, the tracking module and the learning module of the TLD image tracking algorithm, or may be determined in other ways, which is not limited in this embodiment of the present application.
And S205, starting from the second frame image, determining the position of the target in the image according to the first tracking result and the second tracking result corresponding to the same frame image.
For example, starting from the second frame image, it is first determined, for the same frame image, whether the first tracking result indicates successful tracking. Whether tracking succeeded can be determined from the tracking output for the previous frame image: if the tracker in the tracking module output a target frame in the previous frame image, the first tracking result can be determined to be tracking success; if it did not, the first tracking result can be determined to be tracking failure. Whether the first tracking result indicates success may also be determined in other ways, which is not limited in this embodiment of the application.
It can be understood that, if the first tracking result is that the tracking is successful, the position corresponding to the first tracking result and the position corresponding to the second tracking result may be input to the integration module of the TLD algorithm, and the integration module processes the first tracking result and the second tracking result to determine the position of the target in the image. The embodiment of the present application does not set any limit to the specific processing method of the integrated module.
Illustratively, if the first tracking result is that tracking succeeded, the weight values corresponding to the TLD image tracking algorithm and to the Kalman filter are respectively acquired, and the position of the target in the image is determined according to these weight values together with the position corresponding to the first tracking result and the position corresponding to the second tracking result. That is, the position of the target in the image is determined by jointly considering the first tracking result determined by the TLD image tracking algorithm and the second tracking result determined by the Kalman filter. The weight values may be determined according to the shape and state of the target; for example, if the speed of the target is high, the weight value corresponding to the TLD image tracking algorithm is set higher than the weight value corresponding to the Kalman filter, and the first tracking result and the second tracking result are combined according to the corresponding weight values to determine the position of the target in the image, as in the sketch below.
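A minimal sketch of this weighted fusion follows; the weight values 0.6 and 0.4 are illustrative assumptions, and the failure branch anticipates the case discussed below:

```python
def fuse_positions(tld_pos, kf_pos, tld_success, w_tld=0.6, w_kf=0.4):
    # If TLD tracking failed, fall back to the Kalman prediction alone.
    if not tld_success:
        return kf_pos
    # Otherwise combine the two positions with the configured weights;
    # the text suggests raising w_tld for fast-moving targets.
    x = w_tld * tld_pos[0] + w_kf * kf_pos[0]
    y = w_tld * tld_pos[1] + w_kf * kf_pos[1]
    return (x, y)
```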
In another possible implementation manner, the first tracking result and the second tracking result may also be input into a preset algorithm, and the position of the target in the image is directly determined through processing of the preset algorithm. For example, the preset algorithm may be trained according to different scenarios. The preset algorithm is not limited in any way in the embodiment of the present application.
It can be understood that weighting the TLD image tracking algorithm and the Kalman filter by their respective weight values, as well as inputting the first tracking result and the second tracking result into the preset algorithm, may be performed in the integration module of the TLD algorithm, which is not limited in this embodiment of the present application.
In the embodiment of the application, if the first tracking result is that the tracking is successful, the position of the target in the image is determined according to the respective corresponding weight values of the TLD image tracking algorithm and the kalman filter, and compared with the determination of the position of the target in the image only according to the TLD tracking algorithm, the accuracy of determining the position of the target in the image can be further improved.
For example, for the same frame of image, if the first tracking result is a tracking failure, the position corresponding to the second tracking result is determined as the position of the target in the image. That is, if the first tracking result is tracking failure, the position corresponding to the target in the second tracking result determined by the kalman filter is determined as the position of the target in the current frame image.
It is understood that the reason for the tracking failure may be that the target is occluded, that the target disappears, that the TLD image tracking algorithm has a problem, or other situations; the present application does not place any limitation on the reason for the tracking failure.
In the embodiment of the application, when the tracking result of the TLD tracking algorithm is tracking failure, the position corresponding to the second tracking result predicted by the Kalman filter is determined as the position of the target in the image, so that the problems that the target is tracking failure and the position of the target cannot be determined can be solved, and the accuracy of determining the position of the target in the image is further improved.
For example, after determining the position of the target in each frame of image of the video, the movement track of the target may be determined according to a plurality of positions of the target. For example, after a video of the running process of the racing car is tracked, the position of the racing car on each frame of image can be determined, that is, the position of the racing car on the real runway can be determined, and the running route of the racing car in the running process can be drawn according to a plurality of determined positions.
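For illustration, the per-frame positions can be accumulated and drawn as the movement track of the target; a minimal OpenCV sketch (colour and thickness are arbitrary choices):

```python
import cv2
import numpy as np

def draw_track(frame, positions):
    # Connect the per-frame positions of the target with a polyline,
    # e.g. the route of the racing car in the example above.
    pts = np.array(positions, np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    return frame
```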
Therefore, the target tracking method provided by the embodiment of the application obtains the video to be processed, determines the target to be tracked from the first frame image of the video to be processed, and can initialize the detection module and the tracking module in the TLD image tracking algorithm, so that the position of the target in the next frame image can be determined more accurately. Starting from a first frame image, detecting a target by adopting a detection module in a TLD image tracking algorithm, and tracking the target by a tracking module in the TLD image tracking algorithm to obtain a first tracking result of the target. Initializing a Kalman filter by using the position and the size of a target frame output by a previous frame image of a second frame image from the second frame image, predicting the position of a target in the second frame image through the Kalman filter, and obtaining a second tracking result of the target; determining the position of a target according to a first tracking result aiming at the first frame image; and determining the position of the target in the image from the second frame image according to the first tracking result and the second tracking result corresponding to the same frame image. The target is tracked through the TLD image tracking algorithm, the position of the target in the image is predicted through the Kalman filtering method, the problem that the target cannot be tracked can be solved, and therefore the accuracy of determining the position of the target in the image is improved.
In order to facilitate understanding of the target tracking method provided in the embodiments of the present application, the following describes in detail how the Kalman filter predicts the position of the target in the image. Specifically, referring to fig. 3, fig. 3 is a schematic flowchart of a method for predicting a target position by a Kalman filter according to an embodiment of the present disclosure. The method comprises the following steps:
s301, initializing a Kalman filter by using the position and the size of a target frame output by the first frame image from the second frame image.
For example, assume that the centre position coordinate of the target at time t is (Px(t), Py(t)) and its velocity in the x and y directions is (Vx(t), Vy(t)); the Kalman filter target state at time t is then expressed as the vector [Px(t), Py(t), Vx(t), Vy(t)]^T. When the Kalman filter is initialized with the first frame image of the video, the velocities of the target in the x and y directions can both be assumed to be 0; with the target position (Px(1), Py(1)), the initial state vector of the target is [Px(1), Py(1), 0, 0]^T.
Further, the initial state transition matrix can be represented by the following formula (1):

    F = [ 1  0  dt  0
          0  1  0   dt
          0  0  1   0
          0  0  0   1 ]        (1)

where dt represents the time interval between two frame images.

Still further, when initializing the Kalman filter, it is also necessary to determine the initial observation matrix, the initial error covariance, the initial covariance and the initial observation covariance. The initialized observation vector is [Px(1), Py(1)]^T, and the initialized observation matrix can be expressed by the following formula (2):

    H = [ 1  0  0  0
          0  1  0  0 ]         (2)

For example, the error covariance P may be a random matrix or 0; this embodiment does not limit the error covariance matrix. The initialized error covariance matrix can be expressed by formula (3), the initialized covariance by formula (4), and the initialized observation covariance by formula (5):

    [formula (3): initial error covariance matrix, shown as an image in the original]
    [formula (4): initial covariance matrix, shown as an image in the original]
    [formula (5): initial observation covariance matrix, shown as an image in the original]
after the kalman filter is initially set, the position of the target in the second frame image can be predicted by the kalman filter.
S302, updating the Kalman filter according to the position of the target in the previous frame image.
For example, after determining the position of the target in the second frame image, the Kalman filter needs to be updated according to that position. It can be understood that when determining the position of the target in each frame image, the observation value, covariance, error covariance, etc. corresponding to each frame image may also be determined, enabling the Kalman filter to be updated so as to predict the position of the target in the next frame image.
When updating the kalman filter, the kalman filter may be updated with the position of the target in the image as the observed value of the kalman filter.
In the embodiment of the application, the Kalman filter is updated by using the position of the target in the current image, so that the Kalman filter can always predict the position of the target in the next frame of image according to the position of the target in the current image, and the position of the target in the image can be determined when the target tracking fails, so that the accuracy of determining the position of the target in the image is improved.
For example, when updating the kalman filter, the error covariance of the kalman filter may be updated according to the following equation (6).
P(k|k) = (I - Kg(k)·H)·P(k|k-1)    (6)
Wherein P(k|k) represents the error covariance corresponding to the next frame image, P(k|k-1) represents the error covariance corresponding to the current image, Kg(k) represents the Kalman filter gain, H represents a parameter of the observation system, k represents the time corresponding to the next frame image, and k-1 represents the time corresponding to the current image.
For example, the initialized error covariance is substituted into the formula (6) to obtain the error covariance corresponding to the second frame image of the video, so as to predict the position of the target in the second frame image.
In the embodiment of the application, by updating the error covariance of the Kalman filter, the position of the target in the next frame of image can be predicted according to the updated error covariance, and the accuracy of determining the position of the target in the image is improved.
And S303, predicting the position of the target in the current frame image through a Kalman filter.
For example, the position of the target in the second frame image may be determined by the following formula (7).
X(k|k) = X(k|k-1) + Kg(k)·(Z(k) - H·X(k|k-1))    (7)
Wherein X(k|k) represents the position of the target in the second frame image, X(k|k-1) represents the position of the target in the first frame image, Kg(k) represents the Kalman filter gain, Z(k) represents the observed value, H represents a parameter of the observation system, i.e. the observation matrix, k represents the time corresponding to the second frame image, and k-1 represents the time corresponding to the first frame image.
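The following numpy sketch puts formulas (6) and (7) together with the prediction and gain steps of a standard Kalman cycle; the gain expression is not spelled out in this excerpt and is stated here as the conventional form:

```python
import numpy as np

def kalman_step(F, H, Q, R, x_post, P_post, z):
    # Predict: X(k|k-1) and P(k|k-1) from the previous frame's estimate
    # (standard prediction equations, assumed here).
    x_prior = F @ x_post
    P_prior = F @ P_post @ F.T + Q
    # Gain: Kg(k) = P(k|k-1) H^T (H P(k|k-1) H^T + R)^-1 (conventional form).
    Kg = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # Formula (7): X(k|k) = X(k|k-1) + Kg(k)·(Z(k) - H·X(k|k-1)).
    x_post = x_prior + Kg @ (z - H @ x_prior)
    # Formula (6): P(k|k) = (I - Kg(k)·H)·P(k|k-1).
    P_post = (np.eye(P_prior.shape[0]) - Kg @ H) @ P_prior
    return x_post, P_post
```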
For example, after determining the position of the target in the second frame image of the video, the above steps may be repeated to determine the positions of the target in the remaining frame images, thereby determining the final position of the target.
In the embodiment of the application, the position of the target in the second frame image is predicted according to the updated error covariance of the Kalman filter, so that the determined position of the target in the second frame image is more accurate.
In summary, the method for predicting the target position by the kalman filter provided in the embodiment of the present application can predict the position of the target in the second frame image of the video by initializing the kalman filter; by updating the Kalman filter, the position of the target in the current frame image can be determined according to the position of the target in the previous frame image, so that the position of the target in each frame image can be predicted. The position of the target in the image can be determined under the condition that the TLD image tracking algorithm fails to track, so that the accuracy of determining the position of the target in the image is improved.
For example, in another embodiment of the present application, the target may be tracked by the method shown in fig. 4. Fig. 4 is a schematic frame diagram of a target tracking method according to an embodiment of the present application.
As shown in fig. 4, a video image of the video to be processed is first obtained, the state of the target in the first frame image is determined by processing that frame, and the detection module, the tracking module and the Kalman filter are initialized by the method described in the above embodiment. After initialization, starting from the first frame image, the detection module in the TLD image tracking algorithm is adopted to detect the target, the target is tracked by the tracking module in the TLD image tracking algorithm, and the position of the target in each frame image is determined. Starting from the second frame image, the Kalman filter predicts the position of the target in the second frame image, then continues with the third frame image, and so on.
When determining the position of the target in a certain frame image, the current state of the target is first obtained, that is, the position and state parameters of the target in the previous frame image, and it is judged whether the target is occluded. If the target is not occluded, the target state of the target in the current frame image is determined and output according to both the position of the target in the current frame image obtained by the TLD image tracking algorithm and the position of the target in the current frame image predicted by the Kalman filter. If the target is occluded, it is further judged whether the TLD image tracking algorithm has failed to track: if tracking has failed, the position of the target in the current frame image is predicted by the Kalman filter alone, and the target state of the current frame image is output; if tracking has not failed, the target state of the target in the current frame image is determined and output, as above, from both the TLD position and the Kalman prediction. For the method of judging whether the TLD image tracking algorithm has failed to track, reference may be made to the foregoing embodiment, and details are not repeated here. In addition, obtaining the position of the target in the current frame image by the TLD image tracking algorithm, and predicting the position of the target in the current frame image by the Kalman filter, are the same as the methods described in the above embodiments, and details are not repeated here.
As shown in fig. 4, after the target state of the current frame image is output, the target state may be used as an observation value to update the state matrix and the current state of the target; that is, the detection module and the tracking module in the TLD image tracking algorithm and the Kalman filter are updated, so that the position of the target in the next frame image is determined according to the latest target state.
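Putting the pieces together, the following sketch follows the per-frame flow of fig. 4, assuming the helper functions sketched earlier (init_kalman, fuse_positions) and the TLD tracker shipped with opencv-contrib (cv2.legacy.TrackerTLD_create); all file names and parameters are illustrative:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")             # hypothetical input path
ok, frame = cap.read()
box = cv2.selectROI("select target", frame)     # target in the first frame
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, box)
cx, cy = box[0] + box[2] / 2.0, box[1] + box[3] / 2.0
kf = init_kalman(cx, cy, dt=1.0 / cap.get(cv2.CAP_PROP_FPS))

positions = [(cx, cy)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tld_ok, tld_box = tracker.update(frame)     # first tracking result
    pred = kf.predict()                         # second tracking result
    kf_pos = (float(pred[0, 0]), float(pred[1, 0]))
    tld_pos = ((tld_box[0] + tld_box[2] / 2.0,
                tld_box[1] + tld_box[3] / 2.0) if tld_ok else None)
    x, y = fuse_positions(tld_pos, kf_pos, tld_ok)   # fuse, or fall back
    kf.correct(np.array([[x], [y]], np.float32))     # position as observation
    positions.append((x, y))
```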
In the embodiment of the application, the position of the target in the image acquired by the TLD image tracking algorithm and the position of the target in the image predicted by the Kalman filtering method are determined together, so that the position of the target in the image can be determined when the target tracking fails, and the accuracy of determining the position of the target in the image is improved.
Fig. 5 is a schematic structural diagram of an object tracking device 50 according to an embodiment of the present application, and for example, please refer to fig. 5, the object tracking device 50 may include:
the acquiring unit 501 is configured to acquire a video to be processed, and determine a target to be tracked from a first frame image of the video to be processed.
The processing unit 502 is configured to detect a target by using a detection module in the TLD image tracking algorithm from the first frame image, and track the target by using a tracking module in the TLD image tracking algorithm to obtain a first tracking result of the target.
The processing unit 502 is further configured to initialize a kalman filter from the second frame image by using the position and the size of the target frame output by the previous frame image of the second frame image, and predict the position of the target in the second frame image through the kalman filter to obtain a second tracking result of the target.
A determining unit 503, configured to determine, for the first frame image, a position of the target according to the first tracking result.
The determining unit 503 is further configured to determine, from the second frame image, a position of the target in the image according to the first tracking result and the second tracking result corresponding to the same frame image.
Optionally, the determining unit 503 is specifically configured to, for the same frame of image, respectively obtain weighted values corresponding to the TLD image tracking algorithm and the kalman filter if the first tracking result is that the tracking is successful; and determining the position of the target in the image according to the weighted values corresponding to the TLD image tracking algorithm and the Kalman filter respectively, and the position corresponding to the first tracking result and the position corresponding to the second tracking result.
Optionally, the determining unit 503 is further configured to determine, for the same frame of image, when the first tracking result is that tracking fails, a position corresponding to the second tracking result as a position of the target in the image.
Optionally, the processing unit 502 is further configured to initialize the detection module and the tracking module by using the coordinate position of the target in the first frame image.
The processing unit 502 is specifically configured to detect a target by using an initialized detection module from a first frame image, and track the target by using an initialized tracking module to obtain a first tracking result.
Optionally, the processing unit 502 is further configured to update the kalman filter by using the position of the target in the image as an observed value of the kalman filter.
Optionally, the processing unit 502 is specifically configured to: p(k|k)=(I-Kg(k)H)P(k|k-1)Updating the error covariance of the Kalman filter; wherein, P(k|k)Error covariance, P, corresponding to a subsequent frame of image(k|k-1)Denotes the error covariance, Kg, of the image correspondence(k)The Kalman filtering gain is represented, H represents a parameter of an observation system, k represents the time corresponding to the next frame of image of the image, and k-1 represents the time corresponding to the image.
Optionally, the processing unit 502 is specifically configured to: x(k|k)=X(k|k-1)+Kg(k)(Zk-HX(k|k-1)) Determining the position of the target in the second frame image; wherein, X(k|k)Indicating the position of the object in the second frame image, X(k|k-1)Indicating the position of the object in the first frame image, Kg(k)Representing the Kalman Filter gain, ZkRepresents the observation value, H represents a parameter of the observation system, k represents a time corresponding to the second frame image, and k-1 represents a time corresponding to the first frame image.
The target tracking device provided in the embodiment of the present application may implement the technical solution of the target tracking method in any of the above embodiments, and the implementation principle and the beneficial effect of the target tracking device are similar to those of the target tracking method, and reference may be made to the implementation principle and the beneficial effect of the target tracking method, which are not described herein again.
Fig. 6 is a schematic structural diagram of another object tracking device 60 provided in the embodiment of the present application, and for example, referring to fig. 6, the object tracking device 60 may include a processor 601 and a memory 602;
wherein,
the memory 602 is used for storing computer programs.
The processor 601 is configured to read the computer program stored in the memory 602, and execute the technical solution of the target tracking method in any of the embodiments according to the computer program in the memory 602.
Alternatively, the memory 602 may be separate or integrated with the processor 601. When the memory 602 is a separate device from the processor 601, the target tracking apparatus 60 may further include: a bus for connecting the memory 602 and the processor 601.
Optionally, this embodiment further includes: a communication interface, which may be connected to the processor 601 through a bus. The processor 601 may control the communication interface to implement the receiving and transmitting functions of the target tracking device 60 described above.
The target tracking apparatus 60 shown in the embodiment of the present application may implement the technical solution of the target tracking method in any of the above embodiments, and the implementation principle and the beneficial effect thereof are similar to those of the target tracking method, and reference may be made to the implementation principle and the beneficial effect of the target tracking method, which is not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where a computer execution instruction is stored in the computer-readable storage medium, and when a processor executes the computer execution instruction, the technical solution of the target tracking method in any of the above embodiments is implemented, and implementation principles and beneficial effects of the technical solution are similar to those of the target tracking method, and reference may be made to the implementation principles and beneficial effects of the target tracking method, which are not described herein again.
The embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the technical solution of the target tracking method in any of the above embodiments is implemented, and the implementation principle and the beneficial effect of the computer program are similar to those of the target tracking method, which can be referred to as the implementation principle and the beneficial effect of the target tracking method, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present application.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The memory may comprise high-speed RAM, and may further comprise non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The computer-readable storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A target tracking method, comprising:
acquiring a video to be processed, and determining a target to be tracked from a first frame image of the video to be processed;
starting from the first frame image, detecting the target by adopting a detection module in a TLD image tracking algorithm, and tracking the target by a tracking module in the TLD image tracking algorithm to obtain a first tracking result of the target;
starting from a second frame image, initializing a Kalman filter by using the position and the size of the target frame output for the previous frame image, predicting the position of the target in the second frame image through the Kalman filter, and obtaining a second tracking result of the target;
determining the position of the target according to the first tracking result aiming at the first frame image;
and determining the position of the target in the image according to the first tracking result and the second tracking result corresponding to the same frame image from the second frame image.
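To make the per-frame flow of claim 1 concrete, the following Python sketch walks through it; the tracker and filter methods (init, detect_and_track, init_from_box, predict, correct) are hypothetical stand-ins, since the claims specify behavior rather than an API. The fuse_positions helper is sketched after claim 3 below.

```python
def track_video(frames, init_box, tld, kf):
    """Per-frame flow of claim 1 (all object interfaces are hypothetical)."""
    tld.init(frames[0], init_box)              # determine the target to track in frame 1
    first = tld.detect_and_track(frames[0])    # frame 1: the TLD result alone fixes the position
    positions = [first.box]
    for frame in frames[1:]:
        first = tld.detect_and_track(frame)    # first tracking result (TLD detect + track)
        kf.init_from_box(positions[-1])        # init the Kalman filter from the previous
                                               # frame's output box (position and size)
        second_box = kf.predict()              # second tracking result (Kalman prediction)
        box = fuse_positions(first.success, first.box, second_box)
        positions.append(box)                  # fused position of the target in this frame
        kf.correct(box)                        # claim 5: feed the position back as Z(k)
    return positions
```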
2. The method according to claim 1, wherein the determining the position of the target in the image according to the first tracking result and the second tracking result corresponding to the same frame of image comprises:
for the same frame of image, if the first tracking result is that the tracking is successful, respectively acquiring respective corresponding weight values of the TLD image tracking algorithm and the Kalman filter;
and determining the position of the target in the image according to the respective weight values corresponding to the TLD image tracking algorithm and the Kalman filter, and the position corresponding to the first tracking result and the position corresponding to the second tracking result.
3. The method of claim 2, further comprising:
and for the same frame of image, if the first tracking result is tracking failure, determining the position corresponding to the second tracking result as the position of the target in the image.
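A minimal runnable sketch of this fusion rule; the weight values are illustrative assumptions, since claims 2 and 3 leave them unspecified:

```python
def fuse_positions(tld_success, tld_box, kf_box, w_tld=0.6, w_kf=0.4):
    """Combine the TLD and Kalman positions per claims 2 and 3."""
    if not tld_success:
        # Claim 3: when TLD tracking fails, the Kalman prediction alone
        # gives the position of the target in the image.
        return kf_box
    # Claim 2: weighted combination of the two positions (boxes as (x, y, w, h)).
    return tuple(w_tld * a + w_kf * b for a, b in zip(tld_box, kf_box))
```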
4. The method according to any one of claims 1-3, wherein before the target is detected by the detection module in the TLD image tracking algorithm and tracked by the tracking module in the TLD image tracking algorithm, starting from the first frame image, to obtain the first tracking result, the method further comprises:
initializing the detection module and the tracking module by using the coordinate position of the target in the first frame image;
and the detecting the target by the detection module in the TLD image tracking algorithm starting from the first frame image, and tracking the target by the tracking module in the TLD image tracking algorithm to obtain the first tracking result, comprises:
starting from the first frame image, detecting the target by the initialized detection module, and tracking the target by the initialized tracking module, to obtain the first tracking result.
5. The method according to any one of claims 1-3, further comprising:
and taking the position of the target in the image as an observed value of the Kalman filter, and updating the Kalman filter.
6. The method of claim 5, wherein the updating the Kalman filter comprises:
according to the formula P(k|k) = (I - Kg(k)·H)·P(k|k-1), updating the error covariance of the Kalman filter;
wherein P(k|k) represents the error covariance for the frame image subsequent to the image, P(k|k-1) represents the error covariance for the image, Kg(k) represents the Kalman filter gain, H represents a parameter of the observation system, k represents the time corresponding to the frame image subsequent to the image, and k-1 represents the time corresponding to the image.
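For intuition, with illustrative scalar values Kg(k) = 0.5 and H = 1 (not values from the application), the update gives P(k|k) = (1 - 0.5·1)·P(k|k-1) = 0.5·P(k|k-1): each incorporated observation halves the filter's error covariance.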
7. The method according to any one of claims 1-3, wherein said predicting, by the Kalman filter, the position of the target in the second frame image comprises:
according to the formula X(k|k) = X(k|k-1) + Kg(k)(Z(k) - H·X(k|k-1)), determining the position of the target in the second frame image;
wherein X(k|k) represents the position of the target in the second frame image, X(k|k-1) represents the position of the target in the first frame image, Kg(k) represents the Kalman filter gain, Z(k) represents the observation value, H represents a parameter of the observation system, k represents the time corresponding to the second frame image, and k-1 represents the time corresponding to the first frame image.
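For intuition, a worked scalar instance with illustrative values: if the predicted x-coordinate is X(k|k-1) = 100, the observation is Z(k) = 104, H = 1, and Kg(k) = 0.5, then X(k|k) = 100 + 0.5·(104 - 100) = 102, i.e. the estimate moves halfway from the prediction toward the observation.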
8. A target tracking device, comprising:
an acquisition unit, configured to acquire a video to be processed and determine a target to be tracked from a first frame image of the video to be processed;
a processing unit, configured to detect the target by a detection module in a TLD image tracking algorithm starting from the first frame image, and track the target by a tracking module in the TLD image tracking algorithm, to obtain a first tracking result of the target;
the processing unit is further configured to, starting from a second frame image, initialize a Kalman filter by using the position and the size of the target frame output for the previous frame image, predict the position of the target in the second frame image through the Kalman filter, and obtain a second tracking result of the target;
a determining unit, configured to determine, for the first frame image, a position of the target according to the first tracking result;
and the determining unit is further configured to determine, starting from the second frame image, the position of the target in the image according to the first tracking result and the second tracking result corresponding to the same frame image.
9. A target tracking device, comprising a memory and a processor; wherein:
the memory for storing a computer program;
the processor is configured to read the computer program stored in the memory and execute a target tracking method according to any one of claims 1 to 7 according to the computer program in the memory.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement a method of object tracking as claimed in any one of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out a method of object tracking according to any of the preceding claims 1-7.
CN202110701150.5A 2021-06-23 2021-06-23 Target tracking method, device and storage medium Pending CN113313739A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110701150.5A CN113313739A (en) 2021-06-23 2021-06-23 Target tracking method, device and storage medium


Publications (1)

Publication Number Publication Date
CN113313739A (en) 2021-08-27

Family

ID=77380371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110701150.5A Pending CN113313739A (en) 2021-06-23 2021-06-23 Target tracking method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113313739A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876809A (en) * 2018-06-17 2018-11-23 天津理工大学 A kind of TLD image tracking algorithm based on Kalman filtering
CN111563919A (en) * 2020-04-03 2020-08-21 深圳市优必选科技股份有限公司 Target tracking method and device, computer readable storage medium and robot

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882078A (en) * 2022-05-11 2022-08-09 合肥中科深谷科技发展有限公司 Visual tracking method based on position prediction
CN115937247A (en) * 2022-08-12 2023-04-07 北京小米移动软件有限公司 Object tracking method, device and storage medium
CN115937247B (en) * 2022-08-12 2024-02-06 北京小米移动软件有限公司 Method, apparatus and storage medium for object tracking

Similar Documents

Publication Publication Date Title
CN110516556B (en) Multi-target tracking detection method and device based on Darkflow-deep Sort and storage medium
CN108229322B (en) Video-based face recognition method and device, electronic equipment and storage medium
CN109934065B (en) Method and device for gesture recognition
CN110766724B (en) Target tracking network training and tracking method and device, electronic equipment and medium
Chan et al. Vehicle detection and tracking under various lighting conditions using a particle filter
CN110349187B (en) Target tracking method and device based on TSK fuzzy classifier and storage medium
WO2016179808A1 (en) An apparatus and a method for face parts and face detection
CN111160212B (en) Improved tracking learning detection system and method based on YOLOv3-Tiny
US20200285859A1 (en) Video summary generation method and apparatus, electronic device, and computer storage medium
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN113313739A (en) Target tracking method, device and storage medium
CN115345905A (en) Target object tracking method, device, terminal and storage medium
CN108229494B (en) Network training method, processing method, device, storage medium and electronic equipment
US20230095568A1 (en) Object tracking device, object tracking method, and program
CN110766725B (en) Template image updating method and device, target tracking method and device, electronic equipment and medium
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN116452631A (en) Multi-target tracking method, terminal equipment and storage medium
Sugandi et al. A color-based particle filter for multiple object tracking in an outdoor environment
CN117372928A (en) Video target detection method and device and related equipment
Le et al. Human detection and tracking for autonomous human-following quadcopter
Ojdanić et al. Parallel architecture for low latency UAV detection and tracking using robotic telescopes
JPWO2018179119A1 (en) Video analysis device, video analysis method, and program
Li et al. Target tracking based on biological-like vision identity via improved sparse representation and particle filtering
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
Truong et al. Single object tracking using particle filter framework and saliency-based weighted color histogram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination