CN107657627B - Space-time context target tracking method based on human brain memory mechanism - Google Patents
- Publication number: CN107657627B (application CN201710733989.0A)
- Authority
- CN
- China
- Prior art keywords
- memory space
- space
- target
- template
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/262—Analysis of motion using transform domain methods, e.g. Fourier domain methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
Abstract
The invention discloses a spatio-temporal context target tracking method based on a human brain memory mechanism. The method introduces a visual information processing cognitive model of the human brain memory mechanism into the spatio-temporal context model updating process of the STC (spatio-temporal context) method, so that each template is transmitted and processed through three spaces (instantaneous memory, short-term memory and long-term memory), forming a memory-based model updating strategy. By memorizing previously appearing scenes, the method can still track continuously and robustly when the current target undergoes illumination change, sudden pose change, occlusion, or reappearance after transient disappearance. In addition, when the confidence map is calculated from the spatio-temporal context information, N target center position candidate points are set, and the target center position with the maximum similarity to the target template is selected as the final tracking result, which reduces errors caused by the confidence map and improves tracking accuracy. The result is a moving target tracking method with high precision and strong robustness.
Description
Technical Field
The invention relates to a method for tracking a moving target in video images, and in particular to a spatio-temporal context (STC) target tracking method based on a human brain memory mechanism, belonging to the technical field of computer vision.
Background
As an important research direction in the field of computer vision, target tracking has wide application prospects in the fields of video monitoring, man-machine interaction and intelligent transportation.
According to differences in target appearance modeling, typical target tracking methods can be divided into generative and discriminative methods. A generative method learns a target model from features and then searches for the region closest to the target model to realize tracking. A discriminative method casts tracking as a binary classification problem, models the target and the background separately, and uses background and target information to find a decision boundary separating the two. Compared with generative methods, discriminative methods require relatively little computation and can effectively use background information, and they are gradually becoming the mainstream of target tracking.
As a novel discriminative tracking method, the spatio-temporal context (STC) target tracking method (Zhang Kaihua, Zhang Lei, et al., "Fast Tracking via Spatio-Temporal Context Learning," Computer Science, 2013) obtains the statistical correlation between a target and its surrounding area by modeling the spatio-temporal relationship between the target to be tracked and its local context within a Bayesian framework. A confidence map is calculated from this spatio-temporal relationship, and the position with the maximum likelihood probability in the confidence map predicts the new target position. The method combines temporal and spatial information, considers the target together with its surrounding environment, and learns the spatio-temporal relationship via the Fourier transform, so it achieves high target tracking accuracy and speed. The STC method therefore has important application prospects in the target tracking field.
On the other hand, in the STC method, the spatiotemporal relationship established based on the bayesian framework is the statistical correlation of the target and its surrounding area on low-level features (pixel intensity values). When the conditions of illumination change, target posture mutation, shielding, reappearance after transient disappearance and the like occur, the target is easy to deviate or lose, and the target tracking accuracy is reduced. Therefore, the method has important research significance in researching the robustness target tracking problem of the STC method under the complex conditions (illumination change, target posture mutation, shielding, reappearance after transient disappearance and the like).
Disclosure of Invention
The invention provides a spatio-temporal context target tracking method based on a human brain memory mechanism, and aims to solve the problem that the tracking accuracy of the STC method decreases under illumination change, sudden target pose change, occlusion, reappearance after transient disappearance, and similar conditions during target tracking. The method introduces a visual information processing cognitive model of the human brain memory mechanism into the spatio-temporal context model updating process of the STC (spatio-temporal context) method, so that each template is transmitted and processed through three spaces (instantaneous memory, short-term memory and long-term memory), forming a memory-based model updating strategy. By effectively memorizing previously appearing scenes, the method can still track continuously and robustly when the current target undergoes illumination change, sudden pose change, occlusion, or reappearance after transient disappearance. In addition, when the confidence map is calculated from the spatio-temporal context information, N target center position candidate points are set, and the target center position with the maximum similarity to the target template is selected as the final tracking result, which reduces errors caused by the confidence map and improves tracking accuracy. The result is a moving target tracking method with high precision and strong robustness.
The invention is realized by the following technical scheme.
The invention discloses a spatio-temporal context target tracking method based on a human brain memory mechanism. A visual information processing cognitive model based on the human brain memory mechanism is introduced into the spatio-temporal relationship model updating process of the STC method, so that each template is transmitted and processed through three spaces (instantaneous memory, short-term memory and long-term memory), forming a new memory-based model updating strategy.
During target tracking, the target template is updated according to different strategies depending on the degree of matching between the current-frame target template and the target templates in the memory space. If the similarity coefficient between the color histogram feature of the current-frame target template and that of a target template in the memory space meets the requirement, the matching succeeds, and the parameters of the matched memory-space template are updated in preparation for predicting and tracking the target in the next frame. If the matching fails, the current target template is stored in the memory space as a new target template provided certain conditions are met. By memorizing previously appearing scenes, the method can still accurately track the target under illumination change, sudden pose change, occlusion, reappearance after transient disappearance, and similar conditions.
In addition, in order to reduce errors caused by the confidence map and improve tracking accuracy, N target center position candidate points are set when the confidence map is calculated according to space-time context information, and a target center position with the maximum similarity to the target template is selected as a final tracking result.
The invention discloses a space-time context target tracking method based on a human brain memory mechanism, which comprises the following steps:
step 1: memory space and tracking window are initialized.
Initialize two layers of memory spaces, used respectively for storing the target matching template parameters q_t and the spatio-temporal context models. Each layer is constructed as an instantaneous memory space, a short-term memory space, and a long-term memory space.
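As an illustrative sketch of this two-layer memory hierarchy (class and field names are our own, not from the patent), the structure can be modeled as:

```python
from collections import deque

class MemoryLayer:
    """One layer of the memory hierarchy: an instantaneous slot plus
    bounded short-term and long-term stores of S templates each."""
    def __init__(self, S=5):
        self.S = S
        self.instant = None                # instantaneous memory: current-frame feature
        self.short_term = deque(maxlen=S)  # short-term memory: S most recent templates
        self.long_term = []                # long-term memory: up to S remembered templates

# Layer 1 stores the target matching template parameters q_t;
# layer 2 stores the spatio-temporal context models.
template_memory = MemoryLayer(S=5)
context_memory = MemoryLayer(S=5)
```

Both layers share the same three-space construction; only the stored payload differs.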
Step 2: a first frame spatial context model is learned.
Given the initial target tracking window from step 1, compute the confidence map, calculate the spatial context model of the first frame from the confidence map and the context prior probability of the first frame, and use this spatial context model as the spatio-temporal context model for the next frame. The spatial context model is the conditional probability function p(x | c(z), o) that models the spatial relationship between the target and its surrounding context information, defined as:

h^sc(x − z) = p(x | c(z), o)   (1)
Step 3: target positioning.

Compute the confidence map from the spatio-temporal context model and find its maximum, which gives the target position; however, due to the error characteristics of the confidence map, the true target position may lie at the second-largest or another position of the confidence map. Therefore, the N positions with the largest confidence-map values are selected as target center position candidate points.
The confidence map is computed as:

c_t(x) = F^{-1}( F(H^stc_t(x)) ⊙ F(I_t(x) w_σ(x − x*_{t−1})) )   (2)

where H^stc_t is the spatio-temporal context model, I_t(x) w_σ(x − x*_{t−1}) is the context prior probability, ⊙ denotes element-wise multiplication, and F and F^{-1} denote the Fourier transform and the inverse Fourier transform, respectively.
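The confidence-map computation above evaluates a circular convolution via the convolution theorem. A minimal numpy sketch (the function name and array conventions are illustrative, not from the patent):

```python
import numpy as np

def confidence_map(H_stc, prior):
    """Confidence map as the circular convolution of the spatio-temporal
    context model H_stc with the context prior, computed in the Fourier
    domain as F^{-1}( F(H_stc) * F(prior) )."""
    return np.real(np.fft.ifft2(np.fft.fft2(H_stc) * np.fft.fft2(prior)))
```

Convolving with a delta prior returns the model itself, which is a quick sanity check of the implementation.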
The target position candidate points are computed as:

{x*_i}_{i=1,…,N} = arg max^{(N)}_x c_t(x)   (3)

i.e., the N largest-maximum positions searched in the confidence map are taken as target position candidate points.
Step 4: calculate the color histogram features of the target position candidate points.

Compute the color histogram features of the N candidate target tracking windows and their similarities to the color histogram features stored in the memory space, and take the candidate point with the maximum similarity as the tracking result of the current frame.
Step 5: update the memory space.
And updating the memory space and the space-time context model according to the matching rule to prepare for the prediction and tracking of the next frame target.
The updating process is as follows:
(1) Instantaneous memory space storage.

Input the current frame image of the video and store the color histogram feature of the current frame in the instantaneous memory space.
(2) Short-term memory space matching.
Store the current template in the first position of the short-term memory space, sequentially match the color histogram feature stored in the instantaneous memory space against the S templates in the short-term memory space, calculate the similarity, and decide whether the matching succeeds by comparing the similarity with the matching threshold.
If the matching succeeds in the short-term memory space, the target template is updated according to the current template, as shown in the following formula:

q_t = (1 − ε) q_{t−1} + ε p   (4)

where q_t is the target matching template parameter of the current frame, p is the color histogram feature in the instantaneous memory space, and ε is the update rate.
If none of the S templates in the short-term memory space matches, record the last template in the short-term memory space as M_K and match against the templates in the long-term memory space.
(3) Long-term memory space matching.

Sequentially match the color histogram feature stored in the instantaneous memory space against the S templates in the long-term memory space and calculate the similarity ρ. Define the matching threshold of the long-term memory space templates as T_l; if ρ > T_l, the matching is defined as successful, otherwise as failed.

If a matching template is found, it is updated according to formula (4); meanwhile, if M_K is not worth remembering, the matched template is extracted and M_K is forgotten, whereas if M_K is worth remembering, the matched template and M_K are swapped.

If no matching template exists in the long-term memory space, the color histogram feature is stored in the first position of the short-term memory space as the current template; if M_K is not worth remembering, M_K is forgotten; if M_K is worth remembering and the long-term memory space is not full, M_K is memorized into the long-term memory space; if the long-term memory space is full, the weight of M_K is compared with the weights of the long-term memory space templates and the template with the smallest weight is forgotten.
In addition, when the target matching template parameter q_t in the first-layer memory space is updated, the spatio-temporal context model in the second-layer memory space is updated simultaneously, according to the rule:

H^stc_{t+1}(x) = (1 − ε) H^stc_t(x) + ε h^sc_t(x)   (5)

where H^stc_t is the spatio-temporal context model of the last frame, h^sc_t is the spatial context model of the last frame, H^stc_{t+1} is the spatio-temporal context model of the current frame, and ε is the update rate.
Advantageous effects:
1. High speed. In the moving target tracking method based on combining the human brain memory mechanism with spatio-temporal context, the STC method converts the confidence-map computation into the Fourier transform domain, which greatly reduces the complexity of the tracking algorithm and gives it good speed characteristics. The introduction of the memory mechanism only affects the target template updating process and has relatively little influence on speed, so the proposed method retains a high running speed.
2. And the robustness is strong. The invention introduces a human brain memory mechanism-based model into the space-time context model updating process of the STC method, so that the algorithm memorizes the scene appearing before in tracking, thereby continuously and stably tracking when the current target has the problems of illumination change, posture mutation, shielding, reappearance after transient disappearance and the like, and effectively improving the robustness of the algorithm. Meanwhile, a confidence map is calculated according to the space-time context information to obtain N target center position candidate points, and the target center position with the maximum similarity to the target template is selected as a final tracking result, so that errors caused by the confidence map are reduced.
3. Strong anti-occlusion capability. The disclosed moving target tracking method combines a human brain memory mechanism model with spatio-temporal context to form its model updating strategy. When the target is occluded, the method can effectively resist the occlusion by memorizing the scene that appeared before the occlusion, improving the accuracy of target tracking.
Drawings
FIG. 1 is a flow chart of a space-time context target tracking method based on human brain memory mechanism according to the present invention;
FIG. 2 illustrates the detailed process of memory space and spatiotemporal context update in the method of the present invention;
FIG. 3 shows the tracking results of the inventive method and conventional STC method, particle filter method;
fig. 4 shows the tracking accuracy curves of the inventive method and the conventional STC method and particle filter method.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example (b):
the overall process of the target tracking method based on the combination of the human brain memory mechanism and the space-time context disclosed by the embodiment is shown as the attached figure 1, and specifically comprises the following steps:
step 1: memory space and tracking window are initialized.
Initialize two layers of memory spaces, each layer constructed as an instantaneous memory space, a short-term memory space, and a long-term memory space. The short-term memory space and the long-term memory space each store S color histogram features. The instantaneous memory space stores the color histogram feature of the current-frame target; the short-term and long-term memory spaces of the first layer store the target matching template parameters q_t; the short-term and long-term memory spaces of the second layer store the spatio-temporal context models.
Inputting a first frame of a video, and determining an initial target tracking window.
Step 2: a first frame spatial context model is learned.
Given the initial target tracking window from step 1, compute the confidence map, calculate the spatial context model of the first frame from the confidence map and the context prior probability of the first frame, and use this spatial context model as the spatio-temporal context model for the next frame. The spatial context model is the conditional probability function p(x | c(z), o) that models the spatial relationship between the target and its surrounding context information, defined as:

h^sc(x − z) = p(x | c(z), o)   (6)
The relationship among the confidence map, the context prior probability and the spatial context model is:

c(x) = Σ_{z∈Ω_c(x*)} h^sc(x − z) I(z) w_σ(z − x*) = h^sc(x) ⊗ (I(x) w_σ(x − x*))   (7)

where I(·) is the image intensity, w_σ(·) is a weighted Gaussian function centered at the target position x*, Ω_c(x*) is the local context region, and ⊗ denotes convolution.

The spatial context model is computed as:

h^sc(x) = F^{-1}( F(b e^{−|(x − x*)/α|^β}) / F(I(x) w_σ(x − x*)) )   (8)

where b is a normalization constant, α is a scale parameter, and β is a shape parameter.
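The spatial context model is obtained by deconvolution in the Fourier domain. A minimal numpy sketch (the regularization constant is our addition to avoid division by zero; names are illustrative):

```python
import numpy as np

def learn_spatial_context(conf_map, prior, eps=1e-8):
    """Learn the spatial context model h^sc by Fourier-domain
    deconvolution: h^sc = F^{-1}( F(conf_map) / F(prior) ), where
    `prior` is I(x) w_sigma(x - x*). `eps` guards near-zero frequencies."""
    Fc = np.fft.fft2(conf_map)
    Fp = np.fft.fft2(prior)
    return np.real(np.fft.ifft2(Fc / (Fp + eps)))
```

With a delta prior the deconvolution returns the confidence map itself, which is a convenient correctness check.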
Step 3: target positioning.

Compute the confidence map from the spatio-temporal context model and find its maximum, which gives the target position; however, due to the error characteristics of the confidence map, the true target position may lie at the second-largest or another position of the confidence map. Therefore, the N positions with the largest confidence-map values are selected as target center position candidate points.
The confidence map is computed as:

c_t(x) = F^{-1}( F(H^stc_t(x)) ⊙ F(I_t(x) w_σ(x − x*_{t−1})) )   (9)

where H^stc_t is the spatio-temporal context model and I_t(x) w_σ(x − x*_{t−1}) is the context prior probability.
The target position candidate points are computed as:

{x*_i}_{i=1,…,N} = arg max^{(N)}_x c_t(x)   (10)

i.e., the N largest-maximum positions searched in the confidence map are taken as target position candidate points.
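Selecting the candidate points amounts to taking the N highest positions of the confidence map. A minimal sketch (function name illustrative):

```python
import numpy as np

def top_n_candidates(conf_map, n=5):
    """Return (row, col) coordinates of the n highest-confidence
    positions, ordered by decreasing confidence; these candidates are
    re-ranked by template similarity in step 4."""
    flat = np.argsort(conf_map.ravel())[::-1][:n]
    return [tuple(int(v) for v in np.unravel_index(i, conf_map.shape))
            for i in flat]
```

With n = 1 this degenerates to the ordinary arg-max used by the original STC method.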
Step 4: calculate the color histogram features of the target position candidate points.

Compute the color histogram features of the N candidate target tracking windows and their similarities to the color histogram features stored in the memory space, and take the candidate point with the maximum similarity as the tracking result of the current frame.
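The patent does not fix the exact color-histogram similarity measure; as an assumption, the sketch below uses the Bhattacharyya coefficient, a common choice for comparing normalized histograms:

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Normalized intensity histogram of a tracking-window patch
    (a single-channel stand-in for the color histogram feature)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def similarity(p, q):
    """Bhattacharyya coefficient between two normalized histograms;
    equals 1 for identical histograms and 0 for disjoint ones."""
    return float(np.sum(np.sqrt(p * q)))
```

The candidate whose window histogram maximizes this similarity against the memory-space templates becomes the tracking result of the current frame.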
Step 5: update the memory space.
And updating the memory space and the space-time context model according to the matching rule to prepare for the prediction and tracking of the next frame target.
The updating process is as follows:
(1) Instantaneous memory space storage.

Input the current frame image of the video and store the color histogram feature of the current frame in the instantaneous memory space.
(2) Short-term memory space matching.
Store the current template in the first position of the short-term memory space, and sequentially match the color histogram feature stored in the instantaneous memory space against the S templates in the short-term memory space. First calculate the similarity ρ between the instantaneous-memory feature and the first template in the short-term memory space, and define the matching threshold of the current template as T_c; if ρ > T_c, the matching succeeds, otherwise it fails and matching continues sequentially against the last (S − 1) templates of the short-term memory space.

Define the matching threshold of the last (S − 1) templates of the short-term memory space as T_s; if ρ > T_s, the matching is defined as successful, otherwise as failed.
If the matching succeeds in the short-term memory space, the target template is updated according to the current template, as shown in the following formula:

q_t = (1 − ε) q_{t−1} + ε p   (11)

where q_t is the target matching template parameter of the current frame, p is the color histogram feature in the instantaneous memory space, and ε is the update rate.

If none of the S templates in the short-term memory space matches, record the last template in the short-term memory space as M_K and match against the templates in the long-term memory space.
(3) Long-term memory space matching.

If the matching in step (2) fails, sequentially match the color histogram feature stored in the instantaneous memory space against the S templates in the long-term memory space and calculate the similarity ρ. Define the matching threshold of the long-term memory space templates as T_l; if ρ > T_l, the matching is defined as successful, otherwise as failed.

If a matching template is found, it is updated according to formula (11); meanwhile, if M_K is not worth remembering, the matched template is extracted and M_K is forgotten, whereas if M_K is worth remembering, the matched template and M_K are swapped.

If no matching template exists in the long-term memory space, the color histogram feature is stored in the first position of the short-term memory space as the current template; if M_K is not worth remembering, M_K is forgotten; if M_K is worth remembering and the long-term memory space is not full, M_K is memorized into the long-term memory space; if the long-term memory space is full, the weight of M_K is compared with the weights of the long-term memory space templates and the template with the smallest weight is forgotten.
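The step-5 matching cascade can be sketched as follows. This is a simplified illustration: the "worth remembering" test is reduced to a caller-supplied flag, and weight-based forgetting is reduced to dropping the oldest long-term template; neither simplification comes from the patent.

```python
def update_memory(feature, mem, similarity, mk_memorable=True,
                  T_s=0.85, T_l=0.85, eps=0.05):
    """Matching cascade of step 5. `mem` is a dict with keys 'short'
    (short-term template list), 'long' (long-term template list) and
    'S' (capacity). Returns which memory level absorbed the feature."""
    short, long_, S = mem['short'], mem['long'], mem['S']
    # (2) short-term matching: update the first template exceeding T_s
    for i, tpl in enumerate(short):
        if similarity(feature, tpl) > T_s:
            short[i] = (1 - eps) * tpl + eps * feature   # formula (11)
            return 'short'
    m_k = short[-1] if short else None   # last short-term template M_K
    # (3) long-term matching
    for i, tpl in enumerate(long_):
        if similarity(feature, tpl) > T_l:
            long_[i] = (1 - eps) * tpl + eps * feature
            if m_k is not None and mk_memorable:
                long_[i], short[-1] = short[-1], long_[i]  # swap with M_K
            return 'long'
    # no match anywhere: feature becomes the new current template
    short.insert(0, feature)
    del short[S:]                         # keep at most S short-term templates
    if m_k is not None and mk_memorable:  # remember M_K in long-term memory
        if len(long_) >= S:
            long_.pop(0)                  # simplified forgetting
        long_.append(m_k)
    return 'new'
```

Templates here are any values supporting scalar arithmetic (e.g. numpy histograms); `similarity` is the histogram similarity of step 4.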
In addition, when the target matching template parameter q_t in the first-layer memory space is updated, the spatio-temporal context model in the second-layer memory space is updated simultaneously, according to the rule:

H^stc_{t+1}(x) = (1 − ε) H^stc_t(x) + ε h^sc_t(x)   (12)

where H^stc_t is the spatio-temporal context model of the last frame, h^sc_t is the spatial context model of the last frame, H^stc_{t+1} is the spatio-temporal context model of the current frame, and ε is the update rate.
In this embodiment, N in step 3 is the number of target center position candidate points, taken as N = 5. In step 5, T_c, T_s and T_l are respectively the matching threshold of the current template, of the short-term memory space templates, and of the long-term memory space templates, with empirical values T_c = 0.87, T_s = 0.85 and T_l = 0.85. S in step 5 is the number of templates stored in each of the short-term and long-term memory spaces, taken as S = 5.
the simulation effect of the invention can be illustrated by the following simulation experiments:
1. Simulation conditions:
the invention uses MATLAB 2013a platform on an Intel (R) core (TM) i3 CPU 3.07GHz, 4.00G PC to test the video sequence in the set (A) of Visual Tracker Benchmark video test (http:// cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html) And (6) completing the simulation.
2. Simulation results:
Fig. 3(a) shows the tracking results on a video sequence with target illumination variation (frames 45, 70, 90 and 120); the rectangular boxes mark the tracking results of the conventional STC method, the particle filter tracking algorithm and the method of the present invention. As can be seen from Fig. 3(a), the invention can accurately track the moving target under obvious illumination changes. Fig. 3(b) shows the tracking results on a video sequence with sudden changes in target pose (frames 234, 244, 254 and 285); as can be seen from Fig. 3(b), the method can accurately track the target through abrupt pose changes. Fig. 3(c) shows the tracking results on a long video sequence (frames 343, 373, 393 and 412); as can be seen from Fig. 3(c), the invention is highly robust and still achieves stable tracking over the long sequence.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (4)
1. A spatio-temporal context target tracking method based on a human brain memory mechanism, characterized by comprising the following steps:
step 1: initializing memory spaces and tracking windows
Initializing two layers of memory spaces, the two layers being used respectively for storing the target matching template parameters q_t and the spatio-temporal context models, each layer being constructed as an instantaneous memory space, a short-term memory space, and a long-term memory space;
step 2: learning a first frame spatial context model
Obtaining the confidence map given the initial target tracking window of step 1, calculating the spatial context model of the first frame from the confidence map and the context prior probability of the first frame, and simultaneously taking the spatial context model of the first frame as the spatio-temporal context model of the next frame;
step 3: target localization
Calculating the confidence map from the spatio-temporal context model and finding its maximum, which is the target position; however, due to the error characteristics of the confidence map, the target position may be located at the second-largest or another position of the confidence map, so the N positions with the largest confidence-map values are selected as target center position candidate points, the confidence map being computed as:

c_t(x) = F^{-1}( F(H^stc_t(x)) ⊙ F(I_t(x) w_σ(x − x*_{t−1})) )   (1)

where H^stc_t is the spatio-temporal context model, I_t(x) w_σ(x − x*_{t−1}) is the context prior probability, and F and F^{-1} denote the Fourier transform and the inverse Fourier transform, respectively;
the target position candidate points are computed as:

{x*_i}_{i=1,…,N} = arg max^{(N)}_x c_t(x)   (2)

i.e., the N largest-maximum positions searched in the confidence map are taken as target position candidate points;
step 4: computing target position candidate point color histogram features
Calculating and obtaining color histogram features of the N candidate target tracking windows and the similarity between the N color histogram features and the color histogram features stored in the memory space, and taking the candidate point with the maximum similarity as the tracking result of the current frame;
step 5: updating the memory space
Updating a memory space and a space-time context model according to a matching rule to prepare for prediction and tracking of a next frame target, wherein the specific matching updating process is as follows:
(1) instantaneous memory space storage

inputting the current frame image of the video, and storing the color histogram feature of the current frame in the instantaneous memory space;
(2) short-term memory space matching
Storing the current template in the first position of the short-term memory space, sequentially matching the color histogram feature stored in the instantaneous memory space against the S templates in the short-term memory space, first calculating the similarity ρ to the first template in the short-term memory space and defining the matching threshold of the current template as T_c; if ρ > T_c, the matching succeeds, otherwise it fails and matching continues sequentially against the last (S − 1) templates of the short-term memory space;

defining the matching threshold of the last (S − 1) templates of the short-term memory space as T_s; if ρ > T_s, the matching is defined as successful, otherwise as failed; if the matching succeeds in the short-term memory space, updating the color histogram features according to the current template, as shown in the following formula:
q_t = (1 − ε) q_{t−1} + ε p   (3)

where q_t is the target matching template parameter of the current frame, p is the color histogram feature in the instantaneous memory space, and ε is the update rate;

if none of the S templates in the short-term memory space matches, recording the last template in the short-term memory space as M_K and simultaneously matching against the templates in the long-term memory space;
(3) long term memory space matching
If the matching in the step (2) fails, sequentially matching the color histogram features stored in the instantaneous memory space with the S templates in the long-term memory space, calculating the similarity rho, and defining the matching threshold of the long-term memory space template as TlIf ρ is>TlIf yes, defining the matching to be successful, otherwise defining the matching to be failed;
If a match is found, the matching template is updated according to equation (3). If MK cannot be memorized, the matched template is extracted and MK is forgotten; if MK can be memorized, the matching template is swapped with MK;
If no matching template exists in the long-term memory space, the color histogram features are stored in the first position of the short-term memory space as the current template. If MK cannot be memorized, MK is forgotten; if MK can be memorized and the long-term memory space is not full, MK is memorized into the long-term memory space; if MK can be memorized but the long-term memory space is full, the weight of MK is compared with the weights of the long-term memory space templates, and the template with the smaller forgetting weight is forgotten.
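The matching cascade of steps (2) and (3) can be sketched as a single pass over the two memory spaces. This is a hypothetical reading of the rules above: the similarity measure (Bhattacharyya coefficient), the thresholds, and the `memorable` / `weight` predicates standing in for the patent's memorability test and forgetting weight are all assumptions for illustration:

```python
import numpy as np

def bhattacharyya(a, b):
    # Similarity rho between two normalized histograms; the patent
    # does not name the measure, so this coefficient is an assumption.
    return float(np.sum(np.sqrt(a * b)))

def match_and_update(p, short_mem, long_mem, Tc=0.9, Ts=0.85, Tl=0.8,
                     eps=0.05, S=5,
                     memorable=lambda t: True,
                     weight=lambda t: 1.0):
    # p: current-frame histogram held in the instantaneous memory space.
    # short_mem: up to S short-term templates, current template first.
    # long_mem: up to S long-term templates.

    # (2) Short-term matching: current template with threshold Tc,
    # then the remaining S-1 templates with threshold Ts.
    for i, tmpl in enumerate(short_mem):
        thr = Tc if i == 0 else Ts
        if bhattacharyya(p, tmpl) > thr:
            short_mem[i] = (1 - eps) * tmpl + eps * p     # equation (3)
            return short_mem, long_mem

    m_k = short_mem[-1]                                   # template M_K

    # (3) Long-term matching with threshold Tl.
    for j, tmpl in enumerate(long_mem):
        if bhattacharyya(p, tmpl) > Tl:
            updated = (1 - eps) * tmpl + eps * p          # equation (3)
            short_mem[-1] = updated      # matched template replaces M_K
            if memorable(m_k):
                long_mem[j] = m_k        # swap: M_K enters long-term
            else:
                long_mem.pop(j)          # extracted; M_K forgotten
            return short_mem, long_mem

    # No match anywhere: p becomes the new current template.
    short_mem = [p] + short_mem[:-1]
    if memorable(m_k):
        if len(long_mem) < S:
            long_mem.append(m_k)         # memorize M_K
        else:
            # Forget whichever of M_K / the weakest long-term
            # template has the smaller forgetting weight.
            j_min = min(range(len(long_mem)),
                        key=lambda j: weight(long_mem[j]))
            if weight(m_k) > weight(long_mem[j_min]):
                long_mem[j_min] = m_k
    return short_mem, long_mem
```

One call per frame updates both memory spaces; a total miss pushes the new observation to the head of short-term memory and lets MK either enter long-term memory or be forgotten, mirroring the text above.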
In addition, whenever the target matching template parameter qt in the first-layer memory space is updated, the spatio-temporal context model in the second-layer memory space is updated simultaneously.
2. The human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
Two layers of memory space are created, each divided into three memory spaces: an instantaneous memory space, a short-term memory space and a long-term memory space; the two layers are used to store the target matching template parameters qt and the spatio-temporal context relationship, respectively;
The spatial context model of the first frame is computed from the target position of the first frame and stored, as the spatio-temporal context model for the next frame, in the short-term memory space of the first-layer memory space; the color histogram features of the tracking window are initialized and stored in the short-term memory space of the second-layer memory space.
3. The human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
In step 3, the larger the number N of confidence map candidate points, the more accurate the tracking result, but the greater the computational cost; N is taken as 5 to improve tracking accuracy while keeping the computation manageable.
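One plausible reading of drawing N candidate points from the confidence map is to take its N strongest responses. The patent does not spell out the selection rule, so the function below is a hypothetical sketch:

```python
import numpy as np

def top_candidates(conf_map, n=5):
    # Return the (row, col) positions of the n largest confidence-map
    # responses, strongest first. n = 5 follows the claim's trade-off
    # between accuracy and computational cost.
    flat = np.argsort(conf_map, axis=None)[::-1][:n]
    return [tuple(int(v) for v in np.unravel_index(i, conf_map.shape))
            for i in flat]
```

Each returned position can then be scored against the memory-space templates, with the best-matching candidate taken as the tracked target location.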
4. The human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
The method introduces a visual-information-processing cognitive model of the human brain memory mechanism into the spatio-temporal context model updating process of the STC method, so that each template passes through the transmission and processing of the three spaces of instantaneous memory, short-term memory and long-term memory, forming a memory-based model updating strategy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710733989.0A CN107657627B (en) | 2017-08-24 | 2017-08-24 | Space-time context target tracking method based on human brain memory mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107657627A CN107657627A (en) | 2018-02-02 |
CN107657627B true CN107657627B (en) | 2021-07-30 |
Family
ID=61127808
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416796A (en) * | 2018-02-13 | 2018-08-17 | 中国传媒大学 | The human body motion tracking method of two-way markov Monte Carlo particle filter |
CN108492318B (en) * | 2018-03-01 | 2022-04-26 | 西北工业大学 | Target tracking method based on bionic technology |
CN115712354B (en) * | 2022-07-06 | 2023-05-30 | 成都戎盛科技有限公司 | Man-machine interaction system based on vision and algorithm |
CN116307283B (en) * | 2023-05-19 | 2023-08-18 | 青岛科技大学 | Precipitation prediction system and method based on MIM model and space-time interaction memory |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408592B (en) * | 2016-09-09 | 2019-04-05 | 南京航空航天大学 | A kind of method for tracking target updated based on target template |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |