CN107657627A - Space-time contextual target tracking based on human brain memory mechanism - Google Patents



Publication number
CN107657627A
CN107657627A
Authority
CN
China
Prior art keywords
memory space
target
template
space
matching
Prior art date
Legal status
Granted
Application number
CN201710733989.0A
Other languages
Chinese (zh)
Other versions
CN107657627B (en)
Inventor
宋勇
李旭
赵尚男
赵宇飞
李云
陈学文
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201710733989.0A priority Critical patent/CN107657627B/en
Publication of CN107657627A publication Critical patent/CN107657627A/en
Application granted granted Critical
Publication of CN107657627B publication Critical patent/CN107657627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/262Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a spatio-temporal context (STC) target tracking method based on the human brain memory mechanism. The method introduces a visual-information-processing cognitive model of the human brain memory mechanism into the spatio-temporal relationship model update process of the STC method, so that each template is transmitted through and processed in three memory spaces (transient, short-term, and long-term), forming a memory-based model update strategy. By memorizing previously observed scenes, the method can keep tracking robustly when the current target undergoes illumination change, abrupt pose change, occlusion, or reappearance after brief disappearance. In addition, when the confidence map is computed from the spatio-temporal context information, N candidate target-center points are set, and the candidate with the maximum similarity to the target template is chosen as the final tracking result, reducing errors caused by the confidence map and improving tracking accuracy. The result is a moving-target tracking method with high precision and strong robustness.

Description

Space-time context target tracking method based on human brain memory mechanism
Technical Field
The invention relates to a method for tracking a moving target in a video image, in particular to a spatio-temporal context (STC) target tracking method based on a human brain memory mechanism, and belongs to the technical field of computer vision.
Background
As an important research direction in the field of computer vision, target tracking has wide application prospects in the fields of video monitoring, man-machine interaction and intelligent transportation.
According to how the target appearance is modeled, typical target tracking methods can be divided into generative and discriminative methods. A generative method learns a target model from features and then searches for the region closest to that model to realize tracking. A discriminative method casts tracking as a binary classification problem, models the target and the background separately, and uses background and target information to find a decision boundary separating the two. Compared with generative methods, discriminative methods require relatively little computation and can use background information effectively, so they are gradually becoming the mainstream of target tracking.
As a novel discriminative tracking method, the spatio-temporal context (STC) tracking method (Zhang Kaihua, Zhang Lei, et al., "Fast Tracking via Spatio-Temporal Context Learning," Computer Science, 2013) models the spatio-temporal relationship between the target to be tracked and its local context through a Bayesian framework, obtaining the statistical correlation between the target and its surrounding area. A confidence map is computed from this spatio-temporal relationship, and the position with the maximum likelihood probability in the confidence map predicts the new target position. The method combines temporal and spatial information, accounts for both the target and its surroundings, and learns the spatio-temporal relationship via the Fourier transform, so it achieves high target tracking accuracy and speed. The STC method therefore has important application prospects in the field of target tracking.
On the other hand, in the STC method the spatio-temporal relationship established under the Bayesian framework is the statistical correlation between the target and its surrounding area on low-level features (pixel intensity values). Under illumination change, abrupt target pose change, occlusion, or reappearance after brief disappearance, the tracker easily drifts from or loses the target, reducing tracking accuracy. Studying the robustness of STC-based target tracking under such complex conditions is therefore of significant research interest.
Disclosure of Invention
The invention provides a spatio-temporal context target tracking method based on a human brain memory mechanism, aiming to solve the reduced tracking accuracy of the STC method under illumination change, abrupt target pose change, occlusion, and reappearance after brief disappearance. The method introduces a visual-information-processing cognitive model of the human brain memory mechanism into the spatio-temporal context model update process of the STC method, so that each template is transmitted through and processed in three spaces (transient memory, short-term memory, and long-term memory), forming a memory-based model update strategy. By effectively memorizing previously observed scenes, the method can keep tracking robustly when the current target undergoes illumination change, pose mutation, occlusion, or reappearance after brief disappearance. In addition, when the confidence map is computed from the spatio-temporal context information, N candidate target-center points are set, and the candidate with the maximum similarity to the target template is selected as the final tracking result, reducing errors caused by the confidence map and improving tracking accuracy. The result is a moving-target tracking method with high precision and strong robustness.
The invention is realized by the following technical scheme.
The invention discloses a spatio-temporal context target tracking method based on a human brain memory mechanism. A visual-information-processing cognitive model based on the human brain memory mechanism is introduced into the spatio-temporal relationship model update process of the STC method, forming a new memory-based model update strategy in which each template is transmitted through and processed in the transient, short-term, and long-term memory spaces.
During tracking, the target template is updated with different strategies according to how well the current-frame target template matches the templates in the memory space. If the similarity coefficient between the color-histogram features of the current-frame target template and those of a template in the memory space meets the requirement, the match succeeds, and the parameters of the matched memory-space template are updated in preparation for predicting and tracking the target in the next frame. If the match fails, the current target template is stored in the memory space as a new template provided certain conditions are met. By memorizing previously observed scenes, the method can still track the target accurately under illumination change, abrupt pose change, occlusion, and reappearance after brief disappearance.
In addition, to reduce errors caused by the confidence map and improve tracking accuracy, N candidate target-center points are set when the confidence map is computed from the spatio-temporal context information, and the target-center position with the maximum similarity to the target template is selected as the final tracking result.
The invention discloses a space-time context target tracking method based on a human brain memory mechanism, which comprises the following steps:
step 1: memory space and tracking window are initialized.
Initialize two layers of memory space, used respectively for storing the features q_t of the target matching template and the spatio-temporal context. Each layer is constructed as a transient memory space, a short-term memory space, and a long-term memory space.
Step 2: a first frame spatial context model is learned.
A confidence map is obtained given the initial target tracking window from step 1; the spatial context model of the first frame is computed from this confidence map and the context prior probability of the first frame, and is also used as the spatio-temporal context model of the next frame. The spatial context model is a conditional probability function p(x | c(z), o) describing the spatial relationship between the modeled target and its surrounding context information, defined as:
h^sc(x − z) = p(x | c(z), o)    (1)
Step 3: Locate the target.
The confidence map is computed from the spatio-temporal context model, and its maximum is taken as the target location; however, owing to errors in the confidence map, the target may actually lie at the second-largest or another position. Therefore, the N positions with the largest confidence values are selected as candidate target-center points.
The confidence map is computed as:
c_{t+1}(x) = F^{-1}( F(H^{stc}_{t+1}(x)) ⊙ F(I_{t+1}(x) w_σ(x − x*_t)) )    (2)
where H^{stc}_{t+1} is the spatio-temporal context model, I_{t+1}(x) w_σ(x − x*_t) is the context prior probability, F denotes the Fourier transform, and ⊙ is element-wise multiplication.
The candidate target-center points are the N largest local maxima of the confidence map:
x*^{(i)}_{t+1} = argmax^{(i)}_x c_{t+1}(x),  i = 1, 2, …, N    (3)
where argmax^{(i)} denotes the position of the i-th largest value.
and 4, step 4: and calculating the color histogram feature of the target position candidate point.
Compute the color-histogram features of the N candidate target tracking windows and their similarity coefficients with the target template stored in the memory space; take the candidate with the maximum similarity as the tracking result of the current frame.
Step 5: Update the memory space.
Update the memory space and the spatio-temporal context model according to the matching rules, in preparation for predicting and tracking the target in the next frame.
The updating process is as follows:
(1) Transient memory space storage.
The current estimated template from the video input, i.e. the target estimate (color-histogram feature) of the current frame, is stored in the transient memory space.
(2) Short-term memory space matching.
The current template is stored in the first position of the short-term memory space. The color histogram stored in the transient memory space is matched in turn against the S templates in the short-term memory space; the similarity is computed, and the match succeeds or fails according to the comparison of the similarity with the matching threshold.
If a match succeeds in the short-term memory space, the target template is updated from the current template as:
q_t = (1 − ε) q_{t−1} + ε p    (4)
where q_t is the updated template, q_{t−1} is the previous template, p is the estimated template from the transient memory space, and ε is the update rate.
If none of the S templates in the short-term memory space matches, the last template in the short-term memory space is recorded as M_K, and matching proceeds against the templates in the long-term memory space.
(3) Long-term memory space matching.
The color histogram stored in the transient memory space is matched in turn against the S templates in the long-term memory space; the similarity is computed and compared with the matching threshold to decide whether the match succeeds.
If a match is found, the matching template is updated according to equation (4). If M_K is not memorable, the matched template is extracted and M_K is forgotten; if M_K is memorable, the matched template and M_K are swapped.
If no template in the long-term memory space matches, the estimated template is stored as the current template in the first position of the short-term memory space. If M_K is not memorable, M_K is forgotten; if M_K is memorable and the long-term memory space is not full, M_K is memorized into the long-term memory space; if M_K is memorable and the long-term memory space is full, the weight of M_K is compared with the weights of the long-term memory-space templates, and the template with the smaller weight is forgotten.
In addition, when the matching-template parameter q_t in the first memory layer is updated, the spatio-temporal context information in the second memory layer is updated at the same time:
H^{stc}_{t+1} = (1 − ρ) H^{stc}_t + ρ h^{sc}_t    (5)
where H^{stc}_t is the spatio-temporal context information of the previous frame, h^{sc}_t is the spatial context information of the current frame, H^{stc}_{t+1} is the updated spatio-temporal context information for the next frame, and ρ is the update rate.
Beneficial effects:
1. High speed. In this moving-target tracking method combining the human brain memory mechanism with spatio-temporal context, the STC method converts the confidence-map computation into the Fourier transform domain, greatly reducing the complexity of the tracking algorithm and giving good speed characteristics. The memory mechanism affects only the target-template update process, so its impact on speed is relatively small, and the method retains a high running speed.
2. Strong robustness. The invention introduces a model based on the human brain memory mechanism into the spatio-temporal context model update process of the STC method, so the algorithm memorizes previously observed scenes during tracking. It can therefore keep tracking stably when the current target undergoes illumination change, pose mutation, occlusion, or reappearance after brief disappearance, effectively improving robustness. Meanwhile, N candidate target-center points are obtained from the confidence map computed with the spatio-temporal context information, and the candidate with the maximum similarity to the target template is selected as the final tracking result, reducing errors caused by the confidence map.
3. Strong anti-occlusion capability. The disclosed moving-target tracking method combines the human brain memory-mechanism model with spatio-temporal context to form the model update strategy. When the target is occluded, memorizing the scene that appeared before the occlusion lets the method handle the occlusion effectively and improves tracking accuracy.
Drawings
FIG. 1 is a flow chart of a space-time context target tracking method based on a human brain memory mechanism according to the present invention;
FIG. 2 illustrates the detailed process of memory space and spatiotemporal context update in the method of the present invention;
FIG. 3 compares the tracking results of the inventive method with those of the conventional STC method and the particle filter method;
FIG. 4 shows the tracking accuracy curves of the inventive method, the conventional STC method, and the particle filter method.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example:
the overall process of the target tracking method based on the combination of the human brain memory mechanism and the space-time context disclosed by the embodiment is shown as the attached figure 1, and specifically comprises the following steps:
step 1: memory space and tracking window are initialized.
Two layers of memory space are initialized; each layer is constructed as a transient memory space, a short-term memory space, and a long-term memory space. The short-term and long-term memory spaces each store S templates. The transient memory space stores the current-frame target data (the estimated template); the short-term and long-term spaces of the first layer store the features q_t of the target matching template; the short-term and long-term spaces of the second layer store the spatio-temporal context model.
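As a concrete illustration of this two-layer, three-space layout, the following Python sketch holds S templates in each bounded short-term and long-term space. The names (MemoryLayer, template_layer, context_layer) are illustrative assumptions, not identifiers from the patent:

```python
from collections import deque
from dataclasses import dataclass, field

S = 5  # templates per short-/long-term space (the embodiment uses S = 5)

@dataclass
class MemoryLayer:
    """One memory layer: a transient slot plus bounded short/long-term stores."""
    transient: object = None
    short_term: deque = field(default_factory=lambda: deque(maxlen=S))
    long_term: deque = field(default_factory=lambda: deque(maxlen=S))

# Layer 1 stores the matching-template features q_t; layer 2 stores the
# spatio-temporal context models. Both share the same three-space layout.
template_layer = MemoryLayer()
context_layer = MemoryLayer()
template_layer.short_term.append("q_0")  # e.g. the first-frame template
```

The bounded deques model the fixed capacity S of each space; the patent's forgetting rules decide *which* template leaves, which a real implementation would add on top of this container.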
Inputting a first frame of a video, and determining an initial target tracking window.
Step 2: a first frame spatial context model is learned.
A confidence map is obtained given the initial target tracking window from step 1; the spatial context model of the first frame is computed from this confidence map and the context prior probability of the first frame, and is also used as the spatio-temporal context model of the next frame. The spatial context model is a conditional probability function p(x | c(z), o) describing the spatial relationship between the modeled target and its surrounding context information, defined as:
h^sc(x − z) = p(x | c(z), o)    (6)
the relationship among the confidence map, the context prior probability and the spatial context model is as follows:
the spatial context model has the calculation formula as follows:
and step 3: and (6) positioning the target.
The confidence map is computed from the spatio-temporal context model, and its maximum is taken as the target position; however, owing to errors in the confidence map, the target may actually lie at the second-largest or another position. Therefore, the N positions with the largest confidence values are selected as candidate target-center points.
The confidence map is computed as:
c_{t+1}(x) = F^{-1}( F(H^{stc}_{t+1}(x)) ⊙ F(I_{t+1}(x) w_σ(x − x*_t)) )    (9)
where H^{stc}_{t+1} is the spatio-temporal context model and I_{t+1}(x) w_σ(x − x*_t) is the context prior probability.
The candidate target-center points are the N largest local maxima of the confidence map:
x*^{(i)}_{t+1} = argmax^{(i)}_x c_{t+1}(x),  i = 1, 2, …, N    (10)
where argmax^{(i)} denotes the position of the i-th largest value.
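The top-N selection of equation (10) amounts to taking the N largest entries of the confidence map; a minimal sketch:

```python
import numpy as np

def top_n_candidates(conf_map, n=5):
    """Return (row, col) positions of the n largest confidence values, best first."""
    order = np.argsort(conf_map, axis=None)[::-1][:n]
    return [tuple(int(v) for v in np.unravel_index(i, conf_map.shape)) for i in order]
```

The embodiment uses N = 5. A non-maximum-suppression step around each peak would be a natural refinement, though the patent text does not specify one.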
and 4, step 4: and calculating the color histogram feature of the target position candidate point.
Compute the color-histogram features of the N candidate target tracking windows and their similarity coefficients with the target template stored in the memory space; take the candidate with the maximum similarity as the tracking result of the current frame.
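The patent does not name the similarity coefficient it uses; a common choice for comparing color histograms is the Bhattacharyya coefficient, sketched here as an assumption:

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Normalized intensity histogram of an image patch (values in [0, 256))."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def bhattacharyya(p, q):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def best_candidate(candidates, windows, template_hist):
    """Pick the candidate whose window histogram best matches the template."""
    sims = [bhattacharyya(color_histogram(w), template_hist) for w in windows]
    return candidates[int(np.argmax(sims))]
```

Whatever coefficient is used, the key point of step 4 is that the final position is chosen by template similarity rather than by the raw confidence-map maximum.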
Step 5: Update the memory space.
Update the memory space and the spatio-temporal context model according to the matching rules, in preparation for predicting and tracking the target in the next frame.
The updating process is as follows:
(1) Transient memory space storage.
The current estimated template from the video input, i.e. the target estimate (color-histogram feature) of the current frame, is stored in the transient memory space.
(2) Short-term memory space matching.
The current template is stored in the first position of the short-term memory space. The color histogram stored in the transient memory space is matched in turn against the S templates in the short-term memory space. For the first template (the current template), the similarity ρ is computed and compared with the matching threshold T_c: if ρ > T_c the match succeeds; otherwise it fails and matching continues in turn against the remaining (S − 1) templates in the short-term memory space.
For the remaining (S − 1) short-term templates, the matching threshold is T_s: if ρ > T_s the match succeeds; otherwise it fails.
If a match succeeds in the short-term memory space, the target template is updated from the current template as:
q_t = (1 − ε) q_{t−1} + ε p    (11)
where q_t is the updated template, q_{t−1} is the previous template, p is the estimated template from the transient memory space, and ε is the update rate.
If none of the S templates in the short-term memory space matches, the last template in the short-term memory space is recorded as M_K, and matching proceeds against the templates in the long-term memory space.
(3) Long-term memory space matching.
If matching in step (2) fails, the color histogram stored in the transient memory space is matched in turn against the S templates in the long-term memory space, computing the similarity ρ. The matching threshold of the long-term memory-space templates is T_l: if ρ > T_l the match succeeds; otherwise it fails.
If a match is found, the matching template is updated according to equation (11). If M_K is not memorable, the matched template is extracted and M_K is forgotten; if M_K is memorable, the matched template and M_K are swapped.
If no template in the long-term memory space matches, the estimated template is stored as the current template in the first position of the short-term memory space. If M_K is not memorable, M_K is forgotten; if M_K is memorable and the long-term memory space is not full, M_K is memorized into the long-term memory space; if M_K is memorable and the long-term memory space is full, the weight of M_K is compared with the weights of the long-term memory-space templates, and the template with the smaller weight is forgotten.
In addition, when the matching-template parameter q_t in the first memory layer is updated, the spatio-temporal context information in the second memory layer is updated at the same time, according to:
H^{stc}_{t+1} = (1 − ρ) H^{stc}_t + ρ h^{sc}_t    (12)
where H^{stc}_t is the spatio-temporal context information of the previous frame, h^{sc}_t is the spatial context information of the current frame, H^{stc}_{t+1} is the updated spatio-temporal context information for the next frame, and ρ is the update rate.
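The matching-and-forgetting flow of step 5 can be summarized in Python. This is a simplified sketch of one possible reading: it collapses the M_K memorability test into a single boolean, keeps the long-term template in place when M_K is not memorable, and omits the weight-based forgetting in a full long-term space, all of which the patent handles in more detail:

```python
def update_memory(estimate, short_term, long_term, similarity,
                  Tc=0.87, Ts=0.85, Tl=0.85, eps=0.1,
                  mk_memorable=False, long_capacity=5):
    """One update cycle: short-term match, else long-term match, else insert."""
    # (2) Short-term matching: first template uses T_c, the rest use T_s.
    for i, tmpl in enumerate(short_term):
        if similarity(estimate, tmpl) > (Tc if i == 0 else Ts):
            short_term[i] = (1 - eps) * tmpl + eps * estimate   # eq. (11)
            return "short-term"
    m_k = short_term[-1]                    # last short-term template, M_K
    # (3) Long-term matching.
    for i, tmpl in enumerate(long_term):
        if similarity(estimate, tmpl) > Tl:
            updated = (1 - eps) * tmpl + eps * estimate         # eq. (11)
            short_term[-1] = updated        # matched template replaces M_K
            if mk_memorable:
                long_term[i] = m_k          # swap: M_K takes its long-term slot
            return "long-term"
    # No match anywhere: the estimate becomes the new current template.
    short_term.pop()                        # M_K leaves the short-term space
    if mk_memorable and len(long_term) < long_capacity:
        long_term.append(m_k)               # memorize M_K while there is room
    short_term.insert(0, estimate)
    return "new"
```

The threshold defaults follow the embodiment's empirical values (T_c = 0.87, T_s = T_l = 0.85); `similarity` would be the color-histogram coefficient of step 4.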
In this embodiment, N in step 3 is the number of candidate target-center points, with N = 5. In step 5, T_c, T_s, and T_l are the matching thresholds of the current template, the short-term memory-space templates, and the long-term memory-space templates, with empirical values T_c = 0.87, T_s = 0.85, and T_l = 0.85. S in step 5 is the number of templates stored in each of the short-term and long-term memory spaces, with S = 5.
the simulation effect of the invention can be illustrated by the following simulation experiments:
1. Simulation conditions:
the invention uses MATLAB 2013a platform on a PC machine of Intel (R) Core (TM) i3CPU 3.07GHz,4.00G to test the video sequence in a set (video sequence) of a Visual Tracker Benchmark video test set (http:// cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html) And (6) completing the simulation.
2. Simulation results:
Fig. 3(a) shows the tracking results on a video sequence with target illumination change (frames 45, 70, 90, and 120); the rectangular boxes mark the tracking results of the conventional STC method, the particle filter tracking algorithm, and the method of the present invention. As seen in Fig. 3(a), the method tracks the target accurately under these conditions. Fig. 3(b) shows the tracking results on a video sequence with abrupt target pose change (frames 234, 244, 254, and 285); as seen in Fig. 3(b), the method tracks the target accurately as it reappears after obvious occlusion. Fig. 3(c) shows the tracking results on a long multi-frame video sequence (frames 343, 373, 393, and 412); as seen in Fig. 3(c), the method is highly robust and still achieves robust tracking over a long sequence.
The above detailed description illustrates the objects, technical solutions, and advantages of the present invention. It should be understood that it is only an example of the invention and is not intended to limit its scope; any modifications, equivalents, or improvements made within the spirit and principle of the present invention are included in the scope of the invention.

Claims (6)

1. A spatio-temporal context target tracking method based on a human brain memory mechanism, characterized by comprising the following steps:
step 1: memory space and tracking window are initialized.
Initialize two layers of memory space, used respectively for storing the features q_t of the target matching template and the spatio-temporal context. Each layer is constructed as a transient memory space, a short-term memory space, and a long-term memory space.
Step 2: a first frame spatial context model is learned.
A confidence map is obtained given the initial target tracking window from step 1; the spatial context model of the first frame is computed from this confidence map and the context prior probability of the first frame, and is used as the spatio-temporal context model of the next frame.
And step 3: and (6) positioning the target.
The confidence map is computed from the spatio-temporal context model, and its maximum is taken as the target position; however, owing to errors in the confidence map, the target may actually lie at the second-largest or another position. Therefore, the N positions with the largest confidence values are selected as candidate target-center points.
Step 4: Compute the color-histogram features of the candidate target positions.
Compute the color-histogram features of the N candidate target tracking windows and their similarity coefficients with the target template stored in the memory space; take the candidate with the maximum similarity as the tracking result of the current frame.
Step 5: Update the memory space.
Update the memory space and the spatio-temporal context model according to the matching rules, in preparation for predicting and tracking the target in the next frame.
2. The human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
step 1: creation of memory spaces
Two layers of memory space are created; each layer is divided into three memory spaces: a transient memory space, a short-term memory space, and a long-term memory space. The two layers store, respectively, the features q_t of the target matching template and the spatio-temporal context relationship.
Step 2: Compute the spatial context model of the first frame from the target position of the first frame, use it as the spatio-temporal context model of the next frame, and store it into the short-term memory space of the first memory layer; at the same time, initialize the color-histogram features of the tracking window and store them into the short-term memory space of the second memory layer.
3. The human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
In the target localization of step 3, because the target may appear at the maximum position of the confidence map or beside it, the confidence map is computed from the spatio-temporal context model, and the N positions with the largest confidence values are selected as candidate target-center points.
The maximum position of the confidence map is solved as

x*_t = arg max_{x ∈ Ω_c(x*_{t-1})} c_t(x) (1)

where c_t(x) is the confidence map computed from the spatiotemporal context model and Ω_c(x*_{t-1}) is the local context region centered at the target position of the previous frame. The target position candidate points are the N positions with the largest confidence values:

{x_i | c_t(x_1) ≥ c_t(x_2) ≥ … ≥ c_t(x_N), x_i ∈ Ω_c(x*_{t-1})}, i = 1, …, N (2)
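The selection of the N candidate points can be sketched as follows. The suppression radius `suppress` is an added assumption so that the candidates come from distinct peaks rather than adjacent pixels of one maximum:

```python
import numpy as np

def top_n_candidates(conf_map, n=5, suppress=3):
    """Return the n positions with the largest confidence values,
    blanking a small neighborhood around each pick so later picks
    come from other peaks of the confidence map."""
    cm = conf_map.astype(float)
    h, w = cm.shape
    picks = []
    for _ in range(n):
        idx = int(np.argmax(cm))
        y, x = divmod(idx, w)
        picks.append((int(y), int(x)))
        y0, y1 = max(0, y - suppress), min(h, y + suppress + 1)
        x0, x1 = max(0, x - suppress), min(w, x + suppress + 1)
        cm[y0:y1, x0:x1] = -np.inf  # suppress this peak's neighborhood
    return picks
```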
4. the human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
the updating process of the memory space and the spatiotemporal context model in step 5 is as follows:
(1) Storage in the instantaneous memory space.
The video input provides the current estimated template: the target estimation template (color histogram feature) of the current frame is stored in the instantaneous memory space.
(2) Short-term memory space matching.
The current template is stored in the first position of the short-term memory space. The color histogram stored in the instantaneous memory space is matched in turn against the S templates in the short-term memory space: first, its similarity ρ with the first template of the short-term memory space is computed, with the matching threshold of the current template defined as T_c. If ρ > T_c, the matching succeeds; otherwise, matching continues in turn against the remaining (S−1) templates of the short-term memory space.
The matching threshold of these remaining (S−1) templates is defined as T_s: if ρ > T_s, the matching is defined as successful; otherwise, it is defined as failed.
If the matching succeeds in the short-term memory space, the target template is updated from the current template as shown in the following formula:

q_t = (1 − ε)q_{t−1} + εp (3)

where q_t is the current template, p is the estimated template in the instantaneous memory space, and ε is the update rate.
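Formula (3) is a first-order linear blend of the previous template with the instantaneous estimate; a direct sketch:

```python
import numpy as np

def update_template(q_prev, p_est, eps=0.05):
    """Formula (3): blend the estimated template p from the
    instantaneous memory space into the current template at
    update rate eps (the value 0.05 is an assumed default)."""
    return (1.0 - eps) * q_prev + eps * p_est
```

A small ε keeps the template stable against transient appearance changes while still adapting over many frames.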
If none of the S templates in the short-term memory space matches, the last template in the short-term memory space is recorded as M_K, and matching then proceeds against the templates in the long-term memory space.
(3) Long-term memory space matching.
If the matching in step (2) fails, the color histogram stored in the instantaneous memory space is matched in turn against the S templates in the long-term memory space and the similarity ρ is computed, with the matching threshold of the long-term memory space templates defined as T_l. If ρ > T_l, the matching is defined as successful; otherwise, it is defined as failed.
If a matching template is found, it is updated according to formula (3); meanwhile, if M_K is not memorable, the matched template is extracted and M_K is forgotten, and if M_K is memorable, the matched template is exchanged with M_K.
If no matching template exists in the long-term memory space, the estimated template is stored in the first position of the short-term memory space as the current template. If M_K is not memorable, M_K is forgotten; if M_K is memorable and the long-term memory space is not full, M_K is memorized into the long-term memory space; if M_K is memorable and the long-term memory space is full, the weight of M_K is compared with the weights of the long-term memory space templates, and the template with the smaller weight is forgotten.
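The matching cascade of steps (2) and (3) can be sketched as below, assuming caller-supplied `similarity` and `blend` functions and fixed thresholds T_c, T_s, T_l; the memorability test and weight bookkeeping for M_K are simplified here to "always memorable".

```python
from collections import deque

class MemorySpace:
    """Sketch of one memory layer: S short-term and S long-term
    template slots with the claimed matching cascade."""
    def __init__(self, s=5, t_c=0.95, t_s=0.85, t_l=0.85):
        self.short = deque(maxlen=s)  # short-term memory space
        self.long = deque(maxlen=s)   # long-term memory space
        self.t_c, self.t_s, self.t_l = t_c, t_s, t_l  # thresholds (assumed values)

    def update(self, estimate, similarity, blend):
        """Match the instantaneous estimate and update; returns which
        space matched ('current', 'short', 'long') or 'new'."""
        # (2) first against the current template (first short-term slot)
        if self.short and similarity(estimate, self.short[0]) > self.t_c:
            self.short[0] = blend(self.short[0], estimate)
            return "current"
        # then against the remaining (S-1) short-term templates
        for i in range(1, len(self.short)):
            if similarity(estimate, self.short[i]) > self.t_s:
                self.short[i] = blend(self.short[i], estimate)
                return "short"
        # (3) fall back to the long-term memory space
        for i in range(len(self.long)):
            if similarity(estimate, self.long[i]) > self.t_l:
                self.long[i] = blend(self.long[i], estimate)
                return "long"
        # no match: the displaced last short-term template M_K is
        # memorized into the long-term space (memorability assumed),
        # and the estimate becomes the new current template
        if len(self.short) == self.short.maxlen:
            self.long.append(self.short[-1])  # memorize M_K
        self.short.appendleft(estimate)
        return "new"
```

With this structure, a target appearance that vanished from the short-term space (e.g. before an occlusion) can still be recovered from the long-term space when it reappears.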
In addition, when the matching template parameter q_t in the first-layer memory space is updated, the spatiotemporal context information in the second-layer memory space is updated simultaneously, with the update rule given by the following formula:

H^{stc}_{t+1} = (1 − ρ)H^{stc}_t + ρ h^{sc}_t (4)

where H^{stc}_t is the spatiotemporal context information of the previous frame, h^{sc}_t is the spatial context information of the current frame, H^{stc}_{t+1} is the spatiotemporal context information of the current frame, and ρ is the update rate.
5. The human brain memory mechanism-based spatiotemporal context target tracking method of claim 1, characterized in that:
the larger the number N of confidence map candidate points in step 3 is, the more accurate the tracking result is, but the more calculation amount is increased at the same time. According to the invention, N is 5, and the accuracy of the tracking result is improved under the condition of ensuring uncomplicated calculation amount.
6. The human brain memory mechanism-based spatiotemporal context target tracking method as claimed in any one of claims 1 to 5, characterized in that:
in the method, a visual information processing cognitive model of the human brain memory mechanism is introduced into the spatiotemporal context model updating process of the spatio-temporal context (STC) method, so that each template undergoes the transmission and processing of three spaces (instantaneous memory, short-term memory, and long-term memory), forming a memory-based model updating strategy. By memorizing previously observed scenes, the method can continue to track robustly when the current target undergoes illumination changes, abrupt changes of target pose, occlusion, or reappearance after temporary disappearance. In addition, when the confidence map is computed from the spatiotemporal context information, N target center position candidate points are set, and the target center position with the largest similarity to the target template is selected as the final tracking result, which reduces errors caused by the confidence map and improves tracking accuracy. The result is a moving target tracking method with high precision and strong robustness.
CN201710733989.0A 2017-08-24 2017-08-24 Space-time context target tracking method based on human brain memory mechanism Active CN107657627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710733989.0A CN107657627B (en) 2017-08-24 2017-08-24 Space-time context target tracking method based on human brain memory mechanism

Publications (2)

Publication Number Publication Date
CN107657627A true CN107657627A (en) 2018-02-02
CN107657627B CN107657627B (en) 2021-07-30

Family

ID=61127808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710733989.0A Active CN107657627B (en) 2017-08-24 2017-08-24 Space-time context target tracking method based on human brain memory mechanism

Country Status (1)

Country Link
CN (1) CN107657627B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408592A (en) * 2016-09-09 2017-02-15 南京航空航天大学 Target tracking method based on target template updating

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTONIO S. MONTEMAYOR et al., "A Memory-Based Particle Filter for Visual Tracking through Occlusions", International Work-Conference on the Interplay *
KAIHUA ZHANG et al., "Fast Tracking via Spatio-Temporal Context Learning", https://arxiv.org/abs/1311.1939 *
BAO Hua, "Single visual target tracking based on local patches and context information in complex scenes", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416796A (en) * 2018-02-13 2018-08-17 中国传媒大学 The human body motion tracking method of two-way markov Monte Carlo particle filter
CN108492318A (en) * 2018-03-01 2018-09-04 西北工业大学 A method of the target following based on bionics techniques
CN108492318B (en) * 2018-03-01 2022-04-26 西北工业大学 Target tracking method based on bionic technology
CN115061574A (en) * 2022-07-06 2022-09-16 陈伟 Human-computer interaction system based on visual core algorithm
CN115712354A (en) * 2022-07-06 2023-02-24 陈伟 Man-machine interaction system based on vision and algorithm
CN116307283A (en) * 2023-05-19 2023-06-23 青岛科技大学 Precipitation prediction system and method based on MIM model and space-time interaction memory
CN116307283B (en) * 2023-05-19 2023-08-18 青岛科技大学 Precipitation prediction system and method based on MIM model and space-time interaction memory

Also Published As

Publication number Publication date
CN107657627B (en) 2021-07-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant