CN111223126B - Cross-view-angle trajectory model construction method based on transfer learning - Google Patents
Cross-view-angle trajectory model construction method based on transfer learning Download PDFInfo
- Publication number
- CN111223126B CN111223126B CN202010010171.8A CN202010010171A CN111223126B CN 111223126 B CN111223126 B CN 111223126B CN 202010010171 A CN202010010171 A CN 202010010171A CN 111223126 B CN111223126 B CN 111223126B
- Authority
- CN
- China
- Prior art keywords
- target
- model
- characteristic value
- value sequence
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000013526 transfer learning Methods 0.000 title claims abstract description 17
- 238000010276 construction Methods 0.000 title claims abstract description 16
- 238000000034 method Methods 0.000 claims abstract description 43
- 238000013507 mapping Methods 0.000 claims abstract description 19
- 238000012549 training Methods 0.000 claims abstract description 15
- 238000012546 transfer Methods 0.000 claims abstract description 8
- 239000002245 particle Substances 0.000 claims description 45
- 230000006870 function Effects 0.000 claims description 12
- 230000007704 transition Effects 0.000 claims description 12
- 238000004422 calculation algorithm Methods 0.000 claims description 10
- 239000011159 matrix material Substances 0.000 claims description 9
- 238000004088 simulation Methods 0.000 claims description 9
- 238000004364 calculation method Methods 0.000 claims description 8
- 238000005457 optimization Methods 0.000 claims description 7
- 150000001875 compounds Chemical class 0.000 claims description 6
- 238000001914 filtration Methods 0.000 claims description 6
- 238000012952 Resampling Methods 0.000 claims description 3
- 238000005259 measurement Methods 0.000 claims description 3
- 238000013508 migration Methods 0.000 claims description 3
- 230000005012 migration Effects 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 3
- 230000000007 visual effect Effects 0.000 abstract description 4
- 230000006399 behavior Effects 0.000 description 6
- 238000004458 analytical method Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000012544 monitoring process Methods 0.000 description 3
- 238000003062 neural network model Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000012706 support-vector machine Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000003066 decision tree Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000007637 random forest analysis Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a cross-view trajectory model construction method based on transfer learning, which comprises the following steps of 1, constructing a target domain target trajectory characteristic value sequence set; classifying the characteristic value sequence of the known label according to the label; step 2, constructing a source domain target track characteristic value sequence set; classifying the characteristic value sequence according to the label; step 3, training the HMM model by adopting the characteristic value sequence in the step 2; step 4, constructing a mapping model between the source domain and the target domain features according to the characteristic value sequence set in the step 1 and the characteristic value sequence set in the step 2, and obtaining the target domain observation probability according to the model; and 5, calibrating the target domain transfer probability according to the characteristic value sequence set in the step 1 and the training model parameters in the step 4 to obtain the target domain hidden Markov model. The method solves the problem that the target track model is not suitable and has low accuracy due to different visual angles in the prior art.
Description
Technical Field
The invention belongs to the technical field of monitoring video processing, and particularly relates to a cross-view trajectory model construction method based on transfer learning.
Background
Motion information reflecting temporal changes in video content is essential to portray semantic content in video. The target motion track describes motion information of a plurality of semantic contents, so that track modeling analysis has important significance for a plurality of applications including video monitoring, object behavior analysis, video retrieval and the like. In the existing monitoring system, a plurality of visual angle cameras are often cooperated to play a role. For example, under the same scene, the multi-view cameras jointly work and cooperate; the multi-view camera can also provide effective information for the occurrence of tracks with the same semantics in different scenes. Learning a new model for each perspective is not practical, the training cost of obtaining multiple labeled samples for each behavior from each perspective is high, and the model is not convenient for wide popularization.
Traditional machine learning algorithms (support vector machines, decision trees, random forests, dynamic bayesian networks, support vector machines, etc.) are often used to classify trajectory-based behaviors. Most of the existing track analysis methods have the defects of high false alarm rate, overfitting, neglecting some useful characteristics of behaviors, incapability of covering various anomalies due to the particularity of the method and the unavailability of data, and the like. In recent years, the most excellent neural network model has strong data processing capacity to classify and identify data, and a good classification and identification effect is obtained, but the neural network model has weak modeling capacity on a track with a time sequence relation, and a large amount of sample data is required for training to achieve accurate convergence.
Disclosure of Invention
The invention aims to provide a cross-view-angle trajectory model construction method based on transfer learning, and solves the problem that a target trajectory model is not applicable and has low accuracy due to different view angles in the prior art.
The invention adopts the technical scheme that a cross-view trajectory model construction method based on transfer learning is implemented according to the following steps:
and 5, calibrating the target domain transition probability according to the characteristic value sequence set in the step 1 and the training model parameters in the step 4 to obtain the target domain hidden Markov model.
The invention is also characterized in that:
the step 1 is implemented according to the following steps:
step 1.1, tracking a target in a video frame sequence to obtain a target track coordinate sequence
Selecting a first frame target area in a frame sequence of a video as a tracking template, and extracting target color characteristics; tracking the target frame by adopting a particle filter tracking frame to obtain a track coordinate sequence; tracking track coordinate sequence in time interval of delta t =0.3sUniformly sampling; wherein (x) t ,y t ) Is the target position coordinate at time t;
step 1.2, denoising the target track coordinate sequence in the step 1.1
Filtering noise points of the track coordinate sequence obtained in the step 1.1 by using an average filter with the size of a sliding window being 5; the mean filtering formula is as follows:
step 1.3, extracting the angle characteristics of the target track coordinate sequence in the step 1.2
The following formula is adopted to extract the angle characteristics:
in the formula (x) t ,y t ) Is the target position coordinate at time t;
step 1.4, discretizing the angle characteristics extracted in step 1.3 to obtain a characteristic value sequence
According to the obtained angleObtaining a characteristic value O by discretizing a 24-direction chain code t Further, a characteristic value sequence O is obtained T =O 1 O 2 …O t …;
Step 1.5, classifying the characteristic value sequences according to the labels to obtain B n And (5) collecting the class characteristic value sequences.
In step 1.1, the specific process of extracting the target color features is as follows:
assume that the center position of the target region is (x) 0 ,y 0 ) Then the width and height of the target region are w 0 And h 0 At a certain point p in the target area i =(x i ,y i ) The target feature may be represented as:
in the formula, k is a normalization coefficient; a. n respectively represents the pixel number and the scale of the target area; u. of i Representing each feature subspace; delta is a dirac function; k (r) =1-r 2 Is a weight function;
assuming the particle state asObserved value is Z k Establishing a candidate model q = { q ] of the region where the particle is located i } i=1,…N And measuring the similarity of the particle region and the target region by adopting a Bhattacharyya coefficient:
state X at time t t The observation equation of (a) is:
in step 1.1, the particle filter tracking process is specifically as follows:
(1) Particle initialization
When t =0, particle initialization is performed to randomly generate particle subsetsSetting a weight value, wherein the weight value is 1/N;
(2) Predicting; predicting the state of each particle according to the prediction process of the system
Predicted current position during predictionThe position from the previous instant is a linear gaussian relationship, the so-called equation of motion:
in the formula u k Is an external input, ω k Is a gaussian error;
(3) Updating; updating the weight of the particle according to the observed value
Normalized weight
(4) Resampling; copying a part of particles with high weight and removing a part of particles with low weight
According to respective normalized weightSize copy/discard samples->Obtaining N approximate obeys>Distributed sample->Make->i=1,…,N;
(5) Outputting; estimating current state using particles and weights
The output being a set of particlesAnd estimating the current state by using the particle state and the weight value so as to obtain a target coordinate at the current moment:
(6) And (4) tracking the rest video frames by adopting the methods from (2) to (4) to obtain a track coordinate sequence.
In step 1.4, the discretization of the 24-direction chain code is specifically as follows:
dividing an angle area, namely 360 degrees into 24 intervals on average, marking the 24 intervals with 1-24, wherein one number corresponds to one angle interval; angle of rotationIn which angle interval, it is recorded as the number corresponding to the interval.
The specific process of the step 3 is as follows:
step 3.1, randomly initializing an HMM model lambda = (A, B, pi) to obtain an initial HMM model; wherein A is the transition state probability, B is the observation state probability, and π is the initial state probability distribution;
step 3.2, calculating M characteristic value sequences O in certain category of tracks S Probability of occurrence P (O) under this model S Multiplication by multiplication of I | λ)Wherein, I is a hidden state sequence;
Step 3.4, to the initial HMM model λ S =(A S ,B S ,π S ) Reestimating until the iteration of the model parameters is not improved any more, and obtaining the optimal HMM model of the sequence
Step 3.5, training the rest track categories by adopting the methods from step 3.1 to step 3.4 to obtain the source domain C n HMM model for individual trajectory classes
For initial HMM model λ S =(A S ,B S ,π S ) The re-estimation process is specifically as follows:
(1) Defining forward variables
α t (i)=P(O 1 ,O 2 ,…O t ,I/λ) 1≤t≤T (11)
In the formula, a ij ,b j Matrix parameters of A and B are respectively;
(2) Defining a backward variable
β t (i)=P(O t-1 ,O t-2 ,…O T ,I/λ) 1≤t≤T-1 (13)
In the formula, a ij ,b j Matrix parameters of A and B are respectively;
(3) For alpha t (i) To perform treatment
Initialization
Recursion:
(4) For beta is t (i) To perform treatment
Initialization
Recursive method
(5) Recalculation
In the formula (I), the compound is shown in the specification,matrix parameters of pi, A, B, respectively.
step 4.1, constructing a mapping model between the source domain and the target domain according to the characteristic value sequence set in the step 1 and the characteristic value sequence set in the step 2, wherein the mapping relation is as follows:
in the formula, w and b are coefficients of a characteristic mapping fitting curve equation; o is S Is a source domain coded sample;is the mapped target domain coded data;
the objective function is:
in the formula, O T Is the true target domain encoded data;
step 4.2, the optimal HMM model in the step 3 is obtainedIs based on the observation state probability->Assigning the initial value B of the probability of the observation state of the target domain according to the mapping relation of the step 4.1 T 。
step 5.1, model parameters in the step 4.3As a corresponding target domain model λ T Is greater than or equal to>π T ;
step 5.3, calculating the similarity of the simulation data in the step 5.2 and the target domain same track category characteristic value sequence in the step 1;
step 5.4, calculating a target domain transfer summary A by adopting an optimization algorithm by taking the similarity height as a target function T (ii) a The calculation formula is as follows:
step 5.5, calibrating the target domain transition probability by adopting a constraint optimization algorithm to obtain a target domain hidden Markov model
Solving the optimal delta A by adopting an interior point method, and calculating a target domain modelSimulation data and O T If the similarity is larger than or equal to the similarity threshold, the delta A obtained in the previous step is used as an initial value to enter the iteration of the interior point method again until the value is smaller than the similarity threshold; namely the target domain hidden Markov model>Wherein the constraint is that the constraint is a transition probability matrix->And->Is greater than 0 and the sum of each row of elements is 1.
The specific process of step 5.2 is as follows:
given an HMM model λ = (a, B, pi), the observation sequence O = O 1 O 2 …O k Can be produced by the following steps:
(1) According to the initial state probability distribution pi = pi i Selecting an initial state Q 1 =i;
(2) Let t =1;
(3) Output probability distribution b from state i jk Output O t =k;
(4) Output probability distribution b from state i jk Output O t =k;
(5) If t = t +1, if t < k, repeating (3) and (4), otherwise ending;
in step 5.3, the measurement of the similarity is determined by the euclidean distance, and the euclidean distance calculation formula is as follows:
in the formula (I), the compound is shown in the specification,O T respectively are models>The simulated data set mean value and the labeled characteristic value sequence set mean value in the step 1 belong to the same track category;
the similarity calculation formula is as follows:
the invention has the beneficial effects that:
the invention relates to a cross-view track model construction method based on transfer learning, which comprises the steps of constructing a target track characteristic data set, training a hidden Markov model under a source domain view, establishing a source domain characteristic and target domain characteristic mapping model to optimize transfer observation probability parameters, and optimizing a target domain transfer probability based on a small number of target domain labeled samples; by adopting the model constructed by the invention, the behavior state of the target track can be judged under a specific visual angle; the method solves the problems of poor recognition effect and low robustness in the prior art during cross-view model migration under the condition of less labeled data in the target field, and the model constructed by the method has good performance for recognizing the target track of the track sample under different views.
Drawings
FIG. 1 is a flow chart of a cross-perspective trajectory model construction method based on transfer learning according to the present invention;
FIG. 2 is a 24-direction chain code diagram in the cross-view trajectory model construction method based on transfer learning according to the present invention;
FIG. 3 is a source domain feature and target domain feature mapping fitting curve in step 4 of the cross-view trajectory model construction method based on transfer learning.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
As shown in fig. 1, the invention relates to a cross-view trajectory model construction method based on transfer learning, which is implemented specifically according to the following steps:
the step 1 is implemented according to the following steps:
step 1.1, tracking the target in the video frame sequence to obtain the target track coordinate sequence
Selecting a first frame target area in a video frame sequence as a tracking template, and extracting target color characteristics; tracking the target frame by adopting a particle filter tracking frame to obtain a track coordinate sequence; tracking track coordinate sequence in time interval of delta t =0.3sUniformly sampling; wherein (x) t ,y t ) Is the target position coordinate at time t;
the specific process of extracting the target color features is as follows:
assume that the center position of the target region is (x) 0 ,y 0 ) Then the width and height of the target region are w 0 And h 0 At a certain point p in the target area i =(x i ,y i ) The target feature may be represented as:
in the formula, k is a normalization coefficient; a. n respectively represents the number of pixels and the size of the target area; u. of i Representing each feature subspace; delta is a dirac function; k (r) =1-r 2 Is a weight function;
assuming the particle state asObserved value is Z k Establishing a candidate model q = { q ] of the region where the particle is located i } i=1,…N And measuring the similarity of the particle region and the target region by adopting a Bhattacharyya coefficient:
state X at time t t The observation equation of (a) is:
the particle filter tracking process is concretely as follows:
(1) Particle initialization
When t =0, particle initialization is performed to randomly generate particle subsetsSetting a weight value, wherein the weight value is 1/N;
(2) Predicting; predicting the state of each particle according to the prediction process of the system
Predicted current position during predictionThe position from the previous instant is a linear gaussian relationship, the so-called equation of motion:
in the formula u k Is an external input, ω k Is a gaussian error;
(3) Updating; updating the weight of the particle according to the observed value
Normalized weight
(4) Resampling; copying a part of particles with high weight and removing a part of particles with low weight
According to respective normalized weightSize copy/discard sample->Deriving N approximate obeys>Distributed sample->Make->
(5) Outputting; estimating current state using particles and weights
The output being a set of particlesAnd estimating the current state by using the particle state and the weight value, thereby obtaining the target coordinate at the current moment:
(6) Tracking the rest video frames by adopting the methods (2) to (4) to obtain a track coordinate sequence;
step 1.2, denoising the target track coordinate sequence in the step 1.1
Filtering noise points of the track coordinate sequence obtained in the step 1.1 by using an average filter with a sliding window size of 5; the mean filtering formula is as follows:
step 1.3, extracting the angle characteristics of the target track coordinate sequence in the step 1.2
The following formula is adopted to extract the angle characteristics:
wherein (x) t ,y t ) Is the target position coordinate at the time t;
step 1.4, discretizing the angle characteristics extracted in step 1.3 to obtain a characteristic value sequence
According to the obtained angleObtaining a characteristic value O by discretizing a 24-direction chain code t Further, a characteristic value sequence O is obtained T =O 1 O 2 …O t …;
The discretization of the 24-direction chain code is specifically as follows (as shown in fig. 2):
dividing an angle area, namely 360 degrees into 24 intervals on average, marking the 24 intervals with 1-24, wherein one number corresponds to one angle interval; angle of rotationIn which angle interval, the angle interval is marked as the number corresponding to the interval;
step 1.5, sequence of characteristic values according to labelsLine classification to obtain B n And (5) collecting the class characteristic value sequence.
the specific process of the step 3 is as follows:
step 3.1, randomly initializing an HMM model lambda = (A, B, pi) to obtain an initial HMM model; wherein A is the transition state probability, B is the observation state probability, and π is the initial state probability distribution;
step 3.2, calculating M characteristic value sequences O in certain category of tracks S Probability of occurrence P (O) under this model S Multiplication of I | λ)Wherein, I is a hidden state sequence;
Step 3.4, to the initial HMM model λ S =(A S ,B S ,π S ) Reestimating until the iteration of the model parameters is not improved any more, and obtaining the optimal HMM model of the sequence
For initial HMM model λ S =(A S ,B S ,π S ) The re-estimation process is specifically as follows:
(1) Defining forward variables
α t (i)=P(O 1 ,O 2 ,…O t ,I/λ) 1≤t≤T (11)
In the formula, a ij ,b j Matrix parameters of A and B are respectively;
(2) Defining a backward variable
β t (i)=P(O t-1 ,O t-2 ,…O T ,I/λ) 1≤t≤T-1 (13)
In the formula, a ij ,b j Matrix parameters of A and B are respectively;
(3) For alpha t (i) To perform treatment
Initialization
Recursion:
(4) For beta is t (i) To perform treatment
Initialization
Recursive method
(5) Recalculation
In the formula (I), the compound is shown in the specification,matrix parameters of pi, A and B respectively;
step 3.5, training the rest track categories by adopting the methods of the step 3.1 to the step 3.4 to obtain a source domain C n HMM model for individual trajectory classes
as shown in fig. 3, step 4 is specifically implemented according to the following steps:
step 4.1, constructing a mapping model between the source domain and the target domain according to the characteristic value sequence set in the step 1 and the characteristic value sequence set in the step 2, wherein the mapping relation is as follows:
in the formula, w and b are coefficients of a characteristic mapping fitting curve equation; o is S Is a source domain encoded sample;is the mapped target domain coded data;
the objective function is:
in the formula, O T Is the true target domain encoded data;
step 4.2, the optimal HMM model in the step 3 is usedIn (b) is determined by the observation state probability>Assigning the initial value B of the probability of the observation state of the target domain according to the mapping relation of the step 4.1 T ;
And 5, calibrating the target domain transition probability according to the characteristic value sequence set in the step 1 and the training model parameters in the step 4 to obtain the target domain hidden Markov model.
step 5.1, model parameters in the step 4.3As a corresponding target domain model λ T Is greater than or equal to>π T ;
the specific process of the step 5.2 is as follows:
given an HMM model λ = (a, B, pi), the observation sequence O = O 1 O 2 …O k Can be produced by the following steps:
(1) According to the initial state probability distribution pi = pi i Selecting an initial state Q 1 =i;
(2) Let t =1;
(3) Output probability distribution b from state i jk Output O t =k;
(4) Output probability distribution b from state i jk Output O t =k;
(5) If t = t +1, if t < k, repeating (3) and (4), otherwise ending;
in step 5.3, the measurement of the similarity is determined by the euclidean distance, and the euclidean distance calculation formula is as follows:
in the formula (I), the compound is shown in the specification,O T are respectively the model->The simulated data set mean value and the labeled characteristic value sequence set mean value in the step 1 belong to the same track category;
the similarity calculation formula is as follows:
step 5.3, calculating the similarity of the simulation data in the step 5.2 and the target domain same track category characteristic value sequence in the step 1;
step 5.4, the similarity height is taken as a target function, and an optimization algorithm is adopted to calculate a target domain transfer summary A T (ii) a The calculation formula is as follows:
step 5.5, calibrating the target domain transition probability by adopting a constraint optimization algorithm to obtain a target domain hidden Markov model
Solving the optimal delta A by adopting an interior point method, and calculating a target domain modelSimulation data and O T If the similarity is larger than or equal to the similarity threshold, the delta A obtained in the previous step is used as an initial value to enter the iteration of the interior point method again until the delta A is smaller than the similarity threshold; namely the target domain hidden Markov model>Wherein the constraint is that the constraint is a transition probability matrix->And->Is greater than 0 and the sum of each row of elements is 1.
The invention relates to a cross-view track model construction method based on transfer learning, which comprises the steps of constructing a target track characteristic data set, training a hidden Markov model under a source domain view, establishing a source domain characteristic and target domain characteristic mapping model to optimize transfer observation probability parameters, and optimizing a target domain transfer probability based on a small number of target domain labeled samples; by adopting the model constructed by the invention, the behavior state of the target track can be judged under a specific visual angle; the method solves the problems of poor recognition effect and low robustness in cross-view model migration in the prior art under the condition of less labeled data in the target field, and the model constructed by the method has good performance in target track recognition of track samples under different views.
Claims (6)
1. A cross-view trajectory model construction method based on transfer learning is characterized by being implemented according to the following steps:
step 1, constructing a target track characteristic value sequence set of a target domain; classifying the characteristic value sequence of the known label according to the label to obtain B n A class characteristic value sequence set; wherein the target domain consists of a sequence of eigenvalues of which x tags are known and a sequence of eigenvalues of which y tags are unknown, and y > x;
the step 1 is specifically implemented according to the following steps:
step 1.1, tracking a target in a video frame sequence to obtain a target track coordinate sequence
Selecting a first frame target area in a video frame sequence as a tracking template, and extracting target color characteristics; tracking the target frame by adopting a particle filter tracking frame to obtain a track coordinate sequence; tracking track coordinate sequence according to time interval of delta t =0.3sUniformly sampling; wherein (x) t ,y t ) Is the target position coordinate at the time t;
in the step 1.1, the specific process of extracting the target color features is as follows:
assume that the center position of the target region is (x) 0 ,y 0 ) Then the width and height of the target region are w 0 And h 0 At a certain point p in the target area i =(x i ,y i ) The target feature may be represented as:
in the formula, k is a normalization coefficient; a. n respectively represents the pixel number and the scale of the target area; u. u i Representing each feature subspace; delta is a dirac function; k (r) =1-r 2 Is a weight function;
assuming the particle state asObserved value is Z k Establishing a candidate model q = { q ] of the region where the particle is located i } i=1,…N And measuring the similarity of the particle region and the target region by adopting a Bhattacharyya coefficient:
state X at time t t The observation equation of (a) is:
in step 1.1, the particle filter tracking process specifically includes:
(1) Particle initialization
When t =0, particle initialization is performed to randomly generate particle subsetsSetting a weight value, wherein the weight value is 1/N;
(2) Predicting; predicting the state of each particle according to the prediction process of the system
Predicted current position during predictionThe position from the previous instant is a linear gaussian relationship, the so-called equation of motion:
in the formula u k Is an external input, ω k Is a gaussian error;
(3) Updating; updating the weight of the particle according to the observed value
Normalized weight
(4) Resampling; copying a part of particles with high weight and removing a part of particles with low weight
According to respective normalized weightSize copy/discard sample->Obtaining N approximate obeys>Distributed sample->Make->
(5) Outputting; estimating current state using particles and weights
The output being a set of particlesAnd estimating the current state by using the particle state and the weight value, thereby obtaining the target coordinate at the current moment:
(6) Tracking the rest video frames by adopting the methods (2) to (4) to obtain a track coordinate sequence;
step 1.2, denoising the target track coordinate sequence in the step 1.1
Filtering noise points of the track coordinate sequence obtained in the step 1.1 by using an average filter with a sliding window size of 5; the mean filtering formula is as follows:
step 1.3, extracting the angle characteristics of the target track coordinate sequence in the step 1.2
The following formula is adopted to extract the angle characteristics:
in the formula (x) t ,y t ) Is the target position coordinate at the time t;
step 1.4, discretizing the angle characteristics extracted in step 1.3 to obtain a characteristic value sequence
According to the obtained angleObtaining a characteristic value O by discretizing a 24-direction chain code t Further, a characteristic value sequence O is obtained T =O 1 O 2 …O t …;
In the step 1.4, the discretization of the 24-direction chain code is specifically as follows:
dividing an angle area, namely 360 degrees into 24 intervals on average, marking the 24 intervals with 1-24, wherein one number corresponds to one angle interval; angle of rotationIn which angle interval, the angle interval is marked as the number corresponding to the interval;
step 1.5, classifying the characteristic value sequences according to the labels to obtain B n A class characteristic value sequence set;
step 2, constructing a source domain target track characteristic value sequence set by adopting the construction method in the step 1; classifying the characteristic value sequence according to the label to obtain C n A class characteristic value sequence set; wherein, the characteristic value sequence labels of the source domain are known;
step 3, training the HMM model by adopting the characteristic value sequence in the step 2 to obtain C n HMM models for each trajectory category;
step 4, constructing a mapping model between the source domain and the target domain features according to the characteristic value sequence set in the step 1 and the characteristic value sequence set in the step 2, and obtaining the target domain observation probability according to the model;
and 5, calibrating the target domain transition probability according to the characteristic value sequence set in the step 1 and the training model parameters in the step 4 to obtain the target domain hidden Markov model.
2. The cross-perspective trajectory model building method based on transfer learning according to claim 1, wherein the specific process in step 3 is as follows:
step 3.1, randomly initializing an HMM model lambda = (A, B, pi) to obtain an initial HMM model; wherein A is the transition state probability, B is the observation state probability, and π is the initial state probability distribution;
step 3.2Calculating M characteristic value sequences O in a certain category of tracks S Probability of occurrence P (O) under this model S Multiplication of I | λ)Wherein, I is a hidden state sequence;
Step 3.4, to the initial HMM model λ S =(A S ,B S ,π S ) Reestimating until the iteration of the model parameters is not improved any more, and obtaining the optimal HMM model of the sequence
3. The method for constructing a cross-perspective trajectory model based on transfer learning of claim 2, wherein λ is an initial HMM model S =(A S ,B S ,π S ) The re-estimation process is specifically as follows:
(1) Defining forward variables
α t (i)=P(O 1 ,O 2 ,…O t ,I/λ)1≤t≤T (11)
In the formula, a ij ,b j Matrix parameters of A and B are respectively;
(2) Defining a backward variable
β t (i)=P(O t-1 ,O t-2 ,…O T ,I/λ)1≤t≤T-1 (13)
In the formula, a ij ,b j Matrix parameters of A and B are respectively;
(3) For alpha t (i) To perform treatment
Initialization
Recursion:
(4) For beta is t (i) To perform treatment
Initialization
Recursive
(5) Recalculating
4. The method for constructing a cross-perspective trajectory model based on transfer learning according to claim 2, wherein the step 4 is specifically implemented according to the following steps:
step 4.1, constructing a mapping model between the source domain and the target domain according to the characteristic value sequence set in the step 1 and the characteristic value sequence set in the step 2, wherein the mapping relation is as follows:
in the formula, w and b are coefficients of a characteristic mapping fitting curve equation; o is S Is a source domain encoded sample;is the mapped target domain coded data;
the objective function is:
in the formula, O T Is the true target domain encoded data;
5. The method for constructing a cross-perspective trajectory model based on migration learning according to claim 4, wherein the step 5 is specifically implemented according to the following steps:
step 5.1, model parameters in the step 4.3As a corresponding target domain model λ T Is greater than or equal to>π T ;
step 5.3, calculating the similarity of the simulation data in the step 5.2 and the target domain same track category characteristic value sequence in the step 1;
step 5.4, calculating a target domain transfer summary A by adopting an optimization algorithm by taking the similarity height as a target function T (ii) a The calculation formula is as follows:
step 5.5, calibrating the target domain transition probability by adopting a constraint optimization algorithm to obtain a target domain hidden Markov model
Solving the optimal delta A by adopting an interior point method, and calculating a target domain modelSimulation data and O T If the similarity is larger than or equal to the similarity threshold, the delta A obtained by the last step of optimization is used as an initial value to enter the iteration of the interior point method again until the value is smaller than the similarity threshold; namely the target domain hidden Markov model>Wherein the constraint is that the constraint is a transition probability matrix->And->Is greater than 0 and the sum of each row of elements is 1.
6. The method for constructing the cross-perspective trajectory model based on the transfer learning of claim 4, wherein the step 5.2 comprises the following specific processes:
given the HMM model λ = (a, B, pi),then observe sequence O = O 1 O 2 …O k Can be produced by the following steps:
(1) According to the initial state probability distribution pi = pi i Selecting an initial state Q 1 =i;
(2) Let t =1;
(3) Output probability distribution b from state i jk Output O t =k;
(4) Output probability distribution b from state i jk Output O t =k;
(5) If t = t +1, if t < k, repeating (3) and (4), otherwise ending;
in the step 5.3, the measurement of the similarity is determined by the euclidean distance, and the euclidean distance is calculated according to the following formula:
in the formula (I), the compound is shown in the specification,O T are respectively the model->The simulated data set mean value and the labeled characteristic value sequence set mean value in the step 1 belong to the same track category;
the similarity calculation formula is as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010010171.8A CN111223126B (en) | 2020-01-06 | 2020-01-06 | Cross-view-angle trajectory model construction method based on transfer learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010010171.8A CN111223126B (en) | 2020-01-06 | 2020-01-06 | Cross-view-angle trajectory model construction method based on transfer learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111223126A CN111223126A (en) | 2020-06-02 |
CN111223126B true CN111223126B (en) | 2023-03-31 |
Family
ID=70832254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010010171.8A Active CN111223126B (en) | 2020-01-06 | 2020-01-06 | Cross-view-angle trajectory model construction method based on transfer learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111223126B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115272395A (en) * | 2022-07-11 | 2022-11-01 | 哈尔滨工业大学重庆研究院 | Cross-domain migratable pedestrian trajectory prediction method based on depth map convolutional network |
CN116776158B (en) * | 2023-08-22 | 2023-11-14 | 长沙隼眼软件科技有限公司 | Target classification method, device and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103235933A (en) * | 2013-04-15 | 2013-08-07 | 东南大学 | Vehicle abnormal behavior detection method based on Hidden Markov Model |
CN106203323A (en) * | 2016-07-06 | 2016-12-07 | 中山大学新华学院 | Video behavior activity recognition key algorithm based on hidden Markov model |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11074495B2 (en) * | 2013-02-28 | 2021-07-27 | Z Advanced Computing, Inc. (Zac) | System and method for extremely efficient image and pattern recognition and artificial intelligence platform |
-
2020
- 2020-01-06 CN CN202010010171.8A patent/CN111223126B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103235933A (en) * | 2013-04-15 | 2013-08-07 | 东南大学 | Vehicle abnormal behavior detection method based on Hidden Markov Model |
CN106203323A (en) * | 2016-07-06 | 2016-12-07 | 中山大学新华学院 | Video behavior activity recognition key algorithm based on hidden Markov model |
Non-Patent Citations (2)
Title |
---|
基于HMM的动作识别结果可信度计算方法;王昌海等;《通信学报》;20160525(第05期);全文 * |
运动目标轨迹分类与识别;潘奇明等;《火力与指挥控制》;20091115(第11期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN111223126A (en) | 2020-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111476302B (en) | fast-RCNN target object detection method based on deep reinforcement learning | |
CN111178197B (en) | Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method | |
CN111401144B (en) | Escalator passenger behavior identification method based on video monitoring | |
EP2164041B1 (en) | Tracking method and device adopting a series of observation models with different lifespans | |
CN106846355B (en) | Target tracking method and device based on lifting intuitive fuzzy tree | |
CN110197502B (en) | Multi-target tracking method and system based on identity re-identification | |
CN107633226B (en) | Human body motion tracking feature processing method | |
CN111783576A (en) | Pedestrian re-identification method based on improved YOLOv3 network and feature fusion | |
CN108921877B (en) | Long-term target tracking method based on width learning | |
CN107169117B (en) | Hand-drawn human motion retrieval method based on automatic encoder and DTW | |
CN111582349B (en) | Improved target tracking algorithm based on YOLOv3 and kernel correlation filtering | |
CN110728694B (en) | Long-time visual target tracking method based on continuous learning | |
CN110363165B (en) | Multi-target tracking method and device based on TSK fuzzy system and storage medium | |
CN110458022B (en) | Autonomous learning target detection method based on domain adaptation | |
CN111223126B (en) | Cross-view-angle trajectory model construction method based on transfer learning | |
CN108038515A (en) | Unsupervised multi-target detection tracking and its storage device and camera device | |
CN110555868A (en) | method for detecting small moving target under complex ground background | |
CN111524164A (en) | Target tracking method and device and electronic equipment | |
CN102314591B (en) | Method and equipment for detecting static foreground object | |
CN107368802B (en) | Moving target tracking method based on KCF and human brain memory mechanism | |
KR20230171966A (en) | Image processing method and device and computer-readable storage medium | |
CN115359407A (en) | Multi-vehicle tracking method in video | |
CN112132257A (en) | Neural network model training method based on pyramid pooling and long-term memory structure | |
CN111444816A (en) | Multi-scale dense pedestrian detection method based on fast RCNN | |
CN113627240B (en) | Unmanned aerial vehicle tree species identification method based on improved SSD learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |