CN112163510B - Human body action classification recognition method based on multi-observation variable HMM model - Google Patents

Human body action classification recognition method based on multi-observation variable HMM model

Info

Publication number
CN112163510B
Authority
CN
China
Prior art keywords
human body
observation
frequency
vector
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011024486.4A
Other languages
Chinese (zh)
Other versions
CN112163510A (en)
Inventor
周代英
胡晓龙
李粮余
张同梦雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202011024486.4A
Publication of CN112163510A
Application granted
Publication of CN112163510B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 - Classification; Matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Signal Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention belongs to the technical field of human body action classification and recognition, and particularly relates to a human body action classification and recognition method based on a multi-observation-variable HMM model. The method first extracts the upper and lower envelopes of the human micro-Doppler signal and the Doppler frequency sequence of the human torso from the micro-Doppler image of human radar echo data, uses these sequences as the three observation variables of an HMM, and discretizes the observation-variable states by vector quantization. The hidden states of the HMM capture the internal relation between the upper and lower envelopes of the human micro-Doppler signal and the torso frequency variation, and the Markov property of the hidden state variable represents the randomness of the state changes of a moving human body, which improves subsequent human action classification performance; experimental results verify the effectiveness of the method.

Description

Human body action classification recognition method based on multi-observation variable HMM model
Technical Field
The invention belongs to the technical field of human body action classification and recognition, and particularly relates to a human body action classification and recognition method based on a multi-observation variable HMM model.
Background
With the increasing emphasis on personal safety protection, non-contact human motion state recognition has gradually become a research hotspot. The technology aims to detect, identify, and classify human actions in a non-contact, non-invasive, and remote manner, and is of great significance for identifying persons with abnormal behavior in security protection, monitoring the activities of elderly people living alone, and monitoring the physical condition of hospital patients.
At present, in radar-based human action recognition, apart from imaging radars that identify motion states through body imaging and trajectory tracking, the main approach is to apply time-domain decomposition or time-frequency analysis to the radar echo signal and extract time-domain or time-frequency features for motion-state recognition.
Human motion detection based on time-frequency analysis has developed considerably, producing meaningful theoretical and technical results in detection techniques, feature recognition, and classification algorithms. Youngwook Kim and Hao Ling used the human micro-Doppler spectrogram to extract statistical features such as the micro-Doppler bandwidth, limb motion period, and torso mean Doppler to classify human actions. This classification method relies on statistical characteristics of the micro-Doppler signal, does not consider state changes during human motion, and performs poorly on random, non-stationary human motions. Victor C. Chen proposed a motion classification method based on the Markov property of the envelope variation of human micro-Doppler images. Although this method uses the state changes of human motion, the envelope of the human micro-Doppler signal is a random combination of contributions from each moving body part and its state changes are not Markovian, so the classification performance is unsatisfactory.
Disclosure of Invention
To solve these problems, the invention provides a human action classification and recognition method based on a constructed multi-observation-variable hidden Markov model (HMM). The method first extracts the upper and lower envelopes of the human micro-Doppler signal and the Doppler frequency sequence of the torso from the micro-Doppler image of a human action, then reduces the number of observation-variable states by quantization, which simplifies the model and shortens the training time of the subsequent HMM. The quantized sequences are used as the three observation state sequences of the HMM, the multi-observation-variable HMM is trained, and classification of human actions is finally achieved. The Markov property of the hidden state variable of the HMM fully describes the randomness of the body-state changes of a moving person, improving subsequent classification performance.
The technical scheme of the invention is as follows:
a human body action classification recognition method based on a multi-observation variable HMM model comprises the following steps:
s1, the upper envelope frequency, the lower envelope frequency and the trunk doppler frequency of the human micro doppler signal are used as three observation variables of the hidden markov model, so that the observation state sequence of the model is extracted first.
And acquiring observation sequences, namely the upper envelope frequency, the lower envelope frequency and the trunk Doppler frequency of the human body micro Doppler signal, wherein different echo effects can be formed on the human body micro signals in a motion state due to different micro characteristics of different body parts. And acquiring the frequency change condition of the upper envelope and the lower envelope of the body micro Doppler signal in an envelope extraction mode. Since the amplitude of the human body micro-doppler signal in the micro-doppler spectrogram is greater than zero, the variation sequence of the upper envelope frequency and the lower envelope frequency is extracted by the following formula:
F_H(t) = max{ f ∈ [-f_m, f_m] : Sp(f, t) > 0 }
F_L(t) = min{ f ∈ [-f_m, f_m] : Sp(f, t) > 0 },  0 ≤ t ≤ T
where F_H is the frequency variation sequence of the upper envelope, F_L is the frequency variation sequence of the lower envelope, Sp(f, t) is the micro-Doppler spectrogram, f is the micro-Doppler frequency, t is time, [-f_m, f_m] is the Doppler frequency range of the spectrogram, and T is the total duration (total sequence length);
in the process of human body movement, the trunk is an energy concentration area of echo signals. Aiming at a micro Doppler energy distribution result at a certain moment, the Doppler signal intensity formed by trunk movement is represented as a maximum value, so that a trunk Doppler frequency change sequence is extracted through peak detection, namely, a point with the maximum amplitude value at the moment t of a spectrogram is extracted, and the trunk Doppler at the moment is recordedFrequency FT(t),FT(t) satisfies the following formula:
F_T(t) = argmax_{f ∈ [-f_m, f_m]} Sp(f, t)
the extracted upper envelope frequency variation sequence FH(t) lower envelope frequency variation sequence FL(t) and torso frequency variation sequence FT(t) three observation state sequences as models.
S2, quantizing the observation variables, comprising:
To reduce the number of observation-variable states and thus the training time of the subsequent HMM, the observation variables are quantized.
The codebook is obtained through a learning process, assuming a training set of M vector sources (training samples):
Tr = {x_1, x_2, ..., x_M};
the source vector is 3-dimensional:
x_m = (F_H(t_m), F_T(t_m), F_L(t_m)),  m = 1, 2, ..., M
where F_H(t_m), F_T(t_m), F_L(t_m) are the upper-envelope, torso, and lower-envelope frequencies of the micro-Doppler signal at time t_m. Given the number of code vectors N (i.e., the vector space is to be divided into N parts), the codebook (the set of all code vectors) is written as C = {c_1, c_2, ..., c_N}, where each code vector is 3-dimensional: c_n = (c_{n,h}, c_{n,t}, c_{n,l}), n = 1, 2, ..., N, and c_{n,h}, c_{n,t}, c_{n,l} are the three codewords of the n-th code vector, corresponding to the states of the three observation variables. The coding region associated with code vector c_n is denoted S_n, so the partition of the space is P = {S_1, S_2, ..., S_N}. If the source vector x_m lies in S_n, its approximation is c_n, written Ap(x_m) = c_n. The approximation quality is measured by the average distortion, which is expressed as follows:
D_av = (1/M) Σ_{m=1}^{M} || x_m - Ap(x_m) ||^2
The LBG algorithm is used to find the code vectors c_n that minimize D_av, yielding the codebook C. The LBG algorithm is an iterative algorithm that alternately adjusts P and C so that the average distortion keeps moving toward a local minimum, finally giving the code vectors with the smallest D_av.
For any vector to be quantized, x = (f_h(t), f_t(t), f_l(t)), where f_h(t), f_t(t), f_l(t) are the upper-envelope, torso, and lower-envelope frequencies of the human micro-Doppler signal at time t, the Euclidean distance d_n (L2 norm) between x and each code vector c_n in the codebook is computed by the following formula:
d_n = || x - c_n ||_2
The code vector with the smallest Euclidean distance is the quantized vector x_q of x.
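A minimal sketch of the LBG codebook training and nearest-codevector quantization described above; the split factor, the stopping tolerance, and the truncation when N is not a power of two are implementation assumptions rather than details taken from the patent.

    import numpy as np

    def lbg_codebook(X, N, eps=0.01, tol=1e-4):
        """Train a codebook with the LBG algorithm.

        X : (M, 3) array of source vectors x_m = (F_H(t_m), F_T(t_m), F_L(t_m)).
        N : desired number of code vectors, grown by repeated splitting.
        """
        codebook = X.mean(axis=0, keepdims=True)          # start from the global centroid
        while codebook.shape[0] < N:
            # split every code vector into two slightly perturbed copies
            codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
            prev = np.inf
            while True:                                   # Lloyd iterations: adjust P and C
                d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
                nearest = d.argmin(axis=1)                # partition P: assign each x_m
                D_av = np.mean(d[np.arange(len(X)), nearest] ** 2)
                for n in range(codebook.shape[0]):        # update C: recompute centroids
                    if np.any(nearest == n):
                        codebook[n] = X[nearest == n].mean(axis=0)
                if prev - D_av <= tol * max(D_av, 1e-12):
                    break
                prev = D_av
        return codebook[:N]                               # truncate if N is not a power of two

    def quantize(x, codebook):
        """Return the index of the code vector nearest to x (Euclidean distance d_n)."""
        return int(np.linalg.norm(codebook - x, axis=1).argmin())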
S3, constructing a multi-observation variable HMM model:
for the HMM model for human action classification in this method, the observed variables are multiple, definition Q is the set of all possible hidden states, codebook C is the set of all possible observed states, i.e.:
Q = {q_1, q_2, ..., q_K},  C = {c_1, c_2, ..., c_N}
wherein K is the number of possible hidden states, N is the number of all possible observed states;
for a micro Doppler signal of human body movement, I is defined as a corresponding hidden state sequence, and a corresponding observation sequence is a quantized vector sequence X of upper envelope, trunk and lower envelope frequency combinationqAnd the sequence length is T:
I={i1,i2,...,iT},Xq={xq,1,xq,2,...,xq,T}
wherein any hidden state itE.g. Q, any observation state xq,t∈C;
Two assumptions for the HMM model:
(1) Homogeneous Markov assumption: the hidden state at any time t depends only on the hidden state at the previous time. If the hidden state at time t is i_t = q_i and the hidden state at time t+1 is i_{t+1} = q_j, the hidden state transition probability a_ij from time t to time t+1 is expressed as:
a_ij = p(i_{t+1} = q_j | i_t = q_i)
and the hidden state transition matrix is A = [a_ij]_{K×K};
(2) Observation-independence assumption: the observation state at any time depends only on the hidden state at that time. If the hidden state at time t is i_t = q_j, the observation state at that time is x_{q,t} = c_n, and the three observation variables are mutually independent, then the probability b_j(n) of this observation state being generated from the hidden state is:
b_j(n) = p(x_{q,t} = c_n | i_t = q_j) = p(c_{n,h} | i_t = q_j) · p(c_{n,t} | i_t = q_j) · p(c_{n,l} | i_t = q_j)
and the probability matrix of observation-state generation (emission matrix) is B = [b_j(n)]_{K×N};
In addition, the hidden state probability distribution Π at the initial time needs to be defined:
Π = [π(i)]_K
where π(i) = p(i_1 = q_i);
An HMM is thus composed of three parameters: the initial hidden state probability distribution, the hidden state transition matrix, and the emission matrix; the model is denoted by the triplet λ = (Π, A, B);
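As an illustration of the observation-independence assumption above, the sketch below assembles the K×N emission matrix from three per-variable emission tables; the table names Bh, Bt, Bl and the symbol-index mapping are hypothetical, introduced only for this example.

    import numpy as np

    def build_emission_matrix(Bh, Bt, Bl, codeword_symbols):
        """Assemble B = [b_j(n)] under the independence assumption.

        Bh, Bt, Bl       : (K, n_symbols) emission tables for the upper-envelope,
                           torso, and lower-envelope codewords (hypothetical names).
        codeword_symbols : list of N tuples (h, t, l) giving the per-variable symbol
                           index of each code vector c_n = (c_{n,h}, c_{n,t}, c_{n,l}).
        """
        K, N = Bh.shape[0], len(codeword_symbols)
        B = np.empty((K, N))
        for n, (h, t, l) in enumerate(codeword_symbols):
            # b_j(n) = p(c_{n,h} | q_j) * p(c_{n,t} | q_j) * p(c_{n,l} | q_j)
            B[:, n] = Bh[:, h] * Bt[:, t] * Bl[:, l]
        return B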
S4, performing parameter learning on the HMM λ = (Π, A, B) with the EM algorithm: given a training observation sequence X_{q,train}, the parameters λ = (Π, A, B) are estimated so that the conditional probability p(X_{q,train} | λ) of the observation sequence under the model is maximized, and human actions are then classified and recognized with the trained models.
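One possible realization of this training step, sketched under the assumption that the hmmlearn library is available: for simplicity each quantized code-vector index is treated as a single categorical observation rather than the factorized three-variable emission above, and the parameter values are placeholders.

    import numpy as np
    from hmmlearn import hmm   # assumed third-party dependency

    def train_class_model(sequences, n_hidden=5, n_iter=50):
        """EM (Baum-Welch) training of one HMM lambda = (Pi, A, B) for one action class.

        sequences : list of 1-D integer arrays of quantized codebook indices x_{q,t}.
        """
        X = np.concatenate(sequences).reshape(-1, 1)      # stacked observations
        lengths = [len(s) for s in sequences]             # per-sequence lengths
        model = hmm.CategoricalHMM(n_components=n_hidden, n_iter=n_iter,
                                   random_state=0)
        model.fit(X, lengths)    # EM maximizes p(X_{q,train} | lambda)
        return model             # startprob_, transmat_, emissionprob_ correspond to (Pi, A, B)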
The invention has the following beneficial effects: the hidden states of the HMM capture the internal relation between the upper and lower envelopes of the human micro-Doppler signal and the torso frequency variation, and the Markov property of the hidden state variable represents the randomness of the state changes of a moving human body, thereby improving subsequent human action classification performance.
Detailed Description
The effectiveness of the invention is analyzed below in combination with experiments.
The experiment is based on a TI frequency-modulated continuous-wave radar (AWR1642); radar echo data were collected for five motion actions performed by four persons at 3.5-5 m from the radar. The five actions are walking, forward falling, stooping, squatting, and in-place arm swinging. A total of 2750 time-frequency spectrograms were acquired, of which 1500 were used for classifier training and 1250 for testing.
For human action classification, an HMM is first trained for each motion class, yielding the model parameters λ_i of the i-th motion class. In the recognition and classification stage, the observation sequence X_q of the human action to be classified is evaluated under each class model λ_i = (Π, A, B): the probability p(X_q | λ_i) of the observation sequence under the model is computed with the forward-backward algorithm. The action class whose HMM gives the maximum probability p(X_q | λ_i) is the recognized human action class.
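The class decision can be sketched with a plain log-domain forward recursion that computes p(X_q | λ_i) for each trained class model and selects the maximum; the dictionary layout of the models is an assumption made for this illustration.

    import numpy as np

    def log_likelihood(obs, Pi, A, B):
        """Forward algorithm in the log domain: returns log p(obs | lambda)."""
        log_A, log_B = np.log(A), np.log(B)
        log_alpha = np.log(Pi) + log_B[:, obs[0]]
        for o in obs[1:]:
            # alpha_j(t) = b_j(o_t) * sum_i alpha_i(t-1) * a_ij, computed in log space
            log_alpha = log_B[:, o] + np.logaddexp.reduce(log_alpha[:, None] + log_A, axis=0)
        return np.logaddexp.reduce(log_alpha)

    def classify(obs, models):
        """models maps an action name to its parameters (Pi, A, B); returns the best class."""
        scores = {name: log_likelihood(obs, *lam) for name, lam in models.items()}
        return max(scores, key=scores.get)

Working in the log domain avoids the numerical underflow that the plain forward recursion suffers on long observation sequences.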
Two other human motion classification methods are compared with the method provided by the invention. The experimental results are shown in Table 1.
Table 1. Human action classification results
Action type | Walking | Forward fall | Stooping | Squatting | In-place arm swing | Total
Number of test samples | 250 | 250 | 250 | 250 | 250 | 1250
Correctly classified (Method 1) | 210 | 219 | 204 | 42 | 236 | 1001
Classification accuracy (%) (Method 1) | 84.00 | 87.60 | 81.60 | 16.80 | 94.40 | 80.08
Correctly classified (Method 2) | 201 | 210 | 226 | 230 | 241 | 1108
Classification accuracy (%) (Method 2) | 80.40 | 84.00 | 90.40 | 92.00 | 96.40 | 88.64
Correctly classified (present invention) | 224 | 225 | 240 | 238 | 247 | 1174
Classification accuracy (%) (present invention) | 89.60 | 90.00 | 96.00 | 95.20 | 98.80 | 93.92
Method 1 is an action classification method based on the envelope-variation characteristics of the human micro-Doppler image. Method 2 extracts features such as the micro-Doppler bandwidth, limb motion period, and torso mean Doppler from the human micro-Doppler spectrogram and then classifies human actions with a support vector machine (SVM).
As the classification results in the table show, Method 2 does not consider body-state changes during human motion and classifies random, non-stationary motions poorly, so its accuracy is low for the relatively random walking and forward-falling actions. Although Method 1 uses the state changes of human motion, the extracted state changes depend on the envelope, and the envelope of the micro-Doppler signal varies considerably even within the same motion; its classification of walking and forward falling therefore improves, but it is not effective for some other motions. The human action classification method proposed by the invention relies on the constructed multi-observation-variable HMM, which finds the internal relation between the upper and lower envelopes of the human micro-Doppler signal and the torso frequency variation, presents this relation through the hidden states of the model, and uses it to improve action classification. The classification accuracies in the table show that the proposed method performs well and further improves the classification of highly random motions (walking and forward falling).

Claims (1)

1. A human body action classification recognition method based on a multi-observation variable HMM model is characterized by comprising the following steps:
s1, obtaining observation sequences which are respectively the upper envelope frequency, the lower envelope frequency and the trunk Doppler frequency of the human body micro Doppler signal, wherein the change sequences of the upper envelope frequency and the lower envelope frequency are extracted through the following formula:
F_H(t) = max{ f ∈ [-f_m, f_m] : Sp(f, t) > 0 }
F_L(t) = min{ f ∈ [-f_m, f_m] : Sp(f, t) > 0 },  0 ≤ t ≤ T
where F_H is the frequency variation sequence of the upper envelope, F_L is the frequency variation sequence of the lower envelope, Sp(f, t) is the micro-Doppler spectrogram, f is the micro-Doppler frequency, t is time, [-f_m, f_m] is the Doppler frequency range of the spectrogram, and T is the total duration;
extracting the torso Doppler frequency variation sequence by peak detection, namely taking the point with the maximum amplitude at time t of the spectrogram and recording the torso Doppler frequency at that instant as F_T(t), where F_T(t) satisfies the following formula:
F_T(t) = argmax_{f ∈ [-f_m, f_m]} Sp(f, t)
s2, quantifying the observed variable, comprising:
assume a training set of M vector sources:
Tr = {x_1, x_2, ..., x_M};
the source vector is 3-dimensional:
x_m = (F_H(t_m), F_T(t_m), F_L(t_m)),  m = 1, 2, ..., M
where F_H(t_m), F_T(t_m), F_L(t_m) are the upper-envelope, torso, and lower-envelope frequencies of the micro-Doppler signal at time t_m; setting the number of code vectors as N and the set of all code vectors as the codebook, the codebook is expressed as C = {c_1, c_2, ..., c_N}, where each code vector is 3-dimensional: c_n = (c_{n,h}, c_{n,t}, c_{n,l}), n = 1, 2, ..., N, and c_{n,h}, c_{n,t}, c_{n,l} are the three codewords of the n-th code vector, corresponding to the states of the three observation variables; the coding region associated with code vector c_n is denoted S_n, so the partition of the space is P = {S_1, S_2, ..., S_N}; if the source vector x_m lies in S_n, its approximation is c_n, written Ap(x_m) = c_n; the approximation quality is measured by the average distortion, expressed as follows:
D_av = (1/M) Σ_{m=1}^{M} || x_m - Ap(x_m) ||^2
using the LBG algorithm, the code vectors c_n that minimize D_av are found, yielding the codebook C;
for any vector to be quantized, x = (f_h(t), f_t(t), f_l(t)), where f_h(t), f_t(t), f_l(t) are the upper-envelope, torso, and lower-envelope frequencies of the human micro-Doppler signal at time t, the Euclidean distance d_n between x and each code vector c_n in the codebook is computed by the following formula:
d_n = || x - c_n ||_2
the code vector with the smallest Euclidean distance is the quantized vector x_q of x;
S3, constructing a multi-observation variable HMM model:
definition Q is the set of all possible hidden states and codebook C is the set of all possible observed states, i.e.:
Q = {q_1, q_2, ..., q_K},  C = {c_1, c_2, ..., c_N}
wherein K is the number of possible hidden states, N is the number of all possible observed states;
for a micro Doppler signal of human motion, I is defined as a hidden state sequence, and the corresponding observation sequence is a quantized vector sequence X of the frequency combination of an upper envelope, a trunk and a lower envelopeqThe sequence length is the total duration:
I={i1,i2,...,iT},Xq={xq,1,xq,2,...,xq,T}
wherein any hidden state itE.g. Q, any observation state xq,t∈C;
If the hidden state at any time t is only dependent on the previous hidden state, it=qiHidden state i at time t +1t+1=qjFrom time tHidden state transition probability a to time t +1ijExpressed as:
aij=p(it+1=qj|it=qi)
then the hidden state transition matrix a ═ aij]K×K
Assuming that the observation state at any time only depends on the hidden state at the current time, if the hidden state i at the time tt=qjObserved state x at this timeq,t=cnAnd the three observation variables are independent from each other, the probability b of the observation state generated in the hidden state at the momentj(n) is:
b_j(n) = p(x_{q,t} = c_n | i_t = q_j) = p(c_{n,h} | i_t = q_j) · p(c_{n,t} | i_t = q_j) · p(c_{n,l} | i_t = q_j)
and the emission matrix generated from the observation states is B = [b_j(n)]_{K×N};
defining the hidden state probability distribution Π at the initial time as:
Π = [π(i)]_K
where π(i) = p(i_1 = q_i);
forming the HMM from three parameters, namely the initial hidden state probability distribution, the hidden state transition matrix, and the emission matrix, the HMM being denoted λ = (Π, A, B);
S4, performing parameter learning on the HMM λ = (Π, A, B) with the EM algorithm: given a training observation sequence X_{q,train}, the parameters λ = (Π, A, B) are estimated so that the conditional probability p(X_{q,train} | λ) of the observation sequence under the model is maximized, and human actions are classified and recognized with the trained models.
CN202011024486.4A 2020-09-25 2020-09-25 Human body action classification recognition method based on multi-observation variable HMM model Active CN112163510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011024486.4A CN112163510B (en) 2020-09-25 2020-09-25 Human body action classification recognition method based on multi-observation variable HMM model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011024486.4A CN112163510B (en) 2020-09-25 2020-09-25 Human body action classification recognition method based on multi-observation variable HMM model

Publications (2)

Publication Number Publication Date
CN112163510A CN112163510A (en) 2021-01-01
CN112163510B true CN112163510B (en) 2022-04-22

Family

ID=73864017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011024486.4A Active CN112163510B (en) 2020-09-25 2020-09-25 Human body action classification recognition method based on multi-observation variable HMM model

Country Status (1)

Country Link
CN (1) CN112163510B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116776158B (en) * 2023-08-22 2023-11-14 长沙隼眼软件科技有限公司 Target classification method, device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872418A (en) * 2010-05-28 2010-10-27 电子科技大学 Detection method based on group environment abnormal behavior
CN103258193A (en) * 2013-05-21 2013-08-21 西南科技大学 Group abnormal behavior identification method based on KOD energy feature
CN106407905A (en) * 2016-08-31 2017-02-15 电子科技大学 Machine learning-based wireless sensing motion identification method
WO2019006473A1 (en) * 2017-06-30 2019-01-03 The Johns Hopkins University Systems and method for action recognition using micro-doppler signatures and recurrent neural networks
CN109711389A (en) * 2019-01-16 2019-05-03 华南农业大学 A kind of milking sow posture conversion identification method based on Faster R-CNN and HMM
CN111273284A (en) * 2020-03-06 2020-06-12 电子科技大学 Human body trunk micromotion state change feature extraction method
CN111273283A (en) * 2020-03-06 2020-06-12 电子科技大学 Trunk movement parameter extraction method based on human body three-dimensional micro Doppler signal
CN111352086A (en) * 2020-03-06 2020-06-30 电子科技大学 Unknown target identification method based on deep convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101640077B1 (en) * 2009-06-05 2016-07-15 삼성전자주식회사 Apparatus and method for video sensor-based human activity and facial expression modeling and recognition

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872418A (en) * 2010-05-28 2010-10-27 电子科技大学 Detection method based on group environment abnormal behavior
CN103258193A (en) * 2013-05-21 2013-08-21 西南科技大学 Group abnormal behavior identification method based on KOD energy feature
CN106407905A (en) * 2016-08-31 2017-02-15 电子科技大学 Machine learning-based wireless sensing motion identification method
WO2019006473A1 (en) * 2017-06-30 2019-01-03 The Johns Hopkins University Systems and method for action recognition using micro-doppler signatures and recurrent neural networks
CN109711389A (en) * 2019-01-16 2019-05-03 华南农业大学 A kind of milking sow posture conversion identification method based on Faster R-CNN and HMM
CN111273284A (en) * 2020-03-06 2020-06-12 电子科技大学 Human body trunk micromotion state change feature extraction method
CN111273283A (en) * 2020-03-06 2020-06-12 电子科技大学 Trunk movement parameter extraction method based on human body three-dimensional micro Doppler signal
CN111352086A (en) * 2020-03-06 2020-06-30 电子科技大学 Unknown target identification method based on deep convolutional neural network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Novel Approach For Known and Unknown Target Discrimination Using HRRP; Daiying Zhou; Research Journal of Applied Sciences, Engineering and Technology; 2013-02-21; Vol. 5, No. 6; pp. 1943-1949 *
Action recognition using micro-Doppler signatures and a recurrent neural network; Jeff Craley et al.; 2017 51st Annual Conference on Information Sciences and Systems (CISS); 2017-05-15; pp. 1-5 *
Research on Human Action Recognition Based on Micro-Doppler Characteristics; Hu Xiaolong; China Master's Theses Full-text Database, Information Science and Technology; 2022-01-15 (No. 01); pp. I136-2114 *
Motion Behavior Recognition Based on Abstract Hidden Markov Models; Qian Kun et al.; Pattern Recognition and Artificial Intelligence; 2009-10-27; Vol. 22, No. 3; pp. 433-439 *
Automatic Recognition of Pig States Based on Hidden Markov Models; Zhang Sunan et al.; Heilongjiang Animal Husbandry and Veterinary Medicine; 2016-12-31 (No. 11); pp. 97-99, 103, 294 *
Research on Micro-Doppler Information Extraction Methods and Applications in Radar Signals; Lyu Meng; China Master's Theses Full-text Database, Information Science and Technology; 2014-06-15 (No. 6); pp. I136-630 *

Also Published As

Publication number Publication date
CN112163510A (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN110797021B (en) Hybrid speech recognition network training method, hybrid speech recognition device and storage medium
Zhang et al. Human activity recognition based on motion sensor using u-net
Kim et al. Human activity classification based on micro-Doppler signatures using an artificial neural network
CN110045348A (en) A kind of human motion state classification method based on improvement convolutional neural networks
Qian et al. A noble double-dictionary-based ECG compression technique for IoTH
CN111273284B (en) Human body trunk micromotion state change feature extraction method
Yang et al. A multidimensional feature extraction and selection method for ECG arrhythmias classification
CN104887222A (en) Reversible electroencephalogram analysis method
CN110520935A (en) Learn sleep stage from radio signal
CN110766099A (en) Electrocardio classification method combining discriminant deep belief network and active learning
CN110269605A (en) A kind of electrocardiosignal noise recognizing method based on deep neural network
CN112163510B (en) Human body action classification recognition method based on multi-observation variable HMM model
Qu et al. Human activity recognition based on WRGAN-GP-synthesized micro-Doppler spectrograms
Zhao et al. Continuous human motion recognition using micro-Doppler signatures in the scenario with micro motion interference
Shao et al. Deep learning methods for personnel recognition based on micro-Doppler features
Chen et al. Human activity recognition using temporal 3DCNN based on FMCW radar
Xiong et al. An unsupervised dictionary learning algorithm for neural recordings
CN110414426B (en) Pedestrian gait classification method based on PC-IRNN
Saeed et al. Comparison of classifier architectures for online neural spike sorting
Wang et al. Rapid recognition of human behavior based on micro-Doppler feature
Zhu et al. A dataset of human motion status using ir-uwb through-wall radar
Yan et al. Analyzing Wrist Pulse Signals Measured with Polyvinylidene Fluoride Film for Hypertension Identification.
CN110111360B (en) Through-wall radar human body action characterization method based on self-organizing mapping network
Xu et al. Semisupervised radar-based gait recognition in the wild via ensemble determination strategy
Wang et al. Respiration and Heartbeat Rates Measurement Based on Convolutional Sparse Coding

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant