CN113705339A - Cross-user human behavior identification method based on adversarial domain adaptation strategy - Google Patents
Cross-user human behavior identification method based on adversarial domain adaptation strategy
- Publication number
- CN113705339A (application number CN202110802467.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- classifier
- fea
- feature
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a cross-user human behavior recognition method based on an adversarial domain adaptation strategy, belonging to the technical field of behavior recognition. The method preprocesses and images raw acceleration and gyroscope data into single-channel moving images, and extracts features from two dimensions: image space and time series. The feature extraction network and the two classifiers are trained in three steps using an adversarial domain adaptation strategy, so as to optimize the distribution of the extracted features and reduce the confusion at feature decision boundaries caused by crossing users; a minimum class confusion loss is introduced to reduce the confusion at feature overlaps caused by crossing users. Finally, the optimally distributed features are input into the classifiers, the prediction results of the two classifiers are considered jointly, and the human behavior is classified to obtain the behavior state. The invention reduces the influence of inter-individual differences and effectively improves the accuracy of cross-user human behavior recognition based on sensor signals.
Description
Technical Field
The invention belongs to the technical field of behavior recognition, and particularly relates to a cross-user human behavior recognition method based on an adversarial domain adaptation strategy.
Background
Human behavior recognition has been a research hotspot in recent years. Due to the rapid development of the Internet of Things industry and the wide popularity of terminal devices (such as mobile phones, smart wristbands, smart watches, and cameras), human behavior recognition plays an important role in many fields closely related to daily life (such as elderly care, medical assistance, and exercise and health monitoring).
At present, human behavior recognition is performed mainly in two ways. One is based on video or image information: because image processing technology is mature and image information is intuitive, the recognition accuracy of this approach is high, but it depends on cameras and similar devices capable of capturing images of the human body, so it is quite limited, raises privacy and security concerns, and cannot be widely applied. The other is based on wearable devices, which judge the user's current activity from the various sensor signals in the device; this approach handles the privacy and security problems well and achieves high activity recognition accuracy, so it has become the more popular and promising approach.
There have been many studies on human behavior recognition based on sensor signals, but physiological signals are more susceptible to inter-individual differences than video information; that is, a model's performance degrades more severely when processing signal data from a new user. Although this could be mitigated by training on a large amount of user data to obtain a more generalizable model, collecting and labeling such data is cumbersome and time-consuming. Solving the cross-user recognition problem in human behavior recognition is therefore a meaningful and challenging task.
Currently, there is little research on cross-user human behavior recognition, and existing solutions are essentially based on domain adaptation strategies. The purpose of domain adaptation is to migrate the knowledge learned by a neural network from the source domain to the target domain, so generic domain adaptation methods are often used to solve the cross-user problem. Some researchers use the Maximum Mean Discrepancy (MMD) method, a distance-based generic domain adaptation method that aims to align the feature distributions of the source and target domains; however, since the distance metric is chosen entirely by hand, a proper criterion is difficult to find. Other researchers have used adversarial domain adaptation methods such as SA-GAN (Subject Adaptor GAN) and DANN (Domain-Adversarial Neural Network) in studies of cross-user human behavior recognition. However, since signals are more susceptible to individual differences, these generic domain adaptation methods are unsatisfactory when inter-individual differences are large. A basic assumption of generic domain adaptation is that same-class features of the source and target domains lie closer in the subspace than different-class features; in a sensor-signal-based cross-user scenario, however, the large differences in physiology and behavioral habits between individuals make the feature confusion problem worse than in general domain adaptation tasks, so this assumption may not hold.
Specifically, the feature confusion problem has two manifestations in the cross-user human behavior recognition scenario: Confusion at Decision Boundaries (CDB) and Confusion at Overlaps (COL). The former is the case where target domain features fall right near the decision boundary of the classifier, leading to an unsatisfactory classification effect; the latter is the case where different-class features of the source and target domains overlap, leading to classification confusion. It is these two feature confusion problems that make the task of cross-user human behavior recognition based on sensor signals so difficult.
Disclosure of Invention
The invention aims to address the above problems by providing an adversarial domain adaptation method for the cross-user human behavior recognition task based on sensor signals.
The cross-user human behavior recognition method based on an adversarial domain adaptation strategy disclosed by the invention comprises the following steps:
step 1: method for acquiring three-axis accelerometer data and three-axis gyroscope data of source domain user under different behaviors based on wearable sensor equipment to obtain source domain data xsAnd sets corresponding behavior label ys(ii) a Acquiring triaxial accelerometer data and triaxial gyroscope data of a target domain user based on wearable sensor equipment to obtain original signal target domain data xt;
Step 2: for source domain data xsAnd target domain data xtCutting and segmenting are carried out, and signal data are converted into single-channel moving image data based on imaging processing;
In this step, the signals are converted into images by stacking the signal slices of each channel by columns.
Step 3: input the single-channel moving image data into a feature extraction network E (a parallel deep neural network) and extract signal features from the two dimensions of image space and time series, obtaining image space features fea_cnn and time series features fea_lstm;
The image space features are extracted with a Convolutional Neural Network (CNN), whose output is the extracted image space feature fea_cnn;
The time series features are extracted with a Long Short-Term Memory network (LSTM), whose output is the extracted time series feature fea_lstm;
In a possible implementation, the convolutional neural network used to extract the image space features fea_cnn comprises convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, pooling layer 3, and a fully connected layer in which feature flattening is performed; its output is the extracted image space feature fea_cnn;
In one possible implementation, the parameters of the long short-term memory network used to extract the time series features fea_lstm are set as follows: 64 hidden units and 3 layers, followed by a fully connected layer that performs feature flattening; its output is taken as the extracted time series feature fea_lstm.
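As an illustrative sketch only, the parallel feature extraction network described above may be implemented along the following lines in PyTorch; the kernel sizes, channel counts, input shape, and the use of LazyLinear are assumptions for illustration and are not specified by the invention:

```python
import torch
import torch.nn as nn

class ParallelExtractor(nn.Module):
    """Sketch of the feature extraction network E: a CNN branch for image
    space features (fea_cnn, 128-dim) and an LSTM branch for time series
    features (fea_lstm, 64-dim). Layer sizes are illustrative assumptions."""
    def __init__(self, n_channels=6):
        super().__init__()
        # CNN branch: three conv/pool stages plus a fully connected layer
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Flatten(),
            nn.LazyLinear(128),  # flattening FC -> fea_cnn (128-dim)
        )
        # LSTM branch: 64 hidden units, 3 layers, then a flattening FC
        self.lstm = nn.LSTM(n_channels, 64, num_layers=3, batch_first=True)
        self.fc_lstm = nn.Linear(64, 64)  # -> fea_lstm (64-dim)

    def forward(self, img, seq):
        fea_cnn = self.cnn(img)                    # (B, 128)
        out, _ = self.lstm(seq)
        fea_lstm = self.fc_lstm(out[:, -1, :])     # (B, 64)
        return torch.cat([fea_cnn, fea_lstm], 1)   # joint 192-dim feature

x_img = torch.randn(4, 1, 128, 6)  # single-channel "moving image" (B,1,T,channels)
x_seq = torch.randn(4, 128, 6)     # raw sequence for the LSTM branch
feat = ParallelExtractor()(x_img, x_seq)
```

The CNN branch here pools only along the time axis so that the narrow channel axis of the single-channel image is not reduced to zero; an implementation may equally pool both axes if the image is larger.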
Step 4: flatten the image space features fea_cnn and the time series features fea_lstm, then concatenate them to obtain the joint features, i.e., the finally extracted features, which are input into a classifier;
In the invention, the data feature extraction of source domain users and target domain users is not distinguished and follows the same network flow; y_s is used only in the training phase to adjust the classifier and the feature extraction network (network parameter adjustment);
In one possible implementation, the structure of the classifier is: a normalization layer, an activation layer, and a fully connected layer, where the activation function is the ReLU function. Corresponding to the specific feature extraction network structure, the input dimension is 192, comprising 128-dimensional image space features and 64-dimensional time series features, and the output dimension is the number of human behavior categories set by the recognition task. The classifier is part of the deep network; its input is the joint feature obtained after concatenation, and its output is the prediction probability of each human behavior (such as walking, running, going upstairs, going downstairs, etc.). A probability prediction vector p is obtained from the prediction probabilities of all classes, and the behavior category with the largest prediction probability is taken as the final recognition result at inference time.
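In one possible sketch, a classifier head with the described structure (normalization layer, activation layer with ReLU, fully connected layer; input dimension 192, output dimension equal to the number of behavior classes) may look as follows; the class count of 6 is an assumed example:

```python
import torch
import torch.nn as nn

class BehaviorClassifier(nn.Module):
    """Sketch of one classifier head matching the described structure:
    normalization layer, ReLU activation layer, fully connected layer.
    The number of classes (6) is an assumed example value."""
    def __init__(self, in_dim=192, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm1d(in_dim),        # normalization layer
            nn.ReLU(),                     # activation layer
            nn.Linear(in_dim, n_classes),  # fully connected output layer
        )

    def forward(self, fea):
        # probability prediction vector p over the behavior categories
        return torch.softmax(self.net(fea), dim=1)

p = BehaviorClassifier()(torch.randn(8, 192))
pred = p.argmax(dim=1)  # behavior with the highest predicted probability
```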
Step 5: in the training process, to address confusion at decision boundaries and confusion at overlaps, two classifiers C_1 and C_2 with the same structure but different initializations are trained based on the Maximum Classifier Discrepancy (MCD) strategy, and the feature extraction network E and the classifiers C_1, C_2 are trained in an adversarial manner to reduce the confusion of features at decision boundaries;
A Minimum Class Confusion (MCC) loss is introduced in the training process to reduce the influence of feature overlap confusion;
The adversarial training of the feature extraction network E and the classifiers C_1, C_2 proceeds as follows:
In the first step, the feature extraction network E and the classifiers C_1, C_2 are trained using the source domain data x_s and the behavior labels y_s; the loss of this step is the cross-entropy loss L_CE;
In the second step, the feature extraction network E is fixed, and the classifiers C_1, C_2 are trained using the source domain data x_s, the behavior labels y_s, and the target domain data x_t to maximize the discrepancy between the two classifiers, defined as L_dis = (1/M) Σ_{m=1}^{M} |p_1m(y|x_t) − p_2m(y|x_t)|, where M is the number of label categories and p_1m(y|x_t), p_2m(y|x_t) denote the prediction outputs of the two classifiers for class m. Under the premise of guaranteeing classification accuracy, the loss of this step is defined as: L_step2 = L_CE − L_dis;
In the third step, the classifiers C_1, C_2 are fixed, and the feature extraction network E is trained using the source domain data x_s, the behavior labels y_s, and the target domain data x_t; the MCC loss is introduced in this step, expressed as L_MCC = (1/M) Σ_{j=1}^{M} Σ_{j'≠j} C_{jj'}, where C_{jj'} is the inter-class confusion matrix computed on the target domain. The loss of this step is defined as: L_step3 = α·L_dis + β·L_MCC, where α and β are adjustable coefficients.
Among the three training steps, the first adopts a conventional supervised training mode to obtain a reliable human behavior recognition network from source domain data; the second and third steps form a max-min game, and this adversarial training mode keeps the extracted features as far from the decision boundary as possible while preserving reliable classification, reducing the influence of confusion at decision boundaries. Minimizing the class confusion loss further optimizes the distribution of the extracted features, reducing feature overlap between different classes and the influence of confusion at overlaps.
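A minimal sketch of one iteration of the three-step adversarial scheme is given below, under assumed toy networks and hyper-parameters; the MCC term is a simplified version without the entropy weighting of the original Minimum Class Confusion formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def discrepancy(p1, p2):
    # L_dis: mean absolute difference between the two classifiers' outputs
    return (p1 - p2).abs().mean()

def mcc_loss(logits_t):
    # Simplified Minimum Class Confusion loss: off-diagonal mass of the
    # target-domain inter-class confusion matrix (no entropy weighting)
    p = F.softmax(logits_t, dim=1)          # (B, M) target predictions
    C = p.t() @ p                           # (M, M) inter-class confusion matrix
    C = C / C.sum(dim=1, keepdim=True)      # row-normalize
    return (C.sum() - C.trace()) / C.size(0)

# Toy stand-ins for E, C1, C2 so the sketch runs end to end
E = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
C1, C2 = nn.Linear(16, 3), nn.Linear(16, 3)
opt_e = torch.optim.SGD(E.parameters(), lr=0.01)
opt_c = torch.optim.SGD(list(C1.parameters()) + list(C2.parameters()), lr=0.01)
xs, ys, xt = torch.randn(8, 10), torch.randint(0, 3, (8,)), torch.randn(8, 10)
alpha, beta = 1.0, 1.0

# Step 1: supervised training of E, C1, C2 on the source domain (loss = L_CE)
opt_e.zero_grad(); opt_c.zero_grad()
f = E(xs)
(F.cross_entropy(C1(f), ys) + F.cross_entropy(C2(f), ys)).backward()
opt_e.step(); opt_c.step()

# Step 2: fix E, train C1/C2 to maximize discrepancy (L_step2 = L_CE - L_dis)
opt_c.zero_grad()
fs, ft = E(xs).detach(), E(xt).detach()
l_ce = F.cross_entropy(C1(fs), ys) + F.cross_entropy(C2(fs), ys)
l_dis = discrepancy(F.softmax(C1(ft), 1), F.softmax(C2(ft), 1))
(l_ce - l_dis).backward(); opt_c.step()

# Step 3 (inner loop): fix C1/C2, train E (L_step3 = alpha*L_dis + beta*L_MCC)
for prm in list(C1.parameters()) + list(C2.parameters()):
    prm.requires_grad_(False)
for _ in range(4):
    opt_e.zero_grad()
    ft = E(xt)
    l_dis = discrepancy(F.softmax(C1(ft), 1), F.softmax(C2(ft), 1))
    (alpha * l_dis + beta * mcc_loss((C1(ft) + C2(ft)) / 2)).backward()
    opt_e.step()
```

The inner loop in step 3 mirrors the description of repeating the third step several times per iteration to stabilize feature extraction; the repeat count of 4 is an assumption.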
Step 6: make a comprehensive decision based on the output results of classifiers C_1 and C_2 to determine the human behavior category of the target domain user, thereby recognizing the target domain user's behavior activities.
Further, in step 6, the comprehensive decision is: add the prediction probabilities of the same category to obtain a fused prediction probability, and take the human behavior category with the largest fused probability as the final prediction result for the target domain user.
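The comprehensive decision above amounts to summing the two classifiers' probability vectors and taking the arg-max; a minimal sketch with hypothetical probability values:

```python
import torch

def fuse_predictions(p1, p2):
    """Step 6's comprehensive decision: add the two classifiers' class
    probabilities and take the behavior with the largest fused value."""
    fused = p1 + p2
    return fused.argmax(dim=1)

# Hypothetical probability vectors for 2 samples over 4 behavior classes
p1 = torch.tensor([[0.1, 0.6, 0.2, 0.1],
                   [0.3, 0.3, 0.2, 0.2]])
p2 = torch.tensor([[0.2, 0.3, 0.4, 0.1],
                   [0.5, 0.2, 0.2, 0.1]])
labels = fuse_predictions(p1, p2)  # fused rows: [0.3,0.9,0.6,0.2], [0.8,0.5,0.4,0.3]
```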
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
The invention solves the cross-user human behavior recognition task with a deep neural network, avoiding the traditional step of manually extracting features and achieving higher time efficiency; meanwhile, the adversarial domain adaptation strategy weakens the influence of confusion at feature boundaries and at overlaps during training, overcoming the degraded transfer performance of other methods when individual differences are large and giving the model greater stability to inter-individual variation.
Drawings
FIG. 1 is a flowchart of the cross-user human behavior recognition method based on an adversarial domain adaptation strategy according to an embodiment of the present invention;
FIG. 2 is a block diagram of the cross-user human behavior recognition method based on an adversarial domain adaptation strategy according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a neural network structure used in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
Referring to fig. 1 and fig. 2, the cross-user human behavior recognition method based on the adversarial domain adaptation strategy provided by the embodiment of the present invention includes three major parts, namely signal preprocessing, training of a network model, and prediction of an activity label, wherein the training of the network model is divided into two steps, namely learning of human behavior recognition knowledge in a source domain and migrating the knowledge to a target domain.
The signals adopted by human behavior recognition are generally acceleration signals and gyroscope signals, and the sensors can be fixed at the positions of the waist, the arms, the calves and the like to detect the motion conditions of key parts of the human body, so that the sensors can be used as bases for judging behaviors of standing, walking, running, going upstairs and downstairs, riding and the like. The embodiment of the invention has no limitation on the type of signals, so common acceleration and gyroscope signals and other related multi-modal signals can be used as the input of the network.
In the step of preprocessing the signal, firstly, the signal is subjected to band-pass filtering and median filtering to remove noise components mixed in the signal, and then, signal data is subjected to down-sampling to reduce the data scale and improve the training speed of the network. After the above two steps of processing, the signal data is divided into slices, the duration of each slice is 5 seconds, and an overlapping part of 2 seconds exists between two adjacent slices. The slicing time length is selected to ensure that sufficient action information can be contained in the slices, and the number of sample points of a single slice is not too large. The last step of data preprocessing is also the most critical step, namely, the imaging processing, and the signal data of each channel is converted into single-channel moving image data. Specifically, the signal slices of each channel are stacked in columns, and operations such as Fourier transform and the like are omitted to simplify the imaging process. The finally obtained single-channel image data is used as the input of the neural network.
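The slicing and imaging steps described above can be sketched as follows; the 50 Hz sampling rate and 6 signal channels are assumed example values, while the 5-second window and 2-second overlap follow the description:

```python
import numpy as np

def slice_and_stack(signal, fs=50, win_s=5, overlap_s=2):
    """Cut a multi-channel signal (T, n_channels) into windows of win_s
    seconds with overlap_s seconds of overlap between adjacent slices, and
    stack each window's channel columns into a single-channel 'image'.
    The sampling rate fs=50 Hz is an assumed example value."""
    win = win_s * fs
    step = (win_s - overlap_s) * fs
    slices = []
    for start in range(0, signal.shape[0] - win + 1, step):
        window = signal[start:start + win]       # (win, n_channels)
        slices.append(window[np.newaxis, :, :])  # prepend the image channel axis
    return np.stack(slices)                      # (n_slices, 1, win, n_channels)

sig = np.random.randn(1000, 6)  # ~20 s of 6-channel accel+gyro data at 50 Hz
imgs = slice_and_stack(sig)     # single-channel images fed to the network
```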
The learning of human behavior recognition knowledge in the source domain is ordinary supervised learning, which has been studied in depth by academia; there is abundant literature on traditional support vector machines, random forests, k-nearest-neighbor algorithms, and deep neural networks. In studies using deep neural networks, researchers have concluded that Convolutional Neural Networks (CNN) and Long Short-Term Memory networks (LSTM) perform better. To make full use of the data information contained in the moving images, the embodiment of the invention uses a parallel deep network to extract deep features from two dimensions, the image space dimension and the time series dimension: after the signals are converted into moving images, the latent relations among the channel signals can be extracted from the image, and since human behavior is a continuous process in time, information from the previous moment helps recognize the behavior at the current moment. The image space feature fea_cnn is extracted with a CNN, because CNNs have natural advantages in image processing and more easily capture the key features contained in an image; the time series feature fea_lstm is extracted with an LSTM, because LSTMs have advantages in capturing sequential features with temporal dependencies. The features of the two dimensions are flattened, concatenated, and used as the input of the source domain classifier to train the network to recognize human behaviors. Since the image space contains more usable information, the weight of the time series features in classification should not be too large, so the embodiment of the invention makes the dimension of the image space feature fea_cnn higher than that of the time series feature fea_lstm; the preferred dimension ratio of the two is 2:1.
The biggest technical difficulty of the cross-user human behavior recognition task lies in the stage of migrating knowledge to the target domain: differences in age, sex, behavior habits, and so on between individuals cause large differences between the source domain and target domain signal data, which severely reduces the classification accuracy of the model, and general domain adaptation methods cannot cope well when inter-individual differences are large. After analysis, the cause is attributed to the feature confusion problem, whose manifestations are confusion at decision boundaries, meaning that target domain features fall right near the decision boundaries of the classifier and the classification effect is unsatisfactory, and confusion at overlaps, meaning that different-class features of the source and target domains overlap and cause classification confusion.
Accordingly, a maximum classifier discrepancy strategy is adopted to train the network adversarially, so as to optimize the distribution of the extracted features and reduce the influence of confusion at decision boundaries. The specific training is divided into the following three steps:
The first step implements the learning of source domain knowledge, i.e., the feature extraction network E and the classifiers C_1, C_2 are trained using the source domain data x_s and y_s. The loss of this step is the cross-entropy loss L_CE = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic·log(p_ic), where N is the number of samples, M denotes the number of categories, y_ic is an indicator that takes 1 when the true class of sample i equals c and 0 otherwise, and p_ic is the classifier's predicted probability that sample i belongs to class c.
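As a quick numerical check of the cross-entropy formula, the manual computation −(1/N) Σ_i log p_ic over the softmax probabilities of the true classes matches PyTorch's built-in cross-entropy on a toy batch:

```python
import torch
import torch.nn.functional as F

# Toy batch: 2 samples, 3 classes; targets are the true classes c
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])
targets = torch.tensor([0, 1])

# Manual L_CE: mean of -log(p_ic) at each sample's true class
p = F.softmax(logits, dim=1)
manual = -torch.log(p[torch.arange(2), targets]).mean()

# Built-in cross-entropy (softmax + negative log-likelihood, mean reduction)
builtin = F.cross_entropy(logits, targets)
```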
In the second step, the feature extraction network E is fixed, and the classifiers C_1, C_2 are trained using the source domain data x_s, y_s and the target domain data x_t to maximize the discrepancy between the two classifiers, defined as L_dis = (1/M) Σ_{m=1}^{M} |p_1m − p_2m|, where M is the number of label categories and p_1m, p_2m denote the prediction outputs of the two classifiers for class m. Under the premise of guaranteeing classification accuracy, the loss of this step is defined as: L_step2 = L_CE − L_dis.
In the third step, the classifiers C_1, C_2 are fixed, and the feature extraction network E is trained using the source domain data x_s, y_s and the target domain data x_t. The MCC loss is introduced in this step to mitigate the effect of confusion at feature overlaps; it is expressed as L_MCC = (1/M) Σ_{j=1}^{M} Σ_{j'≠j} C_{jj'}, where C_{jj'} is the inter-class confusion matrix of the target domain, so that the loss function is defined as: L_step3 = α·L_dis + β·L_MCC, where α and β are adjustable parameters.
Referring to fig. 2, in the specific training process, the three steps are cycled until the network loss converges; within each iteration, after the first two steps finish, the third step is repeated as an independent inner loop to ensure the stability of the features extracted by the feature extraction network E.
Referring to fig. 3, in a possible implementation, the feature extraction network E comprises a CNN and an LSTM. The CNN structure is: convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, pooling layer 3, and a fully connected layer in which feature flattening is performed; its output is the extracted image space feature fea_cnn. The specific parameters of the LSTM are: 64 hidden units and 3 layers, followed by a fully connected layer that performs feature flattening; its output is taken as the extracted time series feature fea_lstm.
Finally, the target domain samples are predicted. After the above steps, the network has completed the knowledge migration to the target domain. Target domain sample data x_ti is input, where the subscript i denotes the sample number in the target domain; the two classifiers C_1 and C_2 each give a prediction probability vector, p_1c and p_2c, the vectors p_1c and p_2c are added, and the class with the largest fused prediction probability is selected as the model's final prediction for the target domain sample x_ti. In this way, the probability results of the two classifiers are considered jointly, the advantages of the dual-classifier strategy are fully utilized, and the accuracy and reliability of the model in recognizing target domain data are further improved.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.
Claims (8)
1. The cross-user human behavior identification method based on the adversarial domain adaptation strategy is characterized by comprising the following steps of:
step 1: method for acquiring three-axis accelerometer data and three-axis gyroscope data of source domain user under different behaviors based on wearable sensor equipment to obtain source domain data xsAnd sets corresponding behavior label ys(ii) a Acquiring triaxial accelerometer data and triaxial gyroscope data of a target domain user based on wearable sensor equipment to obtain original signal target domain data xt;
Step 2: for source domain data xsAnd target domain data xtCutting and segmenting are carried out, and signal data are converted into single-channel moving image data based on imaging processing;
Step 3: input the single-channel moving image data into a feature extraction network E, and extract signal features from the two dimensions of image space and time series to obtain image space features fea_cnn and time series features fea_lstm;
The image space features fea_cnn are obtained with a convolutional neural network, and the time series features fea_lstm are obtained with a long short-term memory network;
Step 4: flatten the image space features fea_cnn and the time series features fea_lstm, then concatenate them to obtain joint features, which are input into a classifier; the classifier outputs the prediction probability of each human behavior category;
Step 5: based on the maximum classifier discrepancy method in the adversarial domain adaptation strategy, train two classifiers C_1 and C_2 with the same network structure but different initialization parameters; train the feature extraction network E and the classifiers C_1, C_2 in an adversarial manner, with the training loss function set based on the minimum class confusion loss;
Step 6: make a comprehensive decision based on the output results of classifiers C_1 and C_2 to determine the human behavior category of the target domain user.
2. The method of claim 1, wherein in step 2, the signal channels are stacked directly to obtain the single-channel moving image data.
3. The method of claim 1, wherein in step 3, the convolutional neural network comprises, in sequence: convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, pooling layer 3, and a fully connected layer; the fully connected layer flattens the input feature map and outputs the extracted image space feature fea_cnn.
4. The method of claim 1, wherein in step 3, the long short-term memory network comprises 64 hidden units and 3 hidden layers; the last hidden layer is followed by a fully connected layer for flattening, and the output is taken as the extracted time series feature fea_lstm.
5. The method of claim 1, wherein in step 3, the dimension ratio of the extracted image space features fea_cnn to the time series features fea_lstm is 2:1.
6. The method of claim 1, wherein in step 4, the network structure of the classifier is: a normalization layer, an activation layer, and a fully connected layer connected in sequence, where the activation function is the ReLU function.
7. The method according to any one of claims 1 to 6, characterized in that in step 5, the adversarial training of the feature extraction network E and the classifiers C_1, C_2 is specifically:
In the first step, the feature extraction network E and the classifiers C_1, C_2 are trained using the source domain data x_s and the behavior labels y_s; the loss of this step is the cross-entropy loss L_CE;
In the second step, the feature extraction network E is fixed, and the classifiers C_1, C_2 are trained using the source domain data x_s, the behavior labels y_s, and the target domain data x_t to maximize the discrepancy between the two classifiers, defined as L_dis = (1/M) Σ_{m=1}^{M} |p_1m(y|x_t) − p_2m(y|x_t)|, where M is the number of label categories and p_1m(y|x_t), p_2m(y|x_t) denote the prediction outputs of the two classifiers for class m; the loss of this step is: L_step2 = L_CE − L_dis;
In the third step, the classifiers C_1, C_2 are fixed, and the feature extraction network E is trained using the source domain data x_s, the behavior labels y_s, and the target domain data x_t; the loss of this step is: L_step3 = α·L_dis + β·L_MCC, where L_MCC denotes the minimum class confusion loss, computed from the inter-class confusion matrix of the target domain, and α and β are adjustable parameters.
8. The method of claim 7, wherein in step 6, the comprehensive decision is: add the prediction probabilities of the same category output by classifiers C_1 and C_2 to obtain fused prediction probabilities, and take the human behavior category with the largest fused probability as the final prediction result for the target domain user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110802467.8A CN113705339B (en) | 2021-07-15 | 2021-07-15 | Cross-user human behavior recognition method based on antagonism domain adaptation strategy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113705339A true CN113705339A (en) | 2021-11-26 |
CN113705339B CN113705339B (en) | 2023-05-23 |
Family
ID=78648705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110802467.8A Active CN113705339B (en) | 2021-07-15 | 2021-07-15 | Cross-user human behavior recognition method based on antagonism domain adaptation strategy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113705339B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114757237A (en) * | 2022-06-13 | 2022-07-15 | 武汉理工大学 | Speed-independent gait recognition method based on WiFi signals |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160462A (en) * | 2019-12-30 | 2020-05-15 | 浙江大学 | Unsupervised personalized human activity recognition method based on multi-sensor data alignment |
CN111476783A (en) * | 2020-04-13 | 2020-07-31 | 腾讯科技(深圳)有限公司 | Image processing method, device and equipment based on artificial intelligence and storage medium |
CN112084891A (en) * | 2020-08-21 | 2020-12-15 | 西安理工大学 | Cross-domain human body action recognition method based on multi-mode features and counterstudy |
US20210125104A1 (en) * | 2019-10-25 | 2021-04-29 | Onfido Ltd | Machine learning inference system |
Non-Patent Citations (2)
Title |
---|
YALAN YE et al.: "Cross-subject EEG-based Emotion Recognition Using Adversarial Domain Adaption with Attention Mechanism" * |
YE Yalan: "Research on QoS Modeling of a Micro-Communication-Element System Architecture Based on Virtual Circuits" * |
Also Published As
Publication number | Publication date |
---|---|
CN113705339B (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107633207B (en) | AU characteristic recognition methods, device and storage medium | |
US9805255B2 (en) | Temporal fusion of multimodal data from multiple data acquisition systems to automatically recognize and classify an action | |
Zhu et al. | Efficient human activity recognition solving the confusing activities via deep ensemble learning | |
Javeed et al. | Wearable sensors based exertion recognition using statistical features and random forest for physical healthcare monitoring | |
WO2019127273A1 (en) | Multi-person face detection method, apparatus, server, system, and storage medium | |
CN107784282A (en) | The recognition methods of object properties, apparatus and system | |
CN106909938B (en) | Visual angle independence behavior identification method based on deep learning network | |
US11429809B2 (en) | Image processing method, image processing device, and storage medium | |
CN107767416B (en) | Method for identifying pedestrian orientation in low-resolution image | |
CN107590473B (en) | Human face living body detection method, medium and related device | |
CN112784763B (en) | Expression recognition method and system based on local and overall feature adaptive fusion | |
CN110046544A (en) | Digital gesture identification method based on convolutional neural networks | |
CN111128242A (en) | Multi-mode emotion information fusion and identification method based on double-depth network | |
Fang et al. | Dynamic gesture recognition using inertial sensors-based data gloves | |
Cheng et al. | A global and local context integration DCNN for adult image classification | |
CN110991278A (en) | Human body action recognition method and device in video of computer vision system | |
JP2022511221A (en) | Image processing methods, image processing devices, processors, electronic devices, storage media and computer programs | |
CN113705339A (en) | Cross-user human behavior identification method based on antagonism domain adaptation strategy | |
Shanthi et al. | Algorithms for face recognition drones | |
Geng | Research on athlete’s action recognition based on acceleration sensor and deep learning | |
Sarin et al. | Cnn-based multimodal touchless biometric recognition system using gait and speech | |
WO2021155661A1 (en) | Image processing method and related device | |
Sun et al. | Method of analyzing and managing volleyball action by using action sensor of mobile device | |
CN113792807A (en) | Skin disease classification model training method, system, medium and electronic device | |
CN110555342B (en) | Image identification method and device and image equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||