CN111407243B - Pulse signal pressure identification method based on deep learning - Google Patents

Info

Publication number: CN111407243B
Application number: CN202010206857.4A
Authority: CN (China)
Prior art keywords: data, pulse signal, pulse, waveform, features
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN111407243A
Inventors: 邢晓芬, 张弘毅, 郭锴凌, 梁国栋, 徐向民
Current and original assignee: South China University of Technology (SCUT)
Application filed by South China University of Technology (SCUT), priority to CN202010206857.4A
Publication of application CN111407243A; application granted; publication of grant CN111407243B

Classifications

    • A: Human Necessities; A61: Medical or Veterinary Science, Hygiene; A61B: Diagnosis, Surgery, Identification
    • A61B 5/00: Measuring for diagnostic purposes; identification of persons
    • A61B 5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; combined pulse/heart-rate/blood-pressure determination; evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; heart catheters for measuring blood pressure
    • A61B 5/16: Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7225: Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/725: Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B 5/7253: Details of waveform analysis characterised by using transforms
    • A61B 5/726: Details of waveform analysis characterised by using wavelet transforms
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G: Physics; G06: Computing, Calculating or Counting
    • G06N 3/00: Computing arrangements based on biological models; G06N 3/02: Neural networks; G06N 3/04: Architecture, e.g. interconnection topology; G06N 3/045: Combinations of networks
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing; G06F 2218/08: Feature extraction; G06F 2218/10: Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks

Abstract

The invention relates to the field of computers and provides a deep-learning-based method for recognizing psychological stress from pulse signals. From pulse signal data labeled in experiments, the method extracts several kinds of features (statistical features, pulse waveform features, nonlinear features and wavelet transform features), constructs a two-dimensional feature map with a temporal dimension using a sliding time window, and finally builds and trains the corresponding model with a convolutional neural network. With the trained model, the method can analyze newly acquired pulse signal samples, judge the current emotion of the subject, and assess the wearer's stress state in real time. The method accurately identifies the user's current stress state with low computational complexity, supports real-time recognition, and can help users discover psychological problems in time and take corresponding intervention measures, thereby effectively improving quality of life.

Description

Pulse signal pressure identification method based on deep learning
Technical Field
The invention belongs to the fields of artificial intelligence and pattern recognition, relates to a computer information processing method, and particularly relates to a deep-learning-based pulse signal stress recognition method.
Background
With economic and social development, the pace of life keeps accelerating and daily stress grows. Psychological studies have shown that prolonged psychological stress leads to various psychological disorders, including depression, which in turn cause serious physiological harm. Monitoring a user's stress state in real time and applying proper psychological intervention therefore has great research significance. In recent years, mobile internet technology has developed rapidly, and wearable devices play an important role in daily life, gradually becoming popular consumer electronics. Wearable devices are easy to carry, measure accurately, and are highly extensible, making them popular with consumers and researchers alike.
Existing emotion recognition technologies have several limitations: (1) they can only distinguish a few specific emotions, such as sadness and happiness, and recognize stress poorly; (2) they require comprehensive judgments based on multiple physiological signals, and some algorithms use EEG or ECG signals that must be collected with professional laboratory equipment such as an EEG cap, making them hard to apply in daily life; (3) the recognition algorithm is usually a shallow machine-learning algorithm with limited accuracy.
Disclosure of Invention
To address the shortcomings of existing stress recognition methods, a deep-learning-based pulse signal stress recognition method is provided. The method takes pulse signals collected by a wearable device, applies filtering, feature extraction and feature-map construction, and then analyzes the signal features with a convolutional neural network to recognize the user's stress state, achieving real-time signal reception and real-time analysis.
The pulse signal pressure identification method based on deep learning comprises the following steps:
S1, acquiring pulse signals through a wearable device worn on the user's wrist;
S2, performing empirical mode decomposition on the raw pulse signal data and removing baseline drift to obtain waveform signals free of baseline drift;
S3, establishing feature engineering, and extracting statistical features, pulse waveform features, nonlinear features and wavelet transform features from the waveform signals obtained in S2 through a feature extraction module;
S4, setting a sliding time window, translating it sequentially along the time order of the data, extracting feature vectors of the data in each window, and finally splicing all feature vectors into a two-dimensional feature map along the time dimension;
S5, inputting the two-dimensional feature map into a convolutional neural network to recognize the user's stress emotion;
S6, repeatedly training and optimizing the convolutional neural network with a loss function to build a complete real-time stress recognition model.
Further, in step S1, the wearable device collects the waveform of the changing light transmittance of the wrist artery, from which the pulse signal is extracted.
Further, in step S2, the raw data is decomposed by empirical mode decomposition, and all eigenmode functions of the pulse signal are extracted and summed, thereby removing the baseline drift, which has envelope properties.
Further, in step S3:
Extracting the statistical features includes: computing the main-peak intervals of the waveform signal formed in S2 as the inter-beat intervals (IBI), and computing the mean and standard deviation of the IBI; computing the first-order difference sequence of the IBI and adding its mean and standard deviation to the feature engineering.
Extracting the pulse waveform features includes: locating the secondary wave peaks of the waveform signal formed in S2, computing the sequence of time differences between adjacent main and secondary peaks, and adding the mean of this sequence to the feature engineering.
Extracting the nonlinear features includes: computing the order-2 approximate entropy of the waveform signal obtained in S2 and adding it to the feature engineering.
Extracting the wavelet transform features includes: performing a 9-level wavelet decomposition of the waveform signal formed in S2 with mother wavelet "db1", and adding the ratio of the sum of squared wavelet coefficients at each level to the sum of squares of all coefficients to the feature engineering.
Further, the formula of the wavelet decomposition is:

WT_f(a, τ) = (1/√a) ∫ f(t) ψ((t − τ)/a) dt

where WT_f(a, τ) is the decomposition result, a is the transform scale, τ is the time-domain offset, t is time, ψ(t) is the fundamental (mother) wavelet function, and f(t) is the waveform signal formed in step S2.
Further, in step S4, a time window of length W seconds is set over a data segment of length T seconds and slid across the segment with a step of s seconds, where 0 < s < W < T and T does not exceed the total data length; the statistical features, pulse waveform features, nonlinear features and wavelet transform features described above are extracted from the data in each time window and finally spliced into a two-dimensional feature map of size ((T − W)/s) × 15.
Further, step S5 includes:
S51, convolving the 10 × 15 feature map with 10 two-dimensional convolution kernels of size 2 × 15 and stride 2; each convolved feature map is 5 × 1, and there are 10 such maps;
S52, splicing the 10 convolved feature maps into a 5 × 10 map and convolving it with kernels of size 1 × 10; this layer has 10 kernels in total;
S53, splicing the 10 convolved feature maps into a 5 × 10 map and convolving it with kernels of size 3 × 10; this layer has 10 kernels in total;
S54, flattening all feature maps obtained in S51, S52 and S53 into one-dimensional vectors, applying batch normalization to each, feeding the processed vectors into a fully connected layer for dimensionality reduction, and feeding the reduced data into a Softmax classification function.
Further, the specific training process in step S6 is: process the public WESAD dataset to obtain feature maps with stress-emotion labels, and feed them into the model for repeated training until the best classification performance is obtained; on this basis, establish a standard experimental scene based on the Chinese standard video material library (CEVS) and standard psychological paradigms, collect and label pulse data of age-appropriate Chinese residents, and feed the data into model training until classification performance is optimal.
Compared with existing stress recognition techniques, the method has the following beneficial effects:
(1) Conventional stress recognition techniques generally rely on a comprehensive analysis of multiple physiological signals: some depend on EEG, ECG and other signals with strict acquisition requirements that are hard to embed in wearable devices; others rely on easily disturbed signals such as skin conductance. The present method uses only pulse signals collected by a wearable device; a wristwatch-style device can acquire the signal with a photoelectric sensor, so it does not affect the wearer's daily life, is robust to interference, and has high reliability.
(2) The traditional stress recognition pipeline is usually raw data → assorted features → classification result, which means the algorithm essentially permutes and combines features without treating stress as a human emotion with continuity and stages. This application proposes a sliding-time-window feature extraction method that builds a feature map with time-frequency-domain characteristics and fully exploits the temporal characteristics of the stress emotion.
(3) A convolutional neural network is introduced. Its two-dimensional convolution kernels have a nonzero stride and hence a receptive field that, reflected on the feature map, spans the time dimension. The kernels integrate the time-frequency-domain features, and feature aggregation combines them, providing the model with information across multiple time scales and improving recognition accuracy.
(4) The proposed convolutional neural network is trained and tested both on the international public WESAD database and on a self-collected experimental database, improving accuracy while improving the model's generalization ability. With the trained model, the method can analyze newly acquired pulse signal samples, judge the subject's current emotion, and assess the wearer's stress state in real time.
(5) The method accurately identifies the user's current stress state with low computational complexity, supports real-time recognition, and can help users discover their psychological problems in time.
(6) The method also serves as a reference for other emotion recognition problems, can be applied to recognition tasks involving other physiological signals and emotion labels, and has broad application prospects and deep research value.
Drawings
Fig. 1 is an overall flowchart of a pulse signal pressure identification method based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a step of establishing a two-dimensional feature map in the embodiment of the present invention.
FIG. 3 is a diagram illustrating a convolutional neural network structure according to an embodiment of the present invention.
FIG. 4 is a graph of a standard pulse waveform.
Detailed Description
The present invention will be described in detail with reference to specific examples. It should be understood that these embodiments are described merely to enable those skilled in the art to better understand and to implement the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
As shown in fig. 1, this embodiment provides a deep-learning-based pulse signal stress recognition method built on pulse signal data acquired by a wearable device. The overall process divides into two parts: data preprocessing and neural-network recognition. Preprocessing comprises data filtering, feature extraction and construction of the two-dimensional feature map, where feature extraction covers statistical features, pulse waveform features, nonlinear features and wavelet transform features. The neural-network part comprises a time-frequency-domain convolution module, a feature aggregation module and a classification module. To obtain model parameters with stress recognition capability, the method is trained on a public database and a self-built database.
The pulse signal pressure identification method based on deep learning provided by the embodiment comprises the following steps:
s1, acquiring pulse signals through wearable equipment worn on the wrist of the user;
s2, performing empirical mode decomposition on the original data of the pulse signals, and removing baseline drift to obtain waveform signals without baseline drift;
s3, establishing a feature project, and extracting statistical features, pulse waveform features, nonlinear features and wavelet transformation features from the waveform signals processed in S2 through a feature extraction module;
s4, setting a sliding time window, sequentially translating according to the time sequence of the data, extracting the feature vectors of the data in the time window, and finally splicing all the feature vectors into a two-dimensional feature map according to the time dimension;
s5, inputting the two-dimensional characteristic map into the convolutional neural network by using the convolutional neural network, and identifying the stress emotion of the user;
S6, repeatedly training and optimizing the convolutional neural network with a loss function to build a complete real-time stress recognition model. This embodiment uses the focal loss, whose mathematical form is

FL(p_t) = −α_t (1 − p_t)^γ log(p_t)

where t is the classification category (here: stress, neutral, pleasure), p_t is the class-t probability output after the two-dimensional feature map of S4 is processed by the neural network of S5, α_t is an experimentally determined class-weighting coefficient, and γ is an enhancement (focusing) factor set by the model trainer, here set to 2.
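As an illustrative sketch (not code from the patent), the focal loss above can be written directly in NumPy; the function name and default arguments are chosen here for illustration:

```python
import numpy as np

def focal_loss(p_t, alpha_t=1.0, gamma=2.0):
    """Focal loss for one sample: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p_t is the predicted probability of the true class t, alpha_t a
    class-weighting coefficient, and gamma the focusing factor (2 here).
    """
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

The factor (1 − p_t)^γ shrinks the loss for well-classified samples, so training concentrates on hard examples; with γ = 0 and α_t = 1 the expression reduces to the ordinary cross-entropy term.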
The raw data are pulse signals acquired by the wearable device, including but not limited to blood-volume change waveforms obtained by illuminating the wrist artery with a photoelectric device and collecting the reflected light.
The data filtering specifically analyzes the modes of the original pulse signal through empirical mode decomposition (EMD), decomposes it into the signal's set of eigenmode sequences, and adds all eigenmode sequences to remove baseline drift caused by factors such as respiration. The specific steps are:
a) finding out all maximum value points of the original data sequence X [ n ] obtained in the step S1, and fitting by using a cubic spline interpolation function to form an upper envelope curve of the original data; similarly, all minimum points are found, and a lower envelope curve of the data is formed by fitting a cubic spline interpolation function.
b) The average value of the upper envelope line and the lower envelope line is recorded as AVG, and the average envelope AVG is subtracted from the original data sequence X [ n ] to obtain a new data sequence h.
c) If h still has negative local maxima or positive local minima, repeat step b).
d) Add all the eigenmode functions to obtain a waveform signal free of baseline drift.
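Steps a) through d) can be sketched as follows. This is a simplified illustration, not the patent's implementation: linear interpolation (`np.interp`) stands in for the cubic splines of step a), and a fixed iteration cap replaces the full stopping criterion of step c):

```python
import numpy as np

def local_extrema(x):
    """Indices of strict local maxima and minima of a 1-D sequence."""
    idx = np.arange(1, len(x) - 1)
    maxima = idx[(x[idx] > x[idx - 1]) & (x[idx] > x[idx + 1])]
    minima = idx[(x[idx] < x[idx - 1]) & (x[idx] < x[idx + 1])]
    return maxima, minima

def remove_baseline(x, n_sift=10):
    """Sift-style baseline removal: repeatedly subtract the mean envelope."""
    t = np.arange(len(x))
    h = np.asarray(x, float).copy()
    for _ in range(n_sift):
        maxima, minima = local_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(t, maxima, h[maxima])   # step a): upper envelope
        lower = np.interp(t, minima, h[minima])   # step a): lower envelope
        h = h - (upper + lower) / 2.0             # step b): subtract mean envelope
    return h
```

On a synthetic pulse-like signal with an added linear drift, the sifting loop strips most of the slowly varying baseline while preserving the oscillatory component.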
The feature extraction mainly extracts the following features: statistical features, pulse waveform features, nonlinear features, and wavelet transform features.
The statistical features comprise the mean and standard deviation of the pulse peak interval (IBI) and of its first-order difference sequence. Specifically, a difference method locates the local maxima of the waveform; 0.7 × the maximum value then serves as a threshold to reject the interfering dicrotic-wave points, so that the main peak points of the pulse waveform are extracted. The time coordinates of all peak points form a sequence whose successive differences give the inter-beat intervals (IBI), from which the mean and standard deviation are extracted; the first-order difference of the IBI is then computed, and the mean and standard deviation of that difference sequence are added to the feature engineering.
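A minimal sketch of this IBI feature extraction, using simple difference-based peak picking and the 0.7 × maximum threshold (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def ibi_features(signal, fs):
    """Mean/std of the inter-beat intervals (IBI) and of their first-order
    difference, after rejecting sub-threshold (dicrotic) peaks."""
    signal = np.asarray(signal, float)
    thresh = 0.7 * np.max(signal)           # 0.7 x maximum rejects dicrotic peaks
    idx = np.arange(1, len(signal) - 1)
    peaks = idx[(signal[idx] > signal[idx - 1])
                & (signal[idx] >= signal[idx + 1])
                & (signal[idx] > thresh)]
    ibi = np.diff(peaks) / fs               # seconds between main peaks
    d1 = np.diff(ibi)                       # first-order difference sequence
    return ibi.mean(), ibi.std(), d1.mean(), d1.std()
```

On a synthetic signal with one main peak every 0.8 s at 100 Hz, the IBI mean comes out at 0.8 s with zero spread.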
The pulse waveform feature is the mean time difference between the main peak and the secondary peak of the pulse. Specifically, the waveform signal formed in step S2 is segmented according to the standard pulse waveform, the time difference between the main and secondary peaks is computed for each segment, and the mean of these differences is taken. The standard pulse waveform is shown in fig. 4, in which 1 denotes the main wave, 2 the tidal wave, 3 the dicrotic notch, and 4 the dicrotic wave.
The nonlinear feature is the order-2 approximate entropy of the pulse data. Specifically, the approximate entropy of the waveform signal formed in S2 is computed with order 2 and tolerance 0.5 and added to the feature engineering.
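The order-2 approximate entropy can be sketched from its standard definition (embedding dimension m = 2, tolerance r = 0.5). This is a generic ApEn implementation for illustration, not code from the patent:

```python
import numpy as np

def approx_entropy(x, m=2, r=0.5):
    """Approximate entropy ApEn(m, r): phi(m) - phi(m + 1), where phi(m) is
    the mean log fraction of m-length embedded vectors within Chebyshev
    distance r of each vector (self-matches included)."""
    x = np.asarray(x, float)
    N = len(x)

    def phi(m):
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between all pairs of embedded vectors
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)
```

A strictly periodic sequence yields ApEn near zero, while random noise yields a clearly larger value, which is the regularity contrast the feature is meant to capture.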
The wavelet transform features are, after a 9-level wavelet decomposition of the signal with mother wavelet db1, the proportion of the sum of squared wavelet coefficients at each level to the sum of squares of all coefficients. Specifically, the formula of the wavelet decomposition is:

WT_f(a, τ) = (1/√a) ∫ f(t) ψ((t − τ)/a) dt

The sum of squared wavelet coefficients is computed for each level, divided by the total sum of squares of all coefficients to give that level's ratio, and the ratios are added to the feature engineering.
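Since db1 is the Haar wavelet, the per-level energy ratios can be sketched without a wavelet library. The hand-rolled Haar decomposition below is an illustration under that assumption, not the patent's implementation:

```python
import numpy as np

def haar_dwt_energy_ratios(x, levels=9):
    """Energy ratio of each level of a 'db1' (Haar) decomposition:
    sum of squared detail coefficients per level (plus the final
    approximation) divided by the total sum of squares."""
    approx = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        if len(approx) < 2:
            break
        n = len(approx) // 2 * 2                          # drop odd tail sample
        a = (approx[:n:2] + approx[1:n:2]) / np.sqrt(2)   # approximation coeffs
        d = (approx[:n:2] - approx[1:n:2]) / np.sqrt(2)   # detail coeffs
        energies.append(np.sum(d ** 2))
        approx = a
    energies.append(np.sum(approx ** 2))                  # final approximation
    total = sum(energies)
    return [e / total for e in energies]
```

Because the Haar transform is orthonormal, the ratios always sum to 1, which makes the feature a proper energy distribution across scales.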
As shown in fig. 2, the two-dimensional feature map is constructed by setting a sliding time window of fixed length over the waveform signal formed in S2, translating it sequentially along the time order of the data, extracting the feature vector of the data in each window, and finally splicing all feature vectors into a two-dimensional feature map along the time dimension.
Specifically, a time window of length W seconds is set over a data segment of length T seconds and slid across it with a step of s seconds, where 0 < s < W < T and T does not exceed the total data length. The statistical, pulse waveform, nonlinear and wavelet transform features are computed for the data in each time window and finally spliced into a two-dimensional feature map. Experiments show the optimal values of T, W and s are 20, 10 and 1 respectively, giving a two-dimensional feature map of size 10 × 15.
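The sliding-window construction can be sketched as follows; `extract` is a placeholder for the 15-dimensional per-window feature extractor (the real one would combine the statistical, waveform, nonlinear and wavelet features described above):

```python
import numpy as np

def build_feature_map(signal, fs, T=20, W=10, s=1, extract=None):
    """Slide a W-second window over a T-second segment with step s seconds
    and stack one 15-dim feature vector per window into a ((T-W)/s) x 15 map."""
    if extract is None:
        extract = lambda window: np.zeros(15)   # placeholder feature extractor
    rows = []
    for start in range(0, T - W, s):            # (T - W) / s window positions
        window = signal[start * fs:(start + W) * fs]
        rows.append(extract(window))
    return np.stack(rows)
```

With T = 20, W = 10 and s = 1 this yields the 10 × 15 map stated in the text.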
As shown in fig. 3, the structure diagram of a convolutional neural network provided in the embodiment of the present invention includes the following 4-layer structure:
1) the convolutional layer 1 comprises 10 two-dimensional convolutional kernels with the size of 2 x 15, the step value of the convolutional kernels is 2, the convolutional layer uses a PReLU as an activation function, the size of a feature graph after convolution is 5 x 1, and the number of the feature graphs is 10;
2) a convolution layer 2 including 10 two-dimensional convolution kernels of 1 × 10 size, the step value of the convolution kernel being 1, the convolution layer using the PReLU as an activation function;
3) a convolution layer 3 including 10 two-dimensional convolution kernels of 3 × 10 size, the step value of the convolution kernel being 2, the convolution layer using the PReLU as an activation function;
4) a fully connected layer: the output feature maps of convolutional layers 1, 2 and 3 are flattened and concatenated into a one-dimensional feature vector of length 120, which is fed into the fully connected layer; a 3-dimensional feature vector is output and classified by a softmax function.
The activation function (PReLU) is of the form:

f(x) = x for x > 0, and f(x) = a·x for x ≤ 0

where a is a learnable coefficient and x is the input of the activation function.
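The layer shapes described above (10 × 15 input, ten 5 × 1 maps, spliced 5 × 10 maps, flattened length-120 vector) can be checked with a minimal NumPy walk-through. Kernels are random and the PReLU, batch normalization and fully connected layer are omitted, so this only verifies the dimensional bookkeeping, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_valid(x, k, stride):
    """Valid cross-correlation of x (L, C) with a full-width kernel k (kL, C)."""
    out_len = (x.shape[0] - k.shape[0]) // stride + 1
    return np.array([np.sum(x[i * stride:i * stride + k.shape[0]] * k)
                     for i in range(out_len)])

fmap = rng.standard_normal((10, 15))                 # 10 x 15 input feature map

# Layer 1: ten 2x15 kernels, stride 2 -> ten 5x1 maps, stacked as (5, 10)
k1 = rng.standard_normal((10, 2, 15))
out1 = np.stack([conv_valid(fmap, k, 2) for k in k1], axis=1)

# Layer 2: ten 1x10 kernels, stride 1, on the spliced (5, 10) maps -> (5, 10)
k2 = rng.standard_normal((10, 1, 10))
out2 = np.stack([conv_valid(out1, k, 1) for k in k2], axis=1)

# Layer 3: ten 3x10 kernels, stride 2, on the spliced (5, 10) maps -> (2, 10)
k3 = rng.standard_normal((10, 3, 10))
out3 = np.stack([conv_valid(out1, k, 2) for k in k3], axis=1)

# Flatten and concatenate: 50 + 50 + 20 = 120, matching the fully connected input
flat = np.concatenate([out1.ravel(), out2.ravel(), out3.ravel()])
```

The 50 + 50 + 20 split shows why the fully connected layer's input length is exactly 120.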
The specific training process is: process the public WESAD dataset to obtain feature maps with stress-emotion labels, and feed them into the model for repeated training until the best classification performance is obtained; on this basis, establish a standard experimental scene based on the Chinese standard video material library (CEVS) and standard psychological paradigms, collect and label pulse data of age-appropriate Chinese residents, and feed the data into model training until classification performance is optimal.
The pressure identification method designed by the embodiment of the invention comprises the following specific implementation steps:
step 1: the wearable equipment is worn according to a specified posture, and the pulse data of the testee is collected according to the use requirement;
step 2: filtering the acquired pulse data to remove baseline drift;
and step 3: setting a sliding time window, and constructing a two-dimensional characteristic map for the acquired pulse data based on the sliding time window, wherein the characteristics comprise statistical characteristics, nonlinear characteristics, waveform characteristics and wavelet transformation characteristics;
and 4, step 4: inputting the characteristic diagram into the trained neural network model to obtain the current pressure state of the wearer;
and 5: according to the feedback of the user, corresponding parameters of the neural network are adjusted and trained to form a neural network algorithm with higher generalization, and higher accuracy is obtained.
The invention also serves as a reference for related problems in the same field; it can be expanded and extended and applied to other technical schemes involving emotion recognition and analysis, and has very broad application prospects.
The foregoing has described specific embodiments of the present invention. It should be understood that the invention is not limited to the specific embodiments above; those skilled in the art may make various changes or modifications within the scope of the appended claims without departing from the spirit of the invention.

Claims (8)

1. A pulse signal pressure identification method based on deep learning is characterized by comprising the following steps:
S1, acquiring pulse signals through a wearable device worn on the user's wrist;
S2, performing empirical mode decomposition on the raw pulse signal data and removing baseline drift to obtain waveform signals free of baseline drift;
S3, establishing feature engineering, and extracting statistical features, pulse waveform features, nonlinear features and wavelet transform features from the drift-free waveform signals through a feature extraction module;
S4, setting a sliding time window, translating it sequentially along the time order of the data, extracting feature vectors of the data in each window, and finally splicing all feature vectors into a two-dimensional feature map along the time dimension;
S5, inputting the two-dimensional feature map into a convolutional neural network to recognize the user's stress emotion;
S6, repeatedly training and optimizing the convolutional neural network with a loss function to build a complete real-time stress recognition model.
2. The pulse signal pressure identification method based on deep learning of claim 1, wherein: in step S1, the wearable device collects the waveform of the changing light transmittance of the wrist artery, from which the pulse signal is extracted.
3. The pulse signal pressure identification method based on deep learning of claim 1, wherein: in step S2, the raw data is decomposed through empirical mode decomposition, and all eigenmode functions of the pulse signal are extracted and summed, so as to remove the baseline drift with envelope properties.
4. The pulse signal pressure identification method based on deep learning of claim 1, wherein: in the step S3, in the step S,
extracting the statistical features, including: calculating a main peak interval as a heartbeat interval IBI for the waveform signal formed in S2, and calculating an average value and a standard deviation of the IBI; calculating a first-order difference sequence for the IBI, extracting a standard deviation and a mean value of the first-order difference sequence, and adding the standard deviation and the mean value into the characteristic engineering;
extracting the pulse waveform features, including: calculating the secondary wave peak point of the waveform signal formed in S2, calculating the interpolation sequence of the adjacent main and secondary wave peak points, and adding the average value of the sequence into the characteristic engineering;
extracting the nonlinear features, including: calculating 2-order approximate entropy based on the waveform signal obtained in S2 and adding characteristic engineering;
extracting the wavelet transformation characteristics comprises the following steps: the waveform signal formed in S2 is subjected to 9-layer wavelet decomposition with the fundamental function "db 1", and the ratio of the square sum of wavelet coefficients of each layer to the square sum of all coefficients is added to the feature engineering.
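The IBI statistics of the first branch can be sketched with SciPy's `find_peaks`; the 0.4-second minimum peak distance used below is an assumed physiological constraint, not taken from the patent.

```python
import numpy as np
from scipy.signal import find_peaks

def ibi_features(signal, fs):
    # Main-peak intervals (IBI) and their first-order difference statistics,
    # as described for the statistical-feature branch of claim 4.
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs))  # >= 0.4 s apart (assumed)
    ibi = np.diff(peaks) / fs   # inter-beat intervals in seconds
    d1 = np.diff(ibi)           # first-order difference sequence
    return {
        "ibi_mean": ibi.mean(),
        "ibi_std": ibi.std(),
        "d1_mean": d1.mean(),
        "d1_std": d1.std(),
    }
```

The IBI mean and standard deviation track heart rate and heart-rate variability, both of which are classic correlates of stress.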
5. The pulse signal pressure identification method based on deep learning of claim 4, wherein the formula of the wavelet decomposition is:

WT_f(a, τ) = (1/√a) ∫ f(t) ψ*((t − τ)/a) dt

where WT_f(a, τ) is the decomposition result, a is the transform scale, τ is the time-domain offset, t is time, ψ(t) is the wavelet basis (decomposition) function, and f(t) is the waveform signal formed in step S2.
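Because "db1" is the Haar wavelet, the per-level energy ratios of claims 4 and 5 can be sketched without a wavelet library. The function name and the odd-length padding rule below are assumptions for illustration.

```python
import numpy as np

def haar_energy_ratios(x, levels=9):
    # 9-level "db1" (Haar) wavelet decomposition: for each level keep the
    # ratio of that level's detail-coefficient energy to the total energy.
    x = np.asarray(x, dtype=float)
    details, approx = [], x
    for _ in range(levels):
        if len(approx) < 2:
            break
        if len(approx) % 2:                          # pad odd length (assumed rule)
            approx = np.append(approx, approx[-1])
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # approximation coefficients
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # detail coefficients
        details.append(d)
        approx = a
    energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
    return energies / energies.sum()
```

Because the Haar transform is orthonormal, the ratios always sum to one, giving a per-level energy distribution of the pulse waveform.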
6. The pulse signal pressure identification method based on deep learning of claim 1, wherein in step S4, for a data segment of length T seconds, a time window of length W seconds is set and slid across the segment with a step of S seconds, where 0 < S < W < T and T is smaller than the total length of the data; the statistical features, pulse waveform features, nonlinear features and wavelet transform features are extracted from the data in each time window and finally spliced into a two-dimensional feature map of size (T − W)/S × 15.
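The windowing of claim 6 can be sketched as follows, with the `extract` callable standing in for the 15-dimensional feature extractor; integer fs, T, W and S are assumed for simplicity.

```python
import numpy as np

def build_feature_map(data, fs, T, W, S, extract):
    # Slide a W-second window over a T-second segment in S-second steps and
    # stack the per-window feature vectors into a 2-D map of height (T-W)/S.
    n_windows = (T - W) // S
    rows = []
    for i in range(n_windows):
        start = i * S * fs
        rows.append(extract(data[start:start + W * fs]))
    return np.stack(rows)  # shape: n_windows x n_features
```

With the patent's 15 features per window, the resulting map has shape (T − W)/S × 15, matching the claim.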
7. The pulse signal pressure identification method based on deep learning of claim 1, wherein: step S5 includes:
S51, performing two-dimensional convolution on the 10 × 15 feature map with 10 two-dimensional convolution kernels of size 2 × 15 and a stride of 2, so that each convolved feature map is 5 × 1 and there are 10 such maps;
S52, splicing the 10 convolved feature maps into a feature map of size 5 × 10 and convolving it with kernels of size 1 × 10, this layer having 10 convolution kernels in total;
S53, splicing the 10 convolved feature maps into a feature map of size 5 × 10 and convolving it with kernels of size 3 × 10, this layer having 10 convolution kernels in total;
and S54, flattening all the feature maps obtained in S51, S52 and S53 into one-dimensional vectors, applying batch normalization to each, inputting the normalized vectors into a fully connected layer for dimensionality reduction, and inputting the reduced data into a Softmax classification function for classification.
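The shape arithmetic of S51 can be checked with a plain "valid" two-dimensional cross-correlation; this is a generic sketch for verifying the claimed output sizes, not the patent's network code.

```python
import numpy as np

def conv2d_valid(fmap, kernel, stride=(1, 1)):
    # Plain 'valid' 2-D cross-correlation, enough to verify the claimed
    # output shapes of the convolution layers in S51-S53.
    H, W = fmap.shape
    kh, kw = kernel.shape
    sh, sw = stride
    return np.array([[np.sum(fmap[i:i + kh, j:j + kw] * kernel)
                      for j in range(0, W - kw + 1, sw)]
                     for i in range(0, H - kh + 1, sh)])
```

A 2 × 15 kernel spanning the full feature dimension with a stride of 2 along time turns the 10 × 15 input into a 5 × 1 map, as stated in S51: output height = (10 − 2)/2 + 1 = 5, output width = 15 − 15 + 1 = 1.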
8. The pulse signal pressure identification method based on deep learning of claim 1, wherein the training in step S6 specifically comprises: processing the public WESAD data set to obtain feature maps with stress emotion labels, and feeding them into the model for repeated training until the best classification performance is obtained; on this basis, establishing a standard experimental scenario based on a Chinese standard video material library and a standard psychological paradigm, collecting and labeling pulse data of Chinese residents of the appropriate ages, and feeding the data into model training.
CN202010206857.4A 2020-03-23 2020-03-23 Pulse signal pressure identification method based on deep learning Active CN111407243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010206857.4A CN111407243B (en) 2020-03-23 2020-03-23 Pulse signal pressure identification method based on deep learning

Publications (2)

Publication Number Publication Date
CN111407243A CN111407243A (en) 2020-07-14
CN111407243B (en) 2021-05-14

Family

ID=71486171

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364710B (en) * 2020-10-20 2024-04-05 西安理工大学 Plant electric signal classification and identification method based on deep learning algorithm
CN112712022B (en) * 2020-12-29 2023-05-23 华南理工大学 Pressure detection method, system, device and storage medium based on image recognition
CN112998652B (en) * 2021-02-23 2022-07-19 华南理工大学 Photoelectric volume pulse wave pressure identification method and system
CN112869717B (en) * 2021-02-25 2023-02-24 佛山科学技术学院 Pulse feature recognition and classification system and method based on BL-CNN
CN113057633B (en) * 2021-03-26 2022-11-01 华南理工大学 Multi-modal emotional stress recognition method and device, computer equipment and storage medium
CN113180670B (en) * 2021-05-24 2023-03-21 北京测态培元科技有限公司 Method for identifying mental state of depression patient based on finger pulse signals
CN113397500B (en) * 2021-08-03 2022-06-28 华东师范大学 Pulse monitoring device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103956028A (en) * 2014-04-23 2014-07-30 山东大学 Automobile multielement driving safety protection method
CN109276241A (en) * 2018-11-28 2019-01-29 深圳还是威健康科技有限公司 A kind of Pressure identification method and apparatus
CN109498041A (en) * 2019-01-15 2019-03-22 吉林大学 Driver road anger state identification method based on brain electricity and pulse information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110245628A1 (en) * 2010-03-31 2011-10-06 Nellcor Puritan Bennett Llc Photoplethysmograph Filtering Using Empirical Mode Decomposition
JP6610661B2 (en) * 2015-04-23 2019-11-27 ソニー株式会社 Information processing apparatus, control method, and program
CN105496371A (en) * 2015-12-21 2016-04-20 中国石油大学(华东) Method for emotion monitoring of call center service staff

Similar Documents

Publication Publication Date Title
CN111407243B (en) Pulse signal pressure identification method based on deep learning
Fan et al. Multiscaled fusion of deep convolutional neural networks for screening atrial fibrillation from single lead short ECG recordings
Tivatansakul et al. Emotion recognition using ECG signals with local pattern description methods
CN110472649B (en) Electroencephalogram emotion classification method and system based on multi-scale analysis and integrated tree model
CN113729707A (en) FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
CN113128552A (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
Zhao et al. ECG identification based on matching pursuit
Yao et al. Interpretation of electrocardiogram heartbeat by CNN and GRU
CN114145745B (en) Graph-based multitasking self-supervision emotion recognition method
CN115211858A (en) Emotion recognition method and system based on deep learning and storable medium
CN115238796A (en) Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
Pan et al. Recognition of human inner emotion based on two-stage FCA-ReliefF feature optimization
Samal et al. Ensemble median empirical mode decomposition for emotion recognition using EEG signal
CN111053552A (en) QRS wave detection method based on deep learning
CN113128384B (en) Brain-computer interface software key technical method of cerebral apoplexy rehabilitation system based on deep learning
Rahuja et al. A comparative analysis of deep neural network models using transfer learning for electrocardiogram signal classification
Wan et al. Research on Identification Algorithm Based on ECG Signal and Improved Convolutional Neural Network
CN116602676A (en) Electroencephalogram emotion recognition method and system based on multi-feature fusion and CLSTN
Kim et al. Development of person-independent emotion recognition system based on multiple physiological signals
Cene et al. Upper-limb movement classification through logistic regression sEMG signal processing
Zhao et al. GTSception: a deep learning eeg emotion recognition model based on fusion of global, time domain and frequency domain feature extraction
CN115270847A (en) Design decision electroencephalogram recognition method based on wavelet packet decomposition and convolutional neural network
CN114343679A (en) Surface electromyogram signal upper limb action recognition method and system based on transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant