CN111222498A - An identification method based on photoplethysmography - Google Patents

An identification method based on photoplethysmography

Info

Publication number
CN111222498A
CN111222498A
Authority
CN
China
Prior art keywords
size
layer
photoplethysmography
lstm
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010194774.8A
Other languages
Chinese (zh)
Inventor
陈真诚
程鹏
梁永波
朱健铭
李文湛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202010194774.8A priority Critical patent/CN111222498A/en
Publication of CN111222498A publication Critical patent/CN111222498A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15 Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract



The invention discloses an identification method based on photoplethysmography, comprising the following steps: 1) acquiring training-group data and test-group data; 2) dividing all photoplethysmographic signal data in the training group and the test group into segments each containing multiple photoplethysmographic waves; 3) converting all segments into time-frequency feature energy maps using the continuous wavelet transform; 4) building a neural network model combining a CNN and an LSTM; 5) feeding the training-group image data into the combined CNN-LSTM model for training; 6) classifying the test-group image data with the trained model, the classification result being the identification result, and finally evaluating the trained combined CNN-LSTM model. The method offers a high safety factor in application, requires no manual feature extraction, and achieves accurate and stable recognition; it can be used for personal identification in government agencies, financial institutions and other fields.


Description

Identity recognition method based on photoplethysmography
Technical Field
The invention relates to the interdisciplinary field of information processing and computer science, in particular to an identity recognition method based on photoplethysmographic (PPG) signals, which can serve as a means of protecting personal information security in fields such as government institutions and financial institutions.
Background
As society develops, security problems become more prominent. Traditional identification methods that rely on memorizing complex passwords or carrying electronic tokens are losing reliability and practicality, which makes the demand for biometric technology ever greater. Systems such as company confidential systems, financial transactions, computer networks and access-control systems for secure areas are still identified and authorized by means of identification cards or passwords. Such systems are not sufficiently secure, because identification-card or password information is easily stolen or forgotten. A biometric system identifies people by their individual physiological signals and behavioral characteristics; since these are unique to each person, biometric identification provides greater secrecy and security. Methods that use human physiological signals and behavioral features such as fingerprints, faces, voices, electroencephalograms and electrocardiograms for identification are becoming increasingly popular. However, fingerprints can be copied by various means such as the powder and magnetic-powder methods, face recognition can be deceived by forged moving pictures, voices can be imitated, and methods based on electroencephalographic or electrocardiographic signals require professional acquisition equipment and therefore cannot be widely used.
Photoplethysmography (PPG) obtains the pulse wave from measurements at the fingertip, wrist or earlobe. PPG is a non-invasive electro-optical method that obtains information about the change in blood-flow volume in the vessels by probing a part of the body close to the skin. The photoplethysmographic signal is an inherent physiological signal of the human body; it is difficult to copy or imitate, highly secure, and simple and convenient to acquire. Most existing PPG-based identification methods require manual feature extraction: the process is complicated, the features differ greatly between human bodies, and the generalization capability is low.
Time-frequency analysis lets a user observe the energy density and intensity of a signal in the time domain and frequency domain simultaneously; combining time and frequency allows the signal to be processed fully. The Continuous Wavelet Transform (CWT) is a time-frequency analysis technique suitable for processing non-stationary, weak physiological signals dominated by low-frequency components, such as the photoplethysmographic pulse wave, and the time-frequency feature energy map it produces retains the information in the original signal well. In addition, compared with a one-dimensional signal, a two-dimensional image allows the convolutional and pooling layers in the model to automatically ignore small noise components in the image, avoiding the influence that noise in a one-dimensional signal would have on the model's recognition accuracy and sensitivity.
The Convolutional Neural Network (CNN) is an algorithm well suited to processing large amounts of image information; it mainly comprises an input layer, convolutional layers, pooling layers, fully connected layers and a Softmax layer. A CNN retains spatial information, and the convolutional structure formed by the convolutional and pooling layers effectively alleviates overfitting of the model. The Long Short-Term Memory network (LSTM) is a special type of Recurrent Neural Network (RNN) in which the state of each unit interacts with the others; it expresses the temporal dynamics of the data through its internal feedback state and can learn long-term dependencies, and combining a CNN with an LSTM can greatly improve classification performance. Most existing PPG-based identification methods rely on manually extracted features, which exposes the differences in PPG signals between individuals and reduces the model's recognition accuracy and generalization capability. A neural network model combining a CNN and an LSTM needs no manual feature extraction: in the process of continuously fitting a large amount of data and continuously optimizing the model parameters, it can learn the hidden, non-imitable deep features of the PPG signal, so the resulting model has a high safety factor and strong generalization capability in practical applications.
Disclosure of Invention
The invention aims to provide an identity recognition method based on photoplethysmography that overcomes the defects of the prior art. The method needs no manual feature extraction, greatly simplifies model fitting, and offers strong generalization capability, a high safety factor, and accurate, stable recognition.
The technical scheme for realizing the purpose of the invention is as follows:
an identity recognition method based on photoplethysmography comprises the following steps:
1) obtaining training-group data and test-group data: collecting the photoplethysmographic signals of n persons within a specified time period to form a training group; then randomly collecting the photoplethysmographic signals of about 3/10 of these n persons within another time period to form a test group, wherein the photoplethysmographic signal data in the training group and the test group carry the identity of each person, so classifying the data amounts to identity recognition;
2) dividing all photoplethysmographic signal data in the training-group and test-group data into segments each containing multiple photoplethysmographic waves;
3) converting all segments into time-frequency feature energy maps using the continuous wavelet transform;
4) building a neural network model combining a CNN and an LSTM;
5) feeding the training-group image data of step 3) into the combined CNN-LSTM neural network model for training;
6) classifying the test-group image data of step 3) with the combined CNN-LSTM model trained in step 5), the classification result being the identity recognition result, and finally evaluating the trained combined CNN-LSTM model.
The test group in step 1) may comprise 2/10 to 4/10 of the total number of participants, and as much photoplethysmographic data as possible should be collected from each person.
The segment length of the photoplethysmographic wave in step 2) ranges from 5 s to 20 s, where the number of sampling points in a segment equals the product of the sampling frequency and the segment length, and the time interval between sampling points is the reciprocal of the sampling frequency.
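The relationship between segment length, sampling frequency and sample count stated above can be checked with a short sketch (plain Python; the 125 Hz / 10 s values are those used in the embodiment below):

```python
# Samples per segment and sampling interval, per the relation in the text:
# n_samples = f_s * T_segment, and the interval between samples is 1 / f_s.
def segment_params(sampling_hz: float, segment_s: float):
    n_samples = int(sampling_hz * segment_s)  # sampling points in one PPG segment
    dt = 1.0 / sampling_hz                    # time between adjacent sampling points
    return n_samples, dt

# Example values from the embodiment: 125 Hz sampling, 10 s segments.
n, dt = segment_params(125, 10)
print(n, dt)  # 1250 samples, 0.008 s apart
```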
The mother wavelet function of the continuous wavelet transform in step 3) is any one of 'cgau8', 'haar', 'dB2', 'bior' and 'mor1'; the converted image has a pixel size of 1054x148 and is an RGB color image with 3 channels.
The process of building the combined CNN-LSTM neural network model in step 4) is as follows:
4.1 using the uniformly formatted time-frequency feature energy maps as input-layer data;
4.2 building the convolutional and pooling layers with the following parameters:
first layer: convolutional layer, 30 filters, kernel_size 3x3, strides 1x1;
second layer: max pooling layer, pool_size 2x2, strides 2x2;
third layer: convolutional layer, 60 filters, kernel_size 3x3, strides 1x1;
fourth layer: max pooling layer, pool_size 2x2, strides 2x2;
fifth layer: convolutional layer, 90 filters, kernel_size 3x3, strides 1x1;
sixth layer: max pooling layer, pool_size 2x2, strides 2x2;
seventh layer: convolutional layer, 120 filters, kernel_size 3x3, strides 1x1;
eighth layer: max pooling layer, pool_size 2x2, strides 2x2;
ninth layer: convolutional layer, 150 filters, kernel_size 3x3, strides 1x1;
tenth layer: max pooling layer, pool_size 2x2, strides 2x2;
where the activation function in each convolutional layer is the ReLU activation function;
4.3 building the fully connected and LSTM layers with the following parameters: the eleventh and twelfth layers are fully connected layers with 500 and 100 neurons respectively, both using the ReLU activation function; the thirteenth layer is an LSTM layer with output_dim 50; the fourteenth layer is a fully connected layer with n neurons, where n is the number of persons in the training group, using the Softmax activation function.
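The spatial size of the feature maps through the five conv/pool pairs can be traced with a small sketch (plain Python). Note the padding mode is an assumption: the patent gives only kernel_size, strides and pool_size, so 'valid' 3x3 convolutions (size shrinks by 2) and non-overlapping 2x2 pooling (size halves, rounded down) are assumed here.

```python
# Trace feature-map sizes through the conv/pool stack described above,
# under the assumed 'valid' convolution and stride-2 pooling behavior.
def trace_shapes(h, w, filters=(30, 60, 90, 120, 150)):
    shapes = []
    for f in filters:
        h, w = h - 2, w - 2    # 3x3 conv, stride 1x1, no padding (assumed)
        h, w = h // 2, w // 2  # 2x2 max pool, stride 2x2
        shapes.append((h, w, f))
    return shapes

# Input: the 1054x148 (x3 channels) time-frequency energy maps.
for s in trace_shapes(1054, 148):
    print(s)
```

Under these assumptions the final feature map before the fully connected layers is (31, 2, 150); with 'same' padding the spatial sizes would differ slightly.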
The evaluation in step 6) comprehensively assesses the trained combined CNN-LSTM neural network model using the loss function value and the accuracy.
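As a concrete illustration of the two evaluation indices, here is a minimal sketch (plain Python; the toy labels and probabilities are invented for illustration and are not from the patent's experiments):

```python
import math

# Accuracy: fraction of test segments assigned to the correct identity.
def accuracy(y_true, y_pred_labels):
    correct = sum(1 for t, p in zip(y_true, y_pred_labels) if t == p)
    return correct / len(y_true)

# Categorical cross-entropy loss: mean of -log(probability the Softmax
# layer assigned to the true class) over the samples.
def cross_entropy(y_true, y_prob):
    return -sum(math.log(p[t]) for t, p in zip(y_true, y_prob)) / len(y_true)

# Toy example: 3 identities, 4 test segments (illustrative values only).
y_true = [0, 1, 2, 1]
y_prob = [[0.7, 0.2, 0.1],
          [0.1, 0.8, 0.1],
          [0.2, 0.2, 0.6],
          [0.5, 0.4, 0.1]]
y_hat = [max(range(3), key=lambda c: p[c]) for p in y_prob]  # argmax per sample
print(accuracy(y_true, y_hat))  # 3 of 4 correct -> 0.75
```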
The technical scheme has the following advantages:
1. it adopts a deep-learning approach with a combined CNN-LSTM neural network model, needs no separate manual feature extraction and no preprocessing such as denoising of the collected photoplethysmographic signals, directly merges the feature-extraction and classification steps, and simplifies model fitting;
2. in the process of continuously fitting a large amount of data and continuously optimizing the model parameters, it can learn the hidden, non-imitable deep features of the photoplethysmographic signal, so the resulting model has a high safety factor and strong generalization capability in practical applications;
3. converting the one-dimensional photoplethysmographic signal into a time-frequency feature energy map allows small noise components in the map to be ignored; the maps are then fed into the combined CNN-LSTM neural network model to achieve accurate and stable recognition.
The method has a high safety factor in practical applications, needs no manual feature extraction, greatly simplifies model fitting, and offers strong generalization capability and accurate, stable recognition.
Drawings
FIG. 1 is a schematic flow chart of an embodiment;
FIG. 2 shows time-frequency feature energy maps of 10 s photoplethysmographic waves of the same person at different time periods in the embodiment;
FIG. 3 shows time-frequency feature energy maps of 10 s photoplethysmographic waves of different persons in the embodiment;
FIG. 4 is a diagram showing a neural network model combining CNN and LSTM in the embodiment;
FIG. 5 is a diagram illustrating the result of the photoplethysmography signal test in the embodiment.
In the figures, epoch is the number of iterations, loss and accuracy are the loss function value and accuracy of the simulation test, train_loss is the loss function value on the training data, val_loss is the loss function value on the test data, train_accuracy is the accuracy on the training data, and val_accuracy is the accuracy on the test data.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples, but the invention is not limited thereto.
Example (b):
referring to fig. 1, an identity recognition method based on photoplethysmography includes the following steps:
1) downloading the photoplethysmographic signal data of 80 subjects from the MIMIC-III public database to simulate acquiring photoplethysmographic signals from the human body; the first 60 minutes of the downloaded signals of the 80 persons form the training group, and the last 60 minutes of the signals of 32 persons randomly drawn from the 80 form the test group, wherein the photoplethysmographic signal data in the training group and the test group carry the identities of the persons, so classifying the data amounts to identity recognition;
2) dividing all photoplethysmographic signal data into segments each containing multiple photoplethysmographic waves, the segment length ranging from 5 s to 20 s, where the number of sampling points in a segment equals the product of the sampling frequency and the segment length; in this example the sampling frequency is 125 Hz and the segment length is 10 s, so one signal segment contains 1250 sampling points;
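The segmentation step of the embodiment can be sketched as follows (plain Python; the synthetic record stands in for a downloaded PPG signal, and non-overlapping segments are an assumption, since the patent does not say whether segments overlap):

```python
# Split a 1-D PPG record into consecutive, non-overlapping 10 s segments
# at 125 Hz (1250 samples each), discarding any incomplete tail segment.
def split_segments(signal, fs=125, seg_seconds=10):
    n = fs * seg_seconds  # 1250 samples per segment in this example
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

# A stand-in record of 60 s (7500 samples) yields 6 full segments.
record = [0.0] * (125 * 60)
segs = split_segments(record)
print(len(segs), len(segs[0]))  # 6 segments, 1250 samples each
```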
3) converting all segments into time-frequency feature energy maps using the continuous wavelet transform with the mother wavelet function 'cgau8'; the converted image has a pixel size of 1054x148 and is an RGB color image with 3 channels, as shown in FIGS. 2 and 3;
4) building the combined CNN-LSTM neural network model, as shown in FIG. 4; the process is as follows:
4.1 using the uniformly formatted time-frequency feature energy maps of size 1054x148x3 as input-layer data;
4.2 building the convolutional and pooling layers with the following parameters:
first layer: convolutional layer, 30 filters, kernel_size 3x3, strides 1x1;
second layer: max pooling layer, pool_size 2x2, strides 2x2;
third layer: convolutional layer, 60 filters, kernel_size 3x3, strides 1x1;
fourth layer: max pooling layer, pool_size 2x2, strides 2x2;
fifth layer: convolutional layer, 90 filters, kernel_size 3x3, strides 1x1;
sixth layer: max pooling layer, pool_size 2x2, strides 2x2;
seventh layer: convolutional layer, 120 filters, kernel_size 3x3, strides 1x1;
eighth layer: max pooling layer, pool_size 2x2, strides 2x2;
ninth layer: convolutional layer, 150 filters, kernel_size 3x3, strides 1x1;
tenth layer: max pooling layer, pool_size 2x2, strides 2x2;
where the activation function in each convolutional layer is the ReLU activation function;
4.3 building the fully connected and LSTM layers with the following parameters: the eleventh and twelfth layers are fully connected layers with 500 and 100 neurons respectively, both using the ReLU activation function; the thirteenth layer is an LSTM layer with output_dim 50; the fourteenth layer is a fully connected layer with 80 neurons, using the Softmax activation function;
5) feeding the training-group image data of step 3) into the combined CNN-LSTM neural network model for training; in this example all training-group images are fed into the model of step 4) for training, and the parameter model adjusted by training is saved;
6) classifying the test-group image data of step 3) with the combined CNN-LSTM model trained in step 5); the model automatically records and counts the classification results, from which evaluation indices such as accuracy and loss function value are computed, and these values are used to evaluate the model's classification performance.
As shown in FIG. 5, the combined CNN-LSTM neural network model trained in step 5) was subjected to a simulation test; in this example, once the number of iterations exceeds 7, the model's classification accuracy is essentially stable, and the accuracy of classifying the photoplethysmographic signals of different people is high.

Claims (5)

1. An identification method based on photoplethysmography, characterized by comprising the following steps:
1) obtaining training-group data and test-group data: collecting the photoplethysmographic signals of n persons within a specified time period to form a training group; then randomly collecting the photoplethysmographic signals of about 3/10 of these n persons within another time period to form a test group, wherein the photoplethysmographic signal data in the training group and the test group carry the identity of each person;
2) dividing all photoplethysmographic signal data in the training-group and test-group data into segments each containing multiple photoplethysmographic waves;
3) converting all segments into time-frequency feature energy maps using the continuous wavelet transform;
4) building a neural network model combining a CNN and an LSTM;
5) feeding the training-group image data of step 3) into the combined CNN-LSTM neural network model for training;
6) classifying the test-group image data of step 3) with the combined CNN-LSTM model trained in step 5), the classification result being the identification result, and finally evaluating the trained combined CNN-LSTM model.
2. The identification method based on photoplethysmography according to claim 1, wherein in step 1) the test group may account for 2/10 to 4/10 of the total number of participants.
3. The identification method based on photoplethysmography according to claim 1, wherein the segment length of the photoplethysmographic wave in step 2) ranges from 5 s to 20 s, the number of sampling points in a segment equals the product of the sampling frequency and the segment length, and the time interval between sampling points is the reciprocal of the sampling frequency.
4. The identification method based on photoplethysmography according to claim 1, wherein the mother wavelet function of the continuous wavelet transform in step 3) is any one of 'cgau8', 'haar', 'dB2', 'bior' and 'mor1'; the converted image has a pixel size of 1054x148 and is an RGB color image with 3 channels.
5. The identification method based on photoplethysmography according to claim 1, wherein the process of building the combined CNN-LSTM neural network model in step 4) is:
4.1 using the uniformly formatted time-frequency feature energy maps as input-layer data;
4.2 building the convolutional and pooling layers with the following parameters:
first layer: convolutional layer, 30 filters, kernel_size 3x3, strides 1x1;
second layer: max pooling layer, pool_size 2x2, strides 2x2;
third layer: convolutional layer, 60 filters, kernel_size 3x3, strides 1x1;
fourth layer: max pooling layer, pool_size 2x2, strides 2x2;
fifth layer: convolutional layer, 90 filters, kernel_size 3x3, strides 1x1;
sixth layer: max pooling layer, pool_size 2x2, strides 2x2;
seventh layer: convolutional layer, 120 filters, kernel_size 3x3, strides 1x1;
eighth layer: max pooling layer, pool_size 2x2, strides 2x2;
ninth layer: convolutional layer, 150 filters, kernel_size 3x3, strides 1x1;
tenth layer: max pooling layer, pool_size 2x2, strides 2x2;
where the activation function in each convolutional layer is the ReLU activation function;
4.3 building the fully connected and LSTM layers with the following parameters: the eleventh and twelfth layers are fully connected layers with 500 and 100 neurons respectively, both using the ReLU activation function; the thirteenth layer is an LSTM layer with output_dim 50; the fourteenth layer is a fully connected layer with n neurons, where n is the number of persons in the training group, using the Softmax activation function.
CN202010194774.8A 2020-03-19 2020-03-19 An identification method based on photoplethysmography Pending CN111222498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010194774.8A CN111222498A (en) 2020-03-19 2020-03-19 An identification method based on photoplethysmography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010194774.8A CN111222498A (en) 2020-03-19 2020-03-19 An identification method based on photoplethysmography

Publications (1)

Publication Number Publication Date
CN111222498A true CN111222498A (en) 2020-06-02

Family

ID=70830150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010194774.8A Pending CN111222498A (en) 2020-03-19 2020-03-19 An identification method based on photoplethysmography

Country Status (1)

Country Link
CN (1) CN111222498A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613431A (en) * 2020-12-28 2021-04-06 中北大学 Automatic identification method, system and device for leaked gas
CN113569685A (en) * 2021-07-20 2021-10-29 华中科技大学 Model establishment, fault diagnosis method and system for machine tool spindle bearing fault diagnosis
CN113892919A (en) * 2021-12-09 2022-01-07 季华实验室 Pulse feeling data acquisition method and device, electronic equipment and system
CN114098691A (en) * 2022-01-26 2022-03-01 之江实验室 Pulse wave identity authentication method, device and medium based on Gaussian mixture model
CN115830656A (en) * 2022-12-08 2023-03-21 辽宁科技大学 Identity recognition method and device based on pulse wave
CN116548941A (en) * 2023-04-20 2023-08-08 陕西智控方达科技有限公司 Heart rate detection method and device based on generation countermeasure network

Citations (8)

Publication number Priority date Publication date Assignee Title
US20110077484A1 (en) * 2009-09-30 2011-03-31 Nellcor Puritan Bennett Ireland Systems And Methods For Identifying Non-Corrupted Signal Segments For Use In Determining Physiological Parameters
CN106473750A (en) * 2016-10-08 2017-03-08 西安电子科技大学 Personal identification method based on photoplethysmographic optimal period waveform
CN107088069A (en) * 2017-03-29 2017-08-25 西安电子科技大学 Personal identification method based on human body PPG signal subsections
WO2018145377A1 (en) * 2017-02-07 2018-08-16 华为技术有限公司 User identity recognition method, apparatus and system
US20190133468A1 (en) * 2017-11-03 2019-05-09 Samsung Electronics Co., Ltd. Method and apparatus for high accuracy photoplethysmogram based atrial fibrillation detection using wearable device
CN110288018A (en) * 2019-06-24 2019-09-27 桂林电子科技大学 A WiFi Identity Recognition Method Integrating Deep Learning Models
US20190313915A1 (en) * 2015-06-14 2019-10-17 Facense Ltd. Posture-adjusted calculation of physiological signals
CN110458197A (en) * 2019-07-11 2019-11-15 启东市知微电子科技有限公司 Personal identification method and its system based on photoplethysmographic

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Zhencheng et al.: "Study on the correlation between the photoplethysmography signal and the electrocardiogram signal", 《生物医学工程研》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613431A (en) * 2020-12-28 2021-04-06 中北大学 Automatic identification method, system and device for leaked gas
CN112613431B (en) * 2020-12-28 2021-06-29 中北大学 Automatic identification method, system and device for leaked gas
CN113569685A (en) * 2021-07-20 2021-10-29 华中科技大学 Model establishment, fault diagnosis method and system for machine tool spindle bearing fault diagnosis
CN113892919A (en) * 2021-12-09 2022-01-07 季华实验室 Pulse feeling data acquisition method and device, electronic equipment and system
CN113892919B (en) * 2021-12-09 2022-04-22 季华实验室 Pulse feeling data acquisition method and device, electronic equipment and system
CN114098691A (en) * 2022-01-26 2022-03-01 之江实验室 Pulse wave identity authentication method, device and medium based on Gaussian mixture model
CN115830656A (en) * 2022-12-08 2023-03-21 辽宁科技大学 Identity recognition method and device based on pulse wave
CN115830656B (en) * 2022-12-08 2023-07-14 辽宁科技大学 Identification method and device based on pulse wave
CN116548941A (en) * 2023-04-20 2023-08-08 陕西智控方达科技有限公司 Heart rate detection method and device based on generation countermeasure network

Similar Documents

Publication Publication Date Title
CN111222498A (en) An identification method based on photoplethysmography
CN113017630B (en) Visual perception emotion recognition method
Edla et al. Classification of EEG data for human mental state analysis using Random Forest Classifier
CN108776788B (en) Brain wave-based identification method
CN109784023B (en) Steady-state visually evoked EEG identification method and system based on deep learning
CN110458197A (en) Personal identification method and its system based on photoplethysmographic
CN112949349B (en) Method and system for displaying pulse condition waveform in real time based on face video
Huang et al. Human identification with electroencephalogram (EEG) signal processing
Patil et al. A non-contact PPG biometric system based on deep neural network
Fawaz et al. Encoding rich frequencies for classification of stroke patients EEG signals
CN109620260A (en) Psychological condition recognition methods, equipment and storage medium
CN111797747A (en) Potential emotion recognition method based on EEG, BVP and micro-expression
Joshi et al. Deep learning based person authentication using hand radiographs: A forensic approach
CN118332261A (en) Brain wave imagination digital identification and identity authentication method and system based on transducer self-encoder
CN114611560B (en) SSVEP EEG signal classification method based on convolutional neural network
CN110192864B (en) Cross-domain electrocardiogram biological characteristic identity recognition method
CN114343636A (en) Emotion adjusting method and device
Zhang et al. ATGAN: attention-based temporal GAN for EEG data augmentation in personal identification
He et al. Emotion classification using EEG data in a brain-inspired spiking neural network
Li et al. Authentication study for brain-based computer interfaces using music stimulations
Guo et al. Brain visual image signal classification via hybrid dilation residual shrinkage network with spatio-temporal feature fusion
Nawas et al. K-NN classification of brain dominance
Yu et al. The use of synthetic finger vein images in deep learning pre-training
CN113951886A (en) A brain magnetic pattern generation system and lie detector decision-making system
Hamza-Lup et al. Attention patterns detection using brain computer interfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602