CN111222498A - Identity recognition method based on photoplethysmography - Google Patents

Identity recognition method based on photoplethysmography

Info

Publication number
CN111222498A
Authority
CN
China
Prior art keywords
size
layer
lstm
photoplethysmography
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010194774.8A
Other languages
Chinese (zh)
Inventor
陈真诚
程鹏
梁永波
朱健铭
李文湛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202010194774.8A
Publication of CN111222498A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/15 Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an identity recognition method based on photoplethysmographic (PPG) pulse waves, comprising the following steps: 1) obtaining training-group data and test-group data; 2) dividing all PPG signal data in the training and test groups into segments, each containing several pulse waves; 3) converting all segments into time-frequency energy maps using the continuous wavelet transform; 4) building a neural network model combining a CNN and an LSTM; 5) feeding the training-group picture data into the combined CNN-LSTM model for training; 6) classifying the test-group picture data with the trained model, the classification result being the identity recognition result, and finally evaluating the trained combined CNN-LSTM model. The method has a high safety factor in application, requires no manual feature extraction, gives a high and stable recognition accuracy, and can be used for personal identification in fields such as government and financial institutions.

Description

Identity recognition method based on photoplethysmography
Technical Field
The invention relates to the intersection of information processing and computer science, and in particular to an identity recognition method based on photoplethysmographic (PPG) pulse waves, which can serve as a means of protecting personal information security in fields such as government and financial institutions.
Background
With the development of modern society, security problems have become increasingly prominent. Traditional identity recognition methods that rely on memorizing complex passwords or carrying electronic tokens are losing reliability and practicality, which makes the demand for biometric technology ever greater. Systems such as corporate confidential systems, financial transactions, computer networks, and access control for secure areas still identify and authorize users by means of identification cards or passwords. Such systems are not sufficiently secure, because identification card or password information is easily stolen or forgotten. A biometric system identifies a person from individual physiological signals and behavioral characteristics; because these are unique to each individual, biometric identification provides greater secrecy and security. Methods in which human physiological signals and behavioral features such as fingerprints, faces, voice, electroencephalograms (EEG), and electrocardiograms (ECG) are used for identification are becoming increasingly popular. However, fingerprints can be copied by various means such as the powder and magnetic-powder methods, face recognition can be deceived by forged videos, voices can be imitated, and methods based on EEG or ECG signals require professional acquisition equipment and therefore cannot be widely used.
Photoplethysmography (PPG) is a non-invasive optical method that obtains information about blood-volume changes in the vessels by illuminating a part of the body close to the skin; PPG signals can be measured at the fingertip, wrist, or earlobe. The PPG signal is an inherent physiological signal of the human body: it is difficult to copy or imitate, offers high security, and is simple and convenient to acquire. However, most existing PPG-based identity recognition methods require manual feature extraction. The process is tedious, the features differ greatly between human bodies, and the resulting generalization ability is low.
Time-frequency analysis allows the energy density and intensity of a signal to be observed in the time and frequency domains simultaneously, so that the signal can be processed fully. The continuous wavelet transform (CWT) is such a time-frequency analysis technique and is well suited to non-stationary, weak physiological signals dominated by low-frequency components, such as the PPG pulse wave; the time-frequency energy map it produces retains the information of the original signal well. In addition, compared with the one-dimensional signal, a two-dimensional image allows the convolutional and pooling layers of the model to automatically suppress small noise components in the image, avoiding the impact that noise in the one-dimensional signal would have on recognition accuracy and sensitivity.
A convolutional neural network (CNN) is an algorithm well suited to processing large amounts of image data. It mainly comprises an input layer, convolutional layers, pooling layers, fully connected layers, and a Softmax layer; the CNN preserves spatial information, and the convolutional structure formed by the convolutional and pooling layers effectively alleviates overfitting of the mathematical model. The long short-term memory network (LSTM) is a special type of recurrent neural network (RNN) in which the state of each unit interacts with the others; it models the temporal dynamics of the data through its internal feedback state and can learn long-term dependencies, so combining a CNN with an LSTM can greatly improve the classification effect. Most existing PPG-based identity recognition methods rely on manually extracted features, which differ between individuals and therefore reduce the recognition accuracy and generalization ability of the model. A combined CNN-LSTM neural network model needs no manual feature extraction: by continuously fitting and optimizing the model parameters on a large amount of data, it can learn the hidden, hard-to-imitate deep features of the PPG signal, so the resulting model has a high safety factor and good generalization ability in practical applications.
Disclosure of Invention
The object of the invention is to provide an identity recognition method based on photoplethysmography that addresses the defects of the prior art. The method requires no manual feature extraction, greatly simplifies the process of model fitting, and offers strong generalization ability, a high safety factor, and high, stable recognition accuracy.
The technical scheme for realizing the purpose of the invention is as follows:
an identity recognition method based on photoplethysmography comprises the following steps:
1) obtaining training-group and test-group data: collecting PPG signals of n persons over a specified time period to form a training group; randomly collecting PPG signals of about 3/10 of the n persons over another time period to form a test group, wherein the PPG signal data in the training and test groups are labeled with each person's identity, so that classifying the data amounts to identity recognition;
2) dividing all PPG signal data in the training-group and test-group data into segments, each containing several pulse waves;
3) converting all segments into time-frequency energy maps using the continuous wavelet transform;
4) building a neural network model combining a CNN and an LSTM;
5) feeding the training-group picture data from step 3) into the combined CNN-LSTM model for training;
6) classifying the test-group picture data from step 3) with the combined CNN-LSTM model trained in step 5), wherein the classification result is the identity recognition result, and finally evaluating the trained combined CNN-LSTM model.
The size of the test group in step 1) can be 2/10 to 4/10 of the total number of participants, and as much PPG data as possible should be collected for each person.
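To make this split concrete, the following sketch draws a random subset of subjects whose later-period signals would form the test group. The subject count, test fraction, and RNG seed are illustrative assumptions, not values fixed by the claim:

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # fixed seed for reproducibility (illustrative)
n_subjects = 80                       # subject count used in the patent's embodiment
test_fraction = 3 / 10                # within the claimed 2/10 to 4/10 range

subject_ids = np.arange(n_subjects)
# Randomly draw ~3/10 of the subjects; their signals from a second
# time period would form the test group.
test_ids = rng.choice(subject_ids, size=int(n_subjects * test_fraction),
                      replace=False)
```

Any fraction in the 2/10 to 4/10 range works the same way; only the `size` argument changes.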
The segment length of the PPG pulse wave in step 2) ranges from 5 s to 20 s, where the number of sampling points in a PPG segment equals the product of the sampling frequency and the segment length, and the time interval between sampling points is 1 divided by the sampling frequency.
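The sampling arithmetic in this paragraph can be sketched as follows. The 125 Hz sampling frequency and 10 s segment length are the values used in the patent's embodiment; the random signal is only a stand-in for a real PPG recording:

```python
import numpy as np

fs = 125                     # sampling frequency in Hz (embodiment value)
seg_len_s = 10               # segment length in seconds, within the 5-20 s range

samples_per_seg = fs * seg_len_s        # points per segment = frequency x length
dt = 1 / fs                             # time interval between sampling points

signal = np.random.randn(fs * 60 * 60)  # stand-in for a 60-minute recording
n_segs = len(signal) // samples_per_seg
# Trim the tail and reshape into (n_segs, samples_per_seg) windows.
segments = signal[: n_segs * samples_per_seg].reshape(n_segs, samples_per_seg)
```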
The mother wavelet function of the continuous wavelet transform in step 3) is any one of 'cgau8', 'haar', 'db2', 'bior' and 'morl'; the converted picture has a pixel size of 1054x148 and is a 3-channel RGB color image.
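A minimal sketch of this conversion using PyWavelets, which provides the 'cgau8' mother wavelet. The choice of scales is an assumption, since the patent names only the wavelet, and the sinusoid stands in for a real PPG segment:

```python
import numpy as np
import pywt

fs = 125
t = np.arange(0, 10, 1 / fs)              # one 10 s segment (1250 samples)
segment = np.sin(2 * np.pi * 1.2 * t)     # synthetic stand-in for a PPG segment

# The patent names the mother wavelet but not the scales; 1..128 is an assumption.
scales = np.arange(1, 129)
coeffs, freqs = pywt.cwt(segment, scales, 'cgau8', sampling_period=1 / fs)

# Time-frequency energy map: squared magnitude of the (complex) coefficients.
energy = np.abs(coeffs) ** 2              # shape: (n_scales, n_samples)
```

Rendering `energy` with a colormap and resizing to 1054x148 pixels would yield the 3-channel RGB picture described above.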
The process of building the neural network model combining the CNN and the LSTM in the step 4) is as follows:
4.1 Use the time-frequency energy maps in a uniform format as the input-layer data;
4.2 Build the convolutional and pooling layers with the following parameters:
First layer: convolutional layer, 30 filters, kernel_size 3x3, strides 1x1;
Second layer: max pooling layer, pool_size 2x2, strides 2x2;
Third layer: convolutional layer, 60 filters, kernel_size 3x3, strides 1x1;
Fourth layer: max pooling layer, pool_size 2x2, strides 2x2;
Fifth layer: convolutional layer, 90 filters, kernel_size 3x3, strides 1x1;
Sixth layer: max pooling layer, pool_size 2x2, strides 2x2;
Seventh layer: convolutional layer, 120 filters, kernel_size 3x3, strides 1x1;
Eighth layer: max pooling layer, pool_size 2x2, strides 2x2;
Ninth layer: convolutional layer, 150 filters, kernel_size 3x3, strides 1x1;
Tenth layer: max pooling layer, pool_size 2x2, strides 2x2;
The activation function in each convolutional layer is the ReLU activation function;
4.3 Build the fully connected layers and the LSTM layer with the following parameters: the eleventh and twelfth layers are fully connected layers with 500 and 100 neurons respectively, both using the ReLU activation function; the thirteenth layer is an LSTM layer with output_dim of size 50; the fourteenth layer is a fully connected layer with n neurons, where n is the number of persons in the training group, and its activation function is the Softmax activation function.
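The fourteen layers above can be sketched in Keras as follows. This is an assumption-laden sketch, not the patent's code: the padding mode and the way the 100-unit dense output is arranged into a sequence for the LSTM are not specified, so 'same' padding and a (10, 10) reshape are illustrative choices:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(n_classes, input_shape=(148, 1054, 3)):
    m = models.Sequential([layers.Input(shape=input_shape)])
    # Layers 1-10: five Conv2D (ReLU) / MaxPooling2D pairs, 30..150 filters.
    for n_filters in (30, 60, 90, 120, 150):
        m.add(layers.Conv2D(n_filters, kernel_size=(3, 3), strides=(1, 1),
                            padding='same', activation='relu'))
        m.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    m.add(layers.Flatten())
    # Layers 11-12: fully connected, 500 and 100 neurons, ReLU.
    m.add(layers.Dense(500, activation='relu'))
    m.add(layers.Dense(100, activation='relu'))
    # Layer 13: LSTM with 50 units; the 100 dense outputs are viewed as
    # 10 time steps of 10 features (an assumption, not stated in the patent).
    m.add(layers.Reshape((10, 10)))
    m.add(layers.LSTM(50))
    # Layer 14: Softmax over the n identities in the training group.
    m.add(layers.Dense(n_classes, activation='softmax'))
    m.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
    return m

model = build_cnn_lstm(n_classes=80)
```

With a 148x1054x3 input, the five pooling stages reduce the feature map to 4x32x150 before the Flatten layer; `model.summary()` shows the full stack.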
The evaluation in step 6) comprehensively assesses the trained combined CNN-LSTM model using the loss function value and the accuracy.
The advantages of this technical scheme are:
1. It adopts a deep learning method based on a neural network model combining a CNN and an LSTM: no manual features need to be extracted separately, the collected PPG signals require no preprocessing such as denoising, and the feature extraction and classification steps are combined directly, simplifying the process of model fitting;
2. The hidden, hard-to-imitate deep features of the PPG signal can be learned in the process of continuously fitting and optimizing the model parameters on a large amount of data, so the resulting model has a high safety factor and good generalization ability in practical applications;
3. Converting the one-dimensional PPG signal into a time-frequency energy map allows small noise components in the picture to be ignored; feeding the maps into the combined CNN-LSTM model finally yields a high-precision, stable recognition effect.
The method has a high safety factor in practical applications, requires no manual feature extraction, greatly simplifies the process of model fitting, and offers strong generalization ability and high, stable recognition accuracy.
Drawings
FIG. 1 is a schematic flow chart of an embodiment;
FIG. 2 shows time-frequency energy maps of 10 s PPG pulse waves of the same person in different time periods in the embodiment;
FIG. 3 shows time-frequency energy maps of 10 s PPG pulse waves of different persons in the embodiment;
FIG. 4 is a diagram of the neural network model combining the CNN and the LSTM in the embodiment;
FIG. 5 shows the PPG signal test results in the embodiment.
In FIG. 5, epoch is the number of iterations, loss and accuracy are the loss function value and accuracy of the simulation test, train_loss is the loss function value on the training data, val_loss is the loss function value on the test data, train_accuracy is the accuracy on the training data, and val_accuracy is the accuracy on the test data.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples, but the invention is not limited thereto.
Example (b):
referring to fig. 1, an identity recognition method based on photoplethysmography includes the following steps:
1) PPG signal data of 80 subjects are downloaded from the MIMIC-III public database to simulate the process of acquiring PPG signal data from the human body. The first 60 minutes of the downloaded PPG signals of the 80 persons form the training group; the PPG signals of 32 of the 80 persons, randomly drawn from the last 60 minutes, form the test group. The PPG signal data in the training and test groups are labeled with each person's identity, so that classifying the data amounts to identity recognition;
2) all PPG signal data are divided into segments, each containing several pulse waves, with a segment length between 5 s and 20 s, where the number of sampling points in a PPG segment equals the product of the sampling frequency and the segment length. In this example the sampling frequency is 125 Hz and the segment length is 10 s, so each signal segment contains 1250 sampling points;
3) all segments are converted into time-frequency energy maps using the continuous wavelet transform with the mother wavelet 'cgau8'; the converted picture has a pixel size of 1054x148 and is a 3-channel RGB color image, as shown in FIGS. 2 and 3;
4) a neural network model combining a CNN and an LSTM is built, as shown in FIG. 4.
The combined CNN-LSTM model is built as follows:
4.1 Use time-frequency energy maps with a uniform size of 1054x148x3 as the input-layer data;
4.2 Build the convolutional and pooling layers with the following parameters:
First layer: convolutional layer, 30 filters, kernel_size 3x3, strides 1x1;
Second layer: max pooling layer, pool_size 2x2, strides 2x2;
Third layer: convolutional layer, 60 filters, kernel_size 3x3, strides 1x1;
Fourth layer: max pooling layer, pool_size 2x2, strides 2x2;
Fifth layer: convolutional layer, 90 filters, kernel_size 3x3, strides 1x1;
Sixth layer: max pooling layer, pool_size 2x2, strides 2x2;
Seventh layer: convolutional layer, 120 filters, kernel_size 3x3, strides 1x1;
Eighth layer: max pooling layer, pool_size 2x2, strides 2x2;
Ninth layer: convolutional layer, 150 filters, kernel_size 3x3, strides 1x1;
Tenth layer: max pooling layer, pool_size 2x2, strides 2x2;
The activation function in each convolutional layer is the ReLU activation function;
4.3 Build the fully connected layers and the LSTM layer with the following parameters: the eleventh and twelfth layers are fully connected layers with 500 and 100 neurons respectively, both using the ReLU activation function; the thirteenth layer is an LSTM layer with output_dim of size 50; the fourteenth layer is a fully connected layer with 80 neurons (one per person in the training group), and its activation function is the Softmax activation function;
5) the training-group picture data from step 3) are fed into the combined CNN-LSTM model for training. In this example, all training-group pictures are fed into the model of step 4), and the parameter model adjusted by each round of training is saved;
6) the test-group picture data from step 3) are classified and identified with the combined CNN-LSTM model trained in step 5). The model automatically records and counts the classification results, from which evaluation indices such as the accuracy and the loss function value are calculated; these values are then used to evaluate the classification performance of the combined CNN-LSTM model.
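The accuracy and loss-function computation referred to in this step amounts to the following; the softmax outputs and labels here are made-up toy values, not results from the patent's test:

```python
import numpy as np

# Hypothetical softmax outputs for 5 test pictures over 4 identities.
probs = np.array([[0.10, 0.10, 0.70, 0.10],
                  [0.80, 0.10, 0.05, 0.05],
                  [0.20, 0.60, 0.10, 0.10],
                  [0.10, 0.20, 0.30, 0.40],
                  [0.30, 0.30, 0.20, 0.20]])
labels = np.array([2, 0, 1, 3, 2])       # true identity of each picture

pred = probs.argmax(axis=1)              # classification result = recognised identity
accuracy = (pred == labels).mean()       # 4 of 5 correct in this toy case
# Categorical cross-entropy: mean negative log-probability of the true class.
loss = -np.log(probs[np.arange(len(labels)), labels]).mean()
```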
As shown in FIG. 5, the combined CNN-LSTM model trained in step 5) was subjected to a simulation test. In this example, once the number of iterations exceeds 7, the classification accuracy of the model is essentially stable, and the PPG signals of different persons are classified with high accuracy.

Claims (5)

1. An identity recognition method based on photoplethysmography is characterized by comprising the following steps:
1) obtaining training-group and test-group data: collecting PPG signals of n persons over a specified time period to form a training group; randomly collecting PPG signals of about 3/10 of the n persons over another time period to form a test group, wherein the PPG signal data of the training and test groups are labeled with each person's identity;
2) dividing all PPG signal data in the training-group and test-group data into segments, each containing several pulse waves;
3) converting all segments into time-frequency energy maps using the continuous wavelet transform;
4) building a neural network model combining a CNN and an LSTM;
5) feeding the training-group picture data from step 3) into the combined CNN-LSTM model for training;
6) classifying the test-group picture data from step 3) with the combined CNN-LSTM model trained in step 5), wherein the classification result is the identity recognition result, and finally evaluating the trained combined CNN-LSTM model.
2. The photoplethysmography-based identity recognition method according to claim 1, wherein the size of the test group in step 1) is 2/10 to 4/10 of the total number of participants.
3. The photoplethysmography-based identity recognition method according to claim 1, wherein the segment length of the PPG pulse wave in step 2) ranges from 5 s to 20 s, wherein the number of sampling points in a PPG segment equals the product of the sampling frequency and the segment length, and the time interval between sampling points is 1 divided by the sampling frequency.
4. The photoplethysmography-based identity recognition method according to claim 1, wherein the mother wavelet function of the continuous wavelet transform in step 3) is any one of 'cgau8', 'haar', 'db2', 'bior' and 'morl', the converted picture has a pixel size of 1054x148, and the picture is a 3-channel RGB color image.
5. The identity recognition method based on photoplethysmography according to claim 1, wherein the process of constructing the neural network model combining CNN and LSTM in step 4) is as follows:
4.1 Use the time-frequency energy maps in a uniform format as the input-layer data;
4.2 Build the convolutional and pooling layers with the following parameters:
First layer: convolutional layer, 30 filters, kernel_size 3x3, strides 1x1;
Second layer: max pooling layer, pool_size 2x2, strides 2x2;
Third layer: convolutional layer, 60 filters, kernel_size 3x3, strides 1x1;
Fourth layer: max pooling layer, pool_size 2x2, strides 2x2;
Fifth layer: convolutional layer, 90 filters, kernel_size 3x3, strides 1x1;
Sixth layer: max pooling layer, pool_size 2x2, strides 2x2;
Seventh layer: convolutional layer, 120 filters, kernel_size 3x3, strides 1x1;
Eighth layer: max pooling layer, pool_size 2x2, strides 2x2;
Ninth layer: convolutional layer, 150 filters, kernel_size 3x3, strides 1x1;
Tenth layer: max pooling layer, pool_size 2x2, strides 2x2;
The activation function in each convolutional layer is the ReLU activation function;
4.3 Build the fully connected layers and the LSTM layer with the following parameters: the eleventh and twelfth layers are fully connected layers with 500 and 100 neurons respectively, both using the ReLU activation function; the thirteenth layer is an LSTM layer with output_dim of size 50; the fourteenth layer is a fully connected layer with n neurons, where n is the number of persons in the training group, and its activation function is the Softmax activation function.
CN202010194774.8A 2020-03-19 2020-03-19 Identity recognition method based on photoplethysmography Pending CN111222498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010194774.8A CN111222498A (en) 2020-03-19 2020-03-19 Identity recognition method based on photoplethysmography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010194774.8A CN111222498A (en) 2020-03-19 2020-03-19 Identity recognition method based on photoplethysmography

Publications (1)

Publication Number Publication Date
CN111222498A true CN111222498A (en) 2020-06-02

Family

ID=70830150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010194774.8A Pending CN111222498A (en) 2020-03-19 2020-03-19 Identity recognition method based on photoplethysmography

Country Status (1)

Country Link
CN (1) CN111222498A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613431A (en) * 2020-12-28 2021-04-06 中北大学 Automatic identification method, system and device for leaked gas
CN113569685A (en) * 2021-07-20 2021-10-29 华中科技大学 Method and system for establishing fault diagnosis model and fault diagnosis of machine tool spindle bearing
CN113892919A (en) * 2021-12-09 2022-01-07 季华实验室 Pulse feeling data acquisition method and device, electronic equipment and system
CN114098691A (en) * 2022-01-26 2022-03-01 之江实验室 Pulse wave identity authentication method, device and medium based on Gaussian mixture model
CN115830656A (en) * 2022-12-08 2023-03-21 辽宁科技大学 Identity recognition method and device based on pulse wave

Citations (8)

Publication number Priority date Publication date Assignee Title
US20110077484A1 (en) * 2009-09-30 2011-03-31 Nellcor Puritan Bennett Ireland Systems And Methods For Identifying Non-Corrupted Signal Segments For Use In Determining Physiological Parameters
CN106473750A (en) * 2016-10-08 2017-03-08 西安电子科技大学 Personal identification method based on photoplethysmographic optimal period waveform
CN107088069A (en) * 2017-03-29 2017-08-25 西安电子科技大学 Personal identification method based on human body PPG signal subsections
WO2018145377A1 (en) * 2017-02-07 2018-08-16 华为技术有限公司 User identity recognition method, apparatus and system
US20190133468A1 (en) * 2017-11-03 2019-05-09 Samsung Electronics Co., Ltd. Method and apparatus for high accuracy photoplethysmogram based atrial fibrillation detection using wearable device
CN110288018A (en) * 2019-06-24 2019-09-27 桂林电子科技大学 A kind of WiFi personal identification method merging deep learning model
US20190313915A1 (en) * 2015-06-14 2019-10-17 Facense Ltd. Posture-adjusted calculation of physiological signals
CN110458197A (en) * 2019-07-11 2019-11-15 启东市知微电子科技有限公司 Personal identification method and its system based on photoplethysmographic

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
US20110077484A1 (en) * 2009-09-30 2011-03-31 Nellcor Puritan Bennett Ireland Systems And Methods For Identifying Non-Corrupted Signal Segments For Use In Determining Physiological Parameters
US20190313915A1 (en) * 2015-06-14 2019-10-17 Facense Ltd. Posture-adjusted calculation of physiological signals
CN106473750A (en) * 2016-10-08 2017-03-08 西安电子科技大学 Personal identification method based on photoplethysmographic optimal period waveform
WO2018145377A1 (en) * 2017-02-07 2018-08-16 华为技术有限公司 User identity recognition method, apparatus and system
CN107088069A (en) * 2017-03-29 2017-08-25 西安电子科技大学 Personal identification method based on human body PPG signal subsections
US20190133468A1 (en) * 2017-11-03 2019-05-09 Samsung Electronics Co., Ltd. Method and apparatus for high accuracy photoplethysmogram based atrial fibrillation detection using wearable device
CN110288018A (en) * 2019-06-24 2019-09-27 桂林电子科技大学 A kind of WiFi personal identification method merging deep learning model
CN110458197A (en) * 2019-07-11 2019-11-15 启东市知微电子科技有限公司 Personal identification method and its system based on photoplethysmographic

Non-Patent Citations (1)

Title
陈真诚 et al.: "基于光电容积脉搏波信号与心电信号相关性的研究" (Research on the correlation between photoplethysmographic and electrocardiographic signals), 《生物医学工程研》 *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN112613431A (en) * 2020-12-28 2021-04-06 中北大学 Automatic identification method, system and device for leaked gas
CN112613431B (en) * 2020-12-28 2021-06-29 中北大学 Automatic identification method, system and device for leaked gas
CN113569685A (en) * 2021-07-20 2021-10-29 华中科技大学 Method and system for establishing fault diagnosis model and fault diagnosis of machine tool spindle bearing
CN113892919A (en) * 2021-12-09 2022-01-07 季华实验室 Pulse feeling data acquisition method and device, electronic equipment and system
CN113892919B (en) * 2021-12-09 2022-04-22 季华实验室 Pulse feeling data acquisition method and device, electronic equipment and system
CN114098691A (en) * 2022-01-26 2022-03-01 之江实验室 Pulse wave identity authentication method, device and medium based on Gaussian mixture model
CN115830656A (en) * 2022-12-08 2023-03-21 辽宁科技大学 Identity recognition method and device based on pulse wave
CN115830656B (en) * 2022-12-08 2023-07-14 辽宁科技大学 Pulse wave-based identity recognition method and device

Similar Documents

Publication Publication Date Title
CN111222498A (en) Identity recognition method based on photoplethysmography
Yuan et al. Fingerprint liveness detection using an improved CNN with image scale equalization
CN110069958B (en) Electroencephalogram signal rapid identification method of dense deep convolutional neural network
CN108776788B (en) Brain wave-based identification method
CN109784023B (en) Steady-state vision-evoked electroencephalogram identity recognition method and system based on deep learning
CN110458197A (en) Personal identification method and its system based on photoplethysmographic
CN110353702A (en) A kind of emotion identification method and system based on shallow-layer convolutional neural networks
CN111523601B (en) Potential emotion recognition method based on knowledge guidance and generation of countermeasure learning
CN109497990B (en) Electrocardiosignal identity recognition method and system based on canonical correlation analysis
Huang et al. Human identification with electroencephalogram (EEG) signal processing
Zhou et al. Research on image preprocessing algorithm and deep learning of iris recognition
Patil et al. A non-contact PPG biometric system based on deep neural network
CN111797747A (en) Potential emotion recognition method based on EEG, BVP and micro-expression
CN109620260A (en) Psychological condition recognition methods, equipment and storage medium
CN108256579A (en) A kind of multi-modal sense of national identity quantization measuring method based on priori
CN113243924A (en) Identity recognition method based on electroencephalogram signal channel attention convolution neural network
CN115969392A (en) Cross-period brainprint recognition method based on tensor frequency space attention domain adaptive network
CN114081505A (en) Electroencephalogram signal identification method based on Pearson correlation coefficient and convolutional neural network
CN116861217A (en) Identity recognition method and system for mobile terminal
He et al. Emotion classification using eeg data in a brain-inspired spiking neural network
Krishna et al. Ear-Based Biometric System Using Artificial Intelligence
Nawas et al. K-NN classification of brain dominance
Cheng et al. Heart sound recognition-A prospective candidate for biometric identification
Li et al. Authentication study for brain-based computer interfaces using music stimulations
CN113951886A (en) Brain magnetic pattern generation system and lie detection decision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602
