CN113069081A - Pain detection method based on improved Bi-LSTM and fNIRS - Google Patents

Pain detection method based on improved Bi-LSTM and fNIRS

Info

Publication number
CN113069081A
CN113069081A CN202110301587.XA
Authority
CN
China
Prior art keywords
data
layer
lstm
pain
fnirs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110301587.XA
Other languages
Chinese (zh)
Other versions
CN113069081B (en)
Inventor
潘晓光
王小华
张娜
董虎弟
宋晓晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Sanyouhe Smart Information Technology Co Ltd
Original Assignee
Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Sanyouhe Smart Information Technology Co Ltd filed Critical Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority to CN202110301587.XA priority Critical patent/CN113069081B/en
Publication of CN113069081A publication Critical patent/CN113069081A/en
Application granted granted Critical
Publication of CN113069081B publication Critical patent/CN113069081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0075 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4824 Touch or pain perception evaluation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Pain & Pain Management (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of deep learning and specifically relates to a pain detection method based on improved Bi-LSTM and fNIRS, comprising the following steps: data acquisition, data set construction, data preprocessing, data set division, model construction, model training and model evaluation. The data acquisition collects fNIRS signals under different pain sensations and annotates the data; the data set construction builds a standardized data set; the data preprocessing normalizes the data and converts the labels into One-Hot form; the data set division splits the data set into a training set, a validation set and a test set; the model construction assembles an improved Bi-LSTM network from an LSTM layer, a CNN layer, a Bi-LSTM layer and a fully connected layer; the model training iteratively trains the network on the training set data to build an fNIRS pain-recognition parameter model; and the model evaluation assesses the recognition performance of the model with precision, recall and F1.

Description

Pain detection method based on improved Bi-LSTM and fNIRS
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a pain detection method based on improved Bi-LSTM and fNIRS.
Background
Pain assessment in non-verbal patients is extremely difficult and is usually based on clinical judgment. This approach is unreliable, however, because a patient's vital signs may fluctuate significantly due to other underlying medical conditions, and to date there is no objective diagnostic test that helps a physician diagnose pain.
Some existing algorithms apply machine learning to functional near-infrared spectroscopy (fNIRS) signals to detect pain, but they depend on manual feature engineering: the most relevant features must be designed and selected by hand, so feature selection cannot be performed objectively.
Therefore, there is a need for improvements in the prior art.
Disclosure of Invention
In order to overcome the defects of the prior art, a more efficient pain detection method based on improved Bi-LSTM and fNIRS is provided.
To solve the above technical problems, the invention adopts the following technical solution:
a method for improved Bi-LSTM and fNIRS-based pain detection comprising:
s100, data acquisition: collecting fNIRS signals under different pain senses by using an fNIRS signal collecting instrument, and labeling data;
s200, data set construction: constructing a standardized data set capable of carrying out deep learning training;
s300, data preprocessing: carrying out normalization processing on the data, and converting the label into a One-Hot form;
s400, data set division: dividing a data set into a training set, a verification set and a test set;
s500, model construction: constructing an improved BI-LSTM network consisting of an LSTM layer, a CNN layer, a BI-LSTM layer and a full connection layer;
s600, model training: performing iterative training on the network by using training set data to construct a fNIRS signal pain sensation identification parameter model;
s700, model evaluation: evaluating the recognition performance of the model with precision, recall and F1.
Further, in the S100 data acquisition, pain detection is performed on volunteers to obtain fNIRS data, which are collected with an ETG-4000; each volunteer is in contact with a thermal stimulation device whose temperature gradually increases or decreases and presses a button on reaching the pain threshold and on reaching the maximum-intensity pain tolerance; the fNIRS data are divided into 4 categories according to the threshold and tolerance for cold and heat stimulation: "Low Cold" (low pain), "Low Heat" (low pain), "High Cold" (high pain) and "High Heat" (high pain), and the data are labeled with these categories.
Further, in the S200 data set construction, the data from 8 s before the volunteer presses the button to 2 s after the button press are taken as one collected data segment; each segment is a 10 s fNIRS signal sampled at 10 Hz and comprises 24 channels with 1000 sampling points per channel, i.e. data of the form 1000 × 24.
Further, in the S300 data preprocessing, min-max normalization is performed on the data, and the formula is as follows:
x' = (x - x_min) / (x_max - x_min),
the labels are converted into One-Hot form, wherein "Low Cold" is [1,0,0,0], "Low Heat" is [0,1,0,0], "High Cold" is [0,0,1,0] and "High Heat" is [0,0,0,1].
Further, in the S400 data set division, the data are divided into a training set, a validation set and a test set at a ratio of 6:2:2.
Further, in S500, the LSTM layer uses gating units to control which information should be remembered and which should be forgotten. This is implemented with three gates. The first gate is the forget gate, which uses a sigmoid layer to decide what information to remove from the LSTM Cell state:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f),
the second gate is the input gate, which uses a sigmoid layer to decide which values to update and a tanh layer to define the new candidate values:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i),  c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c),
the output gate determines what is output from the LSTM Cell state:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o),  h_t = o_t · tanh(c_t),
where x_t is the input sequence at time t, h_{t-1} is the hidden state of the previous time step, and b_f, b_i, b_c, b_o are the bias vectors of the respective gates; the first layer consists of a single-layer LSTM network with 64 hidden nodes per LSTM Cell and performs a preliminary time-domain analysis of the data.
Further, in S500, the CNN layer consists of three 1D CNN layers, wherein the first CNN layer has a convolution kernel of size 5 × 5, a stride of 2, convolution in valid mode and 64 filters; the second CNN layer has a convolution kernel of size 3 × 3, a stride of 2, valid convolution and 128 filters; the third CNN layer has a convolution kernel of size 3 × 3, a stride of 1, valid convolution and 128 filters; each convolution layer is followed by a batch normalization layer, an activation layer and a Dropout layer, the activation function is ReLU, which nonlinearly activates the extracted features, and all Dropout coefficients are 0.5;
the Bi-LSTM layer consists of two LSTM networks that analyze the data features in the forward and backward directions respectively, each unit of the two LSTMs uses 128 hidden nodes, and after bidirectional time-domain recognition by the Bi-LSTM layer, the resulting features are fused by addition ("add") to obtain 128-dimensional data;
the fully connected layer performs the final classification of the obtained feature vector, and a Sigmoid is applied to the result:
S(k) = 1 / (1 + e^(-k)),
wherein k represents the result output by the FC layer, and S (k) represents the result output after Sigmoid operation.
Further, in the S600 model training, after the network is built, the network parameters are trained with the training set data; SGD is used as the optimizer with an initial learning rate of 0.1, the learning rate is decayed by 80% every 100 epochs, the batch size is 64 and softmax_cross_entropy is used as the loss function; training is set to 500 epochs and is stopped if the model loss does not decrease for 15 consecutive epochs, after which the model is saved; the trained model is then trained for a further 100 epochs on the validation set data; if the model loss does not decrease, the model is saved, and if it does decrease, the model continues to be trained with the training set data until the model loss is stable.
Further, in the S700 model evaluation, the trained model is used to predict pain classes for the test set data, the predictions are compared with the corresponding labels, and the recognition performance is evaluated; the evaluation metric is the F1-Score, and the higher the F1-Score, the better the recognition:
F1 = 2·A·R / (A + R),
where F1 is the F1-score and A is the precision,
A = TP / (TP + FP),
R is the recall,
R = TP / (TP + FN),
and TP is the number of true positives, FP the number of false positives, FN the number of false negatives, and TN the number of true negatives.
Compared with the prior art, the invention has the following beneficial effects:
the present invention combines forward and backward information in an improved Bi-LSTM network, using a feature extraction process that does not require complex features to achieve satisfactory results, which also reduces the level of subjectivity of manual feature design, facilitates the development of pain assessment knowledge using neuroimaging as a diagnostic method, and represents a further development of physiology-based human pain diagnosis, which would be beneficial to those who are unable to self-describe pain.
Drawings
The following will explain embodiments of the present invention in further detail through the accompanying drawings.
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a network model architecture diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment:
As shown in FIGS. 1-2, a pain detection method based on improved Bi-LSTM and fNIRS includes the following steps:
s100, data acquisition: collecting fNIRS signals under different pain senses by using an fNIRS signal collecting instrument, and labeling data;
Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain-function imaging technique whose principle is similar to that of fMRI: brain neural activity causes local hemodynamic changes. fNIRS exploits the different absorption rates of oxyhemoglobin and deoxyhemoglobin in brain tissue for near-infrared light at wavelengths of 600-900 nm to detect the hemodynamic activity of the cerebral cortex directly and in real time; by observing the hemodynamic changes, the neural activity of the brain can be inferred through the rule of neurovascular coupling. Pain detection is performed on volunteers to obtain fNIRS data, which are collected with an ETG-4000; this instrument acquires 24-channel fNIRS data at a sampling frequency of 10 Hz. Each volunteer is in contact with a thermal stimulation device whose temperature gradually increases or decreases and presses a button on reaching the pain threshold and on reaching the maximum-intensity pain tolerance. The procedure comprises two tests, heat-pain threshold and heat-pain tolerance, with a 2-minute rest between them; at the beginning of the experiment, after an initial 60-second rest, three consecutive randomly selected cold or heat stimulation trials are run, with a 60-second rest between trials. The fNIRS data are divided into 4 categories according to the threshold and tolerance for cold and heat stimulation: "Low Cold" (low pain), "Low Heat" (low pain), "High Cold" (high pain) and "High Heat" (high pain), and the data are labeled with these categories.
S200, data set construction: constructing a standardized data set capable of carrying out deep learning training;
The data from 8 s before the volunteer presses the button to 2 s after the button press are taken as one collected data segment; each segment is a 10 s fNIRS signal sampled at 10 Hz and comprises 24 channels with 1000 sampling points per channel, i.e. data of the form 1000 × 24.
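By way of illustration only, the segmentation of the continuous recording into button-press-aligned windows could be sketched in Python as follows; the function and variable names are not from the patent, and the segment length in samples is simply (pre_s + post_s) × fs under these illustrative parameters:

    import numpy as np

    def extract_segment(recording, press_idx, fs=10, pre_s=8, post_s=2):
        # recording: continuous fNIRS array of shape (n_samples, 24)
        # press_idx: sample index at which the volunteer pressed the button
        start = press_idx - pre_s * fs
        end = press_idx + post_s * fs
        return recording[start:end, :]  # one 10 s window across all 24 channels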
S300, data preprocessing: carrying out normalization processing on the data, and converting the label into a One-Hot form;
the data were normalized by min-max, as follows:
x' = (x - x_min) / (x_max - x_min)
The labels are converted into One-Hot form: "Low Cold" is [1,0,0,0], "Low Heat" is [0,1,0,0], "High Cold" is [0,0,1,0] and "High Heat" is [0,0,0,1].
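A minimal Python sketch of this preprocessing step, assuming each segment is a NumPy array of shape (samples, channels) and the labels are integers 0-3; the function names are illustrative only:

    import numpy as np

    def min_max_normalize(segment):
        # x' = (x - x_min) / (x_max - x_min), applied per channel
        x_min = segment.min(axis=0, keepdims=True)
        x_max = segment.max(axis=0, keepdims=True)
        return (segment - x_min) / (x_max - x_min + 1e-8)  # small epsilon avoids division by zero

    def one_hot(label, num_classes=4):
        # 0 = "Low Cold", 1 = "Low Heat", 2 = "High Cold", 3 = "High Heat"
        vec = np.zeros(num_classes, dtype=np.float32)
        vec[label] = 1.0
        return vec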
S400, data set division: dividing a data set into a training set, a verification set and a test set;
The data are divided into a training set, a validation set and a test set at a ratio of 6:2:2. The training set is used for iterative training of the model parameters, the validation set is used to check that the model is fully trained and that its loss is at a minimum, and the test set is used to measure the recognition performance of the model.
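The 6:2:2 split could be realized, for example, with scikit-learn as a two-step split; the patent does not prescribe a particular library, and the placeholder arrays below are illustrative only:

    import numpy as np
    from sklearn.model_selection import train_test_split

    # placeholder data: 100 segments of shape (1000, 24) with one-hot labels (illustrative only)
    X = np.random.rand(100, 1000, 24)
    y = np.eye(4)[np.random.randint(0, 4, size=100)]

    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=42)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)
    # result: 60% training, 20% validation, 20% test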
S500, model construction: as shown in FIG. 2, the Bi-LSTM is improved; the network consists of 4 parts, namely an LSTM layer, a CNN layer, a Bi-LSTM layer and a fully connected layer.
LSTM layer: the LSTM layer uses gating units to control which information should be remembered and which should be forgotten. This is implemented with three gates. The first gate is the forget gate, which uses a sigmoid layer to decide what information to remove from the LSTM Cell state: f_t = σ(W_f · [h_{t-1}, x_t] + b_f). The second gate is the input gate, which uses a sigmoid layer to decide which values to update and a tanh layer to define the new candidate values: i_t = σ(W_i · [h_{t-1}, x_t] + b_i), c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c). The output gate determines what is output from the LSTM Cell state: o_t = σ(W_o · [h_{t-1}, x_t] + b_o), h_t = o_t · tanh(c_t), where x_t is the input sequence at time t, h_{t-1} is the hidden state of the previous time step, and b_f, b_i, b_c, b_o are the bias vectors of the respective gates. This first layer consists of a single-layer LSTM network with 64 hidden nodes per LSTM Cell and performs a preliminary time-domain analysis of the data;
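For illustration, the gate equations of a single LSTM Cell update can be written out as a NumPy sketch; the cell-state update c_t = f_t · c_{t-1} + i_t · c̃_t follows the standard LSTM formulation, and the weight shapes and function names are assumptions rather than code from the patent:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_cell_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
        z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
        f_t = sigmoid(W_f @ z + b_f)           # forget gate
        i_t = sigmoid(W_i @ z + b_i)           # input gate
        c_tilde = np.tanh(W_c @ z + b_c)       # candidate cell state
        c_t = f_t * c_prev + i_t * c_tilde     # updated cell state
        o_t = sigmoid(W_o @ z + b_o)           # output gate
        h_t = o_t * np.tanh(c_t)               # new hidden state
        return h_t, c_t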
CNN layer: consists of three 1D CNN layers, wherein the first CNN layer has a convolution kernel of size 5 × 5, a stride of 2, convolution in valid mode and 64 filters; the second CNN layer has a convolution kernel of size 3 × 3, a stride of 2, valid convolution and 128 filters; the third CNN layer has a convolution kernel of size 3 × 3, a stride of 1, valid convolution and 128 filters. Each convolution layer is followed by a batch normalization layer, an activation layer and a Dropout layer; the activation function is ReLU, which nonlinearly activates the extracted features, and all Dropout coefficients are 0.5;
Bi-LSTM layer: consists of two LSTM networks that analyze the data features in the forward and backward directions respectively; each unit of the two LSTMs uses 128 hidden nodes. After bidirectional time-domain recognition by the Bi-LSTM layer, the resulting features are fused by addition ("add") to obtain 128-dimensional data;
Fully connected layer: the obtained features are finally classified through a fully connected layer, and a Sigmoid is applied to the result:
S(k) = 1 / (1 + e^(-k)),
wherein k represents the result output by the FC layer, and S (k) represents the result output after Sigmoid operation.
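A minimal Keras sketch of the improved Bi-LSTM network described above (LSTM layer with 64 hidden nodes, three 1D convolution blocks, a bidirectional LSTM with 128 units per direction fused by addition, and a fully connected Sigmoid output). The input shape and the interpretation of the kernel sizes as 1D kernels of length 5 and 3 are assumptions; the patent does not name a framework:

    from tensorflow.keras import layers, models

    def build_model(timesteps=1000, channels=24, num_classes=4):
        inputs = layers.Input(shape=(timesteps, channels))
        # LSTM layer: preliminary time-domain analysis, 64 hidden nodes per cell
        x = layers.LSTM(64, return_sequences=True)(inputs)
        # CNN layer: three 1D convolution blocks, each followed by BN, ReLU and Dropout(0.5)
        for filters, kernel, stride in [(64, 5, 2), (128, 3, 2), (128, 3, 1)]:
            x = layers.Conv1D(filters, kernel, strides=stride, padding="valid")(x)
            x = layers.BatchNormalization()(x)
            x = layers.Activation("relu")(x)
            x = layers.Dropout(0.5)(x)
        # Bi-LSTM layer: forward and backward passes fused by addition (128-dimensional output)
        x = layers.Bidirectional(layers.LSTM(128), merge_mode="sum")(x)
        # Fully connected layer for the final 4-class output
        outputs = layers.Dense(num_classes, activation="sigmoid")(x)
        return models.Model(inputs, outputs)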
S600, model training: performing iterative training on the network by using training set data to construct a fNIRS signal pain sensation identification parameter model;
After the network is built, the network parameters are trained with the training set data. SGD is used as the optimizer with an initial learning rate of 0.1, the learning rate is decayed by 80% every 100 epochs, the batch size is 64 and softmax_cross_entropy is used as the loss function. Training is set to 500 epochs and is stopped early if the model loss does not decrease for 15 consecutive epochs, after which the model is saved. The trained model is then trained for a further 100 epochs on the validation set data; if the model loss does not decrease, the model is saved, and if it does decrease, training continues with the training set data until the model loss is stable.
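The training schedule could be approximated as follows; the choice of Keras callbacks, the use of categorical cross-entropy in place of softmax_cross_entropy, and the exact form of the 80% decay every 100 epochs and the 15-epoch early stop are this sketch's assumptions rather than code from the patent (it reuses build_model and the split arrays from the sketches above):

    import tensorflow as tf

    def lr_schedule(epoch, lr):
        # keep 20% of the rate every 100 epochs, i.e. an 80% decay (0.1 -> 0.02 -> 0.004 ...)
        return 0.1 * (0.2 ** (epoch // 100))

    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    callbacks = [
        tf.keras.callbacks.LearningRateScheduler(lr_schedule),
        tf.keras.callbacks.EarlyStopping(monitor="loss", patience=15, restore_best_weights=True),
    ]
    model.fit(X_train, y_train, batch_size=64, epochs=500, callbacks=callbacks)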
S700, model evaluation: evaluating the recognition performance of the model with precision, recall and F1;
The trained model is used to predict pain classes for the test set data, the predictions are compared with the corresponding labels, and the recognition performance is evaluated. The evaluation metric is the F1-Score; the higher the F1-Score, the better the recognition:
F1 = 2·A·R / (A + R),  A = TP / (TP + FP),  R = TP / (TP + FN),
where F1 is the F1-score, A is the precision, R is the recall, TP is the number of true positives, FP the number of false positives, FN the number of false negatives, and TN the number of true negatives.
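As an illustrative sketch, the metrics can be computed directly from the confusion-matrix counts; for the 4-class problem the counts would typically be accumulated per class and averaged, which the patent does not detail:

    def f1_score(tp, fp, fn):
        precision = tp / (tp + fp)   # A in the notation above
        recall = tp / (tp + fn)      # R in the notation above
        return 2 * precision * recall / (precision + recall)

    # example: 80 true positives, 10 false positives, 20 false negatives
    print(f1_score(80, 10, 20))  # ≈ 0.842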
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (9)

1. A method for improved Bi-LSTM and fNIRS-based pain detection, comprising:
s100, data acquisition: collecting fNIRS signals under different pain senses by using an fNIRS signal collecting instrument, and labeling data;
s200, data set construction: constructing a standardized data set capable of carrying out deep learning training;
s300, data preprocessing: carrying out normalization processing on the data, and converting the label into a One-Hot form;
s400, data set division: dividing a data set into a training set, a verification set and a test set;
s500, model construction: constructing an improved BI-LSTM network consisting of an LSTM layer, a CNN layer, a BI-LSTM layer and a full connection layer;
s600, model training: performing iterative training on the network by using training set data to construct a fNIRS signal pain sensation identification parameter model;
s700, model evaluation: evaluating the recognition performance of the model with precision, recall and F1.
2. The pain detection method based on improved Bi-LSTM and fNIRS of claim 1, wherein in the S100 data acquisition, pain detection is performed on volunteers to obtain fNIRS data, which are collected with an ETG-4000; each volunteer is in contact with a thermal stimulation device whose temperature gradually increases or decreases and presses a button on reaching the pain threshold and on reaching the maximum-intensity pain tolerance; the fNIRS data are divided into 4 categories according to the threshold and tolerance for cold and heat stimulation: "Low Cold" (low pain), "Low Heat" (low pain), "High Cold" (high pain) and "High Heat" (high pain), and the data are labeled with these categories.
3. The method of claim 1, wherein in the S200 data set construction, the data from 8 s before the volunteer presses the button to 2 s after the button press are taken as one collected data segment, each segment being a 10 s fNIRS signal sampled at 10 Hz and comprising 24 channels with 1000 sampling points per channel, i.e. data of the form 1000 × 24.
4. The method of claim 1, wherein in the S300 data preprocessing, min-max normalization is performed on the data according to the following formula:
x' = (x - x_min) / (x_max - x_min),
and the labels are converted into One-Hot form, wherein "Low Cold" is [1,0,0,0], "Low Heat" is [0,1,0,0], "High Cold" is [0,0,1,0] and "High Heat" is [0,0,0,1].
5. The pain detection method based on improved Bi-LSTM and fNIRS of claim 1, wherein in the S400 data set division, the data are divided into a training set, a validation set and a test set at a ratio of 6:2:2.
6. The method of claim 1, wherein in step S500, the LSTM layer uses gating units to control which information should be remembered and which should be forgotten, implemented with three gates; the first gate is the forget gate, which uses a sigmoid layer to decide what information to remove from the LSTM Cell state:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f),
the second gate is the input gate, which uses a sigmoid layer to decide which values to update and a tanh layer to define the new candidate values:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i),  c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c),
the output gate determines what is output from the LSTM Cell state:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o),  h_t = o_t · tanh(c_t),
where x_t is the input sequence at time t, h_{t-1} is the hidden state of the previous time step, and b_f, b_i, b_c, b_o are the bias vectors of the respective gates; the first layer consists of a single-layer LSTM network with 64 hidden nodes per LSTM Cell and performs a preliminary time-domain analysis of the data.
7. The pain detection method based on improved Bi-LSTM and fNIRS of claim 1, wherein: in S500, the CNN layer consists of three 1D CNN layers, wherein the first CNN layer has a convolution kernel of size 5 × 5, a stride of 2, convolution in valid mode and 64 filters; the second CNN layer has a convolution kernel of size 3 × 3, a stride of 2, valid convolution and 128 filters; the third CNN layer has a convolution kernel of size 3 × 3, a stride of 1, valid convolution and 128 filters; each convolution layer is followed by a batch normalization layer, an activation layer and a Dropout layer, the activation function is ReLU, which nonlinearly activates the extracted features, and all Dropout coefficients are 0.5;
the Bi-LSTM layer consists of two LSTM networks that analyze the data features in the forward and backward directions respectively, each unit of the two LSTMs uses 128 hidden nodes, and after bidirectional time-domain recognition by the Bi-LSTM layer, the resulting features are fused by addition ("add") to obtain 128-dimensional data;
the fully connected layer performs the final classification of the obtained feature vector, and a Sigmoid is applied to the result:
S(k) = 1 / (1 + e^(-k)),
wherein k represents the result output by the FC layer, and S (k) represents the result output after Sigmoid operation.
8. The pain detection method based on improved Bi-LSTM and fNIRS of claim 1, wherein: in the S600 model training, after the network is built, the network parameters are trained with the training set data; SGD is used as the optimizer with an initial learning rate of 0.1, the learning rate is decayed by 80% every 100 epochs, the batch size is 64 and softmax_cross_entropy is used as the loss function; training is set to 500 epochs and is stopped if the model loss does not decrease for 15 consecutive epochs, after which the model is saved; the trained model is then trained for a further 100 epochs on the validation set data; if the model loss does not decrease, the model is saved, and if it does decrease, the model continues to be trained with the training set data until the model loss is stable.
9. The pain detection method based on improved Bi-LSTM and fNIRS of claim 1, wherein: in the S700 model evaluation, the trained model is used to predict pain classes for the test set data, the predictions are compared with the corresponding labels, and the recognition performance is evaluated; the evaluation metric is the F1-Score, and the higher the F1-Score, the better the recognition:
F1 = 2·A·R / (A + R),
where F1 is the F1-score and A is the precision,
A = TP / (TP + FP),
R is the recall,
R = TP / (TP + FN),
and TP is the number of true positives, FP the number of false positives, FN the number of false negatives, and TN the number of true negatives.
CN202110301587.XA 2021-03-22 2021-03-22 Pain detection method based on improved Bi-LSTM and fNIRS Active CN113069081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110301587.XA CN113069081B (en) 2021-03-22 2021-03-22 Pain detection method based on improved Bi-LSTM and fNIRS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110301587.XA CN113069081B (en) 2021-03-22 2021-03-22 Pain detection method based on improved Bi-LSTM and fNIRS

Publications (2)

Publication Number Publication Date
CN113069081A true CN113069081A (en) 2021-07-06
CN113069081B CN113069081B (en) 2023-04-07

Family

ID=76613417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110301587.XA Active CN113069081B (en) 2021-03-22 2021-03-22 Pain detection method based on improved Bi-LSTM and fNIRS

Country Status (1)

Country Link
CN (1) CN113069081B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648430A (en) * 2024-01-30 2024-03-05 南京大经中医药信息技术有限公司 Dialogue type large language model supervision training evaluation system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101677775A (en) * 2007-04-05 2010-03-24 纽约大学 System and method for pain detection and computation of a pain quantification index
CN102940882A (en) * 2012-11-05 2013-02-27 张清泉 Medicine for treating respiratory diseases
CN104784354A (en) * 2015-05-03 2015-07-22 黑龙江中医药大学 Traditional Chinese medicine acupoint injection for treating migraine and preparation method of traditional Chinese medicine acupoint injection
US20180150605A1 (en) * 2016-11-28 2018-05-31 Google Inc. Generating structured text content using speech recognition models
CN109009099A (en) * 2018-07-19 2018-12-18 燕山大学 A kind of intelligent anesthesia system based on EEG-NIRS
US20190074028A1 (en) * 2017-09-01 2019-03-07 Newton Howard Real-time vocal features extraction for automated emotional or mental state assessment
CN110573066A (en) * 2017-03-02 2019-12-13 光谱Md公司 Machine learning systems and techniques for multi-spectral amputation site analysis
WO2020074903A1 (en) * 2018-10-10 2020-04-16 Ieso Digital Health Limited Methods, systems and apparatus for improved therapy delivery and monitoring
CN111163693A (en) * 2017-06-28 2020-05-15 瑞桑尼有限公司 Customization of health and disease diagnostics
WO2020115487A1 (en) * 2018-12-07 2020-06-11 Oxford University Innovation Limited Method and data processing apparatus for generating real-time alerts about a patient
CN111728620A (en) * 2020-07-06 2020-10-02 刘涛 Cervical vertebra evaluation rehabilitation device
US20210012215A1 (en) * 2019-07-09 2021-01-14 Baidu Usa Llc Hierarchical multi-task term embedding learning for synonym prediction
WO2021021714A1 (en) * 2019-07-29 2021-02-04 The Regents Of The University Of California Method of contextual speech decoding from the brain

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101677775A (en) * 2007-04-05 2010-03-24 纽约大学 System and method for pain detection and computation of a pain quantification index
CN102940882A (en) * 2012-11-05 2013-02-27 张清泉 Medicine for treating respiratory diseases
CN104784354A (en) * 2015-05-03 2015-07-22 黑龙江中医药大学 Traditional Chinese medicine acupoint injection for treating migraine and preparation method of traditional Chinese medicine acupoint injection
US20180150605A1 (en) * 2016-11-28 2018-05-31 Google Inc. Generating structured text content using speech recognition models
CN110573066A (en) * 2017-03-02 2019-12-13 光谱Md公司 Machine learning systems and techniques for multi-spectral amputation site analysis
CN111163693A (en) * 2017-06-28 2020-05-15 瑞桑尼有限公司 Customization of health and disease diagnostics
US20190074028A1 (en) * 2017-09-01 2019-03-07 Newton Howard Real-time vocal features extraction for automated emotional or mental state assessment
CN109009099A (en) * 2018-07-19 2018-12-18 燕山大学 A kind of intelligent anesthesia system based on EEG-NIRS
WO2020074903A1 (en) * 2018-10-10 2020-04-16 Ieso Digital Health Limited Methods, systems and apparatus for improved therapy delivery and monitoring
WO2020115487A1 (en) * 2018-12-07 2020-06-11 Oxford University Innovation Limited Method and data processing apparatus for generating real-time alerts about a patient
US20210012215A1 (en) * 2019-07-09 2021-01-14 Baidu Usa Llc Hierarchical multi-task term embedding learning for synonym prediction
WO2021021714A1 (en) * 2019-07-29 2021-02-04 The Regents Of The University Of California Method of contextual speech decoding from the brain
CN111728620A (en) * 2020-07-06 2020-10-02 刘涛 Cervical vertebra evaluation rehabilitation device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
R. F. ROJAS: "Pain Assessment based on fNIRS using Bidirectional LSTMs", arXiv *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648430A (en) * 2024-01-30 2024-03-05 南京大经中医药信息技术有限公司 Dialogue type large language model supervision training evaluation system
CN117648430B (en) * 2024-01-30 2024-04-16 南京大经中医药信息技术有限公司 Dialogue type large language model supervision training evaluation system

Also Published As

Publication number Publication date
CN113069081B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN109157231B (en) Portable multichannel depression tendency evaluation system based on emotional stimulation task
Aslan et al. Automatic Detection of Schizophrenia by Applying Deep Learning over Spectrogram Images of EEG Signals.
Xu et al. Prediction in autism by deep learning short-time spontaneous hemodynamic fluctuations
CN113052113B (en) Depression identification method and system based on compact convolutional neural network
Bairy et al. Automated diagnosis of depression electroencephalograph signals using linear prediction coding and higher order spectra features
CN111568446A (en) Portable electroencephalogram depression detection system combined with demographic attention mechanism
Xu et al. Identification of autism spectrum disorder based on short-term spontaneous hemodynamic fluctuations using deep learning in a multi-layer neural network
Supakar et al. A deep learning based model using RNN-LSTM for the Detection of Schizophrenia from EEG data
Vallabhaneni et al. Deep learning algorithms in eeg signal decoding application: a review
CN113069081B (en) Pain detection method based on improved Bi-LSTM and fNIRS
Sharma et al. SzHNN: a novel and scalable deep convolution hybrid neural network framework for schizophrenia detection using multichannel EEG
CN115363531A (en) Epilepsy detection system based on bimodal electroencephalogram signal information bottleneck
Shao et al. Obstructive sleep apnea detection scheme based on manually generated features and parallel heterogeneous deep learning model under IoMT
Wang et al. Automated rest eeg-based diagnosis of depression and schizophrenia using a deep convolutional neural network
CN114176610A (en) Workload assessment method for diagnosis of mild cognitive dysfunction patient
CN111096730B (en) Autism classification method based on fluctuation entropy of spontaneous dynamics activity
Jagadeesan et al. Behavioral features based autism spectrum disorder detection using decision trees
Arora et al. Unraveling depression using machine intelligence
CN116269212A (en) Multi-mode sleep stage prediction method based on deep learning
CN114464319B (en) AMS susceptibility assessment system based on slow feature analysis and deep neural network
Wickramaratne et al. A Ternary Bi-Directional LSTM Classification for Brain Activation Pattern Recognition Using fNIRS
CN114748072A (en) Electroencephalogram-based information analysis and rehabilitation training system and method for depression auxiliary diagnosis
CN113729708A (en) Lie evaluation method based on eye movement technology
CN114983434A (en) System and method based on multi-mode brain function signal recognition
CN111248907A (en) Risk prediction method based on electroencephalogram signal characteristics of mental disease clinical high-risk group

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant