CN113143270A - Bimodal fusion emotion recognition method based on biological radar and voice information - Google Patents

Bimodal fusion emotion recognition method based on biological radar and voice information

Info

Publication number
CN113143270A
Authority
CN
China
Prior art keywords
voice
information
physiological
signal
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011368284.1A
Other languages
Chinese (zh)
Inventor
李兴广
王鑫磊
张继淋
王笑竹
宋文军
臧景峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202011368284.1A
Publication of CN113143270A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 - Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/02 - Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/024 - Detecting, measuring or recording pulse rate or heart rate
    • A61B5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/08 - Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/48 - Other medical applications
    • A61B5/4803 - Speech analysis specially adapted for diagnostic purposes

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Cardiology (AREA)
  • Psychiatry (AREA)
  • Pulmonology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a bimodal fusion emotion recognition method based on biological radar and voice information. Physiological information and voice information are acquired through an external radar and a microphone, the physiological information comprising respiration information and heartbeat information. Physiological features and voice features are extracted from the physiological information and the voice information by a pre-trained feature extraction network, the physiological features and the voice features are fused, and the fused features are input into a pre-trained deep convolutional neural network. The deep convolutional neural network comprises a first classifier and a second classifier with different labels, both decision tree classifiers, which respectively output physiological emotion evaluation information and voice emotion evaluation information of different types. The combined user emotion evaluation information is therefore more objective and of greater reference value, which improves the accuracy of emotion recognition.

Description

Bimodal fusion emotion recognition method based on biological radar and voice information
Technical Field
The application relates to the field of emotion recognition, in particular to a bimodal fusion emotion recognition method based on a biological radar and voice information.
Background
Emotion is a complex yet relatively stable physiological evaluation and experience; it is a person's attitudinal experience of objective things together with the corresponding behavioral response. Modern people face many pressures from study, daily life, and work, and may remain in a negative emotional and sub-healthy psychological state for a long time. Emotion recognition helps people understand their own emotions and those of others so that emotions can be adjusted in time, and it is therefore of great value to mental health. This is especially true for special groups such as drivers, service personnel, and medical staff, whose emotional state may even affect public safety and social stability. Emotion analysis and recognition is thus an important interdisciplinary research topic spanning neuroscience, psychology, cognitive science, computer science, artificial intelligence, and related fields.
Emotional information is mainly expressed at two levels: external emotional information, which can be observed directly from the outside, such as facial expressions, lip movements, voice, and posture; and internal emotional information, i.e., physiological information that cannot be observed externally, such as heart rate, pulse, blood pressure, and brain waves.
However, human emotion is complex and changeable, and the accuracy of judging emotional characteristics from any single type of measurement alone is low. The present method is proposed to improve that accuracy.
Disclosure of Invention
The object of the invention is to provide a bimodal fusion emotion recognition method based on a biological radar and voice information, which makes full use of bimodal fusion to obtain richer emotional information and can judge the emotional state from the user's voice and physiological information.
The technical solution for achieving the object of the invention is as follows: a bimodal fusion emotion recognition method based on biological radar and voice information, comprising the following steps.
Step one, acquiring physiological information and voice information through an external radar and a microphone, inputting the physiological information and the voice information into a pre-trained feature extraction network, and extracting physiological features and voice features respectively.
Step two, after the physiological information and the voice information are acquired, preprocessing the physiological information and the voice information.
Step three, the preprocessing of the initial physiological information specifically comprises the following steps (an illustrative sketch follows the list):
1) performing a fast Fourier transform (FFT) on the radar ADC data to obtain a range profile;
2) determining the distance range of the target, acquiring the range bins corresponding to the target, and extracting the phase of the target range bins;
3) performing phase unwrapping to obtain the actual displacement curve.
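A minimal sketch of this range-FFT and phase-extraction step, assuming a NumPy array `adc_data` of shape (num_chirps, num_samples) from the FMCW radar; the choice of the strongest range bin as the target bin and the 77 GHz carrier wavelength are assumptions, not values given in the patent.

```python
import numpy as np

def extract_displacement(adc_data, wavelength=3.9e-3):
    """Range FFT per chirp, pick the strongest range bin, unwrap its phase.

    adc_data: array of shape (num_chirps, num_samples_per_chirp).
    wavelength: radar carrier wavelength in metres (77 GHz FMCW assumed).
    """
    # 1) Range FFT along the fast-time (sample) axis -> range profiles
    range_profiles = np.fft.fft(adc_data, axis=1)

    # 2) Pick the range bin with the strongest average reflection,
    #    assumed to correspond to the subject's chest wall.
    target_bin = np.argmax(np.mean(np.abs(range_profiles), axis=0))
    phases = np.angle(range_profiles[:, target_bin])

    # 3) Phase unwrapping; phase change maps to chest displacement
    #    via d = (lambda / (4 * pi)) * phase.
    displacement = np.unwrap(phases) * wavelength / (4 * np.pi)
    return displacement
```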
Step four, the preprocessing of the initial voice information specifically comprises the following steps (an illustrative sketch follows the list):
1) pre-emphasizing the collected voice information to flatten the spectrum of the signal;
2) framing and windowing the collected voice signal to obtain voice analysis frames;
3) performing a short-time Fourier transform on the voice analysis frames to obtain a voice spectrogram.
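A minimal sketch of this preprocessing chain, assuming a 1-D NumPy array `speech` sampled at `fs` Hz; the pre-emphasis coefficient 0.97 and the 25 ms / 10 ms frame configuration are common defaults, not values taken from the patent.

```python
import numpy as np
from scipy.signal import stft

def speech_spectrogram(speech, fs=16000, pre_emph=0.97,
                       frame_len=0.025, frame_shift=0.010):
    # 1) Pre-emphasis: first-order high-pass filter y[n] = x[n] - a * x[n-1]
    emphasized = np.append(speech[0], speech[1:] - pre_emph * speech[:-1])

    # 2) + 3) Framing, Hamming windowing, and short-time Fourier transform
    nperseg = int(frame_len * fs)
    noverlap = nperseg - int(frame_shift * fs)
    freqs, times, spec = stft(emphasized, fs=fs, window='hamming',
                              nperseg=nperseg, noverlap=noverlap)
    return freqs, times, np.abs(spec)  # magnitude spectrogram
```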
Step five, the extraction of the physiological features specifically comprises the following steps (a sketch of the band separation follows the list):
1) band-pass filtering the human physiological sign signal, removing drift components below 0.2 Hz and noise components above 2 Hz from the original sign signal, classifying the 0.2 Hz-0.9 Hz component as the respiration signal and the 0.9 Hz-2.0 Hz component as the heartbeat signal;
2) extracting time, waveform, and frequency-domain features of the respiration signal, and time, waveform, and frequency-domain features of the heartbeat signal;
3) reducing the dimensionality of the extracted respiration and heartbeat signal features to obtain the physiological features.
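A minimal sketch of the band separation in step 1), assuming the unwrapped displacement signal `disp` (as produced by the radar preprocessing sketch above) sampled at `fs` Hz; the slow-time sampling rate and the Butterworth filter order are assumptions.

```python
from scipy.signal import butter, filtfilt

def separate_vital_signs(disp, fs=20.0, order=4):
    """Split the chest-displacement signal into respiration and heartbeat bands."""
    def bandpass(signal, low, high):
        b, a = butter(order, [low, high], btype='bandpass', fs=fs)
        return filtfilt(b, a, signal)  # zero-phase filtering

    respiration = bandpass(disp, 0.2, 0.9)   # 0.2-0.9 Hz -> respiration
    heartbeat = bandpass(disp, 0.9, 2.0)     # 0.9-2.0 Hz -> heartbeat
    return respiration, heartbeat
```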
Step six, extracting the time, waveform, and frequency-domain features of the respiration signal and of the heartbeat signal comprises the following steps (an illustrative sketch follows the list):
1) performing time-frequency analysis with a non-Gaussian kernel function on the respiration signal and the heartbeat signal to obtain a time-frequency distribution matrix, and extracting the time features;
2) analyzing the time-frequency distribution matrix and, based on the physical meaning of each element of the matrix, extracting the energy center points to obtain a one-dimensional waveform feature vector containing modulation-type characteristics, and extracting the waveform features;
3) performing a Fourier transform on the one-dimensional waveform feature vector containing modulation-type characteristics to obtain frequency-domain information, and extracting the frequency-domain features.
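A heavily simplified sketch of steps 1)-3), using a plain spectrogram as a stand-in for the non-Gaussian-kernel time-frequency distribution named in the patent and interpreting the "energy center point" of each time slice as its spectral centroid; both substitutions are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

def time_freq_features(vital_signal, fs=20.0):
    # Time-frequency distribution matrix (stand-in for the kernel-based TFD)
    freqs, times, tfd = spectrogram(vital_signal, fs=fs, nperseg=128)

    # Per-frame energy centroid -> one-dimensional waveform feature vector
    energy = tfd.sum(axis=0) + 1e-12
    centroid_track = (freqs[:, None] * tfd).sum(axis=0) / energy

    # Frequency-domain features: magnitude spectrum of the centroid track
    freq_features = np.abs(np.fft.rfft(centroid_track - centroid_track.mean()))

    time_features = energy            # frame-wise energy as simple time features
    waveform_features = centroid_track
    return time_features, waveform_features, freq_features
```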
Step seven, the extraction of the voice features specifically comprises the following steps (a sketch of the endpoint detection follows the list):
1) pre-emphasizing the digital voice signal with a digital filter, and framing the pre-emphasized voice data with short-time analysis to obtain a time sequence of voice feature parameters;
2) windowing the time sequence of voice feature parameters with a Hamming window function to obtain windowed voice data, and performing endpoint detection on the windowed voice data with a double-threshold comparison method to obtain the preprocessed voice data;
3) performing a short-time Fourier transform on the preprocessed voice data to obtain a voice spectrogram;
4) extracting voice features from the voice spectrogram through the feature extraction network.
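A minimal sketch of the double-threshold endpoint detection in step 2), assuming the framed, Hamming-windowed speech is available as a 2-D array `frames`; using short-time energy and zero-crossing rate with two fixed thresholds is the usual form of this method, and the specific threshold values below are assumptions.

```python
import numpy as np

def detect_endpoints(frames, energy_hi=0.3, energy_lo=0.05, zcr_thresh=0.15):
    """Return (start, end) frame indices of the voiced segment.

    frames: array of shape (num_frames, frame_len), already windowed.
    Thresholds are fractions of the peak short-time energy / of frame length.
    """
    energy = (frames ** 2).sum(axis=1)
    energy = energy / (energy.max() + 1e-12)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

    # Coarse segment from the high energy threshold, then extend outwards
    # while either the low energy threshold or the ZCR threshold is exceeded.
    voiced = np.where(energy > energy_hi)[0]
    if voiced.size == 0:
        return 0, frames.shape[0]
    start, end = voiced[0], voiced[-1]
    while start > 0 and (energy[start - 1] > energy_lo or zcr[start - 1] > zcr_thresh):
        start -= 1
    while end < len(energy) - 1 and (energy[end + 1] > energy_lo or zcr[end + 1] > zcr_thresh):
        end += 1
    return start, end
```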
Step eight, performing feature fusion on the extracted physiological features and voice features to obtain the fused features.
Step nine, the feature fusion of the extracted physiological features and voice features uses at least one of the following modes (an illustrative sketch follows the list):
weighted fusion;
product fusion;
maximum-value fusion;
concatenation (merging) fusion.
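A minimal sketch of the four fusion modes, assuming the physiological and voice feature vectors have already been projected to the same length for the element-wise variants; the equal-length assumption and the example weights are not from the patent.

```python
import numpy as np

def fuse_features(phys, voice, mode='concat', w_phys=0.5, w_voice=0.5):
    phys, voice = np.asarray(phys, float), np.asarray(voice, float)
    if mode == 'weighted':   # weighted fusion
        return w_phys * phys + w_voice * voice
    if mode == 'product':    # element-wise product fusion
        return phys * voice
    if mode == 'max':        # element-wise maximum fusion
        return np.maximum(phys, voice)
    if mode == 'concat':     # concatenation (merging) fusion
        return np.concatenate([phys, voice])
    raise ValueError(f'unknown fusion mode: {mode}')
```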
Step ten, inputting the fused features into a pre-trained deep convolutional neural network, wherein the deep convolutional neural network comprises a first classifier and a second classifier with different labels, and acquiring the physiological emotion evaluation information output by the first classifier and the voice emotion evaluation information output by the second classifier, which are combined into the emotion feature information.
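As an illustrative aid only, the following is a minimal sketch of a deep network with two classification heads operating on the fused feature vector. The 1-D convolutional backbone, layer sizes, and linear output heads are assumptions; the patent does not specify the architecture and, in the abstract, describes the two classifiers as decision tree classifiers.

```python
import torch
import torch.nn as nn

class TwoHeadEmotionNet(nn.Module):
    """Shared convolutional backbone with separate physiological and voice heads."""
    def __init__(self, n_phys_classes, n_voice_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),        # -> 32 * 8 features
        )
        self.phys_head = nn.Linear(32 * 8, n_phys_classes)    # first classifier
        self.voice_head = nn.Linear(32 * 8, n_voice_classes)  # second classifier

    def forward(self, fused):                  # fused: (batch, feature_length)
        h = self.backbone(fused.unsqueeze(1))  # add a channel dimension
        return self.phys_head(h), self.voice_head(h)
```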
Step eleven, the training of the pre-trained deep convolutional neural network comprises the following steps (an illustrative sketch follows the list):
1) constructing a voice information base and a physiological information base, and acquiring voice sample data and physiological sample data from the voice information base and the physiological information base;
2) framing and windowing the voice sample data to obtain voice analysis frames, performing a short-time Fourier transform on the voice analysis frames to obtain a voice spectrogram, and extracting the voice feature data; extracting the respiration signal and the heartbeat signal from the physiological sample data and performing dimension reduction to obtain the physiological feature data;
3) training a temporal recurrent neural network with the voice feature data and the physiological feature data.
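A minimal sketch of the training loop in step 3), assuming a PyTorch data loader that yields already fused feature vectors together with a physiological label and a voice label per sample, and reusing the two-head network sketched above; the optimizer, learning rate, and equal loss weighting are assumptions. The temporal recurrent neural network named in the patent could be substituted for the convolutional backbone, for example as an LSTM.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=20, lr=1e-3, device='cpu'):
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for fused, phys_label, voice_label in loader:
            fused = fused.to(device)
            phys_logits, voice_logits = model(fused)
            # Sum of the two head losses; equal weighting is an assumption.
            loss = ce(phys_logits, phys_label.to(device)) + \
                   ce(voice_logits, voice_label.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
```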
Advantageous effects: compared with the prior art, the technical solution of the invention has the following beneficial technical effects.
The invention realizes bimodal fusion of physiological information and voice information, which is an innovation in emotion recognition: recognition based on both human physiological information and voice information is more convincing than a result obtained from a single source of information.
In addition, a non-contact detection method is provided; compared with methods that acquire physiological information through worn electrode pads, it greatly improves the freedom of measurement and offers great convenience to the user.
In addition, feature fusion is chosen as the bimodal fusion mode, effectively combining the complementary information of the different modalities and their mutual influence, so that the resulting joint features display the user's emotional state more comprehensively.
In addition, the method offers a template for other bimodal or multimodal fusion schemes, and its functions can be improved continuously, enabling traditional single-modality emotion recognition techniques to be upgraded.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a bimodal fusion emotion recognition method based on biological radar and voice information provided by the invention;
FIG. 2 is a schematic view of a physiological information processing flow provided by the present invention;
FIG. 3 is a schematic diagram of the voice information processing flow provided by the invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The invention is further illustrated by the following examples and figures of the specification.
Example 1
A bimodal fusion emotion recognition method based on biological radar and voice information, as shown in FIG. 1, comprises the following steps:
1) collecting the voice signal and the physiological signal: acquiring natural voice and human-body signals in a non-contact manner with a microphone and a radar;
the radar is a frequency-modulated continuous-wave (FMCW) radar that transmits a sawtooth waveform using linear frequency modulation;
the microphone is a digital MEMS microphone that outputs a 1/2-cycle pulse-density-modulated digital signal;
2) signal preprocessing: preprocessing the signals of the physiological and voice modalities, including the heartbeat, respiration, and voice signals, so that they meet the input requirements of the corresponding models of the different modalities;
3) emotion feature extraction: performing feature extraction on the heartbeat, respiration, and voice signals preprocessed in step 2) to obtain the corresponding feature vectors;
4) emotion feature fusion: performing feature fusion on the heartbeat, respiration, and voice feature vectors extracted in step 3), using one of weighted fusion, product fusion, maximum-value fusion, and concatenation fusion;
5) emotion judgment: inputting the fused features from step 4) into a pre-trained deep convolutional neural network comprising a first classifier and a second classifier with different labels, acquiring the physiological emotion evaluation information output by the first classifier and the voice emotion evaluation information output by the second classifier, combining them into the user emotion evaluation information, and judging the emotion from the user emotion evaluation information.
Example 2
The physiological information processing flow, as shown in FIG. 2, comprises the following steps (a sketch of the dimension reduction in step 3) follows the list):
1) acquiring the human physiological sign signal with a frequency-modulated continuous-wave radar, band-pass filtering the original signal, removing drift components below 0.2 Hz and noise components above 2 Hz, classifying the 0.2 Hz-0.9 Hz component as the respiration signal and the 0.9 Hz-2.0 Hz component as the heartbeat signal;
2) extracting time, waveform, and frequency-domain features of the respiration signal and of the heartbeat signal;
3) performing PCA dimension reduction on the extracted respiration and heartbeat features down to two-dimensional data to obtain the physiological features;
4) inputting the physiological feature data to be recognized into the first classifier to obtain the physiological emotion evaluation information.
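A minimal sketch of the PCA dimension reduction in step 3), assuming the respiration and heartbeat features have been stacked into matrices with one row per observation window; the use of scikit-learn is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_physiological_features(resp_features, heart_features):
    """Concatenate respiration and heartbeat features and project to 2 dimensions."""
    X = np.hstack([resp_features, heart_features])  # shape: (n_windows, n_features)
    pca = PCA(n_components=2)
    return pca.fit_transform(X)                     # shape: (n_windows, 2)
```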
Example 3
The voice information processing flow, as shown in FIG. 3, comprises the following steps:
1) acquiring the human voice signal with a digital MEMS (micro-electro-mechanical systems) microphone, pre-emphasizing it with a digital filter, and outputting the pre-emphasized voice data;
2) framing the pre-emphasized voice data with short-time analysis to obtain a time sequence of voice feature parameters;
3) windowing the time sequence of voice feature parameters with a Hamming window function to obtain windowed voice data;
4) performing endpoint detection on the windowed voice data with a double-threshold comparison method to obtain the preprocessed voice data;
5) performing a short-time Fourier transform on the preprocessed voice data to obtain a voice spectrogram;
6) extracting voice feature data to obtain the speech emotion features;
7) inputting the obtained speech emotion features into the second classifier to obtain the voice emotion evaluation information.

Claims (7)

1. A bimodal fusion emotion recognition method based on biological radar and voice information, characterized by comprising the following steps:
Step 1: acquiring the physiological information and voice information of the user whose emotion is to be recognized through an external radar and a microphone, inputting the physiological information and the voice information into a pre-trained feature extraction network, and extracting physiological features and voice features respectively;
Step 2: performing feature fusion on the extracted physiological features and voice features;
Step 3: inputting the fused features into a pre-trained deep convolutional neural network, wherein the deep convolutional neural network comprises a first classifier and a second classifier with different labels;
Step 4: acquiring the physiological emotion evaluation information output by the first classifier and the voice emotion evaluation information output by the second classifier, wherein the labels of the physiological emotion evaluation information and the voice emotion evaluation information are different from each other, and combining the physiological emotion evaluation information and the voice emotion evaluation information into the user emotion evaluation information.
2. The bimodal fusion emotion recognition method based on biological radar and voice information as claimed in claim 1, wherein the acquired physiological information comprises a heartbeat signal and a respiration signal.
3. The bimodal fusion emotion recognition method based on biological radar and voice information as claimed in claim 1, wherein acquiring the physiological information and extracting the physiological features comprises the following steps:
Step 1: under the condition that the trunk of the user whose emotion is to be recognized is in a micro-motion state, acquiring the human physiological sign signal with a frequency-modulated continuous-wave radar, band-pass filtering the original physiological sign signal, filtering out the drift and noise components of the original sign signal, and classifying the filtered signal into a respiration signal and a heartbeat signal;
Step 2: extracting the time, waveform, and frequency-domain features of the respiration signal and of the heartbeat signal, performing PCA dimension reduction on the extracted respiration and heartbeat features down to two-dimensional data to obtain the physiological features, and inputting the physiological features into the first classifier to obtain the physiological emotion evaluation information.
4. The bimodal fusion emotion recognition method based on biological radar and voice information as claimed in claim 3, wherein extracting the time, waveform, and frequency-domain features of the respiration signal and the heartbeat signal comprises the following steps:
Step 1: performing time-frequency analysis with a non-Gaussian kernel function on the respiration signal and the heartbeat signal of the user whose emotion is to be recognized to obtain a time-frequency distribution matrix, and extracting the time features;
Step 2: analyzing the time-frequency distribution matrix and, based on the physical meaning of each element of the matrix, extracting the energy center points to obtain a one-dimensional waveform feature vector containing modulation-type characteristics, and extracting the waveform features;
Step 3: performing a Fourier transform on the one-dimensional waveform feature vector containing modulation-type characteristics to obtain frequency-domain information, and extracting the frequency-domain features.
5. The bimodal fusion emotion recognition method based on biological radar and voice information as claimed in claim 1, wherein acquiring the voice information and extracting the voice features comprises the following steps:
Step 1: acquiring the original human voice signal with a microphone, the original voice signal being a continuous analog signal; the voice is sampled so that it becomes data discrete on the time axis, the analog signal being converted into a discrete-time signal after sampling;
Step 2: quantizing the discrete-time voice signal by level, dividing the sampled amplitude range into several intervals, assigning the sample values that fall within a given interval to one class with a corresponding quantization value, and thereby obtaining a digital voice signal;
Step 3: encoding, preprocessing, and performing feature selection on the digital voice signal to obtain voice feature data, and inputting the voice feature data into the second classifier to obtain the voice emotion evaluation information.
6. The bimodal fusion emotion recognition method based on biological radar and voice information as claimed in claim 5, wherein encoding, preprocessing, and performing feature selection on the digital voice signal to obtain voice feature data comprises the following steps:
Step 1: pre-emphasizing the digital voice signal with a digital filter, and framing the pre-emphasized voice data with short-time analysis to obtain a time sequence of voice feature parameters;
Step 2: windowing the time sequence of voice feature parameters with a Hamming window function, and then performing endpoint detection on the windowed voice data with a double-threshold comparison method to obtain the preprocessed voice data;
Step 3: performing a short-time Fourier transform on the preprocessed voice data to obtain an STFT voice spectrogram, and extracting the voice feature data.
7. The bimodal fusion emotion recognition method based on biological radar and voice information as claimed in claim 1, wherein the training of the pre-trained deep convolutional neural network comprises the following steps:
Step 1: constructing a voice information base and a physiological information base, and acquiring voice sample data and physiological sample data from the voice information base and the physiological information base;
Step 2: framing and windowing the voice sample data to obtain voice analysis frames, performing a short-time Fourier transform on the voice analysis frames to obtain a voice spectrogram, and extracting the voice feature data; extracting the respiration signal and the heartbeat signal from the physiological sample data and performing dimension reduction to obtain the physiological feature data;
Step 3: training a temporal recurrent neural network with the voice feature data and the physiological feature data.
CN202011368284.1A 2020-12-02 2020-12-02 Bimodal fusion emotion recognition method based on biological radar and voice information Pending CN113143270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011368284.1A CN113143270A (en) 2020-12-02 2020-12-02 Bimodal fusion emotion recognition method based on biological radar and voice information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011368284.1A CN113143270A (en) 2020-12-02 2020-12-02 Bimodal fusion emotion recognition method based on biological radar and voice information

Publications (1)

Publication Number Publication Date
CN113143270A true CN113143270A (en) 2021-07-23

Family

ID=76882425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011368284.1A Pending CN113143270A (en) 2020-12-02 2020-12-02 Bimodal fusion emotion recognition method based on biological radar and voice information

Country Status (1)

Country Link
CN (1) CN113143270A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN110200640A (en) * 2019-05-14 2019-09-06 南京理工大学 Contactless Emotion identification method based on dual-modality sensor
CN111563422A (en) * 2020-04-17 2020-08-21 五邑大学 Service evaluation obtaining method and device based on bimodal emotion recognition network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG ZHENGYOU (ed.): "Digital Media Design and Production, 2nd Edition" (《数字传媒设计与制作 第二版》), 31 May 2016 *
CHEN DIAO et al.: "Feature extraction for time-frequency analysis of radar signals" (雷达信号时频分析的特征提取), Journal of Computer Applications (《计算机应用》) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113707185A (en) * 2021-09-17 2021-11-26 卓尔智联(武汉)研究院有限公司 Emotion recognition method and device and electronic equipment
CN113892931A (en) * 2021-10-14 2022-01-07 重庆大学 Method for extracting and analyzing intra-abdominal pressure by FMCW radar based on deep learning
CN113892931B (en) * 2021-10-14 2023-08-22 重庆大学 Method for extracting and analyzing intra-abdominal pressure by FMCW radar based on deep learning

Similar Documents

Publication Publication Date Title
CN109157231B (en) Portable multichannel depression tendency evaluation system based on emotional stimulation task
Zhao et al. Noise rejection for wearable ECGs using modified frequency slice wavelet transform and convolutional neural networks
Krishna et al. An efficient mixture model approach in brain-machine interface systems for extracting the psychological status of mentally impaired persons using EEG signals
CN111461176B (en) Multi-mode fusion method, device, medium and equipment based on normalized mutual information
Li et al. Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors
Patil et al. The physiological microphone (PMIC): A competitive alternative for speaker assessment in stress detection and speaker verification
CN107736894A (en) A kind of electrocardiosignal Emotion identification method based on deep learning
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
Chen et al. Emotion recognition with audio, video, EEG, and EMG: a dataset and baseline approaches
CN113729707A (en) FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG
CN113197579A (en) Intelligent psychological assessment method and system based on multi-mode information fusion
CN113143270A (en) Bimodal fusion emotion recognition method based on biological radar and voice information
Chauhan et al. Effective stress detection using physiological parameters
CN104367306A (en) Physiological and psychological career evaluation system and implementation method
Kang et al. 1D convolutional autoencoder-based PPG and GSR signals for real-time emotion classification
Wang et al. Speech neuromuscular decoding based on spectrogram images using conformal predictors with Bi-LSTM
Sonawani et al. Biomedical signal processing for health monitoring applications: a review
Byeon et al. Ensemble deep learning models for ECG-based biometrics
Shahid et al. Emotion recognition system featuring a fusion of electrocardiogram and photoplethysmogram features
CN105796091B (en) A kind of intelligent terminal for removing electrocardiosignal vehicle movement noise
Ratnovsky et al. EMG-based speech recognition using dimensionality reduction methods
Jiang et al. Continuous blood pressure estimation based on multi-scale feature extraction by the neural network with multi-task learning
Yosi et al. Emotion recognition using electroencephalogram signal
Lazurenko et al. Motor imagery-based brain-computer interface: neural network approach
Skopin et al. Heartbeat feature extraction from vowel speech signal using 2D spectrum representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210723