WO2022199215A1 - A speech emotion recognition method and system fusing crowd information - Google Patents

A speech emotion recognition method and system fusing crowd information (Download PDF)

Info

Publication number
WO2022199215A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
crowd
mel
information
input
Prior art date
Application number
PCT/CN2022/070728
Other languages
English (en)
French (fr)
Inventor
李太豪
郑书凯
刘昱龙
裴冠雄
马诗洁
Original Assignee
之江实验室
Priority date
Filing date
Publication date
Application filed by 之江实验室 filed Critical 之江实验室
Priority to US17/845,908 priority Critical patent/US11837252B2/en
Publication of WO2022199215A1 publication Critical patent/WO2022199215A1/zh

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the type of extracted parameters
    • G10L 25/18 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the type of extracted parameters
    • G10L 25/21 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the type of extracted parameters the extracted parameters being power information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the analysis technique
    • G10L 25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the type of extracted parameters
    • G10L 25/24 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum

Definitions

  • the invention belongs to the field of artificial intelligence, and in particular relates to a speech emotion recognition method and system integrating crowd information.
  • the current mainstream speech emotion recognition methods are based on traditional algorithms or deep learning methods based on simple neural network architectures.
  • the basic process based on the traditional method is: feature extraction of speech, and emotion classification of speech through features.
  • the speech features usually include Mel frequency cepstral coefficients, Mel spectrum, zero-crossing rate, fundamental frequency, etc.
  • the basic process of the method based on deep learning is the same as that of the traditional method, except that the traditional method classifier uses traditional algorithms such as SVM, and the deep learning method uses a neural network classifier.
  • the features used by the current deep learning methods are Mel frequency cepstral coefficients and Mel spectrum, and the network is usually just a few layers of RNN or a few layers of CNN as a classifier.
  • the present invention provides a method and system for effectively improving the accuracy of speech emotion recognition based on SENet fusion of crowd information and Mel spectrum feature information, and its specific technical solutions are as follows:
  • a speech emotion recognition method integrating crowd information comprising the following steps:
  • S5 Input the X input obtained in S3 into the Mel spectrum preprocessing network to obtain the Mel spectrum depth feature information H m ;
  • S6 fuse the crowd depth feature information H p extracted by S4 and the mel spectrum depth feature information H m extracted by S5 through the channel attention network SENet to obtain the fusion feature H f ;
  • the crowd classification network is composed of a three-layer LSTM network structure, and the step S4 specifically includes the following steps:
  • S4_1 First, the input Mel spectrogram signal X_input of length T is divided, with overlap, into three Mel spectrum segments of equal length T/2: frames 0 to T/2 form the first segment, frames T/4 to 3T/4 form the second segment, and frames T/2 to T form the third segment;
  • S4_2 Input the three Mel spectrum segments segmented by S4_1 into the three-layer LSTM network in turn, and take the last output of the LSTM network output as the final state, the three Mel spectrum segments finally obtain 3 hidden features , and finally the three latent features are averaged to obtain the final crowd depth feature information H p .
  • the mel-spectrogram preprocessing network of step S5 is composed of a ResNet network cascaded with an FMS network, and step S5 specifically includes the following steps: first, the mel-spectrogram signal X_input of length T is expanded into a three-dimensional matrix; then the ResNet network structure, which uses two convolution layers plus max pooling, extracts the emotion-related information in the Mel spectrum; the FMS network structure then effectively combines the information extracted by the ResNet network, finally yielding the Mel spectrum depth feature information H_m.
  • step S6 specifically includes the following steps:
  • the crowd depth feature information H_p is a one-dimensional vector in the space R^C, where C represents the channel dimension; the mel spectrum depth feature information H_m is a three-dimensional matrix in the space R^(T×W×C), where T represents the time dimension, W represents the width dimension, and C represents the channel dimension;
  • through the SENet network, global average pooling of H_m over the time dimension T and the width dimension W converts it into a C-dimensional vector, giving the one-dimensional vector H_p_avg in the space R^C; specifically, writing H_m = [H_1, H_2, H_3, ..., H_C] with each H_i a T×W matrix, the global average pooling formula is h_i = (1/(T·W)) · Σ_{t=1}^{T} Σ_{w=1}^{W} H_i(t, w), so that H_p_avg = [h_1, h_2, ..., h_C];
  • S6_2 Concatenate the H_p_avg obtained in S6_1 with the crowd depth feature information H_p to obtain the concatenated feature H_c = [H_p_avg, H_p];
  • S6_3 Input the concatenated feature H_c obtained in S6_2 into a two-layer fully connected network to obtain the channel weight vector W_c, where the fully connected network computes Y = W*X + b, in which Y represents the output of the network, X represents the input of the network, W represents the weight parameter of the network, and b represents the bias parameter of the network;
  • S6_4 Multiply the weight parameter obtained in S6_3 by the deep Mel spectrum feature information H_m obtained in S5 to obtain an emotion feature matrix, and perform global average pooling on the emotion feature matrix over the dimensions T×W to obtain the fusion feature H_f.
  • step S7 specifically includes the following steps:
  • S7_1 Input the H f obtained in S6 into a two-layer fully connected network after passing through the pooling layer to obtain a 7-dimensional feature vector H b , where 7 represents the number of all emotion categories;
  • S7_2 Take the feature vector H_b = [h_1, h_2, ..., h_7] obtained in S7_1 as the argument of the Softmax operator, compute the Softmax value as the probability that the input audio belongs to each emotion category, and finally take the category with the largest probability value as the final audio emotion category;
  • the calculation formula of Softmax is P_i = e^(h_i) / Σ_{j=1}^{7} e^(h_j), where e is the natural constant.
  • a speech emotion recognition system integrating crowd information including:
  • the voice signal acquisition module is used to collect user voice signals
  • the voice signal preprocessing module is used to preprocess the collected voice signal, perform endpoint detection on the voice, remove the silent segments before and after the voice, and generate data that can be used for neural network processing;
  • the emotion prediction module is used to process the Mel spectrum features through the designed network model to predict the emotion type of the user's audio
  • the data storage module is used to use the MySQL database to store the user's voice data and emotional label data.
  • the voice signal acquisition module adopts a high-fidelity single microphone or a microphone array.
  • the preprocessing includes pre-emphasis, framing, windowing, short-time Fourier transform, triangular filtering, and silence-removal operations, which convert the speech signal from a time-domain signal into a frequency-domain signal, that is, from audio samples into mel spectrum features; the speech is denoised by spectral subtraction, pre-emphasized by the Z-transform method, and the mel spectrum features are extracted by the short-time Fourier transform method.
  • the speech emotion recognition method of the present invention recognizes speech emotion by fusing crowd information; because different crowds differ in physiological development, the morphological structure of the vocal cords differs, which affects pronunciation: for example, children's voices are crisp and sharp, elderly voices are muddy and deep, and an adult man's voice is usually deeper than an adult woman's; therefore, fusing crowd information allows the emotional information contained in speech to be extracted more effectively;
  • the speech emotion recognition method of the present invention takes the last LSTM output and uses global pooling, so the limitation of speech length can be ignored and emotion recognition can be performed on speech of different lengths;
  • the speech emotion recognition method of the present invention uses SENet for information fusion; through the channel attention mechanism of SENet, important information in the network can be extracted effectively and the overall accuracy of the model improved;
  • the speech emotion recognition system of the present invention stores the emotion analysis results and the original dialogue speech, which helps to make reasonable analyses and suggestions, for example in telephone customer service quality evaluation scenarios, user satisfaction analysis scenarios for intelligent voice dialogue robots, voice message sentiment analysis scenarios, and in-video speech emotion category analysis scenarios.
  • Fig. 1 is the structural representation of the speech emotion recognition system of the present invention
  • FIG. 2 is a schematic flowchart of a speech emotion recognition method of the present invention
  • Fig. 3 is the network structure schematic diagram of the speech emotion recognition method of the present invention.
  • Figure 4 Schematic diagram of the network structure of the fusion of ResNet and FMS.
  • a speech emotion recognition system integrating crowd information includes:
  • the voice signal acquisition module is used to collect the user's voice signal.
  • a high-fidelity single microphone or a microphone array is used to reduce the distortion of the voice signal acquisition;
  • the voice signal preprocessing module is used to preprocess the collected voice signal, perform endpoint detection on the voice, remove the silent segments before and after the voice, and generate data that can be used for neural network processing; specifically, this module performs pre-emphasis, framing, windowing, short-time Fourier transform, triangular filtering, silence removal and other operations to convert the speech signal from a time-domain signal into a frequency-domain signal, that is, from audio samples into Mel spectrum features for subsequent processing, where the speech is denoised by spectral subtraction, pre-emphasized by the Z-transform method, and the Mel spectrum is extracted by the short-time Fourier transform method;
  • the emotion prediction module is used to process the Mel spectrum features through the designed network model to predict the emotion type of the user's audio
  • the data storage module is used to store the user's voice data and emotional label data by using databases such as MySQL.
  • a method for using a speech emotion recognition system fused with crowd information includes the following steps:
  • S1 The user audio signal is collected through the recording collection device, which is represented as X audio .
  • S2 Perform pre-emphasis, short-time Fourier transform and other preprocessing on the collected audio signal X_audio to generate a mel spectrogram signal, denoted X_mel; the mel spectrum is a matrix with dimension T′ × 128.
  • the energy of the Mel-spectrograms of different frequency dimensions of each frame is accumulated, and the mute frames are removed by setting a threshold to remove frames with energy lower than the threshold.
  • the crowd classification network is composed of a three-layer LSTM network structure.
  • the LSTM network is a recurrent neural network structure that can effectively solve the problem of long sequence dependencies.
  • multi-layer LSTMs are often used to solve sequence-related problems such as speech. Specifically, it includes the following steps:
  • S4_1 First, the input Mel spectrum of length T is divided, with overlap, into three Mel spectrum segments of equal length T/2: frames 0 to T/2 form the first segment, frames T/4 to 3T/4 form the second segment, and frames T/2 to T form the third segment;
  • S4_2 Input the three Mel spectrum segments obtained in S4_1 into the three-layer LSTM network in turn, and take the last output of the LSTM network as the final state.
  • the three Mel spectrum segments thus yield three hidden features of dimension 256, which are finally averaged to give the final crowd depth feature information H_p.
  • the three-layer LSTM can effectively extract information from a long time series such as the Mel spectrum; taking the last LSTM state and averaging effectively removes information in the Mel spectrum that is unrelated to crowd information, such as text content, and improves the accuracy of crowd information extraction.
  • S5 Input the X input obtained in S3 into the mel-spectrum preprocessing network to obtain the mel-spectrum depth feature information H m .
  • the Mel spectrum preprocessing network structure is composed of a ResNet network cascaded with an FMS network.
  • the specific network structure is shown in Figure 4.
  • the Mel spectrum preprocessing network processing steps are: first, expand the Mel spectrum of dimension T × 128 into a T × 128 × 1 three-dimensional matrix, and then process the depth information of the Mel spectrum features through the ResNet and FMS network structures to generate deep Mel spectrum features of dimension T × 128 × 256;
  • the ResNet network structure uses two convolution layers plus max pooling to extract the emotion-related information in the Mel spectrum, and the FMS network architecture then effectively combines the information extracted by the ResNet network to obtain more reasonable emotion-related features.
  • the ResNet network can expand the network depth and improve the network learning ability while solving the gradient disappearance problem in deep learning; the FMS network can effectively extract information from the network, which helps the ResNet network to efficiently extract useful information from the network.
  • S6 fuse the crowd depth feature information H p extracted by S4 and the mel spectrum depth feature information H m extracted by S5 through the channel attention network SENet, as shown in Figure 3, to obtain the fusion feature H f , and the specific steps include:
  • the crowd depth feature information H_p obtained in step S4 is a one-dimensional vector in the space R^C, where C represents the channel dimension;
  • the mel spectrum depth feature information H_m obtained in step S5 is a three-dimensional matrix in the space R^(T×W×C), where T represents the time dimension, W represents the width dimension, and C represents the channel dimension;
  • through the channel attention network SENet, global average pooling of H_m over the time dimension and the width dimension converts it into a C-dimensional vector, giving the one-dimensional vector H_p_avg in the space R^C; specifically, writing H_m = [H_1, H_2, H_3, ..., H_C] with each H_i a T×W matrix, the global average pooling formula is h_i = (1/(T·W)) · Σ_{t=1}^{T} Σ_{w=1}^{W} H_i(t, w), so that H_p_avg = [h_1, h_2, ..., h_C].
  • S6_2 Concatenate the H_p_avg obtained in S6_1 with the crowd depth feature information H_p to obtain the concatenated feature H_c = [H_p_avg, H_p];
  • S6_3 Input the concatenated feature H_c obtained in S6_2 into a two-layer fully connected network to obtain a channel weight vector W_c.
  • the calculation formula of the fully connected network is Y = W*X + b, where:
  • Y represents the output of the network
  • X represents the input of the network
  • W represents the weight parameter of the network
  • b represents the bias parameter of the network
  • S6_4 multiply the weight parameter obtained by S6_3 by the depth mel spectrum feature information H m obtained by S5 to obtain the fusion feature H f ;
  • the SENet automatically calculates the weight coefficients of each channel through the network, which can effectively enhance the important information extracted from the network while reducing the weight of useless information.
  • SENet adding crowd information can focus on extracting information related to the pronunciation characteristics of the crowd according to different crowds, further improving the accuracy of emotion recognition.
  • S7 After the feature H_f fused in S6 is passed through a pooling layer, it is input into the classification network for emotion recognition; that is, the T × 128 × 256 three-dimensional matrix is converted into a 256-dimensional one-dimensional vector and input to the classification network for emotion recognition.
  • the classification network is composed of one 256-dimensional fully connected layer and one 7-dimensional fully connected layer.
  • the output 7-dimensional features are passed through the Softmax operator to compute the probabilities of the seven emotion classes, and the class with the largest probability is the final emotion category. Specifically, it includes the following steps:
  • S7_1 Input the H f obtained in S6 into a two-layer fully connected network after passing through the pooling layer to obtain a 7-dimensional feature vector H b , where 7 represents the number of all emotion categories;
  • S7_2 Take the feature vector H_b = [h_1, h_2, ..., h_7] obtained in S7_1 as the argument of the Softmax operator, compute the Softmax value as the probability that the input audio belongs to each emotion category, and finally take the category with the largest probability value as the final audio emotion category.
  • the calculation formula of Softmax is P_i = e^(h_i) / Σ_{j=1}^{7} e^(h_j), where e is the natural constant.
  • the method provided by this embodiment improves the accuracy of audio emotion feature extraction by fusing crowd information and can enhance the emotion recognition capability of the entire model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A speech emotion recognition method and system fusing crowd information, belonging to the field of artificial intelligence. The method comprises the following steps: collecting a user speech signal (S1); preprocessing the speech signal to obtain a Mel spectrum (S2); removing the silent segments before and after the Mel spectrum (S3); obtaining deep crowd information through a crowd classification network (S4); obtaining Mel spectrum depth information through a Mel spectrum preprocessing network (S5); fusing the features through SENet to obtain fused information (S6); and obtaining the emotion recognition result through a classification network (S7). Fusing crowd information features makes emotion feature extraction more accurate, and performing information fusion through the channel attention mechanism of SENet enables effective extraction of deep features and improves the overall recognition accuracy.

Description

[Corrected under Rule 26, 03.03.2022] A speech emotion recognition method and system fusing crowd information
Technical Field
The present invention belongs to the field of artificial intelligence, and in particular relates to a speech emotion recognition method and system fusing crowd information.
Background Art
Spoken interaction is one of the earliest forms of human communication, so speech has become a principal way for humans to express emotion. With the rise of human-computer interaction, intelligent speech emotion analysis has become increasingly important. The main emotion taxonomy currently used is the seven emotions proposed by Ekman in the last century: neutral, happy, sad, angry, afraid, disgusted, and surprised.
The current mainstream speech emotion recognition methods are traditional algorithms or deep learning methods based on simple neural network architectures. The basic flow of the traditional methods is to extract features from the speech and then classify the emotion of the speech from those features; the speech features usually include Mel-frequency cepstral coefficients, the Mel spectrum, the zero-crossing rate, the fundamental frequency, and so on. The basic flow of the deep learning methods is the same as that of the traditional methods, except that the traditional methods use classifiers such as SVM while the deep learning methods use neural network classifiers. The features used by current deep learning methods are Mel-frequency cepstral coefficients and the Mel spectrum, and the network is usually just a few layers of RNN or CNN used as a classifier.
In the current technology, because only shallow speech information is considered and simple network structures are used, the emotion recognition accuracy is relatively low and the generalization ability is poor.
Summary of the Invention
In order to solve the above technical problems in the prior art, the present invention provides a method and system that fuse crowd information and Mel spectrum feature information through SENet to effectively improve the accuracy of speech emotion recognition. The specific technical solutions are as follows:
A speech emotion recognition method fusing crowd information, comprising the following steps:
S1: Collect the user audio signal through a recording device, denoted X_audio;
S2: Preprocess the collected audio signal X_audio to generate a Mel spectrogram signal, denoted X_mel;
S3: For the generated Mel spectrogram signal X_mel, compute the energy of the Mel spectrogram in different time frames, and remove the leading and trailing silent segments by setting a threshold, obtaining a Mel spectrogram signal of length T, denoted X_input;
S4: Input the X_input obtained in S3 into the crowd classification network to obtain the crowd depth feature information H_p;
S5: Input the X_input obtained in S3 into the Mel spectrum preprocessing network to obtain the Mel spectrum depth feature information H_m;
S6: Fuse the crowd depth feature information H_p extracted in S4 and the Mel spectrum depth feature information H_m extracted in S5 through the channel attention network SENet to obtain the fusion feature H_f;
S7: Pass the fused feature H_f from S6 through a pooling layer and input it into the classification network for emotion recognition.
Further, the crowd classification network is composed of a three-layer LSTM network structure, and step S4 specifically includes the following steps:
S4_1: First, divide the input Mel spectrogram signal X_input of length T, with overlap, into three Mel spectrum segments of equal length T/2; the segmentation is: frames 0 to T/2 form the first segment, frames T/4 to 3T/4 form the second segment, and frames T/2 to T form the third segment;
S4_2: Input the three Mel spectrum segments obtained in S4_1 into the three-layer LSTM network in turn, and take the last output of the LSTM network as the final state; the three Mel spectrum segments thus yield three hidden features, which are finally averaged to obtain the final crowd depth feature information H_p.
Further, the Mel spectrum preprocessing network of step S5 is composed of a ResNet network cascaded with an FMS network, and step S5 specifically includes the following steps: first, expand the Mel spectrogram signal X_input of length T into a three-dimensional matrix; then use the ResNet network structure, which adopts two convolution layers plus max pooling, to extract the emotion-related information in the Mel spectrogram; then use the FMS network architecture to effectively combine the information extracted by the ResNet network, finally obtaining the Mel spectrum depth feature information H_m.
Further, step S6 specifically includes the following steps:
S6_1: The crowd depth feature information H_p is a one-dimensional vector in the space R^C, where C represents the channel dimension; the Mel spectrum depth feature information H_m is a three-dimensional matrix in the space R^(T×W×C), where T represents the time dimension, W represents the width dimension, and C represents the channel dimension; through the SENet network, perform global average pooling of H_m over the time dimension T and the width dimension W, converting it into a C-dimensional vector and obtaining the one-dimensional vector H_p_avg in the space R^C. Specifically,
H_m = [H_1, H_2, H_3, ..., H_C]
where each H_i (i = 1, ..., C) is a T×W matrix, and H_p_avg = [h_1, h_2, ..., h_C], with the global average pooling formula:
h_i = (1/(T·W)) · Σ_{t=1}^{T} Σ_{w=1}^{W} H_i(t, w)
S6_2: Concatenate the H_p_avg obtained in S6_1 with the crowd depth feature information H_p to obtain the concatenated feature H_c, expressed as:
H_c = [H_p_avg, H_p]
S6_3: Input the concatenated feature H_c obtained in S6_2 into a two-layer fully connected network to obtain the channel weight vector W_c, where the fully connected network is computed as:
Y = W*X + b
where Y represents the output of the network, X represents the input of the network, W represents the weight parameter of the network, and b represents the bias parameter of the network;
S6_4: Multiply the weight parameter obtained in S6_3 by the deep Mel spectrum feature information H_m obtained in S5 to obtain an emotion feature matrix, and perform global average pooling on the emotion feature matrix over the dimensions T×W to obtain the fusion feature H_f.
Further, step S7 specifically includes the following steps:
S7_1: Pass the H_f obtained in S6 through the pooling layer and input it into a two-layer fully connected network to obtain a 7-dimensional feature vector H_b, where 7 is the number of emotion categories;
S7_2: Take the feature vector obtained in S7_1,
H_b = [h_1, h_2, ..., h_7]
as the argument of the Softmax operator, compute the final Softmax value as the probability that the input audio belongs to each emotion category, and finally take the category with the largest probability value as the final audio emotion category, where the Softmax formula is:
P_i = e^(h_i) / Σ_{j=1}^{7} e^(h_j)
in which e is a constant (the base of the natural logarithm).
A speech emotion recognition system fusing crowd information, comprising:
a voice signal acquisition module, used to collect the user's voice signal;
a voice signal preprocessing module, used to preprocess the collected voice signal, perform endpoint detection on the speech, remove the silent segments before and after the speech, and generate data that can be used for neural network processing;
an emotion prediction module, used to process the Mel spectrum features through the designed network model and predict the emotion type of the user's audio;
a data storage module, used to store the user's voice data and emotion label data using a MySQL database.
Further, the voice signal acquisition module adopts a high-fidelity single microphone or a microphone array.
Further, the preprocessing includes pre-emphasis, framing, windowing, short-time Fourier transform, triangular filtering, and silence-removal operations, which convert the speech signal from a time-domain signal into a frequency-domain signal, that is, from audio samples into Mel spectrum features; spectral subtraction is used to denoise the speech, the Z-transform method is used to pre-emphasize the speech, and the short-time Fourier transform method is used to extract the Mel spectrum features of the speech.
The advantages of the present invention are as follows:
1. The speech emotion recognition method of the present invention fuses crowd information to recognize speech emotion. Because different crowds differ in physiological development, the morphological structure of the vocal cords differs, which affects pronunciation: for example, children's voices are crisp and sharp, elderly voices are muddy and deep, and an adult man's voice is usually deeper than an adult woman's. Therefore, fusing crowd information can more effectively extract the emotional information contained in speech.
2. The speech emotion recognition method of the present invention takes the last LSTM output and uses global pooling, so the limitation on speech length can be ignored and emotion recognition can be performed on speech of different lengths.
3. The speech emotion recognition method of the present invention uses SENet for information fusion; through the channel attention mechanism of SENet, important information in the network can be extracted effectively and the overall accuracy of the model improved.
4. The speech emotion recognition system of the present invention can store the emotion analysis results and the original dialogue speech, which helps to make reasonable analyses and suggestions, for example in telephone customer service quality evaluation scenarios, user satisfaction analysis scenarios for intelligent voice dialogue robots, voice message sentiment analysis scenarios, and in-video speech emotion category analysis scenarios.
Brief Description of the Drawings
Fig. 1 is a structural schematic diagram of the speech emotion recognition system of the present invention;
Fig. 2 is a schematic flowchart of the speech emotion recognition method of the present invention;
Fig. 3 is a schematic diagram of the network structure of the speech emotion recognition method of the present invention;
Fig. 4 is a schematic diagram of the network structure fusing ResNet and FMS.
Detailed Description of Embodiments
In order to make the objectives, technical solutions, and technical effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, a speech emotion recognition system fusing crowd information includes:
a voice signal acquisition module, used to collect the user's voice signal, generally a high-fidelity single microphone or a microphone array, so as to reduce the distortion of the acquired voice signal;
a voice signal preprocessing module, used to preprocess the collected voice signal, perform endpoint detection on the speech, remove the silent segments before and after the speech, and generate data that can be used for neural network processing; specifically, this module performs pre-emphasis, framing, windowing, short-time Fourier transform, triangular filtering, silence removal and other operations to convert the speech signal from a time-domain signal into a frequency-domain signal, that is, from audio samples into Mel spectrum features for subsequent processing, where spectral subtraction is used to denoise the speech, the Z-transform method is used to pre-emphasize the speech, and the short-time Fourier transform method is used to extract the Mel spectrum (an illustrative sketch of these operations is given after this list);
an emotion prediction module, used to process the Mel spectrum features through the designed network model and predict the emotion type of the user's audio;
a data storage module, used to store the user's voice data and emotion label data using a database such as MySQL.
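For illustration, the following is a minimal Python sketch of the pre-emphasis and spectral-subtraction denoising performed by the preprocessing module. The pre-emphasis coefficient (0.97), FFT size, hop length, and the assumption that the first few frames contain only noise are illustrative choices and are not values fixed by the patent.

```python
# Illustrative sketch only: pre-emphasis and a simple spectral-subtraction denoiser.
import numpy as np
import librosa

def pre_emphasis(y, coef=0.97):
    # Apply the filter 1 - coef * z^{-1}, i.e. y[n] - coef * y[n-1]
    return np.append(y[0], y[1:] - coef * y[:-1])

def spectral_subtraction(y, n_fft=512, hop=160, noise_frames=10):
    # Estimate the noise magnitude spectrum from the first frames and subtract
    # it from every frame, flooring the result at zero.
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    mag, phase = np.abs(stft), np.angle(stft)
    noise = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean = np.maximum(mag - noise, 0.0)
    return librosa.istft(clean * np.exp(1j * phase), hop_length=hop)
```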
As shown in Fig. 2, a method of using the speech emotion recognition system fusing crowd information includes the following steps:
S1: Collect the user audio signal through a recording device, denoted X_audio.
S2: Perform pre-emphasis, short-time Fourier transform and other preprocessing on the collected audio signal X_audio to generate a Mel spectrogram signal, denoted X_mel; the Mel spectrum is a matrix of dimension T′ × 128.
S3: For the generated Mel spectrogram signal X_mel, compute the energy of the Mel spectrogram in different time frames, and remove the leading and trailing silent segments by setting a threshold, obtaining as the network input a Mel spectrogram signal of dimension T × 128, denoted X_input.
Here, removing the leading and trailing silent segments is done by accumulating the energy of the Mel spectrogram over the different frequency dimensions of each frame and removing, via a threshold, the frames whose energy is below that threshold, thereby removing the silent frames.
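A minimal sketch of steps S2 and S3 is given below: a 128-band Mel spectrogram is computed and the leading and trailing frames whose accumulated energy falls below a threshold are cut off. The sample rate, FFT size, hop length, and the -50 dB threshold are assumptions made for illustration only.

```python
import numpy as np
import librosa

def mel_spectrogram_and_trim(y, sr=16000, n_fft=512, hop=160, thresh_db=-50.0):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)    # shape (128, T')
    frame_energy = mel_db.mean(axis=0)                # energy accumulated over bands
    voiced = np.where(frame_energy > thresh_db)[0]    # frames above the threshold
    if voiced.size == 0:
        return mel_db.T                               # nothing above threshold
    x_input = mel_db[:, voiced[0]:voiced[-1] + 1]     # cut silence at both ends
    return x_input.T                                  # (T, 128), the X_input of step S3
```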
S4: Input the X_input obtained in S3 into the crowd classification network to obtain the crowd depth feature information H_p. The crowd classification network is composed of a three-layer LSTM network structure; the LSTM is a recurrent neural network structure that can effectively handle long-sequence dependencies, and multi-layer LSTMs are often used for sequence-related problems such as speech. Specifically, the step includes the following sub-steps:
S4_1: First, divide the input Mel spectrum of length T, with overlap, into three Mel spectrum segments of equal length T/2; the segmentation is: frames 0 to T/2 form the first segment, frames T/4 to 3T/4 form the second segment, and frames T/2 to T form the third segment;
S4_2: Input the three Mel spectrum segments obtained in S4_1 into the three-layer LSTM network in turn, and take the last output of the LSTM network as the final state. In this way, the three Mel spectrum segments yield three hidden features of dimension 256, which are finally averaged to give the final crowd depth feature information H_p. The three-layer LSTM can effectively extract information from a long time series such as the Mel spectrum; taking the last LSTM state and averaging effectively removes information in the Mel spectrum that is unrelated to crowd information, such as text content, and improves the accuracy of crowd information extraction.
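A minimal PyTorch sketch of this crowd-feature extractor follows, under the assumptions stated above: the three overlapping segments are [0, T/2], [T/4, 3T/4] and [T/2, T], and the LSTM hidden size is 256.

```python
import torch
import torch.nn as nn

class CrowdEncoder(nn.Module):
    def __init__(self, n_mels=128, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3, batch_first=True)

    def forward(self, x):                          # x: (batch, T, 128) Mel spectrogram
        T = x.shape[1]
        segments = (x[:, :T // 2], x[:, T // 4:3 * T // 4], x[:, T // 2:])
        finals = []
        for seg in segments:
            out, _ = self.lstm(seg)                # out: (batch, seg_len, 256)
            finals.append(out[:, -1])              # last time step as the final state
        return torch.stack(finals).mean(dim=0)     # H_p: (batch, 256), averaged over segments
```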
S5: Input the X_input obtained in S3 into the Mel spectrum preprocessing network to obtain the Mel spectrum depth feature information H_m.
The Mel spectrum preprocessing network structure is composed of a ResNet network cascaded with an FMS network; the specific network structure is shown in Fig. 4. The processing steps of the Mel spectrum preprocessing network are: first, expand the Mel spectrum of dimension T × 128 into a T × 128 × 1 three-dimensional matrix; then process the depth information of the Mel spectrum features through the ResNet and FMS network structures to generate deep Mel spectrum features of dimension T × 128 × 256. The ResNet network structure uses two convolution layers plus max pooling to extract the emotion-related information in the Mel spectrum, and the FMS network architecture then effectively combines the information extracted by the ResNet network to obtain more reasonable emotion-related features.
The ResNet network can expand the network depth and improve the network's learning ability while alleviating the vanishing-gradient problem in deep learning; the FMS network can effectively extract information within the network, helping the ResNet network to efficiently extract the useful information in the network.
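The following is a minimal PyTorch sketch of such a preprocessing block: a residual block with two convolutions and a stride-1 max pooling, followed by an FMS-style (feature map scaling) channel gate. The kernel sizes, the stride-1 pooling (chosen so the stated T × 128 × 256 output size is preserved) and the exact FMS variant are illustrative assumptions, not details given in the patent.

```python
import torch
import torch.nn as nn

class ResFMSBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=256):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(out_ch), nn.BatchNorm2d(out_ch)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        self.act = nn.ReLU()
        self.fms = nn.Linear(out_ch, out_ch)       # produces channel-wise scaling weights

    def forward(self, x):                          # x: (batch, 1, T, 128)
        h = self.act(self.bn1(self.conv1(x)))
        h = self.pool(self.bn2(self.conv2(h)))
        h = self.act(h + self.skip(x))             # residual connection
        gate = torch.sigmoid(self.fms(h.mean(dim=(2, 3))))   # (batch, 256) channel gates
        return h * gate[:, :, None, None]          # H_m: (batch, 256, T, 128)
```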
S6: Fuse the crowd depth feature information H_p extracted in S4 and the Mel spectrum depth feature information H_m extracted in S5 through the channel attention network SENet, as shown in Fig. 3, to obtain the fusion feature H_f. The specific steps include:
S6_1: The crowd depth feature information H_p obtained in step S4 is a one-dimensional vector in the space R^C, where C represents the channel dimension; the Mel spectrum depth feature information H_m obtained in step S5 is a three-dimensional matrix in the space R^(T×W×C), where T represents the time dimension, W represents the width dimension, and C represents the channel dimension. Through the channel attention network SENet, perform global average pooling of H_m over the time dimension and the width dimension, converting it into a C-dimensional vector and obtaining the one-dimensional vector H_p_avg in the space R^C. Specifically,
H_m = [H_1, H_2, H_3, ..., H_C]
where each H_i (i = 1, ..., C) is a T×W matrix; the feature after average pooling is:
H_p_avg = [h_1, h_2, ..., h_C]
and the global average pooling formula is:
h_i = (1/(T·W)) · Σ_{t=1}^{T} Σ_{w=1}^{W} H_i(t, w)
S6_2: Concatenate the H_p_avg obtained in S6_1 with the crowd depth feature information H_p to obtain the concatenated feature H_c, expressed as:
H_c = [H_p_avg, H_p]
S6_3: Input the concatenated feature H_c obtained in S6_2 into a two-layer fully connected network to obtain the channel weight vector W_c. Specifically, the fully connected network is computed as:
Y = W*X + b
where Y represents the output of the network, X represents the input of the network, W represents the weight parameter of the network, and b represents the bias parameter of the network;
S6_4: Multiply the weight parameter obtained in S6_3 by the deep Mel spectrum feature information H_m obtained in S5 to obtain the fusion feature H_f.
The SENet automatically computes the weight coefficient of each channel through the network, which can effectively enhance the important information extracted by the network while reducing the weight of useless information. In addition, a SENet that incorporates crowd information can, for different crowds, focus on extracting the information related to the pronunciation characteristics of that crowd, further improving the accuracy of emotion recognition.
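A minimal PyTorch sketch of this SENet-style fusion is given below: global average pooling of H_m over time and width (S6_1), concatenation with the crowd vector H_p (S6_2), two fully connected layers producing channel weights (S6_3), and channel-wise re-weighting of H_m, with the global pooling that produces the 256-dimensional fusion vector included at the end. The hidden size of the first fully connected layer is an assumption.

```python
import torch
import torch.nn as nn

class CrowdSEFusion(nn.Module):
    def __init__(self, channels=256, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(2 * channels, hidden)
        self.fc2 = nn.Linear(hidden, channels)
        self.act = nn.ReLU()

    def forward(self, h_m, h_p):                  # h_m: (B, C, T, W), h_p: (B, C)
        h_avg = h_m.mean(dim=(2, 3))              # S6_1: global average pooling -> (B, C)
        h_c = torch.cat([h_avg, h_p], dim=1)      # S6_2: concatenated feature -> (B, 2C)
        w_c = torch.sigmoid(self.fc2(self.act(self.fc1(h_c))))   # S6_3: channel weights
        emotion = h_m * w_c[:, :, None, None]     # S6_4: re-weight the Mel feature map
        return emotion.mean(dim=(2, 3))           # fusion feature H_f: (B, C)
```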
S7: Pass the feature H_f fused in S6 through a pooling layer and input it into the classification network for emotion recognition; that is, convert the T × 128 × 256 three-dimensional matrix into a 256-dimensional one-dimensional vector and input it into the classification network for emotion recognition. The classification network is composed of one 256-dimensional fully connected layer plus one 7-dimensional fully connected layer; finally, the output 7-dimensional features are passed through the Softmax operator to compute the probabilities of the seven emotion classes, and the class with the largest probability is taken as the final emotion category. Specifically, the step includes the following sub-steps:
S7_1: Pass the H_f obtained in S6 through the pooling layer and input it into the two-layer fully connected network to obtain a 7-dimensional feature vector H_b, where 7 is the number of emotion categories;
S7_2: Take the feature vector obtained in S7_1,
H_b = [h_1, h_2, ..., h_7]
as the argument of the Softmax operator, compute the final Softmax value as the probability that the input audio belongs to each emotion category, and finally take the category with the largest probability value as the final audio emotion category, where the Softmax formula is:
P_i = e^(h_i) / Σ_{j=1}^{7} e^(h_j)
in which e is a constant (the base of the natural logarithm).
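A minimal sketch of this classifier follows: one 256-dimensional fully connected layer, one 7-dimensional layer, and a Softmax over the seven emotion classes, with the predicted emotion taken as the arg-max of the probabilities. The ReLU between the two layers is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 7))

def predict_emotion(h_f):                  # h_f: (batch, 256) fusion feature
    logits = classifier(h_f)               # H_b: (batch, 7)
    probs = F.softmax(logits, dim=-1)      # probability of each emotion class
    return probs.argmax(dim=-1), probs     # predicted class index and probabilities
```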
In summary, the method provided by this embodiment improves the accuracy of audio emotion feature extraction by fusing crowd information and can enhance the emotion recognition capability of the entire model.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention in any form. Although the implementation process of the present invention has been described in detail above, those familiar with the art may still modify the technical solutions described in the foregoing examples or make equivalent substitutions for some of their technical features. Any modification, equivalent substitution, or the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (4)

  1. A speech emotion recognition method fusing crowd information, comprising the following steps:
    S1: Collect the user audio signal through a recording device, denoted X_audio;
    S2: Preprocess the collected audio signal X_audio to generate a Mel spectrogram signal, denoted X_mel;
    S3: For the generated Mel spectrogram signal X_mel, compute the energy of the Mel spectrogram in different time frames, and remove the leading and trailing silent segments by setting a threshold, obtaining a Mel spectrogram signal of length T, denoted X_input;
    S4: Input the X_input obtained in S3 into the crowd classification network to obtain the crowd depth feature information H_p;
    S5: Input the X_input obtained in S3 into the Mel spectrum preprocessing network to obtain the Mel spectrum depth feature information H_m;
    S6: Fuse the crowd depth feature information H_p extracted in S4 and the Mel spectrum depth feature information H_m extracted in S5 through the channel attention network SENet to obtain the fusion feature H_f;
    S7: Pass the fused feature H_f from S6 through a pooling layer and input it into the classification network for emotion recognition;
    wherein the crowd classification network is composed of a three-layer LSTM network structure, and step S4 specifically includes the following steps:
    S4_1: First, divide the input Mel spectrogram signal X_input of length T, with overlap, into three Mel spectrum segments of equal length T/2; the segmentation is: frames 0 to T/2 form the first segment, frames T/4 to 3T/4 form the second segment, and frames T/2 to T form the third segment;
    S4_2: Input the three Mel spectrum segments obtained in S4_1 into the three-layer LSTM network in turn, and take the last output of the LSTM network as the final state; the three Mel spectrum segments thus yield three hidden features, which are finally averaged to obtain the final crowd depth feature information H_p.
  2. The speech emotion recognition method fusing crowd information according to claim 1, wherein the Mel spectrum preprocessing network of step S5 is composed of a ResNet network cascaded with an FMS network, and step S5 specifically includes the following steps: first, expand the Mel spectrogram signal X_input of length T into a three-dimensional matrix; then use the ResNet network structure, which adopts two convolution layers plus max pooling, to extract the emotion-related information in the Mel spectrogram; then use the FMS network architecture to effectively combine the information extracted by the ResNet network, finally obtaining the Mel spectrum depth feature information H_m.
  3. The speech emotion recognition method fusing crowd information according to claim 1, wherein step S6 specifically includes the following steps:
    S6_1: The crowd depth feature information H_p is a one-dimensional vector in the space R^C, where C represents the channel dimension; the Mel spectrum depth feature information H_m is a three-dimensional matrix in the space R^(T×W×C), where T represents the time dimension, W represents the width dimension, and C represents the channel dimension; through the SENet network, perform global average pooling of H_m over the time dimension T and the width dimension W, converting it into a C-dimensional vector and obtaining the one-dimensional vector H_p_avg in the space R^C; specifically,
    H_m = [H_1, H_2, H_3, ..., H_C]
    where each H_i (i = 1, ..., C) is a T×W matrix, and H_p_avg = [h_1, h_2, ..., h_C], with the global average pooling formula:
    h_i = (1/(T·W)) · Σ_{t=1}^{T} Σ_{w=1}^{W} H_i(t, w)
    S6_2: Concatenate the H_p_avg obtained in S6_1 with the crowd depth feature information H_p to obtain the concatenated feature H_c, expressed as:
    H_c = [H_p_avg, H_p]
    S6_3: Input the concatenated feature H_c obtained in S6_2 into a two-layer fully connected network to obtain the channel weight vector W_c, where the fully connected network is computed as:
    Y = Q*X + b
    where Y represents the output of the network, X represents the input of the network, Q represents the weight parameter of the network, and b represents the bias parameter of the network;
    S6_4: Multiply the weight parameter obtained in S6_3 by the deep Mel spectrum feature information H_m obtained in S5 to obtain an emotion feature matrix, and perform global average pooling on the emotion feature matrix over the dimensions T×W to obtain the fusion feature H_f.
  4. The speech emotion recognition method fusing crowd information according to claim 1, wherein step S7 specifically includes the following steps:
    S7_1: Pass the H_f obtained in S6 through the pooling layer and input it into a two-layer fully connected network to obtain a 7-dimensional feature vector H_b, where 7 is the number of emotion categories;
    S7_2: Take the feature vector obtained in S7_1,
    H_b = [h_1, h_2, ..., h_7]
    as the argument of the Softmax operator, compute the final Softmax value as the probability that the input audio belongs to each emotion category, and finally take the category with the largest probability value as the final audio emotion category, where the Softmax formula is:
    P_i = e^(h_i) / Σ_{j=1}^{7} e^(h_j)
    in which e is a constant (the base of the natural logarithm).
PCT/CN2022/070728 2021-03-26 2022-01-07 一种融合人群信息的语音情感识别方法和系统 WO2022199215A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/845,908 US11837252B2 (en) 2021-03-26 2022-06-21 Speech emotion recognition method and system based on fused population information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110322720.X 2021-03-26
CN202110322720.XA CN112712824B (zh) 2021-03-26 2021-03-26 一种融合人群信息的语音情感识别方法和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/845,908 Continuation US11837252B2 (en) 2021-03-26 2022-06-21 Speech emotion recognition method and system based on fused population information

Publications (1)

Publication Number Publication Date
WO2022199215A1 true WO2022199215A1 (zh) 2022-09-29

Family

ID=75550314

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2021/115694 WO2022198923A1 (zh) 2021-03-26 2021-08-31 一种融合人群信息的语音情感识别方法和系统
PCT/CN2022/070728 WO2022199215A1 (zh) 2021-03-26 2022-01-07 一种融合人群信息的语音情感识别方法和系统

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115694 WO2022198923A1 (zh) 2021-03-26 2021-08-31 一种融合人群信息的语音情感识别方法和系统

Country Status (3)

Country Link
US (1) US11837252B2 (zh)
CN (1) CN112712824B (zh)
WO (2) WO2022198923A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387997A (zh) * 2022-01-21 2022-04-22 合肥工业大学 一种基于深度学习的语音情感识别方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712824B (zh) * 2021-03-26 2021-06-29 之江实验室 一种融合人群信息的语音情感识别方法和系统
CN113593537B (zh) * 2021-07-27 2023-10-31 华南师范大学 基于互补特征学习框架的语音情感识别方法及装置
CN113808620B (zh) * 2021-08-27 2023-03-21 西藏大学 一种基于cnn和lstm的藏语语音情感识别方法
CN114566189B (zh) * 2022-04-28 2022-10-04 之江实验室 基于三维深度特征融合的语音情感识别方法及系统
CN117475360B (zh) * 2023-12-27 2024-03-26 南京纳实医学科技有限公司 基于改进型mlstm-fcn的音视频特点的生物特征提取与分析方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173260B1 (en) * 1997-10-29 2001-01-09 Interval Research Corporation System and method for automatic classification of speech based upon affective content
CN109935243A (zh) * 2019-02-25 2019-06-25 重庆大学 基于vtlp数据增强及多尺度时频域空洞卷积模型的语音情感识别方法
CN110021308A (zh) * 2019-05-16 2019-07-16 北京百度网讯科技有限公司 语音情绪识别方法、装置、计算机设备和存储介质
CN110164476A (zh) * 2019-05-24 2019-08-23 广西师范大学 一种基于多输出特征融合的blstm的语音情感识别方法
CN110491416A (zh) * 2019-07-26 2019-11-22 广东工业大学 一种基于lstm和sae的电话语音情感分析与识别方法
US10937446B1 (en) * 2020-11-10 2021-03-02 Lucas GC Limited Emotion recognition in speech chatbot job interview system
CN112712824A (zh) * 2021-03-26 2021-04-27 之江实验室 一种融合人群信息的语音情感识别方法和系统

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222500A (zh) * 2011-05-11 2011-10-19 北京航空航天大学 结合情感点的汉语语音情感提取及建模方法
CN105869657A (zh) * 2016-06-03 2016-08-17 竹间智能科技(上海)有限公司 语音情感辨识系统及方法
CN108154879B (zh) * 2017-12-26 2021-04-09 广西师范大学 一种基于倒谱分离信号的非特定人语音情感识别方法
WO2019225801A1 (ko) * 2018-05-23 2019-11-28 한국과학기술원 사용자의 음성 신호를 기반으로 감정, 나이 및 성별을 동시에 인식하는 방법 및 시스템
CN108899049A (zh) * 2018-05-31 2018-11-27 中国地质大学(武汉) 一种基于卷积神经网络的语音情感识别方法及系统
CN109146066A (zh) * 2018-11-01 2019-01-04 重庆邮电大学 一种基于语音情感识别的虚拟学习环境自然交互方法
CN109817246B (zh) * 2019-02-27 2023-04-18 平安科技(深圳)有限公司 情感识别模型的训练方法、情感识别方法、装置、设备及存储介质
CN110047516A (zh) * 2019-03-12 2019-07-23 天津大学 一种基于性别感知的语音情感识别方法
CN110852215B (zh) * 2019-10-30 2022-09-06 国网江苏省电力有限公司电力科学研究院 一种多模态情感识别方法、系统及存储介质
CN111292765B (zh) * 2019-11-21 2023-07-28 台州学院 一种融合多个深度学习模型的双模态情感识别方法
CN111429948B (zh) * 2020-03-27 2023-04-28 南京工业大学 一种基于注意力卷积神经网络的语音情绪识别模型及方法
CN112037822B (zh) * 2020-07-30 2022-09-27 华南师范大学 基于ICNN与Bi-LSTM的语音情感识别方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173260B1 (en) * 1997-10-29 2001-01-09 Interval Research Corporation System and method for automatic classification of speech based upon affective content
CN109935243A (zh) * 2019-02-25 2019-06-25 重庆大学 基于vtlp数据增强及多尺度时频域空洞卷积模型的语音情感识别方法
CN110021308A (zh) * 2019-05-16 2019-07-16 北京百度网讯科技有限公司 语音情绪识别方法、装置、计算机设备和存储介质
CN110164476A (zh) * 2019-05-24 2019-08-23 广西师范大学 一种基于多输出特征融合的blstm的语音情感识别方法
CN110491416A (zh) * 2019-07-26 2019-11-22 广东工业大学 一种基于lstm和sae的电话语音情感分析与识别方法
US10937446B1 (en) * 2020-11-10 2021-03-02 Lucas GC Limited Emotion recognition in speech chatbot job interview system
CN112712824A (zh) * 2021-03-26 2021-04-27 之江实验室 一种融合人群信息的语音情感识别方法和系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387997A (zh) * 2022-01-21 2022-04-22 合肥工业大学 一种基于深度学习的语音情感识别方法
CN114387997B (zh) * 2022-01-21 2024-03-29 合肥工业大学 一种基于深度学习的语音情感识别方法

Also Published As

Publication number Publication date
WO2022198923A1 (zh) 2022-09-29
US20220328065A1 (en) 2022-10-13
CN112712824B (zh) 2021-06-29
CN112712824A (zh) 2021-04-27
US11837252B2 (en) 2023-12-05

Similar Documents

Publication Publication Date Title
WO2022199215A1 (zh) 一种融合人群信息的语音情感识别方法和系统
CN108717856B (zh) 一种基于多尺度深度卷积循环神经网络的语音情感识别方法
CN108597539B (zh) 基于参数迁移和语谱图的语音情感识别方法
CN108597541B (zh) 一种增强愤怒与开心识别的语音情感识别方法及系统
WO2020248376A1 (zh) 情绪检测方法、装置、电子设备及存储介质
CN112489635B (zh) 一种基于增强注意力机制的多模态情感识别方法
CN113408385B (zh) 一种音视频多模态情感分类方法及系统
CN109409296B (zh) 将人脸表情识别和语音情感识别融合的视频情感识别方法
CN105976809B (zh) 基于语音和面部表情的双模态情感融合的识别方法及系统
CN110675859B (zh) 结合语音与文本的多情感识别方法、系统、介质及设备
CN104200804A (zh) 一种面向人机交互的多类信息耦合的情感识别方法
CN110211594B (zh) 一种基于孪生网络模型和knn算法的说话人识别方法
CN105760852A (zh) 一种融合脸部表情和语音的驾驶员情感实时识别方法
CN107731233A (zh) 一种基于rnn的声纹识别方法
CN109767756A (zh) 一种基于动态分割逆离散余弦变换倒谱系数的音声特征提取算法
CN114566189B (zh) 基于三维深度特征融合的语音情感识别方法及系统
Zvarevashe et al. Recognition of speech emotion using custom 2D-convolution neural network deep learning algorithm
CN115862684A (zh) 一种基于音频的双模式融合型神经网络的抑郁状态辅助检测的方法
CN111341319A (zh) 一种基于局部纹理特征的音频场景识别方法及系统
CN112562725A (zh) 基于语谱图和胶囊网络的混合语音情感分类方法
Kuang et al. Simplified inverse filter tracked affective acoustic signals classification incorporating deep convolutional neural networks
Jiang et al. Speech emotion recognition method based on improved long short-term memory networks
Al-Banna et al. Stuttering detection using atrous convolutional neural networks
CN114626424B (zh) 一种基于数据增强的无声语音识别方法及装置
Aggarwal et al. Application of genetically optimized neural networks for hindi speech recognition system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22773878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22773878

Country of ref document: EP

Kind code of ref document: A1