WO2020238045A1 - Intelligent speech recognition method and apparatus, and computer-readable storage medium - Google Patents

Intelligent speech recognition method and apparatus, and computer-readable storage medium

Info

Publication number
WO2020238045A1
WO2020238045A1 (PCT/CN2019/117340; CN2019117340W)
Authority
WO
WIPO (PCT)
Prior art keywords
speech
acoustic
text
phoneme
voice
Prior art date
Application number
PCT/CN2019/117340
Other languages
English (en)
French (fr)
Inventor
王健宗
彭俊清
瞿晓阳
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020238045A1

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/16 Speech classification or search using artificial neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025 Phonemes, fenemes or fenones being the recognition units
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L2015/0635 Training updating or merging of old and new templates; Mean values; Weighting

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method and apparatus for intelligent speech recognition of voice input, and a computer-readable storage medium.
  • Speech recognition enables smoother communication between people and between people and machines, allowing machines to literally understand what users are saying; it is the foundation of natural human-computer interaction.
  • Speech recognition is now applied very widely, and the demand for it keeps growing.
  • However, current speech recognition methods require large amounts of speech data together with the texts corresponding to that speech, and most of them are only moderately efficient, with results that leave room for improvement.
  • This application provides an intelligent speech recognition method and apparatus, and a computer-readable storage medium, the main purpose of which is to present accurate speech recognition results to the user when the user provides voice input.
  • To achieve this purpose, the intelligent speech recognition method provided by this application includes:
  • a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
  • a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
  • a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
  • a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
  • The present application also provides an intelligent speech recognition apparatus, which includes a memory and a processor.
  • The memory stores an intelligent speech recognition program that can run on the processor.
  • When the intelligent speech recognition program is executed by the processor, the following steps are implemented:
  • a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
  • a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
  • a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
  • a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
  • The present application also provides a computer-readable storage medium on which an intelligent speech recognition program is stored; the program can be executed by one or more processors to implement the steps of the intelligent speech recognition method described above.
  • The intelligent speech recognition method and apparatus and the computer-readable storage medium proposed in this application receive a speech set and a text set, perform preprocessing operations including pre-emphasis, windowing and framing on the speech set and preprocessing operations including punctuation removal and word segmentation on the text set; extract acoustic features from the preprocessed speech set to obtain an acoustic feature set and build a phoneme set based on it; and build an acoustic model based on Naive Bayes and the LSTM algorithm, inputting the phoneme set and the preprocessed text set into the acoustic model for training until the training value of the acoustic model falls below a preset threshold.
  • Because this application uses deep learning algorithms, it can effectively improve the feature analysis of the phoneme set and the text set, and can therefore achieve an accurate intelligent speech recognition function.
  • FIG. 1 is a schematic flowchart of the intelligent speech recognition method provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of the internal structure of the intelligent speech recognition apparatus provided by an embodiment of this application;
  • FIG. 3 is a schematic diagram of the modules of the intelligent speech recognition program in the intelligent speech recognition apparatus provided by an embodiment of this application.
  • This application provides an intelligent speech recognition method.
  • Referring to FIG. 1, it is a schematic flowchart of the intelligent speech recognition method provided by an embodiment of this application.
  • The method can be executed by an apparatus, and the apparatus can be implemented by software and/or hardware.
  • In this embodiment, the intelligent speech recognition method includes:
  • The data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set.
  • In a preferred embodiment of this application, the speech set is recorded with a single carbon microphone in a quiet environment; its total duration exceeds 30 hours, and the recorded speakers speak standard Mandarin. Further, the speech set is sampled at 16 kHz with a sample size of 16 bits.
  • The text set may be the Wall Street Journal (WSJ) dataset.
  • In a preferred embodiment, the sound frequencies of the speech set are pre-emphasized with a digital filter, the pre-emphasis method being:

    H(z) = 1 - μz⁻¹

    where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient. Based on the pre-emphasized speech set, windowing and framing are performed according to the Hamming window method ω(n):

    ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1

    where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
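As a concrete illustration of the pre-emphasis and Hamming-window framing just described, here is a minimal NumPy sketch. The coefficient μ = 0.97 and the 25 ms / 10 ms frame geometry at 16 kHz are illustrative assumptions, not values fixed by the application.

```python
import numpy as np

def pre_emphasize(signal: np.ndarray, mu: float = 0.97) -> np.ndarray:
    """Apply H(z) = 1 - mu * z^-1 in the time domain: y[n] = x[n] - mu * x[n-1]."""
    return np.append(signal[0], signal[1:] - mu * signal[:-1])

def frame_and_window(signal: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Split the signal into overlapping frames and apply a Hamming window.
    frame_len=400 and hop=160 correspond to 25 ms / 10 ms at 16 kHz."""
    n = np.arange(frame_len)
    hamming = 0.54 - 0.46 * np.cos(2 * np.pi * n / (frame_len - 1))
    num_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] for i in range(num_frames)])
    return frames * hamming

emphasized = pre_emphasize(np.random.randn(16000))  # one second of dummy 16 kHz audio
frames = frame_and_window(emphasized)               # shape: (num_frames, 400)
```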
  • The feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set.
  • Extracting acoustic features from the preprocessed speech set to obtain the acoustic feature set includes: cutting off the silent portions at the beginning and end of the data in the speech set based on signal-processing Voice Activity Detection (VAD) technology; applying a waveform transformation to the trimmed speech set; extracting the acoustic features of the transformed speech set with the Mel-frequency cepstral coefficient (MFCC) feature extraction method; and outputting the acoustic feature set in the form of a multidimensional vector matrix.
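The trimming-plus-MFCC pipeline of this step can be sketched with librosa as below; librosa's energy-based trim stands in for the signal-processing VAD named in the text, and the file name and top_db threshold are placeholder assumptions.

```python
import librosa

# Load a 16 kHz recording; the path is a placeholder.
y, sr = librosa.load("utterance.wav", sr=16000)

# Energy-based trimming of leading/trailing silence stands in for the VAD step.
y_trimmed, _ = librosa.effects.trim(y, top_db=30)

# 13 MFCCs per frame; the result is the acoustic feature set as a
# "multidimensional vector matrix" of shape (13, num_frames).
mfcc = librosa.feature.mfcc(y=y_trimmed, sr=sr, n_mfcc=13)
print(mfcc.shape)
```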
  • Building a phoneme set based on the acoustic feature set and establishing a state connection between the preprocessed text set and the phoneme set includes: splitting the data in the multidimensional-vector-matrix acoustic feature set into fixed-dimension vector matrices, which are called state matrices; assembling every three state matrices into one phoneme to construct the phoneme set; and mapping every seven phonemes to one text word, thereby establishing the state connection between the preprocessed text set and the phoneme set.
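A minimal sketch of this grouping rule follows: frames are cut into fixed-dimension state matrices, every three states form a phoneme, and every seven phonemes map to one word. The choice of three frames per state matrix is an illustrative assumption; the application fixes only the three-states-per-phoneme and seven-phonemes-per-word ratios.

```python
import numpy as np

def build_phoneme_set(features: np.ndarray, state_dim: int = 3):
    """features: (n_coeffs, n_frames) acoustic feature matrix.

    Split frames into fixed-size state matrices, group every three states
    into a phoneme, and every seven phonemes into a word, following the
    mapping described in the text."""
    n_frames = features.shape[1]
    states = [features[:, i:i + state_dim]
              for i in range(0, n_frames - state_dim + 1, state_dim)]
    phonemes = [states[i:i + 3] for i in range(0, len(states) - 2, 3)]
    words = [phonemes[i:i + 7] for i in range(0, len(phonemes) - 6, 7)]
    return states, phonemes, words

states, phonemes, words = build_phoneme_set(np.random.randn(13, 630))
print(len(states), len(phonemes), len(words))  # 210 states, 70 phonemes, 10 words
```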
  • The Mel-frequency cepstral coefficient feature extraction method is:

    C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L

    where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the MFCC feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the m-th filter.
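The following NumPy sketch evaluates the C(n) formula above directly from a vector of log filter-bank energies s(m); the filter count M = 26 and coefficient order L = 13 are common choices assumed here for illustration.

```python
import numpy as np

def mfcc_from_log_energies(s: np.ndarray, L: int = 13) -> np.ndarray:
    """C(n) = sqrt(2/M) * sum_{m=1..M} s(m) * cos(pi*n*(m-0.5)/M), n = 1..L."""
    M = len(s)
    n = np.arange(1, L + 1)[:, None]   # (L, 1) column of output indices
    m = np.arange(1, M + 1)[None, :]   # (1, M) row of filter indices
    return np.sqrt(2.0 / M) * (s[None, :] * np.cos(np.pi * n * (m - 0.5) / M)).sum(axis=1)

log_energies = np.log(np.random.rand(26) + 1e-8)  # dummy s(m) for 26 filters
c = mfcc_from_log_energies(log_energies)
print(c.shape)  # (13,)
```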
  • The model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below the preset threshold.
  • The acoustic model in the preferred embodiment of the present application includes a probability model established based on Naive Bayes and an LSTM model, the probability model being:

    p(ω | Context(ω)) = ∏_{j=2..l_ω} p(d_j^ω | X_ω, θ_{j-1}^ω)

    where Context(ω) is the text set, ω is a word in the text set, l_ω is the number of words in the segments preceding and following ω, θ is the probability model parameter, X_ω is the vector representation of ω, d^ω is the Huffman coding form of ω, and p() denotes the probability to be solved.
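A probability of this product form can be evaluated as in the sketch below. Modeling each factor p(d_j | X_ω, θ) with a sigmoid of an inner product is the usual convention for Huffman-coded hierarchical models and is an assumption here; the application does not spell the factor out.

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def word_probability(x_w: np.ndarray, theta: np.ndarray, code: list) -> float:
    """p(w | Context(w)) = prod_j p(d_j | X_w, theta_{j-1}), with each factor
    modeled as sigmoid(theta_j . X_w) for bit d_j = 1 and its complement for 0.

    x_w:   vector representation X_w of the word
    theta: (len(code), dim) parameter rows, one per Huffman node
    code:  Huffman code bits d_j of the word
    """
    p = 1.0
    for j, d in enumerate(code):
        s = sigmoid(theta[j] @ x_w)
        p *= s if d == 1 else 1.0 - s
    return p

p = word_probability(np.random.randn(8), np.random.randn(4, 8), [1, 0, 1, 1])
```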
  • The LSTM model in the preferred embodiment of the present application includes a forget gate, an input gate, and an output gate.
  • The input gate receives the output data of the probability model, applies an activation, and feeds the result to the forget gate.
  • The forget gate is:

    f_t = σ(w_t·[h_{t-1}, x_t] + b_t)

    where f_t is the output data of the forget gate, x_t is the input data of the forget gate, t is the current time of the text set, t-1 is the time before the current time of the text set, h_{t-1} is the output data of the output gate at the time before the current time, w_t is the weight at the current time, b_t is the bias at the current time, [] is the matrix multiplication operation, and σ denotes the sigmoid function.
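A direct NumPy rendering of the forget-gate equation above; the dimensions are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def forget_gate(h_prev: np.ndarray, x_t: np.ndarray,
                w_t: np.ndarray, b_t: np.ndarray) -> np.ndarray:
    """f_t = sigmoid(w_t @ [h_{t-1}, x_t] + b_t): concatenate the previous
    output with the current input, project with w_t, add the bias b_t,
    and squash each component into (0, 1)."""
    concat = np.concatenate([h_prev, x_t])
    return sigmoid(w_t @ concat + b_t)

hidden, inp = 8, 4
f_t = forget_gate(np.zeros(hidden), np.ones(inp),
                  np.random.randn(hidden, hidden + inp), np.zeros(hidden))
print(f_t.shape)  # (8,)
```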
  • the output gate includes an activation function and a loss function.
  • In a preferred embodiment, the preprocessed text set is input into the probability model for training, and training stops once the training value of the probability model falls below the preset probability threshold. The phoneme set is input into the LSTM model for training, and training stops once the training value of the LSTM model falls below the preset threshold. The output value of the probability model and the training value of the LSTM model are then fed into the loss function of the LSTM output gate, and it is judged whether the loss value lies within the error range for establishing a state connection between the preprocessed text set and the phoneme set. If the loss value exceeds the error range, training of the probability model and the LSTM model continues until it falls within that range. Then, following the mapping of every seven phonemes to one text word, every seven training values of the LSTM model are mapped to the output data of the probability model until the mapping is complete; the mapping result is output to obtain the text result, completing intelligent speech recognition.
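The stopping logic of this training procedure can be sketched schematically as below. The StubModel class, the thresholds, and the loss are placeholder assumptions standing in for the probability model and the LSTM; only the control flow mirrors the text.

```python
class StubModel:
    """Hypothetical stand-in exposing the interface the loop needs."""
    def __init__(self):
        self.value = 1.0
    def train_step(self, data) -> float:
        self.value *= 0.5          # pretend one training step halves the value
        return self.value
    def output(self, data) -> float:
        return self.value
    def loss(self, a: float, b: float) -> float:
        return abs(a - b)

def train_until_threshold(model: StubModel, data, threshold: float) -> None:
    # Exit training once the training value falls below the preset threshold.
    while model.train_step(data) >= threshold:
        pass

prob_model, lstm_model = StubModel(), StubModel()
train_until_threshold(prob_model, "text_set", 0.05)
train_until_threshold(lstm_model, "phoneme_set", 0.05)

# Keep training both models while the joint loss stays outside the
# error range for the text-set/phoneme-set state connection.
while lstm_model.loss(prob_model.output("text_set"),
                      lstm_model.output("phoneme_set")) > 0.1:
    prob_model.train_step("text_set")
    lstm_model.train_step("phoneme_set")
```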
  • The present application also provides an intelligent speech recognition apparatus.
  • Referring to FIG. 2, it is a schematic diagram of the internal structure of the intelligent speech recognition apparatus provided by an embodiment of this application.
  • The intelligent speech recognition apparatus 1 may be a PC (personal computer), a terminal device such as a smartphone, tablet computer or portable computer, or a server.
  • The intelligent speech recognition apparatus 1 includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like.
  • In some embodiments, the memory 11 may be an internal storage unit of the intelligent speech recognition apparatus 1, for example its hard disk.
  • In other embodiments, the memory 11 may also be an external storage device of the apparatus 1, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the apparatus 1.
  • Further, the memory 11 may include both the internal storage unit of the apparatus 1 and an external storage device.
  • The memory 11 can be used not only to store the application software installed in the apparatus 1 and various kinds of data, such as the code of the intelligent speech recognition program 01, but also to temporarily store data that has been output or will be output.
  • In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the intelligent speech recognition program 01.
  • The communication bus 13 is used to realize connection and communication between these components.
  • The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is usually used to establish a communication connection between the apparatus 1 and other electronic devices.
  • The apparatus 1 may also include a user interface.
  • The user interface may include a display and an input unit such as a keyboard.
  • Optionally, the user interface may also include a standard wired interface and a wireless interface.
  • The display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (organic light-emitting diode) touch device, etc.
  • The display may also properly be called a display screen or a display unit, and is used to display the information processed in the intelligent speech recognition apparatus 1 and to display a visualized user interface.
  • FIG. 2 shows only the intelligent speech recognition apparatus 1 with the components 11 to 14 and the intelligent speech recognition program 01.
  • Those skilled in the art will understand that the structure shown in FIG. 2 does not limit the apparatus 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • The intelligent speech recognition program 01 is stored in the memory 11; the processor 12 implements the following steps when executing the program 01 stored in the memory 11:
  • Step 1: The data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set.
  • In a preferred embodiment of this application, the speech set is recorded with a single carbon microphone in a quiet environment; its total duration exceeds 30 hours, and the recorded speakers speak standard Mandarin. Further, the speech set is sampled at 16 kHz with a sample size of 16 bits.
  • The text set may be the Wall Street Journal (WSJ) dataset.
  • In a preferred embodiment, the sound frequencies of the speech set are pre-emphasized with a digital filter, the pre-emphasis method being:

    H(z) = 1 - μz⁻¹

    where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient. Based on the pre-emphasized speech set, windowing and framing are performed according to the Hamming window method ω(n):

    ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1

    where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
  • Step 2: The feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set.
  • In a preferred implementation of this application, the silent portions at the beginning and end of the data in the speech set are cut off based on Voice Activity Detection (VAD); a waveform transformation is applied to the trimmed speech set; the acoustic features of the transformed speech set are extracted with the Mel-frequency cepstral coefficient feature extraction method; and the acoustic feature set is output in the form of a multidimensional vector matrix.
  • In a preferred embodiment, the data in the multidimensional-vector-matrix acoustic feature set is split into fixed-dimension vector matrices, which are called state matrices.
  • Every three state matrices are assembled into one phoneme to construct the complete phoneme set; every seven phonemes are mapped to one text word, establishing the state connection between the preprocessed text set and the phoneme set.
  • The Mel-frequency cepstral coefficient feature extraction method is:

    C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L

    where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the MFCC feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the m-th filter.
  • Step 3: The model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below the preset threshold.
  • The acoustic model in the preferred embodiment of the present application includes a probability model established based on Naive Bayes and an LSTM model, the probability model being:

    p(ω | Context(ω)) = ∏_{j=2..l_ω} p(d_j^ω | X_ω, θ_{j-1}^ω)

    where Context(ω) is the text set, ω is a word in the text set, l_ω is the number of words in the segments preceding and following ω, θ is the probability model parameter, X_ω is the vector representation of ω, d^ω is the Huffman coding form of ω, and p() denotes the probability to be solved.
  • The LSTM model in the preferred embodiment of the present application includes a forget gate, an input gate, and an output gate.
  • The input gate receives the output data of the probability model, applies an activation, and feeds the result to the forget gate.
  • The forget gate is:

    f_t = σ(w_t·[h_{t-1}, x_t] + b_t)

    where f_t is the output data of the forget gate, x_t is the input data of the forget gate, t is the current time of the text set, t-1 is the time before the current time of the text set, h_{t-1} is the output data of the output gate at the time before the current time, w_t is the weight at the current time, b_t is the bias at the current time, [] is the matrix multiplication operation, and σ denotes the sigmoid function.
  • the output gate includes an activation function and a loss function.
  • In a preferred embodiment, the preprocessed text set is input into the probability model for training, and training stops once the training value of the probability model falls below the preset probability threshold. The phoneme set is input into the LSTM model for training, and training stops once the training value of the LSTM model falls below the preset threshold. The output value of the probability model and the training value of the LSTM model are then fed into the loss function of the LSTM output gate, and it is judged whether the loss value lies within the error range for establishing a state connection between the preprocessed text set and the phoneme set. If the loss value exceeds the error range, training of the probability model and the LSTM model continues until it falls within that range. Then, following the mapping of every seven phonemes to one text word, every seven training values of the LSTM model are mapped to the output data of the probability model until the mapping is complete; the mapping result is output to obtain the text result, completing intelligent speech recognition.
  • Step 4: Receive the user's speech, recognize the user's speech with the acoustic model, convert the user's speech into text format, and output the text result.
  • Optionally, in other embodiments, the intelligent speech recognition program may be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete this application.
  • A module referred to in this application is a series of computer program instruction segments capable of performing a specific function, used to describe the execution of the intelligent speech recognition program in the intelligent speech recognition apparatus.
  • Referring to FIG. 3, a schematic diagram of the program modules of the intelligent speech recognition program in an embodiment of the intelligent speech recognition apparatus of this application, the program can be divided into a data receiving module 10, a feature extraction module 20, a model training module 30, and a speech recognition output module 40. Exemplarily:
  • The data receiving module 10 is configured to: receive a speech set and a text set, perform preprocessing operations including pre-emphasis, windowing and framing on the speech set, and perform preprocessing operations including punctuation removal and word segmentation on the text set.
  • The feature extraction module 20 is configured to: receive the preprocessed speech set, extract acoustic features from the preprocessed speech set to obtain an acoustic feature set, build a phoneme set based on the acoustic feature set, and establish a state connection between the preprocessed text set and the phoneme set.
  • The model training module 30 is configured to: build an acoustic model based on Naive Bayes and the LSTM algorithm, and input the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below the preset threshold.
  • The speech recognition output module 40 is configured to: receive a user's speech, recognize the user's speech with the acoustic model, convert the user's speech into text format, and output the text result.
  • An embodiment of the present application also proposes a computer-readable storage medium on which an intelligent speech recognition program is stored; the program can be executed by one or more processors to implement the following operations:
  • Receive a speech set and a text set, perform preprocessing operations including pre-emphasis, windowing and framing on the speech set, and perform preprocessing operations including punctuation removal and word segmentation on the text set.
  • Receive the preprocessed speech set, extract acoustic features from the preprocessed speech set to obtain an acoustic feature set, build a phoneme set based on the acoustic feature set, and establish a state connection between the preprocessed text set and the phoneme set.
  • Build an acoustic model based on Naive Bayes and the LSTM algorithm, and input the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold.
  • Receive a user's speech, recognize the user's speech with the acoustic model, convert the user's speech into text format, and output the text result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Telephonic Communication Services (AREA)
  • Machine Translation (AREA)

Abstract

An intelligent speech recognition method, including: receiving a speech set and a text set, performing preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performing preprocessing operations including punctuation removal and word segmentation on the text set (S1); extracting acoustic features from the preprocessed speech set to obtain an acoustic feature set, building a phoneme set based on the acoustic feature set, and establishing a state connection between the preprocessed text set and the phoneme set (S2); building an acoustic model based on Naive Bayes and the LSTM algorithm, and inputting the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold (S3); receiving a user's speech, recognizing the user's speech with the acoustic model, converting the user's speech into text format, and outputting the text result (S4). An intelligent speech recognition apparatus and a computer-readable storage medium are also proposed, capable of converting a user's speech into text output.

Description

Intelligent speech recognition method and apparatus, and computer-readable storage medium
This application claims, under the Paris Convention, priority to the Chinese patent application No. CN201910467875.5, filed on May 29, 2019 and entitled "Intelligent speech recognition method and apparatus, and computer-readable storage medium", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to a method and apparatus for intelligent speech recognition of voice input, and a computer-readable storage medium.
Background
Speech recognition enables smoother communication between people and between people and machines, allowing machines to literally understand what users are saying; it is the foundation of natural human-computer interaction. Speech recognition is now applied very widely, and the demand for it keeps growing. However, current speech recognition methods require large amounts of speech data together with the texts corresponding to that speech, and most of them are only moderately efficient, with results that leave room for improvement.
Summary
This application provides an intelligent speech recognition method and apparatus, and a computer-readable storage medium, the main purpose of which is to present accurate speech recognition results to the user when the user provides voice input.
To achieve the above purpose, the intelligent speech recognition method provided by this application includes:
a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
In addition, to achieve the above purpose, this application also provides an intelligent speech recognition apparatus, which includes a memory and a processor; the memory stores an intelligent speech recognition program that can run on the processor, and the program, when executed by the processor, implements the following steps:
a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
In addition, to achieve the above purpose, this application also provides a computer-readable storage medium on which an intelligent speech recognition program is stored; the program can be executed by one or more processors to implement the steps of the intelligent speech recognition method described above.
The intelligent speech recognition method and apparatus and the computer-readable storage medium proposed in this application receive a speech set and a text set, perform preprocessing operations including pre-emphasis, windowing and framing on the speech set and preprocessing operations including punctuation removal and word segmentation on the text set; extract acoustic features from the preprocessed speech set to obtain an acoustic feature set and build a phoneme set based on it; and build an acoustic model based on Naive Bayes and the LSTM algorithm, inputting the phoneme set and the preprocessed text set into the acoustic model for training until the training value of the acoustic model falls below a preset threshold. Because this application uses deep learning algorithms, it can effectively improve the feature analysis of the phoneme set and the text set, and can therefore achieve an accurate intelligent speech recognition function.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the intelligent speech recognition method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the internal structure of the intelligent speech recognition apparatus provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the modules of the intelligent speech recognition program in the intelligent speech recognition apparatus provided by an embodiment of this application.
The realization of the purposes, functional features and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
This application provides an intelligent speech recognition method. Referring to FIG. 1, it is a schematic flowchart of the intelligent speech recognition method provided by an embodiment of this application. The method can be executed by an apparatus, and the apparatus can be implemented by software and/or hardware.
In this embodiment, the intelligent speech recognition method includes:
S1. The data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set.
In a preferred embodiment of this application, the speech set is recorded with a single carbon microphone in a quiet environment; its total duration exceeds 30 hours, and the recorded speakers speak standard Mandarin. Further, the speech set is sampled at 16 kHz with a sample size of 16 bits. The text set may be the Wall Street Journal (WSJ) dataset.
In a preferred embodiment of this application, the sound frequencies of the speech set are pre-emphasized with a digital filter, the pre-emphasis method being:
H(z) = 1 - μz⁻¹
where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient;
based on the pre-emphasized speech set, windowing and framing are performed according to the Hamming window method ω(n):
ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1
where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
S2. The feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set.
In a preferred implementation of this application, extracting acoustic features from the preprocessed speech set to obtain the acoustic feature set includes: cutting off the silent portions at the beginning and end of the data in the speech set based on signal-processing Voice Activity Detection (VAD) technology; applying a waveform transformation to the trimmed speech set; extracting the acoustic features of the transformed speech set with the Mel-frequency cepstral coefficient (MFCC) feature extraction method; and outputting the acoustic feature set in the form of a multidimensional vector matrix.
In a preferred embodiment of this application, building a phoneme set based on the acoustic feature set and establishing a state connection between the preprocessed text set and the phoneme set includes: splitting the data in the multidimensional-vector-matrix acoustic feature set into fixed-dimension vector matrices, called state matrices; assembling every three state matrices into one phoneme to construct the phoneme set; and mapping every seven phonemes to one text word, establishing the state connection between the preprocessed text set and the phoneme set.
In a preferred implementation of this application, the Mel-frequency cepstral coefficient feature extraction method is:
C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L
where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the MFCC feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the m-th filter.
S3. The model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold.
The acoustic model in the preferred embodiment of this application includes a probability model established based on Naive Bayes and an LSTM model, the probability model being:
p(ω | Context(ω)) = ∏_{j=2..l_ω} p(d_j^ω | X_ω, θ_{j-1}^ω)
where Context(ω) is the text set, ω is a word in the text set, l_ω is the number of words in the segments preceding and following ω, θ is the probability model parameter, X_ω is the vector representation of ω, d^ω is the Huffman coding form of ω, and p() denotes the probability to be solved.
The LSTM model in the preferred embodiment of this application includes a forget gate, an input gate, and an output gate; the input gate receives the output data of the probability model, applies an activation, and feeds the result to the forget gate.
The forget gate is:
f_t = σ(w_t·[h_{t-1}, x_t] + b_t)
where f_t is the output data of the forget gate, x_t is the input data of the forget gate, t is the current time of the text set, t-1 is the time before the current time of the text set, h_{t-1} is the output data of the output gate at the time before the current time, w_t is the weight at the current time, b_t is the bias at the current time, [] is the matrix multiplication operation, and σ denotes the sigmoid function.
The output gate includes an activation function and a loss function.
In a preferred embodiment of this application, the preprocessed text set is input into the probability model for training, and training stops once the training value of the probability model falls below the preset probability threshold. The phoneme set is input into the LSTM model for training, and training stops once the training value of the LSTM model falls below the preset threshold. The output value of the probability model and the training value of the LSTM model are then fed into the loss function of the LSTM output gate, and it is judged whether the loss value lies within the error range for establishing a state connection between the preprocessed text set and the phoneme set. If the loss value exceeds the error range, training of the probability model and the LSTM model continues until it falls within that range. Then, following the mapping of every seven phonemes to one text word, every seven training values of the LSTM model are mapped to the output data of the probability model until the mapping is complete; the mapping result is output to obtain the text result, completing intelligent speech recognition.
S4. Receive a user's speech, recognize the user's speech with the acoustic model, convert the user's speech into text format, and output the text result.
This application also provides an intelligent speech recognition apparatus. Referring to FIG. 2, it is a schematic diagram of the internal structure of the intelligent speech recognition apparatus provided by an embodiment of this application.
In this embodiment, the intelligent speech recognition apparatus 1 may be a PC (personal computer), a terminal device such as a smartphone, tablet computer or portable computer, or a server. The apparatus 1 includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory 11 may be an internal storage unit of the apparatus 1, for example its hard disk. In other embodiments, the memory 11 may also be an external storage device of the apparatus 1, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the apparatus 1. Further, the memory 11 may include both the internal storage unit of the apparatus 1 and an external storage device. The memory 11 can be used not only to store the application software installed in the apparatus 1 and various kinds of data, such as the code of the intelligent speech recognition program 01, but also to temporarily store data that has been output or will be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the intelligent speech recognition program 01.
The communication bus 13 is used to realize connection and communication between these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is usually used to establish a communication connection between the apparatus 1 and other electronic devices.
Optionally, the apparatus 1 may also include a user interface, which may include a display and an input unit such as a keyboard; the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (organic light-emitting diode) touch device, or the like. The display may also properly be called a display screen or a display unit, and is used to display the information processed in the apparatus 1 and to display a visualized user interface.
FIG. 2 shows only the apparatus 1 with the components 11 to 14 and the intelligent speech recognition program 01. Those skilled in the art will understand that the structure shown in FIG. 2 does not limit the apparatus 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
In the embodiment of the apparatus 1 shown in FIG. 2, the intelligent speech recognition program 01 is stored in the memory 11; the processor 12 implements the following steps when executing the program 01 stored in the memory 11:
Step 1. The data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set.
In a preferred embodiment of this application, the speech set is recorded with a single carbon microphone in a quiet environment; its total duration exceeds 30 hours, and the recorded speakers speak standard Mandarin. Further, the speech set is sampled at 16 kHz with a sample size of 16 bits. The text set may be the Wall Street Journal (WSJ) dataset.
In a preferred embodiment of this application, the sound frequencies of the speech set are pre-emphasized with a digital filter, the pre-emphasis method being:
H(z) = 1 - μz⁻¹
where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient;
based on the pre-emphasized speech set, windowing and framing are performed according to the Hamming window method ω(n):
ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1
where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
Step 2. The feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set.
In a preferred implementation of this application, the silent portions at the beginning and end of the data in the speech set are cut off based on Voice Activity Detection (VAD) technology; a waveform transformation is applied to the trimmed speech set; the acoustic features of the transformed speech set are extracted with the Mel-frequency cepstral coefficient feature extraction method; and the acoustic feature set is output in the form of a multidimensional vector matrix.
In a preferred embodiment of this application, the data in the multidimensional-vector-matrix acoustic feature set is split into fixed-dimension vector matrices, called state matrices; every three state matrices are assembled into one phoneme to construct the phoneme set; and every seven phonemes are mapped to one text word, establishing the state connection between the preprocessed text set and the phoneme set.
In a preferred implementation of this application, the Mel-frequency cepstral coefficient feature extraction method is:
C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L
where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the MFCC feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the m-th filter.
Step 3. The model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold.
The acoustic model in the preferred embodiment of this application includes a probability model established based on Naive Bayes and an LSTM model, the probability model being:
p(ω | Context(ω)) = ∏_{j=2..l_ω} p(d_j^ω | X_ω, θ_{j-1}^ω)
where Context(ω) is the text set, ω is a word in the text set, l_ω is the number of words in the segments preceding and following ω, θ is the probability model parameter, X_ω is the vector representation of ω, d^ω is the Huffman coding form of ω, and p() denotes the probability to be solved.
The LSTM in the preferred embodiment of this application includes a forget gate, an input gate, and an output gate; the input gate receives the output data of the probability model, applies an activation, and feeds the result to the forget gate.
The forget gate is:
f_t = σ(w_t·[h_{t-1}, x_t] + b_t)
where f_t is the output data of the forget gate, x_t is the input data of the forget gate, t is the current time of the text set, t-1 is the time before the current time of the text set, h_{t-1} is the output data of the output gate at the time before the current time, w_t is the weight at the current time, b_t is the bias at the current time, [] is the matrix multiplication operation, and σ denotes the sigmoid function.
The output gate includes an activation function and a loss function.
In a preferred embodiment of this application, the preprocessed text set is input into the probability model for training, and training stops once the training value of the probability model falls below the preset probability threshold. The phoneme set is input into the LSTM model for training, and training stops once the training value of the LSTM model falls below the preset threshold. The output value of the probability model and the training value of the LSTM model are then fed into the loss function of the LSTM output gate, and it is judged whether the loss value lies within the error range for establishing a state connection between the preprocessed text set and the phoneme set. If the loss value exceeds the error range, training of the probability model and the LSTM model continues until it falls within that range. Then, following the mapping of every seven phonemes to one text word, every seven training values of the LSTM model are mapped to the output data of the probability model until the mapping is complete; the mapping result is output to obtain the text result, completing intelligent speech recognition.
Step 4. Receive a user's speech, recognize the user's speech with the acoustic model, convert the user's speech into text format, and output the text result.
Optionally, in other embodiments, the intelligent speech recognition program may be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete this application. A module referred to in this application is a series of computer program instruction segments capable of performing a specific function, used to describe the execution of the intelligent speech recognition program in the intelligent speech recognition apparatus.
For example, referring to FIG. 3, a schematic diagram of the program modules of the intelligent speech recognition program in an embodiment of the intelligent speech recognition apparatus of this application, the program can be divided into a data receiving module 10, a feature extraction module 20, a model training module 30, and a speech recognition output module 40. Exemplarily:
The data receiving module 10 is configured to: receive a speech set and a text set, perform preprocessing operations including pre-emphasis, windowing and framing on the speech set, and perform preprocessing operations including punctuation removal and word segmentation on the text set.
The feature extraction module 20 is configured to: receive the preprocessed speech set, extract acoustic features from the preprocessed speech set to obtain an acoustic feature set, build a phoneme set based on the acoustic feature set, and establish a state connection between the preprocessed text set and the phoneme set.
The model training module 30 is configured to: build an acoustic model based on Naive Bayes and the LSTM algorithm, and input the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold.
The speech recognition output module 40 is configured to: receive a user's speech, recognize the user's speech with the acoustic model, convert the user's speech into text format, and output the text result.
The functions or operation steps implemented when the program modules such as the data receiving module 10, the feature extraction module 20, the model training module 30, and the speech recognition output module 40 are executed are substantially the same as in the above embodiments and are not repeated here.
In addition, an embodiment of this application also proposes a computer-readable storage medium on which an intelligent speech recognition program is stored; the program can be executed by one or more processors to implement the following operations:
receiving a speech set and a text set, performing preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performing preprocessing operations including punctuation removal and word segmentation on the text set;
receiving the preprocessed speech set, extracting acoustic features from the preprocessed speech set to obtain an acoustic feature set, building a phoneme set based on the acoustic feature set, and establishing a state connection between the preprocessed text set and the phoneme set;
building an acoustic model based on Naive Bayes and the LSTM algorithm, and inputting the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
receiving a user's speech, recognizing the user's speech with the acoustic model, converting the user's speech into text format, and outputting the text result.
The specific implementation of the computer-readable storage medium of this application is substantially the same as the embodiments of the intelligent speech recognition apparatus and method above and will not be repeated here.
It should be noted that the serial numbers of the above embodiments of this application are for description only and do not indicate the relative merits of the embodiments. The terms "include", "comprise" or any other variant herein are intended to cover non-exclusive inclusion, so that a process, apparatus, article or method including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article or method that includes the element.
Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions to cause a terminal device (which may be a mobile phone, computer, server, or network device) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. An intelligent speech recognition method, characterized in that the method includes:
    a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
    a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
    a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
    a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
  2. The intelligent speech recognition method according to claim 1, characterized in that performing preprocessing operations including pre-emphasis, windowing and framing on the speech set includes:
    pre-emphasizing the sound frequencies of the speech set with a digital filter, the pre-emphasis method being:
    H(z) = 1 - μz⁻¹
    where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient;
    based on the pre-emphasized speech set, performing windowing and framing according to the Hamming window method ω(n):
    ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1
    where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
  3. The intelligent speech recognition method according to claim 1 or 2, characterized in that extracting acoustic features from the preprocessed speech set to obtain an acoustic feature set includes:
    cutting off the silent portions at the beginning and end of the data in the speech set based on voice activity detection technology;
    applying a waveform transformation to the trimmed speech set, extracting the acoustic features of the transformed speech set with the Mel-frequency cepstral coefficient feature extraction method, and outputting the acoustic feature set in the form of a multidimensional vector matrix.
  4. The intelligent speech recognition method according to claim 3, characterized in that the Mel-frequency cepstral coefficient feature extraction method is:
    C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L
    where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the Mel-frequency cepstral coefficient feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the filter.
  5. The intelligent speech recognition method according to claim 4, characterized in that building a phoneme set based on the acoustic feature set and establishing a state connection between the preprocessed text set and the phoneme set includes:
    splitting the data in the multidimensional-vector-matrix acoustic feature set into fixed-dimension vector matrices, the fixed-dimension vector matrices being called state matrices;
    assembling every three of the state matrices into one phoneme to construct the phoneme set;
    mapping every seven of the phonemes to one text word, establishing the state connection between the preprocessed text set and the phoneme set.
  6. The intelligent speech recognition method according to claim 1, characterized in that the acoustic model includes a probability model established based on Naive Bayes and an LSTM model.
  7. The intelligent speech recognition method according to claim 6, characterized in that the LSTM model includes a forget gate, an input gate, and an output gate, the input gate receiving the output data of the probability model, applying an activation, and feeding the result to the forget gate.
  8. An intelligent speech recognition apparatus, characterized in that the apparatus includes a memory and a processor, the memory storing an intelligent speech recognition program that can run on the processor, and the program, when executed by the processor, implements the following steps:
    a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
    a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
    a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
    a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
  9. The intelligent speech recognition apparatus according to claim 8, characterized in that performing preprocessing operations including pre-emphasis, windowing and framing on the speech set includes:
    pre-emphasizing the sound frequencies of the speech set with a digital filter, the pre-emphasis method being:
    H(z) = 1 - μz⁻¹
    where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient;
    based on the pre-emphasized speech set, performing windowing and framing according to the Hamming window method ω(n):
    ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1
    where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
  10. The intelligent speech recognition apparatus according to claim 8 or 9, characterized in that extracting acoustic features from the preprocessed speech set to obtain an acoustic feature set includes:
    cutting off the silent portions at the beginning and end of the data in the speech set based on voice activity detection technology;
    applying a waveform transformation to the trimmed speech set, extracting the acoustic features of the transformed speech set with the Mel-frequency cepstral coefficient feature extraction method, and outputting the acoustic feature set in the form of a multidimensional vector matrix.
  11. The intelligent speech recognition apparatus according to claim 10, characterized in that the Mel-frequency cepstral coefficient feature extraction method is:
    C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L
    where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the Mel-frequency cepstral coefficient feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the filter.
  12. The intelligent speech recognition apparatus according to claim 11, characterized in that building a phoneme set based on the acoustic feature set and establishing a state connection between the preprocessed text set and the phoneme set includes:
    splitting the data in the multidimensional-vector-matrix acoustic feature set into fixed-dimension vector matrices, the fixed-dimension vector matrices being called state matrices;
    assembling every three of the state matrices into one phoneme to construct the phoneme set;
    mapping every seven of the phonemes to one text word, establishing the state connection between the preprocessed text set and the phoneme set.
  13. The intelligent speech recognition apparatus according to claim 8, characterized in that the acoustic model includes a probability model established based on Naive Bayes and an LSTM model.
  14. The intelligent speech recognition apparatus according to claim 13, characterized in that the LSTM model includes a forget gate, an input gate, and an output gate, the input gate receiving the output data of the probability model, applying an activation, and feeding the result to the forget gate.
  15. A computer-readable storage medium, characterized in that an intelligent speech recognition program is stored on the computer-readable storage medium, and the program can be executed by one or more processors to implement the following steps:
    a data processing layer receives a speech set and a text set, performs preprocessing operations including pre-emphasis, windowing and framing on the speech set, and performs preprocessing operations including punctuation removal and word segmentation on the text set;
    a feature extraction layer receives the preprocessed speech set, extracts acoustic features from the preprocessed speech set to obtain an acoustic feature set, builds a phoneme set based on the acoustic feature set, and establishes a state connection between the preprocessed text set and the phoneme set;
    a model training layer builds an acoustic model based on Naive Bayes and the LSTM algorithm, and inputs the phoneme set and the preprocessed text set into the acoustic model for training, exiting training when the training value of the acoustic model falls below a preset threshold;
    a user's speech is received and recognized with the acoustic model, after which the user's speech is converted into text format and the text result is output.
  16. The computer-readable storage medium according to claim 15, characterized in that performing preprocessing operations including pre-emphasis, windowing and framing on the speech set includes:
    pre-emphasizing the sound frequencies of the speech set with a digital filter, the pre-emphasis method being:
    H(z) = 1 - μz⁻¹
    where H(z) is the pre-emphasized speech set, z is the sound frequency, and μ is the pre-emphasis coefficient;
    based on the pre-emphasized speech set, performing windowing and framing according to the Hamming window method ω(n):
    ω(n) = 0.54 - 0.46·cos(2πn/(N-1)), 0 ≤ n ≤ N-1
    where n is the pre-emphasized speech set, N is the window length of the Hamming window method, and cos is the cosine function.
  17. The computer-readable storage medium according to claim 15 or 16, characterized in that extracting acoustic features from the preprocessed speech set to obtain an acoustic feature set includes:
    cutting off the silent portions at the beginning and end of the data in the speech set based on voice activity detection technology;
    applying a waveform transformation to the trimmed speech set, extracting the acoustic features of the transformed speech set with the Mel-frequency cepstral coefficient feature extraction method, and outputting the acoustic feature set in the form of a multidimensional vector matrix.
  18. The computer-readable storage medium according to claim 17, characterized in that the Mel-frequency cepstral coefficient feature extraction method is:
    C(n) = √(2/M) · Σ_{m=1..M} s(m)·cos(πn(m - 0.5)/M), n = 1, 2, …, L
    where C(n) is the acoustic feature set in the form of the multidimensional vector matrix, n is the dimension of the matrix, L is the coefficient order of the Mel-frequency cepstral coefficient feature extraction method, M is the number of filters, cos is the cosine function, and s(m) is the logarithmic energy output by the filter.
  19. The computer-readable storage medium according to claim 18, characterized in that building a phoneme set based on the acoustic feature set and establishing a state connection between the preprocessed text set and the phoneme set includes:
    splitting the data in the multidimensional-vector-matrix acoustic feature set into fixed-dimension vector matrices, the fixed-dimension vector matrices being called state matrices;
    assembling every three of the state matrices into one phoneme to construct the phoneme set;
    mapping every seven of the phonemes to one text word, establishing the state connection between the preprocessed text set and the phoneme set.
  20. The computer-readable storage medium according to claim 15, characterized in that the acoustic model includes a probability model established based on Naive Bayes and an LSTM model.
PCT/CN2019/117340 2019-05-29 2019-11-12 Intelligent speech recognition method and apparatus, and computer-readable storage medium WO2020238045A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910467875.5 2019-05-29
CN201910467875.5A CN110277088B (zh) 2019-05-29 2019-05-29 Intelligent speech recognition method and apparatus, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020238045A1 (zh)

Family

ID=67960442

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117340 WO2020238045A1 (zh) 2019-05-29 2019-11-12 Intelligent speech recognition method and apparatus, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN110277088B (zh)
WO (1) WO2020238045A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155832A (zh) * 2021-11-12 2022-03-08 深圳市北科瑞声科技股份有限公司 Deep-learning-based speech recognition method, apparatus, device and medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110277088B (zh) 2019-05-29 2024-04-09 平安科技(深圳)有限公司 Intelligent speech recognition method and apparatus, and computer-readable storage medium
CN110928519A (zh) 2019-12-30 2020-03-27 Tcl通力电子(惠州)有限公司 Instruction generation method, smart keyboard and storage medium
CN111985231B (zh) 2020-08-07 2023-12-26 中移(杭州)信息技术有限公司 Unsupervised role recognition method and apparatus, electronic device and storage medium
CN112201253B (zh) 2020-11-09 2023-08-25 观华(广州)电子科技有限公司 Text marking method and apparatus, electronic device and computer-readable storage medium
CN112712797A (zh) 2020-12-29 2021-04-27 平安科技(深圳)有限公司 Speech recognition method and apparatus, electronic device and readable storage medium
CN113053362A (zh) 2021-03-30 2021-06-29 建信金融科技有限责任公司 Speech recognition method, apparatus, device and computer-readable medium
CN115080300A (zh) 2022-07-25 2022-09-20 北京云迹科技股份有限公司 Method and apparatus for handling abnormal user orders

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9263036B1 (en) * 2012-11-29 2016-02-16 Google Inc. System and method for speech recognition using deep recurrent neural networks
CN106328122A (zh) * 2016-08-19 2017-01-11 深圳市唯特视科技有限公司 Speech recognition method using a long short-term memory recurrent neural network
CN106875943A (zh) * 2017-01-22 2017-06-20 上海云信留客信息科技有限公司 Speech recognition system for big data analysis
CN107680597A (zh) * 2017-10-23 2018-02-09 平安科技(深圳)有限公司 Speech recognition method, apparatus, device and computer-readable storage medium
CN108492820A (zh) * 2018-03-20 2018-09-04 华南理工大学 Chinese speech recognition method based on a recurrent neural network language model and a deep neural network acoustic model
CN110277088A (zh) * 2019-05-29 2019-09-24 平安科技(深圳)有限公司 Intelligent speech recognition method and apparatus, and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9466292B1 (en) * 2013-05-03 2016-10-11 Google Inc. Online incremental adaptation of deep neural networks using auxiliary Gaussian mixture models in speech recognition
WO2018118442A1 (en) * 2016-12-21 2018-06-28 Google Llc Acoustic-to-word neural network speech recognizer
CN107633842B (zh) * 2017-06-12 2018-08-31 平安科技(深圳)有限公司 Speech recognition method, apparatus, computer device and storage medium
CN108831445A (zh) * 2018-05-21 2018-11-16 四川大学 Sichuan dialect recognition method, acoustic model training method, apparatus and device
CN109599093B (zh) * 2018-10-26 2021-11-26 北京中关村科金技术有限公司 Keyword detection method, apparatus, device and readable storage medium for intelligent quality inspection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9263036B1 (en) * 2012-11-29 2016-02-16 Google Inc. System and method for speech recognition using deep recurrent neural networks
CN106328122A (zh) * 2016-08-19 2017-01-11 深圳市唯特视科技有限公司 Speech recognition method using a long short-term memory recurrent neural network
CN106875943A (zh) * 2017-01-22 2017-06-20 上海云信留客信息科技有限公司 Speech recognition system for big data analysis
CN107680597A (zh) * 2017-10-23 2018-02-09 平安科技(深圳)有限公司 Speech recognition method, apparatus, device and computer-readable storage medium
CN108492820A (zh) * 2018-03-20 2018-09-04 华南理工大学 Chinese speech recognition method based on a recurrent neural network language model and a deep neural network acoustic model
CN110277088A (zh) * 2019-05-29 2019-09-24 平安科技(深圳)有限公司 Intelligent speech recognition method and apparatus, and computer-readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155832A (zh) * 2021-11-12 2022-03-08 深圳市北科瑞声科技股份有限公司 Deep-learning-based speech recognition method, apparatus, device and medium

Also Published As

Publication number Publication date
CN110277088A (zh) 2019-09-24
CN110277088B (zh) 2024-04-09

Similar Documents

Publication Publication Date Title
WO2020238045A1 (zh) Intelligent speech recognition method and apparatus, and computer-readable storage medium
WO2021093449A1 (zh) Artificial-intelligence-based wake-word detection method, apparatus, device and medium
WO2021208287A1 (zh) Voice endpoint detection method and apparatus for emotion recognition, electronic device and storage medium
WO2019196196A1 (zh) Whispered speech recovery method, apparatus, device and readable storage medium
JP4768969B2 (ja) Semantic object synchronous understanding for highly interactive interfaces
JP4768970B2 (ja) Semantic object synchronous understanding implemented with speech application language tags
CN110047481B (zh) Method and apparatus for speech recognition
US20150325240A1 Method and system for speech input
US20240021202A1 Method and apparatus for recognizing voice, electronic device and medium
CN111833845B (zh) Multilingual speech recognition model training method, apparatus, device and storage medium
CN112562691A (zh) Voiceprint recognition method and apparatus, computer device and storage medium
WO2021051514A1 (zh) Speech recognition method and apparatus, computer device and non-volatile storage medium
WO2020238046A1 (zh) Intelligent human voice detection method and apparatus, and computer-readable storage medium
US20230127787A1 Method and apparatus for converting voice timbre, method and apparatus for training model, device and medium
CN113129867B (zh) Training method for a speech recognition model, speech recognition method, apparatus and device
JP6875819B2 (ja) Apparatus and method for normalizing acoustic model input data, and speech recognition apparatus
CN112669842A (zh) Human-machine dialogue control method, apparatus, computer device and storage medium
CN111429914B (zh) Microphone control method, electronic device and computer-readable storage medium
CN110335608A (zh) Voiceprint verification method, apparatus, device and storage medium
WO2021051564A1 (zh) Speech recognition method, apparatus, computing device and storage medium
WO2019169722A1 (zh) Shortcut key recognition method, apparatus, device and computer-readable storage medium
CN109074809B (zh) Information processing device, information processing method and computer-readable storage medium
US10714087B2 Speech control for complex commands
WO2023272616A1 (zh) Text understanding method, system, terminal device and storage medium
CN111898363B (zh) Method and apparatus for compressing long and complex sentences in text, computer device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931033

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931033

Country of ref document: EP

Kind code of ref document: A1