WO2022267451A1 - Automatic speech recognition method based on neural network, device, and readable storage medium - Google Patents


Info

Publication number
WO2022267451A1
Authority
WO
WIPO (PCT)
Prior art keywords
language model
audio
gpt
recognition
score
Prior art date
Application number
PCT/CN2022/071220
Other languages
French (fr)
Chinese (zh)
Inventor
方明
魏韬
马骏
王少军
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2022267451A1


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/32 Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/284 Lexical analysis, e.g. tokenisation or collocates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L 15/142 Hidden Markov Models [HMMs]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/16 Speech classification or search using artificial neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G10L 2015/0631 Creating reference templates; Clustering

Definitions

  • The present application relates to the field of artificial intelligence, and in particular to a neural-network-based automatic speech recognition method, apparatus, electronic device, and computer-readable storage medium.
  • In traditional speech recognition there are two models: an acoustic model and a language model. The language model is generally an ngram language model; ngram, a probability model based on tuple statistics, can only capture the statistics immediately around a phrase and cannot learn deeper grammatical or semantic information. In addition, this word-frequency-based way of computing probabilities suffers from an overly large parameter space and severe data sparsity; in high-order ngram models in particular, model size and sparsity grow exponentially as the order increases. Although many techniques have been proposed to address the ngram model's inherent problems, such as pruning and backoff, they only weaken these problems and cannot resolve the fundamental limitations of the ngram language model.
  • At present, a common solution is to keep the original ngram model unchanged and, after WFST decoding, generate the top-n ASR recognition results, then re-score the generated sentences with a language model and re-rank them.
  • The inventor realized that one can use an ngram model trained on a larger corpus, a higher-order ngram model, and so on; but a frequently encountered problem is that a more complex language model often leads to higher recognition latency, while a simple language model often cannot produce accurate recognition results.
  • The present application provides a neural-network-based automatic speech recognition method, apparatus, electronic device, and computer-readable storage medium, the main purpose of which is to solve the problem of data sparsity by using a gpt language model.
  • The neural-network-based automatic speech recognition method provided by the application is applied to an electronic device, and the method includes:
  • jointly recognizing the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
  • transmitting the initial recognition results to the rescore process and scoring them with the gpt language model in the rescore process to obtain the gpt language model score;
  • transmitting the gpt language model score to the ASR recognition process and replacing the ngram language model score in the ASR recognition process; and
  • sorting the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result among the sorted results as the final recognition result.
  • The application also provides a neural-network-based automatic speech recognition apparatus, including:
  • an initial recognition result acquisition module, configured to jointly recognize the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
  • a gpt language model score acquisition module, configured to transmit the recognition results to the rescore process and score them with the gpt language model in the rescore process to obtain the gpt language model score;
  • a language model score replacement module, configured to transmit the gpt language model score to the ASR recognition process and replace the ngram language model score in the ASR recognition process; and
  • a final recognition result acquisition module, configured to sort the sums of the gpt language model score and the acoustic model score in the ASR recognition process and take the top-ranked recognition result among the sorted results as the final recognition result.
  • The present application also provides an electronic device, which includes:
  • a memory storing instructions executable by at least one processor, the instructions being executed by the at least one processor so that the at least one processor can perform the steps of the above neural-network-based automatic speech recognition method.
  • The present application also provides a computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in the electronic device to implement the above neural-network-based automatic speech recognition method.
  • In the embodiments of the present application, the acoustic model and the ngram language model in the ASR recognition process jointly recognize the audio to be recognized to obtain at least two initial recognition results; the initial recognition results are transmitted to the rescore process and scored by the gpt language model in the rescore process to obtain the gpt language model score.
  • The sums of the gpt language model score and the acoustic model score are then sorted, and the top-ranked recognition result among the sorted results is taken as the final recognition result.
  • The main purpose of this application is to solve the problem of data sparsity by using the gpt language model.
  • FIG. 1 is a schematic flowchart of the neural-network-based automatic speech recognition method provided by an embodiment of the present application;
  • FIG. 2 is a schematic block diagram of the neural-network-based automatic speech recognition apparatus provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing a neural-network-based automatic speech recognition method provided by an embodiment of the present application;
  • Referring to FIG. 1, it is a schematic flowchart of a neural-network-based automatic speech recognition method provided by an embodiment of the present application.
  • The method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.
  • In this embodiment, the neural-network-based automatic speech recognition method includes:
  • S1: jointly recognize the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results; each recognition result includes the acoustic model score, the ngram language model score, and the sum of the acoustic model score and the ngram language model score;
  • S2: transmit the initial recognition results to the rescore process and score them with the gpt language model in the rescore process to obtain the gpt language model score;
  • S3: transmit the gpt language model score to the ASR recognition process and replace the ngram language model score in the ASR recognition process;
  • S4: sort the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and take the top-ranked recognition result among the sorted results as the final recognition result (a minimal sketch of this flow is given below).
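  • As an illustration only, the following minimal Python sketch mirrors the flow of S1-S4; decode_nbest and gpt_scores are hypothetical stand-ins for the ASR recognition process and the rescore process, and no such names appear in the patent:

```python
def decode_nbest(audio) -> list[tuple[str, float, float]]:
    """S1 stand-in: WFST decoding yields (text, acoustic cost, ngram cost)
    per hypothesis, with lower cost meaning better (decoder-provided)."""
    raise NotImplementedError

def gpt_scores(texts: list[str]) -> list[float]:
    """S2 stand-in: one gpt language-model cost per sentence,
    computed by the rescore process."""
    raise NotImplementedError

def recognize(audio) -> str:
    nbest = decode_nbest(audio)
    gpt = gpt_scores([text for text, _, _ in nbest])
    # S3: replace the ngram cost with the gpt cost in each total score;
    # S4: re-sort from small to large and take the top-ranked text.
    rescored = sorted((am + g, text) for (text, am, _), g in zip(nbest, gpt))
    return rescored[0][1]
```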
  • The above outlines the neural-network-based automatic speech recognition method of the present application.
  • The neural-network-based automatic speech recognition method of the present application comprises an ASR (Automatic Speech Recognition) recognition process and a rescore (re-scoring) process.
  • The ASR recognition process is the speech-to-text module, which follows the traditional GMM-HMM technical route.
  • The modeling unit of the neural network is the HMM state, and the input is acoustic features.
  • ASR recognition is completed through two modules, the acoustic model and language-model decoding. Before recognition starts, the acoustic model and the language model must be trained.
  • The acoustic model is trained by neural-network backpropagation, which requires a large amount of audio and its transcribed text; the language model is trained on a large text corpus: a 3-gram language model is built to produce a language model in ARPA format, and tools such as arpa2fst are then used to generate a WFST graph with the HCLG structure, which serves as the input for language-model decoding.
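  • As a rough illustration of this preparation step, the sketch below drives Kaldi-style command-line tools from Python; the exact tool names, flags, and paths follow common Kaldi usage and are assumptions, not details given in the patent:

```python
import subprocess

# Convert the ARPA-format 3-gram language model into a grammar FST (G),
# then compose lexicon/context/HMM transducers into an HCLG decoding graph.
subprocess.run(
    ["arpa2fst", "--disambig-symbol=#0", "lm_3gram.arpa", "G.fst"],
    check=True,
)
subprocess.run(
    ["utils/mkgraph.sh", "data/lang", "exp/acoustic_model", "exp/graph"],
    check=True,
)
```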
  • In step S1, recognizing the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process and obtaining at least two recognition results includes the following steps: S111: convert the audio to be recognized into audio features; S112: obtain the posterior probability of each frame from the audio features; S113: perform Viterbi decoding on the WFST graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice; and S114: obtain the top-n recognition results from the lattice.
  • Each recognition result includes the text, an acoustic model score, an ngram language model score, and the sum of the scores of the two models (acoustic model and ngram language model).
  • In step S111, converting the audio to be recognized into audio features includes the following steps:
  • Step S11101: frame and window the audio to be recognized to obtain standardized audio; and
  • Step S11102: perform feature extraction on the standardized audio with the MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
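  • A minimal sketch of S11101-S11102 using librosa (the patent does not name a feature-extraction library, and the frame/window sizes below are common ASR defaults, not values from the patent):

```python
import librosa
import numpy as np

def extract_mfcc(path: str, sr: int = 16000) -> np.ndarray:
    """Frame and window the waveform, then compute per-frame MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(
        y=y, sr=sr, n_mfcc=13,
        n_fft=int(0.025 * sr),        # 25 ms frames
        hop_length=int(0.010 * sr),   # 10 ms hop
        window="hann",                # windowing per S11101
    )
    return mfcc.T  # shape: (num_frames, 13)
```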
  • In step S112, obtaining the posterior probability of each frame from the audio features includes the following steps:
  • Step S11201: extract the audio features into an audio feature vector sequence;
  • Step S11202: input the audio feature vector sequence into the pre-trained acoustic model to determine the time boundaries of the phoneme states;
  • Step S11203: according to the time boundaries, extract all frames within each time boundary and average them over the frame length of the speech frame, taking the result as the posterior probability of the speech frame.
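  • A numpy sketch of S11201-S11203, assuming the acoustic model is available as a callable that maps a feature matrix to per-frame state posteriors and that the phoneme-state time boundaries are given as (start, end) frame indices; all names here are illustrative:

```python
import numpy as np

def boundary_posteriors(features: np.ndarray, acoustic_model,
                        boundaries: list[tuple[int, int]]) -> np.ndarray:
    """Average per-frame posteriors over each phoneme-state time boundary."""
    # acoustic_model: (num_frames, feat_dim) -> (num_frames, num_states)
    frame_post = acoustic_model(features)
    # For each (start, end) boundary, average the frames it spans (S11203).
    return np.stack([frame_post[s:e].mean(axis=0) for s, e in boundaries])
```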
  • In step S113, performing Viterbi decoding on the WFST graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice includes the following steps:
  • Step S11301: model the ngram language model to generate a language model in ARPA format;
  • S11302: use the arpa2fst tool to generate a WFST graph with the HCLG structure;
  • S11303: construct a WFST search space from the Viterbi algorithm, the posterior probabilities, and the WFST graph; and
  • S11304: find the optimal path with the highest matching probability in the weighted finite-state transducer (WFST) search space to obtain the text recognition result.
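  • To make S11303-S11304 concrete, here is a textbook Viterbi best-path search in the log domain over a simple state graph; it is a stand-in for search over the full WFST space, not the patent's decoder:

```python
import numpy as np

def viterbi(log_post: np.ndarray, log_trans: np.ndarray) -> list[int]:
    """Best state path given per-frame log-posteriors of shape (T, S)
    and a log-transition matrix of shape (S, S)."""
    T, S = log_post.shape
    delta = np.full((T, S), -np.inf)    # best score ending in each state
    back = np.zeros((T, S), dtype=int)  # backpointers for path recovery
    delta[0] = log_post[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (S, S) candidates
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_post[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):       # backtrack the optimal path
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```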
  • Each recognition result contains the text, an acoustic model score, a language model score, and the sum of the acoustic model score and the language model score.
  • In the embodiments of the present application, the acoustic model and language model scores can be read out of the lattice. The total score of each lattice output is sorted from small to large; backtracking the top-1 result gives the default ASR result output by the ngram language model, while backtracking the top-n results extracts the n-best information, which is output to the rescore process for re-scoring (a short sketch follows).
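  • A sketch of the sort-and-backtrack step, assuming the lattice has already been reduced to a list of (text, acoustic cost, ngram cost) tuples; this data layout is an assumption for illustration:

```python
def nbest_from_lattice(paths: list[tuple[str, float, float]], n: int):
    """Sort lattice outputs by total cost from small to large; the top-1
    is the default ngram result, the top-n go to the rescore process."""
    ranked = sorted(paths, key=lambda p: p[1] + p[2])
    return ranked[0], ranked[:n]
```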
  • In step S120, the rescore thread is the re-scoring module. Because of its GPU dependency it is designed as a separate process that works on the GPU and is accelerated with TensorRT; in addition to running the inference of the gpt language model, it is also responsible for the decoding thread's requests and responses.
  • Each time, the rescore module takes in text sentences of a fixed batch size and outputs the gpt language model score corresponding to each text.
  • Transmitting the recognition results to the rescore process and processing them with the gpt language model in the rescore process to obtain the gpt language model score includes the following steps:
  • Step S121: within a preset time, assemble the sentences to be re-scored into a batch of sentences to be re-scored;
  • Step S122: perform neural-network forward inference on the batch of sentences to be re-scored with the gpt language model; and
  • Step S123: accumulate the posterior probability of each word of a sentence to be re-scored, output in logarithmic form, to obtain the gpt language model score of that sentence.
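  • A sketch of S121-S123 using the Hugging Face transformers GPT-2 API as a stand-in for the patent's gpt language model (the patent does not name a library, and the TensorRT acceleration is omitted here); the score is the sum of per-token log-probabilities:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def gpt_scores(batch: list[str]) -> list[float]:
    """Per S123: accumulate log P(w_i | w_<i) over each sentence."""
    scores = []
    for text in batch:  # true fixed-size batching would pad; kept simple here
        ids = tokenizer(text, return_tensors="pt").input_ids
        logits = model(ids).logits                        # (1, T, vocab)
        logp = torch.log_softmax(logits[0, :-1], dim=-1)  # next-token dists
        token_logp = logp.gather(1, ids[0, 1:, None]).squeeze(1)
        scores.append(float(token_logp.sum()))
    return scores
```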
  • For example, if the text to be scored is "[CLS] the dog is hairy [SEP]",
  • the text sequence input to the gpt model is "[CLS] the dog is hairy", 5 tokens in total, and for each current token the probability of the next token is read from the output probability matrix.
  • For example, the probability of the word "dog" in the probability distribution predicted at "the" is taken as the corresponding output probability.
  • If the output log-probability sequence of the above input sequence is p1 p2 p3 p4 p5, accumulating p1 through p5 gives the gpt language model score.
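  • Written out, the score accumulated in step S123 is the standard autoregressive language-model log-likelihood (the notation here is ours, stated only for clarity):

```latex
\mathrm{score}_{\mathrm{gpt}}
  = \sum_{i=1}^{n} \log P\left(w_{i+1} \mid w_{1}, \dots, w_{i}\right)
  = p_{1} + p_{2} + \dots + p_{n},
\qquad n = 5 \text{ in the example above}
```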
  • The gpt language model score results are returned to the ASR decoding thread, and the ASR decoding thread replaces the ngram score within the total score of each top-n sentence with the gpt language model score, then re-sorts the new total scores from small to large and takes the re-sorted top-1 ASR text as the final ASR recognition result; that is, the top-ranked result of sorting by the sum of the gpt language model score and the acoustic model score is taken as the final recognition result, thereby improving ASR accuracy (a sketch of this step follows).
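  • A sketch of this replace-and-re-rank step, reusing the tuple layout assumed in the earlier sketches (illustrative only; it treats all scores as costs, so signs must be flipped if raw log-probabilities are used):

```python
def rerank(nbest: list[tuple[str, float, float]],
           gpt_costs: list[float]) -> str:
    """Swap the ngram cost for the gpt cost, re-sort small to large,
    and return the top-1 text as the final ASR result."""
    rescored = sorted(
        (am_cost + gpt_cost, text)
        for (text, am_cost, _ngram_cost), gpt_cost in zip(nbest, gpt_costs)
    )
    return rescored[0][1]
```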
  • GPT rescoring is performed on the ASR recognition results.
  • The accuracy of the overall ASR recognition results is greatly improved, often by about 2 percentage points, and the character error rate is reduced by about 1 percentage point. In terms of latency, it only adds about 50 ms, so the impact on the latency of the overall speech recognition system is very limited.
  • The improvement in recognition accuracy is not just a simple drop in the character error rate; it also brings a better ASR experience.
  • For upstream systems that rely on ASR recognition results, such as voice customer-service robots, intelligent voice assistants, and smart speakers, this indirectly improves their effectiveness, service quality, and customer satisfaction.
  • In the embodiments of the present application, the acoustic model and the ngram language model in the ASR recognition process jointly recognize the audio to be recognized to obtain at least two initial recognition results; the initial recognition results are transmitted to the rescore process and scored by the gpt language model in the rescore process to obtain the gpt language model score; the gpt language model score is transmitted to the ASR recognition process and replaces the ngram language model score in the ASR recognition process; the sums of the gpt language model score and the acoustic model score in the ASR recognition process are sorted, and the top-ranked recognition result among the sorted results is taken as the final recognition result.
  • The main purpose of this application is to solve the problem of data sparsity by using the gpt language model.
  • Referring to FIG. 2, it is a functional block diagram of the neural-network-based automatic speech recognition apparatus of the present application.
  • The apparatus 100 described in this application can be installed in an electronic device.
  • The neural-network-based automatic speech recognition apparatus may include: an initial recognition result acquisition module 101, a gpt language model score acquisition module 102, a language model score replacement module 103, and a final recognition result acquisition module 104.
  • The modules described in this application may also be called units, referring to a series of computer program segments that can be executed by the processor of the electronic device, can complete fixed functions, and are stored in the memory of the electronic device.
  • In this embodiment, the functions of each module/unit are as follows:
  • The initial recognition result acquisition module 101 is configured to jointly recognize the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
  • the gpt language model score acquisition module 102 is configured to transmit the recognition results to the rescore process and score them with the gpt language model in the rescore process to obtain the gpt language model score;
  • the language model score replacement module 103 is configured to transmit the gpt language model score to the ASR recognition process and replace the ngram language model score in the ASR recognition process;
  • the final recognition result acquisition module 104 is configured to sort the sums of the gpt language model score and the acoustic model score in the ASR recognition process and take the top-ranked recognition result among the sorted results as the final recognition result.
  • The initial recognition result acquisition module 101 includes an audio feature conversion module, a posterior probability acquisition module, a lattice acquisition module, and a recognition result acquisition module, wherein
  • the audio feature conversion module is configured to convert the audio to be recognized into audio features;
  • the posterior probability acquisition module is configured to obtain the posterior probability of each frame from the audio features;
  • the lattice acquisition module is configured to perform Viterbi decoding on the WFST graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice; and
  • the recognition result acquisition module is configured to obtain at least two initial recognition results from the lattice.
  • The audio feature conversion module includes a standardized audio acquisition module and an audio feature acquisition module, wherein
  • the standardized audio acquisition module is configured to frame and window the audio to be recognized to obtain standardized audio; and
  • the audio feature acquisition module is configured to extract features from the standardized audio with the MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
  • The posterior probability acquisition module, configured to obtain the posterior probability of each frame from the audio features, includes an audio feature vector sequence acquisition module, a phoneme-state time boundary determination module, and a posterior probability determination module, wherein
  • the audio feature vector sequence acquisition module is configured to extract the audio features into an audio feature vector sequence;
  • the phoneme-state time boundary determination module is configured to input the audio feature vector sequence into the pre-trained acoustic model to determine the time boundaries of the phoneme states; and
  • the posterior probability determination module is configured to extract all frames within each time boundary and average them over the frame length of the speech frame as the posterior probability of the speech frame.
  • The lattice generation module includes an ARPA format generation module, a WFST graph generation module, a WFST search space construction module, and a text recognition result determination module, wherein
  • the ARPA format generation module is configured to model the ngram language model to generate a language model in ARPA format;
  • the WFST graph generation module is configured to generate a WFST graph with the HCLG structure using the arpa2fst tool;
  • the WFST search space construction module is configured to construct a WFST search space from the Viterbi algorithm, the posterior probabilities, and the WFST graph; and
  • the text recognition result determination module is configured to find the optimal path with the highest matching probability in the WFST search space to obtain the text recognition result.
  • Each recognition result contains the text, the acoustic model score, the language model score, and the sum of the scores of the two models (acoustic model and language model).
  • In the embodiments of the present application, the acoustic model and language model scores can be read out of the lattice. The total score of each lattice output is sorted from small to large; backtracking the top-1 result gives the default ASR result output by the ngram language model, while backtracking the top-n results extracts the n-best information, which is output to the rescore process for re-scoring.
  • The rescore thread is the re-scoring module. Because of its GPU dependence it is designed as a separate process that works on the GPU and is accelerated with TensorRT; in addition to running the inference of the gpt language model, it is also responsible for the decoding thread's requests and responses.
  • Each time, the rescore module takes in text sentences of a fixed batch size and outputs the gpt language model score corresponding to each text.
  • Within a preset time, the sentences to be re-scored are assembled into batches of sentences to be re-scored;
  • For example, if the text to be scored is "[CLS] the dog is hairy [SEP]",
  • the text sequence input to the gpt model is "[CLS] the dog is hairy", 5 tokens in total, and for each current token the probability of the next token is read from the output probability matrix.
  • For example, the probability of the word "dog" in the probability distribution predicted at "the" is its corresponding output probability.
  • If the output log-probability sequence of the above input sequence is p1, p2, p3, p4, p5, accumulating p1 through p5 gives the gpt language model score.
  • The rescore process returns the gpt language model score results to the ASR decoding thread; the ASR decoding thread replaces the ngram score within the total score of each top-n sentence with the gpt language model score, then re-sorts the new total scores from small to large and takes the re-sorted top-1 ASR text as the final ASR recognition result (the top-ranked recognition result in the sorted results is the final recognition result), thereby improving ASR accuracy.
  • GPT rescoring is performed on the ASR recognition results.
  • The accuracy of the overall ASR recognition results is greatly improved, often by about 2 percentage points, and the character error rate is reduced by about 1 percentage point. In terms of latency, it only adds about 50 ms, so the impact on the latency of the overall speech recognition system is very limited.
  • The improvement in recognition accuracy is not just a simple drop in the character error rate; it also brings a better ASR experience.
  • For upstream systems that rely on ASR recognition results, such as voice customer-service robots, intelligent voice assistants, and smart speakers, this indirectly improves their effectiveness, service quality, and customer satisfaction.
  • In the embodiments of the present application, the acoustic model and the ngram language model in the ASR recognition process recognize the audio to be recognized and obtain at least two initial recognition results; the initial recognition results are transmitted to the rescore process and scored by the gpt language model in the rescore process to obtain the gpt language model score; the gpt language model score is transmitted to the ASR recognition process and replaces the ngram language model score in the ASR recognition process; the sums of the gpt language model score and the acoustic model score in the ASR recognition process are sorted, and the recognition result ranked first in the sorted results is taken as the final recognition result.
  • The main purpose of this application is to solve the problem of data sparsity by using the gpt language model.
  • Referring to FIG. 3, it is a schematic structural diagram of an electronic device implementing the neural-network-based automatic speech recognition method of the present application.
  • The electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and operable on the processor 10, such as a neural-network-based automatic speech recognition program 12.
  • The memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, removable hard disk, multimedia card, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disk, optical disc, etc.
  • In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1.
  • In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the electronic device 1.
  • Further, the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • The memory 11 can be used not only to store application software installed in the electronic device 1 and various data, such as the code of a data audit program, but also to temporarily store data that has been output or will be output.
  • The memory may store content that can be displayed by the electronic device or sent to other devices (e.g., headphones) for display or playback.
  • The memory may also store content received from other devices. This content may be displayed, played, or used by the electronic device to perform any necessary tasks or operations that may be performed by computer processors or other components in the electronic device and/or wireless access point.
  • The processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and so on.
  • The processor 10 is the control core (control unit) of the electronic device; it connects the components of the entire electronic device through various interfaces and lines, runs or executes the programs or modules stored in the memory 11 (such as the data audit program), and calls the data stored in the memory 11 to execute the various functions of the electronic device 1 and process data.
  • The electronic device may also include a chipset (not shown) for controlling communications between the one or more processors and one or more of the other components of the user device.
  • the electronic device may be based on Intel® architecture or ARM® architecture, and the processor and chipset may be from the Intel® processor and chipset family.
  • The one or more processors may also include one or more application-specific integrated circuits (ASICs) or application-specific standard products (ASSPs) for handling specific data processing functions or tasks.
  • The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • The bus is configured to realize connection and communication between the memory 11, the at least one processor 10, and other components.
  • network and I/O interfaces may include one or more communication interfaces or network interface devices to provide for data transfer between the electronic device and other devices (eg, web servers) via a network (not shown).
  • Communication interfaces may include, but are not limited to: Body Area Network (BAN), Personal Area Network (PAN), Wired Local Area Network (LAN), Wireless Local Area Network (WLAN), Wireless Wide Area Network (WWAN), and the like.
  • The user device may be coupled to the network via a wired connection.
  • the wireless system interface may include hardware or software to broadcast and receive messages using the Wi-Fi Direct standard and/or the IEEE 802.11 wireless standard, the Bluetooth standard, the Bluetooth low energy standard, the Wi-Gig standard, and/or any other wireless standards and/or combinations thereof.
  • a wireless system may include a transmitter and a receiver or transceiver capable of operating over a wide range of operating frequencies governed by the IEEE 802.11 wireless standard.
  • a communication interface may utilize acoustic, radio frequency, optical, or other signals to exchange data between the electronic device and other devices, such as access points, hosts, servers, routers, reading devices, and the like.
  • The network may include, but is not limited to, the Internet, a private network, a virtual private network, a wireless wide area network, a local area network, a metropolitan area network, a telephone network, and the like.
  • Displays may include, but are not limited to, liquid crystal displays, light-emitting diode displays, or E-Ink™ displays manufactured by E Ink Corp. of Cambridge, Massachusetts, USA.
  • the display can be used to display content to the user in the form of text, images, or video.
  • the display can also operate as a touch screen display, which can enable a user to initiate commands or operations by touching the screen with certain fingers or gestures.
  • FIG. 3 only shows an electronic device with some of its components. Those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation of the electronic device 1, which may include fewer or more components, combinations of certain components, or a different arrangement of components.
  • the electronic device 1 may also include a power supply (such as a battery) for supplying power to various components.
  • The power supply may be logically connected to the at least one processor 10 through a power management device, so that the power management device implements functions such as charge management, discharge management, and power consumption management.
  • The power supply may also include one or more DC or AC power sources, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and any other components.
  • the electronic device 1 may also include various sensors, bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • The electronic device 1 may also include a network interface. Optionally, the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface, a Bluetooth interface, etc.), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may further include a user interface.
  • The user interface may be a display or an input unit (such as a keyboard).
  • the user interface may also be a standard wired interface or a wireless interface.
  • The display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like.
  • the display may also be appropriately called a display screen or a display unit, and is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
  • The neural-network-based automatic speech recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions. When run in the processor 10, it can realize:
  • jointly recognizing the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
  • transmitting the initial recognition results to the rescore process and scoring them with the gpt language model in the rescore process to obtain the gpt language model score;
  • transmitting the gpt language model score to the ASR recognition process and replacing the ngram language model score in the ASR recognition process; and
  • sorting the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result among the sorted results as the final recognition result.
  • If the integrated modules/units of the electronic device 1 are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, and a read-only memory (ROM).
  • The embodiments of the present application also provide a computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the steps of the above neural-network-based automatic speech recognition method, specifically as follows:
  • jointly recognizing the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
  • transmitting the initial recognition results to the rescore process and scoring them with the gpt language model in the rescore process to obtain the gpt language model score;
  • transmitting the gpt language model score to the ASR recognition process and replacing the ngram language model score in the ASR recognition process; and
  • sorting the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result among the sorted results as the final recognition result.
  • The disclosed apparatuses, devices, and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division, and there may be other division methods in actual implementation.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software function modules.
  • These computer-executable program instructions can be loaded into a general-purpose computer, a special-purpose computer, a processor, or another programmable data processing device to produce a specific machine, so that the instructions executed on the computer, processor, or other programmable data processing device create components that implement one or more functions specified in a flowchart block or blocks.
  • These computer program products can also be stored in a computer-readable memory that can instruct a computer or other programmable data processing apparatus to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes instruction components implementing one or more functions specified in a flowchart block or blocks.
  • The embodiments of the present application may provide a computer program product, which includes a computer-usable medium having computer-readable program code or program instructions embodied therein, the computer-readable program code being adapted to be executed to realize one or more functions specified in a flowchart block or blocks.
  • Computer program instructions can also be loaded onto a computer or other programmable data processing device so that a series of operational elements or steps are executed on the computer or other programmable device; the instructions executed there provide elements or steps for implementing the functions specified in a flowchart block or blocks.
  • Blocks in the block diagrams or flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It should also be understood that each block in the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by dedicated hardware-based computer systems that perform the specified functions, elements, or steps, or by combinations of dedicated hardware and computer instructions.

Abstract

The present application relates to artificial intelligence, and provides a neural-network-based automatic speech recognition method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: recognizing the audio to be recognized by means of an acoustic model and an ngram language model in an ASR recognition process to obtain at least two preliminary recognition results; transmitting the preliminary recognition results to a rescore process, and scoring them by means of a gpt language model in the rescore process to obtain a gpt language model score; transmitting the gpt language model score to the ASR recognition process, and replacing the ngram language model score in the ASR recognition process; and sorting the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result in the sorted results as the final recognition result. The present application mainly aims to solve the problem of data sparsity by using the gpt language model.

Description

Automatic speech recognition method, device, and readable storage medium based on neural network
This application claims priority to Chinese patent application No. 202110706592.9, filed with the China Patent Office on June 24, 2021 and entitled "Automatic Speech Recognition Method, Device and Readable Storage Medium Based on Neural Network", the entire contents of which are incorporated in this application by reference.
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a neural-network-based automatic speech recognition method, apparatus, electronic device, and computer-readable storage medium.
Background Art
In traditional speech recognition there are two models: an acoustic model and a language model. The language model is generally an ngram language model; ngram, a probability model based on tuple statistics, can only capture the statistics immediately around a phrase and cannot learn deeper grammatical or semantic information. In addition, this word-frequency-based way of computing probabilities suffers from an overly large parameter space and severe data sparsity; in high-order ngram models in particular, model size and sparsity grow exponentially as the order increases. Although many techniques have been proposed to address the ngram model's inherent problems, such as pruning and backoff, they only weaken these problems and cannot resolve the fundamental limitations of the ngram language model.
At present, a common solution is to keep the original ngram model unchanged and, after WFST decoding, generate the top-n ASR recognition results, then re-score the generated sentences with a language model and re-rank them. The inventor realized that one can use an ngram model trained on a larger corpus, a higher-order ngram model, and so on; but a frequently encountered problem is that a more complex language model often leads to higher recognition latency, while a simple language model often cannot produce accurate recognition results.
To solve the above problems, a new automatic speech recognition scheme is urgently needed.
Technical Problem
The present application provides a neural-network-based automatic speech recognition method, apparatus, electronic device, and computer-readable storage medium, the main purpose of which is to solve the problem of data sparsity by using a gpt language model.
To achieve the above purpose, the neural-network-based automatic speech recognition method provided by the application is applied to an electronic device, and the method includes:
jointly recognizing the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
transmitting the initial recognition results to the rescore process and scoring them with the gpt language model in the rescore process to obtain the gpt language model score;
transmitting the gpt language model score to the ASR recognition process and replacing the ngram language model score in the ASR recognition process; and
sorting the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result among the sorted results as the final recognition result.
To solve the above problems, the application also provides a neural-network-based automatic speech recognition apparatus, including:
an initial recognition result acquisition module, configured to jointly recognize the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
a gpt language model score acquisition module, configured to transmit the recognition results to the rescore process and score them with the gpt language model in the rescore process to obtain the gpt language model score;
a language model score replacement module, configured to transmit the gpt language model score to the ASR recognition process and replace the ngram language model score in the ASR recognition process; and
a final recognition result acquisition module, configured to sort the sums of the gpt language model score and the acoustic model score in the ASR recognition process and take the top-ranked recognition result among the sorted results as the final recognition result.
To solve the above problems, the present application also provides an electronic device, which includes:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can perform the steps of the above neural-network-based automatic speech recognition method.
To solve the above problems, the present application also provides a computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the above neural-network-based automatic speech recognition method.
In the embodiments of the present application, the acoustic model and the ngram language model in the ASR recognition process jointly recognize the audio to be recognized to obtain at least two initial recognition results; the initial recognition results are transmitted to the rescore process and scored by the gpt language model in the rescore process to obtain the gpt language model score; the gpt language model score is transmitted to the ASR recognition process and replaces the ngram language model score in the ASR recognition process; the sums of the gpt language model score and the acoustic model score in the ASR recognition process are sorted, and the top-ranked recognition result among the sorted results is taken as the final recognition result. The main purpose of this application is to solve the problem of data sparsity by using the gpt language model.
Technical Solution
Type the technical solution description paragraph here.
Beneficial Effects
Type the beneficial effects description paragraph here.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the neural-network-based automatic speech recognition method provided by an embodiment of the present application;
FIG. 2 is a schematic block diagram of the neural-network-based automatic speech recognition apparatus provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing the neural-network-based automatic speech recognition method provided by an embodiment of the present application;
The realization of the purposes, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Best Mode for Carrying Out the Invention
Type the paragraph describing the best mode for carrying out the invention here.
Embodiments of the Present Invention
To solve the above problems, the present application provides a neural-network-based automatic speech recognition method. Referring to FIG. 1, it is a schematic flowchart of the neural-network-based automatic speech recognition method provided by an embodiment of the present application. The method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the neural-network-based automatic speech recognition method includes:
S1: jointly recognize the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results; each recognition result includes the acoustic model score, the ngram language model score, and the sum of the acoustic model score and the ngram language model score;
S2: transmit the initial recognition results to the rescore process and score them with the gpt language model in the rescore process to obtain the gpt language model score;
S3: transmit the gpt language model score to the ASR recognition process and replace the ngram language model score in the ASR recognition process;
S4: sort the sums of the gpt language model score and the acoustic model score in the ASR recognition process, and take the top-ranked recognition result among the sorted results as the final recognition result.
The above outlines the neural-network-based automatic speech recognition method of the present application. The method comprises an ASR (Automatic Speech Recognition) recognition process and a rescore (re-scoring) process. The ASR recognition process is the speech-to-text module, which follows the traditional GMM-HMM technical route; the modeling unit of the neural network is the HMM state, the input is acoustic features, and ASR recognition is completed through two modules, the acoustic model and language-model decoding. Before recognition starts, the acoustic model and the language model must be trained. The acoustic model is trained by neural-network backpropagation, which requires a large amount of audio and its transcribed text; the language model is trained on a large text corpus: a 3-gram language model is built to produce a language model in ARPA format, and tools such as arpa2fst are then used to generate a WFST graph with the HCLG structure, which serves as the input for language-model decoding.
In step S1, recognizing the audio to be recognized through the acoustic model and the ngram language model in the ASR recognition process and obtaining at least two recognition results includes the following steps:
S111: convert the audio to be recognized into audio features;
S112: obtain the posterior probability of each frame from the audio features;
S113: perform Viterbi decoding on the WFST graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice; and
S114: obtain the top-n recognition results from the lattice, where each recognition result contains the text, an acoustic model score, an ngram language model score, and the sum of the scores of the two models (acoustic model and ngram language model).
In step S111, converting the audio to be recognized into audio features includes the following steps:
Step S11101: Frame and window the audio to be recognized to obtain normalized audio; and
Step S11102: Extract features from the normalized audio with the MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
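A minimal sketch of steps S11101 to S11102, assuming the librosa library; the sampling rate, frame length, frame shift and number of coefficients are common choices assumed for illustration, not values mandated by the application:

```python
import librosa
import numpy as np

def extract_mfcc(path: str) -> np.ndarray:
    """Frame, window and MFCC-transform an audio file; returns (frames, coeffs)."""
    y, sr = librosa.load(path, sr=16000)   # 16 kHz is a common ASR rate (assumption)
    mfcc = librosa.feature.mfcc(
        y=y,
        sr=sr,
        n_mfcc=13,          # 13 cepstral coefficients per frame (typical choice)
        n_fft=400,          # 25 ms frame length at 16 kHz
        hop_length=160,     # 10 ms frame shift
        window="hamming",   # window applied to each frame
    )
    return mfcc.T           # one feature vector per frame
```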
In step S112, obtaining the posterior probability of each frame of the audio features includes the following steps:
Step S11201: Extract the audio features as a sequence of audio feature vectors;
Step S11202: Input the sequence of audio feature vectors into the pre-trained acoustic model to determine the time boundaries of the phoneme states;
Step S11203: According to the time boundaries, extract all frames within each time boundary and average them over the frame length of the speech frame, taking the average as the posterior probability of the speech frame.
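A sketch of steps S11201 to S11203, under the assumption that the pre-trained acoustic model is a PyTorch module returning per-frame logits over HMM states and that the time boundaries are given as (start, end) frame indices; both the model and the boundary format are hypothetical stand-ins for the application's own components:

```python
import torch

def frame_posteriors(model: torch.nn.Module,
                     features: torch.Tensor,
                     boundaries: list) -> list:
    """features: (frames, dims) audio feature-vector sequence.
    boundaries: [(start, end), ...] phoneme-state time boundaries.
    Returns one averaged posterior vector per bounded segment."""
    model.eval()
    with torch.no_grad():
        logits = model(features.unsqueeze(0)).squeeze(0)  # (frames, states)
        post = torch.softmax(logits, dim=-1)              # per-frame posteriors
    # Average all frames inside each time boundary (step S11203).
    return [post[start:end].mean(dim=0) for start, end in boundaries]
```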
In step S113, performing viterbi decoding on the wfst graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice includes the following steps:
Step S11301: Build the ngram language model into a language model in arpa format;
S11302: Use the arpa2fst tool to generate a wfst graph with HCLG structure;
S11303: Construct a wfst search space from the viterbi algorithm, the posterior probabilities and the wfst graph;
S11304: Search the weighted finite-state transducer (wfst) search space for the optimal path with the highest matching probability to obtain the text recognition result.
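For illustration, a self-contained Viterbi search over a toy state graph; production HCLG decoding runs over a far larger wfst with beam pruning, and hybrid systems usually divide posteriors by state priors before search, so this sketch only shows the underlying dynamic program with assumed toy inputs:

```python
import numpy as np

def viterbi(posteriors: np.ndarray, trans: np.ndarray, init: np.ndarray) -> list:
    """posteriors: (T, S) per-frame state posteriors used as emission scores;
    trans: (S, S) transition probabilities; init: (S,) initial probabilities.
    Returns the optimal (highest-probability) state path."""
    T, S = posteriors.shape
    logp = np.log(posteriors + 1e-10)
    logt = np.log(trans + 1e-10)
    score = np.log(init + 1e-10) + logp[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + logt      # cand[i, j]: best path ending in i, then i -> j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + logp[t]
    path = [int(score.argmax())]          # backtrace the optimal path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```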
Each recognition result contains the text, the acoustic model score, the language model score, and the sum of the acoustic model score and the language model score.
In the embodiment of the present application, the acoustic model and language model scores can be extracted from the lattice. The outputs of the lattice are sorted by total score from small to large; backtracking the top-1 result gives the default ASR result output by the ngram language model, while backtracking the top-n results extracts the n-best information, which is output to the rescore process for re-scoring.
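A minimal sketch of this n-best extraction, representing each lattice output as a hypothetical (text, acoustic score, ngram score) record; the ascending sort follows the small-to-large convention described above, under the assumption that lower totals rank better, as with lattice costs:

```python
from typing import NamedTuple

class Hypothesis(NamedTuple):
    text: str
    am_score: float   # acoustic model score
    lm_score: float   # ngram language model score

    @property
    def total(self) -> float:
        return self.am_score + self.lm_score

def nbest(lattice_outputs: list, n: int) -> list:
    """Sort lattice outputs by total score from small to large; the first entry
    is the default ngram ASR result, the first n form the n-best for rescoring."""
    return sorted(lattice_outputs, key=lambda h: h.total)[:n]
```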
In step S120, the rescore thread is the re-scoring module. Because of its gpu dependency it is designed as a separate process that runs on the gpu and is accelerated with TensorRT; besides performing the inference of the gpt language model, it is also responsible for the requests and responses of the decoding thread. Each time, the rescore module takes as input a batch of text sentences of fixed batch size and outputs the gpt language model score of each text.
Transmitting the recognition results to the rescore process and processing them with the gpt language model in the rescore process to obtain the gpt language model scores includes the following steps:
Step S121: Within a preset time, assemble the sentences awaiting rescore (re-scoring) into a batch of sentences to be re-scored;
Step S122: Run neural-network forward inference on the batch of sentences to be re-scored with the gpt language model;
Step S123: Accumulate the posterior probability of each word of a sentence to be re-scored and output it in logarithmic form to obtain the gpt language model score of the sentence to be re-scored.
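A sketch of this scoring step using the public GPT-2 from the transformers library as a stand-in for the application's own gpt language model (which, as described above, runs under TensorRT); GPT-2's BPE tokenization also differs from the word-level [CLS]/[SEP] example below, so the code illustrates only the log-probability accumulation:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def gpt_score(sentence: str) -> float:
    """Accumulated log-probability of each token given its predecessors,
    i.e. the per-word posteriors summed in logarithmic form."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids   # (1, T)
    with torch.no_grad():
        logits = model(ids).logits                             # (1, T, vocab)
    logp = torch.log_softmax(logits[0, :-1], dim=-1)           # token t predicts t+1
    token_logp = logp.gather(1, ids[0, 1:].unsqueeze(1))       # prob of actual next token
    return float(token_logp.sum())
```

In the batched setting of step S121, the sentences gathered within the preset time would be padded to the fixed batch size and scored in a single forward pass.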
In a specific embodiment of the present application, suppose the text to be scored is "[CLS] the dog is hairy [SEP]". The text sequence input to the gpt model is then "[CLS] the dog is hairy", five tokens in total. On the output probability matrix, each current token is assigned the probability of its next token: concretely, in the output probability distribution of the word "the", the next word is "dog", so the probability of "dog" in the distribution predicted for "the" is taken as its output probability. After this processing, assuming the output log-probability sequence of the above input sequence is p1, p2, p3, p4, p5, accumulating p1 through p5 gives the gpt language model score.
In steps S130 and S140, the rescore process returns the gpt language model scores to the ASR decoding thread. The ASR decoding thread replaces the ngram score within the total score of each top-n sentence with the gpt language model score, re-sorts the new total scores from small to large, and uses the top-1 ASR text after re-sorting as the final ASR recognition result; that is, the recognition result ranked first by the sum of the gpt language model score and the acoustic model score is taken as the final recognition result, thereby improving ASR accuracy.
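A sketch of this replacement and re-ranking step, reusing the hypothetical Hypothesis record from the n-best sketch above; gpt_scores is assumed to be the list returned by the rescore process, aligned one-to-one with the n-best list and expressed in the same sign convention as the lattice scores:

```python
def rerank(nbest_list: list, gpt_scores: list) -> str:
    """Replace each hypothesis's ngram score with its gpt score, re-sort the
    new totals from small to large, and return the top-1 text as the final
    ASR recognition result."""
    rescored = [
        Hypothesis(h.text, h.am_score, g)   # gpt score replaces the ngram score
        for h, g in zip(nbest_list, gpt_scores)
    ]
    return min(rescored, key=lambda h: h.total).text
```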
In the embodiment of the present application, GPT rescore is applied to the ASR recognition results. Experimental tests show a considerable improvement in the accuracy of the overall ASR results, often around 2 percentage points, with the character error rate dropping by about 1 percentage point. In terms of latency, it adds only about 50 ms, which has a very limited impact on the delay of the overall speech recognition system. The improvement in recognition accuracy is not merely a drop in character error rate; it brings a better ASR experience. Moreover, upstream systems that depend on ASR results, such as voice customer-service robots, intelligent voice assistants and smart speakers, all benefit indirectly, improving their effectiveness, service quality and customer satisfaction.
In the embodiment of the present application, the acoustic model and the ngram language model in the ASR recognition process jointly recognize the audio to be recognized, obtaining at least two initial recognition results; the initial recognition results are transmitted to the rescore process and scored with the gpt language model in the rescore process to obtain gpt language model scores; the gpt language model scores are transmitted back to the ASR recognition process, replacing the ngram language model scores in the ASR recognition process; the results are sorted by the sum of the gpt language model score and the acoustic model score in the ASR recognition process, and the top-ranked recognition result is taken as the final recognition result. The main purpose of the present application is to solve the problem of data sparsity by adopting the gpt language model.
FIG. 2 is a functional block diagram of the neural-network-based automatic speech recognition device of the present application.
The device 100 described in the present application can be installed in an electronic device. According to the functions realized, the neural-network-based automatic speech recognition device may include an initial recognition result acquisition module 101, a gpt language model score acquisition module 102, a language model score replacement module 103 and a final recognition result acquisition module 104. The modules described in the present application may also be called units; a module refers to a series of computer program segments that can be executed by the processor of the electronic device and accomplish a fixed function, and that are stored in the memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
The initial recognition result acquisition module 101 is used to recognize the audio to be recognized jointly with the acoustic model and the ngram language model in the ASR recognition process, obtaining at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
The gpt language model score acquisition module 102 is used to transmit the recognition results to the rescore process and score them with the gpt language model in the rescore process to obtain gpt language model scores;
The language model score replacement module 103 is used to transmit the gpt language model scores to the ASR recognition process and replace the ngram language model scores in the ASR recognition process;
The final recognition result acquisition module 104 is used to sort the results by the sum of the gpt language model score and the acoustic model score in the ASR recognition process and take the top-ranked recognition result among the sorted results as the final recognition result.
In the embodiment of the present application, the initial recognition result acquisition module 101 includes an audio feature conversion module, a posterior probability acquisition module, a lattice acquisition module and a module for acquiring two or more recognition results, wherein:
the audio feature conversion module is used to convert the audio to be recognized into audio features;
the posterior probability acquisition module is used to obtain the posterior probability of each frame of the audio features;
the lattice acquisition module is used to perform viterbi decoding on the wfst graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice;
the module for acquiring two or more recognition results is used to obtain at least two initial recognition results from the lattice.
The audio feature conversion module includes a normalized audio acquisition module and an audio feature acquisition module for the audio to be recognized, wherein:
the normalized audio acquisition module is used to frame and window the audio to be recognized to obtain normalized audio; and
the audio feature acquisition module for the audio to be recognized is used to extract features from the normalized audio with the MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
The posterior probability acquisition module, used to obtain the posterior probability of each frame of the audio features, includes an audio feature vector sequence acquisition module, a phoneme-state time boundary determination module and a posterior probability determination module, wherein:
the audio feature vector sequence acquisition module is used to extract the audio features as a sequence of audio feature vectors;
the phoneme-state time boundary determination module is used to input the sequence of audio feature vectors into the pre-trained acoustic model to determine the time boundaries of the phoneme states;
the posterior probability determination module is used to extract, according to the time boundaries, all frames within each time boundary and average them over the frame length of the speech frame, taking the average as the posterior probability of the speech frame.
The lattice generation module includes an arpa format generation module, a wfst graph generation module, a wfst search space construction module and a text recognition result determination module, wherein:
the arpa format generation module is used to build the ngram language model into a language model in arpa format;
the wfst graph generation module is used to generate a wfst graph with HCLG structure using the arpa2fst tool;
the wfst search space construction module is used to construct a wfst search space from the viterbi algorithm, the posterior probabilities and the wfst graph;
the text recognition result determination module is used to search the wfst search space for the optimal path with the highest matching probability to obtain the text recognition result.
Here, wfst stands for weighted finite-state transducer.
In the embodiment of the present application, each recognition result contains the text, the acoustic model score, the language model score, and the sum of the scores of the two models (acoustic model and language model).
In the embodiment of the present application, the acoustic model and language model scores can be extracted from the lattice. The outputs of the lattice are sorted by total score from small to large; backtracking the top-1 result gives the default ASR result output by the ngram language model, while backtracking the top-n results extracts the n-best information, which is output to the rescore process for re-scoring.
In the gpt language model score acquisition module 102, the rescore thread is the re-scoring module. Because of its gpu dependency it is designed as a separate process that runs on the gpu and is accelerated with TensorRT; besides performing the inference of the gpt language model, it is also responsible for the requests and responses of the decoding thread. Each time, the rescore module takes as input a batch of text sentences of fixed batch size and outputs the gpt language model score of each text.
In the gpt language model score acquisition module 102, within a preset time, the sentences awaiting rescore (re-scoring) are assembled into a batch of sentences to be re-scored;
neural-network forward inference is run on the batch of sentences to be re-scored with the gpt language model;
the posterior probability of each word of a sentence to be re-scored is accumulated and output in logarithmic form to obtain the gpt language model score of the sentence to be re-scored.
In a specific embodiment of the present application, suppose the text to be scored is "[CLS] the dog is hairy [SEP]". The text sequence input to the gpt model is then "[CLS] the dog is hairy", five tokens in total. On the output probability matrix, each current token is assigned the probability of its next token: concretely, in the output probability distribution of the word "the", the next word is "dog", so the probability of "dog" in the distribution predicted for "the" is taken as its output probability. After this processing, assuming the output log-probability sequence of the above input sequence is p1, p2, p3, p4, p5, accumulating p1 through p5 gives the gpt language model score.
In the language model score replacement module 103 and the final recognition result acquisition module 104, the rescore process returns the gpt language model scores to the ASR decoding thread. The ASR decoding thread replaces the ngram score within the total score of each top-n sentence with the gpt language model score, re-sorts the new total scores from small to large, and uses the top-1 ASR text after re-sorting as the final ASR recognition result (the recognition result ranked first among the sorted results is the final recognition result), thereby improving ASR accuracy.
In the embodiment of the present application, GPT rescore is applied to the ASR recognition results. Experimental tests show a considerable improvement in the accuracy of the overall ASR results, often around 2 percentage points, with the character error rate dropping by about 1 percentage point. In terms of latency, it adds only about 50 ms, which has a very limited impact on the delay of the overall speech recognition system. The improvement in recognition accuracy is not merely a drop in character error rate; it brings a better ASR experience. Moreover, upstream systems that depend on ASR results, such as voice customer-service robots, intelligent voice assistants and smart speakers, all benefit indirectly, improving their effectiveness, service quality and customer satisfaction.
In the embodiment of the present application, the acoustic model and the ngram language model in the ASR recognition process jointly recognize the audio to be recognized, obtaining at least two initial recognition results; the initial recognition results are transmitted to the rescore process and scored with the gpt language model in the rescore process to obtain gpt language model scores; the gpt language model scores are transmitted back to the ASR recognition process, replacing the ngram language model scores in the ASR recognition process; the results are sorted by the sum of the gpt language model score and the acoustic model score in the ASR recognition process, and the top-ranked recognition result is taken as the final recognition result. The main purpose of the present application is to solve the problem of data sparsity by adopting the gpt language model.
FIG. 3 is a schematic structural diagram of an electronic device implementing the neural-network-based automatic speech recognition method of the present application.
The electronic device 1 may include a processor 10, a memory 11 and a bus, and may further include a computer program stored in the memory 11 and executable on the processor 10, such as a neural-network-based automatic speech recognition program 12.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disk, multimedia card, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disk, optical disc, and so on. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, for example a removable hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, for example a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the electronic device 1. Further, the memory 11 may include both an internal storage unit of the electronic device 1 and an external storage device. The memory 11 can be used not only to store application software installed on the electronic device 1 and various kinds of data, such as the code of a data audit program, but also to temporarily store data that has been or will be output. The memory may store content that can be displayed by the electronic device or sent to other devices (for example, headphones) to be displayed or played by those devices. The memory may also store content received from other devices; that content may be displayed, played or used by the electronic device to perform any necessary task or operation implementable by a computer processor or other component in the electronic device and/or a wireless access point.
In some embodiments, the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple packaged integrated circuits with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and so on. The processor 10 is the control unit of the electronic device: it connects the components of the whole electronic device with various interfaces and lines, and performs the various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 11 (for example, a data audit program) and calling data stored in the memory 11. The electronic device may also include a chipset (not shown) for controlling communication between the one or more processors and one or more of the other components of the user equipment. In particular embodiments, the electronic device may be based on an Intel® architecture or an ARM® architecture, and the processor and chipset may be from the Intel® processor and chipset families. The one or more processors may also include one or more application-specific integrated circuits (ASICs) or application-specific standard products (ASSPs) for handling specific data processing functions or tasks.
The bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. The bus is configured to realize connection and communication between the memory 11, the at least one processor 10, and other components.
In addition, the network and I/O interfaces may include one or more communication interfaces or network interface devices to provide data transfer between the electronic device and other devices (for example, a network server) via a network (not shown). The communication interfaces may include, but are not limited to, a body area network (BAN), a personal area network (PAN), a wired local area network (LAN), a wireless local area network (WLAN), a wireless wide area network (WWAN), and so on. The user equipment may be coupled to the network via a wired connection. Alternatively, the wireless system interface may include hardware or software to broadcast and receive messages using the Wi-Fi Direct standard and/or the IEEE 802.11 wireless standard, the Bluetooth standard, the Bluetooth Low Energy standard, the WiGig standard, and/or any other wireless standard and/or combinations thereof.
The wireless system may include a transmitter and a receiver, or a transceiver capable of operating over a wide range of the operating frequencies governed by the IEEE 802.11 wireless standard. The communication interface may use acoustic, radio-frequency, optical or other signals to exchange data between the electronic device and other devices such as access points, hosts, servers, routers, reading devices and the like. The network may include, but is not limited to, the Internet, a private network, a virtual private network, a wireless wide area network, a local area network, a metropolitan area network, a telephone network, and so on.
The display may include, but is not limited to, a liquid crystal display, a light-emitting diode display, or an E-Ink™ display manufactured by E Ink Corp. of Cambridge, Massachusetts, USA. The display can be used to present content to the user in the form of text, images or video. In particular instances, the display can also operate as a touch-screen display, enabling the user to initiate commands or operations by touching the screen with certain fingers or gestures.
FIG. 3 shows only an electronic device with some of its components; those skilled in the art will understand that the structure shown in FIG. 3 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for powering the components. Preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management and power-consumption management are implemented through the power management device. The power supply may further include one or more DC or AC power sources, a recharging device, a power-failure detection circuit, a power converter or inverter, a power status indicator, and any other component. The electronic device 1 may further include various sensors, a Bluetooth module, a Wi-Fi module, and so on, which are not described in detail here.
Further, the electronic device 1 may also include a network interface. Optionally, the network interface may include a wired interface and/or a wireless interface (for example, a WI-FI interface, a Bluetooth interface, etc.), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further include a user interface. The user interface may be a display or an input unit (for example, a keyboard); optionally, the user interface may also be a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, and so on. The display may also appropriately be called a display screen or a display unit, and is used to display the information processed in the electronic device 1 and to display a visualized user interface.
It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
The neural-network-based automatic speech recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions which, when run on the processor 10, can realize:
recognizing the audio to be recognized jointly with the acoustic model and the ngram language model in the ASR recognition process, obtaining at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
transmitting the initial recognition results to the rescore process and scoring them with the gpt language model in the rescore process to obtain gpt language model scores;
transmitting the gpt language model scores to the ASR recognition process and replacing the ngram language model scores in the ASR recognition process;
sorting the results by the sum of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result among the sorted results as the final recognition result.
Specifically, for the implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which is not repeated here.
Further, if the modules/units integrated in the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, or a read-only memory (ROM).
In an embodiment of the present application, a computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by a processor in an electronic device to realize the steps of the neural-network-based automatic speech recognition method described above, specifically as follows:
recognizing the audio to be recognized jointly with the acoustic model and the ngram language model in the ASR recognition process, obtaining at least two initial recognition results, where each recognition result includes an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
transmitting the initial recognition results to the rescore process and scoring them with the gpt language model in the rescore process to obtain gpt language model scores;
transmitting the gpt language model scores to the ASR recognition process and replacing the ngram language model scores in the ASR recognition process;
sorting the results by the sum of the gpt language model score and the acoustic model score in the ASR recognition process, and taking the top-ranked recognition result among the sorted results as the final recognition result.
In the several embodiments provided in this application, it should be understood that the disclosed devices, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into modules is only a division by logical function, and other divisions are possible in actual implementation. The computer-readable storage medium may be non-volatile or volatile.
The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
Certain embodiments of the present application are described above with reference to block diagrams and flowcharts of systems and methods and/or computer program products according to exemplary embodiments of the application. It should be understood that one or more blocks of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, can each be implemented by computer-executable program instructions. Likewise, according to some embodiments of the present application, some blocks in the block diagrams and flowcharts may not need to be executed in the order presented, or may not need to be executed at all.
These computer-executable program instructions may be loaded onto a general-purpose computer, a special-purpose computer, a processor or another programmable data processing apparatus to produce a particular machine, such that the instructions executed on the computer, processor or other programmable data processing apparatus create means for implementing the function or functions specified in the flowchart block or blocks. These computer program products may also be stored in a computer-readable memory that can direct a computer or another programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function or functions specified in the flowchart block or blocks. For example, embodiments of the present application may provide a computer program product comprising a computer-usable medium having computer-readable program code or program instructions embodied therein, the computer-readable program code adapted to be executed to implement the function or functions specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or another programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide elements or steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams or flowcharts support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It should also be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, can be implemented by special-purpose hardware-based computer systems that perform the specified functions, elements or steps, or by combinations of special-purpose hardware and computer instructions.
Although certain embodiments of the present application have been described in connection with what are presently considered to be the most practical and varied embodiments, it should be understood that the application is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. Although specific terms are used herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
It will be apparent to those skilled in the art that the present application is not limited to the details of the exemplary embodiments described above and that it can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application.
Therefore, the embodiments should be regarded in all respects as exemplary and not restrictive. The scope of the application is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalents of the claims are intended to be embraced in the application. No reference sign in a claim should be construed as limiting the claim concerned.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.

Claims (20)

1. A neural-network-based automatic speech recognition method applied to an electronic device, wherein the method comprises:
    recognizing audio to be recognized jointly with an acoustic model and an ngram language model in an ASR recognition process, obtaining at least two initial recognition results, wherein each recognition result comprises an acoustic model score, an ngram language model score, and a sum of the acoustic model score and the ngram language model score;
    transmitting the initial recognition results to a rescore process and scoring them with a gpt language model in the rescore process to obtain gpt language model scores;
    transmitting the gpt language model scores to the ASR recognition process and replacing the ngram language model scores in the ASR recognition process;
    sorting the results by the sum of the gpt language model score and the acoustic model score in the ASR recognition process, and taking a top-ranked recognition result among the sorted results as a final recognition result.
2. The neural-network-based automatic speech recognition method according to claim 1, wherein
    recognizing the audio to be recognized jointly with the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results comprises the following steps:
    converting the audio to be recognized into audio features;
    obtaining a posterior probability of each frame of the audio features;
    performing, according to the per-frame posterior probabilities, viterbi decoding on a wfst graph generated by the ngram language model to generate a lattice; and
    obtaining at least two initial recognition results from the lattice.
3. The neural-network-based automatic speech recognition method according to claim 2, wherein converting the audio to be recognized into audio features comprises the following steps:
    framing and windowing the audio to be recognized to obtain normalized audio; and
    extracting features from the normalized audio with an MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
4. The neural-network-based automatic speech recognition method according to claim 2, wherein obtaining the posterior probability of each frame of the audio features comprises the following steps:
    extracting the audio features as a sequence of audio feature vectors;
    inputting the sequence of audio feature vectors into a pre-trained acoustic model to determine time boundaries of phoneme states;
    extracting, according to the time boundaries, all frames within each time boundary and averaging them over the frame length of the speech frame, taking the average as the posterior probability of the speech frame.
5. The neural-network-based automatic speech recognition method according to claim 2, wherein
    performing viterbi decoding on the wfst graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice comprises the following steps:
    building the ngram language model into a language model in arpa format;
    generating a wfst graph with HCLG structure using an arpa2fst tool;
    constructing a wfst search space from the viterbi algorithm, the posterior probabilities and the wfst graph;
    searching the wfst search space for an optimal path with the highest matching probability to obtain a text recognition result.
6. The neural-network-based automatic speech recognition method according to claim 1, wherein
    transmitting the recognition results to the rescore process and processing them with the gpt language model in the rescore process to obtain the gpt language model scores comprises the following steps:
    assembling, within a preset time, sentences to be re-scored into a batch of sentences to be re-scored;
    running neural-network forward inference on the batch of sentences to be re-scored with the gpt language model;
    accumulating the posterior probability of each word of a sentence to be re-scored and outputting it in logarithmic form to obtain the gpt language model score of the sentence to be re-scored.
7. A neural-network-based automatic speech recognition device, wherein the device comprises:
    an initial recognition result acquisition module for recognizing audio to be recognized jointly with an acoustic model and an ngram language model in an ASR recognition process, obtaining at least two initial recognition results, wherein each recognition result comprises an acoustic model score, an ngram language model score, and a sum of the acoustic model score and the ngram language model score;
    a gpt language model score acquisition module for transmitting the recognition results to a rescore process and scoring them with a gpt language model in the rescore process to obtain gpt language model scores;
    a language model score replacement module for transmitting the gpt language model scores to the ASR recognition process and replacing the ngram language model scores in the ASR recognition process;
    a final recognition result acquisition module for sorting the results by the sum of the gpt language model score and the acoustic model score in the ASR recognition process and taking a top-ranked recognition result among the sorted results as a final recognition result.
8. The neural-network-based automatic speech recognition device according to claim 7, wherein
    the initial recognition result acquisition module comprises an audio feature conversion module, a posterior probability acquisition module, a lattice acquisition module and a module for acquiring two or more recognition results, wherein
    the audio feature conversion module is used to convert the audio to be recognized into audio features;
    the posterior probability acquisition module is used to obtain a posterior probability of each frame of the audio features;
    the lattice acquisition module is used to perform viterbi decoding on a wfst graph generated by the ngram language model according to the per-frame posterior probabilities to generate a lattice;
    the module for acquiring two or more recognition results is used to obtain at least two initial recognition results from the lattice.
9. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the steps of a neural-network-based automatic speech recognition method, wherein
    the neural-network-based automatic speech recognition method comprises:
    recognizing audio to be recognized jointly with an acoustic model and an ngram language model in an ASR recognition process, obtaining at least two initial recognition results, wherein each recognition result comprises an acoustic model score, an ngram language model score, and a sum of the acoustic model score and the ngram language model score;
    transmitting the initial recognition results to a rescore process and scoring them with a gpt language model in the rescore process to obtain gpt language model scores;
    transmitting the gpt language model scores to the ASR recognition process and replacing the ngram language model scores in the ASR recognition process;
    sorting the results by the sum of the gpt language model score and the acoustic model score in the ASR recognition process, and taking a top-ranked recognition result among the sorted results as a final recognition result.
10. The electronic device according to claim 9, wherein
    recognizing the audio to be recognized jointly with the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results comprises the following steps:
    converting the audio to be recognized into audio features;
    obtaining a posterior probability of each frame of the audio features;
    performing, according to the per-frame posterior probabilities, viterbi decoding on a wfst graph generated by the ngram language model to generate a lattice; and
    obtaining at least two initial recognition results from the lattice.
11. The electronic device according to claim 10, wherein converting the audio to be recognized into audio features comprises the following steps:
    framing and windowing the audio to be recognized to obtain normalized audio; and
    extracting features from the normalized audio with an MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
  12. The electronic device of claim 10, wherein obtaining the posterior probability of each frame in the audio features according to the audio features comprises the following steps:
    extracting the audio features as a sequence of audio feature vectors;
    inputting the sequence of audio feature vectors into a pre-trained acoustic model to determine the time boundaries of phoneme states; and
    extracting, according to the time boundaries, all frames within the time boundaries, and averaging them over the frame length of the speech frame to obtain the posterior probability of the speech frame.
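Given per-frame posteriors and the time boundaries from the acoustic model, the averaging step is a few lines of numpy; the array shapes here are assumptions for illustration:

    import numpy as np

    def boundary_posteriors(frame_post, boundaries):
        # frame_post: (T, S) per-frame posteriors from the acoustic model;
        # boundaries: list of (start, end) frame indices, end exclusive.
        segments = []
        for start, end in boundaries:
            seg = frame_post[start:end]        # all frames within the boundary
            segments.append(seg.mean(axis=0))  # average over the frame span
        return np.stack(segments)              # (num_boundaries, S)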
  13. The electronic device of claim 10, wherein
    performing viterbi decoding, according to the posterior probability of each frame, on the wfst graph generated by the ngram language model to generate a lattice graph comprises the following steps:
    modeling the ngram language model to generate a language model in arpa format;
    generating a wfst graph with an hclg structure using the arpa2fst tool;
    constructing a wfst search space according to the viterbi decoding, the posterior probabilities, and the wfst graph; and
    searching the wfst search space for the optimal path with the highest matching probability to obtain a text recognition result.
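In a Kaldi-style toolchain, one plausible (not claimed) realization of the graph-building steps uses arpa2fst followed by Kaldi's utils/mkgraph.sh to compose the hclg graph; the file paths below are placeholders:

    import subprocess

    # Placeholder paths; assumes Kaldi binaries and scripts are available.
    # arpa2fst converts the arpa-format ngram LM into a grammar wfst (G),
    # and mkgraph.sh composes H, C, L and G into the hclg decoding graph.
    subprocess.run(["arpa2fst", "--disambig-symbol=#0",
                    "--read-symbol-table=data/lang/words.txt",
                    "lm.arpa", "data/lang/G.fst"], check=True)
    subprocess.run(["utils/mkgraph.sh", "data/lang",
                    "exp/model", "exp/model/graph"], check=True)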
  14. The electronic device of claim 9, wherein
    transmitting the recognition results to the rescore process and processing them through the gpt language model in the rescore process to obtain the gpt language model score comprises the following steps:
    assembling, within a preset time, the sentences to be rescored into batches of sentences to be rescored;
    performing neural-network forward inference on the batches of sentences to be rescored through the gpt language model; and
    accumulating the posterior probability of each word in a sentence to be rescored, and outputting the sum in logarithmic form to obtain the gpt language model score of the sentence to be rescored.
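A sketch of the scoring steps, assuming a HuggingFace GPT-2 checkpoint as a stand-in for the gpt language model (the patent names no specific implementation); the sentence score is the sum of per-token log probabilities, and true batching with padding and attention masks is elided for brevity:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    @torch.no_grad()
    def gpt_scores(sentences):
        scores = []
        for text in sentences:
            ids = tok(text, return_tensors="pt").input_ids
            logits = model(ids).logits                  # (1, T, vocab)
            logp = torch.log_softmax(logits[:, :-1], dim=-1)
            tgt = ids[:, 1:].unsqueeze(-1)              # next-token targets
            token_logp = logp.gather(2, tgt).squeeze(-1)
            scores.append(token_logp.sum().item())      # log-domain sum
        return scores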
  15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements a neural-network-based automatic speech recognition method comprising:
    recognizing audio to be recognized jointly through an acoustic model and an ngram language model in an ASR recognition process to obtain at least two initial recognition results, wherein each recognition result comprises an acoustic model score, an ngram language model score, and the sum of the acoustic model score and the ngram language model score;
    transmitting the initial recognition results to a rescore process, and scoring them through a gpt language model in the rescore process to obtain a gpt language model score;
    transmitting the gpt language model score to the ASR recognition process, and replacing the ngram language model score in the ASR recognition process with it; and
    sorting, in the ASR recognition process, the sums of the gpt language model score and the acoustic model score, and taking the top-ranked recognition result as the final recognition result.
  16. The computer-readable storage medium of claim 15, wherein
    recognizing the audio to be recognized jointly through the acoustic model and the ngram language model in the ASR recognition process to obtain at least two initial recognition results comprises the following steps:
    converting the audio to be recognized into audio features;
    obtaining the posterior probability of each frame in the audio features according to the audio features;
    performing viterbi decoding, according to the posterior probability of each frame, on the wfst graph generated by the ngram language model to generate a lattice graph; and
    obtaining at least two initial recognition results according to the lattice graph.
  17. The computer-readable storage medium of claim 16, wherein converting the audio to be recognized into audio features comprises the following steps:
    framing and windowing the audio to be recognized to obtain normalized audio; and
    performing feature extraction on the normalized audio through an MFCC feature extraction algorithm to obtain the audio features of the audio to be recognized.
  18. The computer-readable storage medium of claim 16, wherein obtaining the posterior probability of each frame in the audio features according to the audio features comprises the following steps:
    extracting the audio features as a sequence of audio feature vectors;
    inputting the sequence of audio feature vectors into a pre-trained acoustic model to determine the time boundaries of phoneme states; and
    extracting, according to the time boundaries, all frames within the time boundaries, and averaging them over the frame length of the speech frame to obtain the posterior probability of the speech frame.
  19. The computer-readable storage medium of claim 16, wherein
    performing viterbi decoding, according to the posterior probability of each frame, on the wfst graph generated by the ngram language model to generate a lattice graph comprises the following steps:
    modeling the ngram language model to generate a language model in arpa format;
    generating a wfst graph with an hclg structure using the arpa2fst tool;
    constructing a wfst search space according to the viterbi decoding, the posterior probabilities, and the wfst graph; and
    searching the wfst search space for the optimal path with the highest matching probability to obtain a text recognition result.
  20. The computer-readable storage medium of claim 15, wherein
    transmitting the recognition results to the rescore process and processing them through the gpt language model in the rescore process to obtain the gpt language model score comprises the following steps:
    assembling, within a preset time, the sentences to be rescored into batches of sentences to be rescored;
    performing neural-network forward inference on the batches of sentences to be rescored through the gpt language model; and
    accumulating the posterior probability of each word in a sentence to be rescored, and outputting the sum in logarithmic form to obtain the gpt language model score of the sentence to be rescored.
PCT/CN2022/071220 2021-06-24 2022-01-11 Automatic speech recognition method based on neural network, device, and readable storage medium WO2022267451A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110706592.9A CN113450805B (en) 2021-06-24 2021-06-24 Automatic speech recognition method and device based on neural network and readable storage medium
CN202110706592.9 2021-06-24

Publications (1)

Publication Number Publication Date
WO2022267451A1 (en)

Family

ID=77812508

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071220 WO2022267451A1 (en) 2021-06-24 2022-01-11 Automatic speech recognition method based on neural network, device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN113450805B (en)
WO (1) WO2022267451A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113450805B (en) * 2021-06-24 2022-05-17 平安科技(深圳)有限公司 Automatic speech recognition method and device based on neural network and readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102386854B1 (en) * 2015-08-20 2022-04-13 삼성전자주식회사 Apparatus and method for speech recognition based on unified model
US10706852B2 (en) * 2015-11-13 2020-07-07 Microsoft Technology Licensing, Llc Confidence features for automated speech recognition arbitration
CN110517693B (en) * 2019-08-01 2022-03-04 出门问问(苏州)信息科技有限公司 Speech recognition method, speech recognition device, electronic equipment and computer-readable storage medium
US10916242B1 (en) * 2019-08-07 2021-02-09 Nanjing Silicon Intelligence Technology Co., Ltd. Intent recognition method based on deep learning network
US11961511B2 (en) * 2019-11-08 2024-04-16 Vail Systems, Inc. System and method for disambiguation and error resolution in call transcripts
CN111402894B (en) * 2020-03-25 2023-06-06 北京声智科技有限公司 Speech recognition method and electronic equipment
CN112699683A (en) * 2020-12-31 2021-04-23 大唐融合通信股份有限公司 Named entity identification method and device fusing neural network and rule

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150179169A1 (en) * 2013-12-19 2015-06-25 Vijay George John Speech Recognition By Post Processing Using Phonetic and Semantic Information
CN104575490A (en) * 2014-12-30 2015-04-29 苏州驰声信息科技有限公司 Spoken language pronunciation detecting and evaluating method based on deep neural network posterior probability algorithm
CN110797026A (en) * 2019-09-17 2020-02-14 腾讯科技(深圳)有限公司 Voice recognition method, device and storage medium
CN112349289A (en) * 2020-09-28 2021-02-09 北京捷通华声科技股份有限公司 Voice recognition method, device, equipment and storage medium
CN112562640A (en) * 2020-12-01 2021-03-26 北京声智科技有限公司 Multi-language speech recognition method, device, system and computer readable storage medium
CN113450805A (en) * 2021-06-24 2021-09-28 平安科技(深圳)有限公司 Automatic speech recognition method and device based on neural network and readable storage medium

Also Published As

Publication number Publication date
CN113450805B (en) 2022-05-17
CN113450805A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
US11416681B2 (en) Method and apparatus for determining a reply statement to a statement based on a sum of a probability of the reply statement being output in response to the statement and a second probability in which the statement is output in response to the statement and further based on a terminator
CN107301865B (en) Method and device for determining interactive text in voice input
US10629193B2 (en) Advancing word-based speech recognition processing
CN112185348B (en) Multilingual voice recognition method and device and electronic equipment
US8296142B2 (en) Speech recognition using dock context
CN110444191A (en) A kind of method, the method and device of model training of prosody hierarchy mark
CN110765759B (en) Intention recognition method and device
WO2022227211A1 (en) Bert-based multi-intention recognition method for discourse, and device and readable storage medium
CN110570840B (en) Intelligent device awakening method and device based on artificial intelligence
US9811517B2 (en) Method and system of adding punctuation and establishing language model using a punctuation weighting applied to chinese speech recognized text
US11830482B2 (en) Method and apparatus for speech interaction, and computer storage medium
JP2001188558A (en) Device and method for voice recognition, computer system, and storage medium
EP3444806A1 (en) Voice recognition-based decoding method and device
CN112466289A (en) Voice instruction recognition method and device, voice equipment and storage medium
CN110751234A (en) OCR recognition error correction method, device and equipment
WO2023065633A1 (en) Abnormal semantic truncation detection method and apparatus, and device and medium
CN113326702A (en) Semantic recognition method and device, electronic equipment and storage medium
WO2023134069A1 (en) Entity relationship identification method, device, and readable storage medium
WO2022267451A1 (en) Automatic speech recognition method based on neural network, device, and readable storage medium
WO2022260790A1 (en) Error correction in speech recognition
CN114360510A (en) Voice recognition method and related device
CN113326367A (en) Task type dialogue method and system based on end-to-end text generation
CN114398896A (en) Information input method and device, electronic equipment and computer readable storage medium
CN113553833B (en) Text error correction method and device and electronic equipment
CN114662484A (en) Semantic recognition method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22826962

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE