CN117043742A - Using speech-to-text data in training text-to-speech models - Google Patents

Using speech-to-text data in training text-to-speech models

Info

Publication number
CN117043742A
Authority
CN
China
Prior art keywords
text
region
specific
program instructions
computer
Prior art date
Legal status
Pending
Application number
CN202280023555.0A
Other languages
Chinese (zh)
Inventor
A·福瑞德
V·K·瑟特姆普迪
S·佩里帕
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp
Publication of CN117043742A


Classifications

    • G10L 13/08 — Speech synthesis; text-to-speech systems: text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme-to-phoneme translation, prosody generation or stress or intonation determination
    • G10L 13/033 — Methods for producing synthetic speech; speech synthesisers: voice editing, e.g. manipulating the voice of the synthesiser
    • G10L 13/06 — Elementary speech units used in speech synthesisers; concatenation rules
    • G06F 16/90332 — Querying: natural language query formulation or dialogue systems
    • G06N 20/00 — Machine learning
    • G06N 3/04 — Neural networks: architecture, e.g. interconnection topology
    • G06N 3/09 — Neural network learning methods: supervised learning
    • G06N 5/01 — Dynamic search techniques; heuristics; dynamic trees; branch-and-bound

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Machine Translation (AREA)

Abstract

A system and method provide text-to-speech output by: receiving user audio data; determining the user's region-specific pronunciation category from the audio data; determining text for a response to the user from the audio data; identifying a portion of the text that appears in a region-specific pronunciation dictionary; and using a phoneme string from the dictionary, selected according to the user's region-specific pronunciation category, for the text-to-speech output of that portion to the user.

Description

Using speech-to-text data in training text-to-speech models
Technical Field
The present invention relates generally to the use of speech-to-text (STT) data in training text-to-speech (TTS) models. More particularly, the present invention relates to selecting customized speech-to-text phoneme sequences for text-to-speech output.
Background
Accent classification models enable identification and classification of a speaker's accent from a minimal amount of audio data. Such a model evaluates the phonemes the speaker uses for keywords and identifies the user's accent by matching those keyword phonemes against a database of keyword phoneme sequences organized by accent classification.
A speech-to-text system receives audio data and generates text output by recognizing the audio phoneme sequences in the data and classifying the recognized phoneme sequences into specific words using one or more classification models.
A text-to-speech system generates audio output by scanning a text data string and matching portions of the text with database entries containing default phoneme sequences for the identified text portions. Such a system then generates a synthesized speech output of the full phoneme sequence associated with the text, including inserting appropriate silences between words and the silences associated with punctuation present in the original text.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of one or more embodiments of the disclosure. This summary is not intended to identify key or critical elements or to delineate any scope of the particular embodiments or any scope of the claims. Its sole purpose is to present concepts in a simplified form as a prelude to the more detailed description that is presented later. In one or more embodiments described herein, devices, systems, computer-implemented methods, apparatus, and/or computer program products enable automatic generation of text-to-speech responses reflecting a user's local pronunciation differences.
Aspects of the invention disclose methods, systems, and computer-readable media associated with providing text-to-speech output by: receiving user audio data; determining the user's region-specific pronunciation category from the audio data; determining text for a response to the user from the audio data; identifying a portion of the text that appears in a region-specific pronunciation dictionary; and using a phoneme string from the dictionary, selected according to the user's region-specific pronunciation category, for the text-to-speech output of that portion to the user.
According to an aspect of some embodiments of the presently disclosed subject matter, there is provided a computer-implemented method for providing text-to-speech output, the method comprising: receiving user audio data; determining, by one or more computer processors, a user region-specific pronunciation category from the audio data;
determining, by the one or more computer processors, text for a response to the user from the audio data; identifying, by the one or more computer processors, a portion of the text, wherein a region-specific pronunciation dictionary includes the portion; and
using, by the one or more computer processors, a phoneme string from the region-specific pronunciation dictionary, selected according to the user region-specific pronunciation category, for the text-to-speech output of the portion to the user.
Optionally, the method further comprises:
using, by the one or more computer processors, a default phoneme sequence in the text-to-speech output to the user for words from the text that are not present in the region-specific pronunciation dictionary.
Optionally, the method further comprises constructing the region-specific pronunciation dictionary by:
receiving, by the one or more computer processors, audio data from a plurality of speakers, the audio data including a domain-specific portion and a region-specific pronunciation for the domain-specific portion;
classifying, by the one or more computer processors, the audio data according to region-specific pronunciation;
determining, by the one or more computer processors, a most common region-specific pronunciation for the domain-specific portion; and
storing, by the one or more computer processors, the most common region-specific pronunciation of the domain-specific portion as the phoneme string for that domain-specific portion and region-specific pronunciation combination.
Optionally, the method further comprises: defining, by the one or more computer processors, the domain-specific portion.
Optionally, the method further comprises: converting, by the one or more computer processors, the audio data into text data; and scanning, by the one or more computer processors, the text data for domain-specific portions.
Optionally, the portion comprises at least one of a word, an n-gram, and a phrase.
Optionally, the method further comprises:
determining, by the one or more computer processors, user text from the audio data;
determining, by the one or more computer processors, a response from the user text;
scanning, by the one or more computer processors, the response for the domain portion; and
matching, by the one or more computer processors, the domain portion with a region-specific pronunciation dictionary entry.
According to an aspect of some embodiments of the presently disclosed subject matter, there is provided a computer program product for providing text-to-speech output, the computer program product comprising one or more computer-readable storage devices and program instructions collectively stored on the one or more computer-readable storage devices, the stored program instructions comprising:
program instructions for receiving user audio data;
program instructions for determining a user region specific pronunciation class from the audio data;
program instructions for determining text for a response to the user from the audio data;
program instructions for identifying a portion of the text, wherein a region-specific pronunciation dictionary includes the portion; and
program instructions for using a phoneme string from the region-specific pronunciation dictionary, selected according to the user region-specific pronunciation category, for the text-to-speech output of the portion to the user.
Optionally, the stored program instructions of the computer program product further comprise:
program instructions for using a default phoneme sequence in the text-to-speech output to the user for words from the text that are not present in the region-specific pronunciation dictionary.
Optionally, the stored program instructions of the computer program product further comprise program instructions for constructing the region-specific pronunciation dictionary by:
receiving audio data from a plurality of speakers, the audio data including a domain-specific portion and a region-specific pronunciation for the domain-specific portion;
classifying the audio data according to region-specific pronunciation;
determining the most common region-specific pronunciation for the domain-specific portion; and
storing the most common region-specific pronunciation of the domain-specific portion as the phoneme string for that domain-specific portion and region-specific pronunciation combination.
Optionally, the stored program instructions of the computer program product further comprise: program instructions for defining the domain-specific portion.
Optionally, the stored program instructions of the computer program product further comprise: program instructions for converting the audio data into text data; and program instructions for scanning the text data for domain-specific portions.
Optionally, the portion includes at least one of a word, an n-gram, and a phrase.
Optionally, the stored program instructions of the computer program product further comprise:
program instructions for determining user text from the audio data;
program instructions for determining a response based on the user text;
program instructions for scanning the response for the domain portion; and
program instructions for matching the domain portion with a region-specific pronunciation dictionary entry.
According to an aspect of some embodiments of the presently disclosed subject matter, there is provided a computer system for providing text-to-speech output, the computer system comprising:
one or more computer processors;
one or more computer-readable storage devices; and
program instructions stored on the one or more computer-readable storage devices for execution by the one or more computer processors, the stored program instructions comprising:
program instructions for receiving user audio data;
program instructions for determining a user region specific pronunciation class from the audio data;
program instructions for determining text for a response to the user from the audio data;
program instructions for identifying a portion of the text, wherein a region-specific pronunciation dictionary includes the portion; and program instructions for using a phoneme string from the region-specific pronunciation dictionary, selected according to the user region-specific pronunciation category, for the text-to-speech output of the portion to the user.
Optionally, the stored program instructions of the computer system further comprise:
program instructions for using a default phoneme sequence in the text-to-speech output to the user for words from the text that are not present in the region-specific pronunciation dictionary.
Optionally, the stored program instructions of the computer system further comprise program instructions for constructing the region-specific pronunciation dictionary by:
receiving audio data from a plurality of speakers, the audio data including a domain-specific portion and a region-specific pronunciation for the domain-specific portion;
classifying the audio data according to region-specific pronunciation;
determining the most common region-specific pronunciation for the domain-specific portion; and
storing the most common region-specific pronunciation of the domain-specific portion as the phoneme string for that domain-specific portion and region-specific pronunciation combination.
Optionally, the stored program instructions of the computer system further comprise: program instructions for defining the domain-specific portion.
Optionally, the stored program instructions of the computer system further comprise: program instructions for converting the audio data into text data; and program instructions for scanning the text data for domain-specific portions.
Optionally, the portion includes at least one of a word, an n-gram, and a phrase.
Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention.
Drawings
The foregoing and other objects, features, and advantages of the disclosure will be apparent from the following more particular descriptions of certain embodiments of the disclosure, as illustrated in the accompanying drawings, wherein like reference numbers generally refer to the same parts throughout the embodiments of the disclosure.
FIG. 1 provides a schematic diagram of a computing environment according to an embodiment of the invention.
Fig. 2 provides a flow chart depicting a sequence of operations according to an embodiment of the invention.
FIG. 3 illustrates a cloud computing environment according to an embodiment of the present invention.
FIG. 4 illustrates an abstract model layer, according to an embodiment of the invention.
Detailed Description
Some embodiments will be described in more detail with reference to the accompanying drawings, in which embodiments of the disclosure are shown. However, the present disclosure may be embodied in various forms and, thus, should not be construed as limited to the embodiments set forth herein.
Currently, speech-to-text (STT) and text-to-speech (TTS) systems require separate, lengthy training procedures, especially during domain adaptation. In training the STT model, care must be taken to capture users' pronunciations of domain terms. Training the TTS system separately requires attempting to find a one-size-fits-all "best" synthesized phoneme sequence for each domain term in the TTS output. The disclosed embodiments enable determination of region-specific phoneme sequences for domain terms from previously evaluated STT data. The disclosed embodiments provide a text-to-speech system that adapts domain-specific terms to the accent of the user with whom the system is interacting. This increases the familiarity and usability of TTS systems for users from different backgrounds with different dialects and accents. The disclosed embodiments provide a more comfortable system that adapts its pronunciation of otherwise unfamiliar words to the user.
As used herein, the term domain refers to a subset of words, or technical terms and phrases, from a particular language associated with a particular field, such as medical terms, engineering or other technical terms, industrial terms, slang, colloquialisms, local idioms, and the like. For any element of the domain, there may be multiple local variations in pronunciation, and in the actual words used, depending on the regional dialect and accent of the individual user. As an example, English pronunciation may differ depending on the user's country of origin, on regions within that country, and on whether English is the user's first language. In an embodiment, the system receives defined domain words from an administrator or other individual. In another embodiment, the system and method uses historical user inputs and available dictionaries to identify domains and define domain-specific words.
Aspects of the present invention relate generally to question-answering systems and, more particularly, to providing phoneme sequences for domain-specific words or phrases that match a user's local pronunciation, accent, or dialect when answering a user's question. In an embodiment, a question-answering (QA) system receives audio data including a question from a user, together with the user's local pronunciation differences, local accent, or dialect. The system uses a trained machine learning model to identify and classify the user's accent. The system uses a speech-to-text converter to convert the user's audio data into text, taking into account the user's recognized local pronunciation. The system evaluates the user's question using a decision tree or similar model and determines a response. The system scans the determined response to identify one or more portions of the response. For each identified portion, the method searches the region-specific pronunciation dictionary for entries that match the word or phrase of the portion and that correspond to the user's identified accent. The method formulates the response using the corresponding local variants of the identified portions. The method extracts the phoneme sequence of the local pronunciation of each portion corresponding to the user's recognized accent. The method uses a text-to-speech generator and the local-pronunciation phoneme sequences to generate audio data corresponding to the localized text response. The method provides the generated text-to-speech output to the user as audio output, with the identified portions pronounced in the user's accent.
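The substitution step of this flow can be illustrated with a short, self-contained Python sketch. The dictionary layout, accent labels, and default-phoneme stand-in below are assumptions for illustration only; the two "periodontal" phoneme strings are taken from the Table 1 discussion later in this document.

```python
# Illustrative sketch: look up accent-specific phonemes for dictionary-covered
# words and fall back to default phonemes everywhere else.

REGION_DICTIONARY = {
    # (portion, accent) -> locally preferred phoneme string
    ("periodontal", "en-US"): ".0px.2rY.0x.1dan.0txl",
    ("periodontal", "en-IN"): ".2pi.0x.2rY.0x.1dan.0txl",
}

def default_phonemes(word: str) -> str:
    # Stand-in for the TTS engine's default grapheme-to-phoneme conversion.
    return f"<default:{word}>"

def render_response(response_text: str, accent: str) -> str:
    """Assemble the overall phoneme sequence for a determined text response."""
    pieces = []
    for word in response_text.lower().split():
        # Prefer the region-specific pronunciation when the dictionary has one.
        pieces.append(REGION_DICTIONARY.get((word, accent), default_phonemes(word)))
    return " ".join(pieces)

print(render_response("your periodontal total is fifty dollars", "en-IN"))
```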
In an embodiment, the system and method receive audio data from a system user. The method transcribes the audio data using a speech-to-text model and then associates a corresponding portion of the audio data with each n-gram, word, and phrase of the speech-to-text output. The method then identifies the phoneme string of the audio data associated with each word of the text and identifies the user's accent using a model trained on labeled local pronunciations of common words of the target language. For example, training a system for use by English speakers includes training the model with labeled audio data containing local pronunciations of English words typically used when interacting with chat programs, voice programs, or other automated dialog systems. In this embodiment, the method receives the labeled training data, converts the speech data to text, and associates the identified phonemes with the speech-to-text output. The model uses the accent labels of the audio data when building network node weights adapted to receive such data and classify it according to the user's accent.
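A toy illustration of this training-data preparation follows. The word/phoneme alignment would come from the STT and phoneme analyses; here both lists are hard-coded, and every phoneme string other than the "periodontal" one is invented.

```python
# Pair each speech-to-text word with its aligned phoneme substring and keep
# the sample's accent label; such labeled pairs feed the accent classifier.

stt_words = ["deep", "cleaning", "periodontal"]
aligned_phonemes = [".1dip", ".1kli.0nIN", ".2pi.0x.2rY.0x.1dan.0txl"]  # toy values
accent_label = "en-IN"  # label attached to this audio sample

training_example = {
    "pairs": list(zip(stt_words, aligned_phonemes)),
    "accent": accent_label,
}
for word, phonemes in training_example["pairs"]:
    print(f"{word!r} -> {phonemes!r}  [{training_example['accent']}]")
```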
Aspects of the present invention provide improvements in the field of QA system technology. Conventional QA systems, after determining the entity and intent of the user's input audio, use a static decision tree and a default set of phonemes when generating audio output with a text-to-speech generator. The disclosed embodiments build on such systems by identifying the user's accent and customizing the audio text-to-speech response for that user using a dictionary whose entries are n-grams, words, and phrases. Each entry has one or more phoneme sequences defined according to one or more user accents. For example, for any defined accent, the dictionary has a set of domain-specific and accent-specific entries associated with that accent. Further, the dictionary may hold a plurality of different accent-specific phoneme sequences for at least some entries, with pronunciation phoneme sequences provided for American, Indian, British, Scottish, Irish, and Australian versions of a single dictionary entry.
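One plausible shape for such multi-accent dictionary entries is sketched below; the accent keys are assumed labels, and the phoneme strings are the two "periodontal" variants discussed later in this document.

```python
# Each dictionary entry holds one ranked pronunciation list per accent.

from dataclasses import dataclass, field

@dataclass
class PronunciationEntry:
    term: str
    # accent -> phoneme sequences, ordered from most to least common
    variants: dict = field(default_factory=dict)

    def most_common(self, accent: str):
        sequences = self.variants.get(accent)
        return sequences[0] if sequences else None

entry = PronunciationEntry(
    term="periodontal",
    variants={
        "en-US": [".0px.2rY.0x.1dan.0txl"],
        "en-IN": [".2pi.0x.2rY.0x.1dan.0txl"],
    },
)
print(entry.most_common("en-IN"))
```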
Aspects of the invention also provide improvements to computer functionality. In particular, implementations of the present invention relate to specific improvements in the manner in which the QA system operates, embodied in continuously adjusted phoneme sequences associated with the respective domain terms of different accents. The disclosed method starts with a region-specific dictionary of phoneme sequences for different term-accent combinations. Over time, the most common phoneme sequence recorded for any particular term-accent combination may be modified in the dictionary entry as the most common pronunciation of the term in that accent shifts in the input audio data received from system users; that input serves as ongoing training data for the dictionary used by the systems and methods.
By way of overview, the QA system is an artificial intelligence application, executing on data processing hardware, that answers questions related to a given subject-matter domain presented in natural language. The QA system receives input from different sources, including input over a network, electronic document libraries or other data, data from content creators, information from one or more content users, and other input from other possible input sources. A data storage device stores a corpus of data. A content creator creates content in a document for use as part of the QA system's data corpus. Documents may include any file, text, article, or data source used in the QA system. For example, the QA system accesses a body of knowledge about a domain, or subject-matter area (e.g., financial domain, medical domain, legal domain, etc.), where the body of knowledge (knowledge base) may be organized in a variety of configurations, such as, but not limited to, a structured repository of domain-specific information, such as an ontology, or unstructured data related to the domain, or a collection of natural language documents related to the domain.
In an embodiment, the QA system further identifies the user's accent from audio data received from the user. The system determines a response to the user's input and then modifies the response based on matches between portions of the response and entries in the region-specific pronunciation dictionary developed by the disclosed embodiments, providing customized audio text-to-speech output using the user's accent. In this embodiment, the method generates an audio output in response to the user's input, wherein the audio output includes one or more phoneme sequences that express the words and phrases generated by the QA response generator using the identified accent of the user.
In one embodiment, one or more components of the system may employ hardware and/or software to solve problems that are highly technical in nature (e.g., receiving user audio data; determining the user's region-specific pronunciation category from the audio data using speech-to-text analysis, phoneme detection, and a trained accent-classification machine learning architecture; determining text responsive to the user from the audio data, for example by extracting intents and entities from the speech-to-text data; identifying a portion of that text having a matching entry in the developed region-specific pronunciation dictionary; and using phoneme strings from the dictionary, selected according to the user's region-specific pronunciation category, for that portion of the generated text-to-speech output to the user, etc.). These solutions are not abstract and cannot be performed as a set of mental acts by a human, owing to the processing power required to generate text-to-speech output customized, for example, to the accent of the system user. Further, some of the processing performed may be performed by a special-purpose computer for carrying out defined tasks related to generating user-accent-customized text-to-speech phoneme strings. For example, a special-purpose computer may be employed to carry out tasks related to generating customized text-to-speech output for a question-answering system or the like.
In an embodiment, the method builds the region-specific pronunciation dictionary used to provide user-customized text-to-speech output for an automated dialog system. In constructing the dictionary, the method receives audio data including speech samples from a plurality of individuals. Each audio data sample includes a label indicating the accent of the individual who provided it. In this embodiment, the method performs speech-to-text analysis on the audio data samples and phoneme analysis on the audio data to produce a series of phoneme sequences from the data. The method then correlates the phoneme sequences with the text from the audio, creating text-phoneme-sequence pairs for each word, n-gram, and/or phrase of the text. In an embodiment, for domain-specific words, the method provides scripts or other prompts for the individuals to follow when creating the audio data. In this embodiment, the method ensures that the domain words and phrases of interest are included in the audio samples from the individuals.
In an embodiment, the method sorts the phoneme-text pairs according to each labeled accent. For each accent, the method identifies the most common phoneme sequence for each word of text, including in particular domain-specific words. In this embodiment, the phoneme sequences sorted according to the labeled accents provide a basis for identifying a user's accent by comparing the phoneme sequences of the user's audio data input with the set of phoneme sequences labeled according to accent.
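A minimal sketch of this construction step follows, assuming labeled (term, accent, phoneme sequence) triples have already been extracted from the audio samples; all values are invented except the two "periodontal" sequences.

```python
# Group labeled pronunciation samples by (term, accent) and keep the most
# common phoneme sequence as the dictionary value.

from collections import Counter, defaultdict

samples = [
    ("periodontal", "en-US", ".0px.2rY.0x.1dan.0txl"),
    ("periodontal", "en-US", ".0px.2rY.0x.1dan.0txl"),
    ("periodontal", "en-US", ".0pe.2rY.0x.1dan.0txl"),   # invented minority variant
    ("periodontal", "en-IN", ".2pi.0x.2rY.0x.1dan.0txl"),
]

counts = defaultdict(Counter)
for term, accent, phonemes in samples:
    counts[(term, accent)][phonemes] += 1

# The most common sequence per term-accent combination becomes the entry.
dictionary = {key: counter.most_common(1)[0][0] for key, counter in counts.items()}
print(dictionary[("periodontal", "en-US")])
```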
In an embodiment, the method compares the phoneme sequences used for words across multiple accents to identify phoneme sequences unique to a single accent, or to a subset of accents, as a step toward enabling classification of a user's accent from a small set of key inputs.
For domain-specific or accent-specific words having multiple different pronunciations within the labeled data for an accent, the method selects the most common pronunciation as the representative pronunciation for that labeled accent. The method indicates the relative ranking of each of the multiple pronunciations of the word or phrase in the accent-specific dictionary entry for that word or phrase. Once in use, the dictionary undergoes continuous review to evaluate accent-specific pronunciations and determine shifts in the most common pronunciation of any particular word or phrase. The method maintains a cumulative count of occurrences of the different pronunciations of each accent-specific term and modifies the dictionary entry for the term when the relative ranking of its pronunciations changes as a result of user inputs including the term or phrase. After creating the region-specific pronunciation dictionary including accent-specific phoneme sequences for words, the method uses the dictionary entries in generating text-to-speech audio output, as described below.
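This count-and-re-rank behavior can be sketched as follows; the class name and the minority phoneme variant are assumptions for illustration.

```python
# Maintain cumulative pronunciation counts for one term-accent combination
# and re-rank whenever new user audio shifts the most common variant.

from collections import Counter

class AccentTermCounts:
    def __init__(self):
        self.counts = Counter()

    def observe(self, phonemes: str) -> list:
        # Record one occurrence and return the updated relative ranking.
        self.counts[phonemes] += 1
        return [p for p, _ in self.counts.most_common()]

entry = AccentTermCounts()
for sequence in [".0px.2rY.0x.1dan.0txl",      # established top pronunciation
                 ".0pe.2rY.0x.1dan.0txl",      # invented variant...
                 ".0pe.2rY.0x.1dan.0txl"]:     # ...overtakes after two sightings
    ranking = entry.observe(sequence)
print(ranking[0])  # the dictionary entry would now be modified to this variant
```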
As an example, the method receives audio data input from a plurality of individuals, including individuals with Indian English or American English accents. For each individual, the method receives audio data that includes the individual's pronunciation of the word "periodontal". Exemplary pronunciations of the word for each of these two accents are listed in Table 1.
Table 1: Exemplary pronunciations of "periodontal" for American English and Indian English accents (table reproduced only as an image in the original publication).
from Table 1, the method determines that the most common phoneme sequence for the American accent is [.0px.2rY.0x.1dan.0txl ]. The most common pronunciation of the Indian accent is [.2pi.0x.2rY.0x.1dan.0txl ]. For a dictionary entry of "periodontal", the method annotates multiple phoneme sequences for each accent and indicates which phoneme sequence is most common for each accent.
In an embodiment, a method receives user audio data associated with an automated conversation system, such as a question-answering system. The audio data may be received directly from the user through a microphone connected to the system, or indirectly via the user's microphone, the user's computing system, a communication network, a receiving computing system associated with the QA system, and one or more intermediary computing systems, which may include edge cloud and cloud computing resources. In this embodiment, the audio data comprises a digital audio file, such as a .wav or similar data file containing a digitized version of the user's voice input. In a sense, the audio data file includes a digitized phoneme sequence string corresponding to the string of words spoken by the user.
In an embodiment, the method performs speech-to-text conversion on the audio data to produce text strings corresponding to the user's speech input. The method further analyzes the audio data to produce an identified phoneme string corresponding to the audio data. The method correlates the phoneme string with the text string, associating a particular phoneme sequence with each word, partial word, or combination of words of the text string. In this embodiment, the method matches the correlated phoneme-sequence-word combinations to identify the user's accent or other local pronunciation. In an embodiment, the method uses a trained machine learning classification model (such as a convolutional neural network, a recurrent neural network, or a deep learning neural network) or a generative classifier (such as a generative adversarial network or a variational autoencoder) to classify the user's accent according to the phoneme-sequence-word correlations of the user's input audio data. In an embodiment, the method receives a previously trained machine learning classification model, whose training is outside the scope of the disclosed invention. The trained model provides the user's accent classification as output.
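As a runnable stand-in for the neural or generative classifiers named above, the sketch below trains a simple scikit-learn model on character n-grams of phoneme strings. It assumes scikit-learn is installed, and all training strings except the two "periodontal" sequences are invented toy data.

```python
# Classify a user's accent from a phoneme string using a toy trained model.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_phonemes = [
    ".0px.2rY.0x.1dan.0txl",      # "periodontal", American English
    ".2pi.0x.2rY.0x.1dan.0txl",   # "periodontal", Indian English
    ".1dip .1klin.0IN",           # invented samples padding the toy set
    ".1dip .1kli.0nIN",
]
train_accents = ["en-US", "en-IN", "en-US", "en-IN"]

# Character n-grams of the phoneme strings serve as simple features.
classifier = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_phonemes, train_accents)

print(classifier.predict([".2pi.0x.2rY.0x.1dan.0txl"]))  # expected: ['en-IN']
```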
The method analyzes the text strings using natural language understanding or natural language processing algorithms to extract entities and intents from the text strings. In an embodiment, the method processes the extracted entities and intents using a decision tree that includes parent nodes associated with different entities and intents and, for each parent node, child nodes associated with possible system responses to user inputs. The output of the decision tree includes the text string selected by the tree in response to the user's input.
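A toy sketch of this decision step: extracted (intent, entity) pairs key into a table of candidate responses. A real system would use an NLU model and a full decision tree; every name and string below is invented for illustration.

```python
# Map extracted intent/entity pairs to response text strings.

RESPONSES = {
    ("ask_price", "regular cleaning"): "For a regular cleaning, your total payment is $25.",
    ("ask_price", "deep cleaning"): "For a periodontal cleaning, your total payment is $50.",
}

def respond(intent: str, entity: str) -> str:
    # Fall back to a clarifying prompt when no decision-tree leaf matches.
    return RESPONSES.get((intent, entity), "Could you rephrase your question?")

print(respond("ask_price", "deep cleaning"))
```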
In an embodiment, the method scans the response text and parses the text into portions such as n-grams, words, or phrases. The method then associates the identified portions of the scanned text with entries in the region-specific pronunciation dictionary. The method correlates each portion with a dictionary entry having a corresponding word-phoneme sequence for the identified accent of the current system user.
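The scan-and-match step might look like the following sketch, which prefers the longest n-gram that has a dictionary entry for the user's accent; the dictionary contents and the "deep cleaning" phonemes are invented.

```python
# Parse a response into n-grams and match them against the accent dictionary.

DICTIONARY = {
    ("deep cleaning", "en-IN"): ".1dip.1kli.0nIN",                 # invented
    ("periodontal", "en-IN"): ".2pi.0x.2rY.0x.1dan.0txl",
}

def match_portions(response: str, accent: str, max_n: int = 3):
    tokens = response.lower().split()
    i, matches = 0, []
    while i < len(tokens):
        # Try the longest n-gram starting at position i first.
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            portion = " ".join(tokens[i:i + n])
            if (portion, accent) in DICTIONARY:
                matches.append((portion, DICTIONARY[(portion, accent)]))
                i += n
                break
        else:
            i += 1  # no match here; this token keeps its default phonemes
    return matches

print(match_portions("A deep cleaning treats periodontal disease", "en-IN"))
```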
In this embodiment, for each portion that has a matching entry for the identified accent, the method identifies the most common accent-specific phoneme sequence. The method generates an overall phoneme sequence for the text response. The overall phoneme sequence includes those sequences extracted from the relevant region-specific pronunciation dictionary entries. For text portions that lack a match in the region-specific pronunciation dictionary, the method uses the default phoneme sequences associated with those portions. The method provides the final overall phoneme sequence of the response text to the user as text-to-speech output.
In an embodiment, the method utilizes a decision tree that includes accent-specific child nodes for at least some of the tree's parent nodes. In this embodiment, the method identifies the parent decision node associated with the user's input. The method identifies the decision and corresponding set of child nodes associated with the input based on the input's intent and entities, and then selects a child node based on the identified accent of the user. In this embodiment, there may be multiple child nodes for a particular parent node, where the child nodes differ according to accent but are otherwise conceptually equivalent responses to the parent node's decision. In practice, the method proceeds through the decision tree to a parent node, evaluates the parent node's decision based on the details of the user's input, selects a set of otherwise equivalent child nodes as the response, and then selects the child node matching the identified accent as the output response for the user. The method then proceeds as described above to generate a phoneme sequence matching the identified user's accent and produce the text-to-speech output for the user.
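A sketch of one parent node with otherwise equivalent, accent-specific children follows; the node layout is an assumption, and the response strings reuse the two "periodontal" phoneme variants from the example in the next paragraph.

```python
# Select the accent-matched child of a decision node, with a default fallback.

class DecisionNode:
    """A parent decision with otherwise equivalent, accent-specific children."""
    def __init__(self, children: dict, default_accent: str):
        self.children = children
        self.default_accent = default_accent

    def select_child(self, accent: str) -> str:
        return self.children.get(accent, self.children[self.default_accent])

pricing_node = DecisionNode(
    children={
        "en-US": "For [.0px.2rY.0x.1dan.0txl], your total payment is $50.",
        "en-IN": "For [.2pi.0x.2rY.0x.1dan.0txl], your total payment is $50.",
    },
    default_accent="en-US",
)
print(pricing_node.select_child("en-IN"))
```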
For example, two users call an automated question-answering system, one with an American accent and one with an Indian accent. The American user asks: "How much does a deep cleaning cost?" The system responds: "For a regular cleaning, your total payment is $25. For [.0px.2rY.0x.1dan.0txl], your total payment is $50." In contrast, the Indian user asks: "How much do I need to pay for a cleaning?" The system responds: "For a regular cleaning, your total payment is $25. For [.2pi.0x.2rY.0x.1dan.0txl], your total payment is $50."
FIG. 1 provides a schematic diagram of exemplary network resources associated with practicing the disclosed invention. The invention may be practiced in any system, single or parallel, that processes a stream of instructions embodying the disclosed elements. As shown, networked client device 110 is wirelessly connected to server subsystem 102. Client device 104 is wirelessly connected to server subsystem 102 via network 114. Client devices 104 and 110 include an automated question-answering program (not shown) together with sufficient computing resources (processor, memory, network communications hardware) to execute the program. Client devices 104 and 110 may act as user access points to the QA system so that users can provide input to and receive output from the system. The overall system functionality may occur across this set of computing devices and across further environmental resources (such as edge cloud and cloud resources). As shown in FIG. 1, server subsystem 102 includes server computer 150. FIG. 1 shows a block diagram of the components of server computer 150 within networked computer system 1000, according to an embodiment of the invention. It should be understood that FIG. 1 provides only an illustration of one implementation and does not imply any limitation with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made.
Server computer 150 may include processor(s) 154, memory 158, persistent storage 170, communications unit 152, input/output (I/O) interface(s) 156, and communications fabric 140. Communications fabric 140 provides communications between cache 162, memory 158, persistent storage 170, communications unit 152, and input/output (I/O) interface(s) 156. Communications fabric 140 can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within the system. For example, communications fabric 140 can be implemented with one or more buses.
Memory 158 and persistent storage 170 are computer-readable storage media. In this embodiment, memory 158 includes random access memory (RAM) 160. In general, memory 158 can include any suitable volatile or non-volatile computer-readable storage media. Cache 162 is a fast memory that enhances the performance of processor 154 by holding recently accessed data, and data near recently accessed data, from memory 158.
Program instructions and data (e.g., automated dialog program 175) used to practice embodiments of the present invention are stored in persistent storage 170 for execution and/or access by one or more of the respective processors 154 of server computer 150 via cache 162. In this embodiment, persistent storage 170 comprises a magnetic hard disk drive. Alternatively, or in addition to a magnetic hard disk drive, persistent storage 170 can include a solid-state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer-readable storage medium capable of storing program instructions or digital information.
The media used by persistent storage 170 may also be removable. For example, a removable hard drive may be used for persistent storage 170. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer-readable storage medium that is also part of persistent storage 170.
In these examples, communications unit 152 provides for communications with other data processing systems or devices, including resources of client computing devices 104 and 110. In these examples, communications unit 152 includes one or more network interface cards. Communications unit 152 may provide communications through the use of either or both physical and wireless communications links. Software distribution programs, and other programs and data used to implement the present invention, may be downloaded to persistent storage 170 of server computer 150 through communications unit 152.
I/O interface(s) 156 allows for input and output of data with other devices that may be connected to server computer 150. For example, I/O interface(s) 156 may provide a connection to external device(s) 190, such as a keyboard, a keypad, a touch screen, a microphone for directly receiving user audio data, a digital camera, and/or some other suitable input device. External device(s) 190 can also include portable computer-readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Software and data used to practice embodiments of the present invention (e.g., automated dialog program 175 on server computer 150) can be stored on such portable computer-readable storage media and loaded onto persistent storage 170 via I/O interface(s) 156. I/O interface(s) 156 also connect to display 180.
The display 180 provides a mechanism for displaying data to a user and may be, for example, a computer monitor. The display 180 may also be used as a touch screen, such as the display of a tablet computer.
Fig. 2 provides flowchart 200 illustrating exemplary activities associated with the practice of the disclosure. After program start, at block 210 a dialog is initiated between the user and the automated question-answering system. As part of the dialog, the method receives audio input from the user, converts the audio to text using speech-to-text techniques, and passes the text data on for analysis. In an embodiment, the method analyzes the user's audio data and extracts phoneme sequences from the audio. The method matches the phoneme sequences from the audio with the text data from the speech-to-text conversion.
At block 220, the method determines a text response to the text data related to the user's input for the conversation, using an automated question-answering system decision tree or other automated response generator.
At decision point 230, the method attempts to identify the user's accent using the phoneme-sequence-text pairs extracted from the user's audio input data received by the system. In an embodiment, the method utilizes a machine learning classification model to identify the user's accent. In an embodiment, the method matches the phoneme-sequence-text pairs against a corpus of phoneme-sequence-text pairs in a database.
At block 240, for users whose accent was successfully identified, the method continues by identifying portions of the text response that have matches in the region-specific pronunciation dictionary built for use with the system.
For users whose accent was not successfully identified, the method proceeds to block 260 and generates a phoneme text-to-speech sequence for the response determined at block 220. For such users, the method utilizes the default phoneme sequence for each portion of the determined response.
At block 250, the method generates an overall phoneme sequence for the response determined at block 220 using the local-pronunciation phoneme sequences for the response portions identified at block 240. At block 260, the local-pronunciation phoneme sequences of the portions identified at block 240 are combined with the default phoneme sequences for all other portions of the response determined at block 220. The local pronunciations from the region-specific pronunciation dictionary correspond to the most common pronunciations of the response terms, derived from audio data collected from a plurality of individuals having the same accent as the current user.
At block 270, the method provides the final overall phoneme sequence corresponding to the response determined at block 220 to the user as text-to-speech audio output. The output may be provided directly to the user using a local system speaker, or may be provided to the user's equipment through a communications network, including devices such as a local computer, tablet computer, landline telephone, or mobile telephone interfacing with the QA system.
It should be understood that while the present disclosure includes a detailed description of cloud computing, implementations of the teachings recited herein are not limited to cloud computing environments. Rather, embodiments of the invention can be implemented in connection with any other type of computing environment, now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
The characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with the service's provider.
Broad network access: capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Service models are as follows:
software as a service (SaaS): the capability provided to the consumer is to use the provider's application running on the cloud infrastructure. Applications may be accessed from different client devices through a thin client interface such as a web browser (e.g., web-based email). Consumers do not manage or control the underlying cloud infrastructure including network, server, operating system, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a service (PaaS): the capability provided to the consumer is to deploy consumer-created or acquired applications created using programming languages and tools supported by the provider onto the cloud infrastructure. The consumer does not manage or control the underlying cloud infrastructure, including networks, servers, operating systems, or storage, but has control over the deployed applications and possible application hosting environment configurations.
Infrastructure as a service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment models are as follows:
private cloud: the cloud infrastructure operates only for an organization. It may be managed by an organization or a third party and may exist either on-site or off-site.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
Cloud computing environments are service-oriented, focusing on stateless, low-coupling, modular, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
Referring now to FIG. 3, an illustrative cloud computing environment 50 is depicted. As shown, cloud computing environment 50 includes one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistants (PDAs) or cellular telephones 54A, desktop computers 54B, laptop computers 54C, and/or automobile computer systems 54N, may communicate. Nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as private, community, public, or hybrid clouds as described above, or a combination thereof. This allows cloud computing environment 50 to offer infrastructure, platforms, and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It should be appreciated that the types of computing devices 54A-N shown in FIG. 3 are intended to be illustrative only, and that computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network-addressable connection (e.g., using a web browser).
Referring now to FIG. 4, a set of functional abstraction layers provided by cloud computing environment 50 (FIG. 3) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 4 are intended to be illustrative only, and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided:
Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture-based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68.
Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75.
In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions that may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and automated dialog program 175.
The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The invention may be advantageously practiced in any system, single or parallel, that processes a stream of instructions. The computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium or computer-readable storage device, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a corresponding computing/processing device, or to an external computer or external storage device via a network (e.g., the internet, a local area network, a wide area network, and/or a wireless network). The network may include copper transmission cables, optical transmission fibers, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer-readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
The present invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium having the instructions stored therein comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The description of the various embodiments of the present invention has been presented for purposes of illustration and is not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the invention. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over the technology found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A computer-implemented method for providing text-to-speech output, the method comprising:
receiving user audio data;
determining, by one or more computer processors, a user region-specific pronunciation category from the audio data;
determining, by the one or more computer processors, text for a response to the user from the audio data;
identifying, by the one or more computer processors, a portion from the text, wherein a region-specific pronunciation dictionary includes the portion; and
using, by the one or more computer processors, a phoneme string from the region-specific pronunciation dictionary, selected according to the user region-specific pronunciation category, for the portion in text-to-speech output to the user.
2. The computer-implemented method of claim 1, further comprising:
using, by the one or more computer processors, a default phoneme sequence in the text-to-speech output to the user for words from the text that are not present in the region-specific pronunciation dictionary.
3. The computer-implemented method of claim 1, further comprising creating the region-specific pronunciation dictionary by:
receiving, by the one or more computer processors, audio data from a plurality of speakers, the audio data including a domain-specific portion and a region-specific pronunciation for the domain-specific portion;
classifying, by the one or more computer processors, the audio data according to region-specific pronunciation;
determining, by the one or more computer processors, a most common region-specific pronunciation for the domain-specific portion; and
storing, by the one or more computer processors, the most common region-specific pronunciation of the domain-specific portion as the phoneme string for the domain-specific portion and region-specific pronunciation combination.
4. The computer-implemented method of claim 3, further comprising:
defining, by the one or more computer processors, the domain-specific portion.
5. The computer-implemented method of claim 3, further comprising:
converting, by the one or more computer processors, the audio data into text data; and
scanning, by the one or more computer processors, the text data for domain-specific portions.
6. The computer-implemented method of claim 1, wherein the portion comprises at least one of a word, an n-gram, and a phrase.
7. The computer-implemented method of claim 1, further comprising:
determining, by the one or more computer processors, user text from the audio data;
determining, by the one or more computer processors, a response from the user text;
scanning, by the one or more computer processors, the response for the domain portion; and
matching, by the one or more computer processors, the domain portion with a region-specific pronunciation dictionary entry.
8. A computer program product for providing text-to-speech output, the computer program product comprising one or more computer-readable storage devices and program instructions collectively stored on the one or more computer-readable storage devices, the stored program instructions comprising:
program instructions for receiving user audio data;
program instructions for determining a user region-specific pronunciation category from the audio data;
program instructions for determining text for a response to the user from the audio data;
program instructions for identifying a portion from the text, wherein a region-specific pronunciation dictionary includes the portion; and
program instructions for using a phoneme string from the region-specific pronunciation dictionary, selected according to the user region-specific pronunciation category, for the portion in text-to-speech output to the user.
9. The computer program product of claim 8, the stored program instructions further comprising:
program instructions for using a default phoneme sequence in the text-to-speech output to the user for words from the text that are not present in the region-specific pronunciation dictionary.
10. The computer program product of claim 8, the stored program instructions further comprising program instructions for constructing the region-specific pronunciation dictionary by:
receiving audio data from a plurality of speakers, the audio data including a domain-specific portion and a region-specific pronunciation for the domain-specific portion;
classifying the audio data according to region-specific pronunciation;
determining the most common region-specific pronunciation for the domain-specific portion; and
storing the most common region-specific pronunciation of the domain-specific portion as the phoneme string for the domain-specific portion and region-specific pronunciation combination.
11. The computer program product of claim 10, the stored program instructions further comprising:
program instructions for defining the domain-specific portion.
12. The computer program product of claim 10, the stored program instructions further comprising:
program instructions for converting the audio data into text data; and
program instructions for scanning the text data for domain-specific portions.
13. The computer program product of claim 8, wherein the portion comprises at least one of a word, an n-gram, and a phrase.
14. The computer program product of claim 8, the stored program instructions further comprising:
program instructions for determining user text from the audio data;
program instructions for determining a response from the user text;
program instructions for scanning the response for the domain portion; and
program instructions for matching the domain portion with a region-specific pronunciation dictionary entry.
15. A computer system for providing text-to-speech output, the computer system comprising:
one or more computer processors;
one or more computer-readable storage devices; and
program instructions stored on the one or more computer-readable storage devices for execution by the one or more computer processors, the stored program instructions comprising:
program instructions for receiving user audio data;
program instructions for determining a user region-specific pronunciation category from the audio data;
program instructions for determining text for a response to the user from the audio data;
program instructions for identifying a portion from the text, wherein a region-specific pronunciation dictionary includes the portion; and
program instructions for using a phoneme string from the region-specific pronunciation dictionary, selected according to the user region-specific pronunciation category, for the portion in text-to-speech output to the user.
16. The computer system of claim 15, the stored program instructions further comprising:
program instructions for using a default phoneme sequence in the text-to-speech output to the user for words from the text that are not present in the region-specific pronunciation dictionary.
17. The computer system of claim 15, the stored program instructions further comprising program instructions for creating the region-specific pronunciation dictionary by:
receiving audio data from a plurality of speakers, the audio data including a domain-specific portion and a region-specific pronunciation for the domain-specific portion;
classifying the audio data according to region-specific pronunciation;
determining the most common region-specific pronunciation for the domain-specific portion; and
storing the most common region-specific pronunciation of the domain-specific portion as the phoneme string for the domain-specific portion and region-specific pronunciation combination.
18. The computer system of claim 17, the stored program instructions further comprising:
program instructions for defining the domain-specific portion.
19. The computer system of claim 17, the stored program instructions further comprising:
program instructions for converting the audio data into text data; and
program instructions for scanning the text data for domain-specific portions.
20. The computer system of claim 15, wherein the portion comprises at least one of a word, an n-gram, and a phrase.
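The Python sketches that follow are editorial illustrations only; they are not part of the claims or the specification, and every function name, class name, and data value in them is hypothetical. This first sketch, under those assumptions, traces the flow of claims 1 and 2: classify the user's region-specific pronunciation category from audio, then build the phoneme sequence for the response text, using the selected dictionary's phoneme strings for covered portions and a default phoneme sequence for everything else.

from typing import Dict, List

def default_g2p(word: str) -> List[str]:
    # Placeholder grapheme-to-phoneme fallback; stands in for the
    # "default phoneme sequence" of claim 2.
    return list(word)

def classify_region(audio: bytes) -> str:
    # Stand-in for a region/accent classifier over the user audio (claim 1);
    # a real system would run a trained model here.
    return "en-IN"

def synthesize_phonemes(text: str, lexicon: Dict[str, List[str]]) -> List[str]:
    # Portions present in the region-specific pronunciation dictionary use
    # its phoneme strings; all other words fall back to the default sequence.
    phonemes: List[str] = []
    for word in text.split():
        phonemes.extend(lexicon.get(word.lower(), default_g2p(word)))
    return phonemes

# One hypothetical dictionary per region-specific pronunciation category.
lexicons: Dict[str, Dict[str, List[str]]] = {
    "en-IN": {"data": ["D", "AA1", "T", "AH0"]},
    "en-US": {"data": ["D", "EY1", "T", "AH0"]},
}

region = classify_region(b"...user audio...")
print(synthesize_phonemes("your data is ready", lexicons[region]))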
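Claims 3, 10, and 17 describe building the region-specific pronunciation dictionary from many speakers. A minimal sketch of that aggregation step, again with invented names and data, groups observed pronunciations by (domain-specific portion, region) and keeps the most common phoneme string per combination:

from collections import Counter, defaultdict
from typing import Dict, List, Tuple

# Each observation: (domain-specific portion, region category, phoneme string).
Observation = Tuple[str, str, Tuple[str, ...]]

observations: List[Observation] = [
    ("router", "en-IN", ("R", "UW1", "T", "ER0")),
    ("router", "en-IN", ("R", "UW1", "T", "ER0")),
    ("router", "en-IN", ("R", "AW1", "T", "ER0")),
    ("router", "en-AU", ("R", "AW1", "T", "ER0")),
]

# Classify pronunciations per portion-region combination and count them.
counts: Dict[Tuple[str, str], Counter] = defaultdict(Counter)
for portion, region, phonemes in observations:
    counts[(portion, region)][phonemes] += 1

# Store only the most common pronunciation for each combination.
dictionary = {key: counter.most_common(1)[0][0] for key, counter in counts.items()}
print(dictionary[("router", "en-IN")])  # ('R', 'UW1', 'T', 'ER0')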
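Finally, claims 7 and 14 add the response path: convert the user audio to text, determine a response, then scan that response for domain portions that have region-specific pronunciation dictionary entries. One possible shape for the matching step is sketched below; speech_to_text and plan_response are stubs standing in for components the claims leave unspecified:

from typing import Dict, List, Tuple

def speech_to_text(audio: bytes) -> str:
    return "is my order shipped"  # stand-in speech-to-text result

def plan_response(user_text: str) -> str:
    return "your order status is shipped"  # stand-in dialogue policy

def match_domain_portions(
    response: str, lexicon: Dict[str, Tuple[str, ...]]
) -> List[Tuple[str, Tuple[str, ...]]]:
    # Return (portion, phoneme string) pairs for every word, n-gram, or
    # phrase in the response that the dictionary covers (claim 6). Longer
    # phrases are checked first; a fuller implementation would also skip
    # words already covered by a matched phrase.
    words = response.split()
    matches: List[Tuple[str, Tuple[str, ...]]] = []
    for n in range(len(words), 0, -1):
        for i in range(len(words) - n + 1):
            portion = " ".join(words[i : i + n])
            if portion in lexicon:
                matches.append((portion, lexicon[portion]))
    return matches

lexicon = {
    "order status": ("AO1", "R", "D", "ER0", "S", "T", "AE1", "T", "AH0", "S"),
}
response = plan_response(speech_to_text(b"...user audio..."))
print(match_domain_portions(response, lexicon))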
CN202280023555.0A, filed 2022-04-04 (priority 2021-04-30): Using speech-to-text data in training text-to-speech models. Status: Pending. Published as CN117043742A (en).

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US17/245,048 (US11699430B2) | 2021-04-30 | 2021-04-30 | Using speech to text data in training text to speech models
US17/245,048 | 2021-04-30
PCT/IB2022/053095 (WO2022229743A1) | 2021-04-30 | 2022-04-04 | Using speech to text data in training text to speech models

Publications (1)

Publication Number | Publication Date
CN117043742A | 2023-11-10

Family ID: 83808657

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202280023555.0A (pending) | Using speech-to-text data in training text-to-speech models | 2021-04-30 | 2022-04-04

Country Status (4)

Country | Link
US (1) | US11699430B2 (en)
JP (1) | JP2024519263A (en)
CN (1) | CN117043742A (en)
WO (1) | WO2022229743A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11875785B2 (en) * 2021-08-27 2024-01-16 Accenture Global Solutions Limited Establishing user persona in a conversational system

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02195400A (en) * 1989-01-24 1990-08-01 Canon Inc Speech recognition device
TW413105U (en) 1999-12-15 2000-11-21 Chen Sen Kuen Movable jaw structure of vise for clamping workpiece
US6684187B1 (en) * 2000-06-30 2004-01-27 At&T Corp. Method and system for preselection of suitable units for concatenative speech
ATE404967T1 (en) * 2003-12-16 2008-08-15 Loquendo Spa TEXT-TO-SPEECH SYSTEM AND METHOD, COMPUTER PROGRAM THEREOF
US7415411B2 (en) * 2004-03-04 2008-08-19 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for generating acoustic models for speaker independent speech recognition of foreign words uttered by non-native speakers
JP4025355B2 (en) * 2004-10-13 2007-12-19 松下電器産業株式会社 Speech synthesis apparatus and speech synthesis method
US7742919B1 (en) * 2005-09-27 2010-06-22 At&T Intellectual Property Ii, L.P. System and method for repairing a TTS voice database
US8972268B2 (en) * 2008-04-15 2015-03-03 Facebook, Inc. Enhanced speech-to-speech translation system and methods for adding a new word
US8290775B2 (en) * 2007-06-29 2012-10-16 Microsoft Corporation Pronunciation correction of text-to-speech systems between different spoken languages
US11290400B2 (en) * 2009-12-22 2022-03-29 Cyara Solutions Pty Ltd System and method for testing of automated contact center customer response systems
TWI413105B (en) 2010-12-30 2013-10-21 Ind Tech Res Inst Multi-lingual text-to-speech synthesis system and method
US20130110511A1 (en) * 2011-10-31 2013-05-02 Telcordia Technologies, Inc. System, Method and Program for Customized Voice Communication
US9275633B2 (en) * 2012-01-09 2016-03-01 Microsoft Technology Licensing, Llc Crowd-sourcing pronunciation corrections in text-to-speech engines
US8849666B2 (en) * 2012-02-23 2014-09-30 International Business Machines Corporation Conference call service with speech processing for heavily accented speakers
US9368104B2 (en) * 2012-04-30 2016-06-14 Src, Inc. System and method for synthesizing human speech using multiple speakers and context
US20140379334A1 (en) * 2013-06-20 2014-12-25 Qnx Software Systems Limited Natural language understanding automatic speech recognition post processing
KR20150027465A (en) 2013-09-04 2015-03-12 한국전자통신연구원 Method and apparatus for generating multiple phoneme string for foreign proper noun
US11295730B1 (en) * 2014-02-27 2022-04-05 Soundhound, Inc. Using phonetic variants in a local context to improve natural language understanding
US10339920B2 (en) * 2014-03-04 2019-07-02 Amazon Technologies, Inc. Predicting pronunciation in speech recognition
CN104391673A (en) * 2014-11-20 2015-03-04 百度在线网络技术(北京)有限公司 Voice interaction method and voice interaction device
RU2632424C2 (en) * 2015-09-29 2017-10-04 Общество С Ограниченной Ответственностью "Яндекс" Method and server for speech synthesis in text
US10152965B2 (en) * 2016-02-03 2018-12-11 Google Llc Learning personalized entity pronunciations
US20180032884A1 (en) * 2016-07-27 2018-02-01 Wipro Limited Method and system for dynamically generating adaptive response to user interactions
US10319250B2 (en) * 2016-12-29 2019-06-11 Soundhound, Inc. Pronunciation guided by automatic speech recognition
US10467335B2 (en) * 2018-02-20 2019-11-05 Dropbox, Inc. Automated outline generation of captured meeting audio in a collaborative document context
EP3955243A3 (en) * 2018-10-11 2022-05-11 Google LLC Speech generation using crosslingual phoneme mapping
US10930274B2 (en) * 2018-11-30 2021-02-23 International Business Machines Corporation Personalized pronunciation hints based on user speech
US11450311B2 (en) * 2018-12-13 2022-09-20 i2x GmbH System and methods for accent and dialect modification
US20200372110A1 (en) * 2019-05-22 2020-11-26 Himanshu Kaul Method of creating a demographic based personalized pronunciation dictionary
CN110827803A (en) 2019-11-11 2020-02-21 广州国音智能科技有限公司 Method, device and equipment for constructing dialect pronunciation dictionary and readable storage medium

Also Published As

Publication number | Publication date
WO2022229743A1 (en) | 2022-11-03
US20220351715A1 (en) | 2022-11-03
US11699430B2 (en) | 2023-07-11
JP2024519263A (en) | 2024-05-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination