WO2022105235A1 - Information recognition method and apparatus, and storage medium

Information recognition method and apparatus, and storage medium

Info

Publication number
WO2022105235A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
phoneme
language
type
target
Prior art date
Application number
PCT/CN2021/103287
Other languages
English (en)
French (fr)
Inventor
夏海荣
温建
刘宁
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022105235A1

Classifications

    • G PHYSICS
      • G10 MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00 Speech recognition
            • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
              • G10L2015/025 Phonemes, fenemes or fenones being the recognition units
            • G10L15/005 Language recognition
            • G10L15/08 Speech classification or search
              • G10L15/16 Speech classification or search using artificial neural networks
            • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L2015/225 Feedback of the input speech
            • G10L15/26 Speech to text systems

Definitions

  • the present application relates to the technical field of information identification, and in particular, to an information identification method, device and storage medium.
  • speech recognition technology refers to technology by which machines convert voice signals into corresponding text or commands through a process of recognition and understanding; it is widely used in scenarios such as voice interaction and voice retrieval.
  • the voice interaction system usually uses a speech recognition engine to recognize the voice input by the user and understand natural language.
  • the voice interaction system usually can only recognize one type of language.
  • when the voice input by the user contains multiple types of languages, the accuracy of the voice interaction system in recognizing the voice is low.
  • for example, a voice interaction system that only recognizes Chinese will incorrectly identify English content according to Chinese pinyin rules.
  • the present application provides an information identification method, device and storage medium to improve the accuracy of information identification.
  • in a first aspect, the present application provides an information identification method. The method obtains information input by a user, which may be a first phoneme based on a first-type language, a speech to be recognized, or a text to be corrected, where the speech to be recognized includes speech of the first-type language and a second-type language, the text to be corrected includes text of the first-type language, and the first type and the second type are different. Then, according to the phoneme corresponding to the input information, a target word based on the second-type language in a standard text is identified as a word in a target text, so as to obtain the target text. The target text includes texts in at least two types of languages, namely a first text based on the first-type language and a second text based on the second-type language, and the degree of difference between the phoneme of the target word and the phoneme corresponding to the input information is smaller than the degree of difference between other words in the standard text and the phoneme corresponding to the input information.
  • when the user inputs speech, the second phoneme may be the phoneme of the first-type language determined by recognizing the speech to be recognized; when the user inputs a text to be corrected, the second phoneme may be the phoneme corresponding to that text, obtained, for example, by phoneticizing the text to be corrected.
  • in this way, based on the first-type-language phoneme corresponding to the user input information, in addition to recognizing text based on the first-type language, the method can also select from the standard text the second-type-language target word whose phoneme differs least from that phoneme and use it as a word in the target text, instead of identifying one type of language according to the pronunciation rules of another. The final recognized target text can therefore include not only the first text based on the first-type language but also text based on the second-type language, so that text containing multiple types of languages at the same time can be recognized, thereby improving the accuracy of information recognition.
  • in a possible implementation, when the target text is identified according to the difference between the phoneme corresponding to the input information and the third phoneme of the standard text, the target word based on the second-type language in the standard text may specifically be identified as a word in the target text according to the vectorized difference between the phoneme corresponding to the input information and the third phoneme.
  • in this way, the final output target text can be determined by the vectorized difference between the phoneme sequences. For example, when the vectorized difference between the two phoneme sequences is small, the standard text can be used as the target text; when the vectorized difference is large, the text corresponding to the second phoneme can be identified in other ways.
  • in a possible implementation, the phoneme corresponding to the input information may be vectorized to obtain a first vector, and the vector distance between the first vector and a second vector corresponding to the third phoneme may be calculated, so that the target word based on the second-type language in the standard text can be identified as a word in the target text according to the vector distance.
  • the vectorized difference between the two phoneme sequences can be determined by the vector distance between the phoneme corresponding to the input information and the third phoneme, so that the target text corresponding to the input information can be determined according to the size of the vector distance.
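  As an illustration only (the embodiments do not fix a particular distance metric), the comparison step can be sketched as follows, assuming a hypothetical vectorize() encoder such as the vectorization model described later:

```python
import numpy as np

def phoneme_distance(vec_input: np.ndarray, vec_standard: np.ndarray) -> float:
    """Vectorized difference between two phoneme sequences, measured here
    as the Euclidean distance between their fixed-dimensional vectors."""
    return float(np.linalg.norm(vec_input - vec_standard))

# Hypothetical usage; vectorize() and THRESHOLD are illustrative names.
# v1 = vectorize("G-ER1-L")      # phoneme sequence from the input
# v2 = vectorize("G-OU3-ER")     # phoneme sequence of a standard-text word
# match = phoneme_distance(v1, v2) < THRESHOLD
```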
  • exemplarily, a similarity calculation grid can be constructed, in which one of the two perpendicular directions of the grid (for example, the vertical axis) carries the phoneme sequence of a word and the other direction (for example, the horizontal axis) carries the phoneme sequence of the input information, so that the similarity between the two phoneme sequences can be scored based on this grid, and the phonemic similarity between each word and the input information can be obtained.
  • then, the word with the greatest similarity to the input information is determined among them, and that word is determined as the target word, i.e., as a word in the target text.
  • other possible ways may also be used to select a word from a plurality of words as a target word, and this application does not limit the specific implementation of this process.
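  A minimal sketch of one way such a similarity grid can be scored, using edit-distance-style dynamic programming over the two phoneme sequences (the scoring scheme is illustrative; the embodiments do not prescribe one):

```python
def grid_similarity(seq_a: list[str], seq_b: list[str]) -> float:
    """Score two phoneme sequences on an alignment grid: seq_a along one
    axis, seq_b along the other. Returns a similarity in [0, 1]."""
    n, m = len(seq_a), len(seq_b)
    # dist[i][j] = minimal edit cost aligning seq_a[:i] with seq_b[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution
    return 1.0 - dist[n][m] / max(n, m)  # normalize cost to similarity

# e.g. grid_similarity(["G", "ER1", "L"], ["G", "OU3", "ER"]) ~ 0.33
```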
  • in a possible implementation, the second phoneme is specifically a phoneme based on the first-type language, and the third phoneme is a phoneme based on the second-type language; alternatively, the third phoneme includes both phonemes of the first-type language and phonemes of the second-type language.
  • a vectorization model may be used to complete the vectorization process of the phoneme, wherein the vectorization model may be constructed based on a neural network. In this way, fast vectorization of the phoneme sequence can be achieved, and the vectorization accuracy of the phoneme sequence can be guaranteed through the model training process.
  • the standard text may contain polyphonic text, and the polyphonic text has multiple different pronunciations.
  • for example, if the standard text includes "AAA", its pronunciation 1 may be the character-by-character reading of the text, and its pronunciation 2 may be "3A" ("san-A"), and so on. In this way, whichever pronunciation the user uses, it can be determined that the content the user actually expects to input is the standard text, thereby improving the flexibility and freedom of the user's pronunciation.
  • a speech recognition engine may be used to perform speech recognition on the speech to be recognized input by the user to obtain the initial text.
  • however, the speech recognition engine performs speech recognition on the speech to be recognized based on one set of pronunciation rules and can hardly recognize text in multiple types of languages, so the initial text obtained by the speech recognition engine usually contains text in only one type of language. Therefore, after the initial text is obtained, it can be phoneticized to obtain the second phoneme corresponding to the speech to be recognized, so that target text containing multiple types of languages can be recognized based on the second phoneme, improving the accuracy of speech recognition.
  • in a possible implementation, the target text may specifically be the text obtained by performing error correction on the initial text; that is, for the initial text recognized by the speech recognition engine, part of its content can be corrected using the standard text according to the phoneme sequence obtained by phoneticizing the initial text, for example, correcting content based on the first-type language into content based on the second-type language.
  • in a second aspect, an embodiment of the present application further provides an information recognition apparatus, including: an information acquisition module, configured to acquire input information, where the input information includes a first phoneme based on a first-type language, a speech to be recognized, or a text to be corrected; the speech to be recognized includes speech of the first-type language and a second-type language, the text to be corrected includes text of the first-type language, and the first type is different from the second type;
  • an identification module, configured to identify a target word based on the second-type language in a standard text as a word in a target text according to the phoneme corresponding to the input information, where the phoneme corresponding to the input information includes the first phoneme or a second phoneme, the second phoneme includes the phoneme based on the first-type language determined by recognizing the speech to be recognized or the phoneme corresponding to the text to be corrected, the degree of difference between the phoneme of the target word and the phoneme corresponding to the input information is smaller than the degree of difference between other words in the standard text and the phoneme corresponding to the input information, and the target text includes a first text based on the first-type language and a second text based on the second-type language.
  • in a possible implementation, the identification module is specifically configured to identify the target word based on the second-type language in the standard text as a word in the target text according to the vectorized difference between the phoneme corresponding to the input information and the third phoneme of the standard text.
  • in a possible implementation, the identification module is specifically configured to: vectorize the phoneme corresponding to the input information to obtain a first vector; calculate the vector distance between the first vector and a second vector corresponding to the third phoneme of the standard text; and, according to the vector distance, identify a target word based on the second-type language in the standard text as a word in the target text.
  • in a possible implementation, the second phoneme is a phoneme based on the first-type language, and the third phoneme of the standard text is a phoneme based on the second-type language; alternatively, the third phoneme of the standard text includes phonemes of the first-type language and phonemes of the second-type language.
  • the identification module is specifically configured to use a vectorization model to vectorize the phoneme corresponding to the input information, and the vectorization model is constructed based on a neural network.
  • in a possible implementation, the standard text includes polyphonic text, the polyphonic text has a first pronunciation and a second pronunciation, and the first pronunciation is different from the second pronunciation.
  • in a possible implementation, the apparatus further includes:
  • a speech recognition module, configured to perform speech recognition on the speech to be recognized by using a speech recognition engine to obtain an initial text;
  • a phonetic notation module, configured to phoneticize the initial text to obtain the second phoneme corresponding to the speech to be recognized.
  • the target text is a text obtained by performing error correction on the initial text.
  • in a third aspect, an embodiment of the present application further provides an apparatus, where the apparatus includes a memory and a processor that communicate with each other, and the processor is configured to execute instructions stored in the memory to perform the method described in any one of the implementations of the first aspect.
  • the present application provides a chip including a processor and a chip interface.
  • the chip interface is used to receive instructions and transmit them to the processor.
  • the processor executes the above-mentioned instructions to perform the information identification method described in any one of the implementations of the first aspect above.
  • the present application provides a computer-readable storage medium, where the computer-readable storage medium stores instructions that, when run on a computer device, cause the computer device to execute the method described in the first aspect above.
  • the present application provides a computer program product comprising instructions which, when executed on a computer device, cause the computer device to perform the method described in the first aspect above.
  • the implementation manners of the present application may be further combined to provide more implementation manners.
  • FIG. 1 is a schematic structural diagram of a computer device in an embodiment of the application.
  • FIG. 2 is a schematic structural diagram of a cloud server in an embodiment of the application
  • FIG. 3 is a schematic flowchart of a speech recognition method in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an exemplary input interface
  • FIG. 5 is a schematic diagram of using the JSON language to record the phoneme sequences of "NE40E" and "AAA";
  • FIG. 6 is a schematic diagram of an exemplary neural network in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an exemplary voice interaction scenario in an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a speech recognition method in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of generating a candidate set according to an initial text in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a vectorization model in an embodiment of the present application;
  • FIG. 11 is a schematic diagram of calculating the similarity between two phoneme sequences through a grid in an embodiment of the application;
  • FIG. 12 is a schematic structural diagram of an information identification device according to an embodiment of the present application.
  • the terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature.
  • the terms "comprising", "including", "having" and their variants mean "including but not limited to", unless specifically emphasized otherwise.
  • the speech input by the user may include multiple types of languages at the same time.
  • different types of languages refer to languages that are pronounced according to different pronunciation rules. They include different natural languages, such as Chinese, English, and Korean; they can also include symbols that do not belong to a communicative language, such as "——", "*", etc.
  • the voice interaction system usually recognizes other types of languages based on the pronunciation rules of one type of language, which makes the recognition accuracy rate of the voice interaction system for the user input speech low. For example, suppose the actual content of the user's voice input is "she is a good girl, worthy of your love", that is, the user's voice input in Chinese is interspersed with English (other languages) to express pronunciation.
  • however, the voice interaction system usually still recognizes the voice content input by the user according to Chinese pronunciation rules. For the English content "girl", the voice interaction system usually refers to the standard Chinese pinyin scheme and phoneticizes the foreign word "girl" by homophony (that is, using symbols of one language to represent the pronunciation of another), which makes it likely to recognize "girl" as the similarly pronounced Chinese word "dog" ("gou'er").
  • as a result, the voice content recognized by the voice interaction system is "it is a good dog, worthy of your love", which is quite different from what the user actually expects to input, so the speech recognition accuracy is low.
  • further, when the voice content recognized by the voice interaction system is wrong, the system will execute wrong operation commands based on the wrong recognition result, which seriously affects the user experience.
  • an embodiment of the present application provides an information identification method, which can be applied to an information identification device.
  • the phonemes used for recognition are the smallest phonetic units divided according to the natural attributes of speech.
  • a pronunciation action can form a phoneme.
  • for example, the Chinese pinyin "ma" contains two pronunciation actions, "m" and "a", when pronounced, and therefore consists of two phonemes.
  • the sounds produced by the same pronunciation action are the same phoneme, and the sounds produced by different pronunciation actions are different phonemes.
  • for example, the Chinese pinyin string "mami" contains four pronunciation actions in sequence: "m", "a", "m", and "i". The two "m" actions are identical and correspond to the same phoneme, while "m", "a", and "i" are produced by different pronunciation actions and are different phonemes.
  • with the information recognition method of this embodiment, the information recognition device can determine target words based on the second-type language in the standard text according to the phoneme of the first-type language input by the user, or according to the first-type-language phoneme corresponding to the user's input voice or input text. Text containing multiple types of languages at the same time can thus be recognized, instead of recognizing content of multiple types of languages according to the pronunciation rules of a single type of language, so the accuracy of information recognition can be improved.
  • for example, when the information recognition device recognizes the voice, it can recognize the content the user pronounced in Chinese according to Chinese phonemes, and at the same time recognize the content "girl" the user pronounced in English according to English phonemes. In this way, the text finally recognized by the device is "she is a good girl", which improves the accuracy of speech recognition. Similarly, when the user inputs the pinyin string "tashiyigehaogouer", the device can recognize the text "she is a good girl" based on both Chinese and English phonemes, improving the accuracy of text recognition.
  • alternatively, when the user inputs text, the information recognition device can phoneticize the input text to obtain the corresponding phoneme sequence, and then use English phonemes to recognize and correct the Chinese-based phoneme sequence, obtaining the recognition result "she is a good girl".
  • exemplarily, the speech recognition method provided in this embodiment of the present application may be applied to, but is not limited to, the computer device 100 shown in FIG. 1.
  • computer device 100 may include a bus 101 , at least one processor 102 , at least one communication interface 103 , and at least one memory 104 .
  • the processor 102 , the memory 104 and the communication interface 103 communicate through the bus 101 .
  • the bus 101 may be a peripheral component interconnect (PCI) bus, a peripheral component interconnect express (PCIe) or an extended industry standard architecture (EISA) bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of presentation, only one thick line is used in FIG. 1, but it does not mean that there is only one bus or one type of bus.
  • the communication interface 103 is used for external communication, such as receiving data or instructions input by a user through a data input device (such as a mouse, a keyboard, a microphone, etc.).
  • the computer device 100 may be a personal computer (PC), such as a tablet computer or a desktop computer, or a workstation or a server.
  • the processor 102 may be a central processing unit (CPU), a field programmable gate array (FPGA), or an application-specific integrated circuit (ASIC).
  • Memory 104 may include volatile memory, such as random access memory (RAM).
  • the memory 104 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 104 stores programs or instructions, for example, the programs or instructions required to realize information identification, and the processor 102 executes the programs or instructions to implement the above-mentioned information identification method.
  • the memory 104 may also store data, for example, the phoneme or the speech to be recognized input by the user, the target text recognized from the phoneme or the speech to be recognized, and other intermediate information generated or involved in the information recognition process (such as phonemes), etc.
  • accordingly, the processor 102 can obtain the phoneme or the speech to be recognized by reading the memory 104, and recognize the speech to be recognized, so as to obtain the text that the user expects to input.
  • the memory 104 may be integrated with the computer device 100 or may be independent of the computer device 100 .
  • the hardware structure of the computer device 100 shown in FIG. 1 is not intended to limit the hardware composition of the computer device 100 in practical applications.
  • Memory 104 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • volatile memory may be random access memory (RAM), which acts as an external cache. By way of example rather than limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • alternatively, the information identification method provided in this embodiment of the present application may be applied to, but is not limited to, the cloud server 200 shown in FIG. 2.
  • the cloud server 200 can be connected with the user equipment 210 .
  • the user can input a phoneme sequence or the speech to be recognized on the user equipment 210, and the user equipment 210 sends the phoneme sequence or the speech to be recognized to the cloud server 200 and requests or instructs the cloud server 200 to perform information recognition.
  • after recognizing the target text, the cloud server 200 can send the target text to the user equipment 210, so that the user equipment 210 can present the target text to the user; alternatively, the cloud server 200 can determine an operation command based on the target text and further execute that operation command.
  • the cloud server 200 includes a bus 201 , a processor 202 , a communication interface 203 and a memory 204 .
  • the processor 202 , the memory 204 and the communication interface 203 communicate through the bus 201 .
  • the bus 201 may be a PCI bus, a PCIe or an EISA bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of presentation, only one thick line is used in FIG. 2, but it does not mean that there is only one bus or one type of bus.
  • the communication interface 203 is used for external communication, such as receiving the phoneme sequence or the speech to be recognized sent by the user equipment 210, and the like.
  • the processor 202 may be a CPU.
  • Memory 204 may include volatile memory, such as RAM.
  • Memory 204 may also include non-volatile memory such as ROM, flash memory, HDD or SSD, and the like.
  • Programs or instructions are stored in the memory 204, for example, programs or instructions required for realizing speech recognition are stored, and the processor 202 executes the programs or instructions to execute the above-mentioned speech recognition method.
  • the memory 204 may also store data, such as the phoneme sequence sent by the user equipment 210, the speech to be recognized, the recognized target text, and the like.
  • the information identification method of the embodiment of the present application will be described in detail below by taking the information identification device for identifying the speech to be recognized as an example.
  • in practical applications, the information identification device can be realized by hardware, such as the above-mentioned computer device 100 or cloud server 200; alternatively, the information identification device can also be realized by software, such as a functional module running on the computer device 100 or the cloud server 200, etc.
  • FIG. 3 a schematic flowchart of an information identification method is shown, and the method may specifically include:
  • S301: the information recognition device acquires input information, where the input information includes a first phoneme based on a first-type language, a speech to be recognized, or a text to be corrected; the speech to be recognized includes at least speech of the first-type language and a second-type language, and the text to be corrected is text based on the first-type language, where the first-type language is different from the second-type language.
  • the user input information received by the computer device 100 or the user device 210 may be any one of three types of information: phoneme, voice, and text.
  • the computer device 100 or the user device 210 can provide the user with an input interface as shown in FIG. 4 , and the user can long press the voice input button in the input interface and input voice.
  • the computer device 100/user device 210 can use the voice content input by the user in the input interface as the voice to be recognized.
  • the computer device 100/user device 210 may be in a state of listening to the user's voice, and use the voice content input by the user during this period as the voice to be recognized.
  • the specific implementation manner of how the information recognition apparatus acquires the speech to be recognized is not limited.
  • alternatively, the user can directly input a phoneme sequence on the computer device 100/user device 210, such as a Chinese pinyin string, so that the information recognition device can obtain the first phoneme input by the user and recognize texts in multiple types of languages based on that first phoneme; the first phoneme is a phoneme based on the first-type language, such as Chinese pinyin or the like.
  • alternatively, the user can input a text to be corrected on the computer device 100/user device 210, where the text to be corrected includes text in one type of language (such as Chinese), so that the information recognition device can perform error correction on the text and correct parts of it into text in other types of languages.
  • the existing devices for speech recognition usually use the pronunciation rules of a single language to perform information recognition.
  • however, the speech input by the user may contain multiple different types of languages at the same time.
  • for example, the content of the user's voice input can be "she is a good girl", containing both Chinese and English, or industry terms mixing Chinese and English; for another example, the content of the user's voice input can be "A/B" (pronounced "A slash B"), containing English and the symbol "/", and so on.
  • the information identification apparatus may continue to perform step S302 to improve the accuracy of information identification.
  • S302: the information recognition device recognizes the target word based on the second-type language in the standard text as a word in the target text according to the phoneme corresponding to the input information, where the phoneme corresponding to the input information includes the first phoneme or a second phoneme, the second phoneme includes the phoneme of the first-type language determined by recognizing the speech to be recognized or the phoneme corresponding to the text to be corrected, the degree of difference between the phoneme of the target word and the phoneme corresponding to the input information is smaller than the degree of difference between other words in the standard text and the phoneme corresponding to the input information, and the target text includes a first text based on the first-type language and a second text based on the second-type language.
  • when the information input by the user is the speech to be recognized, the information recognition device can first determine the phonemes contained in the speech to be recognized (hereinafter referred to as the second phoneme), and then identify the texts corresponding to different types of languages according to the second phoneme. It is worth noting that the device may first recognize the phonemes contained in the speech based on the pronunciation rules of one type of language (the A-type language), which means that phonemes actually corresponding to other types of languages are usually recognized, by transliteration, as phonemes of the A-type language. Therefore, in the subsequent recognition process, the device can re-recognize those transliterated A-type-language phonemes based on the phonemes corresponding to the other types of languages, so as to improve the accuracy of information recognition.
  • in a specific implementation, the information recognition device may use a speech recognition engine to perform speech recognition on the speech to be recognized input by the user to obtain an initial text; then the information recognition device may phoneticize the initial text to obtain the corresponding phoneme sequence, and this phoneme sequence is the second phoneme corresponding to the speech to be recognized.
  • the speech recognition engine usually recognizes the speech to be recognized based on the pronunciation rules of a single type of language, the accuracy of the recognized initial text is usually low.
  • consequently, the second phoneme obtained when the information recognition device phoneticizes the initial text may differ from the phoneme actually corresponding to the speech to be recognized. The information recognition device therefore also needs to determine the actual phoneme sequence corresponding to the speech to be recognized according to the second phoneme obtained by phonetic transcription, so as to determine the corresponding correct text.
  • the information recognition apparatus may also determine the second phoneme according to the acoustic features of the speech to be recognized.
  • the information identification device may first acquire the acoustic features corresponding to each phoneme in the unified phoneme set, wherein the unified phoneme set at least includes the phonemes of the first type of language.
  • then, the information recognition device can match the acoustic features of the speech to be recognized with the acoustic features corresponding to each phoneme in the unified phoneme set, so as to determine the phonemes matching the acoustic features of the speech; arranging them according to their order in the speech yields a corresponding phoneme sequence, namely the above-mentioned second phoneme. Since some users' pronunciation may differ from the standard pronunciation of a phoneme, the second phoneme determined from acoustic features may not match the user's expected input.
  • for example, the correct voice input should be "nie" (Chinese pinyin), but due to regional pronunciation habits, the user's actual voice input is "lie" (Chinese pinyin), which makes the second phoneme identified by the device from the acoustic features possibly different from the user's expected input. The device therefore also needs to determine, according to the second phoneme obtained by matching, the actual phoneme sequence corresponding to the speech to be recognized, so as to determine its corresponding correct text.
  • the information recognition device recognizes the second phoneme corresponding to the speech to be recognized when the information input by the user is speech.
  • the information input by the user may also be the text to be corrected.
  • the information identification device may also obtain the second phoneme corresponding to the text to be corrected by phoneticizing the text to be corrected.
  • the text to be corrected may be, for example, the above-mentioned initial text obtained by using the speech recognition engine to recognize the speech to be recognized.
  • when the information input by the user is a phoneme sequence, the information recognition apparatus may directly obtain the first phoneme.
  • after obtaining the phoneme corresponding to the input information, the information identification device may determine the target word in the standard text according to the difference between the first phoneme or second phoneme based on the first-type language and the third phoneme based on the second-type language corresponding to the standard text, and use the target word as a word in the target text.
  • the information recognition apparatus may determine the target word in the standard text according to the vectorized difference between phonemes, and the vectorized difference may be measured by, for example, vector distance.
  • specifically, the information recognition device can vectorize the second phoneme corresponding to the speech to be recognized or the text to be corrected to obtain a corresponding first vector.
  • correspondingly, the vectorization of the third phoneme corresponding to the standard text is also completed (this vectorization may be performed in advance, or may be performed each time the target text is determined).
  • in this way, the information recognition device can calculate the vector distance between the first vector and the second vector corresponding to the third phoneme of the standard text and, according to the vector distance, select the target word corresponding to the minimum vector distance from the standard text as a word in the target text, the target word being a word based on the second-type language.
  • when the vector distance between the first vector and the second vector is small (specifically, not greater than a preset threshold), the difference between the initial text and the standard text is small, so the recognized initial text can be considered to conform to the user's expected input, and the initial text can be used as the target text; when the vector distance between the first vector and the second vector is large (specifically, greater than the preset threshold), the difference is relatively large, so the initial text can be modified, for example by replacing some characters or words in it with the target word, and the modified initial text can be used as the target text that the user expects to input.
  • in a possible implementation, the information recognition device may divide the second phoneme into multiple candidate segments and calculate the vector distance between each candidate segment and the phoneme corresponding to each word in the standard text, so that the word in the standard text corresponding to each candidate segment can be determined according to the vector distance between the phonemes.
  • then, the determined words in the standard text can be used to replace the corresponding words in the initial text. For example, when the user's voice input is "she is a good girl", the initial text may be "it is a good dog", and the mis-recognized words can be replaced according to the standard text.
  • when multiple words in the standard text correspond to the same candidate segment, the information recognition apparatus may select one of these words as the target text corresponding to the candidate segment.
  • exemplarily, the information recognition device can calculate the similarity between each word and the candidate segment through a grid alignment process, so that the word whose phoneme has the greatest similarity can be used as the target text corresponding to the candidate segment; the specific calculation process is described below and is not repeated here.
  • other methods may also be used to determine a word from a plurality of words as the target text corresponding to the candidate segment, which is not limited in this embodiment.
  • the standard text may be a thesaurus (or may be referred to as a "dictionary") including multiple types of languages, which may be input into the information identification device in advance by a user or a technician, or configured to be acquired by the information identification device.
  • each word in the lexicon may have a corresponding phoneme sequence and a phoneme vector corresponding to the phoneme sequence based on its standard pronunciation.
  • in this way, the vector distance between the phoneme of each word in the initial text and the phoneme of each word in the lexicon can be calculated, so that for each word in the initial text, the lexicon word whose phoneme vector distance is smallest, or smaller than a preset threshold, can be determined. The initial text can then be modified based on the determined words to obtain the target text that the user expects to input.
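  A minimal sketch of this lookup, assuming each lexicon entry already carries a phoneme vector produced by the vectorization model (the lexicon layout, names, and threshold are illustrative):

```python
import numpy as np

def correct_segment(segment_vec: np.ndarray,
                    lexicon: dict[str, np.ndarray],
                    threshold: float) -> str | None:
    """Return the lexicon word whose phoneme vector is closest to the
    candidate segment's vector, or None if nothing is close enough."""
    best_word, best_dist = None, float("inf")
    for word, word_vec in lexicon.items():
        dist = float(np.linalg.norm(segment_vec - word_vec))
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist < threshold else None

# e.g. if the phoneme vector of the mis-recognized segment "dog" (gou'er)
# is closest to the lexicon entry "girl", that word in the initial text
# is replaced with "girl".
```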
  • when the information input by the user is the first phoneme, the information recognition device may perform a similar process: divide the first phoneme into a plurality of candidate segments, vectorize each candidate segment, and obtain the target text corresponding to the first phoneme based on the vectorization result. For the specific implementation, refer to the relevant description of the above process, which is not repeated here.
  • in this embodiment, the second phoneme may be a phoneme of the first-type language, and the third phoneme may be a phoneme of the second-type language.
  • that is, the information recognition device may first recognize the second phoneme of the first-type language from the speech to be recognized. Because some of those phonemes may actually correspond to second-type-language content transliterated into the first-type language, the device can use the third phoneme of the second-type language to find the parts of the second phoneme similar to the third phoneme, and for those parts use the text of the second-type language as the corresponding recognized text.
  • for example, when the user's voice input mixes Chinese and English, the information recognition device can first use a speech recognition engine to recognize the speech to be recognized as an initial Chinese text, and phoneticize the initial text to obtain a Chinese-based second phoneme; then, the device can calculate the similarity between each part of the second phoneme and the English-based third phonemes in the standard text, and when the similarity is high, the English vocabulary corresponding to the third phoneme can replace the Chinese vocabulary corresponding to that part of the second phoneme in the initial text, so that the finally recognized target text can include both Chinese and English.
  • similarly, according to the second phoneme corresponding to the text to be corrected input by the user, the information recognition device may adopt a manner similar to that described above and identify and correct part of the content of the text to be corrected based on the third phoneme of the second-type language; for details, refer to the above description of the process, which is not repeated here.
  • in another possible implementation, the third phoneme may include both phonemes of the first-type language and phonemes of the second-type language, so that the second phoneme corresponding to the speech to be recognized can be recognized using the phonemes corresponding to multiple types of languages, to obtain the target text corresponding to the speech to be recognized.
  • each word in a standard text may have one or more pronunciations, and thus one or more phoneme sequences.
  • for example, if a word in the standard text is "NE40E", the phoneme sequence corresponding to its possible standard pronunciation can be "EH1N-IY1-SI4-LING2-IY1"; for another example, if a word in the standard text is "AAA", the phoneme sequence corresponding to its possible standard pronunciation may be "EY1-EY1-EY1", or it may also be "SAN-EY1".
  • texts with multiple different pronunciations, that is, with multiple different phoneme sequences, are called polyphonic texts. Since a polyphonic text has multiple pronunciations, no matter which pronunciation the user uses for voice or text input, the information recognition device can accurately identify it, thereby improving the user's freedom of pronunciation and text input and the flexibility of information recognition.
  • in a possible implementation, the information identification device may use the JavaScript Object Notation (JSON) language to record the standard text.
  • taking FIG. 5 as an example, the standard text can be regarded as a dictionary (dict) data type in the Python language, consisting of a series of <key, value> pairs (i.e., key-value pairs).
  • the key value is the unique label of a specific word, and the value value is the phoneme sequence corresponding to the possible pronunciation of the word.
  • as shown in FIG. 5, "NE40E" records only one pronunciation, with one phoneme sequence, while "AAA" records two pronunciations, with two different phoneme sequences.
  • in a possible implementation, when the information recognition apparatus vectorizes the second phoneme, the phoneme may be vectorized using a vectorization model trained in advance.
  • the information recognition apparatus may construct a vectorized model by using the neural network shown in FIG. 6 .
  • the neural network includes an input layer, a double-layer long short-term memory (LSTM) network, and an output layer.
  • the input of the neural network is a phoneme sequence. The input phoneme sequence is one-hot encoded in the input layer and then fed into the double-layer LSTM network, which converts the sequence into a fixed-dimensional vector, thereby completing the vectorization of the phoneme sequence; finally, the output layer outputs the vectorization information of the phoneme sequence.
  • it should be noted that FIG. 6 is only an example of a vectorization model, and the specific implementation of the vectorization model is not limited to this example. The training process of the vectorization model is described below and is not detailed here.
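  A minimal PyTorch sketch of such an encoder, assuming phonemes are already mapped to integer ids (the framework and layer sizes are illustrative; the figure only specifies one-hot input, a double-layer LSTM, and a fixed-dimensional output vector):

```python
import torch
import torch.nn as nn

class PhonemeEncoder(nn.Module):
    """Sketch of the vectorization model of FIG. 6: one-hot input layer,
    double-layer LSTM, fixed-dimensional output vector."""

    def __init__(self, num_phonemes: int, hidden_dim: int = 128):
        super().__init__()
        self.num_phonemes = num_phonemes
        self.lstm = nn.LSTM(input_size=num_phonemes,
                            hidden_size=hidden_dim,
                            num_layers=2,        # double-layer LSTM
                            batch_first=True)

    def forward(self, phoneme_ids: torch.Tensor) -> torch.Tensor:
        # phoneme_ids: (batch, seq_len) integer ids of the phoneme sequence.
        one_hot = nn.functional.one_hot(
            phoneme_ids, num_classes=self.num_phonemes).float()
        _, (h_n, _) = self.lstm(one_hot)
        # Final hidden state of the last layer: a fixed-dimensional vector.
        return h_n[-1]  # shape: (batch, hidden_dim)
```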
  • it should be noted that, in this embodiment, target text that includes two types of languages is used as an example for description.
  • the speech to be recognized may include not only Chinese and English, but also types of languages such as Korean or symbols. Therefore, in the process of speech recognition, the speech to be recognized can also be recognized based on the phonemes of more types of languages.
  • that is, the recognized target text can include three or more types of languages, such as Chinese, English, and Korean. Since the process is similar to the specific implementation of recognizing target text of the first-type and second-type languages in this embodiment, the specific implementation of recognizing target text of three or more types of languages according to the first phoneme or the second phoneme is not repeated here.
  • in practical applications, the speech recognition process performed by the information recognition apparatus on the voice input by the user, as described in the above embodiments, can be applied to the voice interaction scenario shown in FIG. 7. The information recognition apparatus can serve as a functional module of the voice interaction system: it performs speech recognition on the voice input by the user based on the above process, and then performs natural language understanding on the recognized target text to determine its semantics, so that the voice interaction system can determine the dialogue semantics of the response based on the target text (which may be determined by a dialogue task management module or an execution module).
  • the voice interaction system can generate corresponding natural language text based on the dialogue semantics, and synthesize corresponding voice based on the natural language text and output it, so that the voice interaction process between the user and the machine can be realized.
  • the speech recognition methods described in the above embodiments can be applied to other applicable scenarios, such as speech transcription, voice on demand, and voice dialing scenarios similar to those in FIG. 7 .
  • furthermore, the above speech recognition process may be integrated into the speech recognition engine, so that when the engine recognizes the speech input by the user, the obtained recognition result is more accurate; alternatively, it may be independent of the speech recognition engine and correct errors in the text recognized by the engine, so as to ensure the accuracy of the target text finally recognized by the voice interaction system.
  • the technical solutions of the embodiments of the present application will be described in detail below with reference to a scenario in which errors are corrected for text recognized by a speech recognition engine.
  • the speech recognition method provided by the embodiment of the present application may specifically include:
  • the preprocessing module performs phonetic notation on the initial text recognized by the speech recognition engine to obtain a phoneme sequence corresponding to the initial text.
  • after the user inputs speech, the voice interaction system can use a speech recognition engine to recognize it and obtain the initial text. Since the speech recognition engine usually adopts the pronunciation rules of a single type of language to convert the speech into the initial text, when the speech input by the user includes multiple types of languages, the accuracy of the obtained initial text is low.
  • in a specific implementation, the preprocessing module can use a pre-saved pronunciation dictionary to phoneticize the initial text and obtain the corresponding phoneme sequence.
  • the pronunciation dictionary can be pre-established and imported by the user.
  • the pronunciation dictionary may include vocabulary of specific types of languages and the phonemes corresponding to that vocabulary, so that when phoneticizing the initial text, the preprocessing module can determine, by character matching or the like, the dictionary vocabulary matching the characters in the initial text, and use the phonemes corresponding to that vocabulary to phoneticize the corresponding characters in the initial text.
  • the preprocessing module can also perform phonetic transcription for characters in the initial text based on regular expressions. For example, the regular expression can be "^[a-zA-Z]+[\d]+[\da-zA-Z-]*$", i.e., a combination of letters + digits + letters, where "a-zA-Z" represents the letters from lowercase a to lowercase z and uppercase A to uppercase Z, "\d" represents the digits 0 to 9, and "\da-zA-Z-" represents the digits and letters that may follow; characters that satisfy the regular expression are then phoneticized one by one according to the pronunciations of the letters and digits.
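  A small sketch of this pattern in Python (the pattern follows the text above; the sample strings are illustrative):

```python
import re

# Letters + digits + optional trailing digits/letters/hyphens, e.g. "NE40E".
PATTERN = re.compile(r"^[a-zA-Z]+[\d]+[\da-zA-Z-]*$")

for token in ["NE40E", "V100", "hello", "1401"]:
    if PATTERN.match(token):
        # Matching tokens are phoneticized character by character,
        # following the pronunciations of letters and digits.
        print(token, "-> phoneticize letter/digit one by one")
```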
  • in a possible implementation, when part of the characters in the initial text does not match a complete vocabulary entry in the pronunciation dictionary, that part can be further divided and phoneticized: it can be split into multiple characters, and each character is phoneticized one by one, so as to obtain the phoneme sequence corresponding to that part of the characters.
  • it should be noted that the pronunciation dictionary may include multiple pronunciations of certain words; that is, a certain word may have multiple pronunciations, so that when phoneticizing the corresponding characters in the initial text, the preprocessing module can annotate a character with multiple pronunciations, and the character can thus correspond to multiple phoneme sequences.
  • in addition, the preprocessing module can also perform pronunciation-variation processing during phonetic transcription. For example, considering differences in local pronunciation habits, when using the pronunciation dictionary to phoneticize the characters in the initial text, a character can be annotated with an additional pronunciation based on those differences.
  • for example, for a character pronounced "nie", the preprocessing module can also add the pronunciation "lie" based on pronunciation habits in which "l" and "n" are not distinguished, and so on. Alternatively, the preprocessing module annotates the candidate segment with other pronunciations based on pronunciation similarity: for example, when the initial text includes the string "1401", which is phoneticized as "YAO-SI-LING-YAO", the preprocessing module can also annotate it as "IY-SI-LING-IY" to indicate "E40E" ("1" read in Chinese sounds similar to the English "E").
  • in addition, the preprocessing module may also perform special pronunciation processing for specific character combinations included in the initial text. For example, when the initial text includes a combination of digits and letters, after identifying such a non-Chinese character string, the preprocessing module can phoneticize it according to a preset pronunciation rule. For example, the non-Chinese string "V100" can be pronounced as the English "v" plus the Chinese reading of "100", as the English "v" plus the Chinese reading of "110", or as the English "v" plus the Chinese syllables "yao-yao-ling", and so on.
  • the candidate generation module generates a plurality of candidate segments based on the initial text, and performs vectorization processing for the phoneme sequences of the candidate segments.
  • in a specific implementation, the candidate generation module can first divide the initial text into minimum units. Taking a Chinese initial text as an example, the candidate generation module can treat each Chinese character as a minimum unit; exemplarily, when the initial text includes a digit string, a letter string, or a foreign-language word (such as an English word), it can be treated as one complete unit, so as to avoid such characters being split across or mixed with Chinese characters.
  • the candidate generation module may generate multiple candidate segments of the same length to obtain a candidate set, and the number of minimum units included in different candidate segments in the candidate set may be the same.
  • for example, the candidate generation module can generate candidate segments with a length of 2 minimum units; for the example initial text, the obtained candidates are "help me", "I transfer", "transfer", "receive", and "En 1401", i.e., successive two-unit windows over the initial text.
  • in other possible implementations, the candidate generation module may also generate multiple candidate segments of other lengths (such as candidate segments consisting of 3 or 4 minimum units), and the same candidate set may include candidate segments of different lengths; this embodiment is not limited in this respect.
  • the candidate generation module may simultaneously generate multiple candidate sets based on the initial text, and the lengths of candidate segments included in different candidate sets are different.
  • further, the preprocessing module can also perform term discrimination on the candidate segments in the candidate set according to a term corpus, where the term corpus can be trained in advance and imported by the user, and can include multiple terms, such as industry terms and custom terms.
  • each candidate set can include not only multiple candidate segments, but also the position information (such as offset) of each candidate segment in the initial text, the length of the candidate segment, and the phoneme sequence corresponding to the candidate segment (based on The phonetic process of the preprocessing module is obtained, which can be one or more phoneme sequences).
  • For each candidate segment in the candidate set, a vectorization model trained in advance can be used to vectorize the phoneme sequence corresponding to the candidate segment.
  • Accordingly, when a candidate segment has multiple phoneme sequences, it also has multiple corresponding vectors.
  • In this way, each candidate segment can carry its text, its position information in the initial text, its length, and the vectorization information of its phoneme sequence(s).
  • As an example, the candidate generation module may use the vectorization model shown in FIG. 6 to vectorize the phoneme sequences; for the specific implementation, refer to the descriptions in the above embodiments, which are not repeated here.
  • The training process of the vectorization model may be implemented by the voice interaction system or by another device; the training process may specifically be as follows:
  • (1) Create the neural network shown in FIG. 10. The network in FIG. 10 can be built on the neural network shown in FIG. 6 by adding a vector-distance calculation layer and a Sigmoid function layer at the output.
  • Its input layer takes the one-hot encodings corresponding to two phoneme sequences (that is, one-hot encoding 1 for phoneme sequence 1 and one-hot encoding 2 for phoneme sequence 2).
  • After the two one-hot encodings pass through the double-layer LSTM network, the vectorization results corresponding to the two phoneme sequences are obtained (that is, vector 1 for phoneme sequence 1 and vector 2 for phoneme sequence 2).
  • These two vectors then pass through the vector-distance and Sigmoid computation in the output layer, which outputs 0 or 1 to indicate whether phoneme sequence 1 and phoneme sequence 2 are the same phoneme sequence: 1 indicates the same phoneme sequence, and 0 indicates different phoneme sequences.
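  • A minimal sketch of this network follows, assuming PyTorch; the layer sizes, the embedding standing in for the raw one-hot input, and the exact mapping from distance to the 0/1 output are assumptions of the sketch, since the text specifies only double-layer LSTM, vector distance, and Sigmoid.

      import torch
      import torch.nn as nn

      class PhonemeEncoder(nn.Module):
          # Double-layer LSTM mapping a phoneme sequence to a fixed-dimension vector.
          def __init__(self, num_phonemes, hidden=64):
              super().__init__()
              self.embed = nn.Embedding(num_phonemes, hidden)
              self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)

          def forward(self, phoneme_ids):          # shape: (batch, seq_len)
              out, _ = self.lstm(self.embed(phoneme_ids))
              return out[:, -1, :]                 # last step as the sequence vector

      class SiamesePhonemeNet(nn.Module):
          # Shared encoder for both sequences, then distance + Sigmoid (as in FIG. 10).
          def __init__(self, num_phonemes):
              super().__init__()
              self.encoder = PhonemeEncoder(num_phonemes)

          def forward(self, seq1, seq2):
              v1, v2 = self.encoder(seq1), self.encoder(seq2)
              dist = torch.norm(v1 - v2, dim=-1)
              # Small distance -> output near 1 (same), large -> near 0 (different).
              return torch.sigmoid(1.0 - dist), v1, v2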
  • (2) Obtain the sample data required for training the model. The sample data may include positive example data and negative example data.
  • The positive example data includes, as the model input, phoneme sequences of different types of languages that match each other and, as the expected model output, the value 1.
  • Taking positive example data consisting of a phoneme sequence corresponding to Chinese and a phoneme sequence corresponding to English as an example, the positive example data can be as shown in Table 1 (in the published text, Table 1 and the data instances of Table 2 are embedded images and are not reproduced here).
  • Correspondingly, the negative example data includes, as the model input, phoneme sequences of different types of languages that do not match and, as the expected model output, the value 0, as shown in Table 3 (likewise an embedded image).
  • Exemplarily, the negative example data can be constructed based on the positive example data.
  • Taking the construction of negative example data for English person names as an example, one can arbitrarily select an English name, together with its corresponding Chinese transliterated name X, from the English-name set.
  • Then, from all the Chinese transliterated names corresponding to the English-name set, one can select a Chinese transliterated name Y that has no phoneme in common with the Chinese transliterated name X; the English name and the selected Chinese transliterated name Y then constitute one piece of negative example data.
  • In a similar way, multiple pieces of negative example data can be constructed.
  • In practical applications, the number of positive examples and the number of negative examples may be the same or similar.
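  • The negative-example construction can be sketched as follows; representing each name by a list of phonemes is this sketch's assumption:

      import random

      def build_negative(pairs):
          """pairs: list of (english_name_phonemes, chinese_transliteration_phonemes).
          Returns one negative example: an English name paired with a Chinese
          transliteration sharing no phoneme with its true transliteration."""
          english, zh_x = random.choice(pairs)
          candidates = [zh_y for _, zh_y in pairs if not set(zh_y) & set(zh_x)]
          if not candidates:
              return None
          return (english, random.choice(candidates), 0)  # expected model output: 0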
  • (3) Train the neural network created in (1) with the sample data. For example, the phoneme sequence corresponding to the English name and the phoneme sequence corresponding to the Chinese transliterated name in a piece of positive example data can be fed into the input layer of the neural network.
  • After one-hot encoding, the two phoneme sequences are input to the double-layer LSTM network, which outputs the vectors corresponding to the two phoneme sequences. The output layer then calculates the vector distance between the two vectors and uses the Sigmoid function to determine the model output corresponding to that distance. The parameters of the double-layer LSTM network are adjusted according to the difference between this model output and the expected output in the positive example data (namely 1), and training continues on the parameter-adjusted network with the next piece of sample data.
  • After iterative training on the positive and negative example data, the neural network shown in FIG. 6 is obtained, completing the training of the vectorization model.
  • S803: For each candidate segment in the candidate set, the scoring module uses the distance model to determine, according to the vectorization information of the phoneme sequence corresponding to the candidate segment, at least one target word in the standard text set such that the vector distance between the phoneme sequence of the target word and that of the candidate segment is less than a preset threshold.
  • Suppose the candidate set is {c_i, 0 ≤ i < M}, where M is the number of candidate segments included in the candidate set and c_i denotes the i-th candidate segment in the candidate set.
  • Suppose the standard text set is {t_j, 0 ≤ j < N}, where N is the number of words included in the standard text set and t_j denotes the j-th word in the standard text set. Then the scoring module needs to perform at least M*N vector-distance calculations.
  • As an example, formula (1) below can be used to calculate the vector distance between two phoneme sequences:
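  • In the published text, formula (1) itself appears only as an embedded image. From the surrounding definitions (a distance over L vector dimensions between the vectors of candidate segment c_i and word t_j), a plausible reconstruction, assuming a Euclidean distance, is

        dist_{i,j} = \sqrt{\sum_{l=1}^{L} (c_{i,l} - t_{j,l})^{2}}

    where c_{i,l} and t_{j,l} are the l-th vector components; a cosine or other distance would fit the description equally well, so this reconstruction is an assumption rather than the filed formula.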
  • Here dist_{i,j} denotes the vector distance between the two phoneme sequences: the smaller dist_{i,j} is, the closer the candidate segment is to the corresponding word in the standard text (the smaller the difference); conversely, the larger dist_{i,j} is, the greater the difference between the candidate segment and the corresponding word in the standard text. L denotes the vector dimension.
  • In practical applications, the standard text set contains a large number of words, and most of them differ substantially from a given candidate segment, so they contribute little to determining the error-correction text for that segment.
  • Based on this, the scoring module can use the vector distance to filter the words in the standard text set for each candidate segment, so that the error-correction text for the candidate segment is determined only from the words retained by the filtering.
  • In a specific implementation, the scoring module can set a threshold r and retain, from the standard text set, the words whose dist_{i,j} is smaller than r, thereby compressing the standard text set; in this way, the amount of computation required in the subsequent calculation can be effectively reduced.
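  • A vectorized sketch of the M*N distance computation and threshold filtering, assuming the Euclidean reconstruction of formula (1) above:

      import numpy as np

      def filter_standard_set(cand_vecs, std_vecs, r):
          """cand_vecs: (M, L) candidate vectors; std_vecs: (N, L) standard-set
          vectors. Returns, per candidate, the indices of words with dist < r."""
          diff = cand_vecs[:, None, :] - std_vecs[None, :, :]   # (M, N, L)
          dist = np.sqrt((diff ** 2).sum(axis=-1))              # (M, N) distances
          return [np.nonzero(dist[i] < r)[0] for i in range(len(cand_vecs))]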
  • S804: The scoring module uses the alignment model to calculate the similarity score between each candidate segment in the candidate set and its target words.
  • In practical applications, for a given candidate segment, multiple target words in the standard text set may all be close to it in vector distance; in that case, vector distance alone may not be enough to choose one target word as the error-correction text. For this reason, the scoring module can use the alignment model together with a phoneme confusion matrix to calculate the similarity score between the phoneme sequences of the candidate segment and of each target word, so as to select, from the multiple target words, the one that will serve as the error-correction text for the candidate segment.
  • Two phonemes may differ only slightly in pronunciation, or they may differ greatly. Therefore, the degree of difference between phonemes (that is, the pronunciation difference between two phonemes) can be measured by a confusion degree, which can be defined as a floating-point number greater than or equal to zero. If two phonemes are exactly the same, the confusion degree is 0.0; if two phonemes differ greatly, the confusion degree takes a larger value. In practical applications, for ease of computation and understanding, the numerical range of the phoneme confusion degree can be normalized to [0.0, 1.0], but it can also follow the model output directly; this embodiment is not limited in this respect.
  • the phoneme confusion matrix is a matrix that records the degree of confusion between different phonemes.
  • As an example, the confusion degree between two phonemes can be calculated with the trained neural network shown in FIG. 10.
  • In this case, each of the two phoneme sequences input to the neural network contains only one phoneme.
  • The advantage of this is that the phoneme confusion degree and the phoneme vectorization process then use homologous data, so that the phoneme confusion matrix can be updated in a data-driven way.
  • A local example of a phoneme confusion matrix is shown in Table 4 below (given upper-triangular; the matrix is symmetric):

            AI      B       EH      I       IY      S       T
      AI    0.0     1.447   0.118   0.097   0.113   0.178   0.511
      B             0.0     1.504   1.459   1.484   1.416   1.543
      EH                    0.0     0.092   0.105   0.226   0.571
      I                             0.0     0.049   0.247   0.580
      IY                                    0.0     0.279   0.597
      S                                             0.0     0.420
      T                                                     0.0
  • Of course, in practical applications, the confusion degrees between different phonemes in the above phoneme confusion matrix can also be set manually by technicians, or a speech-analysis algorithm can be used to analyze the similarity between speech signals, with the confusion degree between phonemes determined from the resulting similarity scores.
  • In this embodiment, the specific implementation of how to determine the confusion degree between phonemes is not limited.
  • When scoring the similarity between a candidate segment and a target word based on the alignment model and the phoneme confusion matrix, a grid can be constructed in which the similarity between the two phoneme sequences is computed. For example, take the initial text "毕替埃斯" (a Chinese transliteration of "BTS"), whose phoneme sequence is "B I–T I–AI–S I", and the standard text "BTS", whose phoneme sequence is "B IY–T IY–EH S"; the calculation can be performed on the grid shown in Figure 11.
  • As shown in Figure 11, there are many possible paths from the bottom-left grid point to the top-right grid point. In general, the more closely a path fits the diagonal, the more similar the two phoneme sequences are; in particular, when the two phoneme sequences are identical, the best path is the diagonal itself. The optimal path may therefore be determined based on the similarity scores of the grid points in the grid.
  • Each grid point in the grid has a basic score of phoneme similarity, which can be determined from the confusion degree between the two phonemes recorded in the phoneme confusion matrix: the greater the confusion degree between the phonemes, the greater their pronunciation difference, and the smaller the phoneme-similarity score.
  • According to the dynamic-programming principle, the score of grid point (i, j) is related to its own basic score and to the scores of grid points (i-1, j), (i, j-1), and (i-1, j-1); specifically, it can be calculated by the following formula (2):
  • s_{i,j} = max(s_{i-1,j}, s_{i,j-1}, s_{i-1,j-1}) + c_{i,j}    (2)
  • where s_{i,j} denotes the score of grid point (i, j); s_{i-1,j} the score of grid point (i-1, j); s_{i,j-1} the score of grid point (i, j-1); s_{i-1,j-1} the score of grid point (i-1, j-1); and c_{i,j} the basic score of grid point (i, j).
  • Starting from the bottom-left corner of the grid shown in Figure 11, the path with the maximum similarity can then be found; the similarity score at the top-right grid point, that is, the end point of the path, is the similarity score between the two phoneme sequences.
  • In practical applications, since different target words correspond to phoneme sequences of different lengths, each time the similarity score between two phoneme sequences is calculated, the score can also be normalized to mask the effect of different sequence lengths on the similarity score (for example, the longer the phoneme sequence, the higher the similarity score at the end of the path may be).
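  • The grid alignment of formula (2) can be sketched as follows; deriving the basic score as "penalty - confusion" and normalizing by the longer sequence length are assumptions of this sketch (the text requires only that the basic score shrink as the confusion grows and that length effects be masked).

      def similarity_score(seq_a, seq_b, confusion, penalty=1.0):
          """Align two phoneme sequences on a grid; confusion[p][q] is the
          (symmetric) confusion degree between phonemes p and q."""
          rows, cols = len(seq_a), len(seq_b)
          s = [[0.0] * cols for _ in range(rows)]
          for i in range(rows):
              for j in range(cols):
                  c_ij = penalty - confusion[seq_a[i]][seq_b[j]]  # basic score
                  if i > 0 and j > 0:
                      prev = max(s[i - 1][j], s[i][j - 1], s[i - 1][j - 1])
                  elif i > 0:
                      prev = s[i - 1][j]
                  elif j > 0:
                      prev = s[i][j - 1]
                  else:
                      prev = 0.0
                  s[i][j] = prev + c_ij                           # formula (2)
          # Normalize so longer sequences do not score higher merely by length.
          return s[rows - 1][cols - 1] / max(rows, cols)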
  • It is worth noting that the above implementation measures the similarity between the two phoneme sequences by a similarity score; in other possible implementations, the similarity between two phoneme sequences can also be measured by the vector distance between them, in which case the smaller the vector distance at the path end point, the more similar the two phoneme sequences are. Since this is conceptually close to the implementation above, the specific process of measuring phoneme-sequence similarity by vector distance is not repeated in this embodiment. Moreover, the above procedure for determining the similarity between two phoneme sequences can be encapsulated into an alignment model.
  • S805: The replacement module selects, according to the similarity scores between a candidate segment and its target words, a corresponding target word to replace the candidate segment.
  • In this embodiment, for each candidate segment in the initial text, its multiple corresponding target words can be sorted by similarity score, and the target word with the highest similarity score is selected as the error-correction text for that candidate segment. Since each candidate segment records its original text, its position information in the initial text, and its length, the replacement module can use the error-correction text to directly replace the corresponding candidate segment in the initial text.
  • In some possible implementations, the replacement module may first decide, based on the maximum similarity score, whether to replace the candidate segment at all. Specifically, after determining the maximum similarity score, the replacement module can check whether it exceeds a preset score threshold. If it does, the target word corresponding to the maximum similarity score is used as the error-correction text and replaces the corresponding candidate segment; if it does not, the phoneme difference between the target word and the candidate segment is relatively large, so the replacement module does not use that target word, that is, the candidate segment itself is kept as the word in the target text. In this way, the possibility of replacing a correct candidate segment with a wrong target word is reduced, lowering the false-positive probability.
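  • The replacement decision can be sketched as follows; character-level offsets and the threshold value are simplifying assumptions of this sketch:

      def apply_replacements(initial_text, decisions, score_threshold=0.6):
          """decisions: list of (offset, length, best_word, best_score) per segment."""
          chars = list(initial_text)
          # Replace right-to-left so earlier offsets remain valid afterwards.
          for offset, length, word, score in sorted(decisions, reverse=True):
              if score > score_threshold:     # keep the segment unless clearly better
                  chars[offset:offset + length] = list(word)
          return "".join(chars)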
  • In practical applications, the probability that certain candidate segments contain speech recognition errors is low.
  • For example, when the initial text is "帮我转接恩1401", the candidate segments may include "帮我", "我转", "转接", "接恩", and "恩1401"; among these, segments such as "帮我" ("help me") and "转接" ("transfer") are very likely to have been recognized correctly by the speech recognition engine in practice. Therefore, in some possible implementations, the candidate segments obtained from the initial text may be filtered: specifically, the segments with a high probability of accurate recognition (such as "帮我" and "转接") are filtered out, while the remaining segments, which are less likely to have been recognized accurately, are kept.
  • Steps S803 to S805 may then be used to determine whether to replace these remaining candidate segments with the corresponding target words. In this way, the number of candidate segments involved in the computation of steps S803 to S805 can be effectively reduced, which not only reduces the amount of computation required for determining the target text but also improves the efficiency of determining the target text to a certain extent.
  • It is worth noting that the above description takes the processing of one candidate set as an example. In practical applications, the voice interaction system can obtain multiple different candidate sets based on the initial text, with different candidate-segment lengths in different candidate sets, so that the system can determine a different target text from each candidate set. The voice interaction system can then determine, from these different target texts, the target text serving as the final speech recognition result. For example, the system can calculate the text similarity between each pair of target texts and further compute, for each target text, the sum (or average) of its similarities to the other texts, so that the target text with the maximum similarity sum (or maximum average similarity) is taken as the final speech recognition result.
  • Of course, in practical applications, the target text serving as the final speech recognition result may also be determined from the multiple target texts in other possible manners, which is not limited in this embodiment.
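  • The selection rule described above can be sketched as follows, with any pairwise text-similarity function plugged in:

      def pick_final_text(target_texts, text_sim):
          """Return the target text whose summed similarity to the others is largest."""
          def total_sim(t):
              return sum(text_sim(t, other) for other in target_texts if other is not t)
          return max(target_texts, key=total_sim)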
  • Referring to the schematic structural diagram of the information recognition apparatus shown in FIG. 12, the apparatus 1200 includes:
  • the information acquisition module 1201, configured to acquire input information, where the input information includes a first phoneme based on a first type of language, a to-be-recognized speech, or a to-be-corrected text; the to-be-recognized speech includes speech of the first type of language and speech of a second type of language, the to-be-corrected text includes text of the first type of language, and the first type is different from the second type;
  • the identification module 1202, configured to identify, according to the phoneme corresponding to the input information, a target word based on the second type of language in standard text as a word in a target text, where the phoneme corresponding to the input information includes the first phoneme or a second phoneme;
  • the second phoneme includes a phoneme based on the first type of language determined by recognizing the to-be-recognized speech, or a phoneme corresponding to the to-be-corrected text; the degree of difference between the phoneme of the target word and the phoneme corresponding to the input information is smaller than the degree of difference between other words in the standard text and the phoneme corresponding to the input information;
  • and the target text includes a first text based on the first type of language and a second text based on the second type of language.
  • In a possible implementation, the identification module 1202 is specifically configured to identify, according to the vectorized difference between the phoneme corresponding to the input information and a third phoneme of the standard text, a target word based on the second type of language in the standard text as a word in the target text.
  • In a possible implementation, the identification module 1202 is specifically configured to: vectorize the phoneme corresponding to the input information to obtain a first vector; calculate the vector distance between the first vector and a second vector corresponding to the third phoneme; and, according to the vector distance, identify a target word based on the second type of language in the standard text as a word in the target text.
  • In a possible implementation, the second phoneme is a phoneme based on the first type of language and the third phoneme of the standard text is a phoneme based on the second type of language;
  • or, the third phoneme of the standard text includes phonemes of the first type of language as well as phonemes of the second type of language.
  • In a possible implementation, the identification module 1202 is specifically configured to use a vectorization model to vectorize the phoneme corresponding to the input information, where the vectorization model is constructed based on a neural network.
  • In a possible implementation, the standard text includes polyphonic text, where the polyphonic text has a first pronunciation and a second pronunciation, and the first pronunciation is different from the second pronunciation.
  • the apparatus 1200 further includes:
  • the speech recognition module 1203 is used to perform speech recognition on the to-be-recognized speech by using a speech recognition engine to obtain an initial text;
  • the phonetic annotation (Zhuyin) module 1204, configured to perform phonetic annotation on the initial text to obtain the second phoneme corresponding to the to-be-recognized speech.
  • the target text is a text obtained by performing error correction on the initial text.
  • The information recognition apparatus 1200 according to this embodiment may correspondingly execute the methods described in the embodiments of the present application, and the above and other operations and/or functions of the modules of the information recognition apparatus 1200 respectively implement the corresponding procedures of the methods in the foregoing embodiments; for brevity, they are not repeated here.
  • In the drawings of the apparatus embodiments provided in the present application, the connection relationships between modules indicate that they are communicatively connected, which may be specifically implemented as one or more communication buses or signal lines.
  • the computer program object includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, training device, or data center to another website, computer, training device, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • The computer-readable storage medium may be any available medium that a computer can store, or a data storage device, such as a training device or a data center, that integrates one or more available media.
  • The usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., SSDs), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Machine Translation (AREA)

Abstract

An information recognition method, an apparatus (1200), and a storage medium. The method includes: acquiring user input information, where the input information may be a phoneme (such as a pinyin string), speech, or text, the user input speech may include speech of different types of languages, and the user input text includes text of a first type of language; and then, according to the first phoneme or second phoneme corresponding to the input information, recognizing, as a word in a target text, the target word in standard text that is based on a second type of language and whose phoneme differs least from the phoneme of the input information, where the second phoneme is a phoneme based on the first type of language determined by recognizing the speech, or a phoneme corresponding to the text, and the target text includes text of multiple types of languages. In this way, in human-computer interaction scenarios such as artificial intelligence and intelligent dialogue, text simultaneously containing multiple types of languages can be recognized from the user's input, thereby improving the accuracy of information recognition and the human-computer interaction experience.

Description

一种信息识别方法、装置及存储介质
本申请要求于2020年11月18日提交中国专利局、申请号为202011293842.2、发明名称为“一种信息识别方法、装置及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及信息识别技术领域,尤其涉及一种信息识别方法、装置及存储介质。
背景技术
语音识别技术,是指机器通过识别以及理解过程将语音信号转变为相应的文本或命令的技术,在语音记录(如会议记录等)、语音交互(如人与智能音箱、智能汽车的交互等)、语音检索等场景存在广泛应用。
目前,语音交互系统通常是利用语音识别引擎对用户输入的语音进行识别以及自然语言理解,但是,语音交互系统通常只能对一种类型的语言进行识别,当用户输入的语音中包含多种类型的语言时,语音交互系统识别该语音的准确率较低,比如,当用户输入的语音中包括中文以及英文时,语音交互系统通常只能识别出中文,而英文会根据中文的拼音规则被错误识别。并且,在其它场景中,也存在基于输入的内容难以识别出包含多种类型语言的文本的问题。
发明内容
本申请提供了一种信息识别方法、装置及存储介质,以提高信息识别的准确性。
第一方面,本申请提供了一种信息识别方法,可以获取用户输入的信息,用户输入信息可以是基于第一类型语言的第一音素、待识别语音或待纠错文本,其中,该待识别语音包括第一类型语言以及第二类型语言的语音,而待纠错文本包括第一类型语言的文本,该第一类型与第二类型属于不同类型;然后,根据输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,从而识别得到目标文本,该目标文本包括至少两种类型语言的文本,即基于第一类型语言的第一文本以及基于第二类型语言的第二文本,并且,目标词的音素与输入信息对应的音素之间的差异程度小于标准文本中的其它词与输入信息对应的音素之间的差异程度。其中,当用户输入语音时,该第二音素可以是对待识别语音进行识别所确定的第一类型语言的音素,而当用户输入待纠错文本时,该第二音素可以是该待纠错文本对应的音素,例如可以是通过对该待纠错文本进行注音得到第二音素等。
由于在识别或者纠错过程中,根据用户输入信息对应的基于第一类型语言的音素,在识别出基于第一类型语言的文本的基础上,还能够从标准文本中确定出与该音素之间音素差异程度最小的基于第二类型语言的目标词作为目标文本中的词,而不是根据其中一种类型语言的发音规则对另一种类型语言进行识别,这使得最终识别得到的目标文本 中不仅包含基于第一类型语言的第一文本,还可以同时包括基于第二类型语言的文本,如此,可以识别得到同时包含多种类型语言的文本,从而可以提高信息识别的准确性。
在一种可能的实施方式中,在根据输入信息对应的音素与标准文本的第三音素之间的差异,识别得到目标文本时,具体可以是根据输入信息对应的音素与第三音素之间的向量化差异,将标准文本中基于第二类型语言的目标词识别为目标文本中的词。如此,可以通过音素序列之间的向量化差异大小,来确定最终输出的目标文本。例如,当两个音素序列之间的向量化差异较小时,则可以利用该标准文本来作为目标文本;而当该向量化差异较大时,则可以基于其它方式识别出该第二音素对应的文本。
在一种可能的实施方式中,在根据输入信息对应的音素与第三音素之间的向量化差异,识别得到目标文本时,具体可以是对输入信息对应的音素进行向量化,得到该音素对应的第一向量,并计算出该第一向量与第三音素对应的第二向量之间的向量距离,从而可以根据该向量距离,将标准文本中基于第二类型语言的目标词识别为目标文本中的词。如此,在通过输入信息对应的音素与第三音素的向量距离,可以确定两个音素序列之间的向量化差异,从而可以根据该向量距离的大小,确定输入信息对应的目标文本。
实际应用中,标准文本中可能存在多个词与输入信息对应的音素之间向量距离最小,此时,通过向量距离可能难以从多个词中抉择出一个词作为目标文本中的词,因此,在一种可能的实施方式中,在基于向量距离从标准文本中确定出多个词后,可以构建相似度计算网格,其中,在网格的两个垂直方向上,其中一个方向(如纵轴)为词的音素序列,另一个方向(如横轴)为输入信息的音素序列,这样,基于该网格可以对两个音素序列之间的相似度进行评分,以此可以计算出每个词与输入信息之间的音素相似度。然后,根据每个词与输入信息之间的相似度大小,从中确定出与输入信息之间相似度最大的词,并将该词确定为目标词,作为目标文本中的词。当然,也可以是采用其它可能的方式从多个词中选择一个词作为目标词,本申请对该过程的具体实现并不进行限定。
在一种可能的实施方式中,第二音素具体为基于第一类型语言的音素,而第三音素为基于第二类型语言的音素;或者,第三音素同时包括第一类型语言的音素以及第二类型语言的音素。如此,对用户输入的语音进行识别或者对待纠错文本进行纠错时,可以将待识别语音或待纠错文本中的部分第二音素识别为第三音素,从而实现识别得到的目标文本中同时包括第一类型语言的文本以及第二类型语言的文本。
在一种可能的实施方式中,在对输入信息对应的音素进行向量化时,具体可以是利用向量化模型完成该音素的向量化过程,其中,该向量化模型可以是基于神经网络进行构建。如此,可以实现对于音素序列的快速向量化,并且,音素序列的向量化准确度可以是通过模型训练过程予以保证。
在一种可能的实施方式中,标准文本中可以包含多音文本,该多音文本具有多种不同的发音,例如,假设标准文本包括“AAA”,其发音1可以是该文本逐个字符的读音,其发音2也可以是读成“3A”等。如此,基于用户不同的发音,既可以确定该用户实际期望输入的内容为该标准文本,从而可以提高用户发音的灵活性和自由度。
在一种可能的实施方式中,当用户输入语音时,可以利用语音识别引擎对用户输入的待识别语音进行语音识别,得到初始文本。通常情况下,语音识别引擎是基于一种发音规则对待识别语音进行语音识别,难以识别出多种类型语言的文本,从而语音识别引 擎所得到的初始文本中通常仅包含一种类型语言的文本。因此,在得到初始文本后,可以对该初始文本进行注音,得到该待识别语音对应的第二音素,以便基于该第二音素识别出包含多种类型语言的目标文本,以提高语音识别的准确率。
在一种可能的实施方式中,目标文本具体可以是对初始文本进行纠错而得到的文本,即,对于利用语音识别引擎所识别出的初始文本,根据该初始文本进行所得到的音素序列,可以利用标准文本对该初始文本中的部分内容进行错误纠正,如将初始文本中基于第一类型语言的部分内容纠正为基于第二类型语言的内容。
第二方面,本申请实施例还提供了一种信息识别装置,包括:信息获取模块,用于获取输入信息,所述输入信息包括基于第一类型语言的第一音素、待识别语音或待纠错文本,所述待识别语音包括第一类型语言以及第二类型语言的语音,所述待纠错文本包括所述第一类型语言的文本,所述第一类型与所述第二类型不同;识别模块,用于根据所述输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,所述输入信息对应的音素包括所述第一音素或第二音素,所述第二音素包括识别所述待识别语音所确定的基于第一类型语言的音素或所述待纠错文本对应的音素,所述目标词的音素与所述输入信息对应的音素之间的差异程度小于所述标准文本中的其它词与所述输入信息对应的音素之间的差异程度,所述目标文本包括基于所述第一类型语言的第一文本以及基于所述第二类型语言的第二文本。
在一种可能的实施方式中,所述识别模块,具体用于根据所述输入信息对应的音素与所述标准文本的第三音素之间的向量化差异,将标准文本中基于第二类型语言的目标词识别为目标文本中的词。
在一种可能的实施方式中,所述识别模块,具体用于:
对所述输入信息对应的音素进行向量化,得到第一向量;
计算所述第一向量与所述第三音素所对应的第二向量之间的向量距离;
根据所述向量距离,将所述标准文本中基于第二类型语言的目标词识别为所述目标文本中的词。
在一种可能的实施方式中,所述第二音素为基于第一类型语言的音素,所述标准文本的第三音素为基于第二类型语言的音素;
或,所述标准文本的第三音素包括所述第一类型语言的音素以及所述第二类型语言的音素。
在一种可能的实施方式中,所述识别模块,具体用于利用向量化模型对所述输入信息对应的音素进行向量化,所述向量化模型基于神经网络进行构建。
在一种可能的实施方式中,所述标准文本中包括多音文本,所述多音文本具有第一发音以及第二发音,所述第一发音与所述第二发音不同。
在一种可能的实施方式中,所述装置还包括:
语音识别模块,用于利用语音识别引擎对所述待识别语音进行语音识别,得到初始文本;
注音模块,用于对所述初始文本进行注音,得到所述待识别语音对应的第二音素。
在一种可能的实施方式中,所述目标文本为对所述初始文本进行纠错而得到的文本。
第三方面,本申请实施例还提供了一种装置,该装置包括存储器和处理器,所述存储器、所述存储器进行相互的通信,所述处理器用于执行所述存储器中存储的指令,以执行第一方面中任意一种实现方式所描述的方法。
第四方面,本申请提供了一种芯片,所述芯片包括处理器和芯片接口。其中,所述芯片接口用于接收指令,并传输至所述处理器。所述处理器执行上述指令以执行上述第一方面中任一方面所述的信息识别方法。
第五方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机设备上运行时,使得计算机设备执行上述第一方面所述的方法。
第六方面,本申请提供了一种包含指令的计算机程序产品,当其在计算机设备上运行时,使得计算机设备执行上述第一方面所述的方法。
本申请在上述各方面提供的实现方式的基础上,还可以进行进一步组合以提供更多实现方式。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请中记载的一些实施例,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的附图。
图1为本申请实施例中一种计算机设备的结构示意图;
图2为本申请实施例中一种云服务器的结构示意图;
图3为本申请实施例中一种语音识别方法的流程示意图;
图4为一示例性输入界面示意图;
图5为利用JSON语言记录“NE40E”以及“AAA”的音素序列的示意图;
图6为本申请实施例中一示例性神经网络的示意图;
图7为本申请实施例中一示例性语音交互场景示意图;
图8为本申请实施例中一种语音识别方法的流程示意图;
图9为本申请实施例中根据初始文本生成候选集的示意图;
图10为本申请实施例中向量化模型的示意图;
图11为本申请实施例中通过网格计算两个音素序列之间相似度的示意图;
图12为本申请实施例中一种信息识别装置的结构示意图。
具体实施方式
下面将结合附图,对本发明实施例中的技术方案进行描述。显然,所描述的实施例仅是本说明书一部分实施例,而不是全部的实施例。
在本说明书的描述中“一个实施例”或“一些实施例”等意味着在本说明书的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。
其中,在本说明书的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表 示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本说明书实施例的描述中,“多个”是指两个或多于两个。
在本说明书的描述中,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
实际应用的部分场景中,用户输入的语音可能同时包括多种类型的语言。其中,不同类型的语言,是指按照不同发音规则进行发音的语言,包括不同的语种,如中文、英文、韩文等;还可以包括不属于交际语种的符号,如“——”、“*”等。
此时,语音交互系统通常是基于一种类型语言的发音规则对其余类型语言进行识别,这就使得语音交互系统对于用户输入语音的识别准确率较低。比如,假设用户语音输入的实际内容为“她是一个好girl,值得你好好爱她”,即用户在语音输入中文的过程,穿插了英文(其它语种)表达发音,此时,语音交互系统通常仍然是按照中文的发音规则对用户输入的语音内容进行识别,而对于“girl”的英文发音内容,语音交互系统通常参照标准汉语拼音方案而采用谐音的方式对外语词“girl”进行注音(即用符号表征文本的发音),这就导致语音交互系统很可能会将“girl”识别为汉语中与其具有相似发音的词“狗儿”。此时,基于用户输入的语音,语音交互系统所识别出的语音内容为“它是一个好狗儿,值得你好好爱它”,这与用户实际期望输入的内容存在较大差异,从而使得语音识别的准确率较低。而在语音控制场景中,若语音交互系统所识别出的语音内容错误,则系统会基于错误的语音识别结果执行错误的操作命令,从而会严重影响用户的使用体验。
类似的,当用户期望输入“她是一个好girl”,但是由于用户可能忘记“girl”的拼写而只记得“girl”的发音,则用户可能会根据英文“girl”的汉语音译,输入拼音串“tashiyigehaogouer”,但是目前的识别系统,通常难以根据该拼音串,识别得到“她是一个好girl”的中英混合文本。或者,当用户期望输入“她是一个好girl”时,用户所提供的输入为“它是一个好狗儿”的文本,而基于目标的识别系统,通常也难以将该输入文本纠正为“她是一个好girl”的中英混合文本。
基于此,本申请实施例提供了一种信息识别方法,可以应用于信息识别装置中,该信息识别装置可以根据用户输入的音素、用户输入的语音所包括的音素或者用户输入的文本所对应的音素来进行识别。其中,音素,是根据语音的自然属性划分出来的最小语音单位。一个发音动作,可以形成一个音素。如汉语中的拼音“ma”,其在发音时包含“m”、“a”两个发音动作,是两个音素。相同发音动作发出的音就是同一音素,不同发音动作发出的音就是不同音素。如汉语中的拼音串“mami”(中文例如是“妈咪”),其在发音时依次包含“m”、“a”、“m”、“i”四个发音动作,其中,两个“m”的发音动作相同,为同一音素,而“m”、“a”、“i”的发音动作不同,互为不同音素。
本申请实施例中,信息识别装置可以根据用户输入的第一类型语言的音素、用户输入语音或者用户输入文本所对应的第一类型语言的音素,在标准文本中确定基于第二类型语言的目标词,识别得到同时包含多种类型语言的文本,而不是将多种类型语言的内 容均按照一种类型语言的发音规则进行识别,如此,可以提高信息识别的准确性。
仍以上述用户语音输入“她是一个好girl,值得你好好爱她”为例,信息识别装置在识别该语音时,可以根据中文的音素识别用户按照中文发音输入的内容,同时,信息识别装置可以根据英文的音素识别用户按照英文发音输入的内容“girl”,如此,信息识别装置最终所识别出的文本即为“她是一个好girl”,从而提高了语音识别的准确性。或者,当用户输入拼音串“tashiyigehaogouer”时,信息识别装置可以根据中文以及英文的音素,基于该拼音串可以识别得到文本“她是一个好girl”,以提高文本识别的准确性。或者,当用户输入文本“她是一个好狗儿”时,信息识别装置可以对该输入文本进行注音,得到该输入文本对应的音素序列,从而信息识别装置可以根据英文的音素对该基于中文的音素序列进行识别纠正,得到“她是一个好girl”的识别结果。
下面结合附图,对本申请的实施例进行描述。本申请实施例提供的语音识别方法可以应用于包括但不限于如图1所示的计算机设备100中。
如图1所示,计算机设备100可以包括总线101、至少一个处理器102、至少一个通信接口103和至少一个存储器104。处理器102、存储器104和通信接口103之间通过总线101通信。总线101可以是外设部件互连标准(peripheral component interconnect,PCI)总线、快捷外设部件互连标准(peripheral component interconnect express,PCIe)或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图1中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。通信接口103用于与外部通信,例如接收用户通过数据输入设备(如鼠标、键盘、麦克风等)输入的数据或指令等。
其中,计算机设备100可以是平板电脑或者台式机等个人计算机(personal computer,PC),也可以是工作站或者服务器等。
处理器102可以为中央处理器(central processing unit,CPU)、现场可编程逻辑门阵列(field programmable gate array,FPGA)或者专用集成电路(application specific integrated circuit,ASIC)。计算机设备可以通过该处理器为用户提供计算资源。
存储器104可以包括易失性存储器(volatile memory),例如随机存取存储器(random access memory,RAM)。存储器104还可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM),快闪存储器,机械硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid State Drive,SSD)。
存储器104中存储有程序或指令,例如实现信息识别所需的程序或指令,处理器102执行该程序或指令实现为对象进行建模。当然,存储器104中还可以存储数据,例如缓存用户输入的音素文本或待识别语音、基于该音素或待识别语音所识别出的目标文本,还可以存储信息识别过程中所生成或涉及的其它中间信息(如音素)等。处理器104可以通过读取该存储器104获取音素或待识别语音,并对该待识别语音进行识别,以得到用户所期望输入的文本等。需要说明的是,存储器104可以集成于计算机设备100,也可以独立于计算机设备100。图1所示的计算机设备100的硬件结构并不用于限定计算 机设备100在实际应用中的硬件组成。
存储器104可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data date SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
或者,本申请实施例提供的信息识别方法可以应用于包括但不限于如图2所示的云服务器200中。
如图2所示,该云服务器200可以与用户设备210连接。用户可以在用户设备210上输入音素序列或待识别语音,并由用户设备201将该音素序列或待识别语音发送给云服务器200,并请求或者命令云服务器200进行信息识别。相应的,云服务器200在识别出该音素或待识别语音所对应的目标文本后,可以将该目标文本发送该用户设备210,以便用户设备210将该目标文本呈现给用户;或者,云服务器200可以基于该目标文本确定操作命令,并可以进一步执行该操作命令。
其中,云服务器200包括总线201、处理器202、通信接口203和存储器204。处理器202、存储器204和通信接口203之间通过总线201通信。总线201可以是PCI总线、PCIe或EISA总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图2中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。通信接口203用于与外部通信,例如接收IO操作的操作属性信息和对象属性信息等等。
其中,处理器202可以为CPU。存储器204可以包括易失性存储器,例如RAM。存储器204还可以包括非易失性存储器,例如ROM,快闪存储器,HDD或SSD等。
存储器204中存储有程序或指令,例如存储有实现语音识别所需的程序或指令,处理器202执行该程序或指令以执行上述语音识别方法。当然,存储器204中还可以存储数据,例如存储用户设备210发送的音素序列、待识别语音、所识别出的目标文本等。
为了使得本申请的技术方案更加清楚、易于理解,下面以信息识别装置对待识别语音进行识别为例,对本申请实施例的信息识别方法进行详细说明。其中,该信息识别装置可以通过硬件实现,如可以是上述计算机设备100或者云服务器200;或者,信息识别装置也可以是通过软件实现,如可以是运行在计算机设备/云服务器200上的功能模块等。
参见图3,示出了一种信息识别方法的流程示意图,该方法具体可以包括:
S301:信息识别装置获取输入信息,该输入信息包括基于第一类型语言的第一音素、待识别语音或待纠错文本,该待识别语音至少包括第一类型语言以及第二类型语言,待 纠错文本为基于第一类型语言的文本,其中,第一类型语言与第二类型语言不同。
本实施例中,计算机设备100或者用户设备210所接收到的用户输入信息可以是音素、语音以及文本这三种信息中的任意一种。以用户输入语音为例,计算机设备100或者用户设备210可以向用户提供如图4所示的输入界面,而用户可以在该输入界面中长按语音输入按钮,并输入语音。这样,计算机设备100/用户设备210可以将用户在该输入界面中输入的语音内容,作为待识别语音。或者,在语音控制场景中,计算机设备100/用户设备210可以处于侦听用户语音的状态,并将用户在此期间输入的语音内容作为待识别语音。当然,本实施例中,对于信息识别装置如何获取待识别语音的具体实现方式并不进行限定。或者,在音素输入场景中,用户可以在计算机设备100/用户设备210上直接输入音素序列,如基于中文的拼音串等,从而信息识别装置可以获得用户输入的第一音素,以便于基于该第一音素识别出多种类型语言的文本,该第一音素为基于第一类型语言的音素,如可以是中文拼音等。或者,在文本输入场景中,用户可以在计算机设备100/用户设备210上输入待纠错文本,该待纠错文本中包含一种类型语言的文本(如中文等),从而信息识别装置可以对该文本进行纠错,将该文本中的部分文字纠正为其它类型语言的文字。
其中,在识别用户输入语音的场景中,已有的用于语音识别的装置(如语音识别引擎等)通常是采用单一语种的发音规则来进行信息识别,而实际应用中,用户输入的待识别语音,可能会同时包含多种不同类型语言,比如,用户语音输入的内容可以是“她是一个好girl”,同时包括中文以及英文,或者是其它包括中文与英文混合的行业术语等;又比如,用户语音输入的内容可以是“A/B”(中文发音读作“A斜杠B”),同时包括英文以及符号“/”等。因此,若仍然采用单一语种的发音规则来识别该待识别语音,则针对于该待识别语音的识别结果的准确率通常较低。另外,当用户输入文本或者直接输入音素序列时,通常也难以根据该输入文本或者音素序列识别得到多种类型语言的文本内容。为此,本实施例中,信息识别装置可以通过继续执行步骤S302,以提高信息识别的准确率。
S302:信息识别装置根据输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,其中,该输入信息对应的音素包括第一音素或第二音素,该第二音素包括识别待识别语音所确定的第一类型语言的音素或待纠错文本对应的音素,该目标词的音素与输入信息对应的音素之间的差异程度小于标准文本中其它词与输入信息对应的音素之间的差异程度,该目标文本包括基于第一类型语言的第一文本以及基于第二类型语言的第二文本。
以用户输入的信息为待识别语音为例,信息识别装置在进行信息识别的过程中,可以先确定出该待识别语音所包含的音素(以下称之为第二音素),再根据第二音素识别出不同类型语言对应的文本。值得注意的是,信息识别装置可以先基于一种类型语言(A类型语言)的发音规则对待识别语音中所包含的音素进行识别,这使得其它类型语言对应的音素,通常是通过音译的方式而被识别为A类型语言的音素,因此,在后续的识别过程中,信息识别装置可以基于其它类型语言对应的音素对被音译成A类型语言对应的音素进行重新识别,以提高信息识别的准确性。
作为一种确定第二音素的示例,信息识别装置可以利用语音识别引擎对用户输入的 待识别语音进行语音识别,得到初始文本,然后,信息识别装置可以对初始文本进行注音,可以得到该初始文本对应的音素序列,该音素序列即为待识别语音对应的第二音素。值得注意的是,由于语音识别引擎通常是基于单一类型语言的发音规则对待识别语音进行识别,因此,所识别得到的初始文本的准确性通常较低,相应的,信息识别装置对初始文本进行注音后,所得到的第二音素可能与待识别语音实际对应的音素存在差异,因此,信息识别装置还需要根据注音得到的第二音素,来确定待识别语音对应的实际音素序列,以便确定出其对应的正确文本。
而在其它确定第二音素的示例中,信息识别装置也可以是根据待识别语音的声学特征来确定第二音素。具体实现时,信息识别装置可以先获取统一音素集中各个音素对应的声学特征,其中,该统一音素集中至少包括第一类型语言的音素。然后,信息识别装置可以将待识别语音的声学特征与统一音素集中的各个音素对应的声学特征进行匹配,如此,可以确定出与待识别语音的声学特征相匹配的各个音素,并基于声学特征在待识别语音中的顺序,可以得到相应的音素序列,即为上述第二音素。由于部分用户的发音方式可能与音素的标准发音方式存在差异,这就使得基于声学特征所确定出的第二音素,可能与用户预期的输入并不相符。比如,在语音输入场景中,部分用户可能期望语音输入“聂”,其正确语音输入应为“nie”(“nie”字的汉语拼音),但是由于地域发音习惯,用户实际语音输入为“lie”(汉语拼音),这就使得信息识别装置基于声学特征所识别出的第二音素可能与用户预期的输入存在差异,从而信息识别装置还需要根据匹配得到的第二音素,来确定待识别语音对应的实际音素序列,以便确定出其对应的正确文本。
上述示例中,是以用户输入的信息为语音时信息识别装置识别得到待识别语音对应的第二音素为例,而在其他可能的实施方式中,用户输入的信息也可以是待纠错文本,此时,信息识别装置也可以是通过对待纠错文本进行注音的方式,得到该待纠错文本对应的第二音素。作为一种示例,该纠错文本例如可以是上述利用语音识别引擎对待识别语音进行识别所得到的初始文本。或者,当用户直接输入音素序列时,信息识别装置可以直接获得第一音素。
在获得第一音素或者第二音素后,信息识别装置可以根据基于第一类型语言的第一音素或第二音素,与标准文本对应的基于第二类型语言的第三音素之间的差异,在标准文本中确定出目标词,并将该目标词作为目标文本中的词。示例性的,信息识别装置可以是根据音素之间的向量化差异来确定标准文本中的目标词等,该向量化差异例如可以是通过向量距离进行衡量。具体实现时,以对待识别语音/待纠错文本对应的第二音素进行识别为例,信息识别装置可以对待识别语音/待纠错文本对应的第二音素进行向量化,得到第二音素对应的第一向量,同时,标准文本对应的第三音素也完成向量化(其向量化过程可以预先执行,也可以是在每次确定文本的过程中执行)。然后,信息识别装置可以计算该第一向量与标准文本的第三音素所对应的第二向量之间的向量距离,并根据该向量距离,从标准文本中选取最小向量距离所对应的目标词作为该目标文本中的词,该目标词为基于第二类型语言的词。比如,当第一向量与第二向量之间的向量距离较小(具体为小于预设阈值)时,表征初始文本与目标文本之间的差异较小,从而可以认为所识别的初始文本符合用户的输入预期,则可以将该初始文本作为目标文本;而当第一向量与第二向量之间的向量距离较大(具体为大于该预设阈值)时,表征初始文本 与目标文本之间的差异较大,从而可以根据目标文本对初始文本进行修正,如对初始文本中的部分字/词进行替换等,并将修正后的初始文本作为用户期望输入的目标文本。
实际应用时,信息识别装置可以将第二音素划分为多个候选段,并计算每个候选段与标准文本中的每个词对应的音素之间的向量距离,从而可以根据向量音素之间的向量距离,确定该候选段在标准文本中所对应的词。进一步的,若该候选段在初始文本中的词与在标准文本中对应的词不同时,可以利用标准文本中的词对初始文本中的词进行替换。比如,以用户语音输入“她是一个好girl”,则初始文本可能为“它是一个好狗儿”,当基于向量化处理以及向量距离的计算确定初始文本中“狗儿”对应的音素与标准文本中的词“girl”对应的音素之间的向量距离最小,则可以利用标准文本中的“girl”对初始文本中的词“狗儿”进行替换,所得到的文本为“它是一个好girl”(还可以根据文本的语法和语义进行进一步替换,如将“它”修改为“她”等)。
当标准文本中存在多个词的音素与某个候选段之间的向量距离最小时,信息识别装置可以从这些词中选择其中一个词作为该候选段对应的目标文本。示例性的,信息识别装置可以通过网格对齐过程,分别计算每个词与该候选段之间的相似度,从而可以将具有最大相似度的音素所对应的词作为该候选段对应的目标文本,其具体计算过程可参见后文描述,在此不做赘述。当然,在其它实施例中,也可以是采用其它方式从多个词中确定出一个词作为候选段对应的目标文本,本实施例对此并不进行限定。
其中,标准文本可以是包括多种类型语言的词库(或者可以称为“字典”),其可以是由用户或者技术人员预先输入信息识别装置中,或者被配置为可被信息识别装置获取。示例性的,该词库中的每个词基于其标准发音,可以具有相应的音素序列以及该音素序列对应的音素向量。在确定目标文本的过程中,可以计算初始文本中每个词的音素与词库中每个词的音素之间的向量距离,从而可以根据音素之间的向量距离,针对于初始文本中每个词,均可以在词库中确定出音素之间向量距离最小或者小于预设阈值的词。如此,可以基于所确定出的词对初始文本进行修正,得到用户期望输入的目标文本。
与根据第二音素识别得到目标文本类似,当根据用户输入的基于第一类型语言的第一音素识别目标文本时,信息识别装置也可以是执行上述类似过程,将第一音素划分为多个候选段,并对每个第一音素的候选段进行向量化处理,从而基于第一音素的向量化结果,得到该第一音素对应的目标文本,其具体实现过程,可以参见上述过程的相关之处描述,在此不做赘述。
本实施例中,第二音素可以是第一类型语言的音素,而第三音素可以是第二类型语言的音素。具体的,在语音识别场景中,信息识别装置在确定待识别语音对应的第二音素时,可以先根据待识别语音识别得到第一类型语言的第二音素,由于其中的部分音素可能是基于第二类型语言的音译发音得到,因此,信息识别装置可以利用第二类型语言的第三音素确定第二音素中与第三音素相似的音素,从而针对于所确定出的该部分音素,可以利用第二类型语言的文本作为该部分音素对应的识别文本。比如,假设第二音素为中文音素,第三音素为英文音素,则信息识别装置可以先利用语音识别引擎将待识别语音识别为中文的初始文本,并对该初始文本进行注音,得到基于中文的第二音素;然后,信息识别装置可以计算该第二音素中的各部分音素,与标准文本中基于英文的第三音素之间的相似度,当相似度较高时,则可以利用第三音素对应的英文词汇对初始文 本中第二音素对应的中文词汇进行替换,如此,信息识别装置最终所识别出的目标文本可以同时包括中文与英文。类似的,在文本纠错场景中,信息识别装置根据用户输入的待纠错文本所对应的第二音素,也可以采用上述类似方式,对该待纠错文本中的部分文本内容基于第二类型语言的第三音素进行识别和纠错,具体可参见上述过程描述,在此不做赘述。
当然,在其它可能的实施方式中,第三音素可以同时包括第一类型语言的音素以及第二类型语言的音素,从而可以利用多种类型语言对应的音素对待识别语音对应的第二音素进行识别,以得到待识别语音所对应的目标文本。
作为一种示例,标准文本中的每个词可以具有一种或者多种发音,从而可以具有一种或者多种音素序列。比如,假设标准文本中的一个词为“NE40E”,其一种可能的标准发音所对应的音素序列可以是“EH1N-IY1-SI4-LING2-IY1”;又比如,假设标准文本中的一个词为“AAA”,则其可能的标准发音所对应的音素序列可以是“EY1-EY1-EY1”,或者,也可以是“SAN-EY1”。本实施例中,对于标准文本中具有多种不同发音(即具有多种不同音素序列)的文本,可以称之为多音文本。由于多音文本具有多种发音,因此,针对于同一文本,无论用户基于采用哪种发音进行语音输入或者文本输入,信息识别装置均可以将其准确识别,从而可以在一定程度上提高用户发音/文本输入的自由度、提高信息识别的灵活性。
实际应用中,信息识别装置可以利用JS对象简谱(JavaScript Object Notation,JSON)语言记录标准文本。例如,利用JSON语言记录“NE40E”以及“AAA”的音素序列的具体实现方式可以如图5所示,该标准文本可以视为python语言中的字典(dict)数据类型,包括一系列的<key,value>对(即键值对)。其key值是某一特定词汇的唯一标号,value值是表示该词汇的可能发音所对应的音素序列。基于图5所示示例,“NE40E”只记录了一种发音,具有一种音素序列,而“AAA”则记录了两种发音,具有两种不同音素序列。
作为一种可能的实施方式,信息识别装置在对第二音素进行向量化时,可以是通过预先完成训练的向量化模型对音素进行向量化。比如,信息识别装置可以利用图6所示的神经网络构建向量化模型。如图6所示,该神经网络包括输入层、双层长短期记忆网络(long short-term memory,LSTM)网络以及输出层。其中,该神经网络的输入为音素序列,在输入层对输入的音素序列进行独热(one-hot)编码后,将其送入至双层LSTM网络。在双层LSTM网络中,可以将序列转换成一个固定维度的向量,即完成对该音素序列的向量化,最后再由输出层将该音素序列的向量化信息输出。当然,图6仅作为一种向量化模型的示例,并不限定向量化模型的具体实现局限于该示例。其中,对于向量化模型的训练过程,可以参见后文描述,在此不做详述。
需要说明的是,为便于理解与说明,本实施例中是以识别出包括两种类型语言的目标文本为例,实际应用中,也可以是基于第一音素或者第二音素识别出包括三种以上(包括三种)类型语言的目标文本。例如,待识别语音中,不仅可以包括中文、英文,还可以同时包括韩文或者符号等类型语言。因此,在进行语音识别过程中,还可以基于更多类型语言的音素,对待识别语音进行识别,相应的,所识别得到的目标文本中,可以是包括中文、英文、韩文等三种以上类型语言的文本。由于其与本实施例中上述识别出第 一类型语言以及第二类型语言的目标文本的具体实现过程类似,因此,本实施例对根据第一音素以及第二音素识别出三种以上类型语言的目标的具体实现方式不再进行赘述。
值得注意的是,上述实施例所描述的信息识别装置针对于用户输入的语音所执行的语音识别过程,可以应用于图7所示的语音交互场景,即信息识别装置可以是语音交互系统中的功能模块,并且,信息识别装置可以基于上述过程对用户输入的语音进行语音识别,然后,对识别得到的目标文本进行自然语言理解,确定目标文本的语义,从而语音交互系统可以根据目标文本的语义确定回应的对话语义(其可以是通过对话任务管理模块或者执行模块进行确定)。这样,语音交互系统可以基于该对话语义生成相应的自然语言文本,并基于该自然语言文本合成相应的语音并进行输出,如此可以实现用户与机器之间的语音交互过程。当然,上述实施例所描述的语音识别方法,可以应用于其它可适用的场景中,如与图7类似的语音转写、语音点播、语音拨号场景等。
实际应用时,上述语音识别过程可以集成于语音识别引擎中,这样,语音识别引擎在识别用户输入的语音时,所得到的识别结果的准确性可以较高;或者,也可以是独立于语音识别引擎,并针对于语音识别引擎所识别出的文本进行纠错,从而保证语音交互系统最终所识别出的目标文本的准确性。为便于理解,下面结合对语音识别引擎识别出的文本进行纠错的场景对本申请实施例的技术方案进行详细描述。
参阅图8,本申请实施例提供的语音识别方法,具体可以包括:
S801:预处理模块对语音识别引擎所识别得到的初始文本进行注音,得到初始文本对应的音素序列。
本实施例中,对于用户输入的语音,语音交互系统可以利用语音识别引擎对其进行识别,得到初始文本。由于语音识别引擎通常是采用单一类型语言的发音规则将语音转换成初始文本,因此,当用户输入的语音中包括多种类型语言时,所得到的初始文本的准确性较低。
在对初始文本进行纠错的过程中,预处理模块可以利用预先保存的发音字典为该初始文本进行注音,得到该候选段的音素序列。其中,该发音字典可以由用户预先建立并完成导入。
其中,发音字典中可以包括特定类型语言的词汇以及该词汇所对应的音素,从而在对初始文本进行注音时,预处理模块可以通过字符匹配等方式确定发音字典中与初始文本中的字符相匹配的词汇,从而利用该词汇对应的音素为初始文本中的相应字符进行注音。或者,预处理模块也可以是基于正则表达式来为初始文本中的字符进行注音,例如,其正则表达式可以是“^[a-zA-Z]+[\d]+[\da-zA-Z-]*$”,即为字母+数字+字母的组合,其中,“a-zA-Z”表征从小写字母a至小写字母z以及大写字母A至字大写母Z之间的字母,“\d”表征0至9的数字,“\da-zA-Z-”表征数字之后的字母(从a至z以及从A至Z),则在为满足该正则表达式的字符进行注音时,即按照字母以及数字的发音方式进行逐个注音。进一步的,若初始文本中的部分字符在发音字典中未匹配到相应的完整词汇,则可以对该部分字符进行进一步分词并注音,如可以将该部分字符切分成多个字符,并对各个字符逐个进行注音,以此实现对该部分字符的注音,并得到该部分字符所对应的音素序列。
在一些示例中,该发音字典中可以包括某些词汇的多个发音,即针对于某一词汇,其可以存在多种发音,从而预处理模块在为初始文本中的相应字符进行注音时,可以为该字符注上多种发音,从而该字符可以对应于多种音素序列。同时,预处理模块在注音过程中,还可以进行发音变异处理,比如,考虑到地方发音习惯的差异,在利用发音字典为初始文本中的字符进行注音时,可以基于该音素以及发音习惯的差异,为该字符注上其它发音。如,假设基于发音字典为初始文本中的字符A注音为“nie”,则预处理模块还可以基于“l”与“n”不区分的地方发音习惯为该字符A注上其它发音“lie”等。或者,预处理模块基于发音的相似性为候选段注上其它发音,比如,当初始文本中包括字符串“1401”时,预处理模块为其注音为Y AO–S I–L I NG–Y AO”的同时,还可以注上“IY–S I–L I NG–IY”以表示“E40E”(中文的“1”与英文的“E”发音相似)。
进一步的,本实施例中,预处理模块还可以针对于初始文本中包含的特定字符组合进行特殊的发音处理。比如,当初始文本中包括数字以及字母组合时,预处理模块在识别出这样的非汉字串后,可以按照预设的发音规则对其进行注音。如对于非汉字串“V100”,可以为其注音英文发音“v”+中文发音“一百”,或者可以为其注音英文发音“v”+中文发音“一一零”,或者可以为其注音英文发音“v”+中文发音“幺幺零”等。
S802:候选生成模块基于初始文本生成多个候选段,并为该候选段的音素序列进行向量化处理。
预处理模块在完成对初始文本的注音后,可以将其传递给候选生成模块。候选生成模块可以对初始文本进行最小单元划分。以初始文本为中文为例,候选生成模块可以将该初始文本中的每个汉字作为一个最小单元。示例性的,当初始文本中包括数字串、字母串、外语单词(如英文单词等),可以将其作为一个完整单元,避免这些字符与汉字产生交叉、跨越。比如,假设初始文本为“帮我转接恩1401”时(其实际输入可以是“帮我转接NE40E”),可以将“帮”、“我”、“转”、“接”、“恩”、“1401”分别识别为一个最小单元,如图9所示。
然后,基于该最小单元,候选生成模块可以生成多个相同长度的候选段,得到候选集,该候选集中不同候选段所包括的最小单元的数量可以相同。比如,候选生成模块可以以2个最小单元的长度生成候选,如图9所示,所得到的候选分别为“帮我”、“我转”、“转接”、“接恩”、“恩1401”。当然,实际应用中,候选生成模块也可以是生成其它长度的多个候选段(如由3或4个候选长度组成的候选段等),同一候选集中可以包括不同长度的候选段等,本实施例对此并不进行限定。示例性的,候选生成模块可以基于初始文本同时生成多个候选集,不同候选集中所包括的候选段的长度不同。为便于说明,下面以对一个候选集的处理过程为例进行示例性说明。特别的,预处理模块还可以根据术语语料库对候选集中的候选段进行术语判别,其中,该术语语料库可以由用户预先完成训练并导入,其可以包括多个术语,如行业术语,自定义术语等。
实际应用中,每个候选集不仅可以包括多个候选段,还可以包括各个候选段在初始文本中的位置信息(如偏移量)、候选段长度、该候选段所对应的音素序列(基于预处理模块的注音过程得到,可以是一个或者多个音素序列)。
针对于候选集中的每个候选段,可以利用预先完成训练的向量化模型对该候选段对应的音素序列进行向量化处理,相应的,当候选段具有多个音素序列时,其对应的向量 也为多个。这样,每个候选段可以具有文本信息、其在初始文本中的位置信息、候选段长度信息、以及音素序列的向量化信息
作为一种示例,候选生成模块可以利用如图6所示的向量化模型对音素序列进行向量化,其具体实现可参见前述实施例中的相关之处描述,在此不做赘述。
其中,对于向量化模型的训练过程,可以是由语音交互系统或者其它装置实现,其训练过程具体可以是:
(1)创建如图10所示的神经网络。其中,图10所示的神经网络可以是在图6所示的神经网络的基础上,在输出层增加向量距离计算层以及Sigmoid函数层。其输入层为两个音素序列对应的独热编码(即音素序列1对应的独热编码1以及音素序列2对应的独热编码2),这两个音素序列对应的独热编码经过双层LSTM网络的向量化处理后,可以得到这两个音素序列分别对应的向量化结果(即音素序列1对应的向量1以及音素序列2对应的向量2)。然后,这两个向量在输出层经过向量距离以及Sigmoid函数计算,可以输出0或者1,用于表征音素序列1与音素序列2是否为同一音素序列,其中,1可以表征是同一音素序列,0可以表征不是同一音素序列。
(2)获取训练模型所需的样本数据,该样本数据可以包括正例数据以及反例数据。
其中,正例数据包括作为模型输入的不同类型语言相互匹配的音素序列以及作为模型输出的数值。
以正例数据包括中文对应的音素序列以及英文对应的音素序列为例,其正例数据可以如表1所示。
表1
Figure PCTCN2021103287-appb-000001
例如,其具体的数据实例可以如下表2所示:
表2
Figure PCTCN2021103287-appb-000002
Figure PCTCN2021103287-appb-000003
相应的,反例数据同样包括作为模型输入的不同类型语言不匹配的音素序列以及作为模型输出的数值,如表3所示:
表3
Figure PCTCN2021103287-appb-000004
示例性的,可以基于正例数据构造反例数据。以构造英文人名的反例数据为例,可以从英文人名集合中任意选择一条英文人名以及该英文人名所对应的中文音译人名X,然后,从该英文人名集合所对应的所有中文音译人名中选择一条与中文音译人名X没有相同音素的中文音译人名Y,从而基于该英文人名以及所选择的中文音译人名Y可以构成一条反例数据。基于类似方式,可以构建多条反例数据。实际应用中,正例数据的数量与反例数据的数量可以相同或者相近。
(3)利用样本数据对(1)中的神经网络进行训练。
例如,可以将正例数据中的英文人名对应的音素序列以及中文音译人名对应的音素序列输入至神经网络的输入层,这两个音素序列经过独热编码后,输入至双层LSTM网络,然后,LSTM网络分别输出这两个音素序列分别对应的向量,然后,输出层可以计算这两个音素序列的向量之间的向量距离,并通过Sigmoid函数确定这两个向量距离所对应的模型输出结果,从而根据该模型输出结果以及正例数据中所期望的模型输出结果(即为1)对双层LSTM网络中的参数进行调整,并利用下一条样本数据继续对经过参数调整后的神经网络进行训练。
通过正例数据以及反例数据的迭代训练后,可以得到如图6所示的神经网络,从而可以完成向量化模型的训练。
S803:针对于候选集中的每个候选段,评分模块根据每个候选段对应的音素序列的向量化信息,利用距离模型确定出标准文本集中至少一个目标词,该目标词的音素序列与该候选段的音素序列之间的向量距离小于预设阈值。
其中,假设候选集为{c i,0≤i<M},其中,M为候选集中所包括的候选段的数量,c i表征候选集中的第i个候选段。标准文本集为{t j,0≤j<N},其中,N为标准文本集所包括的词的数量,t j表征标准文本集中的第j个词。则,评分模块需要执行至少M*N次向量距离的计算。
作为一种示例,可以采用下述公式(1)计算两个音素序列之间的向量距离:
Figure PCTCN2021103287-appb-000005
其中,dist i,j表征两个音素序列之间的向量距离,并且,dist i,j越小,表征候选段与标准文本中的相应词汇之间越相近(差异越小),反之,dist i,j越大,表征候选段与标准文本中的相应词汇之间的差异越大;L表征向量维度。
实际应用中,标准文本集中所包括的词的数量较多,而且,其包含的较多数量的词与候选段之间的差异较大,对于确定候选段对应的纠错文本的意义较小。基于此,本实施例中,评分模块可以根据向量距离,为每个候选段过滤标准文本集中的词,从而可以从过滤得到的词中确定出该候选段对应的纠错文本。
具体实现时,评分模块可以设定阈值r,并从标准文本集中过滤出小于该阈值r的dist i,j所对应的词,从而达到压缩标准文本集的目的,如此,可以有效减少后续计算过程中所需的计算量。
S804:评分模块利用对齐模型计算候选集中每个候选段与目标词之间的相似度评分。
实际应用中,针对于同一候选段,在标准文本集中可能存在多个目标词与该候选段在向量距离上相距较近,此时,仅仅利用向量距离可能难以从多个目标词中选择合适的目标词作为该候选段的纠错文本。为此,本实施例中,评分模块可以利用对齐模型以及音素混淆矩阵计算候选段与目标词的音素序列之间的相似度评分,以便从多个目标词中选择作为候选段的纠错文本的目标词。
其中,两个音素在发音上可能差异较小,也可能差异较大,因此,可以利用混淆度来衡量不同音素之间的差异程度(或者说两个音素之间的发音差异),其可以被定义为一个大于等于零的浮点数。如果两个音素完全相同,则混淆度为0.0;如果两个音素差异较大,则混淆度可以为较大的数值。实际应用中,为了方便运算和理解,可以将音素混淆度的数值范围归一化到[0.0,1.0]之间,但也可以根据模型输出而定,本实施例对此并不进行限定。
而音素混淆矩阵,即为记录了不同音素之间的混淆度的矩阵。
作为一种示例,可以基于图10所示的完成训练的神经网络计算出两个音素之间的混淆度,此时,作为神经网络输入的两个音素序列均仅包含一个音素,这样的优势在于音素混淆度与音素向量化过程采用同源数据,从而使得音素混淆度矩阵可以通过数据的方式来更新。例如,音素混淆矩阵的局部示例,可以如下表4所示:
表4
  AI B EH I IY S T
AI 0.0 1.447 0.118 0.097 0.113 0.178 0.511
B   0.0 1.504 1.459 1.484 1.416 1.543
EH     0.0 0.092 0.105 0.226 0.571
I       0.0 0.049 0.247 0.580
IY         0.0 0.279 0.597
S           0.0 0.420
T             0.0
当然,实际应用中,上述音素混淆矩阵中的不同音素之间的混淆度也可以是由技术人员进行人工设定,或者是采用相应的语音分析算法进行语音信号之间的相似性分析,并根据语音信号相似性的评估数值确定音素之间的混淆度等。本实施例中,对于如何确定音素之间的混淆度的具体实现方式并不进行限定。
在基于对齐模型以及音素混淆矩阵对候选段与目标词之间的相似度进行评分时,可以构建在网格中对两个音素之间的相似度进行计算。例如,以初始文本为“毕替埃斯”,其对应的音素序列为“B I–T I–AI–S I”,标准文本为“BTS”,其对应的音素序列为“B IY–T IY–EH S”为例,可以基于如图11所示的网格进行计算。
如图11所示,从左下角的网格点至右上角的网格点,其存在较多可选的路径。但是,通常情况下,路径越贴合对角线,则表明两个音素序列越相似,特别的,当两个音素序列相同时,其最佳路径即为对角线所在路径。因此,在图11所示的网格中,可以寻找最贴合对角线的路径。
具体实现时,可以基于网格中各个网格点的相似度评分来确定最佳路径。其中,网格中的每个网格点,均具有音素相似性的基础评分,其基础评分可以基于音素混淆矩阵中记录的两个音素之间的混淆度进行确定。其中,音素之间的混淆度越大,音素之间的发音差异越大,从而音素相似性的评分越小。根据动态规划原理,网格点(i,j)的得分,与其自身的基础评分、(i-1,j)、(i,j-1)以及(i-1,j-1)的基础评分有关,具体可以是通过如下公式(2)进行计算:
s i,j=max(s i-1,j,s i,j-1,s i-1,j-1)+c i,j   (2)
其中,s i,j表征网格点(i,j)的得分,s i-1,j表征网格点(i-1,j)的得分,s i,j-1表征网格点(i,j-1)的得分,s i-1,j-1表征网格点(i-1,j-1)的得分,c i,j表征网格点(i,j)的基础得分。
如此,从图11所示网络的左下角出发,可以寻找相似度最大的路径,从而右上角的网格点(也即路径的终点)的相似度评分,即为这两个音素序列之间的相似度评分。
实际应用中,由于不同目标词对应的音素序列长度不同,因此,每次在计算出两个音素序列之间的相似度评分后,还可以对该相似度评分进行归一化,以便屏蔽音素序列的不同长度对于相似度评分的影响(例如,音素序列越长,路径终点的相似度评分可能越高)。
值得注意的是,上述实施方式中,是以相似度评分来衡量两个音素序列之间的相似度,在其它可能的实施方式中,也可以是基于两个音素序列之间的向量距离来衡量两个音素序列之间的相似度,此时,路径终点的向量距离越小,表征两个音素序列越相似。由于其与上述实施方式的具体实现构思相近,本实施例对基于向量距离衡量音素序列相似程度的具体实现过程不再进行赘述。并且,上述确定两个音素序列之间的相似度的执行过程,可以被封装成一个对齐模型实现。
S805:替换模块根据候选段与目标词之间的相似度评分,选择相应的目标词对候选段进行替换。
本实施例中,针对于初始文本中的每个候选段,可以利用其对应的多个目标词按照 相似度评分进行排序,并选择相似度评分最高的目标词对作为该候选段的纠错文本。由于每个候选段可以同时记录有原始文本、其在初始文本中的位置信息、候选段长度信息,则替换模块可以利用该纠错文本直接对初始文本中的相应候选段进行替换。
在一些可能的实施方式中,替换模块还可以预先根据最大相似度评分来确定是否对候选段进行替换。具体的,替换模块在确定出最大相似度评分后,可以比较最大相似度评分是否大于预设评分阈值。若大于,则可以利用最大相似度评分所对应的目标词作为纠错文本,并对相应的候选段进行替换;而若不大于,表明该目标词与候选段之间的音素差异较大,此时,替换模块可以不利用该最大相似度评分所对应的目标词对候选段进行替换,即利用该候选段作为目标文本中的词,如此,可以降低利用错误的目标词替换正确的候选段的可能性,降低误报概率。
实际应用中,部分候选段发生语音识别错误的可能性较低,比如,当初始文本中为“帮我转接恩1401”时,其对应的候选段可以包括“帮我”、“我转”、“转接”、“接恩”、“恩1401”,则,对于其中的部分候选段“帮我”、“转接”等候选段,其在实际应用中被语音识别引擎正确识别的可能性较高,因此,在一些可能的实施方式中,可以对基于初始文本所得到的部分候选段进行过滤,具体是过滤准确识别的可能性较高的候选段(如“帮我”、“转接”等),而剩余的候选段被语音识别引擎准确识别的可能性较低,可以通过步骤S803至步骤S805确定是否利用相应的目标词对剩余的候选段进行文本替换。如此,可以有效减少步骤S803至步骤S805中参与计算的候选段数量,从而不仅可以减少确定目标文本所需的计算量,而且也可以在一定程度上提高确定目标文本的效率。
值得注意的是,上述实施方式中,是以对一个候选集中的候选段进行处理为例进行示例性说明,实际应用中,语音交互系统可以基于初始文本得到多个不同的候选集,不同候选集中的候选段长度不同,从而语音交互系统可以基于不同候选集确定出不同的目标文本。然后,语音交互系统可以从不同目标文本中确定出作为最终的语音识别结果的目标文本。比如,语音交互系统可以分别计算各个目标文本之间的文本相似度,并进一步计算出每个目标文本与其它文本之间的相似度之和(或者相似度平均值),从而可以将最大相似度之和(或者最大相似度平均值)所对应的目标文本作为最终的语音识别结果的目标文本。当然,实际应用时,也可以是采用其它可能的方式从多个目标文本中确定出作为最终的语音识别结果的目标文本,本实施例对此并不进行限定。
上文中结合图1至图11,详细描述了本申请所提供的信息识别方法,下面将结合附图,描述根据本申请所提供的信息识别装置。
参见图12所示的对象建模装置的结构示意图,该装置1200包括:
信息获取模块1201,用于获取输入信息,所述输入信息包括基于第一类型语言的第一音素、待识别语音或待纠错文本,所述待识别语音包括第一类型语言以及第二类型语言的语音,所述待纠错文本包括所述第一类型语言的文本,所述第一类型与所述第二类型不同;
识别模块1202,用于根据所述输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,所述输入信息对应的音素包括所述第一音素或第二音素,所述第二音素包括识别所述待识别语音所确定的基于第一类型语言的音素或所述 待纠错文本对应的音素,所述目标词的音素与所述输入信息对应的音素之间的差异程度小于所述标准文本中的其它词与所述输入信息对应的音素之间的差异程度,所述目标文本包括基于所述第一类型语言的第一文本以及基于所述第二类型语言的第二文本。
在一种可能的实施方式中,所述识别模块1202,具体用于根据所述输入信息对应的音素与所述标准文本的第三音素之间的向量化差异,将标准文本中基于第二类型语言的目标词识别为目标文本中的词。
在一种可能的实施方式中,所述识别模块1202,具体用于:
对所述输入信息对应的音素进行向量化,得到第一向量;
计算所述第一向量与所述第三音素所对应的第二向量之间的向量距离;
根据所述向量距离,将所述标准文本中基于第二类型语言的目标词识别为所述目标文本中的词。
在一种可能的实施方式中,所述第二音素为基于第一类型语言的音素,所述标准文本的第三音素为基于第二类型语言的音素;
或,所述标准文本的第三音素包括所述第一类型语言的音素以及所述第二类型语言的音素。
在一种可能的实施方式中,所述识别模块1202,具体用于利用向量化模型对所述输入信息对应的音素进行向量化,所述向量化模型基于神经网络进行构建。
在一种可能的实施方式中,所述标准文本中包括多音文本,所述多音文本具有第一发音以及第二发音,所述第一发音与所述第二发音不同。
在一种可能的实施方式中,所述装置1200还包括:
语音识别模块1203,用于利用语音识别引擎对所述待识别语音进行语音识别,得到初始文本;
注音模块1204,用于对所述初始文本进行注音,得到所述待识别语音对应的第二音素。
在一种可能的实施方式中,所述目标文本为对所述初始文本进行纠错而得到的文本。
根据本申请实施例的信息识别装置1200可对应于执行本申请实施例中描述的方法,并且信息识别装置1200的各个模块的上述和其它操作和/或功能分别为了实现前述实施例中的各个方法的相应流程,为了简洁,在此不再赘述。
另外需说明的是,以上所描述的实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能 都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。
但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件对象的形式体现出来,该计算机软件对象存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、ROM、RAM、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,训练设备,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序对象的形式实现。
所述计算机程序对象包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。
所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、训练设备或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、训练设备或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的训练设备、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如SSD)等。

Claims (18)

  1. 一种信息识别方法,其特征在于,所述方法包括:
    获取输入信息,所述输入信息包括基于第一类型语言的第一音素、待识别语音或待纠错文本,所述待识别语音包括第一类型语言以及第二类型语言的语音,所述待纠错文本包括所述第一类型语言的文本,所述第一类型与所述第二类型不同;
    根据所述输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,所述输入信息对应的音素包括所述第一音素或第二音素,所述第二音素包括识别所述待识别语音所确定的基于第一类型语言的音素或所述待纠错文本对应的音素,所述目标词的音素与所述输入信息对应的音素之间的差异程度小于所述标准文本中的其它词与所述输入信息对应的音素之间的差异程度,所述目标文本包括基于所述第一类型语言的第一文本以及基于所述第二类型语言的第二文本。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,包括:
    根据所述输入信息对应的音素与所述标准文本的第三音素之间的向量化差异,将标准文本中基于第二类型语言的目标词识别为目标文本中的词。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述输入信息对应的音素与所述标准文本的第三音素之间的向量化差异,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,包括:
    对所述输入信息对应的音素进行向量化,得到第一向量;
    计算所述第一向量与所述第三音素对应的第二向量之间的向量距离;
    根据所述向量距离,将所述标准文本中基于第二类型语言的目标词识别为所述目标文本中的词。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,所述第二音素为基于第一类型语言的音素,所述标准文本的第三音素为基于第二类型语言的音素;
    或,所述标准文本的第三音素包括所述第一类型语言的音素以及所述第二类型语言的音素。
  5. 根据权利要求3或4所述的方法,其特征在于,所述对所述输入信息对应的音素进行向量化,包括:
    利用向量化模型对所述输入信息对应的音素进行向量化,所述向量化模型基于神经网络进行构建。
  6. 根据权利要求1至5任一项所述的方法,其特征在于,所述标准文本中包括多音文本,所述多音文本具有第一发音以及第二发音,所述第一发音与所述第二发音不同。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述方法还包括:
    利用语音识别引擎对所述待识别语音进行语音识别,得到初始文本;
    对所述初始文本进行注音,得到所述待识别语音对应的第二音素。
  8. 根据权利要求7所述的方法,其特征在于,所述目标文本为对所述初始文本进行纠错而得到的文本。
  9. 一种信息识别装置,其特征在于,所述装置包括:
    信息获取模块,用于获取输入信息,所述输入信息包括基于第一类型语言的第一音 素、待识别语音或待纠错文本,所述待识别语音包括第一类型语言以及第二类型语言的语音,所述待纠错文本包括所述第一类型语言的文本,所述第一类型与所述第二类型不同;
    识别模块,用于根据所述输入信息对应的音素,将标准文本中基于第二类型语言的目标词识别为目标文本中的词,所述输入信息对应的音素包括所述第一音素或第二音素,所述第二音素包括识别所述待识别语音所确定的基于第一类型语言的音素或所述待纠错文本对应的音素,所述目标词的音素与所述输入信息对应的音素之间的差异程度小于所述标准文本中的其它词与所述输入信息对应的音素之间的差异程度,所述目标文本包括基于所述第一类型语言的第一文本以及基于所述第二类型语言的第二文本。
  10. 根据权利要求9所述的装置,其特征在于,所述识别模块,具体用于根据所述输入信息对应的音素与所述标准文本的第三音素之间的向量化差异,将标准文本中基于第二类型语言的目标词识别为目标文本中的词。
  11. 根据权利要求10所述的装置,其特征在于,所述识别模块,具体用于:
    对所述输入信息对应的音素进行向量化,得到第一向量;
    计算所述第一向量与所述第三音素所对应的第二向量之间的向量距离;
    根据所述向量距离,将所述标准文本中基于第二类型语言的目标词识别为所述目标文本中的词。
  12. 根据权利要求9至11任一项所述的装置,其特征在于,所述第二音素为基于第一类型语言的音素,所述标准文本的第三音素为基于第二类型语言的音素;
    或,所述标准文本的第三音素包括所述第一类型语言的音素以及所述第二类型语言的音素。
  13. 根据权利要求11或12所述的装置,其特征在于,所述识别模块,具体用于利用向量化模型对所述输入信息对应的音素进行向量化,所述向量化模型基于神经网络进行构建。
  14. 根据权利要求9至13任一项所述的装置,其特征在于,所述标准文本中包括多音文本,所述多音文本具有第一发音以及第二发音,所述第一发音与所述第二发音不同。
  15. 根据权利要求9至14任一项所述的装置,其特征在于,所述装置还包括:
    语音识别模块,用于利用语音识别引擎对所述待识别语音进行语音识别,得到初始文本;
    注音模块,用于对所述初始文本进行注音,得到所述待识别语音对应的第二音素。
  16. 根据权利要求15所述的装置，其特征在于，所述目标文本为对所述初始文本进行纠错而得到的文本。
  17. 一种装置,其特征在于,包括处理器和存储器;
    所述处理器、所述存储器进行相互的通信;
    所述处理器用于执行所述存储器中存储的指令,执行如权利要求1至8任一项所述的方法。
  18. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行如权利要求1至8中任一项所述的方法。
PCT/CN2021/103287 2020-11-18 2021-06-29 一种信息识别方法、装置及存储介质 WO2022105235A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011293842.2A CN112489626B (zh) 2020-11-18 2020-11-18 一种信息识别方法、装置及存储介质
CN202011293842.2 2020-11-18

Publications (1)

Publication Number Publication Date
WO2022105235A1 true WO2022105235A1 (zh) 2022-05-27

Family

ID=74931723

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103287 WO2022105235A1 (zh) 2020-11-18 2021-06-29 一种信息识别方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN112489626B (zh)
WO (1) WO2022105235A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783419A (zh) * 2022-06-21 2022-07-22 深圳市友杰智新科技有限公司 结合先验知识的文本识别方法、装置、计算机设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489626B (zh) * 2020-11-18 2024-01-16 华为技术有限公司 一种信息识别方法、装置及存储介质
CN113205805B (zh) * 2021-03-18 2024-02-20 福建马恒达信息科技有限公司 一种语音插件辅助的表格便捷操作方法
CN113160795B (zh) * 2021-04-28 2024-03-05 平安科技(深圳)有限公司 语种特征提取模型训练方法、装置、设备及存储介质
CN113345442B (zh) * 2021-06-30 2024-06-04 西安乾阳电子科技有限公司 语音识别方法、装置、电子设备及存储介质
CN114596840B (zh) * 2022-03-04 2024-06-18 腾讯科技(深圳)有限公司 语音识别方法、装置、设备及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101415259A (zh) * 2007-10-18 2009-04-22 三星电子株式会社 嵌入式设备上基于双语语音查询的信息检索系统及方法
CN101727901A (zh) * 2009-12-10 2010-06-09 清华大学 嵌入式系统的汉英双语语音识别方法
CN107731228A (zh) * 2017-09-20 2018-02-23 百度在线网络技术(北京)有限公司 英文语音信息的文本转换方法和装置
CN109492202A (zh) * 2018-11-12 2019-03-19 浙江大学山东工业技术研究院 一种基于拼音的编码与解码模型的中文纠错方法
US20190378495A1 (en) * 2019-07-23 2019-12-12 Lg Electronics Inc. Artificial intelligence apparatus for recognizing speech of user using personalized language model and method for the same
CN112489626A (zh) * 2020-11-18 2021-03-12 华为技术有限公司 一种信息识别方法、装置及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2533370A (en) * 2014-12-18 2016-06-22 Ibm Orthographic error correction using phonetic transcription
US9852728B2 (en) * 2015-06-08 2017-12-26 Nuance Communications, Inc. Process for improving pronunciation of proper nouns foreign to a target language text-to-speech system
CN111192570B (zh) * 2020-01-06 2022-12-06 厦门快商通科技股份有限公司 语言模型训练方法、系统、移动终端及存储介质
CN111816165A (zh) * 2020-07-07 2020-10-23 北京声智科技有限公司 语音识别方法、装置及电子设备
CN111933129B (zh) * 2020-09-11 2021-01-05 腾讯科技(深圳)有限公司 音频处理方法、语言模型的训练方法、装置及计算机设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101415259A (zh) * 2007-10-18 2009-04-22 三星电子株式会社 嵌入式设备上基于双语语音查询的信息检索系统及方法
CN101727901A (zh) * 2009-12-10 2010-06-09 清华大学 嵌入式系统的汉英双语语音识别方法
CN107731228A (zh) * 2017-09-20 2018-02-23 百度在线网络技术(北京)有限公司 英文语音信息的文本转换方法和装置
CN109492202A (zh) * 2018-11-12 2019-03-19 浙江大学山东工业技术研究院 一种基于拼音的编码与解码模型的中文纠错方法
US20190378495A1 (en) * 2019-07-23 2019-12-12 Lg Electronics Inc. Artificial intelligence apparatus for recognizing speech of user using personalized language model and method for the same
CN112489626A (zh) * 2020-11-18 2021-03-12 华为技术有限公司 一种信息识别方法、装置及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783419A (zh) * 2022-06-21 2022-07-22 深圳市友杰智新科技有限公司 结合先验知识的文本识别方法、装置、计算机设备
CN114783419B (zh) * 2022-06-21 2022-09-27 深圳市友杰智新科技有限公司 结合先验知识的文本识别方法、装置、计算机设备

Also Published As

Publication number Publication date
CN112489626A (zh) 2021-03-12
CN112489626B (zh) 2024-01-16

Similar Documents

Publication Publication Date Title
WO2022105235A1 (zh) 一种信息识别方法、装置及存储介质
US10176804B2 (en) Analyzing textual data
CN107305768B (zh) 语音交互中的易错字校准方法
CN107729313B (zh) 基于深度神经网络的多音字读音的判别方法和装置
US9672817B2 (en) Method and apparatus for optimizing a speech recognition result
US9502036B2 (en) Correcting text with voice processing
KR102390940B1 (ko) 음성 인식을 위한 컨텍스트 바이어싱
US11043213B2 (en) System and method for detection and correction of incorrectly pronounced words
CN102725790B (zh) 识别词典制作装置及声音识别装置
CN113811946A (zh) 数字序列的端到端自动语音识别
CN112599128B (zh) 一种语音识别方法、装置、设备和存储介质
JP5799733B2 (ja) 認識装置、認識プログラムおよび認識方法
US11615779B2 (en) Language-agnostic multilingual modeling using effective script normalization
US20240185840A1 (en) Method of training natural language processing model method of natural language processing, and electronic device
CN112927679A (zh) 一种语音识别中添加标点符号的方法及语音识别装置
CN111199726A (zh) 基于语音成分的细粒度映射的语言语音处理
CN110010136A (zh) 韵律预测模型的训练和文本分析方法、装置、介质和设备
KR20230009564A (ko) 앙상블 스코어를 이용한 학습 데이터 교정 방법 및 그 장치
US20230044266A1 (en) Machine learning method and named entity recognition apparatus
US10614170B2 (en) Method of translating speech signal and electronic device employing the same
WO2023045186A1 (zh) 意图识别方法、装置、电子设备和存储介质
US12026982B2 (en) Handwriting recognition with language modeling
JP2024038566A (ja) キーワード検出装置、キーワード検出方法、およびキーワード検出プログラム
WO2022185437A1 (ja) 音声認識装置、音声認識方法、学習装置、学習方法、及び、記録媒体
JP2011175046A (ja) 音声検索装置および音声検索方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21893387

Country of ref document: EP

Kind code of ref document: A1