WO2022134984A1 - Hot word recognition method, apparatus, medium and electronic device - Google Patents

Hot word recognition method, apparatus, medium and electronic device

Info

Publication number
WO2022134984A1
Authority
WO
WIPO (PCT)
Prior art keywords
hot word, score, word, path
Prior art date
Application number
PCT/CN2021/132124
Other languages
English (en)
French (fr)
Inventor
姚佳立
Original Assignee
北京有竹居网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京有竹居网络技术有限公司
Publication of WO2022134984A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • the present disclosure relates to the field of speech recognition, and in particular, to a hot word recognition method, apparatus, medium, computer program product and electronic device.
  • the present disclosure provides a hot word recognition method, including: performing speech decoding on speech information input by a user to obtain a path score of each decoding path; obtaining a hot word excitation score of a hot word on each decoding path, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word; and identifying the hot word based on the difference between the path score and the hot word excitation score.
  • the present disclosure provides a hot word recognition apparatus, comprising: a path score acquisition module for performing speech decoding on speech information input by a user to obtain a path score of each decoding path; a hot word excitation score acquisition module for obtaining the hot word excitation score of the hot word on each decoding path, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word; and a hot word recognition module for identifying the hot word based on the difference between the path score and the hot word excitation score.
  • the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing apparatus, implements the steps of the method described in the first aspect of the present disclosure.
  • the present disclosure provides an electronic device, comprising: a storage device on which a computer program is stored; and a processing device for executing the computer program in the storage device, so as to implement the steps of the method described in the first aspect of the present disclosure.
  • the present disclosure provides a computer program product, comprising: instructions that, when executed by a processing device, implement the steps of the method described in the first aspect of the present disclosure.
  • FIG. 1A is a flowchart of a hot word recognition method according to an embodiment of the present disclosure.
  • FIG. 1B is a flowchart of an example method of obtaining a hot word excitation score, according to some embodiments.
  • FIG. 1C is a flowchart of an example method of calculating an adaptive excitation probability, according to some embodiments.
  • FIG. 2 is a schematic diagram of the adaptive excitation probability of a participle under different values of the scaling factor scale.
  • FIG. 3 is a schematic block diagram of a hot word recognition apparatus according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1A is a flowchart of a hot word recognition method according to an embodiment of the present disclosure. The method can be applied to any electronic device with processing capability. As shown in FIG. 1A, the method includes the following steps 11 to 13.
  • in step 11, speech decoding is performed on the speech information input by the user to obtain a path score of each decoding path.
  • in some embodiments, all language-level information is combined into a language model before speech recognition, and speech decoding is then performed on the decoding space composed of the language model. For example, the speech information input by the user is traversed through each decoding path in the decoding space, so as to obtain the path score of each decoding path.
  • the present disclosure does not limit the construction of the language model, the construction of the decoding space, the calculation method of the path score of the decoding path, and the like.
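Purely for orientation (the disclosure deliberately leaves these details open), one simple convention is to take a path score as the accumulated negative-log probability of the words along a decoding path; the probabilities below are made-up values:

```python
import math

# Toy illustration only: the disclosure does not prescribe how path scores
# are computed. Here a path score is the accumulated negative-log
# probability of the words along a decoding path (lower = more likely).

def path_score(word_probs):
    return sum(-math.log(p) for p in word_probs)

# Two hypothetical decoding paths with made-up per-word probabilities:
likely = path_score([0.5, 0.2])    # -ln(0.5) - ln(0.2) = -ln(0.1)
rare = path_score([0.5, 0.01])     # a less probable path scores higher
print(likely < rare)               # True
```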
  • in step 12, the hot word excitation score of the hot word on each decoding path is obtained, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word.
  • a hot word refers to a word that needs to be recognized more easily than other words in speech recognition.
  • for example, if the name "Jia Li" is determined to be a hot word, then in speech recognition, when the voice "jia li" referring to a person's name is encountered, it needs to be more easily recognized as the name "Jia Li" rather than as a homophonous word.
  • the hot word excitation score is used to intervene in the recognition probability of the hot word so that this probability is enhanced, thereby improving the probability of the hot word being correctly recognized.
  • the adaptive excitation probability means that the excitation probability is adaptively adjusted rather than fixed.
  • FIG. 1B shows a flowchart of an example method of obtaining a hot word incentive score, according to some embodiments.
  • the hot word excitation scores of hot words on each decoding path can be obtained in the following manner.
  • first, the hot word is segmented (step 121). That is, according to the word-formation rules of the language, the hot word is divided into individual minimum-unit words. For example, the hot word "jiali" is divided after word segmentation into the following two participles: "jia" and "li".
  • the adaptive excitation probability of each participle is calculated (step 122).
  • the adaptive excitation probability can be calculated as follows. First, the hot word path probability of each participle is calculated (step 1221). Then, each hot word path probability is adaptively adjusted by using the adaptive adjustment coefficient corresponding to it, to obtain the adaptive excitation probability of each participle (step 1222), wherein the smaller the hot word path probability, the larger the adaptive adjustment coefficient corresponding to that hot word path probability.
  • for example, suppose the hot word path probabilities of the two participles "jia" and "li" of the hot word "jiali" are P_1 and P_2 respectively, and the adaptive adjustment coefficients corresponding to the two hot word path probabilities are a and b. Then the adaptive excitation probability of the participle "jia" is a · P_1, and the adaptive excitation probability of the participle "li" is b · P_2.
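The adjustment just described can be sketched in a few lines of Python. The published coefficient formula appears only as an image in the patent, so the form (1/P) ** scale used below is an assumption inferred from the description of inverting the probability and smoothing it exponentially; the probability values are also made up:

```python
# Hypothetical sketch of the adaptive adjustment. The coefficient form
# (1/P) ** scale is an assumption inferred from the surrounding text,
# not the patent's literal formula.

def adaptive_excitation_probability(path_prob, scale):
    coefficient = (1.0 / path_prob) ** scale   # invert, then smooth
    return coefficient * path_prob             # equals path_prob ** (1 - scale)

# Participles "jia" and "li" of the hot word "jiali" with toy probabilities:
p1, p2 = 0.01, 0.0001
scale = 0.5
a = (1.0 / p1) ** scale    # adaptive adjustment coefficient for "jia": 10
b = (1.0 / p2) ** scale    # coefficient for the rarer "li" is larger: 100
print(a * p1, b * p2)      # boosted probabilities
```

With scale = 0.5, the rarer participle "li" receives a coefficient of 100 versus 10 for "jia", matching the rule that a smaller hot word path probability gets a larger adjustment coefficient.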
  • the hot word path probability of each participle can be calculated by the following formulas:

P_1 = P(W_1 | <s>)
P_i = P(W_i | <s> W_1 W_2 … W_(i-1)), i ≥ 2

  • where P_1 represents the hot word path probability of the first participle W_1 in the hot word; <s> represents the beginning of the hot word, which is a virtual concept that does not need a specific symbol to correspond to it, since the symbol <s> can simply be added at the beginning of a word; and P_i represents the hot word path probability of the i-th participle W_i in the hot word, i ≥ 2.
  • for example, the hot word path probability of the participle "jia" is P_1 = P(jia | <s>), and the hot word path probability of the participle "li" is P_2 = P(li | <s> jia).
  • the adaptive adjustment coefficient corresponding to each hot word path probability takes, for example, the form:

coefficient_i = (1 / P_i)^scale

  • where coefficient_i represents the adaptive adjustment coefficient corresponding to the hot word path probability P_i of the i-th participle W_i in the hot word, i ≥ 1; scale represents a scaling factor in the range of 0 to 1, whose value can be adjusted according to the actual situation.
  • for example, the adaptive adjustment coefficient corresponding to the hot word path probability of the participle "jia" is coefficient_1 = (1 / P_1)^scale, and so on for the others.
  • the principle by which the adaptive adjustment coefficient adjusts the hot word path probability is as follows: first, the hot word path probability of the participle is inverted, and then a smoothing function (such as an exponential function) is applied for smoothing, so as to achieve a smooth adaptive adjustment of the participle's hot word path probability.
  • since scale is a scaling factor in the range of 0 to 1, the closer scale is to 1, the more the probability is excited, and the closer scale is to 0, the less the probability is excited; the excited probability will not exceed 1. If scale is set to 1, the probability of each participle in the hot word is raised to its maximum of 1, and in this case the negative-log cost of the hot word is 0.
  • FIG. 2 shows a schematic diagram of the adaptive excitation probability of a participle under different scale values. In practical applications, the scale value can be set according to the actual situation.
  • in this way, the hot word path probability of each participle can be intervened in unequal proportions; that is, the smaller the hot word path probability of a participle, the greater the degree to which it is raised. For example, suppose that P(W_1 | <s>) = 0.01 and P(W_2 | <s> W_1) = 0.0001, which means that the participle W_2 is a relatively rare word, and its hot word path probability needs to be intervened with a larger coefficient, that is, raised to a greater extent.
  • next, the adaptive excitation probabilities of all the participles can be multiplied together (step 123).
  • then, the multiplied value is converted into the hot word excitation score (step 124). For example, negative log processing is performed on the multiplied value to obtain the hot word excitation score.
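Steps 121 to 124 can be strung together as one hedged sketch; the segmentation, the probabilities, and the coefficient form (1/P) ** scale are illustrative assumptions, while the pipeline itself (segment, adjust, multiply, take the negative log) follows the text:

```python
import math

# Illustrative sketch of steps 121-124; the probability values and the
# coefficient form (1/P) ** scale are assumptions for demonstration.

def hot_word_excitation_score(path_probs, scale=0.5):
    # Step 122: adaptive excitation probability of each participle.
    boosted = [((1.0 / p) ** scale) * p for p in path_probs]
    # Step 123: multiply the adaptive excitation probabilities together.
    product = math.prod(boosted)
    # Step 124: negative log processing converts the product into a score.
    return -math.log(product)

# Hot word "jiali" segmented into "jia" and "li" with toy probabilities:
score = hot_word_excitation_score([0.01, 0.0001])
print(round(score, 3))   # -ln(0.1 * 0.01) = -ln(0.001), about 6.908
```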
  • in this way, the hot word path probability of each participle can be adaptively adjusted by its adaptive adjustment coefficient to obtain the adaptive excitation probability, and the adaptive excitation probabilities of all the participles are then multiplied together and converted into the hot word excitation score, so as to obtain the excitation score used to intervene in the recognition of the hot word.
  • in some embodiments, the calculation algorithm of the hot word excitation score described above can be built into the speech recognition decoder, so that in the process of speech decoding, the decoder can use this calculation algorithm to automatically calculate the hot word excitation score of each hot word along the decoding paths.
  • alternatively, a finite-state transducer (FST) composition algorithm can be used to form an FST graph from the hot word excitation scores calculated by the calculation algorithm described above, and the FST graph can then be plugged into the speech recognition process.
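As a rough stand-in for such an FST graph (the composition algorithm itself is not detailed in the text), a trie whose edges are participles and whose edge weights are per-participle negative-log adaptive excitation probabilities conveys the idea: walking a hot word through the trie and summing the edge weights reproduces its excitation score. All names and values here are illustrative:

```python
import math

# Illustrative trie stand-in for the hot word FST graph. Edge weights are
# the negative-log adaptive excitation probabilities of the participles
# (the coefficient form (1/P) ** scale is an assumption, as above).

def build_hot_word_trie(hot_words, scale=0.5):
    root = {}
    for participles in hot_words:          # e.g. [("jia", 0.01), ("li", 0.0001)]
        node = root
        for participle, p in participles:
            boosted = ((1.0 / p) ** scale) * p
            edge = node.setdefault(participle,
                                   {"weight": -math.log(boosted), "next": {}})
            node = edge["next"]
    return root

def lookup_score(root, words):
    node, total = root, 0.0
    for w in words:
        if w not in node:
            return None                    # not a hot word prefix
        total += node[w]["weight"]
        node = node[w]["next"]
    return total

trie = build_hot_word_trie([[("jia", 0.01), ("li", 0.0001)]])
print(lookup_score(trie, ["jia", "li"]))   # same value as the direct product
```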
  • in step 13, the hot word is identified based on the difference between the path score and the hot word excitation score.
  • for example, the hot word excitation score on each decoding path is subtracted from the path score of that decoding path to obtain a difference, and the hot word on the decoding path with the smallest difference is taken as the hot word recognition result.
  • for example, suppose the voice information input by the user is the name "jia li". The decoding result on the first decoding path is the homophonous word meaning "home", its path score is 70, and its hot word excitation score is 10.
  • the decoding result on the second decoding path is "Jia Li" (a different homophonous word), its path score is 80, and its hot word excitation score is 25.
  • the decoding result on the third decoding path is the hot word "Jia Li", its path score is 90, and its hot word excitation score is 60. Since the difference between the path score and the hot word excitation score of the third decoding path (90 − 60 = 30) is the smallest, the final hot word recognition result is "Jia Li".
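With the example numbers above, step 13 reduces to picking the path whose difference (path score minus excitation score) is smallest; a minimal sketch:

```python
# Step 13 with the example numbers from the text: subtract each path's hot
# word excitation score from its path score and keep the smallest difference.

paths = [
    ("home (homophone)",   70, 10),  # (decoding result, path score, excitation score)
    ("Jia Li (homophone)", 80, 25),
    ("Jia Li",             90, 60),  # the hot word
]
result, path_score, excitation = min(paths, key=lambda p: p[1] - p[2])
print(result)   # differences are 60, 55 and 30, so "Jia Li" wins
```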
  • in the above method, the path score of each decoding path is obtained, the hot word excitation score of the hot word on each decoding path is then obtained, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word, and finally the hot word is identified based on the difference between the path score and the hot word excitation score. In this way, the hot word can be adaptively excited through its adaptive excitation probability, which improves the robustness of the hot word recognition intervention and greatly improves the accuracy of hot word recognition. Moreover, the method is suitable for any hot word recognition intervention and will not cause the path score of a path containing the hot word to become abnormally small, so it will not increase the false recall rate of the hot word.
  • FIG. 3 is a schematic block diagram of a hot word recognition apparatus according to an embodiment of the present disclosure.
  • the apparatus includes: a path score acquisition module 31 for performing speech decoding on the speech information input by the user to obtain a path score of each decoding path; a hot word excitation score acquisition module 32 for acquiring the hot word excitation score of the hot word on each decoding path, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word; and a hot word recognition module 33 for identifying the hot word based on the difference between the path score and the hot word excitation score.
  • by means of the above apparatus, the path score of each decoding path is obtained, and then the hot word excitation score of the hot word on each decoding path is obtained, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word.
  • the hot word is then recognized based on the difference between the path score and the hot word excitation score.
  • the adaptive excitation improves the robustness of the hot word recognition intervention and greatly improves the accuracy of hot word recognition.
  • moreover, it is suitable for any hot word recognition intervention and will not cause the path score of a path containing the hot word to become abnormally small, so it will not increase the false recall rate of the hot word.
  • acquiring the hot word excitation score of the hot words on each decoding path includes: segmenting the hot words; calculating the adaptive excitation probability of each segmented word; multiplying the adaptive excitation probabilities of all the segmented words; The multiplied value is converted into a hot word incentive score.
  • optionally, the obtaining the hot word excitation score of the hot word on each decoding path includes: obtaining, from a finite-state transducer (FST) graph about hot words formed by an FST composition algorithm, the hot word excitation score of the hot word on each decoding path;
  • wherein the FST graph is pre-constructed in the following manner: segmenting the hot word; calculating the adaptive excitation probability of each participle; multiplying the adaptive excitation probabilities of all the participles; converting the multiplied value into the hot word excitation score; and constructing, through the FST composition algorithm, the FST graph about hot words by using the hot word excitation scores.
  • optionally, the calculating the adaptive excitation probability of each participle includes: calculating the hot word path probability of each participle; and adaptively adjusting each hot word path probability by using the adaptive adjustment coefficient corresponding to it, to obtain the adaptive excitation probability of each participle, wherein the smaller the hot word path probability, the larger the adaptive adjustment coefficient corresponding to that hot word path probability.
  • optionally, the calculation of the hot word path probability of each participle is realized by the following formulas:

P_1 = P(W_1 | <s>)
P_i = P(W_i | <s> W_1 W_2 … W_(i-1)), i ≥ 2

  • where P_1 represents the hot word path probability of the first participle W_1 in the hot word; <s> represents the beginning of the hot word; and P_i represents the hot word path probability of the i-th participle W_i in the hot word, i ≥ 2.
  • optionally, the adaptive adjustment coefficient corresponding to each hot word path probability takes, for example, the form:

coefficient_i = (1 / P_i)^scale

  • where coefficient_i represents the adaptive adjustment coefficient corresponding to the hot word path probability P_i of the i-th participle W_i in the hot word, i ≥ 1, and scale represents a scaling factor in the range of 0 to 1.
  • optionally, the converting the multiplied value into the hot word excitation score includes: performing negative log processing on the multiplied value to obtain the hot word excitation score.
  • identifying the hot word based on the difference between the path score and the hot word excitation score includes: taking the hot word on the decoding path with the smallest difference as the hot word recognition result.
  • the specific manner in which each module performs operations has been described in detail in the embodiments of the related method, and will not be described again here.
  • the division of the above-mentioned modules does not limit the specific implementation manner, and the above-mentioned various modules may be implemented by, for example, software, hardware, or a combination of software and hardware.
  • the above-mentioned modules may be implemented as independent physical entities, or may also be implemented by a single entity (eg, a processor (CPU or DSP, etc.), an integrated circuit, etc.).
  • although the respective modules are shown as separate modules in the figures, one or more of these modules may be combined into one module or split into multiple modules.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (eg, mobile terminals such as in-vehicle navigation terminals), etc., and stationary terminals such as digital TVs, desktop computers, and the like.
  • the electronic device shown in FIG. 4 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • an electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 are also stored.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • an input/output (I/O) interface 605 is also connected to the bus 604.
  • the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 4 shows the electronic device 600 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • Embodiments of the present disclosure also include a computer program product, comprising: instructions, when executed by a processing device, the instructions implement the steps of the hot word recognition method of the embodiments of the present disclosure.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 609, or from the storage device 608, or from the ROM 602.
  • when the computer program is executed by the processing apparatus 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any network currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: performs speech decoding on the speech information input by the user to obtain the path score of each decoding path; obtains the hot word excitation score of the hot word on each decoding path, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word; and identifies the hot word based on the difference between the path score and the hot word excitation score.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, under certain circumstances, constitute a limitation on the module itself.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • exemplary embodiment 1 provides a hot word recognition method, including: performing speech decoding on speech information input by a user to obtain a path score of each decoding path; obtaining the hot word excitation score of the hot word on each decoding path, wherein the hot word excitation score is related to the adaptive excitation probability of each participle in the hot word; and identifying the hot word based on the difference between the path score and the hot word excitation score.
  • exemplary embodiment 2 provides the method of exemplary embodiment 1, wherein the obtaining the hot word excitation score of the hot word on each decoding path includes: segmenting the hot word; calculating the adaptive excitation probability of each participle; multiplying the adaptive excitation probabilities of all the participles; and converting the multiplied value into the hot word excitation score.
  • exemplary embodiment 3 provides the method of exemplary embodiment 1, wherein the obtaining the hot word excitation score of the hot word includes: obtaining, from a finite-state transducer (FST) graph about hot words formed by an FST composition algorithm, the hot word excitation score of the hot word on each decoding path;
  • wherein the FST graph is pre-constructed in the following manner: segmenting the hot word; calculating the adaptive excitation probability of each participle; multiplying the adaptive excitation probabilities of all the participles; converting the multiplied value into the hot word excitation score; and constructing, through the FST composition algorithm, the FST graph about hot words by using the hot word excitation score.
  • exemplary embodiment 4 provides the method of exemplary embodiment 2 or 3, wherein the calculating the adaptive excitation probability of each participle includes: calculating the hot word path probability of each participle; and adaptively adjusting each hot word path probability by using the adaptive adjustment coefficient corresponding to it, to obtain the adaptive excitation probability of each participle, wherein the smaller the hot word path probability, the larger the adaptive adjustment coefficient corresponding to that hot word path probability.
  • Exemplary Embodiment 5 provides the method of Exemplary Embodiment 4, wherein the calculating the hot word path probability of each participle is implemented by the following formula:
  • P 1 represents the hot word path probability of the first participle W 1 in the hot word
  • ⁇ s> represents the beginning of the hot word
  • P i represents the i -th participle Wi in the hot word.
  • exemplary embodiment 6 provides the method of exemplary embodiment 4, wherein the adaptive adjustment coefficient corresponding to each of the hot word path probabilities is:
  • the adaptive adjustment coefficient i represents the adaptive adjustment coefficient corresponding to the hot word path probability P i of the i -th participle Wi in the hot words, i ⁇ 1; scale represents the scaling factor located in the range of 0 to 1 .
  • Exemplary Embodiment 7 provides the method of Exemplary Embodiment 2 or 3, wherein the converting the multiplied numerical value into the hot word excitation score includes: pairing The value obtained by multiplication is subjected to negative log processing to obtain the hot word incentive score.
  • exemplary embodiment 8 provides the method of exemplary embodiment 1, wherein the The word recognition includes: taking the hot word on the decoding path with the smallest difference as the hot word recognition result.


Abstract

A hot word recognition method and apparatus, a medium, and an electronic device. The method includes: performing speech decoding on speech information input by a user to obtain a path score for each decoding path (11); obtaining a hot word excitation score for the hot word on each decoding path (12), where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and recognizing the hot word based on the difference between the path score and the hot word excitation score (13). The hot word recognition method and apparatus, medium, and electronic device can improve the speech recognition accuracy for hot words.

Description

Hot word recognition method and apparatus, medium, and electronic device
This application is based on, and claims priority to, the Chinese application with application number 202011529691.6 filed on December 22, 2020, the disclosure of which is incorporated herein by reference in its entirety.
Technical field
The present disclosure relates to the field of speech recognition, and in particular to a hot word recognition method, apparatus, medium, computer program product, and electronic device.
Background
In related technologies, probability intervention on hot words is usually an empirical probability boost applied to the hot words that appear.
Summary
This summary is provided to introduce concepts in a brief form that are described in detail in the detailed description that follows. This summary is not intended to identify key or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.
In a first aspect, the present disclosure provides a hot word recognition method, including: performing speech decoding on speech information input by a user to obtain a path score for each decoding path; obtaining a hot word excitation score for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and recognizing the hot word based on the difference between the path score and the hot word excitation score.
In a second aspect, the present disclosure provides a hot word recognition apparatus, including: a path score acquisition module configured to perform speech decoding on speech information input by a user to obtain a path score for each decoding path; a hot word excitation score acquisition module configured to obtain a hot word excitation score for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and a hot word recognition module configured to recognize the hot word based on the difference between the path score and the hot word excitation score.
In a third aspect, the present disclosure provides a computer-readable medium storing a computer program that, when executed by a processing device, implements the steps of the method of the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device, including: a storage device storing a computer program; and a processing device configured to execute the computer program in the storage device to implement the steps of the method of the first aspect of the present disclosure.
In a fifth aspect, the present disclosure provides a computer program product, including instructions that, when executed by a processing device, implement the steps of the method of the first aspect of the present disclosure.
Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
Brief description of the drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:
Fig. 1A is a flowchart of a hot word recognition method according to an embodiment of the present disclosure.
Fig. 1B is a flowchart of an example method of obtaining a hot word excitation score according to some embodiments.
Fig. 1C is a flowchart of an example method of calculating adaptive excitation probabilities according to some embodiments.
Fig. 2 is a schematic diagram of the adaptive excitation probabilities of word segments under different values of the scaling factor scale.
Fig. 3 is a schematic block diagram of a hot word recognition apparatus according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this regard.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other terms will be given in the description below.
Note that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, and are not used to limit the order of the functions performed by these devices, modules, or units, or their interdependence.
Note that the modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
Related methods of probability intervention on hot words often produce side effects. For example, because some common words occur frequently, the path scores of decoding paths containing such hot words become abnormally small; the recall rate of hot words rises, but the misrecognition rate increases greatly.
Fig. 1A is a flowchart of a hot word recognition method according to an embodiment of the present disclosure. The method can be applied to any electronic device with processing capability. As shown in Fig. 1A, the method includes the following steps 11 to 13.
In step 11, speech decoding is performed on the speech information input by the user to obtain a path score for each decoding path.
Generally, before speech recognition is performed, all speech-level information is combined in advance into a language model. Speech decoding is then carried out in the decoding space formed by the language model; for example, the speech information input by the user is traversed over each decoding path in the decoding space, thereby obtaining a path score for each decoding path.
The present disclosure places no restriction on the construction of the language model, the construction of the decoding space, the way the path score of a decoding path is calculated, and so on.
In step 12, a hot word excitation score is obtained for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word.
In the present disclosure, a hot word refers to a word that needs to be recognized more easily than other words in speech recognition. For example, if the personal name "佳立" is determined to be a hot word, then in speech recognition, when the speech "jia li" for a personal name is encountered, it needs to be recognized more easily as "佳立" rather than "佳丽".
In the present disclosure, the hot word excitation score refers to intervening in the recognition probability of a hot word so that the recognition probability of the hot word is enhanced, improving the probability that the hot word is correctly recognized.
In the present disclosure, the adaptive excitation probability means that the excitation probability is adjusted adaptively rather than being fixed.
Fig. 1B shows a flowchart of an example method of obtaining a hot word excitation score according to some embodiments. In one example, the hot word excitation score of the hot word on each decoding path can be obtained as follows.
First, the hot word is segmented (step 121). That is, according to the word-composition rules of the language, the hot word is divided into individual words of the smallest unit. For example, the hot word "佳立" is divided after segmentation into the following two word segments: "佳" and "立".
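The segmentation step above can be sketched in a few lines of code. This is an illustrative Python sketch rather than the patent's implementation: for a hot word such as "佳立" whose smallest units are single characters, segmentation reduces to splitting into characters, while a real system would apply the language's word-composition rules.

```python
def segment(hot_word):
    # Illustrative only: split the hot word into smallest-unit segments.
    # For the example hot word "佳立" this reduces to per-character
    # splitting; real systems would use proper word-composition rules.
    return list(hot_word)

print(segment("佳立"))  # → ['佳', '立']
```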
Then, the adaptive excitation probability of each word segment is calculated (step 122).
Fig. 1C is a flowchart of an example method of calculating adaptive excitation probabilities according to some embodiments. The adaptive excitation probability can be calculated as follows. First, the hot word path probability of each word segment is calculated (step 1221). Then, each hot word path probability is adaptively adjusted using its corresponding adaptive adjustment coefficient to obtain the adaptive excitation probability of each word segment (step 1222), where the smaller the hot word path probability, the larger its corresponding adaptive adjustment coefficient.
Take the hot word "佳立" as an example again. Suppose the hot word path probabilities of the two word segments "佳" and "立" are P_1 and P_2 respectively, and the adaptive adjustment coefficients corresponding to these two hot word path probabilities are a and b respectively. Then the adaptive excitation probability of the word segment "佳" is a×P_1, and the adaptive excitation probability of the word segment "立" is b×P_2.
The hot word path probability of each word segment can be calculated by the following formulas:
P_1 = P(W_1 | <s>)         (1)
P_i = P(W_i | <s> W_1 W_2 ... W_{i-1})       (2)
where P_1 denotes the hot word path probability of the first word segment W_1 in the hot word; <s> denotes the beginning of the hot word (this is a virtual concept that needs no concrete symbol to correspond to: as long as something is the beginning of a word, the symbol <s> can be prepended); and P_i denotes the hot word path probability of the i-th word segment W_i in the hot word, i ≥ 2.
Take the hot word "佳立" as an example again. The hot word path probability of the word segment "佳" is P_1 = P(佳 | <s>), and the hot word path probability of the word segment "立" is P_2 = P(立 | <s> 佳).
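The chain-style path probabilities of formulas (1) and (2) can be sketched as follows. This is a hedged illustration: the conditional-probability table is a hypothetical stand-in for a language model, reusing the example values 0.01 and 0.0001 mentioned elsewhere in this description.

```python
# Hypothetical conditional probabilities standing in for a language model;
# the values reuse the 0.01 / 0.0001 example given in this description.
cond_prob = {
    ("<s>", "佳"): 0.01,       # P(佳 | <s>)
    ("<s> 佳", "立"): 0.0001,  # P(立 | <s> 佳)
}

def hot_word_path_probs(segments):
    # P_1 = P(W_1 | <s>); P_i = P(W_i | <s> W_1 ... W_{i-1})
    probs, history = [], "<s>"
    for seg in segments:
        probs.append(cond_prob[(history, seg)])
        history += " " + seg
    return probs

print(hot_word_path_probs(["佳", "立"]))  # → [0.01, 0.0001]
```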
The adaptive adjustment coefficient corresponding to each hot word path probability is:
Figure PCTCN2021132124-appb-000001
where adaptive adjustment coefficient_i denotes the adaptive adjustment coefficient corresponding to the hot word path probability P_i of the i-th word segment W_i in the hot word, i ≥ 1; scale denotes a scaling factor in the range of 0 to 1, whose value can be adjusted according to the actual situation.
Take the hot word "佳立" as an example again. The adaptive adjustment coefficient corresponding to the hot word path probability of the word segment "佳" is
Figure PCTCN2021132124-appb-000002
The others follow by analogy.
In addition, in formula (3), the principle by which the adaptive adjustment coefficient adaptively adjusts the hot word path probability is: first invert the hot word path probability of the word segment, and then smooth it with a smoothing function (for example, an exponential function), thereby achieving smooth adaptive adjustment of the hot word path probability of the word segment. Since scale is a scaling factor in the range of 0 to 1, it can be seen that the closer scale is to 0, the more the probability is excited, and the closer scale is to 1, the less the probability is excited, never exceeding 1; if scale is set to 1, the probability of every word segment in the hot word is maximized to 1, in which case the negative-log cost of the hot word is 0. Fig. 2 shows a schematic diagram of the adaptive excitation probabilities of word segments under different values of scale. In practical applications, the value of scale can be set according to the actual situation.
With the adaptive adjustment coefficient configured in this way, a proportional probability intervention can be applied to the hot word path probability of each word segment; that is, the smaller the hot word path probability of a word segment, the greater the degree to which that probability is boosted. For example, suppose P(W_1|<s>) = 0.01 and P(W_2|<s>W_1) = 0.0001; this means that the word segment W_2 is a relatively rare word, and its hot word path probability needs a larger probability intervention, i.e., it needs to be boosted to a greater degree.
Then, after the adaptive excitation probability of each word segment is obtained, the adaptive excitation probabilities of all word segments can be multiplied together (step 123).
Then, the resulting product is converted into the hot word excitation score (step 124). For example, negative-log processing is applied to the product to obtain the hot word excitation score.
With this configuration, the adaptive adjustment coefficient of each word segment can be used to adaptively adjust the hot word path probability of each word segment to obtain the adaptive excitation probability; the adaptive excitation probabilities of the word segments are then multiplied and converted into the hot word excitation score, yielding an excitation score that intervenes in the recognition of the hot word.
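The pipeline of steps 122 to 124 can be sketched as follows. One loud assumption: the patent's exact coefficient formula (3) is an embedded image not reproduced in this text, so `path_prob ** scale` below is only a hypothetical adjustment with the same qualitative behavior described above (smaller probabilities are boosted by a larger factor, and results stay at or below 1 for scale in (0, 1]).

```python
import math

def adaptive_excitation_prob(path_prob, scale=0.5):
    # HYPOTHETICAL adjustment: the exact coefficient formula (3) is an
    # embedded image not reproduced here; path_prob ** scale is a stand-in
    # that merely mimics the described behavior (smaller path probabilities
    # receive proportionally larger boosts, results never exceed 1).
    return path_prob ** scale

def hot_word_excitation_score(path_probs, scale=0.5):
    # Steps 122-124: adjust each segment probability, multiply them all,
    # then convert the product with a negative log.
    product = 1.0
    for p in path_probs:
        product *= adaptive_excitation_prob(p, scale)
    return -math.log(product)

print(round(hot_word_excitation_score([0.01, 0.0001]), 4))  # → 6.9078
```

With scale = 0.5 the two example probabilities become 0.1 and 0.01; their product is 0.001, and -ln(0.001) ≈ 6.9078 is the excitation score.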
In one embodiment, the calculation algorithm for the hot word excitation score described above can be built into the speech recognition decoder, so that during speech decoding the decoder can use this algorithm to automatically calculate the hot word excitation scores of the hot words on each decoding path.
In yet another embodiment, a finite-state transducer (FST) graph construction algorithm can be used to form the hot word excitation scores computed with the calculation algorithm described above into an FST graph, which is then attached externally to the speech recognition decoder. In this way, during speech decoding, the hot word excitation scores of the hot words on each decoding path can be obtained automatically from the FST graph, without computing them again with the calculation algorithm described above, thus saving computation.
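The "precompute once, look up during decoding" idea of the FST variant can be illustrated with a plain mapping. This is only a sketch under that assumption: a production system would build an actual FST graph with an FST toolkit and attach it to the decoder, which is not shown here; the scores are the ones from the worked example in this description.

```python
def build_score_table(hot_word_scores):
    # Stand-in for the prebuilt FST graph: maps each decoding result to its
    # precomputed excitation score so that decoding only performs lookups
    # instead of recomputing scores.
    return dict(hot_word_scores)

def lookup_excitation(table, word):
    # Results absent from the table receive no excitation (score 0).
    return table.get(word, 0.0)

table = build_score_table({"家里": 10.0, "佳丽": 25.0, "佳立": 60.0})
print(lookup_excitation(table, "佳立"))  # → 60.0
print(lookup_excitation(table, "未知"))  # → 0.0
```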
In step 13, the hot word is recognized based on the difference between the path score and the hot word excitation score.
In one embodiment, the hot word excitation score on each decoding path is subtracted from the path score of that decoding path to obtain a difference, and the hot word on the decoding path with the smallest difference is taken as the hot word recognition result.
For example, the speech information input by the user is the personal name "jia li", and there are three decoding paths in the decoding space. The decoding result on the first decoding path is "家里", with a path score of 70 and a hot word excitation score of 10; the decoding result on the second decoding path is "佳丽", with a path score of 80 and a hot word excitation score of 25; the decoding result on the third decoding path is "佳立", with a path score of 90 and a hot word excitation score of 60. Since the difference between the path score and the hot word excitation score is smallest for the third decoding path, the final hot word recognition result is "佳立".
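The selection rule in this worked example can be checked with a few lines of code, using exactly the scores given above:

```python
# (decoding result, path score, hot word excitation score) for the three
# decoding paths in the worked example.
candidates = [
    ("家里", 70, 10),
    ("佳丽", 80, 25),
    ("佳立", 90, 60),
]

def recognize(candidates):
    # Step 13: choose the decoding path whose difference
    # (path score - hot word excitation score) is smallest.
    return min(candidates, key=lambda c: c[1] - c[2])[0]

print(recognize(candidates))  # → 佳立
```

The three differences are 60, 55, and 30, so the path with difference 30 wins and "佳立" is returned.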
With the above technical solution, speech decoding is first performed on the speech information input by the user to obtain a path score for each decoding path; a hot word excitation score is then obtained for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and finally the hot word is recognized based on the difference between the path score and the hot word excitation score. In this way, the adaptive excitation probability of the hot word can be used to adaptively excite hot word recognition, improving the robustness of the hot word recognition intervention and greatly improving the accuracy of hot word recognition. Moreover, the solution is applicable to the recognition intervention of any hot word and does not cause the path scores of paths containing hot words to become abnormally small, and therefore does not inflate the recall rate of hot words.
Fig. 3 is a schematic block diagram of a hot word recognition apparatus according to an embodiment of the present disclosure. As shown in Fig. 3, the apparatus includes: a path score acquisition module 31 configured to perform speech decoding on speech information input by a user to obtain a path score for each decoding path; a hot word excitation score acquisition module 32 configured to obtain a hot word excitation score for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and a hot word recognition module 33 configured to recognize the hot word based on the difference between the path score and the hot word excitation score.
With the above technical solution, speech decoding is first performed on the speech information input by the user to obtain a path score for each decoding path; a hot word excitation score is then obtained for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and finally the hot word is recognized based on the difference between the path score and the hot word excitation score. In this way, the adaptive excitation probability of the hot word can be used to adaptively excite hot word recognition, improving the robustness of the hot word recognition intervention and greatly improving the accuracy of hot word recognition. Moreover, the solution is applicable to the recognition intervention of any hot word and does not cause the path scores of paths containing hot words to become abnormally small, and therefore does not inflate the recall rate of hot words.
Optionally, obtaining the hot word excitation score of the hot word on each decoding path includes: segmenting the hot word; calculating the adaptive excitation probability of each word segment; multiplying the adaptive excitation probabilities of all word segments; and converting the resulting product into the hot word excitation score.
Optionally, obtaining the hot word excitation score of the hot word on each decoding path includes: obtaining the hot word excitation score of the hot word on each decoding path from a finite-state machine graph of hot words built by a finite-state machine graph construction algorithm;
where the finite-state machine graph is pre-built as follows: segmenting the hot word; calculating the adaptive excitation probability of each word segment; multiplying the adaptive excitation probabilities of all word segments; converting the resulting product into the hot word excitation score; and building, by the finite-state machine graph construction algorithm, a finite-state machine graph of the hot word using the hot word excitation score.
Optionally, calculating the adaptive excitation probability of each word segment includes: calculating the hot word path probability of each word segment; and adaptively adjusting each hot word path probability using its corresponding adaptive adjustment coefficient to obtain the adaptive excitation probability of each word segment, where the smaller the hot word path probability, the larger its corresponding adaptive adjustment coefficient.
Optionally, calculating the hot word path probability of each word segment is implemented by the following formulas:
P_1 = P(W_1 | <s>)
P_i = P(W_i | <s> W_1 W_2 ... W_{i-1})
where P_1 denotes the hot word path probability of the first word segment W_1 in the hot word; <s> denotes the beginning of the hot word; and P_i denotes the hot word path probability of the i-th word segment W_i in the hot word, i ≥ 2.
Optionally, the adaptive adjustment coefficient corresponding to each hot word path probability is:
Figure PCTCN2021132124-appb-000003
where adaptive adjustment coefficient_i denotes the adaptive adjustment coefficient corresponding to the hot word path probability P_i of the i-th word segment W_i in the hot word, i ≥ 1; scale denotes a scaling factor in the range of 0 to 1.
Optionally, converting the resulting product into the hot word excitation score includes: applying negative-log processing to the product to obtain the hot word excitation score.
Optionally, recognizing the hot word based on the difference between the path score and the hot word excitation score includes: taking the hot word on the decoding path with the smallest difference as the hot word recognition result.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here. It should be noted that the division of the above modules does not limit specific implementations; the above modules may be implemented, for example, in software, in hardware, or in a combination of software and hardware. In actual implementation, the above modules may be implemented as independent physical entities, or may be implemented by a single entity (for example, a processor (CPU, DSP, etc.) or an integrated circuit). Note that although the modules are shown in the figure as separate modules, one or more of these modules may be combined into one module or split into multiple modules.
Referring now to Fig. 4, it shows a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 4, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; a storage device 608 including, for example, a magnetic tape, hard disk, etc.; and a communication device 609. The communication device 609 can allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 4 shows the electronic device 600 with various devices, it should be understood that implementing or having all the devices shown is not required; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. Embodiments of the present disclosure also include a computer program product including instructions that, when executed by a processing device, implement the steps of the hot word recognition method of the embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product that includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: perform speech decoding on speech information input by a user to obtain a path score for each decoding path; obtain a hot word excitation score for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and recognize the hot word based on the difference between the path score and the hot word excitation score.
Computer program code for carrying out the operations of the present disclosure can be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure can be implemented in software or in hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The functions described herein above can be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, exemplary embodiment 1 provides a hot word recognition method, including: performing speech decoding on speech information input by a user to obtain a path score for each decoding path; obtaining a hot word excitation score for the hot word on each decoding path, where the hot word excitation score is related to the adaptive excitation probability of each word segment in the hot word; and recognizing the hot word based on the difference between the path score and the hot word excitation score.
According to one or more embodiments of the present disclosure, exemplary embodiment 2 provides the method of exemplary embodiment 1, where obtaining the hot word excitation score of the hot word on each decoding path includes: segmenting the hot word; calculating the adaptive excitation probability of each word segment; multiplying the adaptive excitation probabilities of all word segments; and converting the resulting product into the hot word excitation score.
According to one or more embodiments of the present disclosure, exemplary embodiment 3 provides the method of exemplary embodiment 1, where obtaining the hot word excitation score of the hot word includes: obtaining the hot word excitation score of the hot word on each decoding path from a finite-state machine graph of hot words built by a finite-state machine graph construction algorithm;
where the finite-state machine graph is pre-built as follows: segmenting the hot word; calculating the adaptive excitation probability of each word segment; multiplying the adaptive excitation probabilities of all word segments; converting the resulting product into the hot word excitation score; and building, by the finite-state machine graph construction algorithm, a finite-state machine graph of the hot word using the hot word excitation score.
According to one or more embodiments of the present disclosure, exemplary embodiment 4 provides the method of exemplary embodiment 2 or 3, where calculating the adaptive excitation probability of each word segment includes: calculating the hot word path probability of each word segment; and adaptively adjusting each hot word path probability using its corresponding adaptive adjustment coefficient to obtain the adaptive excitation probability of each word segment, where the smaller the hot word path probability, the larger its corresponding adaptive adjustment coefficient.
According to one or more embodiments of the present disclosure, exemplary embodiment 5 provides the method of exemplary embodiment 4, where the hot word path probability of each word segment is calculated by the following formulas:
P_1 = P(W_1 | <s>)
P_i = P(W_i | <s> W_1 W_2 ... W_{i-1})
where P_1 denotes the hot word path probability of the first word segment W_1 in the hot word; <s> denotes the beginning of the hot word; and P_i denotes the hot word path probability of the i-th word segment W_i in the hot word, i ≥ 2.
According to one or more embodiments of the present disclosure, exemplary embodiment 6 provides the method of exemplary embodiment 4, where the adaptive adjustment coefficient corresponding to each hot word path probability is:
Figure PCTCN2021132124-appb-000004
where adaptive adjustment coefficient_i denotes the adaptive adjustment coefficient corresponding to the hot word path probability P_i of the i-th word segment W_i in the hot word, i ≥ 1; scale denotes a scaling factor in the range of 0 to 1.
According to one or more embodiments of the present disclosure, exemplary embodiment 7 provides the method of exemplary embodiment 2 or 3, where converting the resulting product into the hot word excitation score includes: applying negative-log processing to the product to obtain the hot word excitation score.
According to one or more embodiments of the present disclosure, exemplary embodiment 8 provides the method of exemplary embodiment 1, where recognizing the hot word based on the difference between the path score and the hot word excitation score includes: taking the hot word on the decoding path with the smallest difference as the hot word recognition result.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Similarly, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely exemplary forms of implementing the claims. Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.

Claims (12)

  1. A hot word recognition method, comprising:
    performing speech decoding on speech information input by a user to obtain a path score for each decoding path;
    obtaining a hot word excitation score for a hot word on each decoding path, wherein the hot word excitation score is related to an adaptive excitation probability of each word segment in the hot word; and
    recognizing the hot word based on a difference between the path score and the hot word excitation score.
  2. The method according to claim 1, wherein obtaining the hot word excitation score for the hot word on each decoding path comprises:
    segmenting the hot word;
    calculating the adaptive excitation probability of each word segment;
    multiplying the adaptive excitation probabilities of all word segments; and
    converting the resulting product into the hot word excitation score.
  3. The method according to claim 1, wherein obtaining the hot word excitation score for the hot word on each decoding path comprises: obtaining the hot word excitation score of the hot word on each decoding path from a finite-state machine graph of hot words built by a finite-state machine graph construction algorithm;
    wherein the finite-state machine graph is pre-built by:
    segmenting the hot word;
    calculating the adaptive excitation probability of each word segment;
    multiplying the adaptive excitation probabilities of all word segments;
    converting the resulting product into the hot word excitation score; and
    building, by the finite-state machine graph construction algorithm, a finite-state machine graph of the hot word using the hot word excitation score.
  4. The method according to claim 2 or 3, wherein calculating the adaptive excitation probability of each word segment comprises:
    calculating a hot word path probability of each word segment; and
    adaptively adjusting each hot word path probability using its corresponding adaptive adjustment coefficient to obtain the adaptive excitation probability of each word segment, wherein the smaller the hot word path probability, the larger its corresponding adaptive adjustment coefficient.
  5. The method according to claim 4, wherein the hot word path probability of each word segment is calculated by the following formulas:
    P_1 = P(W_1 | <s>)
    P_i = P(W_i | <s> W_1 W_2 ... W_{i-1})
    wherein P_1 denotes the hot word path probability of the first word segment W_1 in the hot word; <s> denotes the beginning of the hot word; and P_i denotes the hot word path probability of the i-th word segment W_i in the hot word, i ≥ 2.
  6. The method according to claim 4, wherein the adaptive adjustment coefficient corresponding to each hot word path probability is:
    Figure PCTCN2021132124-appb-100001
    wherein adaptive adjustment coefficient_i denotes the adaptive adjustment coefficient corresponding to the hot word path probability P_i of the i-th word segment W_i in the hot word, i ≥ 1; scale denotes a scaling factor in the range of 0 to 1.
  7. The method according to any one of claims 2 to 6, wherein converting the resulting product into the hot word excitation score comprises:
    applying negative-log processing to the resulting product to obtain the hot word excitation score.
  8. The method according to any one of claims 1 to 7, wherein recognizing the hot word based on the difference between the path score and the hot word excitation score comprises:
    taking the hot word on the decoding path with the smallest difference as the hot word recognition result.
  9. A hot word recognition apparatus, comprising:
    a path score acquisition module configured to perform speech decoding on speech information input by a user to obtain a path score for each decoding path;
    a hot word excitation score acquisition module configured to obtain a hot word excitation score for a hot word on each decoding path, wherein the hot word excitation score is related to an adaptive excitation probability of each word segment in the hot word; and
    a hot word recognition module configured to recognize the hot word based on a difference between the path score and the hot word excitation score.
  10. A computer-readable medium storing a computer program, wherein the program, when executed by a processing device, implements the steps of the method according to any one of claims 1 to 8.
  11. An electronic device, comprising:
    a storage device storing a computer program; and
    a processing device configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1 to 8.
  12. A computer program product, comprising instructions that, when executed by a processing device, implement the steps of the method according to any one of claims 1 to 8.
PCT/CN2021/132124 2020-12-22 2021-11-22 Hot word recognition method and apparatus, medium, and electronic device WO2022134984A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011529691.6 2020-12-22
CN202011529691.6A CN112634904A (zh) 2020-12-22 2020-12-22 Hot word recognition method and apparatus, medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2022134984A1 true WO2022134984A1 (zh) 2022-06-30

Family

ID=75321222

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/132124 WO2022134984A1 (zh) 2020-12-22 2021-11-22 Hot word recognition method and apparatus, medium, and electronic device

Country Status (2)

Country Link
CN (1) CN112634904A (zh)
WO (1) WO2022134984A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351944A (zh) * 2023-12-06 2024-01-05 科大讯飞股份有限公司 Speech recognition method and apparatus, device, and readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634904A (zh) * 2020-12-22 2021-04-09 北京有竹居网络技术有限公司 热词识别方法、装置、介质和电子设备

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592595A (zh) * 2012-03-19 2012-07-18 安徽科大讯飞信息科技股份有限公司 Speech recognition method and system
US20140188883A1 (en) * 2012-12-28 2014-07-03 Kabushiki Kaisha Toshiba Information extracting server, information extracting client, information extracting method, and information extracting program
US20180330717A1 (en) * 2017-05-11 2018-11-15 International Business Machines Corporation Speech recognition by selecting and refining hot words
CN109885812A (zh) * 2019-01-15 2019-06-14 北京捷通华声科技股份有限公司 Method and apparatus for dynamically adding hot words, and readable storage medium
CN111354347A (zh) * 2018-12-21 2020-06-30 中国科学院声学研究所 Speech recognition method and system based on adaptive hot word weights
CN111462751A (zh) * 2020-03-27 2020-07-28 京东数字科技控股有限公司 Method and apparatus for decoding speech data, computer device, and storage medium
CN111681661A (zh) * 2020-06-08 2020-09-18 北京有竹居网络技术有限公司 Speech recognition method and apparatus, electronic device, and computer-readable medium
CN111968648A (zh) * 2020-08-27 2020-11-20 北京字节跳动网络技术有限公司 Speech recognition method and apparatus, readable medium, and electronic device
CN112634904A (zh) * 2020-12-22 2021-04-09 北京有竹居网络技术有限公司 Hot word recognition method and apparatus, medium, and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903619B (zh) * 2012-12-28 2016-12-28 科大讯飞股份有限公司 Method and system for improving speech recognition accuracy
CN110390093B (zh) * 2018-04-20 2023-08-11 普天信息技术有限公司 Language model building method and apparatus
CN110378346B (zh) * 2019-06-14 2021-12-24 北京百度网讯科技有限公司 Method, apparatus, device, and computer storage medium for building a character recognition model
CN111583909B (zh) * 2020-05-18 2024-04-12 科大讯飞股份有限公司 Speech recognition method, apparatus, device, and storage medium
CN111402895B (zh) * 2020-06-08 2020-10-02 腾讯科技(深圳)有限公司 Speech processing and speech evaluation method and apparatus, computer device, and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351944A (zh) * 2023-12-06 2024-01-05 科大讯飞股份有限公司 Speech recognition method and apparatus, device, and readable storage medium
CN117351944B (zh) * 2023-12-06 2024-04-12 科大讯飞股份有限公司 Speech recognition method and apparatus, device, and readable storage medium

Also Published As

Publication number Publication date
CN112634904A (zh) 2021-04-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21908983

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21908983

Country of ref document: EP

Kind code of ref document: A1