CN111292741B - Intelligent voice interaction robot - Google Patents
- Publication number
- CN111292741B CN111292741B CN201911410485.0A CN201911410485A CN111292741B CN 111292741 B CN111292741 B CN 111292741B CN 201911410485 A CN201911410485 A CN 201911410485A CN 111292741 B CN111292741 B CN 111292741B
- Authority
- CN
- China
- Prior art keywords
- information
- input
- voice
- vector
- fuzzy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Machine Translation (AREA)
Abstract
The invention discloses an intelligent voice interaction robot comprising an intelligent interaction system, wherein the intelligent interaction system comprises a user profile module, a database module, a voice acquisition module, a data processing module and an execution module. Instruction information in a voice interaction lexicon and the user's spoken input are decomposed into word vectors by part of speech; a convolutional neural network then computes a similarity value between the instruction information and the input information, and corresponding feedback information is provided to the user according to the similarity value. Input information that cannot be matched directly undergoes fuzzy-sound conversion and is then matched against the instruction information again, which improves the recognition efficiency for dialect speech.
Description
Technical Field
The invention relates to an intelligent voice interaction robot.
Background
With the rapid development of science and technology, intelligent robots have entered many industries, bringing great convenience to production and daily life and freeing people from heavy, repetitive work. Speech recognition is increasingly used to control intelligent robots, and current robot speech recognition systems achieve high recognition accuracy. However, because some users' pronunciation differs from standard Mandarin due to dialects, the recognition efficiency of such systems is easily degraded.
Disclosure of Invention
The invention aims to provide an intelligent voice interaction robot that solves the problem of low speech recognition efficiency in existing robots caused by dialectal pronunciation differences.
In order to solve the above technical problem, the present invention provides an intelligent voice interaction robot comprising an intelligent interaction system, wherein the intelligent interaction system comprises:
a user profile module, used for acquiring user registration information and voice information to establish a user profile;
a database module, used for constructing a robot voice interaction lexicon from general robot instruction information, matching corresponding feedback information to each piece of instruction information in the lexicon, and segmenting each piece of instruction information according to a part-of-speech classification standard to obtain a plurality of reference word vectors;
a voice acquisition module, used for acquiring the voice information of a registered user and preprocessing it to obtain standardized voice input information;
a data processing module, used for segmenting the standardized voice input information according to the part-of-speech classification standard to obtain a plurality of input word vectors, calculating similarity values between the input word vectors and the reference word vectors of each piece of instruction information in the lexicon using a convolutional neural network, and providing the corresponding feedback information to the user according to the similarity values;
and an execution module, used for executing the feedback task according to the feedback information output by the data processing module.
Further, the method for standardizing the acquired voice information to obtain the standardized voice input information comprises the following steps:
step a: judging whether the input information acquired by the voice acquisition module contains dialect; if so, converting the dialect into standard Mandarin and then converting the whole input into text; otherwise, converting the input directly into text;
step b: judging whether the standard Mandarin text contains foreign-language content; if so, translating the foreign-language content into Chinese and then outputting the standardized voice input information; otherwise, directly outputting the text obtained in step a as the standardized voice input information.
Further, when the maximum similarity value between the input information and the instruction information is greater than or equal to a maximum threshold, the feedback information matched to the instruction information with the maximum similarity value is output directly. When the maximum similarity value is smaller than the maximum threshold but greater than a minimum threshold, the reference word vectors of the instruction information with the maximum similarity value are extracted; fuzzy-sound conversion is applied to each input word vector that differs from the extracted reference word vectors, yielding fuzzy input information composed of fuzzy input word vectors; the maximum fuzzy similarity value between the fuzzy input word vectors and each piece of instruction information is then calculated, and when it is greater than or equal to the maximum threshold, the feedback information matched to the corresponding instruction information is output. Otherwise, feedback indicating that the input information is invalid is output.
Further, when the maximum fuzzy similarity value between the fuzzy input word vectors of the fuzzy input information and a piece of instruction information is greater than or equal to the maximum threshold, the current fuzzy input information replaces that piece of instruction information in the voice interaction lexicon.
Further, the plurality of reference word vectors contained in each instruction in the voice interaction lexicon form a sequence X, which can be represented as:

X = (A_n, B_v, C_a, D_num, E_pron, F_com, G_emp)

where A_n is the noun reference vector, B_v the verb reference vector, C_a the adjective reference vector, D_num the numeral reference vector, E_pron the quantifier reference vector, F_com the pronoun reference vector, and G_emp the particle reference vector.
Further, the input information comprises a plurality of input word vectors that form a sequence Y with the same part-of-speech structure as X:

Y = (A'_n, B'_v, C'_a, D'_num, E'_pron, F'_com, G'_emp)

where A'_n is the noun input vector, B'_v the verb input vector, C'_a the adjective input vector, D'_num the numeral input vector, E'_pron the quantifier input vector, F'_com the pronoun input vector, and G'_emp the particle input vector.
The beneficial effects of the invention are as follows: the robot decomposes the instruction information in the voice interaction lexicon and the user's spoken input into word vectors by part of speech; a convolutional neural network then computes similarity values between the instruction information and the input, and corresponding feedback information is provided to the user according to these values. Input that cannot be matched directly undergoes fuzzy-sound conversion and is matched against the instruction information again, which improves the recognition efficiency for dialect speech.
Drawings
FIG. 1 is a schematic block diagram of one embodiment of the present invention.
Detailed Description
The intelligent voice interaction robot shown in FIG. 1 comprises an intelligent interaction system, which comprises a user profile module, a database module, a voice acquisition module, a data processing module and an execution module.
The user profile module acquires user registration information and voice information to establish a user profile. The database module constructs a robot voice interaction lexicon from general robot instruction information, matches corresponding feedback information to each piece of instruction information in the lexicon, and segments each piece of instruction information according to a part-of-speech classification standard to obtain a plurality of reference word vectors. The voice acquisition module acquires the voice information of a registered user and preprocesses it to obtain standardized voice input information. The data processing module segments the standardized voice input information according to the part-of-speech classification standard to obtain a plurality of input word vectors, calculates similarity values between the input word vectors and the reference word vectors of each piece of instruction information using a convolutional neural network, and provides the corresponding feedback information to the user according to the similarity values. The execution module executes the feedback task according to the feedback information output by the data processing module.
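The cooperation of the five modules can be sketched as a minimal pipeline. The class below is an illustrative assumption, not the patent's implementation: the method names are invented, and an exact-string lookup stands in for the word-vector similarity matching described later.

```python
class InteractionSystem:
    """Minimal sketch of the five cooperating modules (names assumed)."""

    def __init__(self, lexicon):
        # database module: maps each instruction to its feedback information
        self.lexicon = lexicon

    def acquire(self, raw_utterance):
        # voice acquisition module: stand-in for ASR plus normalization
        return raw_utterance.strip().lower()

    def process(self, text):
        # data processing module: exact lookup stands in for the
        # CNN word-vector similarity matching described in the text
        return self.lexicon.get(text, "invalid input")

    def execute(self, feedback):
        # execution module: here we simply hand the feedback task back
        return feedback

    def interact(self, raw_utterance):
        return self.execute(self.process(self.acquire(raw_utterance)))
```

A call such as `InteractionSystem({"turn on the light": "light_on"}).interact("Turn On The Light")` walks the utterance through all four stages in order.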
The robot decomposes the instruction information in the voice interaction lexicon and the user's spoken input into word vectors by part of speech; a convolutional neural network then computes similarity values between the instruction information and the input, and corresponding feedback information is provided to the user according to these values. Input that cannot be matched directly undergoes fuzzy-sound conversion and is then matched against the instruction information again, improving the recognition efficiency for dialect speech.
According to an embodiment of the present application, the acquired voice information is standardized into standardized voice input information as follows:
step a: judging whether the input information acquired by the voice acquisition module contains dialect; if so, converting the dialect into standard Mandarin and then converting the whole input into text; otherwise, converting the input directly into text;
step b: judging whether the standard Mandarin text contains foreign-language content; if so, translating the foreign-language content into Chinese and then outputting the standardized voice input information; otherwise, directly outputting the text obtained in step a as the standardized voice input information.
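As a sketch, the two-step normalization can be expressed with dictionary lookups standing in for the real dialect-conversion and translation components; both mappings below are invented examples, not resources the patent specifies.

```python
def normalize_input(utterance_text, dialect_to_mandarin, foreign_to_chinese):
    """Steps a and b: dialect words -> standard Mandarin, then any remaining
    foreign-language words -> Chinese. A real system would use speech and
    translation models; the dictionaries here are illustrative stand-ins."""
    words = utterance_text.split()
    # step a: replace dialect fragments with standard Mandarin equivalents
    words = [dialect_to_mandarin.get(w, w) for w in words]
    # step b: translate foreign-language fragments into Chinese
    words = [foreign_to_chinese.get(w, w) for w in words]
    return " ".join(words)
```

For example, `normalize_input("gimme light on", {"gimme": "give me"}, {"light": "deng"})` applies step a and then step b in order.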
According to one embodiment of the application, when the maximum similarity value between the input information and the instruction information is greater than or equal to a maximum threshold, the feedback information matched to the instruction information with the maximum similarity value is output directly. When the maximum similarity value is smaller than the maximum threshold but greater than a minimum threshold, the reference word vectors of the instruction information with the maximum similarity value are extracted; fuzzy-sound conversion is applied to each input word vector that differs from the extracted reference word vectors, yielding fuzzy input information composed of fuzzy input word vectors; the maximum fuzzy similarity value between the fuzzy input word vectors and each piece of instruction information is then calculated, and when it is greater than or equal to the maximum threshold, the feedback information matched to the corresponding instruction information is output. Otherwise, feedback indicating that the input information is invalid is output.
According to one embodiment of the application, when the maximum fuzzy similarity value between the fuzzy input word vectors of the fuzzy input information and a piece of instruction information is greater than or equal to the maximum threshold, the current fuzzy input information replaces that piece of instruction information in the voice interaction lexicon.
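The two-threshold decision and the fuzzy-sound fallback can be sketched as follows. The threshold values, the fuzzy-pair table, and the position-wise similarity function are all assumptions for illustration; the patent fixes none of them.

```python
T_MAX, T_MIN = 0.9, 0.6   # assumed maximum / minimum thresholds

# A few common Mandarin fuzzy-sound pairs (zh/z, ch/c, sh/s, l/n, ing/in,
# eng/en); illustrative, not exhaustive.
FUZZY_MAP = {"zh": "z", "ch": "c", "sh": "s", "l": "n", "ing": "in", "eng": "en"}

def fuzzify(syllable):
    # collapse each fuzzy pair onto a canonical form
    for a, b in FUZZY_MAP.items():
        syllable = syllable.replace(a, b)
    return syllable

def similarity(a, b):
    # toy similarity: fraction of positions with identical syllables
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def match(input_syllables, lexicon):
    """lexicon: {instruction (tuple of syllables): feedback information}."""
    best = max(lexicon, key=lambda ins: similarity(input_syllables, ins))
    s = similarity(input_syllables, best)
    if s >= T_MAX:                       # direct match
        return lexicon[best]
    if s > T_MIN:                        # fuzzy-sound fallback
        fuzzy = tuple(fuzzify(w) for w in input_syllables)
        for ins, feedback in lexicon.items():
            if similarity(fuzzy, tuple(fuzzify(w) for w in ins)) >= T_MAX:
                return feedback          # (per the embodiment above, the fuzzy
                                         # input could also replace ins here)
    return "invalid input"
```

With this sketch, a dialect pronunciation such as "zi neng deng" for the stored instruction "zhi neng deng" falls between the thresholds, is fuzzified, and then matches.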
According to an embodiment of the present application, the plurality of reference word vectors contained in each instruction in the voice interaction lexicon form a sequence X, which can be represented as:

X = (A_n, B_v, C_a, D_num, E_pron, F_com, G_emp)

where A_n is the noun reference vector, B_v the verb reference vector, C_a the adjective reference vector, D_num the numeral reference vector, E_pron the quantifier reference vector, F_com the pronoun reference vector, and G_emp the particle reference vector.
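Assembling such a part-of-speech-ordered sequence from segmented, tagged words can be sketched as below. The POS tag names and the toy character-based embedding are assumptions; a real system would use a trained word-embedding table.

```python
# fixed part-of-speech order of sequence X:
# A_n (noun), B_v (verb), C_a (adjective), D_num (numeral),
# E_pron (quantifier), F_com (pronoun), G_emp (particle)
POS_ORDER = ["noun", "verb", "adjective", "numeral",
             "quantifier", "pronoun", "particle"]

def embed(word):
    # toy stand-in for a real word-vector lookup
    return [ord(c) % 7 for c in word]

def build_sequence(tagged_words):
    """tagged_words: list of (word, pos_tag) pairs from word segmentation.
    Returns the word vectors grouped in the fixed part-of-speech order."""
    buckets = {pos: [] for pos in POS_ORDER}
    for word, pos in tagged_words:
        if pos in buckets:
            buckets[pos].append(embed(word))
    return [vec for pos in POS_ORDER for vec in buckets[pos]]
```

Both the reference sequence X and the input sequence Y would be built this way, so corresponding positions carry the same part of speech.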
According to an embodiment of the present application, the input information comprises a plurality of input word vectors that form a sequence Y with the same part-of-speech structure as X:

Y = (A'_n, B'_v, C'_a, D'_num, E'_pron, F'_com, G'_emp)

where A'_n is the noun input vector, B'_v the verb input vector, C'_a the adjective input vector, D'_num the numeral input vector, E'_pron the quantifier input vector, F'_com the pronoun input vector, and G'_emp the particle input vector.
The convolutional neural network comprises an attention layer, a convolutional layer, a pooling layer and an output layer. The attention layer takes the input word vectors and the reference word vectors as input; the convolutional layer applies a two-dimensional convolution to the output of the attention layer; the pooling layer pools the output of the convolutional layer; and the output layer uses a softmax function to produce the classification.
Finally, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. An intelligent voice interaction robot, comprising an intelligent interaction system, wherein the intelligent interaction system comprises:
a user profile module, used for acquiring user registration information and voice information to establish a user profile;
a database module, used for constructing a robot voice interaction lexicon from general robot instruction information, matching corresponding feedback information to each piece of instruction information in the lexicon, and segmenting each piece of instruction information according to a part-of-speech classification standard to obtain a plurality of reference word vectors;
a voice acquisition module, used for acquiring the voice information of a registered user and preprocessing it to obtain standardized voice input information;
a data processing module, used for segmenting the standardized voice input information according to the part-of-speech classification standard to obtain a plurality of input word vectors, calculating similarity values between the input word vectors and the reference word vectors of each piece of instruction information in the lexicon using a convolutional neural network, and providing the corresponding feedback information to the user according to the similarity values;
and an execution module, used for executing a feedback task according to the feedback information output by the data processing module;
wherein standardizing the acquired voice information to obtain the standardized voice input information comprises:
step a: judging whether the input information acquired by the voice acquisition module contains dialect; if so, converting the dialect into standard Mandarin and then converting the whole input into text; otherwise, converting the input directly into text;
step b: judging whether the standard Mandarin text contains foreign-language content; if so, translating the foreign-language content into Chinese and then outputting the standardized voice input information; otherwise, directly outputting the text obtained in step a as the standardized voice input information;
and wherein, when the maximum similarity value between the input information and the instruction information is greater than or equal to a maximum threshold, the feedback information matched to the instruction information with the maximum similarity value is output directly; when the maximum similarity value is smaller than the maximum threshold but greater than a minimum threshold, the reference word vectors of the instruction information with the maximum similarity value are extracted, fuzzy-sound conversion is applied to each input word vector that differs from the extracted reference word vectors to obtain fuzzy input information composed of fuzzy input word vectors, and the maximum fuzzy similarity value between the fuzzy input word vectors and each piece of instruction information is calculated; when the maximum fuzzy similarity value is greater than or equal to the maximum threshold, the feedback information matched to the corresponding instruction information is output; otherwise, feedback indicating that the input information is invalid is output.
2. The intelligent voice interaction robot of claim 1, wherein, when the maximum fuzzy similarity value between the fuzzy input word vectors of the fuzzy input information and a piece of instruction information is greater than or equal to the maximum threshold, the current fuzzy input information replaces that piece of instruction information in the voice interaction lexicon.
3. The intelligent voice interaction robot of claim 1, wherein the plurality of reference word vectors contained in each instruction in the voice interaction lexicon form a sequence X, which can be represented as:

X = (A_n, B_v, C_a, D_num, E_pron, F_com, G_emp)

where A_n is the noun reference vector, B_v the verb reference vector, C_a the adjective reference vector, D_num the numeral reference vector, E_pron the quantifier reference vector, F_com the pronoun reference vector, and G_emp the particle reference vector.
4. The intelligent voice interaction robot of claim 3, wherein the input information comprises a plurality of input word vectors that form a sequence Y with the same part-of-speech structure as X:

Y = (A'_n, B'_v, C'_a, D'_num, E'_pron, F'_com, G'_emp)

where A'_n is the noun input vector, B'_v the verb input vector, C'_a the adjective input vector, D'_num the numeral input vector, E'_pron the quantifier input vector, F'_com the pronoun input vector, and G'_emp the particle input vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911410485.0A CN111292741B (en) | 2019-12-31 | 2019-12-31 | Intelligent voice interaction robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911410485.0A CN111292741B (en) | 2019-12-31 | 2019-12-31 | Intelligent voice interaction robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111292741A CN111292741A (en) | 2020-06-16 |
CN111292741B true CN111292741B (en) | 2023-04-18 |
Family
ID=71021549
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911410485.0A Active CN111292741B (en) | 2019-12-31 | 2019-12-31 | Intelligent voice interaction robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111292741B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115631753A (en) * | 2022-12-23 | 2023-01-20 | 无锡迪富智能电子股份有限公司 | Intelligent remote controller for toilet and use method thereof |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101464896A (en) * | 2009-01-23 | 2009-06-24 | 安徽科大讯飞信息科技股份有限公司 | Voice fuzzy retrieval method and apparatus |
CN106776545A (en) * | 2016-11-29 | 2017-05-31 | 西安交通大学 | A kind of method that Similarity Measure between short text is carried out by depth convolutional neural networks |
CN108595696A (en) * | 2018-05-09 | 2018-09-28 | 长沙学院 | A kind of human-computer interaction intelligent answering method and system based on cloud platform |
CN108712366A (en) * | 2018-03-27 | 2018-10-26 | 西安电子科技大学 | That morphology meaning of a word fuzzy search is supported in cloud environment can search for encryption method and system |
CN108921747A (en) * | 2018-07-06 | 2018-11-30 | 重庆和贯科技有限公司 | Make the wisdom education system of student's feeling of immersion |
CN109545197A (en) * | 2019-01-02 | 2019-03-29 | 珠海格力电器股份有限公司 | Recognition methods, device and the intelligent terminal of phonetic order |
CN109710929A (en) * | 2018-12-18 | 2019-05-03 | 金蝶软件(中国)有限公司 | A kind of bearing calibration, device, computer equipment and the storage medium of speech recognition text |
CN110164427A (en) * | 2018-02-13 | 2019-08-23 | 阿里巴巴集团控股有限公司 | Voice interactive method, device, equipment and storage medium |
CN110245216A (en) * | 2019-06-13 | 2019-09-17 | 出门问问信息科技有限公司 | For the semantic matching method of question answering system, device, equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7296011B2 (en) * | 2003-06-20 | 2007-11-13 | Microsoft Corporation | Efficient fuzzy match for evaluating data records |
US10796697B2 (en) * | 2017-01-31 | 2020-10-06 | Microsoft Technology Licensing, Llc | Associating meetings with projects using characteristic keywords |
US20190340503A1 (en) * | 2018-05-07 | 2019-11-07 | Ebay Inc. | Search system for providing free-text problem-solution searching |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101464896A (en) * | 2009-01-23 | 2009-06-24 | 安徽科大讯飞信息科技股份有限公司 | Voice fuzzy retrieval method and apparatus |
CN106776545A (en) * | 2016-11-29 | 2017-05-31 | 西安交通大学 | A kind of method that Similarity Measure between short text is carried out by depth convolutional neural networks |
CN110164427A (en) * | 2018-02-13 | 2019-08-23 | 阿里巴巴集团控股有限公司 | Voice interactive method, device, equipment and storage medium |
CN108712366A (en) * | 2018-03-27 | 2018-10-26 | 西安电子科技大学 | That morphology meaning of a word fuzzy search is supported in cloud environment can search for encryption method and system |
CN108595696A (en) * | 2018-05-09 | 2018-09-28 | 长沙学院 | A kind of human-computer interaction intelligent answering method and system based on cloud platform |
CN108921747A (en) * | 2018-07-06 | 2018-11-30 | 重庆和贯科技有限公司 | Make the wisdom education system of student's feeling of immersion |
CN109710929A (en) * | 2018-12-18 | 2019-05-03 | 金蝶软件(中国)有限公司 | A kind of bearing calibration, device, computer equipment and the storage medium of speech recognition text |
CN109545197A (en) * | 2019-01-02 | 2019-03-29 | 珠海格力电器股份有限公司 | Recognition methods, device and the intelligent terminal of phonetic order |
CN110245216A (en) * | 2019-06-13 | 2019-09-17 | 出门问问信息科技有限公司 | For the semantic matching method of question answering system, device, equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Fuzzy Entropy and Its Application for Enhanced Subspace Filtering; Hong-Bo Xie et al.; IEEE Transactions on Fuzzy Systems; IEEE; 2017-09-25; Vol. 26, No. 4; full text *
Pinyin Fuzzy Query Algorithm Based on Phonetic-Code Similarity; Yan Hongcan et al.; Computer and Modernization (计算机与现代化); CNKI; 2008-08-15; No. 8; full text *
Also Published As
Publication number | Publication date |
---|---|
CN111292741A (en) | 2020-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110609891A (en) | Visual dialog generation method based on context awareness graph neural network | |
CN107273913B (en) | Short text similarity calculation method based on multi-feature fusion | |
CN108170848B (en) | Chinese mobile intelligent customer service-oriented conversation scene classification method | |
CN112101044B (en) | Intention identification method and device and electronic equipment | |
CN113268974B (en) | Method, device and equipment for marking pronunciations of polyphones and storage medium | |
WO2023030105A1 (en) | Natural language processing model training method and natural language processing method, and electronic device | |
CN115238029A (en) | Construction method and device of power failure knowledge graph | |
CN112988970A (en) | Text matching algorithm serving intelligent question-answering system | |
CN115935959A (en) | Method for labeling low-resource glue word sequence | |
CN115759071A (en) | Government affair sensitive information identification system and method based on big data | |
CN115687609A (en) | Zero sample relation extraction method based on Prompt multi-template fusion | |
CN115759119A (en) | Financial text emotion analysis method, system, medium and equipment | |
Zhao et al. | Knowledge-aware bayesian co-attention for multimodal emotion recognition | |
CN111292741B (en) | Intelligent voice interaction robot | |
CN113065352B (en) | Method for identifying operation content of power grid dispatching work text | |
CN112307179A (en) | Text matching method, device, equipment and storage medium | |
CN117290478A (en) | Knowledge graph question-answering method, device, equipment and storage medium | |
CN112989829A (en) | Named entity identification method, device, equipment and storage medium | |
CN112257432A (en) | Self-adaptive intention identification method and device and electronic equipment | |
CN112349294A (en) | Voice processing method and device, computer readable medium and electronic equipment | |
CN111723583A (en) | Statement processing method, device, equipment and storage medium based on intention role | |
Kazakova et al. | Analysis of natural language processing technology: Modern problems and approaches | |
CN106682642A (en) | Multi-language-oriented behavior identification method and multi-language-oriented behavior identification system | |
CN115510230A (en) | Mongolian emotion analysis method based on multi-dimensional feature fusion and comparative reinforcement learning mechanism | |
CN116483314A (en) | Automatic intelligent activity diagram generation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |