CN111292741B - Intelligent voice interaction robot

Intelligent voice interaction robot

Info

Publication number
CN111292741B
CN111292741B (application CN201911410485.0A)
Authority
CN
China
Prior art keywords
information
input
voice
vector
fuzzy
Prior art date
Legal status
Active
Application number
CN201911410485.0A
Other languages
Chinese (zh)
Other versions
CN111292741A (en)
Inventor
刘兵 (Liu Bing)
田佳雯 (Tian Jiawen)
Current Assignee
Chongqing Hounify Technology Co ltd
Original Assignee
Chongqing Hounify Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Hounify Technology Co ltd
Priority to CN201911410485.0A
Publication of CN111292741A
Application granted
Publication of CN111292741B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L 25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an intelligent voice interaction robot comprising an intelligent interaction system, which in turn comprises a user profile module, a database module, a voice acquisition module, a data processing module and an execution module. Instruction information in a voice interaction lexicon and the user's voice input are decomposed into word vectors by part of speech; a convolutional neural network then computes a similarity value between the instruction information and the input, and corresponding feedback information is provided to the user according to that value. Input that cannot be matched successfully undergoes fuzzy-sound conversion and is then re-matched against the instruction information, which improves the efficiency of recognizing dialect speech.

Description

Intelligent voice interaction robot
Technical Field
The invention relates to an intelligent voice interaction robot.
Background
With the rapid development of science and technology, intelligent robots have entered many industries, bringing great convenience to production and daily life and freeing people from heavy, repetitive labor. Speech recognition technology is being applied ever more widely to the control of intelligent robots, and the speech recognition systems of such robots achieve high recognition accuracy. However, because some users' pronunciation differs owing to dialect, the recognition efficiency of a robot's speech recognition system is easily degraded.
Disclosure of Invention
The object of the invention is to provide an intelligent voice interaction robot that solves the problem of existing robots' low speech recognition efficiency caused by dialect pronunciation differences.
In order to solve the above technical problem, the present invention provides an intelligent voice interaction robot, which includes an intelligent interaction system, wherein the intelligent interaction system includes:
the user profile module is used for collecting user registration information and voice information to establish a user profile;
the database module is used for constructing a robot voice interaction lexicon from general robot instruction information and matching each instruction in the lexicon with corresponding feedback information, and for segmenting each instruction in the lexicon according to a part-of-speech classification standard to obtain a plurality of reference word vectors;
the voice acquisition module is used for collecting the voice information of a registered user and preprocessing it to obtain standardized voice input information;
the data processing module is used for segmenting the standardized voice input information according to the part-of-speech classification standard to obtain a plurality of input word vectors, for calculating with a convolutional neural network the similarity values between the input word vectors and the reference word vectors of each instruction in the voice interaction lexicon, and for providing corresponding feedback information to the user according to those similarity values;
and the execution module is used for executing the feedback task according to the feedback information output by the data processing module.
Further, the method for standardizing the collected voice information to obtain the standardized voice input information comprises the following steps:
step a: judging whether the input information collected by the voice acquisition module contains dialect; if so, converting the dialect in the input into standard Mandarin and then converting the whole input into text; otherwise, converting the input directly into text;
step b: judging whether the text contains foreign-language content; if so, translating the foreign-language content into Chinese and then outputting the result as the standardized voice input information; otherwise, outputting the text obtained in step a directly as the standardized voice input information.
Further, when the maximum similarity value between the input information and the instruction information is greater than or equal to a maximum threshold, the feedback information matched to the instruction corresponding to that maximum similarity value is output directly. When the maximum similarity value is smaller than the maximum threshold but larger than a minimum threshold, the reference word vectors of the instruction corresponding to the maximum similarity value are extracted, fuzzy-sound conversion is applied to the input word vectors that differ from those reference word vectors to obtain fuzzy input information composed of fuzzy input word vectors, and the maximum fuzzy similarity value between the fuzzy input word vectors and each instruction is calculated; when that value is greater than or equal to the maximum threshold, the feedback information matched to the corresponding instruction is output. Otherwise, feedback information indicating that the input is invalid is output.
Further, when the maximum fuzzy similarity value between the fuzzy input word vectors of the fuzzy input information and an instruction is greater than or equal to the maximum threshold, the current fuzzy input information replaces, in the voice interaction lexicon, the instruction having that maximum fuzzy similarity value.
Further, the plurality of reference word vectors contained in each instruction in the voice interaction lexicon form a sequence X, which can be expressed as:

X = (A_n, B_v, C_a, D_num, E_pron, F_com, G_emp)

where A_n is the noun reference vector, B_v the verb reference vector, C_a the adjective reference vector, D_num the numeral reference vector, E_pron the quantifier reference vector, F_com the pronoun reference vector, and G_emp the particle reference vector.
Further, the input information comprises a plurality of input word vectors that form a sequence Y, which can be expressed as:

Y = (A'_n, B'_v, C'_a, D'_num, E'_pron, F'_com, G'_emp)

where A'_n is the noun input vector, B'_v the verb input vector, C'_a the adjective input vector, D'_num the numeral input vector, E'_pron the quantifier input vector, F'_com the pronoun input vector, and G'_emp the particle input vector.
The invention has the following beneficial effects: the robot decomposes the instruction information in the voice interaction lexicon and the user's voice input into word vectors by part of speech; a convolutional neural network then computes a similarity value between the instruction information and the input, and corresponding feedback information is provided to the user according to that value; input that cannot be matched successfully undergoes fuzzy-sound conversion and is re-matched against the instruction information, which improves the efficiency of recognizing dialect speech.
Drawings
FIG. 1 is a schematic block diagram of one embodiment of the present invention.
Detailed Description
The intelligent voice interaction robot shown in FIG. 1 comprises an intelligent interaction system, which comprises a user profile module, a database module, a voice acquisition module, a data processing module and an execution module.
The user profile module is used for collecting user registration information and voice information to establish a user profile. The database module is used for constructing a robot voice interaction lexicon from general robot instruction information, matching each instruction in the lexicon with corresponding feedback information, and segmenting each instruction in the lexicon according to a part-of-speech classification standard to obtain a plurality of reference word vectors. The voice acquisition module is used for collecting the voice information of a registered user and preprocessing it to obtain standardized voice input information. The data processing module is used for segmenting the standardized voice input information according to the part-of-speech classification standard to obtain a plurality of input word vectors, calculating with a convolutional neural network the similarity values between the input word vectors and the reference word vectors of each instruction in the lexicon, and providing corresponding feedback information to the user according to those similarity values. The execution module is used for executing the feedback task according to the feedback information output by the data processing module.
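To make the data flow between these modules concrete, the following is a minimal Python sketch of how the five modules described above might be wired together. All class names, method names and signatures are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserProfileModule:
    """Builds user profiles from registration info and voice information."""
    profiles: dict = field(default_factory=dict)

    def register(self, user_id: str, info: dict, voiceprint: bytes) -> None:
        self.profiles[user_id] = {"info": info, "voiceprint": voiceprint}

@dataclass
class DatabaseModule:
    """Voice interaction lexicon: instruction text -> feedback information."""
    lexicon: dict = field(default_factory=dict)

    def add_instruction(self, instruction: str, feedback: str) -> None:
        self.lexicon[instruction] = feedback

class VoiceAcquisitionModule:
    def acquire(self, raw_audio: bytes) -> str:
        # Stand-in for the dialect/foreign-language normalization of
        # steps a and b (see the preprocessing sketch below).
        return "打开空调"

class DataProcessingModule:
    def __init__(self, db: DatabaseModule) -> None:
        self.db = db

    def match(self, text: str) -> Optional[str]:
        # Stand-in for the CNN similarity matching described later;
        # exact lookup is used here only to keep the sketch runnable.
        return self.db.lexicon.get(text)

class ExecutionModule:
    def execute(self, feedback: Optional[str]) -> None:
        print(feedback if feedback is not None else "invalid input")

if __name__ == "__main__":
    db = DatabaseModule()
    db.add_instruction("打开空调", "turning the air conditioner on")
    text = VoiceAcquisitionModule().acquire(b"raw audio bytes")
    ExecutionModule().execute(DataProcessingModule(db).match(text))
```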
The robot decomposes the instruction information in the voice interaction lexicon and the user's voice input into word vectors by part of speech; a convolutional neural network then computes a similarity value between the instruction information and the input, and corresponding feedback information is provided to the user according to that value. Input that cannot be matched successfully undergoes fuzzy-sound conversion and is then re-matched against the instruction information, which improves the efficiency of recognizing dialect speech.
According to an embodiment of the present application, the method for standardizing the collected voice information to obtain the standardized voice input information comprises:
step a: judging whether the input information collected by the voice acquisition module contains dialect; if so, converting the dialect in the input into standard Mandarin and then converting the whole input into text; otherwise, converting the input directly into text;
step b: judging whether the text contains foreign-language content; if so, translating the foreign-language content into Chinese and then outputting the result as the standardized voice input information; otherwise, outputting the text obtained in step a directly as the standardized voice input information.
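As a concrete illustration, steps a and b can be sketched as a single Python function. Every helper passed in (dialect detection, dialect-to-Mandarin conversion, speech-to-text, foreign-language detection, translation) is an assumed stand-in, since the patent does not specify how those operations are implemented.

```python
def standardize_voice_input(audio,
                            contains_dialect,      # assumed detector: audio -> bool
                            dialect_to_mandarin,   # assumed converter: audio -> audio
                            speech_to_text,        # assumed recognizer: audio -> str
                            contains_foreign,      # assumed detector: str -> bool
                            translate_to_chinese): # assumed translator: str -> str
    # Step a: if the input contains dialect, convert it to standard
    # Mandarin before converting the whole input to text; otherwise
    # convert the input to text directly.
    if contains_dialect(audio):
        audio = dialect_to_mandarin(audio)
    text = speech_to_text(audio)

    # Step b: if the text contains foreign-language content, translate
    # it into Chinese; otherwise output the text from step a unchanged.
    if contains_foreign(text):
        text = translate_to_chinese(text)
    return text  # standardized voice input information
```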
According to an embodiment of the application, when the maximum similarity value between the input information and the instruction information is greater than or equal to a maximum threshold, the feedback information matched to the instruction corresponding to that maximum similarity value is output directly. When the maximum similarity value is smaller than the maximum threshold but larger than a minimum threshold, the reference word vectors of the instruction corresponding to the maximum similarity value are extracted, fuzzy-sound conversion is applied to the input word vectors that differ from those reference word vectors to obtain fuzzy input information composed of fuzzy input word vectors, and the maximum fuzzy similarity value between the fuzzy input word vectors and each instruction is calculated; when that value is greater than or equal to the maximum threshold, the feedback information matched to the corresponding instruction is output. Otherwise, feedback information indicating that the input is invalid is output.
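The two-threshold decision just described can be summarized in a Python sketch. Here `similarity` (the CNN score) and `fuzzy_convert` (fuzzy-sound conversion of one word vector) are assumed black boxes, and the threshold values are illustrative only.

```python
def respond(input_vectors, lexicon, similarity, fuzzy_convert,
            max_threshold=0.9, min_threshold=0.6):
    """Two-threshold matching with a fuzzy-sound retry, as described above.

    lexicon: list of (reference_vectors, feedback) pairs.
    similarity: assumed CNN-based scorer (input_vectors, reference_vectors) -> float.
    fuzzy_convert: assumed fuzzy-sound conversion for a single word vector.
    """
    scores = [similarity(input_vectors, ref) for ref, _ in lexicon]
    best = max(range(len(lexicon)), key=lambda i: scores[i])

    # Case 1: confident match -> output the matched feedback directly.
    if scores[best] >= max_threshold:
        return lexicon[best][1]

    # Case 2: middling match -> apply fuzzy-sound conversion to the input
    # word vectors that differ from the best instruction's reference
    # vectors, then re-match against every instruction.
    if scores[best] > min_threshold:
        ref_vectors, _ = lexicon[best]
        fuzzy_input = [v if v in ref_vectors else fuzzy_convert(v)
                       for v in input_vectors]
        fuzzy_scores = [similarity(fuzzy_input, ref) for ref, _ in lexicon]
        fbest = max(range(len(lexicon)), key=lambda i: fuzzy_scores[i])
        if fuzzy_scores[fbest] >= max_threshold:
            return lexicon[fbest][1]

    # Case 3: no usable match -> report the input as invalid.
    return "invalid input"
```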
According to an embodiment of the application, when the maximum fuzzy similarity value between the fuzzy input word vectors of the fuzzy input information and an instruction is greater than or equal to the maximum threshold, the current fuzzy input information replaces, in the voice interaction lexicon, the instruction having that maximum fuzzy similarity value.
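Continuing the matching sketch above with the same assumed names, this self-update amounts to one replacement inside the successful fuzzy-match branch:

```python
# Inside the middling-match branch of respond(), after a successful
# fuzzy re-match: store the fuzzy input in place of the matched
# instruction's reference vectors (illustrative, not the patent's API).
if fuzzy_scores[fbest] >= max_threshold:
    lexicon[fbest] = (fuzzy_input, lexicon[fbest][1])
    return lexicon[fbest][1]
```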
According to an embodiment of the present application, the plurality of reference word vectors contained in each instruction in the voice interaction lexicon form a sequence X, which can be expressed as:

X = (A_n, B_v, C_a, D_num, E_pron, F_com, G_emp)

where A_n is the noun reference vector, B_v the verb reference vector, C_a the adjective reference vector, D_num the numeral reference vector, E_pron the quantifier reference vector, F_com the pronoun reference vector, and G_emp the particle reference vector.
According to an embodiment of the present application, the input information comprises a plurality of input word vectors that form a sequence Y, which can be expressed as:

Y = (A'_n, B'_v, C'_a, D'_num, E'_pron, F'_com, G'_emp)

where A'_n is the noun input vector, B'_v the verb input vector, C'_a the adjective input vector, D'_num the numeral input vector, E'_pron the quantifier input vector, F'_com the pronoun input vector, and G'_emp the particle input vector.
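To illustrate how the sequences X and Y might be assembled from segmented text, the sketch below groups words by part of speech into the seven slots named above. The POS tagger and the word-embedding function are assumed stand-ins, and averaging the vectors within a slot is one plausible reading, not a detail stated in the patent.

```python
import numpy as np

POS_SLOTS = ["noun", "verb", "adjective", "numeral",
             "quantifier", "pronoun", "particle"]  # A_n ... G_emp

def build_sequence(words, pos_of, embed, dim=64):
    """Build the 7-slot sequence (X for an instruction, Y for the input).

    pos_of: assumed POS tagger, word -> one of POS_SLOTS.
    embed:  assumed embedding, word -> np.ndarray of shape (dim,).
    Each slot is the mean of its words' vectors (zero vector if empty).
    """
    slots = {p: [] for p in POS_SLOTS}
    for w in words:
        slots[pos_of(w)].append(embed(w))
    return np.stack([np.mean(slots[p], axis=0) if slots[p]
                     else np.zeros(dim) for p in POS_SLOTS])

# Example with a toy tagger and embedding:
# X = build_sequence(["打开", "空调"], pos_of=my_tagger, embed=my_embed)
# X.shape == (7, 64)
```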
The convolutional neural network comprises an attention layer, a convolutional layer, a pooling layer and an output layer. The attention layer takes the input word vectors and the reference word vectors as its input; the convolutional layer takes the output of the attention layer as its input and applies a two-dimensional convolution to it; the pooling layer pools the output of the convolutional layer; and the output layer uses a softmax function to produce the classification.
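A minimal PyTorch sketch of a network with that four-layer shape is given below. The dot-product attention, the channel count, the pooling size and the two-class (match / no match) softmax output are all assumptions made to keep the sketch runnable; the patent names the layers but not their parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityCNN(nn.Module):
    """Attention -> 2-D convolution -> pooling -> softmax, as described above."""

    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=1, out_channels=8,
                              kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool2d((4, 4))
        self.out = nn.Linear(8 * 4 * 4, 2)  # assumed 2 classes: match / no match

    def forward(self, x, r):
        # x: input word vectors (batch, 7, dim); r: reference word vectors.
        attn = torch.softmax(x @ r.transpose(1, 2), dim=-1)   # attention layer
        mixed = attn @ r                                      # attended reference
        feat = self.conv(mixed.unsqueeze(1))                  # 2-D convolution
        feat = self.pool(F.relu(feat)).flatten(1)             # pooling layer
        return torch.softmax(self.out(feat), dim=-1)          # softmax output

# Toy usage: read the probability of the "match" class as the similarity value.
net = SimilarityCNN()
x = torch.randn(1, 7, 64)   # sequence Y (input word vectors)
r = torch.randn(1, 7, 64)   # sequence X (reference word vectors)
print(net(x, r)[0, 1].item())
```

Reading the match-class probability as the similarity value is one plausible way to connect this network to the two-threshold logic above.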
Finally, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. An intelligent voice interaction robot, comprising an intelligent interaction system, wherein the intelligent interaction system comprises:
the user profile module is used for collecting user registration information and voice information to establish a user profile;
the database module is used for constructing a robot voice interaction lexicon from general robot instruction information and matching each instruction in the lexicon with corresponding feedback information, and for segmenting each instruction in the lexicon according to a part-of-speech classification standard to obtain a plurality of reference word vectors;
the voice acquisition module is used for collecting the voice information of a registered user and preprocessing it to obtain standardized voice input information;
the data processing module is used for segmenting the standardized voice input information according to the part-of-speech classification standard to obtain a plurality of input word vectors, for calculating with a convolutional neural network the similarity values between the input word vectors and the reference word vectors of each instruction in the voice interaction lexicon, and for providing corresponding feedback information to the user according to those similarity values;
the execution module is used for executing a feedback task according to the feedback information output by the data processing module;
the method for standardizing the collected voice information to obtain standardized voice information comprises the following steps:
step a: judging whether the input information collected by the voice acquisition module contains dialect; if so, converting the dialect in the input into standard Mandarin and then converting the whole input into text; otherwise, converting the input directly into text;
step b: judging whether the text contains foreign-language content; if so, translating the foreign-language content into Chinese and then outputting the result as the standardized voice input information; otherwise, outputting the text obtained in step a directly as the standardized voice input information;
and wherein, when the maximum similarity value between the input information and the instruction information is greater than or equal to a maximum threshold, the feedback information matched to the instruction corresponding to that maximum similarity value is output directly; when the maximum similarity value is smaller than the maximum threshold but larger than a minimum threshold, the reference word vectors of the instruction corresponding to the maximum similarity value are extracted, fuzzy-sound conversion is applied to the input word vectors that differ from those reference word vectors to obtain fuzzy input information composed of fuzzy input word vectors, the maximum fuzzy similarity value between the fuzzy input word vectors and each instruction is calculated, and when that value is greater than or equal to the maximum threshold, the feedback information matched to the corresponding instruction is output; otherwise, feedback information indicating that the input is invalid is output.
2. The intelligent voice interaction robot of claim 1, wherein, when the maximum fuzzy similarity value between the fuzzy input word vectors of the fuzzy input information and an instruction is greater than or equal to the maximum threshold, the current fuzzy input information replaces, in the voice interaction lexicon, the instruction having that maximum fuzzy similarity value.
3. The intelligent voice interaction robot of claim 1, wherein the plurality of reference word vectors contained in each instruction in the voice interaction lexicon form a sequence X, which can be expressed as:

X = (A_n, B_v, C_a, D_num, E_pron, F_com, G_emp)

wherein A_n is the noun reference vector, B_v the verb reference vector, C_a the adjective reference vector, D_num the numeral reference vector, E_pron the quantifier reference vector, F_com the pronoun reference vector, and G_emp the particle reference vector.
4. The intelligent voice interaction robot of claim 3, wherein the input information comprises a plurality of input word vectors that form a sequence Y, which can be expressed as:

Y = (A'_n, B'_v, C'_a, D'_num, E'_pron, F'_com, G'_emp)

wherein A'_n is the noun input vector, B'_v the verb input vector, C'_a the adjective input vector, D'_num the numeral input vector, E'_pron the quantifier input vector, F'_com the pronoun input vector, and G'_emp the particle input vector.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911410485.0A 2019-12-31 2019-12-31 Intelligent voice interaction robot

Publications (2)

Publication Number Publication Date
CN111292741A CN111292741A (en) 2020-06-16
CN111292741B (en) 2023-04-18

Family

ID=71021549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911410485.0A Intelligent voice interaction robot 2019-12-31 2019-12-31 (Active, granted as CN111292741B)

Country Status (1)

Country Link
CN (1) CN111292741B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115631753A (en) * 2022-12-23 2023-01-20 无锡迪富智能电子股份有限公司 Intelligent remote controller for toilet and use method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7296011B2 (en) * 2003-06-20 2007-11-13 Microsoft Corporation Efficient fuzzy match for evaluating data records
US10796697B2 (en) * 2017-01-31 2020-10-06 Microsoft Technology Licensing, Llc Associating meetings with projects using characteristic keywords
US20190340503A1 (en) * 2018-05-07 2019-11-07 Ebay Inc. Search system for providing free-text problem-solution searching

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101464896A * 2009-01-23 2009-06-24 安徽科大讯飞信息科技股份有限公司 Voice fuzzy retrieval method and apparatus
CN106776545A * 2016-11-29 2017-05-31 西安交通大学 Method for computing similarity between short texts with a deep convolutional neural network
CN110164427A * 2018-02-13 2019-08-23 阿里巴巴集团控股有限公司 Voice interaction method, apparatus, device and storage medium
CN108712366A * 2018-03-27 2018-10-26 西安电子科技大学 Searchable encryption method and system supporting morphological and semantic fuzzy retrieval in a cloud environment
CN108595696A * 2018-05-09 2018-09-28 长沙学院 Cloud-platform-based intelligent human-computer-interaction question answering method and system
CN108921747A * 2018-07-06 2018-11-30 重庆和贯科技有限公司 Smart education system for immersing students
CN109710929A * 2018-12-18 2019-05-03 金蝶软件(中国)有限公司 Method, apparatus, computer device and storage medium for correcting speech recognition text
CN109545197A * 2019-01-02 2019-03-29 珠海格力电器股份有限公司 Voice instruction recognition method, apparatus and intelligent terminal
CN110245216A * 2019-06-13 2019-09-17 出门问问信息科技有限公司 Semantic matching method, apparatus, device and storage medium for a question answering system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hong-Bo Xie et al., "Fuzzy Entropy and Its Application for Enhanced Subspace Filtering", IEEE Transactions on Fuzzy Systems, vol. 26, no. 4, 2017-09-25, full text *
Yan Hongcan et al., "Pinyin fuzzy query algorithm based on phonetic code similarity" (基于音码相似度的拼音模糊查询算法), Computer and Modernization (《计算机与现代化》), no. 8, 2008-08-15, full text *

Also Published As

Publication number Publication date
CN111292741A (en) 2020-06-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant