CN113239166A - Automatic man-machine interaction method based on semantic knowledge enhancement - Google Patents
Automatic man-machine interaction method based on semantic knowledge enhancement
- Publication number
- CN113239166A (application number CN202110567502.2A)
- Authority
- CN
- China
- Prior art keywords
- text
- semantic
- vector
- sequence
- vector sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/3329 — Natural language query formulation or dialogue systems
- G06F16/3332 — Query translation
- G06F40/247 — Thesauruses; Synonyms
- G06F40/289 — Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/30 — Semantic analysis
- G06N3/045 — Combinations of networks
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an automatic human-computer interaction method based on semantic knowledge enhancement. The method can be used to build an intelligent writing system: given a text created by a user, it offers candidate texts with different modes of expression, which the user can select and further modify, making the final text more expressive and varied and avoiding repetitive, monotonous wording. The invention addresses the inability of traditional language generation technology to rewrite text automatically and effectively: it designs a neural network model based on a multi-head attention mechanism and uses external semantic knowledge to improve the diversity and flexibility of text rewriting. Compared with traditional template-based text rewriting methods, the method is more flexible and general, greatly reduces manual participation, and lowers production cost.
Description
Technical Field
The invention relates to the technical field of computer applications, computer systems and their technical products, and in particular to an automatic man-machine interaction method based on semantic knowledge enhancement.
Background
Natural language generation is an interdisciplinary field of artificial intelligence and computational linguistics that aims to make machines produce understandable human-language text; it is an emerging mode of human-computer interaction. Progress in natural language generation helps build strong artificial intelligence systems, deepens the understanding of human language, and pushes the boundary of human-computer interaction. The technology has been applied successfully in many fields. For example, automatic text rewriting can be used in a writing system to vary how a text is expressed, improving the system's diversity; in a question-answering system, it can rewrite a user's question, simplifying complex text and lowering the difficulty of machine understanding; and in a dialogue system, it can simplify hard-to-understand text into easier text, widening the system's applicability to users with different knowledge levels.
As a typical human-computer interaction method, existing text rewriting techniques have two main problems. Template-based approaches fill slots in templates customized by domain experts; the rewritten sentences lack diversity and may not even be grammatically fluent. Moreover, template customization demands substantial manual effort: templates are difficult to design, lack a unified standard, and must be written by professionals, which further raises the system's labor cost and limits flexibility. To avoid these problems, neural text rewriting techniques model the rewriting scenario with a sequence generation model and generate the target paraphrase directly from the original sentence through a neural network. However, such techniques can make only limited modifications to the original text, such as changing word order or part of speech, so the modification amplitude is small, diversity is insufficient, and the results still fall far short of the target.
Disclosure of Invention
The invention aims to remedy the prior art's small sentence modification amplitude and insufficient diversity by providing an automatic human-computer interaction method based on semantic knowledge enhancement.
The invention is realized by the following technical scheme:
An automatic man-machine interaction method based on semantic knowledge enhancement comprises an encoding module, a decoding module, a semantic knowledge alignment module and an output mapping module. During the training stage of the text rewriting device, a trainer device checks the device's intermediate output: by comparing the intermediate output with the correct output, it obtains a feedback signal that guides the text rewriting device toward reasonable output.
The method comprises a training phase and a generation phase. The training phase defines an initial text rewriting device whose parameters are randomly initialized. During training, the text rewriting device reads individual "original text - rewritten text" pairs and is trained by the trainer device; specifically, the cross entropy between the device's predicted output and the correct answer is used as the loss function measuring prediction quality. When the value of the loss function stabilizes, the device has converged, training ends, and the final version of the text rewriting device is obtained. In the generation phase, the trained text rewriting device takes a given text as input and generates the corresponding rewritten text.
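The train-until-converged loop described above can be sketched in Python; the `rewriter.forward`/`rewriter.backward` interface is hypothetical and stands in for whatever model and optimizer are used, while `cross_entropy` matches the loss function the text names.

```python
import math

def cross_entropy(pred_probs, target_ids):
    """Mean negative log-probability of the correct output tokens."""
    return -sum(math.log(p[t]) for p, t in zip(pred_probs, target_ids)) / len(target_ids)

def train(rewriter, pairs, tol=1e-4, max_epochs=100):
    """Read "original text - rewritten text" pairs until the loss stabilizes."""
    prev = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for src, tgt in pairs:
            probs = rewriter.forward(src, tgt)   # per-position distributions (hypothetical API)
            loss = cross_entropy(probs, tgt)
            rewriter.backward(loss)              # one gradient step (hypothetical API)
            total += loss
        avg = total / len(pairs)
        if abs(prev - avg) < tol:                # loss value has become stable: converged
            break
        prev = avg
    return rewriter                              # final version of the rewriting device
```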
The encoding module encodes an original input text sequence into a context semantic vector sequence;
the decoding module decodes a text vector sequence to be revised according to the context semantic vector sequence;
the semantic knowledge alignment module vectorizes externally provided semantic information to obtain an external semantic vector;
and the output mapping module fuses the text vector sequence to be revised with the external semantic vector and produces the output text by gradual revision.
The encoding module first segments the continuous original input text sequence into a word sequence with a Chinese word segmentation tool; it then prepares a word embedding matrix and, for each word in the word sequence, finds the unique corresponding word vector representation in that matrix, yielding a word vector sequence; finally, it encodes the word vector sequence through an eight-layer multi-head attention network to obtain the context semantic vector sequence.
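The segment-then-look-up step can be sketched as follows, assuming a toy vocabulary; in practice a Chinese word segmentation tool (e.g. jieba) produces the word sequence, and the embedding matrix is a trained parameter rather than a random one.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"我": 0, "喜欢": 1, "写作": 2, "<unk>": 3}        # toy segmented-word vocabulary
d_model = 8
embedding_matrix = rng.normal(size=(len(vocab), d_model))  # the word embedding matrix

def embed(words):
    """Find each word's unique row in the embedding matrix -> word vector sequence."""
    ids = [vocab.get(w, vocab["<unk>"]) for w in words]
    return embedding_matrix[ids]                 # shape (sequence length, d_model)

word_vectors = embed(["我", "喜欢", "写作"])     # ready for the attention network
```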
The decoding module adopts an eight-layer decoding multi-head attention network to decode the input context semantic vector sequence to obtain a text vector sequence to be revised.
The externally provided semantic information has two parts: synonyms of words in the original input text sequence, and the position information of the words those synonyms correspond to in that sequence. Each synonym is treated as text input and is given its unique vectorized representation by look-up in the same word embedding matrix used by the encoding module; each position is a number vectorized with a position coding mechanism. After the vector representation of the external semantic information is obtained, the text vector sequence to be revised is taken as weighting information, and a cross attention mechanism performs semantic redistribution of the external representation according to those weights, yielding the external semantic vector.
The output mapping module reads the text vector sequence to be revised and the external semantic vector as input and revises the text vector sequence step by step through a mapping unit to obtain the revised text vector sequence; the mapping unit fuses the inputs into a final output vector through a fully connected neural network layer and controls how strongly the external semantic vector influences the text vector sequence to be revised. Finally, the revised text vector sequence is mapped to text through a vector-to-word-list mapping function, producing the complete output text.
The whole automatic text rewriting device is trained and tuned by the trainer device, which uses a gradient descent algorithm in a "local-global" multi-task training mode to obtain the optimal automatic text rewriting device through iterative training.
The trainer device first performs local training of the encoding module alone. The goal is synonym-identification judgment for each word the encoding module reads, i.e. deciding whether each input word has a corresponding synonym in the word list. When the encoding module's training finishes, global training begins: local training continues, and at the same time the whole automatic text rewriting device is trained as a whole, with the goal of guiding the device's output text toward the given reference text. During global training, the guidance signal provided by local training and the guidance signal generated by the global objective are weighted and summed, ensuring the two play complementary roles.
An automatic human-computer interaction method based on semantic knowledge enhancement specifically comprises the following steps:
s1, inputting an original input text sequence, and coding the original input text sequence into a context semantic vector sequence;
s2, decoding a text vector sequence to be revised according to the context semantic vector sequence;
s3, inputting externally provided semantic information, vectorizing and coding the externally provided semantic information, and acquiring the external semantic information to obtain an external semantic vector;
and S4, fusing the text vector sequence to be revised with the external semantic vector and producing the output text by gradual revision.
The automatic text rewriting method is trained and tuned by means of a trainer device.
The invention has the advantages that: given a text, it automatically generates a text with the same semantics but stronger expression. It can be used to build an intelligent writing system that offers candidate texts with different modes of expression for the text a user creates; the user can select among them and modify further, so the resulting text gains expressiveness and diversity and avoids repetitive, monotonous wording.
The invention solves the problem that traditional language generation technology cannot rewrite text automatically and effectively: it designs a neural network model based on a multi-head attention mechanism and uses external semantic knowledge to improve the diversity and flexibility of text rewriting.
Compared with traditional template-based text rewriting methods, the method is more flexible and general, greatly reduces manual participation, and lowers production cost.
Drawings
FIG. 1 is a block diagram of an apparatus according to an embodiment of the present invention.
Fig. 2 is a working schematic diagram of the embodiment of the invention.
FIG. 3 is a flowchart illustrating the operation of the present invention.
Detailed Description
As shown in fig. 1, 2 and 3, the present invention includes a text rewriting device and a trainer device. The text rewriting device includes four sub-modules: the encoding module 1, the decoding module 2, the semantic knowledge alignment module 3 and the output mapping module 4. The trainer device 5 is an independent module and implements a "local-global" training method.
Encoding module
The encoding module 1 first converts the input word sequence into word embedding vectors and then encodes the input through a multi-head attention mechanism. The first step is to represent the input sentence as a sequence of word embedding vectors by looking up the word embedding matrix. In addition to the word vector, a position vector is introduced to encode the position of the jth word in the sentence. The position vector has the same dimension as the word vector; its ith dimension follows the sinusoidal scheme:

v(j, 2i) = sin(j / 10000^(2i/d)), v(j, 2i+1) = cos(j / 10000^(2i/d))

where d is the vector dimension.
the input vector of the final coding module is the sum of the word vector and the position vector, and the formula is as follows:
k=e+v
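The sum k = e + v can be sketched with the standard sinusoidal position encoding (the patent's exact formula is not reproduced in this text, so the classic Transformer scheme is assumed):

```python
import numpy as np

def position_encoding(seq_len, d_model):
    """Sinusoidal position vectors v with the same dimension as the word vectors."""
    pos = np.arange(seq_len)[:, None]          # position j of each word
    i = np.arange(d_model)[None, :]            # dimension index i
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

e = np.zeros((5, 8))                           # stand-in word-vector sequence
k = e + position_encoding(5, 8)                # k = e + v: the encoder's actual input
```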
After obtaining the vector representation of the sequence as input, the encoding module encodes the input word vector sequence through an eight-layer multi-head attention network to obtain the context semantic vector sequence. Each multi-head attention layer consists of a self-attention mechanism, a fully connected layer and a residual structure, with the specific formulas:
Block(Q,K,V)=LNorm(FFNN(m))+m
m=LNorm(MultiAttn(Q,K,V))+Q
where FFNN denotes a fully connected feed-forward network and LNorm denotes layer normalization. MultiAttn is the key component of the encoder, allowing the model to jointly attend to information from different representation subspaces at different positions. Its input has three parts, a query Q, a key K and a value V, and operating on the three achieves global semantic understanding:

MultiAttn(Q, K, V) = (h1, h2, ..., hh)W

where each head hi is a scaled dot-product attention over linearly projected Q, K and V, and W is the output projection. Stacking eight such blocks, the sentence encoder converts the input vector sequence (k1, ..., kn) into the context semantic vector sequence z = (z1, ..., zn).
the output of the encoding module 1 is a context semantic vector sequence, which can then be input to a decoding module for the next process. Furthermore, we consider the encoding module as a shared module in the trainer device. The output z is not only used in "global" training, but is also an important ring of "local" training.
Decoding module
The input to the decoding module 2 is the context semantic vector sequence from the encoding module. The decoding module uses an eight-layer decoding multi-head attention network as its backbone and decodes the input step by step to obtain the text vector sequence to be revised.
The decoding module 2 is likewise built from multi-head attention networks and therefore has a structure similar to the encoding module, with one difference in the multi-head attention: an extra multi-head attention layer is added before the fully connected layer, and the three inputs of that MultiAttn layer are no longer identical - Q comes from the decoding module's context semantic vector sequence, while K and V are the outputs of the previous multi-head attention layer. The text vector sequence to be revised that the decoding module outputs is denoted s = (s1, ..., sm).
semantic knowledge alignment module 3
In the semantic knowledge alignment module 3, the input is "synonym-position" pairs retrieved from a synonym library and used as external knowledge to increase the diversity of the rewriting results. Each pair must be converted to a vector representation before it is input to the decoder. Specifically, the unique vector representation of a synonym is obtained by looking it up in the word embedding matrix used by the encoding module; if a synonym is a phrase, the embeddings of its tokens are summed to obtain a phrase vector. The position uses the same position coding scheme as the encoding module, converting the number into a high-dimensional vector. The position vector establishes links between the synonyms and the tokens in the input text, directing the decoding module to pay more attention to the location of each synonym pair. Each input item is thus a (synonym vector, position vector) pair.
the initial output of the basic decoder is the text vector sequence to be revised from the decoding module, and the text vector sequence to be revised cannot generate the diversified expressive text. Therefore, we further use a cross-attention mechanism to integrate the "synonym-location" vector representation provided by the external knowledge base into the text vector sequence to be modified. The essence of the cross attention mechanism is to revise the text vector sequence by using synonym information, replace some words by using synonyms, and adjust the word sequence of the replaced text and the word collocation for the context, which increases the diversity of the vocabulary and phrase level. The synonym information vector is calculated as follows:
output mapping module 4
This module converts the various vectorized representations in the device into the text to be output. It reads the text vector sequence to be revised and the external semantic vector as input and revises the text vector sequence step by step through a mapping unit; the mapping unit fuses the revision information of the external semantic vector into the final output vector through a fully connected neural network layer and controls how strongly the external semantic vector influences the final text vector. A parameter matrix W then projects the fused vectors to match the dimension of the output vocabulary. Each time a new word is to be generated, a corresponding generation probability is calculated, and the process continues until an end symbol is produced or a preset maximum output text length is reached.
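The mapping unit and the generation loop can be sketched as below, under the assumption that fusion is an additive gate; the text only says the unit "controls the degree of influence", so `gate` is a hypothetical parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def output_step(s_t, c_t, W, gate):
    """Fuse one to-revise vector s_t with its external semantic vector c_t,
    then project by W to vocabulary-sized logits; `gate` is an assumed
    scalar controlling the external knowledge's influence."""
    fused = s_t + gate * c_t
    return softmax(W @ fused)            # generation probability over the word list

def generate(steps, W, gate, end_id, max_len):
    """Emit one word at a time until the end symbol or the maximum length."""
    out = []
    for s_t, c_t in steps:
        word = int(np.argmax(output_step(s_t, c_t, W, gate)))
        if word == end_id or len(out) >= max_len:
            break
        out.append(word)
    return out
```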
Trainer device
The trainer device 5 adjusts the parameters of the text rewriting device in a "local-global" training mode. In the local training stage, the trainer optimizes only the encoding module, using a synonym tagging task: a word-by-word binary classification whose aim is to identify whether each word in a sentence has a corresponding synonym. To execute this task, the encoding module is used on its own, with one adjustment: an additional fully connected neural network is appended after the encoding module, mapping the context semantic vector sequence into a series of synonym tag outputs:
f = σ(W_f z + b_f)
the synonym labeling can better help the text rewriting device to locate the position of the synonym and rewrite the synonym by combining the language relationship between the synonym and the phrase in the original sentence. Thus, in the "local" training phase, we independently train the encoding module of the text rewriting device with the synonym tagging task as the training target.
In the "global" training phase, the trainer runs the synonym tagging task and the sequence generation task synchronously. The encoding module parameters obtained in local training are reused; by this point the encoding module already has strong inference ability for the sequence generation task, so the feedback from the synonym tagging task is reduced appropriately and the feedback from the sequence generation task takes the leading role, at a ratio of 1:9. In this stage the text rewriting device is jointly trained under the two objective functions, minimizing a total loss that is a linear combination of the two task-specific losses:
L = 0.9 * loss1 + 0.1 * loss2
therein, loss1Is a loss function of the generating task, loss2Is a loss function for the synonym annotation task.
Claims (10)
1. An automatic man-machine interaction method based on semantic knowledge enhancement is characterized by comprising the following steps: the method specifically comprises the following steps:
s1, inputting an original input text sequence, and coding the original input text sequence into a context semantic vector sequence;
s2, decoding a text vector sequence to be revised according to the context semantic vector sequence;
s3, inputting externally provided semantic information, vectorizing and coding the externally provided semantic information, and acquiring the external semantic information to obtain an external semantic vector;
and S4, fusing the text vector sequence to be revised with the external semantic vector and producing the output text by gradual revision.
2. The automated human-computer interaction method based on semantic knowledge enhancement as claimed in claim 1, wherein: the specific steps of step S1 are as follows: firstly, segmenting a continuous original input text sequence into word sequences by a Chinese word segmentation tool; then preparing a word embedding matrix, and finding out the unique corresponding word vector representation in the word embedding matrix for each word in the word sequence to obtain a word vector sequence; and finally, coding the word vector sequence through an eight-layer multi-head attention network to obtain a context semantic vector sequence.
3. The automated human-computer interaction method based on semantic knowledge enhancement as claimed in claim 1, wherein: in step S2, an eight-layer decoding multi-head attention network is used to decode the input context semantic vector sequence, so as to obtain a text vector sequence to be revised.
4. The automated human-computer interaction method based on semantic knowledge enhancement as claimed in claim 2, wherein: the externally provided semantic information includes two parts of information: one is synonyms of words in the original input text sequence, and the other is position information of the words corresponding to the synonyms in the original input text sequence; the synonyms are used as text input, and the same word embedding matrix as the step S1 is used for table lookup or unique vectorization representation; the position information is used as a number, vectorization is carried out by using a position coding mechanism, after the vector representation of the external semantic information is obtained, a text vector sequence to be modified is taken as weight adjusting information, and semantic redistribution is carried out on the vector representation of the external semantic information according to the weight by using a cross attention mechanism, so that an external semantic vector is obtained.
5. The method of claim 4, wherein the method comprises the following steps: the specific content of step S3 is as follows: reading a text vector sequence to be revised and an external semantic vector as input, and gradually revising the text vector sequence to be revised through a mapping unit to obtain a revised text vector sequence; the mapping unit fuses the text vector sequence to be revised into a final output vector through a fully connected neural network layer, and the mapping unit controls the influence degree of the text vector sequence to be revised by an external semantic vector; and finally mapping the revised text vector sequence to the final text output through a vector-word list mapping function to obtain a complete output text.
6. The automated human-computer interaction method based on semantic knowledge enhancement as claimed in claim 1, wherein: and training and tuning the automatic text rewriting method by adopting a trainer device.
7. The automated human-computer interaction method based on semantic knowledge enhancement as claimed in claim 1, wherein: the system comprises an encoding module, a decoding module, a semantic knowledge alignment module and an output mapping module;
the encoding module encodes an original input text sequence into a context semantic vector sequence;
the decoding module decodes a text vector sequence to be revised according to the context semantic vector sequence;
the semantic knowledge alignment module vectorizes externally provided semantic information to obtain an external semantic vector;
and the output mapping module fuses the text vector sequence to be revised with the external semantic vector and produces the output text by gradual revision.
8. The automated human-computer interaction method based on semantic knowledge enhancement according to claim 7, characterized in that: the whole automatic text rewriting device is trained and tuned by a trainer device, which uses a gradient descent algorithm in a "local-global" multi-task training mode to obtain the optimal automatic text rewriting device through iterative training.
9. The automated human-computer interaction method based on semantic knowledge enhancement according to claim 8, characterized in that: the trainer device first performs local training of the encoding module, the goal being synonym-identification judgment for each input word, i.e. judging whether each input word has a corresponding synonym in the word list; when that training finishes, global training begins, during which local training continues while the whole automatic text rewriting device is trained as a whole, the goal being to guide the text output by the automatic text rewriting device toward the given reference text.
10. A computer-readable storage medium characterized by: the computer-readable storage medium has stored therein program instructions which, when executed by a processor of a computer, cause the processor to carry out the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110567502.2A CN113239166B (en) | 2021-05-24 | 2021-05-24 | Automatic man-machine interaction method based on semantic knowledge enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113239166A true CN113239166A (en) | 2021-08-10 |
CN113239166B CN113239166B (en) | 2023-06-06 |
Family
ID=77138385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110567502.2A Active CN113239166B (en) | 2021-05-24 | 2021-05-24 | Automatic man-machine interaction method based on semantic knowledge enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113239166B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN107578106A (en) * | 2017-09-18 | 2018-01-12 | University of Science and Technology of China | A neural-network natural language inference method fusing word semantic knowledge
CN109033068A (en) * | 2018-06-14 | 2018-12-18 | Beijing Huiwen Technology Development Co., Ltd. | Attention-mechanism-based reading comprehension method, apparatus and electronic device
CN109933773A (en) * | 2017-12-15 | 2019-06-25 | Shanghai Qingyu Information Technology Co., Ltd. | A multi-semantic sentence analysis system and method
CN112084314A (en) * | 2020-08-20 | 2020-12-15 | University of Electronic Science and Technology of China | A generative dialogue system incorporating knowledge
CN112685538A (en) * | 2020-12-30 | 2021-04-20 | Beijing Institute of Technology | Text vector retrieval method combining external knowledge
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114580354A (en) * | 2022-05-05 | 2022-06-03 | 阿里巴巴达摩院(杭州)科技有限公司 | Synonym-based information encoding method, device, equipment and storage medium |
CN114580354B (en) * | 2022-05-05 | 2022-10-28 | 阿里巴巴达摩院(杭州)科技有限公司 | Information coding method, device, equipment and storage medium based on synonym |
Similar Documents
Publication | Title
---|---
CN111199727B (en) | Speech recognition model training method, system, mobile terminal and storage medium
CN108415977A (en) | A generative machine reading comprehension method based on deep neural networks and reinforcement learning
CN109492227A (en) | A machine reading comprehension method based on multi-head attention and dynamic iteration
CN112989796B (en) | Text named entity recognition method based on syntactic guidance
CN110442880B (en) | Translation method, device and storage medium for machine translation
CN117236337B (en) | Natural language generation method for historical knowledge graph completion based on hybrid prompt learning
CN104462072A (en) | Input method and device for computer-assisted translation
CN114489669A (en) | Python code fragment generation method based on graph learning
CN115599901A (en) | Machine question-answering method, device, equipment and storage medium based on semantic prompts
CN112784603A (en) | Patent efficacy phrase identification method
CN116561251A (en) | Natural language processing method
CN113326367A (en) | Task-oriented dialogue method and system based on end-to-end text generation
CN116881457A (en) | Few-shot text classification method based on knowledge-contrast-enhanced prompts
CN116320607A (en) | Intelligent video generation method, device, equipment and medium
CN113239166B (en) | Automatic man-machine interaction method based on semantic knowledge enhancement
CN117972049A (en) | Medical device declaration material generation method and system based on a large language model
CN112287641B (en) | Synonym sentence generation method, system, terminal and storage medium
CN113065324A (en) | Text generation method and device based on structured triples and anchor templates
CN118014077A (en) | Multi-modal chain-of-thought reasoning method and device based on knowledge distillation
CN114861627B (en) | Automatic generation method and device for multiple-choice distractors based on deep learning
CN116028888A (en) | Automatic solving method for plane geometry problems
CN114358021B (en) | Task-oriented dialogue reply generation method based on deep learning, and storage medium
CN115374784A (en) | Chinese named entity recognition method based on selective multi-modal information fusion
CN112163414B (en) | Chinese lyric generation method based on Word2Vec, LSTM and attention mechanism
CN114417880A (en) | Interactive intelligent question-answering method based on a power grid training question-answering knowledge base
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||