CN111639165A - Intelligent question-answer optimization method based on natural language processing and deep learning - Google Patents

Intelligent question-answer optimization method based on natural language processing and deep learning

Info

Publication number
CN111639165A
CN111639165A
Authority
CN
China
Prior art keywords
similarity
word
calculating
semantic
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010364914.1A
Other languages
Chinese (zh)
Inventor
陈立
徐雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010364914.1A
Publication of CN111639165A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an intelligent question-answer optimization method based on natural language processing and deep learning that achieves high accuracy. The method comprises the following steps: (10) Chinese word segmentation and word vector conversion: capturing semantic information of a specific target field and converting words in natural language into N-dimensional matrix vectors that a computer can process; (20) natural language processing: assigning weights to the relations between ontology semantic concepts, calculating semantic distance from those weights, and describing semantic similarity based on the semantic distance between concepts; (30) language disambiguation: eliminating language ambiguity through threshold comparison.

Description

Intelligent question-answer optimization method based on natural language processing and deep learning
Technical Field
The invention belongs to the field of computer artificial intelligence, and in particular relates to an intelligent question-answer optimization method based on natural language processing and deep learning.
Background
Intelligent question answering is an information retrieval system that accepts a question described by a user in natural-language form and, using technologies such as data processing and query expansion, searches a large number of heterogeneous data sources for accurate answers to the question. Unlike a traditional search engine, which is based on keyword queries and returns a set of document links, an intelligent question-answering system draws on knowledge representation, IR (Information Retrieval), NLP (Natural Language Processing), and related fields; it can effectively help users extract useful information from massive information resources, offers a more intelligent human-computer interaction experience, and has become a new international research hotspot.
Current methods for implementing intelligent question answering generally comprise the following steps: first, for a natural-language question input by the user, the question sentence is analyzed, its syntactic form is extracted, and the keywords and main sentence meaning are captured through a word segmentation model; second, a document set is retrieved from a document library by information retrieval technology using the extracted keywords and sentence meaning; third, matching feature values are extracted from the retrieved document set and the answer set is ranked by those feature values; finally, the answer set is scored and sorted, and the best answer is extracted and returned to the user.
However, current implementations of intelligent question answering are usually based on logical combinations of keywords; their indexing and matching algorithms cannot reach the underlying semantic information, which makes search results unrelated to the semantics of the question-answering content and introduces information redundancy, affecting the accuracy of the question-answering feedback to a certain extent.
Disclosure of Invention
The invention aims to provide an intelligent question-answer optimization method, based on natural language processing and deep learning, that achieves high accuracy.
The technical solution for realizing the purpose of the invention is as follows:
An intelligent question-answer optimization method based on natural language processing and deep learning comprises the following steps:
(10) Chinese word segmentation and word vector conversion: capturing semantic information of a specific target field and converting words in natural language into N-dimensional matrix vectors that a computer can process;
(20) Natural language processing: assigning weights to the relations between ontology semantic concepts, calculating semantic distance from those weights, and describing semantic similarity based on the semantic distance between concepts;
(30) Language disambiguation: eliminating language ambiguity through threshold comparison.
Compared with the prior art, the invention has the following remarkable advantages:
the intelligent question answering accuracy is high: the invention optimizes the semantic matching algorithm and the feedback screening process in the intelligent question-answering step, realizes the vectorization of the question-answering semantics through word vector processing, and performs similarity calculation by utilizing semantic vectors, thereby greatly simplifying the matching problem of the semantics and the document library and simultaneously reducing the information redundancy. After the question-answer feedback is obtained, the accuracy of intelligent question-answer is improved through threshold comparison.
The invention is described in further detail below with reference to the figures and the detailed description.
Drawings
Fig. 1 is a main flow chart of the intelligent question-answering optimization method based on natural language processing and deep learning.
Fig. 2 is a word vector training flow diagram.
Fig. 3 is a flowchart of the natural language processing steps of fig. 1.
FIG. 4 is a semantic similarity hierarchy model.
FIG. 5 is a flow chart of a specific implementation of semantic similarity calculation.
Detailed description of the preferred embodiments
The invention provides an intelligent question-answer optimization method based on natural language processing and deep learning. In outline, it comprises a word segmentation module, a word vector module, a similarity calculation module, and a threshold comparison module. A FastText model is constructed in the word vector module. For similarity calculation, a multi-feature-fusion method for sentence semantic similarity is provided: the analytic hierarchy process and the semantic distance are applied to the calculation of structural similarity and semantic similarity respectively. The analytic hierarchy process computes the weights corresponding to the word-form, word-order, and sentence-length feature values of a sentence, while a shortest-path algorithm computes the semantic distance in a weighted ontology graph, from which the semantic similarity is described. The structural similarity and the semantic similarity are then weighted and fused to obtain the sentence similarity.
As shown in fig. 1, the intelligent question-answer optimization method based on natural language processing and deep learning of the present invention includes the following steps:
(10) Chinese word segmentation and word vector conversion: capturing semantic information of a specific target field and converting words in natural language into N-dimensional matrix vectors that a computer can process;
The (10) Chinese word segmentation and word vector conversion step comprises:
(11) Chinese word segmentation: Chinese word segmentation is realized with the algorithm tool HanLP;
the following contents are added to the pom of the project catalog by the word construction and the dictionary segmentation through the HMM-Bigram of the HanLP mode
[Dependency listing published as images in the original; not reproduced here.]
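For reference, a typical HanLP dependency of the kind the text describes is sketched below; com.hankcs:hanlp is HanLP's published Maven coordinate, but the version shown is illustrative and not taken from the patent:

    <dependency>
        <groupId>com.hankcs</groupId>
        <artifactId>hanlp</artifactId>
        <!-- illustrative version; the patent's original listing is not recoverable -->
        <version>portable-1.7.8</version>
    </dependency>

With this dependency on the classpath, HanLP.segment(text) returns the segmented term list used in step (11).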
Fig. 2 shows a word vector training flowchart.
(12) Word vector conversion: a word vector model trained on a Huffman tree converts words in natural language into N-dimensional matrix vectors that a computer can process.
To convert words in natural language into matrix vectors that a computer can process, a training word vector model based on the Huffman tree is created; in the hidden-layer mapping of the shallow neural network, the input vectors are summed and averaged rather than linearly transformed. A Huffman tree is created, with a walk into the left subtree designated the negative class and a walk into the right subtree the positive class. The probability of taking the positive class is
P(+) = σ(x_w^T θ) = 1 / (1 + e^(-x_w^T θ)),
and the probability of taking the negative class is
P(-) = 1 - P(+),
where w1, w2, w3, …, wi denote the word vectors of the internal nodes of the Huffman tree and θ denotes the model parameters trained from the training data set. The likelihood function over the Huffman path of a word w is
L = ∏_{j=2}^{l_w} [σ(x_w^T θ_{j-1})]^(1-d_j) × [1 - σ(x_w^T θ_{j-1})]^(d_j), where l_w is the length of the Huffman path of w and d_j ∈ {0, 1} is its code at the j-th node.
If the window size of the input layer is 2c, the word vectors of the c words before and the c words after the current word are input. For the mapping from the input layer to the hidden layer, the 2c word vectors are averaged, namely
x_w = (1 / 2c) Σ_{i=1}^{2c} x_i.
The word vectors and the model parameters are then updated by gradient-ascent training of argmax_θ ∏_{c∈C(w)} P(w | c; θ), where C(w) denotes the context of the current word w and θ denotes the model parameters to be trained.
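A minimal sketch of this training update follows: the 2c context vectors are averaged into x_w, and a gradient-ascent step is taken at each internal node on the target word's Huffman path. This is the standard CBOW-with-hierarchical-softmax update rather than code from the patent; the array layout, the learning rate, and the code convention (0 = positive class) are assumptions:

    final class HuffmanSoftmaxTrainer {
        final float[][] wordVec;   // input word vectors, one row per vocabulary word
        final float[][] nodeTheta; // parameters theta, one row per internal Huffman node
        final int dim;
        final float lr = 0.025f;   // learning rate (illustrative)

        HuffmanSoftmaxTrainer(int vocabSize, int innerNodes, int dim) {
            this.dim = dim;
            this.wordVec = new float[vocabSize][dim];
            this.nodeTheta = new float[innerNodes][dim];
        }

        static float sigmoid(float x) { return (float) (1.0 / (1.0 + Math.exp(-x))); }

        // context: indices of the 2c context words of the current word w
        // path:    internal-node indices on the Huffman path of w
        // code:    Huffman code of w at each path node (0 = positive class, 1 = negative class)
        void train(int[] context, int[] path, int[] code) {
            // x_w = (1/2c) * sum of the 2c context word vectors (input-to-hidden mapping)
            float[] xw = new float[dim];
            for (int w : context)
                for (int i = 0; i < dim; i++) xw[i] += wordVec[w][i] / context.length;

            float[] grad = new float[dim]; // accumulated gradient with respect to x_w
            for (int j = 0; j < path.length; j++) {
                float[] theta = nodeTheta[path[j]];
                float dot = 0f;
                for (int i = 0; i < dim; i++) dot += xw[i] * theta[i];
                // P(+) = sigmoid(x_w . theta); step size g = lr * (label - P(+))
                float g = lr * ((1 - code[j]) - sigmoid(dot));
                for (int i = 0; i < dim; i++) {
                    grad[i]  += g * theta[i]; // gradient flowing back to x_w
                    theta[i] += g * xw[i];    // gradient-ascent update of theta
                }
            }
            // distribute the accumulated gradient to every context word vector
            for (int w : context)
                for (int i = 0; i < dim; i++) wordVec[w][i] += grad[i];
        }
    }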
(20) Natural language processing: assigning weights to the relations between ontology semantic concepts, calculating semantic distance from those weights, and describing semantic similarity based on the semantic distance between concepts;
as shown in fig. 3, the (20) natural language processing step includes:
(21) Calculating word-form similarity: the word-form similarity is calculated according to the following formula,
MorSim(S1, S2) = 2 × Com(S1, S2) / (Len(S1) + Len(S2)),
where Com(S1, S2) is computed from the word segmentation results of sentences S1 and S2: if a feature item appears in both S1 and S2, the smaller of its occurrence counts is taken as its contribution to Com(S1, S2); Len() denotes the length of a sentence;
(22) Calculating word-order similarity: let Once(S1, S2) be the set of words appearing exactly once in each of S1 and S2, and let s be the number of words in Once(S1, S2). The word-order similarity, which reflects the relative positions of the keywords in the two sentences, is calculated as
OrdSim(S1, S2) = 1 - AIN(S1, S2, s) / (s - 1),
where AIN(S1, S2, s) denotes the number of inversions of the words of Once(S1, S2) as they appear in sentence S2, relative to their order in S1.
(23) Calculating sentence-length similarity: the sentence-length similarity LenSim(S1, S2) is calculated as follows:
LenSim(S1, S2) = 1 - |Len(S1) - Len(S2)| / (Len(S1) + Len(S2)),
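These three structural measures can be computed directly on segmented token lists, as in the sketch below. It assumes the formulas as reconstructed above; the class name, the plain java.util lists standing in for HanLP's segmentation output, and the naive inversion count are illustrative choices:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class StructuralSimilarity {

        // MorSim(S1, S2) = 2 * Com(S1, S2) / (Len(S1) + Len(S2))
        static double morSim(List<String> s1, List<String> s2) {
            Map<String, Integer> c1 = counts(s1), c2 = counts(s2);
            int com = 0;
            for (Map.Entry<String, Integer> e : c1.entrySet())
                com += Math.min(e.getValue(), c2.getOrDefault(e.getKey(), 0));
            return 2.0 * com / (s1.size() + s2.size());
        }

        // OrdSim(S1, S2) = 1 - AIN(S1, S2, s) / (s - 1), over Once(S1, S2)
        static double ordSim(List<String> s1, List<String> s2) {
            List<String> once = new ArrayList<>();
            for (String w : s1)
                if (Collections.frequency(s1, w) == 1 && Collections.frequency(s2, w) == 1)
                    once.add(w); // words appearing exactly once in each sentence, in S1 order
            int s = once.size();
            if (s <= 1) return s; // edge-case convention: no shared word -> 0, one -> 1
            int ain = 0; // AIN: inversion count of the Once words as ordered in S2
            for (int i = 0; i < s; i++)
                for (int j = i + 1; j < s; j++)
                    if (s2.indexOf(once.get(i)) > s2.indexOf(once.get(j))) ain++;
            return 1.0 - (double) ain / (s - 1);
        }

        // LenSim(S1, S2) = 1 - |Len(S1) - Len(S2)| / (Len(S1) + Len(S2))
        static double lenSim(List<String> s1, List<String> s2) {
            return 1.0 - (double) Math.abs(s1.size() - s2.size()) / (s1.size() + s2.size());
        }

        private static Map<String, Integer> counts(List<String> s) {
            Map<String, Integer> m = new HashMap<>();
            for (String w : s) m.merge(w, 1, Integer::sum);
            return m;
        }
    }

StrSim(S1, S2) of step (241) then weights the three values by the α, β, γ derived in step (24).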
(24) Calculating the weights of word form, word order, and sentence length: the judgment matrix at each level is constructed as follows
A = [1, 1, 5; 1, 1, 5; 1/5, 1/5, 1],
and the eigenvector corresponding to the judgment matrix is calculated as the similarity weights.
As shown in fig. 4, the (24) weight calculation step includes:
(241) Building a structural model: the sentence structural similarity integrates word-form similarity, word-order similarity, and sentence-length similarity:
StrSim(S1, S2) = α × MorSim(S1, S2) + β × OrdSim(S1, S2) + γ × LenSim(S1, S2),
where α, β, and γ are the weights of word-form, word-order, and sentence-length similarity respectively and satisfy α + β + γ = 1. Each weight coefficient in the structural similarity is calculated by AHP, and a hierarchical structure model is established.
(242) Calculating the similarity weights: the judgment matrix constructed at each level is
A = [1, 1, 5; 1, 1, 5; 1/5, 1/5, 1].
The maximum eigenvalue of the matrix is calculated to be λmax = 3, and the corresponding eigenvector is P = [5, 5, 1]^T. The consistency index is defined as
CI = (λmax - n) / (n - 1),
where λmax is the maximum eigenvalue and n is the dimension of the eigenvector. From matrix A, CI = 0, and the average random consistency index is found to be RI = 0.52. The consistency ratio is then calculated:
CR = CI / RI.
This gives CR = 0. According to the check criterion, when CR < 0.1 the consistency of the judgment matrix is acceptable, so the eigenvector corresponding to judgment matrix A can be used as the similarity weights.
(243) Calculating the weights: normalizing the above eigenvector P = [5, 5, 1]^T yields the weight vector W = [0.455, 0.455, 0.09], i.e., α = 0.455, β = 0.455, γ = 0.09.
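The AHP numbers above are easy to verify mechanically. The following sketch recomputes λmax, CI, CR, and the normalized weights; the judgment matrix (as reconstructed above) and RI = 0.52 are taken from the text, while everything else is illustrative:

    final class AhpWeights {
        public static void main(String[] args) {
            double[][] a = { {1, 1, 5}, {1, 1, 5}, {0.2, 0.2, 1} }; // reconstructed judgment matrix
            double[] p = {5, 5, 1};                                  // its principal eigenvector
            int n = 3;

            // lambda_max: since A p = lambda p, average the component-wise ratios (A p)_i / p_i
            double lambdaMax = 0;
            for (int i = 0; i < n; i++) {
                double ap = 0;
                for (int j = 0; j < n; j++) ap += a[i][j] * p[j];
                lambdaMax += ap / p[i] / n;
            }
            double ci = (lambdaMax - n) / (n - 1); // consistency index CI
            double ri = 0.52;                      // average random consistency index (per the text)
            double cr = ci / ri;                   // consistency ratio; acceptable when CR < 0.1

            double sum = p[0] + p[1] + p[2];
            System.out.printf("lambdaMax=%.3f CI=%.3f CR=%.3f%n", lambdaMax, ci, cr);
            System.out.printf("alpha=%.3f beta=%.3f gamma=%.3f%n", p[0] / sum, p[1] / sum, p[2] / sum);
            // prints lambdaMax=3.000 CI=0.000 CR=0.000 and weights 0.455, 0.455, 0.091
        }
    }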
As shown in fig. 5, the (25) semantic similarity calculation is performed as follows:
(25) Calculating semantic similarity: the semantic similarity of concept C1 and concept C2 is calculated with the following formula
SemSim(C1, C2) = 1 / (1 + SemDis(C1, C2)),
where shortestPathi denotes the set of nodes on the shortest path between concept node Ci and the root node, and SemDis(C1, C2) denotes the semantic distance between concepts C1 and C2.
The step (25) of calculating semantic similarity comprises:
(251) Establishing a semantic similarity model: the weight of the relation between each pair of directly connected concept nodes is given by a formula
[Formula published as an image: the edge weight w(m, n) expressed in terms of depth(m), N(G), and Order(n).]
In this formula, m and n are two directly connected concept nodes in the ontology, depth(m) denotes the maximum depth of node m, N(G) denotes the total number of concept nodes, and Order(n) denotes the ordinal position of concept node n among its sibling concept nodes; depth(m) is defined as follows:
depth(m) = 1 if m is the root node, and depth(m) = depth(parent(m)) + 1 otherwise.
(252) Weight initialization and recursive computation: the ontology graph is treated as a weighted directed graph, and the shortest path from a concept node to the root node is computed following the idea of a shortest-path algorithm. The formula
W_0(n) = 0 if n is the initial node, and W_0(n) = ∞ otherwise,
initializes the weights of the concept-node relations, assigning 0 to the initial node and infinity to every other node; the recursion then uses the formula
W_k(m, n) = min_{x ∈ S} { W_{k-1}(m, x) + w(x, n) },
where m, n, and x denote three nodes, x and n being directly connected; S is the set of all nodes in the ontology, and W_k(m, n) denotes the weight of the path (m, n) at iteration k.
(253) Calculating the semantic distance: in the semantic distance calculation, the common part of the shortest paths is removed, and the semantic distance between the two concepts is calculated as
SemDis(C1, C2) = W_shortestPath1 + W_shortestPath2 - 2 × W_comShortestPath,
where C1 and C2 denote two concept nodes in the ontology graph, shortestPathi denotes the shortest path between concept node Ci and the root node, and comShortestPath denotes the shortest path from the first common node between C1 and C2 to the root node.
(254) Calculating the shortest-path weight: the calculation is as follows:
W_shortestPath = Σ_{(m, n) ∈ k} w(m, n),
where m and n denote two directly connected nodes on shortestPath, and k is the set of node relations along the path.
(255) Calculating semantic similarity: the semantic similarity of concept C1 and concept C2 is defined by the formula
SemSim(C1, C2) = 1 / (1 + SemDis(C1, C2)),
where shortestPathi denotes the set of nodes on the shortest path between concept node Ci and the root node, and SemDis(C1, C2) denotes the semantic distance between concepts C1 and C2.
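Steps (252) through (254) amount to a single-source shortest-path computation over the weighted ontology graph, and (255) maps the resulting distance to a similarity. The sketch below uses Dijkstra's algorithm as one concrete realization of the "shortest path algorithm idea"; the adjacency representation, the names, the inverse-distance form of SemSim, and supplying the first common node as an argument are assumptions of the example:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.PriorityQueue;

    final class OntologyDistance {
        // adjacency list: node -> (neighbour -> edge weight w(m, n))
        private final Map<String, Map<String, Double>> adj = new HashMap<>();

        void addEdge(String m, String n, double w) {
            adj.computeIfAbsent(m, k -> new HashMap<>()).put(n, w);
            adj.computeIfAbsent(n, k -> new HashMap<>()).put(m, w);
        }

        // Dijkstra: shortest-path weight from the root to every concept node
        // (the initial node is assigned 0, every other node infinity).
        Map<String, Double> shortestFromRoot(String root) {
            Map<String, Double> dist = new HashMap<>();
            for (String v : adj.keySet()) dist.put(v, Double.POSITIVE_INFINITY);
            dist.put(root, 0.0);
            PriorityQueue<Map.Entry<String, Double>> pq =
                new PriorityQueue<>(Map.Entry.comparingByValue());
            pq.add(Map.entry(root, 0.0));
            while (!pq.isEmpty()) {
                Map.Entry<String, Double> cur = pq.poll();
                String m = cur.getKey();
                if (cur.getValue() > dist.get(m)) continue; // stale queue entry
                for (Map.Entry<String, Double> e : adj.getOrDefault(m, Map.of()).entrySet()) {
                    double cand = dist.get(m) + e.getValue(); // W(root, m) + w(m, n)
                    if (cand < dist.get(e.getKey())) {
                        dist.put(e.getKey(), cand);
                        pq.add(Map.entry(e.getKey(), cand));
                    }
                }
            }
            return dist;
        }

        // SemDis(C1, C2) = W_shortestPath1 + W_shortestPath2 - 2 * W_comShortestPath.
        // Locating the first common node of the two paths is omitted from this sketch.
        double semDis(String root, String c1, String c2, String firstCommonNode) {
            Map<String, Double> d = shortestFromRoot(root);
            return d.get(c1) + d.get(c2) - 2 * d.get(firstCommonNode);
        }

        // SemSim in the inverse-distance form reconstructed above (an assumption).
        double semSim(String root, String c1, String c2, String firstCommonNode) {
            return 1.0 / (1.0 + semDis(root, c1, c2, firstCommonNode));
        }
    }

After addEdge calls that load the weighted ontology, semSim(root, c1, c2, firstCommonNode) returns the similarity used in step (25).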
(30) Language disambiguation: eliminating language ambiguity through threshold comparison.
The (30) language disambiguation step comprises:
(31) Question-answer preprocessing: preload the knowledge base, select 100 questions from it, feed the prepared questions into the system in sequence, and obtain the threshold value according to the acquisition rules in turn.
(32) Threshold comparison: once the threshold is obtained, threshold comparison determines whether an answer to the question is returned: if the similarity is greater than the threshold, the answer is returned as output; if the similarity is less than the threshold, an empty answer is returned, indicating that no matching answer was found.
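In code, step (32) reduces to a guard around the similarity score; a minimal sketch with illustrative names (the patent does not specify an interface):

    final class ThresholdFilter {
        // Returns the candidate answer when its similarity clears the threshold,
        // otherwise null, signalling that no matching answer was found.
        static String filter(String candidateAnswer, double similarity, double threshold) {
            return similarity > threshold ? candidateAnswer : null;
        }
    }

Here the null return corresponds to the empty answer described above; a caller would render it as a "no matching answer" response.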
Aiming at the information redundancy and answer inaccuracy of traditional intelligent question answering, the invention realizes an intelligent question-answering method through word vector conversion and an improved multi-semantic-similarity comparison algorithm, improving the accuracy and precision of intelligent question answering.

Claims (6)

1. An intelligent question-answer optimization method based on natural language processing and deep learning is characterized by comprising the following steps:
(10) Chinese word segmentation and word vector conversion: capturing semantic information of a specific target field and converting words in natural language into N-dimensional matrix vectors that a computer can process;
(20) Natural language processing: assigning weights to the relations between ontology semantic concepts, calculating semantic distance from those weights, and describing semantic similarity based on the semantic distance between concepts;
(30) Language disambiguation: eliminating language ambiguity through threshold comparison.
2. The intelligent question-answer optimization method according to claim 1, wherein the (10) Chinese word segmentation and word vector conversion step comprises:
(11) Chinese word segmentation: Chinese word segmentation is realized with the algorithm tool HanLP;
(12) Word vector conversion: a word vector model trained on a Huffman tree converts words in natural language into N-dimensional matrix vectors that a computer can process.
3. The intelligent question-answer optimization method according to claim 1, characterized in that said (20) natural language processing step comprises:
(21) Calculating word-form similarity: the word-form similarity is calculated according to the following formula,
MorSim(S1, S2) = 2 × Com(S1, S2) / (Len(S1) + Len(S2)),
where Com(S1, S2) is computed from the word segmentation results of sentences S1 and S2: if a feature item appears in both S1 and S2, the smaller of its occurrence counts is taken as its contribution to Com(S1, S2); Len() denotes the length of a sentence;
(22) Calculating word-order similarity: let Once(S1, S2) be the set of words appearing exactly once in each of S1 and S2, and let s be the number of words in Once(S1, S2). The word-order similarity, which reflects the relative positions of the keywords in the two sentences, is calculated as
OrdSim(S1, S2) = 1 - AIN(S1, S2, s) / (s - 1),
where AIN(S1, S2, s) denotes the number of inversions of the words of Once(S1, S2) as they appear in sentence S2, relative to their order in S1.
(23) Calculating sentence-length similarity: the sentence-length similarity LenSim(S1, S2) is calculated as follows:
LenSim(S1, S2) = 1 - |Len(S1) - Len(S2)| / (Len(S1) + Len(S2)),
(24) Calculating the weights of word form, word order, and sentence length: the judgment matrix at each level is constructed as follows
A = [1, 1, 5; 1, 1, 5; 1/5, 1/5, 1],
and the eigenvector corresponding to the judgment matrix is calculated as the similarity weights.
(25) Calculating semantic similarity: the semantic similarity of concept C1 and concept C2 is calculated with the following formula
SemSim(C1, C2) = 1 / (1 + SemDis(C1, C2)),
where shortestPathi denotes the set of nodes on the shortest path between concept node Ci and the root node, and SemDis(C1, C2) denotes the semantic distance between concepts C1 and C2.
4. The intelligent question-answer optimization method according to claim 3, characterized in that said (24) weight calculation step comprises:
(241) Building a structural model: the sentence structural similarity integrates word-form similarity, word-order similarity, and sentence-length similarity:
StrSim(S1, S2) = α × MorSim(S1, S2) + β × OrdSim(S1, S2) + γ × LenSim(S1, S2),
where α, β, and γ are the weights of word-form, word-order, and sentence-length similarity respectively and satisfy α + β + γ = 1. Each weight coefficient in the structural similarity is calculated by AHP, and a hierarchical structure model is established.
(242) Calculating the similarity weights: the judgment matrix constructed at each level is
A = [1, 1, 5; 1, 1, 5; 1/5, 1/5, 1].
The maximum eigenvalue of the matrix is calculated to be λmax = 3, and the corresponding eigenvector is P = [5, 5, 1]^T. The consistency index is defined as
CI = (λmax - n) / (n - 1),
where λmax is the maximum eigenvalue and n is the dimension of the eigenvector. From matrix A, CI = 0, and the average random consistency index is found to be RI = 0.52. The consistency ratio is then calculated:
CR = CI / RI.
This gives CR = 0; according to the check criterion, when CR < 0.1 the consistency of the matrix is acceptable, so the eigenvector corresponding to judgment matrix A can be used as the similarity weights.
(243) Calculating the weights: normalizing the above eigenvector P = [5, 5, 1]^T yields the weight vector W = [0.455, 0.455, 0.09], i.e., α = 0.455, β = 0.455, γ = 0.09.
5. The intelligent question-answer optimization method according to claim 4, characterized in that said step (25) of calculating semantic similarity comprises:
(251) Establishing a semantic similarity model: the weight of the relation between each pair of directly connected concept nodes is given by a formula
[Formula published as an image: the edge weight w(m, n) expressed in terms of depth(m), N(G), and Order(n).]
In this formula, m and n are two directly connected concept nodes in the ontology, depth(m) denotes the maximum depth of node m, N(G) denotes the total number of concept nodes, and Order(n) denotes the ordinal position of concept node n among its sibling concept nodes; depth(m) is defined as follows:
depth(m) = 1 if m is the root node, and depth(m) = depth(parent(m)) + 1 otherwise.
(252) Weight initialization and recursive computation: the ontology graph is treated as a weighted directed graph, and the shortest path from a concept node to the root node is computed following the idea of a shortest-path algorithm. The formula
W_0(n) = 0 if n is the initial node, and W_0(n) = ∞ otherwise,
initializes the weights of the concept-node relations, assigning 0 to the initial node and infinity to every other node; the recursion then uses the formula
W_k(m, n) = min_{x ∈ S} { W_{k-1}(m, x) + w(x, n) },
where m, n, and x denote three nodes, x and n being directly connected; S is the set of all nodes in the ontology, and W_k(m, n) denotes the weight of the path (m, n) at iteration k.
(253) Calculating the semantic distance: in the semantic distance calculation, the common part of the shortest paths is removed, and the semantic distance between the two concepts is calculated as
SemDis(C1, C2) = W_shortestPath1 + W_shortestPath2 - 2 × W_comShortestPath,
where C1 and C2 denote two concept nodes in the ontology graph, shortestPathi denotes the shortest path between concept node Ci and the root node, and comShortestPath denotes the shortest path from the first common node between C1 and C2 to the root node.
(254) Calculating the shortest-path weight: the calculation is as follows:
W_shortestPath = Σ_{(m, n) ∈ k} w(m, n),
where m and n denote two directly connected nodes on shortestPath, and k is the set of node relations along the path.
(255) Calculating semantic similarity: the semantic similarity of concept C1 and concept C2 is defined by the formula
SemSim(C1, C2) = 1 / (1 + SemDis(C1, C2)),
where shortestPathi denotes the set of nodes on the shortest path between concept node Ci and the root node, and SemDis(C1, C2) denotes the semantic distance between concepts C1 and C2.
6. The intelligent question-answer optimization method according to claim 5, characterized in that said (30) language disambiguation step comprises:
(31) Question-answer preprocessing: preload the knowledge base, select 100 questions from it, feed the prepared questions into the system in sequence, and obtain the threshold value according to the acquisition rules in turn.
(32) Threshold comparison: once the threshold is obtained, threshold comparison determines whether an answer to the question is returned: if the similarity is greater than the threshold, the answer is returned as output; if the similarity is less than the threshold, an empty answer is returned, indicating that no matching answer was found.
CN202010364914.1A 2020-04-30 2020-04-30 Intelligent question-answer optimization method based on natural language processing and deep learning Withdrawn CN111639165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010364914.1A CN111639165A (en) 2020-04-30 2020-04-30 Intelligent question-answer optimization method based on natural language processing and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010364914.1A CN111639165A (en) 2020-04-30 2020-04-30 Intelligent question-answer optimization method based on natural language processing and deep learning

Publications (1)

Publication Number Publication Date
CN111639165A true CN111639165A (en) 2020-09-08

Family

ID=72329013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010364914.1A Withdrawn CN111639165A (en) 2020-04-30 2020-04-30 Intelligent question-answer optimization method based on natural language processing and deep learning

Country Status (1)

Country Link
CN (1) CN111639165A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308387A (en) * 2020-10-20 2021-02-02 深圳思为科技有限公司 Client intention degree evaluation method and device and cloud server
CN113255345A (en) * 2021-06-10 2021-08-13 腾讯科技(深圳)有限公司 Semantic recognition method, related device and equipment
CN113722452A (en) * 2021-07-16 2021-11-30 上海通办信息服务有限公司 Semantic-based quick knowledge hit method and device in question-answering system
CN113742458A (en) * 2021-09-18 2021-12-03 苏州大学 Natural language instruction disambiguation method and system for mechanical arm grabbing
CN115828930A (en) * 2023-01-06 2023-03-21 山东建筑大学 Distributed word vector space correction method for dynamically fusing semantic relations

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李兆兆 (Li Zhaozhao): "Research on Key Technologies of an Intelligent Question-Answering System Based on Semantic Understanding" (基于语义理解的智能问答系统关键技术研究), China Excellent Doctoral and Master's Theses Full-text Database (Master), Information Science and Technology series *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308387A (en) * 2020-10-20 2021-02-02 深圳思为科技有限公司 Client intention degree evaluation method and device and cloud server
CN112308387B (en) * 2020-10-20 2024-05-14 深圳思为科技有限公司 Customer intention evaluation method and device and cloud server
CN113255345A (en) * 2021-06-10 2021-08-13 腾讯科技(深圳)有限公司 Semantic recognition method, related device and equipment
CN113255345B (en) * 2021-06-10 2021-10-15 腾讯科技(深圳)有限公司 Semantic recognition method, related device and equipment
CN113722452A (en) * 2021-07-16 2021-11-30 上海通办信息服务有限公司 Semantic-based quick knowledge hit method and device in question-answering system
CN113722452B (en) * 2021-07-16 2024-01-19 上海通办信息服务有限公司 Semantic-based rapid knowledge hit method and device in question-answering system
CN113742458A (en) * 2021-09-18 2021-12-03 苏州大学 Natural language instruction disambiguation method and system for mechanical arm grabbing
CN113742458B (en) * 2021-09-18 2023-04-25 苏州大学 Natural language instruction disambiguation method and system oriented to mechanical arm grabbing
CN115828930A (en) * 2023-01-06 2023-03-21 山东建筑大学 Distributed word vector space correction method for dynamically fusing semantic relations

Similar Documents

Publication Publication Date Title
CN109271505B (en) Question-answering system implementation method based on question-answer pairs
CN109408627B (en) Question-answering method and system fusing convolutional neural network and cyclic neural network
CN108804521B (en) Knowledge graph-based question-answering method and agricultural encyclopedia question-answering system
CN111639165A (en) Intelligent question-answer optimization method based on natural language processing and deep learning
CN110309268B (en) Cross-language information retrieval method based on concept graph
CN111522910B (en) Intelligent semantic retrieval method based on cultural relic knowledge graph
CN111737496A (en) Power equipment fault knowledge map construction method
CN113239700A (en) Text semantic matching device, system, method and storage medium for improving BERT
CN110674252A (en) High-precision semantic search system for judicial domain
CN106372187B (en) Cross-language retrieval method for big data
CN109783806B (en) Text matching method utilizing semantic parsing structure
CN110765755A (en) Semantic similarity feature extraction method based on double selection gates
CN112163425A (en) Text entity relation extraction method based on multi-feature information enhancement
CN113377897B (en) Multi-language medical term standard standardization system and method based on deep confrontation learning
CN112597285B (en) Man-machine interaction method and system based on knowledge graph
CN106446162A (en) Orient field self body intelligence library article search method
CN114912449B (en) Technical feature keyword extraction method and system based on code description text
Zhang et al. Hierarchical scene parsing by weakly supervised learning with image descriptions
CN112036178A (en) Distribution network entity related semantic search method
CN114332519A (en) Image description generation method based on external triple and abstract relation
CN113326267A (en) Address matching method based on inverted index and neural network algorithm
CN115422323A (en) Intelligent intelligence question-answering method based on knowledge graph
CN114004236B (en) Cross-language news event retrieval method integrating knowledge of event entity
CN114841353A (en) Quantum language model modeling system fusing syntactic information and application thereof
CN112417170B (en) Relationship linking method for incomplete knowledge graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200908