CN115952270B - Intelligent question-answering method and device for refrigerator and storage medium - Google Patents


Info

Publication number
CN115952270B
CN115952270B
Authority
CN
China
Prior art keywords
recall
question
user
similarity
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310193530.1A
Other languages
Chinese (zh)
Other versions
CN115952270A (en)
Inventor
刘昊
王晓薇
马坚
魏志强
孔令磊
曾谁飞
李桂玺
张景瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Qingdao Haier Refrigerator Co Ltd
Original Assignee
Ocean University of China
Qingdao Haier Refrigerator Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China, Qingdao Haier Refrigerator Co Ltd filed Critical Ocean University of China
Priority to CN202310193530.1A priority Critical patent/CN115952270B/en
Publication of CN115952270A publication Critical patent/CN115952270A/en
Application granted granted Critical
Publication of CN115952270B publication Critical patent/CN115952270B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an intelligent question-answering method, device and storage medium for a refrigerator, belonging to the field of intelligent question answering. The method comprises the following steps: processing existing document data and vectorizing the text; storing the data; classifying and recalling questions; and generating an answer. The invention also provides a device and a storage medium capable of running the intelligent question-answering method. Based on refrigerator specifications and nutrition-and-health knowledge question-answer pairs, the invention achieves faster and more accurate answer retrieval.

Description

Intelligent question-answering method and device for refrigerator and storage medium
Technical Field
The invention belongs to the field of intelligent question answering, and particularly relates to an intelligent question-answering method for the specific domains of refrigerator specifications and nutrition and health.
Background
With the development of artificial intelligence technology, household appliances such as refrigerators have become more intelligent, and intelligent refrigerator assistants can answer user questions based on refrigerator specifications and nutrition-and-health knowledge. However, as people pay ever more attention to dietary health and refrigerator functions grow ever richer, the corresponding specification contents and nutrition-and-health knowledge keep expanding, and this massive body of knowledge makes fast and accurate retrieval of answers to user questions a major challenge. An efficient intelligent question-answering method is therefore urgently needed to improve the speed and accuracy of answer retrieval.
Disclosure of Invention
The invention aims to provide an intelligent question-answering method for refrigerator specifications and nutritional health knowledge.
The invention is realized by the following technical solution:
An intelligent question-answering method of a refrigerator comprises the following steps:
step one, existing document data processing and text vectorization;
step two, data storage;
step three, question classification and recall;
step four, generating an answer;
the specific operation of the first step is as follows: firstly, carrying out data cleaning work on the existing question-answer pairs; secondly, expanding cleaned user question data based on a deep learning method, and establishing a corresponding relation table between questions and answers; and finally, vectorizing the expanded user problem data by adopting a deep learning related model to obtain vectorized text.
Further, the data cleaning in the first step includes removing useless punctuation, extra spaces and duplicate questions.
Further, the expansion of the cleaned user question data in the first step generates similar questions using the Simbert-base model.
Further, step one vectorizes the expanded user question data using the pre-trained model Bert.
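A minimal sketch of the cleaning stage described above; the exact rules (which punctuation counts as useless, how duplicates are detected) are not fixed by the text, so the regex and the set-based deduplication here are illustrative assumptions:

```python
import re

def clean_qa_pairs(qa_pairs):
    """Sketch of step one's cleaning: strip punctuation, collapse extra
    whitespace and drop duplicate questions from (question, answer) pairs."""
    seen = set()
    cleaned = []
    for question, answer in qa_pairs:
        # keep word characters and CJK; collapse everything else to a space
        q = re.sub(r"[^\w\u4e00-\u9fff]+", " ", question).strip()
        if q and q not in seen:          # drop empty and duplicate questions
            seen.add(q)
            cleaned.append((q, answer))
    return cleaned

pairs = [
    ("How to store   milk??", "Keep it chilled."),
    ("How to store milk?",    "Keep it chilled."),   # duplicate after cleaning
    ("   ",                   "empty question, dropped"),
]
print(clean_qa_pairs(pairs))   # [('How to store milk', 'Keep it chilled.')]
```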
The specific operation of the second step is as follows: based on the vectorized text obtained in step one, a graph-based algorithm from the search field stores the vectors in a multi-level graph structure, called the question vector graph.
Further, step two constructs the multi-level question vector graph using the HNSW algorithm.
The specific operation of the third step is as follows: based on the vectorized text obtained in step one and the multi-level question vector graph stored in step two, first, a deep-learning-based model classifies the vectorized text into a predefined category; second, based on the multi-level question vector graph, a graph-based algorithm from the search field performs the recall operation for the user question.
Further, in the third step, first, the predefined categories of the text classification are two: refrigerator specifications and nutrition-and-health knowledge; second, the vectorized text from step one is fed to a fully connected layer to change its dimension, and the resulting new vector is fed to a softmax layer for classification; finally, the category with the highest probability value is selected as the classification result.
Further, the algorithm in the third step is the HNSW search algorithm.
The specific operation of the fourth step is as follows: multiple similarity-matching methods are combined to precisely rank the recall results; the question with the highest score is selected as the match for the user question, and its answer is obtained and returned according to the question-answer correspondence table established in step one.
Further, the algorithms used for similarity matching comprise the Jaccard distance, BM25, cosine similarity and Euclidean distance.
An electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the intelligent question-answering method.
The present invention also provides a computer-readable storage medium having a computer program stored thereon, wherein the program when executed by a processor implements the intelligent question-answering method.
Compared with the prior art, the invention has the beneficial effects that:
the invention can realize intelligent question and answer based on the nutrition health knowledge and the instruction book content user question and answer pairs, and compared with the traditional search question and answer, the intelligent question and answer method provided by the invention realizes higher accuracy, stronger generalization and faster search speed. That is, the text is represented as a vector containing context semantics by using a model based on deep learning before retrieval, so that the user questions of more expression methods can be conveniently answered; and storing the questions in the form of a multi-level graph by adopting a graph-based algorithm so as to accelerate the answer retrieval speed.
Drawings
FIG. 1 is a schematic diagram of a Bert model input;
FIG. 2 is a diagram of a text vectorization model of the present invention;
FIG. 3 is a vector hierarchy diagram of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The intelligent question-answering system for refrigerator specifications and nutritional health knowledge provided by the invention comprises the following steps:
step one, existing document data processing and text vectorization
Firstly, Python is used to clean the original question-answer data, including removing useless punctuation, extra spaces and duplicate questions;
secondly, the Simbert pre-trained model is used to expand the question-answer pairs related to the specification contents and the question-answer pairs related to nutrition-and-health knowledge by a factor of 10. Simbert is a model fine-tuned from the Bert pre-trained model following Microsoft's UniLM idea, and it has the ability to generate similar sentences. In this invention, the Simbert-base version, whose model dimension is 768, generates the similar questions. Meanwhile, a correspondence table of questions and answers is established for the subsequent answer-generation step;
finally, a pre-training model Bert is used to obtain vectorized text. Since the Bert pre-training model is internally bi-directional convertors, the model can output vectors that fuse context semantics.
When using the Bert pre-trained model, let the text to be vectorized be $x$. A "[CLS]" mark is added at the head of the input text; the model input $x_{\mathrm{Bert}}$ is then:

$x_{\mathrm{Bert}} = [\mathrm{CLS}] \oplus x$

After obtaining $x_{\mathrm{Bert}}$, token processing yields $x_{\mathrm{token}}$, which is composed as follows:

$x_{\mathrm{token}} = \text{Position embedding} + \text{Segment embedding} + \text{Token embedding}$

where Position embedding encodes the position information of each word in the sentence as a vector; Segment embedding distinguishes sentences (if punctuation marks appear in the input text sequence, they serve as boundaries, and different sentences are marked with different segment numbers); and Token embedding is the word vector. As shown in Fig. 1, the three parts of $x_{\mathrm{Bert}}$ are added together to form the representation vector fed to Bert. As shown in Fig. 2, the Transformer layers then produce the context-aware vectorized representation (word embeddings) of all characters in the input text, denoted S:

$S \in \mathbb{R}^{L \times d}$

where L is the length of the input text and d is the dimension of the chosen word vectors; the Bert-base version used in this invention has a model dimension of 768. Specifically, let

$w_i \in \mathbb{R}^{d}$

denote the i-th word vector in sentence S, whose dimension is d. A sentence S of length L can then be represented as:

$S = w_1 \oplus w_2 \oplus \cdots \oplus w_L$

where $\oplus$ denotes the concatenation operation.
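The input composition above ($x_{\mathrm{token}}$ as the sum of position, segment and token embeddings, with "[CLS]" prepended) can be sketched with toy tables; the tiny vocabulary and 4-dimensional random embeddings are invented stand-ins for Bert's learned 768-dimensional tables:

```python
import random

random.seed(0)
DIM = 4                       # toy stand-in for Bert's 768 dimensions
vocab = ["[CLS]", "pumpkin", "red", "bean", "lily"]

# toy embedding tables (Bert learns these during pre-training)
token_emb    = {w: [random.random() for _ in range(DIM)] for w in vocab}
position_emb = [[random.random() for _ in range(DIM)] for _ in range(16)]
segment_emb  = [[0.0] * DIM, [1.0] * DIM]   # segment A / segment B

def bert_input(tokens, segment=0):
    """x_token = Position embedding + Segment embedding + Token embedding,
    computed element-wise per token, with "[CLS]" prepended."""
    tokens = ["[CLS]"] + tokens
    return [
        [t + p + s for t, p, s in zip(token_emb[tok],
                                      position_emb[i],
                                      segment_emb[segment])]
        for i, tok in enumerate(tokens)
    ]

x = bert_input(["pumpkin", "red", "bean", "lily"])
print(len(x), len(x[0]))   # prints: 5 4  ([CLS] + 4 words, each DIM-dimensional)
```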
Step two, data storage
Based on the vector S processed in step one, a multi-level question vector graph is constructed with the HNSW algorithm. Specifically, as shown in Fig. 3, HNSW organizes the vectors in a layered graph. When each node is inserted, it is first stored in layer 0; that is, layer 0 contains all the question vectors from step one. A layer number is then selected at random, and from that layer downward the node is inserted into every layer and connected to M neighboring nodes according to certain rules, until layer 0. The maximum number of layers of the finally constructed hierarchical graph is

$\lceil \log_2 N \rceil$

where N is the number of data items.
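The layer bound can be checked directly against embodiment 2, where 100000 questions yield at most 17 layers; `max_layers` is a hypothetical helper name:

```python
import math

def max_layers(n):
    """Maximum layer count of the hierarchical graph, ceil(log2 N),
    as reconstructed from the 100000-question example (17 layers)."""
    return math.ceil(math.log2(n))

print(max_layers(100000))   # 17
```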
Step three, question classification and recall
First, question classification: based on the vector S processed in step one, a fully connected neural network and a softmax layer are added to complete the classification. Specifically, first, the defined text-classification categories comprise refrigerator specifications and nutrition-and-health knowledge; second, the vector S is fed to the fully connected layer to change its dimension, yielding a vector D of dimension L × 2, which is fed to the softmax layer for classification; finally, the category with the highest probability value is selected as the classification result.
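A plain-Python sketch of the fully-connected-plus-softmax head described above; the random weight matrix stands in for the trained classifier, so only the mechanics (matrix product, softmax, argmax) are meaningful here:

```python
import math
import random

random.seed(1)
DIM = 8                                   # toy stand-in for the hidden size
CATEGORIES = ["refrigerator specifications", "nutrition and health"]

# toy fully connected layer: DIM inputs -> 2 logits (trained in practice)
W = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in CATEGORIES]
b = [0.0, 0.0]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]     # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

def classify(vec):
    """FC layer changes the vector's dimension, softmax turns the logits
    into probabilities, and argmax picks the category."""
    logits = [sum(w_i * v_i for w_i, v_i in zip(row, vec)) + bias
              for row, bias in zip(W, b)]
    probs = softmax(logits)
    return CATEGORIES[probs.index(max(probs))], probs

label, probs = classify([0.1 * i for i in range(DIM)])
print(label, [round(p, 3) for p in probs])
```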
Second, based on the HNSW graph constructed in step two, the HNSW search algorithm performs vector recall for the user question, obtaining a series of recalled question vectors. Specifically, when searching with HNSW, the search starts from the highest layer; each layer is searched for the point closest to the target, which is then used as the entry point of the next layer, proceeding layer by layer down to layer 0.
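The layer-by-layer descent can be illustrated with a simplified greedy search over a toy layered graph of 1-D points; real HNSW also keeps per-layer candidate lists (the ef parameter) and builds the neighbor links itself, which this sketch omits:

```python
# toy 1-D points; id -> coordinate
points = {0: 0.0, 1: 2.0, 2: 4.0, 3: 6.0, 4: 8.0, 5: 10.0}

# adjacency per layer, top layer first; layer 0 contains all points
layers = [
    {0: [4], 4: [0]},                                          # sparse top layer
    {0: [2], 2: [0, 4], 4: [2]},                               # middle layer
    {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]},  # layer 0
]

def greedy_layer_search(layers, points, query, entry=0):
    """Simplified HNSW descent: on each layer, greedily hop to the
    neighbor closest to the target; the closest point found becomes the
    entry point of the next layer down, until layer 0 is reached."""
    current = entry
    for graph in layers:
        improved = True
        while improved:
            improved = False
            for nb in graph.get(current, []):
                if abs(points[nb] - query) < abs(points[current] - query):
                    current, improved = nb, True
    return current

print(greedy_layer_search(layers, points, query=7.3))   # 4 (the point at 8.0)
```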
Step four, answer generation
Based on the recall result of step three, the similarity between the user question and each recalled question is calculated to obtain a score for every recalled question; the question with the highest score is selected as the match for the user question, and the answer is returned according to the question-answer correspondence table of step one.
Specifically, when calculating the similarity between the user question and a recalled question, the similarities from multiple algorithms are combined into the final similarity score; the algorithms used are the Jaccard distance, BM25, cosine similarity and Euclidean distance.
Specifically, first, a string-distance-based similarity score is calculated from the question texts recalled in step three and the user question. The similarity between the user question and each recalled question at the string level is first calculated with the Jaccard formula:

$J(x, y) = \dfrac{|x \cap y|}{|x \cup y|}$

where $x$ and $y$ denote the string texts of the two questions, $|x \cap y|$ is the number of items shared by the two strings, and $|x \cup y|$ is the number of all unique items in the two strings. Next, the similarity between the user question and each recalled question at the string level is calculated with the BM25 algorithm:

$\mathrm{Sim}(Q, d) = \sum_{i} W_i \cdot R(q_i, d)$

where $Q$ is the user question, $d$ is each recalled question, $W_i$ is a weight value, and $R(q_i, d)$ is the relevance score between morpheme $q_i$ of the user question and the recalled question $d$:

$R(q_i, d) = \dfrac{f_i \,(k_1 + 1)}{f_i + K} \cdot \dfrac{qf_i \,(k_2 + 1)}{qf_i + k_2}$

where $k_1$ and $k_2$ are coordination factors, generally set to 2 and 1 respectively; $f_i$ is the number of occurrences of morpheme $q_i$ in the recalled question $d$ currently being scored; $qf_i$ is the number of occurrences of $q_i$ in the user question; and $K$ is calculated by the following formula:

$K = k_1 \left(1 - b + b \cdot \dfrac{dl}{avg\_dl}\right)$

where $k_1$ and $b$ are coordination factors, generally set to 2 and 0.75 respectively; $dl$ is the length of the recalled question currently being scored; and $avg\_dl$ is the average length of all recalled questions.
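A sketch of the BM25 scoring above with the constants the text suggests (k1 = 2, k2 = 1, b = 0.75); the uniform weight W_i = 1 is a simplifying assumption, since the text does not specify how the weights are chosen:

```python
K1, K2, B = 2.0, 1.0, 0.75

def bm25(query_tokens, doc_tokens, avg_dl, w=1.0):
    """Sim(Q, d) = sum_i W_i * R(q_i, d), with
    R(q_i, d) = f_i(k1+1)/(f_i+K) * qf_i(k2+1)/(qf_i+k2) and
    K = k1 * (1 - b + b * dl / avg_dl)."""
    dl = len(doc_tokens)
    K = K1 * (1 - B + B * dl / avg_dl)
    score = 0.0
    for q in set(query_tokens):
        f = doc_tokens.count(q)          # occurrences of q_i in d
        qf = query_tokens.count(q)       # occurrences of q_i in Q
        if f:
            r = (f * (K1 + 1) / (f + K)) * (qf * (K2 + 1) / (qf + K2))
            score += w * r
    return score

docs = [["pumpkin", "red", "bean", "lily"], ["rock", "sugar", "lily", "pumpkin"]]
avg_dl = sum(len(d) for d in docs) / len(docs)
query = ["pumpkin", "red", "bean", "lily"]
scores = [bm25(query, d, avg_dl) for d in docs]
print(scores[0] > scores[1])   # True: the exact-match recall scores higher
```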
Second, a vector-distance-based similarity score is calculated from the vector representations, obtained in step one, of the recalled questions and the user question. The similarity between the user question and each recalled question is first calculated with the cosine similarity:

$\cos(W_1, W_2) = \dfrac{W_1 \cdot W_2}{\|W_1\| \, \|W_2\|}$

where $W_1$ and $W_2$ denote the two vectorized questions; the similarity calculated by this formula is recorded against the corresponding recalled question. Next, the similarity between the user question and each recalled question is calculated with the Euclidean distance:

$d(x, y) = \sqrt{\sum_{i} (x_i - y_i)^2}$

where $x$ and $y$ denote the two vectorized questions; the similarity calculated by this formula is recorded against the corresponding recalled question.
Finally, based on the similarity scores calculated by the above algorithms, the final similarity score $s$ is calculated using the following formula:

$s = J(x, y) + \mathrm{Sim}(Q, d) + \cos(W_1, W_2) + \dfrac{1}{d(x, y) + \varepsilon}$

where $\varepsilon$, a sufficiently small positive number, is added to prevent the denominator part from being 0. The higher the score, the more similar the two sentences. The recalled question with the highest score is finally selected as the match for the user question, and the corresponding answer is returned as the result.
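Putting the four signals together: a sketch of the final ranking score. Only the ε guard on the denominator is stated explicitly in the text, so the plain summation used here, with the Euclidean distance entering through its reciprocal, is an assumption about the combination formula:

```python
import math

def jaccard(x, y):
    """|x ∩ y| / |x ∪ y| over the character sets of the two question strings."""
    a, b = set(x), set(y)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def final_score(q_text, r_text, q_vec, r_vec, bm25_score, eps=1e-8):
    """s = Jaccard + BM25 + cosine + 1 / (Euclidean + eps); the summation
    itself is reconstructed, not stated, in the surrounding text."""
    return (jaccard(q_text, r_text) + bm25_score
            + cosine(q_vec, r_vec) + 1.0 / (euclidean(q_vec, r_vec) + eps))

# identical question text and vector: every component is maximal
s = final_score("pumpkin lily", "pumpkin lily", [1.0, 0.0], [1.0, 0.0], 1.0)
print(s > 1e7)   # True: the 1/(0 + eps) term dominates for an exact match
```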
Example 2
In this embodiment, 10000 nutrition-and-health question-answer pairs and 10000 refrigerator-specification question-answer pairs are processed, and the question "How to make stir-fried lily with pumpkin and red beans?" is used to illustrate the whole process by which the question-answering system returns an answer.
Step one, existing document data processing and text vectorization
Firstly, the original 20000 items of data are processed with Python to remove redundant spaces, useless punctuation and duplicate questions;
secondly, similar-question generation is applied with the Simbert-base model to the two processed categories, nutrition-and-health knowledge and refrigerator specifications, expanding the original questions tenfold to obtain 100000 expanded question-answer items for each category, and the question-answer correspondence table is established;
finally, the 100000 items of each category are fed into Bert-base. Because of Bert's limit on input length, each input is limited to 40 questions for the nutrition-and-health data and 20 questions for the refrigerator-specification data, with the symbol [SEP] separating the questions. After encoding, the vectorized text is obtained; the vector representation of each sentence is formed by concatenating the vectors of its individual characters.
For the obtained user question "How to make stir-fried lily with pumpkin and red beans?", the Position embedding, Segment embedding and Token embedding of the question are added together as the Bert-base input, yielding the vector S that fuses the context semantics.
Step two, data storage
For the two data categories, nutrition and health and refrigerator specifications, multi-level question vector graphs are constructed with the HNSW algorithm from the vector representations of the 100000 questions of each category processed in step one, yielding two graphs; according to the number of questions, each graph has at most 17 layers, and layer 0 contains all question vectors of the corresponding category.
Step three, question classification and recall
First, the classifier is trained on the 200000 items in total of nutrition-and-health and refrigerator-specification data obtained in step one. Specifically, the data are divided into training, validation and test sets in a 7:2:1 ratio; the fully connected layer and softmax are attached to Bert for classification, and testing shows that text classification with Bert reaches 94% accuracy on this data set.
For the vector representation S of the question "How to make stir-fried lily with pumpkin and red beans?" obtained in step one, first, the trained classification model classifies the question, giving the classification result nutrition and health; second, based on the vector hierarchy graph of nutrition-and-health questions constructed in step two, the HNSW search algorithm performs the recall operation on S, obtaining 4 question vectors stored in the vector graph, whose corresponding question texts are: "How to make stir-fried lily with pumpkin and red beans?", "How to make steamed pumpkin with rock sugar and lily?", "How to make stir-fried beef with lily?" and "How to make steamed pumpkin with red dates?"
Step four, answer generation
Based on the 4 questions obtained in step three, the Jaccard distance, BM25, cosine similarity and Euclidean distance algorithms calculate the similarity between each recalled question and the user question "How to make stir-fried lily with pumpkin and red beans?"; the question with the highest similarity is "How to make stir-fried lily with pumpkin and red beans?", and according to the question-answer correspondence table of step one, the answer to this most similar question is returned as the answer to the user question.
It should be understood that the above description is not intended to limit the invention; the invention is not limited to the examples described above, and various modifications and substitutions that fall within the scope of the invention and are apparent to those skilled in the art are intended to be covered by the claims.

Claims (8)

1. An intelligent question-answering method of a refrigerator is characterized by comprising the following steps:
step one, existing document data processing and text vectorization;
step two, data storage;
step three, question classification and recall;
step four, generating an answer;
the specific operation of the first step is as follows: firstly, performing data cleaning on the existing question-answer pairs; secondly, expanding the cleaned user question data with a deep-learning-based method and establishing a correspondence table between questions and answers; finally, vectorizing the expanded user question data with a deep learning model to obtain vectorized text;
the specific operation of the second step is as follows: based on the vectorized text obtained in the first step, storing the vectorized text into a multi-level graph structure, namely the question vector graph, by adopting a graph-based algorithm in the search field, and constructing the multi-level question vector graph by adopting the HNSW algorithm;
the specific operation of the third step is as follows: based on the vectorized text obtained in the first step and the multi-level question vector graph stored in the second step, firstly, classifying the vectorized text into a predefined category by adopting a model based on deep learning; secondly, based on the multi-level question vector graph, carrying out the recall operation for the user question by adopting a graph-based algorithm in the search field;
the specific operation of the fourth step is as follows:
(1) Calculating a similarity score based on the character string distance from the question texts recalled in the third step and the user question; the similarity of the user question and each recall question at the character string level is calculated with the Jaccard formula:

$J(x, y) = \dfrac{|x \cap y|}{|x \cup y|}$

wherein x and y represent the texts of the two questions respectively, $|x \cap y|$ represents the number of shared items in the two strings, and $|x \cup y|$ represents the number of all unique items in the two strings;
(2) The similarity of the user question and each recall question at the character string level is calculated with the BM25 algorithm:

$\mathrm{Sim}(Q, d) = \sum_{i} W_i \cdot R(q_i, d)$

wherein Q is the user question, d is each recall question, $W_i$ is a weight value, and $R(q_i, d)$ represents the relevance score between each morpheme $q_i$ in the user question and the recall question d:

$R(q_i, d) = \dfrac{f_i \,(k_1 + 1)}{f_i + K} \cdot \dfrac{qf_i \,(k_2 + 1)}{qf_i + k_2}$

wherein $k_1$ and $k_2$ are coordination factors, generally set to 2 and 1 respectively; $f_i$ represents the number of occurrences of morpheme $q_i$ in the currently calculated recall question d; $qf_i$ is the number of occurrences of $q_i$ in the user question; and K is calculated by:

$K = k_1 \left(1 - b + b \cdot \dfrac{dl}{avg\_dl}\right)$

wherein $k_1$ and b are coordination factors, generally set to 2 and 0.75 respectively; dl is the length of the currently calculated recall question; and $avg\_dl$ is the average length of all recall questions;
(3) Calculating a similarity score based on the vector distance from the recall questions of the third step and the vector representation of the user question obtained in the first step; the similarity between the user question and each recall question is calculated with the cosine similarity:

$\cos(W_1, W_2) = \dfrac{W_1 \cdot W_2}{\|W_1\| \, \|W_2\|}$

wherein $W_1$ and $W_2$ respectively represent the two vectorized questions, and the similarity calculated by the above formula is recorded against the corresponding recall question;
(4) Calculating the similarity between the user question and each recall question with the Euclidean distance:

$d(x, y) = \sqrt{\sum_{i} (x_i - y_i)^2}$

wherein x and y respectively represent the two vectorized questions, and the similarity calculated by the above formula is recorded against the corresponding recall question;
(5) Based on the similarity scores calculated by the four algorithms (1) to (4), the final similarity score s is calculated using the following formula:

$s = J(x, y) + \mathrm{Sim}(Q, d) + \cos(W_1, W_2) + \dfrac{1}{d(x, y) + \varepsilon}$

wherein $\varepsilon$, a sufficiently small positive number, is added to prevent the denominator part from being 0; the higher the score, the higher the similarity of the two sentences; the recall question with the highest score is finally selected as the match of the user question, and the corresponding answer is returned as the result.
2. The intelligent question-answering method according to claim 1, wherein the data cleaning in the first step includes removing useless punctuation, extra spaces and duplicate questions.
3. The intelligent question-answering method of a refrigerator according to claim 1, wherein the expansion of the cleaned user question data in the first step generates similar questions using the Simbert-base model.
4. The intelligent question-answering method of a refrigerator according to claim 1, wherein the step one uses a pretrained model Bert to vectorize the extended user question data.
5. The intelligent question-answering method of a refrigerator according to claim 1, wherein, in the third step, firstly, the predefined categories of the text classification are two: refrigerator specifications and nutrition-and-health knowledge; secondly, the vectorized text of the first step is used as the input of the fully connected layer to change its dimension, and the resulting new vector is used as the input of the softmax layer for classification; finally, the category with the highest probability value is selected as the classification result.
6. The intelligent question-answering method of a refrigerator according to claim 1, wherein the algorithm in the third step is the HNSW search algorithm.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the intelligent question-answering method of any one of claims 1-6.
8. A computer-readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the intelligent question-answering method according to any one of claims 1-6.
CN202310193530.1A 2023-03-03 2023-03-03 Intelligent question-answering method and device for refrigerator and storage medium Active CN115952270B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310193530.1A CN115952270B (en) 2023-03-03 2023-03-03 Intelligent question-answering method and device for refrigerator and storage medium


Publications (2)

Publication Number Publication Date
CN115952270A (en) 2023-04-11
CN115952270B (en) 2023-05-30

Family

ID=85906888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310193530.1A Active CN115952270B (en) 2023-03-03 2023-03-03 Intelligent question-answering method and device for refrigerator and storage medium

Country Status (1)

Country Link
CN (1) CN115952270B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109101493A (en) * 2018-08-01 2018-12-28 东北大学 A kind of intelligence house-purchase assistant based on dialogue robot
CN110442760A (en) * 2019-07-24 2019-11-12 银江股份有限公司 A kind of the synonym method for digging and device of question and answer searching system
WO2021004228A1 (en) * 2019-07-08 2021-01-14 汉海信息技术(上海)有限公司 Generation of recommendation reason
WO2021135455A1 (en) * 2020-05-13 2021-07-08 平安科技(深圳)有限公司 Semantic recall method, apparatus, computer device, and storage medium
CN113742469A (en) * 2021-09-03 2021-12-03 科讯嘉联信息技术有限公司 Pipeline processing and ES storage based question-answering system construction method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287296A (en) * 2019-05-21 2019-09-27 平安科技(深圳)有限公司 A kind of problem answers choosing method, device, computer equipment and storage medium
CN113656570B (en) * 2021-08-25 2024-05-10 平安科技(深圳)有限公司 Visual question-answering method and device based on deep learning model, medium and equipment
CN114416927B (en) * 2022-01-24 2024-04-02 招商银行股份有限公司 Intelligent question-answering method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN115952270A (en) 2023-04-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant