CN115563249A - Form retrieval enhancement method for question and answer in open field - Google Patents


Info

Publication number
CN115563249A
Authority
CN
China
Prior art keywords
retrieval
question
tables
sql
sim
Prior art date
Legal status
Pending
Application number
CN202211227233.6A
Other languages
Chinese (zh)
Inventor
陈思芹
吴洁
石微微
侯磊
张廷意
侯孟书
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202211227233.6A
Publication of CN115563249A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/256 Integrating or interfacing systems involving database management systems in federated or virtual databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/258 Data format conversion from or to a database
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the technical fields of natural language processing and information retrieval, and provides an execution-guided table retrieval enhancement method for open-domain question answering. First, a retriever performs a preliminary screening of related tables from a table corpus to obtain a table pool. Then, for each table in the pool, a deep-learning Text-to-SQL model combines the question with the table's schema information to convert the question into a normalized logical form such as SQL; the SQL is executed on the table, the result is checked for execution errors, and this signal is incorporated as relevance evidence into a new round of similarity computation. The invention makes full use of table schema information during table retrieval, folds the execution result into the retrieval similarity score, and effectively improves the accuracy of the table retrieval stage of open-domain question answering.

Description

Form retrieval enhancement method for question and answer in open field
Technical Field
The invention belongs to the technical fields of natural language processing and information retrieval, and relates to a table retrieval enhancement method for open-domain question answering based on a Text-to-SQL deep learning model.
Background
In open-domain question answering, unlike closed-domain question answering, the answer is not limited to a given reading passage but may exist anywhere in a large corpus. Tables, an important medium for storing information, exist in large numbers in web services and relational databases. Compared with free text, tables store a larger amount of information in a more specific form, making them an important information source for open-domain question answering. The table open-domain question answering task changes the retrieved object from text to tables, studies how to obtain the information a user cares about from a large number of tables, and plays an important role in both search engines and customer service systems.
At present, the mainstream approach to open-domain question answering is a two-stage retriever-reader framework that divides the task into two stages: the retrieval stage finds, among thousands of texts, the passages relevant to the question, and the reading stage extracts the answer from them. For table open-domain question answering, the reading stage can be regarded as closed-domain table question answering.
On the retrieval side, both the traditional BM25 method and the deep-learning-based DPR method target free text and are not specially optimized for tables. Some researchers have proposed DTR, which replaces BERT with TAPAS as the encoder in order to encode the table structure; however, the retrieval task is strongly related to table content, and encoding only the table structure yields unsatisfactory table retrieval results compared with DPR.
Closed-domain table question answering has made great research progress, with two main approaches. One encodes the structure and content of the table and selects target cells as the answer, such as TAPAS. The other is based on semantic parsing: a question described in natural language is converted into a logical form executable on the table, such as SQL, and the statement is then executed to obtain the answer. At present, models for the closed-domain Text-to-SQL task achieve over 90% accuracy on large-scale datasets such as WikiSQL. Table retrieval has therefore become the bottleneck for improving the accuracy of the whole open-domain question answering pipeline.
Disclosure of Invention
To solve the above problems, the invention provides an execution-guided table retrieval enhancement method for open-domain question answering. First, a retriever performs a preliminary screening of related tables from a table corpus to obtain a table pool. Then, for each table in the pool, a deep-learning Text-to-SQL model combines the question with the table's schema information to convert the question into a normalized logical form such as SQL; the SQL is executed on the table, the result is checked for execution errors, and this signal is incorporated as relevance evidence into a new round of similarity computation.
The technical scheme of the invention is as follows:
the open domain question-answering process is divided into: a table retrieval stage and an answer extraction stage; comprises the following steps:
s1, preprocessing a table. The table corpus is converted into db form by using tools such as SQLite, so that SQL sentences can be executed on any table. If the table itself exists in the relational database, the preprocessing stage is skipped.
S2, preliminary retrieval. Each table is tiled row by row into table content in text form; the content is encoded and loaded into an index library; a retriever computes the original similarity between the question and each table in the corpus; the tables are sorted by original similarity score from large to small, and the first N tables are selected to form a table pool.
S3, execute the statement on the table and obtain a new similarity score. The question and the table schema information are input into the Text-to-SQL deep learning model to obtain the corresponding SQL statement; the obtained SQL is executed on the table to judge whether an execution error occurs. If no execution error occurs, let res_EG = 1; if an execution error occurs, let res_EG = 0. A new similarity score is then calculated:
sim_withEG = (1 − α) · sim_origin / maxsim_origin + α · res_EG
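As a minimal illustration, the blended score defined above can be computed directly from its three inputs (the function name is an assumption, not from the patent; α defaults to the value used in the embodiment):

```python
def sim_with_eg(sim_origin, max_sim_origin, res_eg, alpha=0.9):
    """Blend the normalized retrieval score with the execution-guidance flag.

    sim_origin     -- original retrieval similarity for this table
    max_sim_origin -- maximum original similarity over all tables
    res_eg         -- 1 if the parsed SQL executed cleanly, else 0
    """
    return (1 - alpha) * sim_origin / max_sim_origin + alpha * res_eg
```

With a large α, a table whose SQL executes cleanly outranks even the top-scored table whose SQL fails.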
where sim_origin is the original similarity score from the retrieval stage, maxsim_origin is the maximum original similarity score over all tables at the retrieval stage, and sim_withEG is the new similarity score;
and S4, reordering the tables. The N tables in the table pool are scored according to the new similarity sim withEG Reordering from big to small, selecting top-k tables with highest score, and entering an extraction and answer stage;
and S5, extracting an answer stage. And obtaining answers by using a deep learning model for answer extraction such as cell classification or generation and the like based on the input question sentences and the obtained top-k tables.
The invention has the following beneficial effects: table schema information is fully utilized during table retrieval, the execution result is folded into the retrieval similarity score, and the accuracy of the table retrieval stage of open-domain question answering is effectively improved.
Drawings
FIG. 1 is a schematic diagram of an open-domain question-answering two-phase model.
FIG. 2 is a flow diagram of table retrieval enhancement based on execution guidance.
FIG. 3 shows experimental results comparing table retrieval with and without execution-guided enhancement.
Detailed Description
The present invention is described in detail below with reference to the attached drawings.
In the present invention, the two-stage model of open-domain question answering is shown in FIG. 1. The execution-guided table retrieval enhancement flow is shown in FIG. 2, in which the table retrieval stage may involve two deep learning models: a retriever and a Text-to-SQL model. Before actual retrieval, the corresponding model training must be completed using the labeled table corpus data.
Table preprocessing. The table corpus is converted into db form using a tool such as SQLite, so that SQL statements can be executed on any table. If a table already resides in a relational database, this preprocessing stage is skipped.
Tiled expansion of the table. All tables in the corpus are processed into continuous text form before the index library is built. The title, column names, and row contents of the table are concatenated as follows. If the table has no title, a space is filled in.
table title | table column names | first row content | second row content | … | nth row content
During concatenation, separators are added to the tiled text: one symbol separates the cells within a row, and "|" separates one row from the next. The separators enable the retriever to learn the structure of the table to some extent.
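A sketch of this tiling step, under the assumption that "," separates cells within a row and "|" separates segments (the exact delimiter characters are an assumption; the method only requires two distinct cell and row separators):

```python
def flatten_table(title, header, rows):
    """Tile a table into one text string: title | column names | row 1 | row 2 ...

    A missing title is replaced with a space, as the method specifies.
    """
    title = title if title else " "
    segments = [title, ", ".join(header)] + [", ".join(map(str, r)) for r in rows]
    return " | ".join(segments)

text = flatten_table("Medals", ["country", "gold"], [["USA", 39], ["CHN", 38]])
```

The resulting string is what gets encoded and loaded into the index library for preliminary retrieval.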
Preliminary retrieval. The goal of the retriever is to find, from a corpus of tens of thousands of tables, the N tables T_1, T_2, ..., T_N with the highest relevance scores to the question q, ordered from large to small, as the table pool associated with the question. Since this is the preliminary stage of retrieval, N is typically large; considering the efficiency of subsequent screening, N = 200 is set.
The retriever can use a traditional method such as BM25: the preprocessed tables are loaded into an Elasticsearch index library, the built-in Elasticsearch similarity algorithm is set to BM25 with parameters k1 = 1.2 and b = 0.75, and the number of results returned per query is set to N; this realizes the preliminary table retrieval. The BM25 method requires no model training.
The retriever may also be a deep-learning dense retriever, DPR. DPR is a BERT-based dual encoder that separately encodes the question q_nl and the table T_i into vectors v_q and v_{T_i}. The vector length is identical to the encoding length of BERT, d = 768. After fine-tuning, DPR converts questions and tables into vector form and ensures that the inner product of a semantically related question-table pair is larger than that of unrelated pairs, capturing semantic similarity better, whereas traditional methods such as BM25 are more sensitive to keywords.
In the implementation, a DPR retriever is trained on the tiled table corpus; during training, the single non-gold table with the highest BM25 score is selected as a hard negative, batch_size = 32 is set, and the in-batch negatives training method is used. After training, one of DPR's dual encoders, the table encoder, processes all tables in the corpus to obtain encoded table vectors, which are loaded into a FAISS dense-vector search library. During preliminary retrieval, the other encoder, the question encoder, encodes the question into a vector and searches the FAISS library.
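A toy sketch of the dense-retrieval lookup, using a plain NumPy inner-product search as a stand-in for the FAISS index (function and variable names are illustrative; in real use the vectors would come from the fine-tuned DPR question and table encoders):

```python
import numpy as np

def dense_search(question_vec, table_vecs, n):
    """Inner-product search over encoded table vectors, top-n first."""
    scores = table_vecs @ question_vec   # one inner product per table
    order = np.argsort(-scores)[:n]      # indices of the n highest-scoring tables
    return order, scores[order]

# Unit-normalized stand-ins for encoded tables; the query matches table 3.
rng = np.random.default_rng(0)
tables = rng.normal(size=(5, 8))
tables /= np.linalg.norm(tables, axis=1, keepdims=True)
q = tables[3]
idx, sc = dense_search(q, tables, n=2)
```

An exact-search FAISS index (e.g. an inner-product flat index) performs the same computation at scale.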
For the target question, the retriever returns the top N tables with the highest original similarity as the table pool.
Table reordering. In this step, a Text-to-SQL deep learning model, HydraNet as an example, is introduced as the semantic parser for the question. Before table reordering, a HydraNet model is trained on the WikiSQL dataset with batch_size = 64 and learning_rate = 6 × 10^-6, using RoBERTa as the base model. During reordering, for each table in the table pool, the question and the table schema information are first input into the HydraNet model to parse out the SQL statement corresponding to the question. The SQL statement is then executed on the table, and whether an error occurs during execution is recorded.
The result of execution on a candidate table is converted into an additional parameter res_EG according to its type: res_EG = 1 when no execution error occurs, and res_EG = 0 when an execution error occurs. In practice, the expected answer to a question is usually non-empty, so an empty result may be treated as a special kind of execution error.
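This execution-guidance signal can be sketched with SQLite, including the empty-result-as-error convention (the table contents and queries below are made up for illustration):

```python
import sqlite3

def res_eg(conn, sql):
    """Execute a parsed SQL statement against the candidate table.

    Returns 1 on a clean, non-empty result; 0 on an execution error.
    An empty result set is treated as a special execution error.
    """
    try:
        rows = conn.execute(sql).fetchall()
    except sqlite3.Error:
        return 0
    return 1 if rows else 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (city TEXT, pop INT)")
conn.execute("INSERT INTO t VALUES ('Chengdu', 21)")
ok = res_eg(conn, "SELECT pop FROM t WHERE city = 'Chengdu'")   # clean result
bad_column = res_eg(conn, "SELECT area FROM t")                  # no such column
empty = res_eg(conn, "SELECT pop FROM t WHERE city = 'Beijing'") # empty result
```

A table whose schema cannot satisfy the parsed SQL thus contributes res_EG = 0 and is pushed down in the reordering.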
A new similarity score is calculated by the following function:
sim_withEG = (1 − α) · sim_origin / maxsim_origin + α · res_EG
where sim_origin is the original similarity score from the retrieval stage, maxsim_origin is the maximum original similarity score over all tables at the retrieval stage, and sim_withEG is the new similarity score. The coefficient α measures the importance of the execution result; the original retrieval relevance score and the execution result are linearly summed to obtain the new similarity estimate. Different values of α may be taken according to the actual conditions of the table corpus; in the present embodiment, α = 0.9.
The N tables in the table pool are reordered by the new similarity score sim_withEG from large to small; the top-k tables with the highest scores are selected and passed to the answer extraction stage;
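The reordering step can be sketched end to end: given the pool of N candidates with their original scores and execution flags, compute sim_withEG and keep the top-k (the triple layout and function name are assumptions for illustration):

```python
def rerank(table_pool, alpha=0.9, k=5):
    """table_pool: list of (table_id, sim_origin, res_eg) triples.

    Returns the ids of the top-k tables under the blended score.
    """
    max_sim = max(s for _, s, _ in table_pool)
    scored = [(tid, (1 - alpha) * s / max_sim + alpha * r)
              for tid, s, r in table_pool]
    scored.sort(key=lambda x: -x[1])
    return [tid for tid, _ in scored[:k]]

# t2 has a lower retrieval score but its SQL executed cleanly, so it wins.
pool = [("t1", 10.0, 0), ("t2", 7.0, 1), ("t3", 9.0, 0)]
top = rerank(pool, alpha=0.9, k=2)
```

With α = 0.9 as in the embodiment, the execution flag dominates: t2 scores 0.97 against t1's 0.10.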
and (5) extracting an answer stage. And obtaining answers by using a deep learning model for answer extraction such as cell classification or generation and the like based on the input question sentences and the obtained top-k tables.

Claims (1)

1. A table retrieval enhancement method for open-domain question answering, characterized by comprising the following steps:
a table retrieval stage and an answer extraction stage;
in the table retrieval stage, based on an input question and a given table corpus, top-k tables selected by relevance ranking are obtained through retrieval, as follows: each table is tiled row by row into table content in text form; a retriever computes the original similarity between the question and each table in the corpus; the tables are sorted by original similarity score from large to small, the first N tables are selected to form a table pool, and the top-k tables are selected from the table pool;
the specific method for selecting the top-k tables from the table pool comprises the following steps: inputting question sentences and form mode information into a Text-to-SQL deep learning model to obtain corresponding SQL sentences, executing the obtained SQL semantics on the forms, judging whether execution errors occur or not, and enabling res if execution errors do not occur EG =1, if an execution error occurs, let res EG =0; calculate a new similarity score:
sim_withEG = (1 − α) · sim_origin / maxsim_origin + α · res_EG
where sim_origin is the original similarity score from the retrieval stage, maxsim_origin is the maximum original similarity score over all tables at the retrieval stage, and sim_withEG is the new similarity score;
the N tables in the table pool are scored according to the new similarity sim withEG Reordering from large to small, and selecting top-k tables with the highest scores;
and the answer extraction stage obtains answers based on the input question and the obtained top-k tables.
CN202211227233.6A 2022-10-09 2022-10-09 Form retrieval enhancement method for question and answer in open field Pending CN115563249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211227233.6A CN115563249A (en) 2022-10-09 2022-10-09 Form retrieval enhancement method for question and answer in open field


Publications (1)

Publication Number Publication Date
CN115563249A 2023-01-03

Family

ID=84744509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211227233.6A Pending CN115563249A (en) 2022-10-09 2022-10-09 Form retrieval enhancement method for question and answer in open field

Country Status (1)

Country Link
CN (1) CN115563249A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117033469A (en) * 2023-10-07 2023-11-10 之江实验室 Database retrieval method, device and equipment based on table semantic annotation
CN117033469B (en) * 2023-10-07 2024-01-16 之江实验室 Database retrieval method, device and equipment based on table semantic annotation
CN117035064A (en) * 2023-10-10 2023-11-10 北京澜舟科技有限公司 Combined training method for retrieving enhanced language model and storage medium
CN117035064B (en) * 2023-10-10 2024-02-20 北京澜舟科技有限公司 Combined training method for retrieving enhanced language model and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination