CN117520515A - Bank field question and answer method, system and device based on local knowledge base - Google Patents
- Publication number
- CN117520515A CN117520515A CN202311580989.3A CN202311580989A CN117520515A CN 117520515 A CN117520515 A CN 117520515A CN 202311580989 A CN202311580989 A CN 202311580989A CN 117520515 A CN117520515 A CN 117520515A
- Authority
- CN
- China
- Prior art keywords
- vector
- model
- knowledge base
- similar
- document
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/31—Indexing; Data structures therefor; Storage structures
- G06F16/313—Selection or weighting of terms for indexing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/338—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/02—Banking, e.g. interest calculation or account maintenance
Abstract
The invention discloses a banking-domain question-answering method, system and device based on a local knowledge base. The method comprises: constructing a banking-domain vector model; constructing a local knowledge vector base by segmenting and vectorizing all documents in the knowledge base and building a segment vector index organized under multi-level labels; receiving a user question, locating it via the multi-level labels, and outputting the most similar document under a specific business knowledge base; vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the segment vector index corresponding to that document, and outputting similar vector segments; and judging whether the similar vector segments meet a preset requirement, and if so, feeding the similar segments, their context within the most similar document, and the user question into an LLM, which outputs the answer content. By constructing the knowledge base across a knowledge-base layer, a document layer and a paragraph layer, the invention improves user intent recognition.
Description
Technical Field
The invention relates to artificial intelligence technology, and in particular to a banking-domain question-answering method, system and device based on a local knowledge base.
Background
A traditional question-answering system is built by first constructing a knowledge base and then retrieving knowledge from it according to the user's question. The difficulty with this scheme is that business experts must spend a great deal of time constructing the knowledge base and converting domain text into knowledge entries, and the process cannot be directly migrated to other domains, making it time-consuming and labor-intensive.
In addition, most traditional retrieval models perform simple matching with keywords or sentence vectors; such a question-answering system cannot deeply understand the user's question, and its generalization ability depends on the size of the knowledge base.
A large language model (LLM) has strong natural-language understanding and generation capability: it can be used to infer the user's intent, and to aggregate and integrate the original knowledge points into a more pertinent answer. The general approach to question answering with a large model is therefore: embed the user question and the local knowledge, and achieve recall through vector similarity; perform intent recognition on the user question through the LLM; and process and integrate the original answers.
A widely applied implementation is LangChain, a question-answering framework based on a local knowledge base, whose steps are: load a file, read the text, split the text, vectorize the question, match the top-k text vectors most similar to the question vector, add the matched text as context together with the question into a prompt, and submit it to the LLM to generate an answer. However, with the widespread use of LangChain + LLM, some deeper problems in vertical domains have gradually been exposed: when matching against massive text data, knowledge-base recall precision is low.
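The generic LangChain-style flow described above can be sketched as follows. This is a minimal, self-contained illustration: the bag-of-words `embed`, `split_text` and `answer` helpers are hypothetical stand-ins for a real embedding model and text splitter, and the assembled prompt would be submitted to an LLM rather than returned.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding standing in for a real model such as m3e."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def split_text(doc, size=8):
    """Split a document into fixed-size word chunks (a crude text splitter)."""
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def answer(question, documents, k=2):
    """Load, split, vectorize, match top-k, and assemble the prompt."""
    chunks = [c for d in documents for c in split_text(d)]
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    context = "\n".join(ranked[:k])
    # In the real pipeline this prompt would be submitted to the LLM.
    return f"Answer using only this context:\n{context}\nQuestion: {question}"
```

The vertical-domain weakness the patent targets shows up exactly here: with massive text and only flat chunk-level matching, the top-k recall step is imprecise.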
Disclosure of Invention
To address the low recall precision of a knowledge base when matching massive text data in the prior art, the invention provides a banking-domain question-answering method based on a local knowledge base, comprising the following steps:
constructing a banking-domain vector model;
constructing a local knowledge vector base: segmenting and vectorizing all documents in the knowledge base, and building a segment vector index organized under multi-level labels;
receiving a user question, locating the question according to the multi-level labels, and outputting the most similar document under a specific business knowledge base;
vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the segment vector index corresponding to the output most-similar document, and outputting similar vector segments;
and judging whether the similar vector segments meet a preset requirement; if so, inputting the similar segments, their context within the most similar document, and the user question into the LLM, and outputting the answer content through the LLM.
Preferably, the multi-level-label local knowledge vector base combines two levels of labels with one level of index: the first-level labels correspond to the business knowledge bases, the second-level labels correspond to the documents under each business knowledge base, and the third level is the segment vector index within each document of a business knowledge base.
Preferably, the banking-domain vector model is constructed by fine-tuning m3e-base with banking-domain original text, text keywords and keyword position information, yielding a vertical-domain vector model m3e-bank for the banking domain.
Preferably, the method for constructing the local knowledge vector base comprises:
extracting summary information from all documents with the LLM, vectorizing the summaries, and extracting keywords from all documents according to the summary information to form the first-level labels of the knowledge-base layer; extracting the subject words of each document with the LLM and combining them with each document's abstract to form the second-level labels of the document layer; semantically segmenting each document and vectorizing the segments with the banking-domain vector model, finally forming a vector knowledge base with two levels of labels and one level of index.
Preferably, the question-answering method further comprises a model output control method:
computing the common subset of the large-model answer and the retrieved top-1 segment; if (common-subset length / large-model answer length) is greater than or equal to a preset parameter, outputting the final result; otherwise, replying with a fallback response.
The invention also provides a banking-domain question-answering system based on a local knowledge base, comprising:
a vector model unit, for constructing the banking-domain vector model;
a knowledge base unit, for constructing the local knowledge vector base: segmenting and vectorizing all documents in the knowledge base, and building a segment vector index organized under multi-level labels;
a document locating unit, for receiving the user question, locating the question according to the multi-level labels, and outputting the most similar document under a specific business knowledge base;
a comparison unit, for vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the segment vector index corresponding to the output most-similar document, and outputting similar vector segments;
and an output unit, for judging whether the similar vector segments meet a preset requirement and, if so, inputting the similar segments, their context within the most similar document, and the user question into the LLM, and outputting the answer content through the LLM.
Preferably, the vector model unit fine-tunes m3e-base with banking-domain original text, text keywords and keyword position information to obtain a vertical-domain vector model m3e-bank for the banking domain.
Preferably, the question-answering system further comprises an output control unit for computing the common subset of the large-model answer and the top-1 segment, outputting the final result if (common-subset length / large-model answer length) is greater than or equal to a preset parameter, and replying with a fallback response otherwise.
The invention also provides a banking-domain question-answering device, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor; when the computer program is executed by the processor, the banking-domain question-answering method based on the local knowledge base is implemented.
The invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the banking-domain question-answering method based on the local knowledge base is implemented.
The invention has the beneficial effects that:
by analyzing the likely causes of low knowledge-base recall precision when matching massive text data, the invention proposes an improved strategy for locating user intent: the knowledge base is constructed across three dimensions, namely the knowledge-base layer, the document layer and the paragraph layer, thereby improving user intent recognition.
Drawings
To more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the banking-domain question-answering method based on a local knowledge base;
FIG. 2 is a flow chart of banking-domain vector model fine-tuning;
FIG. 3 is a detailed flow chart of the question-answering method of the preferred embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following examples, which are illustrative of the present invention and are not intended to limit the present invention thereto.
A banking-domain question-answering method based on a local knowledge base comprises the following steps:
constructing a banking-domain vector model;
constructing a local knowledge vector base: segmenting and vectorizing all documents in the knowledge base, and building a segment vector index organized under multi-level labels;
receiving a user question, locating the question according to the multi-level labels, and outputting the most similar document under a specific business knowledge base;
vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the segment vector index corresponding to the output most-similar document, and outputting similar vector segments;
and judging whether the similar vector segments meet a preset requirement; if so, inputting the similar segments, their context within the most similar document, and the user question into the LLM, and outputting the answer content through the LLM.
The multi-level-label local knowledge vector base combines two levels of labels with one level of index: the first-level labels correspond to the business knowledge bases, the second-level labels correspond to the documents under each business knowledge base, and the third level is the segment vector index within each document of a business knowledge base.
This scheme constructs a knowledge base with multi-level labels and vector indexes. When retrieving over massive data, the first-level labels quickly locate the specific business knowledge base, narrowing the search range; the second-level labels accurately locate the specific relevant documents, improving retrieval recall precision (the user-intent recognition stage).
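A minimal sketch of the three-level localization described above, assuming a nested-dictionary layout and a toy word-overlap score in place of real vector similarity; all names and sample entries are hypothetical:

```python
# Hypothetical layout: level-1 labels name business knowledge bases,
# level-2 labels name documents, level 3 is the per-document segment index.
knowledge_base = {
    "debit cards": {
        "debit_card_faq": [
            "a lost debit card can be replaced at any branch",
            "debit card fees are waived for student accounts",
        ],
    },
    "personal loans": {
        "mortgage_guide": [
            "mortgage applications require proof of income",
        ],
    },
}

def score(question, label_or_text):
    """Toy word-overlap score standing in for vector similarity."""
    q = set(question.lower().split())
    t = set(label_or_text.lower().split())
    return len(q & t)

def locate(question, kb):
    # Level 1: pick the business knowledge base, narrowing the search range.
    kb_name = max(kb, key=lambda n: score(question, n))
    # Level 2: pick the most similar document within that knowledge base.
    doc_name = max(kb[kb_name], key=lambda n: score(question, n.replace("_", " ")))
    # Level 3: similarity search only over that document's segment index.
    segment = max(kb[kb_name][doc_name], key=lambda s: score(question, s))
    return kb_name, doc_name, segment
```

The point of the structure is that the level-3 similarity search runs only over one document's segments instead of the whole corpus.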
Preferably, the banking-domain vector model is constructed by fine-tuning m3e-base with banking-domain original text, text keywords and keyword position information, yielding a vertical-domain vector model m3e-bank for the banking domain.
Regarding the fine-tuned banking-domain vector model m3e-bank: bank data is private-domain data that cannot be disclosed, so existing open-source training data contains no banking-domain data. As a result, an open-source vector model cannot fully represent banking-domain text semantics and performs poorly at vector matching. The invention therefore proposes fine-tuning a general text-embedding model: borrowing the idea of large-model fine-tuning, the vector model is fine-tuned with text, keywords and keyword position information as input, so that the fine-tuned model better expresses banking-domain text information.
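The patent does not detail the fine-tuning procedure itself, but the three named inputs (original text, text keywords, keyword position information) suggest training records of the following shape. A hedged sketch of the data-preparation step only, with hypothetical helper and field names:

```python
def build_training_record(text, keywords):
    """Pair a banking-domain sentence with its keywords and their character
    positions, the three inputs named for fine-tuning the embedding model.
    Keywords not found in the text are dropped."""
    positions = {}
    for kw in keywords:
        idx = text.find(kw)
        if idx >= 0:
            positions[kw] = idx
    return {"text": text, "keywords": list(positions), "positions": positions}
```

Records of this shape could then feed a standard contrastive fine-tuning loop for m3e-base; that loop itself is outside what the patent specifies.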
Preferably, the method for constructing the local knowledge vector base comprises: extracting summary information from all documents with the LLM, vectorizing the summaries, and extracting keywords from all documents according to the summary information to form the first-level labels of the knowledge-base layer; extracting the subject words of each document with the LLM and combining them with each document's abstract to form the second-level labels of the document layer; semantically segmenting each document and vectorizing the segments with the banking-domain vector model, finally forming a vector knowledge base with two levels of labels and one level of index.
The common prior-art practice is to semantically cut the original text, build an index for each segment, and vectorize the segments into a knowledge vector base; with massive knowledge this retrieval is inefficient and it is difficult to accurately locate the user's question. Against this, the invention proposes the following method of building a knowledge base with multi-level labels and indexes:
1) extract summary information from all documents using the LLM;
2) cluster all documents according to the vectorized summary information;
3) extract keywords from each cluster to form the first-level labels of the knowledge-base layer;
4) extract the subject words of each document using the LLM and combine them with each document's abstract to form the second-level labels of the document layer;
5) semantically segment each document; the resulting segments are manually verified, and any segment of length 100 or more is segmented again;
6) vectorize the segments with m3e-bank, finally forming a vector knowledge base with two levels of labels and one level of index.
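Steps 1) through 6) can be sketched as follows. This is a simplified illustration: the `summarize` and `keywords_of` callables stand in for the LLM summarization and keyword extraction of steps 1 to 4, per-document keywords stand in for real cross-document clustering, and raw text segments stand in for m3e-bank vectors.

```python
def semantic_segment(doc, max_len=100):
    """Split on sentence boundaries; re-split any segment of 100+ characters
    (the secondary segmentation of step 5)."""
    parts = [s.strip() for s in doc.split(".") if s.strip()]
    segments = []
    for p in parts:
        while len(p) >= max_len:
            segments.append(p[:max_len])
            p = p[max_len:]
        if p:
            segments.append(p)
    return segments

def build_vector_kb(docs, summarize, keywords_of):
    """docs: {doc_name: text}. Builds {level-1 label: {level-2 label: segments}};
    vectorization of the segments is omitted for brevity."""
    kb = {}
    for name, text in docs.items():
        summary = summarize(text)          # step 1: LLM summary
        level1 = keywords_of(summary)      # steps 2-3: cluster keyword label
        level2 = (name, summary)           # step 4: subject words + abstract
        kb.setdefault(level1, {})[level2] = semantic_segment(text)  # steps 5-6
    return kb
```
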
Preferably, the question-answering method further comprises a model output control method:
computing the common subset of the large-model answer and the retrieved top-1 segment; if (common-subset length / large-model answer length) is greater than or equal to a preset parameter, outputting the final result; otherwise, replying with a fallback response.
When the large model answers, the retrieved relevant information is fed into it and the model answers the question according to that information; but the model sometimes answers with content beyond the given information, that is, it hallucinates. The LangChain scheme targets general scenarios with a high tolerance for hallucinated output, but a banking scenario has zero tolerance for hallucination, so the model output must be controlled. Analysis of large-model outputs shows that the model does not answer with knowledge completely unrelated to the known information; its hallucinations mainly add supplementary description to the known information, and tend to appear at the end of a sentence.
Therefore the common subset between the model's answer and the top-1 segment retrieved from the knowledge base is computed: if (common-subset length / large-model answer length) >= 90%, the large model is judged not to have hallucinated; otherwise it has, and a fallback response is returned. The 90% parameter can of course be adjusted for different scenarios. By computing the common subset of the LLM output and the similar segment in the original text, the invention curbs the LLM hallucination problem; this better fits banking-domain usage requirements, gives the question-answering system very good applicability in the banking domain, and the scheme can also be migrated to other domains.
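The patent does not define "common subset" precisely; one plausible reading is the longest common subsequence between the model answer and the retrieved top-1 segment. A sketch under that assumption, with a hypothetical fallback string:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two strings (O(len(a)*len(b)))."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b):
            cur.append(prev[j] + 1 if ca == cb else max(prev[j + 1], cur[j]))
        prev = cur
    return prev[-1]

FALLBACK = "Sorry, I cannot confirm that from the knowledge base."

def control_output(answer, top1_segment, threshold=0.9):
    """Return the answer if enough of it is grounded in the top-1 segment,
    otherwise return the fallback response."""
    if not answer:
        return FALLBACK
    ratio = lcs_len(answer, top1_segment) / len(answer)
    return answer if ratio >= threshold else FALLBACK
```

Because hallucinated text is absent from the retrieved segment, it lowers the grounded fraction of the answer and trips the threshold.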
The invention also provides a banking-domain question-answering system based on a local knowledge base, comprising:
a vector model unit, for constructing the banking-domain vector model;
a knowledge base unit, for constructing the local knowledge vector base: segmenting and vectorizing all documents in the knowledge base, and building a segment vector index organized under multi-level labels;
a document locating unit, for receiving the user question, locating the question according to the multi-level labels, and outputting the most similar document under a specific business knowledge base;
a comparison unit, for vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the segment vector index corresponding to the output most-similar document, and outputting similar vector segments;
and an output unit, for judging whether the similar vector segments meet a preset requirement and, if so, inputting the similar segments, their context within the most similar document, and the user question into the LLM, and outputting the answer content through the LLM.
Preferably, the vector model unit fine-tunes m3e-base with banking-domain original text, text keywords and keyword position information to obtain a vertical-domain vector model m3e-bank for the banking domain.
Preferably, the question-answering system further comprises an output control unit for computing the common subset of the large-model answer and the top-1 segment, outputting the final result if (common-subset length / large-model answer length) is greater than or equal to a preset parameter, and replying with a fallback response otherwise.
The invention also provides a banking-domain question-answering device, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor; when the computer program is executed by the processor, the banking-domain question-answering method based on the local knowledge base is implemented.
The invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the banking-domain question-answering method based on the local knowledge base is implemented.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed.
The units may or may not be physically separate, and the components shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a readable storage medium. Based on this understanding, the part of the technical solution of the embodiments of the invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (such as a single-chip microcomputer or a chip) or a processor to perform all or part of the steps of the methods described in the embodiments of the invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the present invention is not limited thereto, but any changes or substitutions within the technical scope of the present invention should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A banking-domain question-answering method based on a local knowledge base, characterized by comprising the following steps:
constructing a banking-domain vector model;
constructing a local knowledge vector base: segmenting and vectorizing all documents in the knowledge base, and building a segment vector index organized under multi-level labels;
receiving a user question, locating the question according to the multi-level labels, and outputting the most similar document under a specific business knowledge base;
vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the segment vector index corresponding to the output most-similar document, and outputting similar vector segments;
and judging whether the similar vector segments meet a preset requirement; if so, inputting the similar segments, their context within the most similar document, and the user question into the LLM, and outputting the answer content through the LLM.
2. The local-knowledge-base-based banking-domain question-answering method according to claim 1, wherein the multi-level-label local knowledge vector base combines two levels of labels with one level of index: the first-level labels correspond to the business knowledge bases, the second-level labels correspond to the documents under each business knowledge base, and the third level is the segment vector index within each document of a business knowledge base.
3. The local-knowledge-base-based banking-domain question-answering method according to claim 1, wherein the banking-domain vector model is constructed by fine-tuning m3e-base with banking-domain original text, text keywords and keyword position information, yielding a vertical-domain vector model m3e-bank for the banking domain.
4. The local-knowledge-base-based banking-domain question-answering method according to claim 2, wherein the method of constructing the local knowledge vector base comprises:
extracting summary information from all documents with the LLM, vectorizing the summaries, and extracting keywords from all documents according to the summary information to form the first-level labels of the knowledge-base layer; extracting the subject words of each document with the LLM and combining them with each document's abstract to form the second-level labels of the document layer; semantically segmenting each document and vectorizing the segments with the banking-domain vector model, finally forming a vector knowledge base with two levels of labels and one level of index.
5. The local-knowledge-base-based banking-domain question-answering method according to claim 1, wherein the question-answering method further comprises a model output control method:
computing the common subset of the large-model answer and the retrieved top-1 segment; if (common-subset length / large-model answer length) is greater than or equal to a preset parameter, outputting the final result; otherwise, replying with a fallback response.
6. A local knowledge base-based banking domain question-answering system, characterized by comprising:
a vector model unit, used for constructing a banking-domain vector model;
a knowledge base unit, used for constructing a local knowledge vector base, which comprises establishing multi-level labels, performing fragment vectorization on all documents in the knowledge base, and establishing fragment vector indexes over the document fragment vectors;
a document positioning unit, used for receiving a user question, locating the question according to the multi-level labels, and outputting the most similar document under the corresponding business knowledge base;
a comparison unit, used for vectorizing the user question with the banking-domain vector model, performing similarity matching against the vectors in the fragment vector index library of the output most similar document, and outputting similar vector fragments;
and an output unit, used for judging whether the similar vector fragments meet a preset requirement and, if so, inputting the similar fragments, their context information in the most similar document, and the user question into the LLM model, which outputs the answer content.
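The comparison and output units together amount to a thresholded nearest-fragment lookup. The sketch below illustrates that step only, under the assumption that all vectors are L2-normalized (so the dot product equals cosine similarity); the function name and threshold are illustrative, not from the patent.

```python
import numpy as np


def match_fragments(question_vec, fragment_vecs, fragments, threshold=0.5):
    """Return the most similar fragment and its surrounding context for the LLM prompt, or None.

    fragment_vecs: (n, d) array of L2-normalized fragment vectors for the located document.
    """
    sims = fragment_vecs @ question_vec  # cosine similarities
    i = int(np.argmax(sims))
    if sims[i] < threshold:
        return None  # preset requirement not met; caller replies with the fallback script
    # The similar fragment plus its neighbours, i.e. its context in the most similar document
    context = fragments[max(0, i - 1): i + 2]
    return fragments[i], context
```

The returned fragment and context would then be assembled with the user question into the LLM prompt, as the output unit describes.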
7. The local knowledge base-based banking domain question-answering system according to claim 6, wherein the vector model unit fine-tunes m3e-base with banking-domain original text, text keywords and keyword location information to obtain m3e-bank, a vertical-domain vector model for the banking domain.
8. The local knowledge base-based banking domain question-answering system according to claim 6, further comprising an output control unit for performing a common-subset calculation between the large model answer and the top-1 similar fragment, outputting the final result if the common subset length divided by the large model answer length is greater than or equal to a preset parameter, and replying with a fallback script if it is less than the preset parameter.
9. A banking domain question-answering device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the local knowledge base-based banking domain question-answering method according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium which, when executed by a processor, implements the local knowledge base-based banking domain question-answering method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311580989.3A CN117520515A (en) | 2023-11-24 | 2023-11-24 | Bank field question and answer method, system and device based on local knowledge base |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311580989.3A CN117520515A (en) | 2023-11-24 | 2023-11-24 | Bank field question and answer method, system and device based on local knowledge base |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117520515A true CN117520515A (en) | 2024-02-06 |
Family
ID=89760563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311580989.3A Pending CN117520515A (en) | 2023-11-24 | 2023-11-24 | Bank field question and answer method, system and device based on local knowledge base |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117520515A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117725189A (en) * | 2024-02-18 | 2024-03-19 | 国家超级计算天津中心 | Method for generating questions and answers in professional field and electronic equipment |
CN117725423A (en) * | 2024-02-18 | 2024-03-19 | 青岛海尔科技有限公司 | Method and device for generating feedback information based on large model |
CN117725189B (en) * | 2024-02-18 | 2024-04-16 | 国家超级计算天津中心 | Method for generating questions and answers in professional field and electronic equipment |
CN117725423B (en) * | 2024-02-18 | 2024-05-24 | 青岛海尔科技有限公司 | Method and device for generating feedback information based on large model |
CN117743558A (en) * | 2024-02-20 | 2024-03-22 | 青岛海尔科技有限公司 | Knowledge processing and knowledge question-answering method, device and medium based on large model |
CN117743558B (en) * | 2024-02-20 | 2024-05-24 | 青岛海尔科技有限公司 | Knowledge processing and knowledge question-answering method, device and medium based on large model |
CN118036753A (en) * | 2024-03-25 | 2024-05-14 | 行至智能(北京)技术有限公司 | Large language model reasoning method and system based on multistage knowledge retrieval enhancement |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11816441B2 (en) | Device and method for machine reading comprehension question and answer | |
CN107679039B (en) | Method and device for determining statement intention | |
CN117520515A (en) | Bank field question and answer method, system and device based on local knowledge base | |
CN111291177B (en) | Information processing method, device and computer storage medium | |
CN111767716B (en) | Method and device for determining enterprise multi-level industry information and computer equipment | |
CN116701431A (en) | Data retrieval method and system based on large language model | |
CN107895000B (en) | Cross-domain semantic information retrieval method based on convolutional neural network | |
CN117688163B (en) | Online intelligent question-answering method and device based on instruction fine tuning and retrieval enhancement generation | |
CN110347790B (en) | Text duplicate checking method, device and equipment based on attention mechanism and storage medium | |
CN110895559A (en) | Model training method, text processing method, device and equipment | |
CN112287069A (en) | Information retrieval method and device based on voice semantics and computer equipment | |
CN115795061B (en) | Knowledge graph construction method and system based on word vector and dependency syntax | |
CN113761868B (en) | Text processing method, text processing device, electronic equipment and readable storage medium | |
CN110188359B (en) | Text entity extraction method | |
CN112966117A (en) | Entity linking method | |
Ma et al. | Another look at DPR: reproduction of training and replication of retrieval | |
Dilawari et al. | Neural attention model for abstractive text summarization using linguistic feature space | |
CN113342949A (en) | Matching method and system of intellectual library experts and topic to be researched | |
CN115203388A (en) | Machine reading understanding method and device, computer equipment and storage medium | |
CN113590768B (en) | Training method and device for text relevance model, question answering method and device | |
CN114637852A (en) | Method, device and equipment for extracting entity relationship of medical text and storage medium | |
CN113849639A (en) | Method and system for constructing theme model categories of urban data warehouse | |
CN113761104A (en) | Method and device for detecting entity relationship in knowledge graph and electronic equipment | |
CN112988952A (en) | Multi-level-length text vector retrieval method and device and electronic equipment | |
Yomie et al. | Application of the multilingual acoustic representation model XLSR-53 for the transcription of Ewondo |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||