CN116932694A - Intelligent retrieval method, device and storage medium for knowledge base - Google Patents
- Publication number: CN116932694A
- Application number: CN202310889263.1A
- Authority: CN (China)
- Prior art keywords: knowledge, feature, ranking, fine, text
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/3344 — Information retrieval; Querying; Query execution using natural language analysis
- G06F16/3329 — Information retrieval; Querying; Natural language query formulation or dialogue systems
- G06F16/338 — Information retrieval; Querying; Presentation of query results
- G06F40/30 — Handling natural language data; Semantic analysis
- G06N3/048 — Neural networks; Activation functions
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses an intelligent retrieval method, device and storage medium for a knowledge base, belonging to the technical fields of information processing and artificial intelligence. The method comprises the following steps: acquiring a user question, constructing a feature fusion model, and constructing a fine-ranking language model; extracting a knowledge text feature set from a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database; extracting question features from the user question based on the feature fusion model; matching the question features against the knowledge database one by one, and calculating the similarity between the question features and the knowledge text features of all knowledge points in the knowledge database based on a similarity algorithm to obtain a coarse ranking set; and fine-ranking the knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine-ranking language model to obtain a retrieval result. The method improves the ranking accuracy and the generalization of intelligent retrieval for a knowledge base.
Description
Technical Field
The present application relates to the fields of information processing and artificial intelligence, and in particular to an intelligent retrieval method, device and storage medium for a knowledge base.
Background
An intelligent retrieval system for a knowledge base is a new mode of information retrieval, developed on top of existing information retrieval technologies and models. It differs from conventional information retrieval in that it emphasizes semantics: rather than relying on literal, mechanical matching, it starts from the semantics and concepts of a text and can reveal the text's intrinsic meaning. By indexing at the semantic and conceptual level, intelligent retrieval for a knowledge base improves the recall ratio and precision ratio of retrieval and reduces the retrieval burden on the user.
In prior-art intelligent retrieval systems for knowledge bases at the scale of tens of thousands of entries, the common approach is to coarse-rank the knowledge base first and then fine-rank it. Coarse ranking means that, after a neural network language model extracts the text features of the knowledge base, the distance is calculated between the question's text feature and the text feature extracted in advance for each knowledge point in the knowledge base; since the question's feature is extracted only once in this process, matching at the ten-thousand scale is fast. Fine ranking means applying an interactive text matching algorithm for high-precision ranking over the hundreds of knowledge items output by the coarse ranking, finally outputting Top-5, Top-10 and similar results. However, the information loss of the text features is high in the coarse ranking stage, and in the fine ranking stage the precision is insufficient due to the insufficient generalization capability of contrastive learning.
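The coarse-then-fine pipeline described above can be sketched in a few lines (a minimal Python illustration; the function names and the cosine-similarity choice for the coarse stage are assumptions, not the patent's exact implementation):

```python
import numpy as np

def coarse_rank(query_vec, knowledge_vecs, top_n=100):
    """Coarse ranking: compare the single extracted question feature against
    every pre-extracted knowledge-point feature by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    k = knowledge_vecs / np.linalg.norm(knowledge_vecs, axis=1, keepdims=True)
    sims = k @ q                       # one dot product per knowledge point
    order = np.argsort(-sims)[:top_n]  # indices of the best candidates
    return order, sims[order]

def fine_rank(question, candidates, match_fn):
    """Fine ranking: re-score only the coarse candidates with a slower,
    interactive text-matching function, then sort descending."""
    return sorted(candidates, key=lambda c: match_fn(question, c), reverse=True)
```

Only the cheap vector comparison touches all knowledge points; the expensive interactive matcher sees just the coarse survivors.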
Therefore, how to improve the ranking accuracy and the generalization of intelligent retrieval for a knowledge base is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide an intelligent retrieval method, device and storage medium for a knowledge base, which are used to solve the following technical problem: how to improve the ranking accuracy and the generalization of intelligent retrieval for a knowledge base.
In a first aspect, an embodiment of the present application provides an intelligent retrieval method for a knowledge base, the method comprising: acquiring a user question, constructing a feature fusion model, and constructing a fine-ranking language model; extracting a knowledge text feature set from a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database; extracting question features from the user question based on the feature fusion model; matching the question features against the knowledge database one by one, calculating the similarity between the question features and the knowledge text features of the knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarities in descending order to obtain a coarse ranking set; and fine-ranking the knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine-ranking language model, and arranging the matching results in descending order of matching degree to obtain a retrieval result.
In one possible implementation, constructing the feature fusion model specifically includes: obtaining a standard representation feature model and a standard interaction vector feature model; splicing the output ends of the standard representation feature model and the standard interaction vector feature model based on a preset Softmax function to obtain a preliminary feature fusion model; and testing the preliminary feature fusion model, repeatedly adjusting its parameters, and taking the preliminary feature fusion model as the feature fusion model when its output is greater than a parameter threshold.
In one possible implementation, constructing the fine-ranking language model specifically includes: obtaining a standard fine-ranking language model; training the standard fine-ranking language model based on a binary contrast learning loss function to obtain a first fine-ranking language model; and adding adversarial noise during the training stage of the first fine-ranking language model to obtain the fine-ranking language model.
In one possible implementation, training the standard fine-ranking language model based on the binary contrast learning loss function to obtain the first fine-ranking language model specifically includes: replacing the loss function in the standard fine-ranking language model with the binary contrast loss function to obtain a preliminary fine-ranking language model; and training the preliminary fine-ranking language model, and obtaining the first fine-ranking language model when the output of the preliminary fine-ranking language model is greater than a preset parameter threshold; wherein the binary contrast loss function is expressed by the following formula:

L = -log(σ(r_ψ(x, y_i) - r_ψ(x, y_j)))
where L is the binary contrast loss function, x is a question feature, y_i and y_j are knowledge text features, r_ψ(x, y_i) is the similarity of the sample (x, y_i), r_ψ(x, y_j) is the similarity of the sample (x, y_j), and σ is the Sigmoid function. Alternatively, the binary contrast loss function may be expressed by the following formula:

L = -(1 / C(k, 2)) · E_{(x, y_i, y_j)} [ log(σ(r_ψ(x, y_i) - r_ψ(x, y_j))) ]

where L is the binary contrast loss function, x is a question feature, y_i and y_j are knowledge text features, k is the number of samples of x, y_i and y_j, C(k, 2) is the sampling function selecting two samples from the k samples for calculating the loss function, E denotes the average expectation over x, y_i and y_j, r_ψ(x, y_i) is the similarity of the sample (x, y_i), r_ψ(x, y_j) is the similarity of the sample (x, y_j), and σ is the Sigmoid function.
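The single-pair form of the loss can be checked numerically (a hedged sketch; the function names are illustrative, with r_pos and r_neg standing for r_ψ(x, y_i) and r_ψ(x, y_j)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_contrast_loss(r_pos, r_neg):
    """L = -log(sigma(r_psi(x, y_i) - r_psi(x, y_j))) for one pair, where
    r_pos is the similarity of the preferred sample and r_neg of the other."""
    return -np.log(sigmoid(r_pos - r_neg))
```

When the two similarities are equal the loss is log 2, and it decreases monotonically as the preferred sample's similarity pulls ahead.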
In one possible implementation, adding adversarial noise during the training stage of the first fine-ranking language model to obtain the fine-ranking language model specifically includes: adding adversarial noise to the input preset training set before starting to train the first fine-ranking language model, and training the first fine-ranking language model under this noise; and obtaining the fine-ranking language model when the output of the first fine-ranking language model is greater than a preset parameter threshold; wherein the adversarial noise is expressed by the following formula:

δ = ε · g / ‖g‖₂,  with  g = ∇ L(x, y; θ)

where g is the general loss gradient, ∇ denotes taking the gradient, L is the binary contrast loss function, x is a question feature, y is a knowledge text feature, θ is the neural network parameter, and ε is a hyperparameter.
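The perturbation described here resembles FGM-style adversarial training; a minimal sketch under that assumption (the function name and the default ε are illustrative):

```python
import numpy as np

def adversarial_noise(grad, epsilon=1e-2):
    """Scale the loss gradient to a fixed L2 norm epsilon, producing a small
    perturbation to add to the training inputs."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)  # no direction of steepest ascent
    return epsilon * grad / norm
```

The perturbation points along the direction that most increases the loss, so training on the perturbed inputs forces the model to stay correct in a small neighborhood of each sample.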
In one possible implementation, matching the question features against the knowledge database one by one and calculating the similarity between the question features and the knowledge text features of the knowledge points in the knowledge database based on a similarity algorithm specifically includes: acquiring the question features and the knowledge database; matching the question features with the knowledge text features in the knowledge database one by one; and computing, through a similarity calculation formula, the similarity between the question features and each knowledge text feature in the knowledge database; the similarity calculation formula is expressed by the following formula:
Sim(T_1, T_2) = Softmax( Σ_i α_i f_i(φ_i(T_1), φ_i(T_2)) + Σ_j β_j f_j(φ_j(T_1), φ_j(T_2)) )

where Sim(T_1, T_2) is the similarity between the question feature and the knowledge text feature, φ_i and φ_j denote the different representation-vector feature encoders and interaction feature encoders, f_i and f_j denote the text representation-vector feature extraction network and the interactive feature extraction network respectively, T_1 is the question feature, T_2 is the knowledge text feature, and the α and β parameters in the formula satisfy Σα_i + Σβ_j = 1.
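Given per-encoder scores for each candidate, the weighted fusion under the Σα_i + Σβ_j = 1 constraint can be sketched as follows (a hedged reading in which the softmax normalizes across candidates; all names are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # numerically stable softmax
    return e / e.sum()

def fused_similarity(rep_scores, inter_scores, alphas, betas):
    """rep_scores: (candidates, representation encoders); inter_scores:
    (candidates, interaction encoders). Encoder weights must sum to 1."""
    assert abs(np.sum(alphas) + np.sum(betas) - 1.0) < 1e-9
    combined = rep_scores @ np.asarray(alphas) + inter_scores @ np.asarray(betas)
    return softmax(combined)
```

The α weights favor the fast static representation encoders and the β weights the interactive encoders; tuning their split trades speed against context sensitivity.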
In one possible implementation, requiring the α and β parameters in the formula to satisfy Σα_i + Σβ_j = 1 specifically includes: solving the values of α and β analytically by the Lagrange multiplier method and/or iterating them continuously by the gradient descent method.
In one possible implementation, fine-ranking the knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine-ranking language model, and arranging the matching results in descending order of matching degree to obtain a retrieval result, includes: obtaining the knowledge base corresponding to the coarse ranking set; calculating the matching degree between the user question and each knowledge point in that knowledge base; and arranging the matching degrees in descending order, selecting the knowledge point with the highest matching degree as the optimal retrieval result, and selecting a preset first number of subsequent knowledge points in descending order as auxiliary retrieval results, thereby obtaining the retrieval result.
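The final selection step can be sketched as follows (the variable names and the pair-of-lists return shape are assumptions for illustration):

```python
def select_results(scored_points, aux_count=5):
    """scored_points: list of (knowledge_point, matching_degree) pairs.
    Returns the best match plus up to aux_count auxiliary results,
    all sorted by matching degree in descending order."""
    ranked = sorted(scored_points, key=lambda p: p[1], reverse=True)
    best = ranked[0]
    auxiliary = ranked[1:1 + aux_count]
    return best, auxiliary
```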
In a second aspect, an embodiment of the present application further provides an intelligent retrieval device for a knowledge base, the device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to: acquire a user question, construct a feature fusion model, and construct a fine-ranking language model; extract a knowledge text feature set from a preset knowledge base based on the feature fusion model, and associate the knowledge text feature set with the knowledge base to obtain a knowledge database; extract question features from the user question based on the feature fusion model; match the question features against the knowledge database one by one, calculate the similarity between the question features and the knowledge text features of the knowledge points in the knowledge database based on a similarity algorithm, and arrange the similarities in descending order to obtain a coarse ranking set; and fine-rank the knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine-ranking language model, arranging the matching results in descending order of matching degree to obtain a retrieval result.
In a third aspect, an embodiment of the present application further provides a nonvolatile computer storage medium for intelligent retrieval of a knowledge base, storing computer-executable instructions, wherein the computer-executable instructions are configured to: acquire a user question, construct a feature fusion model, and construct a fine-ranking language model; extract a knowledge text feature set from a preset knowledge base based on the feature fusion model, and associate the knowledge text feature set with the knowledge base to obtain a knowledge database; extract question features from the user question based on the feature fusion model; match the question features against the knowledge database one by one, calculate the similarity between the question features and the knowledge text features of the knowledge points in the knowledge database based on a similarity algorithm, and arrange the similarities in descending order to obtain a coarse ranking set; and fine-rank the knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine-ranking language model, arranging the matching results in descending order of matching degree to obtain a retrieval result.
The embodiments of the present application provide an intelligent retrieval method, device and storage medium for a knowledge base. A feature fusion model and a fine-ranking language model are constructed; the features of the knowledge base are acquired through the feature fusion model; a user question is acquired and its question features are extracted based on the feature fusion model. First, in the coarse ranking stage, a coarse ranking set is obtained by comparing the question features with the knowledge text features, which reduces the overall retrieval time. Then, in the fine ranking stage, the user question is matched against the knowledge points of the knowledge base corresponding to the coarse ranking set through the text matching algorithm of the fine-ranking language model, which has been trained to resist noise interference, and the retrieval result is given according to the matching results. In this way, the ranking accuracy and the generalization of intelligent retrieval for a knowledge base are improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flowchart of an intelligent retrieval method for a knowledge base according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an internal structure of an intelligent retrieval device for a knowledge base according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides an intelligent retrieval method, equipment and storage medium for a knowledge base, which are used for solving the following technical problems: how to improve the arrangement accuracy of the intelligent retrieval for the knowledge base and the generalization of the intelligent retrieval for the knowledge base.
The following describes the technical scheme provided by the embodiment of the application in detail through the attached drawings.
Fig. 1 is a flowchart of an intelligent retrieval method for a knowledge base according to an embodiment of the present application. As shown in fig. 1, the intelligent retrieval method for a knowledge base provided by the embodiment of the application specifically includes the following steps:
Step 1, acquiring a user question, constructing a feature fusion model and constructing a fine-ranking language model.
Step 11, constructing a feature fusion model.
Before user questions are acquired, the feature fusion model needs to be built. The feature fusion model is constructed because, if the features used in the coarse ranking stage are based only on text representation features (such as Word2Vec) or only on text interaction features (such as BERT), feature information is lost. The text representation feature is a static feature: it captures only the relative positions and number of the characters in a piece of text, and cannot derive the meaning of the characters from their context. The interactive text feature is a dynamic feature: it can capture the interdependence among the characters in a piece of text; one word has different interaction vectors under different semantics, and the meaning of the characters can be derived from the context.
In a specific example, two simple knowledge points exist in the knowledge base:

Knowledge point 1. Want to eat steamed buns: a method for ordering steamed buns;

Knowledge point 2. Want to eat watermelon: a method for ordering watermelon.
The user asks the following questions in an open format:
Question 1. The watermelon is in season, I will eat this right away.

Question 2. The watermelon is good today, but the steamed buns are better; I will eat this today.
With the text representation feature, the characters "watermelon, eat this" can be extracted for Question 1, and Knowledge point 2 is matched to Question 1. For Question 2, the characters "watermelon, steamed buns, eat this" are extracted, and here both Knowledge point 1 and Knowledge point 2 correspond to Question 2. This uncertainty arises because the training data uses a fixed knowledge space and does not take the user's individual phrasing into account, so the feature has a strong static representation of the knowledge space but lacks a dynamic, interactive representation of the user question.
With the interaction vector feature, the characters "watermelon, steamed buns" can be extracted for Question 2, and it can be inferred that "eat this" refers to the steamed buns, so Question 2 corresponds to Knowledge point 1. Because this feature is trained on the relationship between user questions and knowledge points, its interactive representation of the user question is stronger.
The text representation feature is fast to recognize and the interactive feature is accurate, but the text representation feature cannot understand unseen phrasings of the same question, while the interactive text feature cannot be precomputed for a static knowledge base. It is therefore desirable to fuse the text representation features with the interactive text features.
In a specific example, the standard representation feature model that extracts text representation features and the standard interaction vector feature model that extracts text interaction features are fused, so that the feature fusion model can simultaneously extract the text representation features and the text interaction features of a text.
And step 111, acquiring a standard representation feature model and a standard interaction vector feature model.
The standard representation feature model is a model capable of extracting text representation features, and the standard interaction vector feature model is a model capable of extracting text interaction vector features. The standard representation feature model and the standard interaction vector feature model are both models in the prior art, and are not described herein.
And 112, splicing the standard representation feature model and the output end of the standard interaction vector feature model based on a preset Softmax function to obtain a preliminary feature fusion model.
The Softmax function is a normalized exponential function. In the embodiment of the present application it is used to normalize and integrate the feature output end of the standard representation feature model with the feature output end of the standard interaction vector feature model, so as to obtain the preliminary feature fusion model.
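A minimal sketch of this splicing step, assuming the two models emit fixed-length feature vectors (the concatenate-then-softmax arrangement is one illustrative reading of the text, not the patent's exact architecture):

```python
import numpy as np

def splice_features(rep_feat, inter_feat):
    """Concatenate the static representation feature with the dynamic
    interaction feature and normalize the result with softmax."""
    fused = np.concatenate([rep_feat, inter_feat])
    e = np.exp(fused - fused.max())  # subtract max for numerical stability
    return e / e.sum()
```

The resulting vector sums to 1, so downstream similarity comparisons see both feature families on a common scale.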
And 113, testing the preliminary feature fusion model, repeatedly adjusting parameters of the preliminary feature fusion model, and constructing a feature fusion model when the output of the preliminary feature fusion model is larger than a parameter threshold.
Because the standard representation feature model and the standard interaction vector feature model are different models, the preliminary feature fusion model needs to be tuned after their output ends are integrated, so as to improve its accuracy and speed. In a specific debugging process, the output results of the individual standard representation feature model and standard interaction vector feature model are collated, and duplicate parts are deleted to obtain a comparison result. The output of the preliminary fusion model is then compared with this comparison result and their similarity is checked; when the similarity is below a certain value, the parameters of the preliminary fusion model are adjusted and the process is repeated, until the similarity is greater than a preset parameter threshold, at which point the feature fusion model is output. It is understood that the preset parameter threshold may be set manually.
And 12, constructing a fine-ranking language model.
The fine-ranking language model is used to compare the similarity of two pieces of text, but the standard fine-ranking language model has insufficient learning generalization capability, and learning generalization capability is a key factor in guaranteeing fine-ranking precision under uncertain question inputs. A fine-ranking language model with stronger learning generalization capability therefore needs to be obtained.
And step 121, acquiring a standard fine-ranking language model.
The standard fine-ranking language model is used to compare the similarity of two pieces of text, but its learning generalization capability is insufficient. The standard fine-ranking language model is prior art and is not described here.
Step 122, training the standard fine-ranking language model based on the binary contrast learning loss function to obtain a first fine-ranking language model.
In this embodiment, two binary contrast learning loss functions are provided to train the standard fine-ranking language model; a standard fine-ranking language model trained with these binary contrast learning loss functions can improve the degree of information extraction from knowledge points and user questions.
Wherein the first binary contrast loss function is expressed by the following formula:
L = -log(σ(r_ψ(x, y_i) - r_ψ(x, y_j)))

where L is the binary contrast loss function, x is a question feature, y_i and y_j are knowledge text features, r_ψ(x, y_i) is the similarity of the sample (x, y_i), r_ψ(x, y_j) is the similarity of the sample (x, y_j), and σ is the Sigmoid function;
the second binary contrast loss function may be expressed by the following formula:
L = -(1 / C(k, 2)) · E_{(x, y_i, y_j)} [ log(σ(r_ψ(x, y_i) - r_ψ(x, y_j))) ]

where L is the binary contrast loss function, x is a question feature, y_i and y_j are knowledge text features, k is the number of samples of x, y_i and y_j, C(k, 2) is the sampling function, i.e. the selection of two samples from the set of k samples for calculating the loss function, E denotes the average expectation over x, y_i and y_j, r_ψ(x, y_i) is the similarity of the sample (x, y_i), r_ψ(x, y_j) is the similarity of the sample (x, y_j), and σ is the Sigmoid function.
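The averaged form over all C(k,2) pairs can be sketched numerically (a hedged reading: the scores are assumed to be ordered from most to least relevant, so the earlier element of each pair is the preferred one):

```python
import numpy as np
from itertools import combinations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def averaged_contrast_loss(scores):
    """scores: similarities r_psi(x, y) sorted from most to least relevant.
    Averages -log(sigma(r_i - r_j)) over every (better, worse) pair,
    i.e. over all C(k, 2) combinations of the k samples."""
    pairs = list(combinations(range(len(scores)), 2))
    losses = [-np.log(sigmoid(scores[i] - scores[j])) for i, j in pairs]
    return sum(losses) / len(pairs)
```

Averaging over every pair uses all k ranked samples in one update instead of a single positive/negative pair.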
Step 123, adding adversarial noise during the training stage of the first fine-ranking language model to obtain the fine-ranking language model.
Because the learning generalization capability of the standard fine-ranking language model is insufficient, and the learning generalization capability of the first fine-ranking language model trained by the binary contrast learning loss function is unchanged, the learning generalization capability of the first fine-ranking language model needs to be improved, namely, random noise is added to the input end of the first fine-ranking language model to interfere when the first fine-ranking language model is trained, so that the learning generalization capability of the first fine-ranking language model to uncertainty problems is improved.
Wherein the anti-noise function is expressed by the following formula:

δ = ε · g / ‖g‖_2, with g = ∇_x L(x, y; θ)

In the above formula, g is the general loss gradient, ∇ denotes taking the gradient, L is the binary contrast loss function, x is a question feature, y is a knowledge text feature, θ is a neural network parameter, and ε is a hyperparameter.
It can be understood that different noises affect the first fine-ranking language model differently, so the noise function is made self-adversarial through its own loss; in this way many different noises perturb the first fine-ranking language model and improve its learning generalization capability. When the anti-noise function decays to 0, the first fine-ranking language model has undergone multiple rounds of random noise interference, and this noise-perturbed model is output as the fine-ranking language model.
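A minimal numeric sketch of this style of gradient-based input perturbation (FGM-like, which matches the δ = ε · g / ‖g‖_2 form above; the toy scalar loss and helper names are assumptions, not the patent's model):

```python
def toy_loss(x):
    # toy scalar loss standing in for the binary contrast loss L(x, y; theta)
    return (x - 2.0) ** 2

def num_grad(f, x, h=1e-6):
    # central-difference estimate of the gradient g = dL/dx
    return (f(x + h) - f(x - h)) / (2.0 * h)

def adversarial_noise(f, x, eps=0.1):
    # perturbation delta = eps * g / ||g||_2 (scalar case, so the
    # L2 norm is just the absolute value of the gradient)
    g = num_grad(f, x)
    norm = abs(g)
    return eps * g / norm if norm > 0 else 0.0

x = 0.0
delta = adversarial_noise(toy_loss, x, eps=0.1)
print(toy_loss(x + delta) > toy_loss(x))  # the noise increases the loss
```

Training on inputs shifted by such loss-increasing noise is what forces the model to generalize beyond the exact wording of the training questions.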
And step 13, acquiring a user problem.
The acquired user questions are text questions input by the user, and the way a user question is acquired differs with the application scenario of this embodiment.

For example, on a web page, the user question may be obtained through a question edit box or through a microphone.
And 2, extracting a knowledge text feature set of a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database.
The knowledge text feature set includes a plurality of knowledge text features, and the knowledge base is a database storing a plurality of knowledge points; when the user inputs a question, one or more knowledge points closest to that question are extracted from the knowledge base as answers.

After the feature fusion model is obtained, it is used to extract the representation features and interaction vector features of the knowledge points in the preset knowledge base, and the two are fused to form knowledge text features. Each knowledge text feature corresponds to one knowledge point of the knowledge base; that is, the knowledge text feature set is associated with the knowledge base, thereby obtaining the knowledge database.
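The association step above can be sketched as follows; `fused_feature` is a toy stand-in for the feature fusion model (a letter-frequency vector, chosen only so the example is runnable), and the dictionary layout is an assumption:

```python
def fused_feature(text):
    # toy stand-in for the feature fusion model's fused
    # representation/interaction feature
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def build_knowledge_database(knowledge_base):
    # associate each knowledge point with its extracted knowledge text
    # feature, preserving the feature-to-knowledge-point mapping
    return [{"knowledge_point": kp, "feature": fused_feature(kp)}
            for kp in knowledge_base]

kb = ["reset your password from settings", "contact support by email"]
db = build_knowledge_database(kb)
print(len(db))  # one entry per knowledge point
```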
And 3, extracting the problem characteristics of the user problem based on the characteristic fusion model.
The question features of the user question are extracted in the same way as in step 2 and are not described again here.
And 4, matching the problem features with the knowledge database one by one, calculating the similarity between the problem features and knowledge text features of all knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarity in order from large to small to obtain a coarse ranking set.
The similarity algorithm calculates the similarity between a question feature and a knowledge text feature through a similarity calculation formula. After the user question is acquired, its question features are matched one by one against the knowledge text feature set in the knowledge database; because only coarse ranking is performed at this stage, the features merely need to be sorted. Arranging the results in order of similarity from large to small yields the coarse ranking set. It should be understood that the coarse ranking set is not precise. For example, suppose knowledge point a, corresponding to knowledge text feature a, has a true similarity of 0.98 to the user question, while feature a scores only 0.90 against the question feature; and knowledge point b, corresponding to knowledge text feature b, has a true similarity of 0.95 and feature b also scores 0.95. In the coarse ranking set, knowledge text feature a then ranks below knowledge text feature b, which obviously does not match the actual situation; the coarse ranking set therefore only provides a preliminary ordering of the knowledge points.
Step 41, obtaining the problem characteristics and the knowledge database.
The question features and the knowledge database have already been obtained in steps 3 and 2 respectively, so they are not described in detail here.
And 42, matching the question features with knowledge text features in the knowledge database one by one.
The question features are combined one by one with the knowledge text features in the knowledge database to form a plurality of pairs. As a specific example, if the knowledge database contains 100 knowledge points, it contains 100 knowledge text features; matching the question features against these 100 knowledge text features forms 100 pairs, and the similarity of each pair is calculated with the similarity calculation formula in step 43.
And 43, calculating the problem features and each knowledge text feature in the knowledge database through a similarity calculation formula so as to obtain the similarity of the problem features and each knowledge text feature in the knowledge database.
The similarity of the question features and the knowledge text features is calculated automatically through the similarity calculation formula, which is expressed as follows:

Sim(T_1, T_2) = Softmax(Σ_i α_i f_i(φ_i(T_1), φ_i(T_2)) + Σ_j β_j f_j(φ_j(T_1), φ_j(T_2)))

wherein Sim(T_1, T_2) is the similarity of the question text and the knowledge text, φ_i and φ_j denote the different representation vector feature encoders and interaction feature encoders, f_i and f_j denote the text representation vector feature extraction network and the interactive feature extraction network respectively, T_1 is a question feature, and T_2 is a knowledge text feature. The α and β parameters in the formula satisfy Σα_i + Σβ_j = 1. Here α_i and β_j are parameters of the similarity calculation formula, and the formula can be optimized by adjusting α_i and β_j; the values of α and β satisfying Σα_i + Σβ_j = 1 can be solved by the Lagrange multiplier method, or iterated continuously by the gradient descent method, both of which are prior art and are not described here in detail.
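The weighted fusion of the two score families can be sketched as follows (a simplification under stated assumptions: scalar per-encoder scores are passed in directly, and the outer Softmax — which in practice would normalize scores across candidates — is omitted for a single pair):

```python
def fused_similarity(rep_sims, inter_sims, alphas, betas):
    # weighted fusion of representation-feature scores (alpha terms) and
    # interaction-feature scores (beta terms); the weights must satisfy
    # sum(alpha) + sum(beta) = 1
    assert abs(sum(alphas) + sum(betas) - 1.0) < 1e-9
    return (sum(a * s for a, s in zip(alphas, rep_sims))
            + sum(b * s for b, s in zip(betas, inter_sims)))

score = fused_similarity([0.9, 0.8], [0.7], [0.3, 0.3], [0.4])
print(round(score, 2))  # 0.3*0.9 + 0.3*0.8 + 0.4*0.7 = 0.79
```

Because the weights sum to one, the fused score stays a convex combination of the individual encoder scores, so it remains on the same scale as its inputs.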
The similarity calculation formula is optimized as follows:

Select a question feature and a knowledge text feature whose similarity is 1 as T_1 and T_2, input them into the similarity calculation formula, and observe the output similarity; the closer the output is to 1, the better the formula is optimized. An optimization threshold is set, and when the output similarity is smaller than this threshold, α_i and β_j are modified until the output similarity is greater than or equal to the threshold. It should be understood that the optimization threshold may be set manually.
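One way the gradient-descent route mentioned above could look, sketched under the assumption that per-weight gradients are available; the renormalization step keeps the Σα_i + Σβ_j = 1 constraint that the Lagrange-multiplier analysis enforces (function name and learning rate are illustrative):

```python
def update_weights(alphas, betas, grads_a, grads_b, lr=0.1):
    # one gradient-descent step on (alpha, beta), clipped to stay
    # positive, then renormalized so the weights again sum to one
    a = [max(1e-6, x - lr * g) for x, g in zip(alphas, grads_a)]
    b = [max(1e-6, x - lr * g) for x, g in zip(betas, grads_b)]
    total = sum(a) + sum(b)
    return [x / total for x in a], [x / total for x in b]

a, b = update_weights([0.5], [0.5], [0.2], [-0.2])
print(round(a[0] + b[0], 6))  # constraint restored: 1.0
```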
Step 44, selecting knowledge text features of a preset second proportion in order from high to low similarity to form the coarse ranking set.

After the similarity between the question feature and the knowledge text features is calculated, the knowledge text features are arranged in order of similarity from large to small, and the top second-proportion of the knowledge text features in the knowledge database is selected in that order. In a specific example, the second proportion is usually set to 10%; that is, from 1000 knowledge points, the 100 knowledge text features with the highest similarity are selected to form the coarse ranking set.
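Steps 41–44 together amount to a score-sort-truncate loop, sketched below; the dot-product scorer and the dictionary layout are illustrative assumptions, not the patent's actual similarity formula:

```python
def coarse_rank(question_feature, knowledge_db, similarity, proportion=0.10):
    # score every knowledge text feature against the question feature,
    # sort descending, and keep the top `proportion` as the coarse set
    scored = sorted(knowledge_db,
                    key=lambda e: similarity(question_feature, e["feature"]),
                    reverse=True)
    keep = max(1, int(len(scored) * proportion))
    return scored[:keep]

def dot(u, v):
    # toy similarity scorer
    return sum(a * b for a, b in zip(u, v))

db = [{"feature": [1.0, 0.0]}, {"feature": [0.5, 0.5]}, {"feature": [0.0, 1.0]}]
top = coarse_rank([1.0, 0.0], db, dot, proportion=0.34)
print(len(top))  # keeps 1 of 3 entries
```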
And 5, finely arranging knowledge points of the knowledge base corresponding to the coarse arrangement set through a text matching algorithm based on the finely arranged language model, and arranging matching results in sequence from high to low in matching degree to obtain a retrieval result.
In actual operation, the coarse ranking stage takes only a few milliseconds, but the coarse ranking set obtained in this stage is not accurate enough and needs to be finely ranked.
Step 51, acquiring the knowledge base corresponding to the coarse ranking set.

The knowledge points corresponding to the knowledge text features in the coarse ranking set are obtained through the coarse ranking set, and these knowledge points form the knowledge base corresponding to the coarse ranking set.
And step 52, calculating the matching degree of the user problem and each knowledge point in the knowledge base.
Coarse ranking is based on similarity calculation between question features and knowledge text features; because the character lengths of the user question and the knowledge points are shortened there, retrieval precision drops even as operation speed improves. Fine ranking therefore calculates on the complete user question and the complete knowledge points, computing their matching degree with the fine-ranking language model.
Step 53, arranging the matching degrees in order from large to small, selecting the knowledge point with the highest matching degree as the optimal retrieval result, and selecting a preset first number of knowledge points in descending order as auxiliary retrieval results, thereby obtaining the retrieval result.

The matching degrees between the user question and the knowledge points corresponding to the coarse ranking set are arranged in order from large to small, yielding a ranking list. The first entry is the knowledge point that best matches the user question and is usually taken as the optimal retrieval result; to guard against the case where the optimal retrieval result is not the knowledge point the user needs, a preset first number of further knowledge points is also selected, in descending order of matching degree, as auxiliary retrieval results.
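The fine-ranking stage (steps 51–53) can be sketched as a re-score of the full texts followed by a best/auxiliary split; the Jaccard word-overlap matcher below is a toy stand-in for the fine-ranking language model, and `first_number` is the preset first quantity:

```python
def fine_rank(user_question, coarse_points, match_score, first_number=2):
    # re-score the complete text of each coarse-set knowledge point against
    # the complete user question, then split the best result from the
    # auxiliary results
    ranked = sorted(coarse_points,
                    key=lambda kp: match_score(user_question, kp),
                    reverse=True)
    return ranked[0], ranked[1:1 + first_number]

def jaccard(question, knowledge_point):
    # toy matcher standing in for the fine-ranking language model
    q = set(question.lower().split())
    k = set(knowledge_point.lower().split())
    return len(q & k) / max(1, len(q | k))

best, auxiliary = fine_rank("how to reset password",
                            ["billing help", "reset password steps",
                             "login issues"],
                            jaccard)
print(best)
```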
The above is a method embodiment of the present application. Based on the same inventive concept, the embodiment of the application also provides an intelligent retrieval device for a knowledge base, and the structure of the intelligent retrieval device is shown in fig. 2.
Fig. 2 is a schematic diagram of an internal structure of an intelligent retrieval device for a knowledge base according to an embodiment of the present application. As shown in fig. 2, the apparatus includes:
at least one processor 201;
and a memory 202 communicatively coupled to the at least one processor;
wherein the memory 202 stores instructions executable by the at least one processor, the instructions being executable by the at least one processor 201 to enable the at least one processor 201 to: acquiring user problems, constructing a feature fusion model and constructing a fine-ranking language model; extracting a knowledge text feature set of a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database; extracting problem features of the user problem based on a feature fusion model; matching the problem features with the knowledge database one by one, calculating the similarity between the problem features and knowledge text features of knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarity in order from large to small to obtain a coarse ranking set; and carrying out fine ranking on knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine ranking language model, and arranging matching results in sequence from high to low in matching degree to obtain retrieval results.
Some embodiments of the application provide a non-volatile computer storage medium corresponding to the intelligent retrieval of a knowledge base of fig. 1, storing computer executable instructions configured to: acquiring user problems, constructing a feature fusion model and constructing a fine-ranking language model; extracting a knowledge text feature set of a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database; extracting problem features of the user problem based on a feature fusion model; matching the problem features with the knowledge database one by one, calculating the similarity between the problem features and knowledge text features of knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarity in order from large to small to obtain a coarse ranking set; and carrying out fine ranking on knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine ranking language model, and arranging matching results in sequence from high to low in matching degree to obtain retrieval results.
The embodiments of the present application are described in a progressive manner; the same and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, since the device and medium embodiments are substantially similar to the method embodiment, their description is relatively brief; for relevant points, refer to the description of the method embodiment.
The system, the medium and the method provided by the embodiment of the application are in one-to-one correspondence, so that the system and the medium also have similar beneficial technical effects to the corresponding method, and the beneficial technical effects of the method are explained in detail above, so that the beneficial technical effects of the system and the medium are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.
Claims (10)
1. An intelligent retrieval method for a knowledge base, the method comprising:
acquiring user problems, constructing a feature fusion model and constructing a fine-ranking language model;
extracting a knowledge text feature set of a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database;
extracting problem features of the user problem based on a feature fusion model;
matching the problem features with the knowledge database one by one, calculating the similarity between the problem features and knowledge text features of knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarity in order from large to small to obtain a coarse ranking set;
and carrying out fine ranking on knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine ranking language model, and arranging matching results in sequence from high to low in matching degree to obtain retrieval results.
2. The method for intelligent retrieval of a knowledge base according to claim 1, wherein constructing a feature fusion model specifically comprises:
obtaining a standard representation feature model and a standard interaction vector feature model;
splicing the standard representation feature model and the output end of the standard interaction vector feature model based on a preset Softmax function to obtain a preliminary feature fusion model;
and testing the preliminary feature fusion model, repeatedly adjusting parameters of the preliminary feature fusion model, and constructing a feature fusion model when the output of the preliminary feature fusion model is larger than a parameter threshold.
3. The intelligent retrieval method for a knowledge base according to claim 1, wherein the building of the refined language model specifically comprises:
obtaining a standard fine-ranking language model;
training the standard fine-ranking language model based on the binary contrast learning loss function to obtain a first fine-ranking language model;
adding anti-noise in the training stage of the first fine-ranking language model to obtain the fine-ranking language model.
4. The method for intelligent retrieval of a knowledge base according to claim 3, wherein training the standard fine-ranking language model based on a binary contrast learning loss function to obtain a first fine-ranking language model specifically comprises:
replacing the loss function in the standard fine-ranking language model with the binary contrast loss function to obtain a preliminary fine-ranking language model;
training the preliminary fine-ranking language model, and acquiring a first fine-ranking language model when the output of the preliminary fine-ranking language model is greater than a preset parameter threshold;
wherein the binary contrast loss function is expressed by the following formula:

L = -log(σ(r_ψ(x, y_i) - r_ψ(x, y_j)))

wherein L is the binary contrast loss function, x is a question feature, y_i and y_j are knowledge text features, r_ψ(x, y_i) is the similarity of the sample (x, y_i), r_ψ(x, y_j) is the similarity of the sample (x, y_j), and σ is the Sigmoid function; or,

the binary contrast loss function may be expressed by the following formula:

L = -(1 / C(k, 2)) · E_{(x, y_i, y_j)}[log(σ(r_ψ(x, y_i) - r_ψ(x, y_j)))]

wherein L is the binary contrast loss function, x is a question feature, y_i and y_j are knowledge text features, k is the number of samples of x, y_i and y_j, C(k, 2) is the number of ways of selecting two samples from a set of k for the loss calculation, E_{(x, y_i, y_j)}[·] denotes the average expectation over x, y_i and y_j, r_ψ(x, y_i) is the similarity of the sample (x, y_i), r_ψ(x, y_j) is the similarity of the sample (x, y_j), and σ is the Sigmoid function.
5. The method for intelligent retrieval of a knowledge base according to claim 3, wherein adding anti-noise in the training stage of the first fine-ranking language model to obtain the fine-ranking language model specifically comprises:

adding anti-noise to the input preset training set before starting to train the first fine-ranking language model, and training the first fine-ranking language model based on the anti-noise;
when the output of the first fine-ranking language model is larger than a preset parameter threshold, obtaining the fine-ranking language model; wherein the anti-noise function is expressed by the following formula:

δ = ε · g / ‖g‖_2, with g = ∇_x L(x, y; θ)

wherein g is the general loss gradient, ∇ denotes taking the gradient, L is the binary contrast loss function, x is a question feature, y is a knowledge text feature, θ is a neural network parameter, and ε is a hyperparameter.
6. The intelligent retrieval method for a knowledge base according to claim 1, wherein matching the question features with the knowledge database one by one and calculating the similarity between the question features and the knowledge text features of the knowledge points in the knowledge database based on a similarity algorithm specifically comprises:
acquiring the problem characteristics and the knowledge database;
matching the question features with knowledge text features in the knowledge database one by one;
calculating the problem characteristics and each knowledge text characteristic in the knowledge database through a similarity calculation formula to obtain the similarity of the problem characteristics and each knowledge text characteristic in the knowledge database; the similarity calculation formula is represented by the following formula:
Sim(T_1, T_2) = Softmax(Σ_i α_i f_i(φ_i(T_1), φ_i(T_2)) + Σ_j β_j f_j(φ_j(T_1), φ_j(T_2)))

wherein Sim(T_1, T_2) is the similarity of the question text and the knowledge text, φ_i and φ_j denote the different representation vector feature encoders and interaction feature encoders, f_i and f_j denote the text representation vector feature extraction network and the interactive feature extraction network respectively, T_1 is a question feature, and T_2 is a knowledge text feature, and the α and β parameters in the formula satisfy Σα_i + Σβ_j = 1.
7. The intelligent retrieval method for a knowledge base according to claim 6, wherein obtaining the α and β parameters satisfying Σα_i + Σβ_j = 1 specifically comprises:
resolving alpha and beta values by Lagrangian multiplier, and/or
The values of α and β are iterated continuously by the gradient descent method.
8. The intelligent retrieval method for a knowledge base according to claim 1, wherein the knowledge points of the knowledge base corresponding to the coarse rank set are finely ranked by a text matching algorithm based on the finely ranked language model, and the matching results are arranged according to the order of the matching degree from large to small to obtain the retrieval result, and the method specifically comprises the following steps:
acquiring the knowledge base corresponding to the coarse ranking set;
calculating the matching degree of the user problem and each knowledge point in the knowledge base;
and arranging the similarities in order from large to small, selecting the knowledge point with the highest similarity as the optimal retrieval result, and selecting a preset first number of knowledge points in descending order as auxiliary retrieval results, thereby obtaining the retrieval result.
9. An intelligent retrieval device for a knowledge base, the device comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring user problems, constructing a feature fusion model and constructing a fine-ranking language model;
extracting a knowledge text feature set of a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database;
extracting problem features of the user problem based on a feature fusion model;
matching the problem features with the knowledge database one by one, calculating the similarity between the problem features and knowledge text features of knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarity in order from large to small to obtain a coarse ranking set;
and carrying out fine ranking on knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine ranking language model, and arranging matching results in sequence from high to low in matching degree to obtain retrieval results.
10. A non-volatile computer storage medium storing computer-executable instructions for intelligent retrieval of a knowledge base, the computer-executable instructions configured to:
acquiring user problems, constructing a feature fusion model and constructing a fine-ranking language model;
extracting a knowledge text feature set of a preset knowledge base based on the feature fusion model, and associating the knowledge text feature set with the knowledge base to obtain a knowledge database;
extracting problem features of the user problem based on a feature fusion model;
matching the problem features with the knowledge database one by one, calculating the similarity between the problem features and knowledge text features of knowledge points in the knowledge database based on a similarity algorithm, and arranging the similarity in order from large to small to obtain a coarse ranking set;
and carrying out fine ranking on knowledge points of the knowledge base corresponding to the coarse ranking set through a text matching algorithm based on the fine ranking language model, and arranging matching results in sequence from high to low in matching degree to obtain retrieval results.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310889263.1A | 2023-07-19 | 2023-07-19 | Intelligent retrieval method, device and storage medium for knowledge base

Publications (1)

Publication Number | Publication Date
---|---
CN116932694A (Pending) | 2023-10-24

Cited By (2)

Publication number | Publication date | Title
---|---|---
CN117272073A | 2023-12-22 | Text unit semantic distance pre-calculation method and device, and query method and device
CN117272073B | 2024-03-08 | Text unit semantic distance pre-calculation method and device, and query method and device
Similar Documents
Publication | Title |
---|---|
CN117033608B (en) | Knowledge graph generation type question-answering method and system based on large language model |
CN110147445A (en) | Intension recognizing method, device, equipment and storage medium based on text classification |
CN109471889B (en) | Report accelerating method, system, computer equipment and storage medium |
CN114168619B (en) | Training method and device of language conversion model |
CN110738059B (en) | Text similarity calculation method and system |
CN110968725B (en) | Image content description information generation method, electronic device and storage medium |
CN116932694A (en) | Intelligent retrieval method, device and storage medium for knowledge base |
CN114818729A (en) | Method, device and medium for training semantic recognition model and searching sentence |
CN116881470A (en) | Method and device for generating question-answer pairs |
CN111368093B (en) | Information acquisition method, information acquisition device, electronic equipment and computer readable storage medium |
CN114995729A (en) | Voice drawing method and device and computer equipment |
CN117271558A (en) | Language query model construction method, query language acquisition method and related devices |
CN116861269A (en) | Multi-source heterogeneous data fusion and analysis method in engineering field |
CN114282513A (en) | Text semantic similarity matching method and system, intelligent terminal and storage medium |
CN118093625A (en) | Financial data query method, equipment and medium for ERP system |
CN112988982B (en) | Autonomous learning method and system for computer comparison space |
CN114186059A (en) | Article classification method and device |
CN116881471B (en) | Knowledge graph-based large language model fine tuning method and device |
CN116028626A (en) | Text matching method and device, storage medium and electronic equipment |
CN115544230A (en) | Question answer retrieval processing method and device |
CN112685623B (en) | Data processing method and device, electronic equipment and storage medium |
CN114254622A (en) | Intention identification method and device |
CN112667666A (en) | SQL operation time prediction method and system based on N-gram |
CN118535715B (en) | Automatic reply method, equipment and storage medium based on tree structure knowledge base |
CN117556263B (en) | Sample construction method, code generation method, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||