CN117034135A - API recommendation method based on prompt learning and double information source fusion


Info

Publication number
CN117034135A
CN117034135A (application number CN202310778665.4A)
Authority
CN
China
Prior art keywords
api
candidate
prompt
query
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310778665.4A
Other languages
Chinese (zh)
Inventor
陈希希
王楚越
宗烜逸
程实
文万志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202310778665.4A priority Critical patent/CN117034135A/en
Publication of CN117034135A publication Critical patent/CN117034135A/en
Pending legal-status Critical Current

Classifications

    • G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
    • G06F18/2113 — Selection of the most significant subset of features by ranking or filtering, e.g. using a measure of variance or of feature cross-correlation
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides an API recommendation method based on prompt learning and dual-information-source fusion, comprising the following steps: S1, screening API-related questions from the question-and-answer website Stack Overflow (SO) and capturing the words in the dialogue text; S2, extracting relevant information from the API reference documentation; S3, constructing relations between APIs and question-answer (QA) pairs with a heuristic method, fusing the two kinds of API knowledge; S4, training the BERT-variant model RoBERTa with the fused knowledge representation; S5, inputting a query statement to obtain a set of candidate APIs; S6, re-ranking the candidate APIs by the probabilities computed with prompt learning. The invention uses the fusion of two information sources to improve the efficiency of API retrieval: the API reference documentation and the SO question-and-answer website complement each other and jointly support API query and retrieval. In the model-training stage, unlike previous fine-tuning of the model, the invention takes the query statement as a prompt, providing enough contextual information for the RoBERTa model to adapt to the API recommendation task and improving the accuracy of API recommendation.

Description

API recommendation method based on prompt learning and double information source fusion
Technical Field
The invention belongs to the technical field of API recommendation, and particularly relates to an API recommendation method based on prompt learning and dual-information-source fusion. It aims to better exploit the rich semantic information and linguistic knowledge embedded during pre-training by re-casting the downstream task as a pre-trained language model (PLM) training task, providing a more efficient and accurate API selection and usage experience.
Background
With the advent of the digital age, Application Programming Interfaces (APIs) have become an integral part of modern software development. An API is a set of specifications that defines the manner in which software components interact, allowing data and functionality to be shared between different applications. To assist API search, a number of automated API recommendation methods have been proposed. There are two orthogonal lines of work on this task: information-retrieval-based approaches and neural approaches. However, these approaches ignore the rich semantic and linguistic information in real large-scale corpora.
More recently, pre-trained language models (PLMs) have been introduced to learn API recommendation. Common approaches follow the pre-train-then-fine-tune paradigm to adapt the model to the recommendation task. While these methods improve performance, they do not take full advantage of the rich encyclopedic knowledge in large-scale PLMs, because the downstream task is inconsistent with the PLM training objective. As shown in FIG. 4, a recent pre-train, prompt, and predict paradigm, i.e., prompt learning, has achieved notable success in many applications in the field of Natural Language Processing (NLP). The basis of this new paradigm is to re-cast downstream tasks as PLM training tasks by designing task-related prompt templates and answer-word spaces. The present invention therefore incorporates prompt learning into the downstream API recommendation task.
Disclosure of Invention
The invention aims to provide an API recommendation method based on prompt learning and dual-information-source fusion, which re-ranks the API recommendation list via prompt learning and combines two information sources, SO question-and-answer posts and API reference documentation, to perform API recommendation, thereby improving recommendation accuracy.
In order to solve the above technical problem, an embodiment of the invention provides an API recommendation method based on prompt learning and dual-information-source fusion, which comprises the following steps:
s1, screening API-related questions from Stack Overflow and capturing words in the dialogue text;
s2, extracting relevant information from the API reference document;
s3, constructing a relation between the API and the question-answer QA based on a heuristic method through fusion of two types of API knowledge;
s4, training the BERT-variant model RoBERTa with the fused knowledge representation;
s5, inputting a query statement to obtain a group of candidate APIs;
and S6, reordering the candidate APIs by using the prompt learning calculation probability to finish the API recommendation.
Wherein step S1 comprises the following steps:
s1.1, acquiring API related problems related to an SO question-answering website;
s1.2, deleting long code fragments contained in an HTML tag;
s1.3, splitting the parsed text into words through the Natural Language Toolkit (NLTK) package;
s1.4, forming a word corpus based on the text analysis and word splitting;
s1.5, learning word embedding through a training word2vec model.
Wherein, extracting related entity information and relation from API reference document, step S2 includes the following steps:
s2.1, providing function description and related attribute information of all APIs by using an API reference document, and mainly selecting the API reference document of a PyTorch framework for knowledge representation and acquisition;
s2.2, performing lexical and sentence structure analysis on the text;
s2.3, extracting inheritance relations between classes and base classes by adopting regular expressions according to declaration rules of the classes, and realizing identification and extraction of API entities and relations;
s2.4, storing the extracted entity information and relations by using a document database.
Wherein, establish the association between word and entity, step S3 includes the following steps:
s3.1, loading the word vector and API entity and relation information stored in the document database into a memory for processing;
s3.2, constructing a question-answer model by using a heuristic method;
s3.3, identifying a module or class of the API mentioned by the question-answer QA by analyzing the < code > tag of the HTML;
s3.4, determining the API by formulating a regular expression identification code module or class;
s3.5, after the unambiguous API is identified, a 'mention' association can be established between the question and answer QA and the API.
Wherein, step S4 includes the following steps:
s4.1, constructing an input sequence for each question-answer pair according to the input requirement of the RoBERTa model, wherein a question part comprises a representation of word information of a Stack Overflow website, and an answer part comprises a representation of fused API document entity information.
S4.2, encoding the input sequence in the step S4.1, and converting the input sequence into word vectors;
s4.3, using the fused API knowledge as training data, and then using the pre-training weight of the RoBERTa model as a starting point to enable the model to learn the semantic information related to the API.
Wherein, reasoning is carried out on the query statement, and candidate APIs related to the query are retrieved from the API knowledge base, and the step S5 comprises the following steps:
s5.1, giving a query Q described by natural language, wherein the first step is to retrieve the first k candidate questions from an SO website;
s5.2, converting the query into a sentence embedding with the trained RoBERTa model and obtaining a candidate API list;
and S5.3, obtaining and sequencing the relevance score of each API through the output of the RoBERTa model.
Wherein, step S6 includes the following steps:
s6.1, preparing a training data set comprising a candidate API list, a query statement Q and a correlation label, wherein each sample represents a query statement and a corresponding candidate API list, and the correlation label of each candidate API and the query;
s6.2, extracting feature representations for the query statement and the candidate API, and converting the query statement and the API into vector representations by using a word vector model to ensure that the feature representations of the query statement and the candidate API have consistent dimensionality;
s6.3, selecting a RoBERTa model for prompt learning, inputting characteristic representations of query sentences and candidate APIs, and outputting a score representing the correlation;
s6.4, taking a prompt template T(·) as the core component of the prompt-learning framework PromptAPIRec, packaging the input data (<candidate>, <query>), converting the API recommendation task into a completion task, and predicting [MASK]; the specific expression is shown in formula (1):
x_prompt = T(<candidate>, <query>, [MASK])    (1)
where x_prompt is the prompt defined by the template, <candidate> denotes the candidate API list, <query> denotes the query statement, and [MASK] denotes the value to be predicted; x_prompt, <candidate>, <query>, and [MASK] serve as the input text of the subsequent prompt template;
s6.5, designing a prompt template, and capturing a matching signal between the query sentence and the candidate post;
s6.6, given a set of candidate APIs and the corresponding query statement, where the list corresponds to a ground-truth label y ∈ {0,1} reflecting whether the user selects the candidate API, a label-word mapping (verbalizer) v(·) is designed;
s6.7, extracting features of the new problems and candidate APIs by using the trained model, and calculating the probability of each API; the APIs are ranked according to probability to provide a reasonable ranking of candidate APIs according to the likelihood of model prediction, completing API recommendation.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flowchart of the API recommendation as a whole;
FIG. 3 is a schematic of the framework based on the prompt-learning paradigm;
FIG. 4 is a flow chart of the prompt-learning framework PromptAPIRec of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages to be solved more apparent, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
As shown in FIGS. 1-3, the invention provides an API recommendation method based on prompt learning and dual-information-source fusion, which comprises the following steps:
s1, screening API-related questions from Stack Overflow and capturing words in the dialogue text, which comprises the following steps:
s1.1, downloading the official data dump of the SO website and extracting the 1,347,908 questions tagged with Java;
s1.2, matching non-text contents such as HTML labels, special characters, links and the like by using regular expressions, and deleting the non-text contents.
S1.3, removing common stop words such as "the", "is", etc.;
s1.3.1, acquiring a text of an SO website, and removing interfering words;
s1.3.2, obtaining word frequency;
s1.3.3, stop words are removed.
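The cleaning in steps S1.2-S1.3 can be sketched with the standard library alone; the regular expressions and the tiny stop-word set below are illustrative stand-ins for NLTK's full English stop-word list:

```python
import re

# Sketch of steps S1.2-S1.3: strip code blocks, HTML tags, and links,
# then tokenize and drop stop words. STOP_WORDS is a tiny hand-picked
# stand-in for NLTK's English stop-word list.
STOP_WORDS = {"the", "is", "a", "an", "to", "of", "in", "how"}

def clean_post(html: str) -> list[str]:
    text = re.sub(r"<pre><code>.*?</code></pre>", " ", html, flags=re.S)  # long code fragments (S1.2)
    text = re.sub(r"<[^>]+>", " ", text)        # remaining HTML tags
    text = re.sub(r"https?://\S+", " ", text)   # links
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_.]*", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]  # S1.3: remove stop words

words = clean_post("<p>How is <code>torch.cat</code> used?</p>")
```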
S1.4, based on the questions and their answers from step S1.1, after the processing of steps S1.2-S1.3, building a plain-text corpus to train the word-embedding model;
s1.4.1 learning Word embedding by training Word2vec model using python kit Gensim library;
s1.4.2, the word-vector dimension of the Word2vec model is set to 100, the window size to 5, and the minimum word frequency to 5; the trained word vectors are stored in a text file named word_word2vec.txt.
S2, the API reference document belongs to semi-structured data, different HTML labels represent different types of API entities, and the API labels contain attribute information of the API entities, such as function description, parameters, return values, return value types and the like. Thus, extracting relevant information from the API reference document comprises the steps of:
s2.1, opening and downloading the Java API online documentation;
s2.2, decomposing the text into words or tokens with a lexical analyzer (tokenizer);
s2.3, analyzing sentence structure with a syntax analyzer (parser) to identify nouns, verbs, adjectives, and the like;
s2.4, performing entity recognition on the text by using a trained entity recognition model, such as a Conditional Random Field (CRF), so as to accurately recognize and mark the entity in the API document;
s2.5, after the entity is identified, defining a mode of 'parameter name + type' to extract the data type attribute of the parameter in order to further identify the attribute of the entity.
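The regular-expression extraction in steps S2.3 and S2.5 can be sketched as follows; the declaration and parameter strings are illustrative, not taken from a real documentation dump:

```python
import re

# Sketch of S2.3/S2.5: regexes over API reference text.
CLASS_DECL = re.compile(r"class\s+(\w+)\s+extends\s+(\w+)")   # inheritance (S2.3)
PARAM_DECL = re.compile(r"(\w+)\s*:\s*(\w+)")                  # "parameter name + type" (S2.5)

signature = "public class FileReader extends InputStreamReader"
inherits = CLASS_DECL.search(signature).groups()   # (subclass, base class)

param_doc = "path : String - the file system path to open"
param = PARAM_DECL.search(param_doc).groups()      # (parameter name, data type)
```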
S2.6, identifying and extracting the relation between the API entities according to the context information in the API document;
s2.6.1, preparing unlabeled JavaAPI document data;
s2.6.2, extracting a text representation of each document with the term frequency-inverse document frequency (TF-IDF) method: first split each document into independent words, then compute the frequency of each word in the document, and then compute the inverse document frequency of each word, as shown in formula (2), thereby obtaining the TF-IDF value.
IDF=log(N/DF) (2);
Where N is the total number of API documents, DF is the number of documents containing the term;
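Formula (2) and the surrounding TF-IDF computation of step S2.6.2 can be worked through on a toy document set (the documents below are made up for illustration):

```python
import math

# Sketch of S2.6.2: TF-IDF over a toy set of API document texts.
docs = [
    ["tensor", "shape", "view"],
    ["tensor", "grad", "backward"],
    ["optimizer", "step", "grad"],
]
N = len(docs)  # total number of API documents

def tf_idf(term: str, doc: list[str]) -> float:
    tf = doc.count(term) / len(doc)           # term frequency
    df = sum(1 for d in docs if term in d)    # documents containing the term
    idf = math.log(N / df)                    # formula (2): IDF = log(N/DF)
    return tf * idf

score = tf_idf("tensor", docs[0])  # tf = 1/3, df = 2, idf = log(3/2)
```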
s2.7, storing the extracted entity information and relations by using a document database.
S3, constructing a relation between the API and the question and answer QA based on a heuristic method through fusion of two types of API knowledge, wherein the relation comprises the following steps:
s3.1, loading the word vectors: using the Word2Vec module of the Gensim library, the text file word_word2vec.txt from step S1.4.2 is loaded into memory;
s3.2, extracting stored API entity and relation information from the document database, and loading the API entity and relation information into a memory for processing;
s3.2.1, connection database;
s3.2.2, executing the query;
s3.2.3, obtaining a query result;
s3.2.4, load data into memory, use the list to save information of API entities and relationships.
S3.3, constructing a question-answer model, and constructing the relation between the API and the question-answer QA by using a heuristic method;
s3.3.1, code elements in SO questions and answers have the property that APIs appearing in the same question and answer typically belong to the same module or class. The module or class in a code block is identified by formulating a regular expression to determine its API. When an unambiguous API is identified, a "mention" association can be established between the question-answer QA and the API.
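The heuristic "mention" linking of step S3.3.1 can be sketched as scanning a post's &lt;code&gt; tags for fully qualified names; the API inventory and the post text are illustrative assumptions:

```python
import re

# Sketch of S3.3.1: link a Q&A post to an API by scanning its <code>
# tags for a fully qualified name known to the knowledge base.
KNOWN_APIS = {"java.io.FileReader", "java.util.ArrayList"}
CODE_TAG = re.compile(r"<code>(.*?)</code>", re.S)
QUALIFIED = re.compile(r"\b(?:\w+\.)+\w+\b")   # dotted module/class names

def mentioned_apis(post_html: str) -> set[str]:
    mentions = set()
    for snippet in CODE_TAG.findall(post_html):
        for name in QUALIFIED.findall(snippet):
            if name in KNOWN_APIS:      # unambiguous API -> "mention" association
                mentions.add(name)
    return mentions

links = mentioned_apis("<p>Use <code>new java.io.FileReader(path)</code></p>")
```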
S4, training the RoBERTa model by using the fused knowledge representation, and comprising the following steps of:
s4.1, preprocessing data to adapt to an input format of the RoBERTa model;
s4.1.1, marking text data into word or subword units;
s4.1.2, adding special tokens to the tokenized text so that the model can identify the beginning and end of a sentence, e.g., a [CLS] token at the beginning of each input sequence and a separator token [SEP] between the question and the answer;
s4.1.3 dividing the long text into a plurality of shorter fragments and adding a separation mark at the beginning of each fragment;
s4.1.4, converting the tokenized text into a corresponding index representation.
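Steps S4.1.1-S4.1.4 can be sketched with plain Python; the toy vocabulary is a stand-in assumption for RoBERTa's real subword vocabulary (which actually uses &lt;s&gt;/&lt;/s&gt; rather than [CLS]/[SEP]):

```python
# Sketch of S4.1: build a "[CLS] question [SEP] answer [SEP]" sequence
# and map tokens to indices with a toy vocabulary.
question = ["how", "read", "file"]
answer = ["use", "filereader"]

sequence = ["[CLS]"] + question + ["[SEP]"] + answer + ["[SEP]"]   # S4.1.2
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(sequence))}  # toy vocab
ids = [vocab[tok] for tok in sequence]                             # S4.1.4

MAX_LEN = 4  # split long text into shorter fragments (S4.1.3)
chunks = [sequence[i:i + MAX_LEN] for i in range(0, len(sequence), MAX_LEN)]
```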
S4.2, using a pre-training weight of the Roberta model as a starting point, adopting a Pytorch deep learning frame, and using association information among questions, answers and API references as input to construct a question-reference association model structure;
s4.2.1, the pre-trained weights of the RoBERTa model are loaded with RobertaModel.from_pretrained('roberta-base'); a Dropout layer and a fully connected layer are then added on top of the RoBERTa model as a classifier, and finally the forward propagation of the model is defined in the forward method.
S4.3, defining a binary cross entropy loss function to measure the difference between model prediction and real association;
s4.3.1 if the API mention of the question and answer corresponds to is correct, they are considered positive examples; if not the correct API mention, they are considered negative examples.
S4.4, during training, data are fed into the model and back-propagation updates the model weights; the Adam adaptive-learning-rate optimization algorithm is selected via torch.optim.Adam, and torch.optim.lr_scheduler.StepLR is used to decay the learning rate at epoch = 40.
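Steps S4.2-S4.4 can be sketched in PyTorch. A tiny embedding encoder stands in for the real RobertaModel.from_pretrained('roberta-base') backbone so the sketch runs without downloading weights; the vocabulary size, hidden size, and learning rate are illustrative, while the Dropout + fully connected head, BCE loss, Adam optimizer, and StepLR schedule follow the steps above:

```python
import torch
import torch.nn as nn

class RelevanceModel(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, hidden)  # stand-in for RoBERTa
        self.dropout = nn.Dropout(0.1)                   # S4.2.1: Dropout layer
        self.classifier = nn.Linear(hidden, 1)           # fully connected head

    def forward(self, ids):
        h = self.encoder(ids).mean(dim=1)                # pooled representation
        return self.classifier(self.dropout(h)).squeeze(-1)

model = RelevanceModel()
criterion = nn.BCEWithLogitsLoss()                         # S4.3: binary cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # S4.4: Adam
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40)  # decay at epoch 40

ids = torch.randint(0, 1000, (2, 8))  # batch of 2 toy token sequences
loss = criterion(model(ids), torch.tensor([1.0, 0.0]))  # positive / negative examples
loss.backward()
optimizer.step()
scheduler.step()
```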
S5, inputting a query sentence to obtain a group of candidate APIs, wherein the method comprises the following steps:
s5.1, providing a query sentence as input, and carrying out the preprocessing step which is the same as training data on the query sentence, wherein the step S4.1 is shown;
s5.2, inputting and transmitting the preprocessed query statement to a trained Roberta model for reasoning;
s5.3, obtaining the relevance score of each API through the output of the model;
s5.4, sorting the APIs according to the relevance scores, and selecting the top N APIs with the highest scores as a recommendation list.
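The ranking in steps S5.3-S5.4 reduces to sorting candidates by relevance score and truncating to the top N; the scores below are made up for illustration:

```python
# Sketch of S5.3-S5.4: turn per-API relevance scores into a top-N list.
scores = {"torch.cat": 0.91, "torch.stack": 0.86, "torch.split": 0.12}

def top_n(api_scores: dict[str, float], n: int) -> list[str]:
    ranked = sorted(api_scores, key=api_scores.get, reverse=True)
    return ranked[:n]  # recommendation list (S5.4)

recommendation = top_n(scores, 2)
```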
S6, reordering candidate APIs by using prompt learning calculation probability, wherein the method comprises the following steps of:
s6.1, FIG. 4 is the overall framework diagram of the prompt-learning framework PromptAPIRec, which comprises three main modules: (1) data format conversion; (2) prompt template; (3) answer prediction;
s6.2, given a set of candidate APIs and a query statement, we convert them into natural-language sentences to fit the subsequent prompt-learning paradigm, denoted <candidate> and <query> respectively, and add a virtual token [token] at the beginning of each title to separate the APIs, as shown in formula (3):
<candidate> ← [token] API_1 … [token] API_n    (3)
where <candidate> denotes the candidate API list, [token] is a virtual token, API_1, …, API_n denote the recommended APIs, and n denotes their number;
s6.3, taking the prompt template T(·) as the core component of the prompt-learning framework PromptAPIRec, packaging the input data (<candidate>, <query>), converting the API recommendation task into a completion task, and predicting [MASK], as shown in formula (1):
x_prompt = T(<candidate>, <query>, [MASK])    (1)
where x_prompt is the prompt defined by the template, <candidate> denotes the candidate API list, and <query> denotes the query statement; all serve as the input text of the underlying prompt template;
Two prompt templates were designed, namely "<candidate> is [MASK] to <query>" and "recommending <candidate> to the user is a [MASK] choice according to <query>".
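The construction of formulas (1) and (3) can be sketched as plain string templating; the candidate names and query text are illustrative, and [token] and [MASK] are literal placeholder strings here:

```python
# Sketch of formula (3) (prefix each API with a virtual [token]) and
# formula (1) (wrap candidates and query into a cloze-style prompt).
def build_prompt(candidates: list[str], query: str) -> str:
    cand = " ".join(f"[token] {api}" for api in candidates)  # formula (3)
    return f"{cand} is [MASK] to {query}"                    # template T(.), formula (1)

x_prompt = build_prompt(["torch.cat", "torch.stack"],
                        "how to join two tensors")
```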
S6.4, given a set of candidate APIs and the corresponding query statement, where the list corresponds to a ground-truth label y ∈ {0,1} reflecting whether the user selects the candidate API, a verbalizer v(·) is designed to map the label to the two candidate answer words of the PLM, as shown in formula (4):
v(y) = pos if y = 1, neg if y = 0    (4)
where pos represents the correct API selected by the user, and neg represents an API not selected by the user;
the probability is computed as shown in formula (5):
P(y | candidate, query) = P_M([MASK] = v(y) | x_prompt)    (5)
where P_M([MASK] = v(y) | x_prompt) can be regarded as the confidence that the current API should be recommended; the recommendation list is re-ranked with this confidence as the ranking score, and finally the evaluation metrics and the related query information are output, with mean MRR and MAP of 0.67 and 0.62, respectively.
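The re-ranking by formulas (4)-(5) can be sketched as a softmax over the PLM's [MASK] logits for the two answer words pos/neg; the logit values below are made up for illustration:

```python
import math

# Sketch of formulas (4)-(5): map [MASK]-position logits for the two
# verbalizer words (pos/neg) to a recommendation probability, then
# re-rank candidates by that confidence.
mask_logits = {            # candidate -> (logit of "pos", logit of "neg")
    "torch.cat":   (2.1, -0.3),
    "torch.split": (-1.0, 1.4),
}

def p_pos(logits: tuple[float, float]) -> float:
    pos, neg = logits
    return math.exp(pos) / (math.exp(pos) + math.exp(neg))  # softmax over v(y)

reranked = sorted(mask_logits, key=lambda api: p_pos(mask_logits[api]),
                  reverse=True)  # confidence used as ranking score
```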
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (7)

1. The API recommendation method based on prompt learning and double information source fusion is characterized by comprising the following steps:
s1, screening API-related questions from Stack Overflow and capturing words in the dialogue text;
s2, extracting relevant information from the API reference document;
s3, constructing a relation between the API and the question-answer QA based on a heuristic method through fusion of two types of API knowledge;
s4, training the BERT-variant model RoBERTa with the fused knowledge representation;
s5, inputting a query statement to obtain a group of candidate APIs;
and S6, reordering the candidate APIs by using the prompt learning calculation probability to finish the API recommendation.
2. The API recommendation method based on prompt learning and dual information source fusion according to claim 1, wherein the step S1 includes the steps of:
s1.1, acquiring API related problems related to an SO question-answering website;
s1.2, deleting long code fragments contained in an HTML tag;
s1.3, splitting the parsed text into words through a natural language processing tool kit;
s1.4, forming a word corpus based on the text analysis and word splitting;
s1.5, learning word embedding by training a word embedding model.
3. The API recommendation method based on prompt learning and dual information source fusion as recited in claim 1, wherein the step S2 includes the steps of:
s2.1, providing function descriptions and related attribute information of all APIs by using an API reference document, and selecting an API reference document of a PyTorch framework for knowledge representation and acquisition;
s2.2, performing lexical and sentence structure analysis on the text;
s2.3, extracting inheritance relations between classes and base classes by adopting regular expressions according to declaration rules of the classes, and realizing identification and extraction of API entities and relations;
s2.4, storing the extracted entity information and relations by using a document database.
4. The API recommendation method based on prompt learning and dual information source fusion as recited in claim 1, wherein the step S3 of establishing an association between a word and an entity comprises the steps of:
s3.1, loading the word vector and API entity and relation information stored in the document database into a memory for processing;
s3.2, constructing a question-answer model by using a heuristic method;
s3.3, identifying a module or class of the API mentioned by the question-answer QA by analyzing the < code > tag of the HTML;
s3.4, determining the API by formulating a regular expression identification code module or class;
s3.5, when the unambiguous API is identified, establishing a mentioned association between the question-answer QA and the API.
5. The API recommendation method based on prompt learning and dual information source fusion as recited in claim 1, wherein step S4 includes the steps of:
s4.1, constructing an input sequence for each question-answer pair according to the input requirements of the RoBERTa model, wherein the question part comprises the representation of word information from the Stack Overflow website, and the answer part comprises the representation of the fused API document entity information;
s4.2, encoding the input sequence in the step S4.1, and converting the input sequence into word vectors;
s4.3, using the fused API knowledge as training data, and then using the pre-training weight of the RoBERTa model as a starting point to enable the model to learn the semantic information related to the API.
6. The API recommendation method based on prompt learning and dual information source fusion as recited in claim 1, wherein the query statement is inferred, candidate APIs related to the query are retrieved from the API knowledge base, and step S5 includes the steps of:
s5.1, giving a query Q described by natural language, wherein the first step is to retrieve the first k candidate questions from an SO website;
s5.2, converting the query into a sentence embedding with the trained RoBERTa model and obtaining a candidate API list;
and S5.3, obtaining and sequencing the relevance score of each API through the output of the RoBERTa model.
7. The API recommendation method based on prompt learning and dual information source fusion as recited in claim 1, wherein step S6 includes the steps of:
s6.1, preparing a training data set comprising a candidate API list, a query statement Q and a correlation label, wherein each sample represents a query statement and a corresponding candidate API list, and the correlation label of each candidate API and the query;
s6.2, extracting feature representations for the query statement and the candidate API, and converting the query statement and the API into vector representations by using a word vector model to ensure that the feature representations of the query statement and the candidate API have consistent dimensionality;
s6.3, selecting a RoBERTa model for prompt learning, inputting characteristic representations of query sentences and candidate APIs, and outputting a score representing the correlation;
s6.4, taking the prompt template T(·) as the core component of the prompt-learning framework, packaging the input data (<candidate>, <query>), converting the API recommendation task into a completion task, and predicting [MASK]; the specific expression is shown in formula (1):
x_prompt = T(<candidate>, <query>, [MASK])    (1)
where x_prompt is the prompt defined by the template, <candidate> denotes the candidate API list, <query> denotes the query statement, and [MASK] denotes the value to be predicted; x_prompt, <candidate>, <query>, and [MASK] serve as the input text of the subsequent prompt template;
s6.5, designing a prompt template, and capturing a matching signal between the query statement and the candidate post;
s6.6, given a set of candidate APIs and the corresponding query statement, where the list corresponds to a ground-truth label y ∈ {0,1} reflecting whether the user selects the candidate API, a label-word mapping (verbalizer) v(·) is designed;
s6.7, using a trained RoBERTa model to extract characteristics of the new problem and candidate APIs, and calculating the probability of each API; the APIs are ranked according to probability to provide a reasonable ranking of candidate APIs according to the likelihood of model prediction, completing API recommendation.
CN202310778665.4A 2023-06-29 2023-06-29 API recommendation method based on prompt learning and double information source fusion Pending CN117034135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310778665.4A CN117034135A (en) 2023-06-29 2023-06-29 API recommendation method based on prompt learning and double information source fusion


Publications (1)

Publication Number Publication Date
CN117034135A true CN117034135A (en) 2023-11-10

Family

ID=88628741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310778665.4A Pending CN117034135A (en) 2023-06-29 2023-06-29 API recommendation method based on prompt learning and double information source fusion

Country Status (1)

Country Link
CN (1) CN117034135A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117874211A (en) * 2024-03-13 2024-04-12 蒲惠智造科技股份有限公司 Intelligent question-answering method, system, medium and electronic equipment based on SAAS software



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination