CN111290756A - Code-annotation conversion method based on dual reinforcement learning - Google Patents

Code-annotation conversion method based on dual reinforcement learning

Info

Publication number
CN111290756A
CN111290756A (application number CN202010085043.XA; granted as CN111290756B)
Authority
CN
China
Prior art keywords
word, code, annotation, probability, dual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010085043.XA
Other languages
Chinese (zh)
Other versions
CN111290756B (en)
Inventor
陈荣
唐文君
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202010085043.XA priority Critical patent/CN111290756B/en
Publication of CN111290756A publication Critical patent/CN111290756A/en
Application granted granted Critical
Publication of CN111290756B publication Critical patent/CN111290756B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00: Arrangements for software engineering
    • G06F8/40: Transformation of program code
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Machine Translation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a code-annotation conversion method based on dual reinforcement learning, which comprises the following steps. In the code conversion to annotation phase: a code annotation generation model is established, the code is converted into word vectors, and an LSTM bidirectional neural network performs feature extraction on the sequence and structure information in the code word vectors; an attention mechanism assigns a weight to each word in the word vectors; the word vectors and their weights are fused, and the probability of each word being selected is calculated by a policy gradient method; a dual constraint is applied to the weight of each word and the probability of each word being selected; the BLEU evaluation method is used to calculate the degree of match between each explored sequence and the standard annotations in the dataset, and the result is divided by n to obtain the average, which serves as the reinforcement-learning reward value of each word.

Description

Code-annotation conversion method based on dual reinforcement learning
Technical Field
The invention relates to the technical field of automatic software development, in particular to a code-annotation conversion method based on dual reinforcement learning.
Background
Annotation-to-code conversion and code-to-annotation conversion are two key tasks in the field of automatic software development. The former generates code from a natural language description, while the latter automatically generates annotations from code. Previous studies have proposed various neural-network-based approaches that address the two tasks separately. However, there is an intuitive association between annotation-to-code and code-to-annotation conversion, and exploiting the relationship between the two tasks can improve the performance of both. Considering this duality, paper [1] proposes a dual training framework that trains the annotation-to-code and code-to-annotation tasks simultaneously. The framework considers the duality of probability and of attention weight, and designs corresponding regularization terms to constrain the duality.
However, previous studies used the seq2seq model for the dual task, and seq2seq has certain limitations: in particular, it is prone to exposure bias. To eliminate or reduce the influence of this problem, the dual learning task can instead be handled with reinforcement learning. When an action is selected, the probability distribution and a Monte Carlo algorithm (MC algorithm) are used for exploration, and the quality of the action is judged from the complete sequence. Dual constraints are applied to both the attention duality and the probability duality, which improves the performance of the dual learning model.
Most previous research implements the annotation-to-code and code-to-annotation processes separately, without considering that the inputs and outputs of the two processes are reciprocal: the input of the annotation-to-code process is the output of the code-to-annotation process, and the input of the code-to-annotation process is the output of the annotation-to-code process. Little research has used the link between the two processes to improve their performance simultaneously. A dual model is therefore considered for the code-annotation conversion problem. Previous researchers all used the seq2seq model for the dual problem, but seq2seq has certain limitations and easily produces exposure bias.
disclosure of Invention
In view of the problems existing in the prior art, the invention discloses a code-annotation conversion method based on dual reinforcement learning, which specifically comprises the following steps:
convert code to annotation phase:
establishing a code annotation generation model, converting codes into word vectors, and performing feature extraction on sequences and structural information in the code word vectors by using an LSTM bidirectional neural network;
assigning weights to all words in the word vector by using an attention mechanism to obtain the weight of each word;
fusing the word vectors and the weights thereof, and calculating the probability of each word being selected by using a policy gradient method;
carrying out dual constraint on the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value (Reward) by using a Monte Carlo algorithm (MC algorithm) in reinforcement learning, and taking the selection of the word as an action in the reinforcement learning; sampling and exploring the words not yet generated after each word until n subsequent word sequences corresponding to the word are obtained, calculating the degree of match between each sequence and the standard annotations in a dataset by using the BLEU (bilingual evaluation understudy) evaluation method, and dividing by n to obtain the average as the reward value of each word in the reinforcement learning;
updating parameters of the code-to-annotation generation process according to the size of the reward value through a neural network and a back-propagation mechanism, and updating the selection strategy by calculating the mean square error between the rewards of the target sequence and the actual sequence;
convert annotations into code phase:
converting the annotation into a word vector, and performing feature extraction on sequence and structure information in the annotation word vector by using an LSTM bidirectional neural network;
assigning weights to all words in the word vector by using an attention mechanism to obtain the weight of each word;
fusing each word vector and the weight thereof, and calculating the probability of each word being selected by using a policy gradient method;
carrying out dual constraint on the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value by using a Monte Carlo algorithm (MC algorithm) in reinforcement learning, taking the selection of the word as an action in the reinforcement learning, sampling the words not yet generated after each word, exploring n subsequent word sequences corresponding to the word after the sampling is completed, calculating the degree of match between each sequence and the standard code in a dataset by using the BLEU (bilingual evaluation understudy) evaluation method, and dividing by n to obtain the average as the reward value of each word in the reinforcement learning;
parameters of the annotation code generation process are updated according to the size of the reward value through a neural network and a reverse transfer mechanism, and the selection strategy is updated through calculating the mean square error of the reward of the target sequence and the actual sequence.
Further, the dual constraint on the weight of each word and the probability of each word being selected is constructed in the following manner:
passing the probability that each word is selected into the dual constraint;
when the policy gradient method PG is used for action selection, the code conversion to annotation stage and the annotation conversion to code stage follow the same process principle; their inputs and outputs are reciprocal, and the condition and result in the conditional probability are swapped. The two stages are identical in structure and differ only in input and output: the input of the code conversion to annotation stage is the output of the annotation conversion to code stage, and the input of the annotation conversion to code stage is the output of the code conversion to annotation stage.
Conditional probability of transcoding into annotation phase:
$P(y \mid x;\theta_{xy}) = \prod_{t=1}^{|y|} P(y_t \mid y_{<t}, x;\theta_{xy})$
conditional probability of annotation conversion to code phase:
$P(x \mid y;\theta_{yx}) = \prod_{t=1}^{|x|} P(x_t \mid x_{<t}, y;\theta_{yx})$
The two conditional probabilities are both part of the joint probability and are constrained by it, and a probability-constraint regularization term is added to the loss function. The probability dual regularization term is:
$l_{dual} = \big[\log P(x) + \log P(y \mid x;\theta_{xy}) - \log P(y) - \log P(x \mid y;\theta_{yx})\big]^2$
passing attention weights into dual constraints;
A certain symmetry exists between the code conversion to annotation stage and the annotation conversion to code stage, and their alignment is balanced through the attention mechanism. The attention dual regularization term of the code conversion to annotation stage is:
$l_1 = \sum_{i} \mathrm{KL}\big(b_i \,\big\|\, b_i'\big)$
note conversion into a duality regularization term l for the code phase2The formula is the same as the above formula. biAnd bi' separately represent the weights corresponding to the ith words in the two models, and
$\mathrm{KL}(p \,\|\, q) = \sum_x p(x)\log\frac{p(x)}{q(x)}$
is the KL divergence, which measures the difference between one probability distribution p and another probability distribution q.
The overall attention dual term is:
$A_{dual} = l_1 + l_2$
constructing a loss function used in back propagation:
$LOSS = loss_1 + loss_2 + l_{dual} + A_{dual}$
By adopting the above technical scheme, the dual reinforcement learning-based code-annotation conversion method provided by the invention trains the code annotation generation model and the code generation model simultaneously, exploiting the duality between the two models. The method considers the duality of probability and of attention weight, designs corresponding regularization terms to constrain the duality, and can improve the conversion accuracy between code and annotations.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
As shown in FIG. 1, the dual reinforcement learning-based code-annotation conversion method proceeds as follows:
transcoding into annotation phase:
step 1: the code is converted to a word vector for representation.
Step 2: sequence and structure information in the code word vector is feature extracted using an LSTM bidirectional neural network.
Step 3: use the attention mechanism (Attention) to assign a weight to each word in the word vector, obtaining the weight of each word.
Step 4: fuse each word vector and its weight in the Hybrid module.
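Steps 3 and 4 can be illustrated with a minimal sketch. The dot-product scoring against a decoder query, the softmax normalization, and all names here (`attention_fuse`, `query`) are assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def attention_fuse(word_vecs, query):
    """Assign a weight to each word vector via softmax-normalized
    dot-product scores (step 3), then fuse the vectors with their
    weights into one context vector (step 4)."""
    scores = word_vecs @ query                       # one relevance score per word
    scores = scores - scores.max()                   # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    context = weights @ word_vecs                    # weighted fusion of the word vectors
    return weights, context

# toy usage: three 4-dimensional word vectors and a query
vecs = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 1., 0.]])
q = np.array([1., 1., 0., 0.])
w, c = attention_fuse(vecs, q)
```

The weights `w` sum to one, and `c` is the fused representation that the probability computation of step 5 would consume.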
Step 5: calculate the probability of each word being selected using the policy gradient method (PG), and apply a dual constraint to the weight of each word and the probability of each word being selected.
Step 6: select the word with the highest reward value (Reward) using the Monte Carlo algorithm (MC algorithm) of reinforcement learning, and treat the selection of the word as an action (Action) in the reinforcement learning. For each word, sample the words not yet generated after it; after sampling, explore n subsequent word sequences corresponding to the word (sequences formed by the series of words that should appear after it), calculate the degree of match between each sequence and the standard annotation in the dataset using the BLEU (bilingual evaluation understudy) evaluation method, and divide by n to obtain the average, which is used as the reward value of each word in the reinforcement learning. The reward value is calculated as follows (i denotes an explored word sequence):
$Reward = \frac{1}{n}\sum_{i=1}^{n} \mathrm{BLEU}_i$
Step 7: through the neural network and back propagation, update the parameters of the model according to the size of the reward value, and update the selection strategy by calculating the mean square error between the rewards of the target sequence and the actual sequence. The actual-sequence reward is the reward value of the action sequence with the largest average BLEU score found by the MC algorithm's exploration, and the target-sequence reward is the reward value of the action sequence corresponding to the target sequence.
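The update signal of step 7 can be illustrated with a toy mean-square-error computation; the reward lists below are hypothetical stand-ins for the target-sequence and actual-sequence rewards produced by the MC exploration.

```python
def reward_mse(target_rewards, actual_rewards):
    """Mean square error between the rewards of the target sequence and
    the rewards of the sequence the model actually produced, used here
    as the signal for updating the selection strategy."""
    assert len(target_rewards) == len(actual_rewards)
    return sum((t - a) ** 2
               for t, a in zip(target_rewards, actual_rewards)) / len(target_rewards)

# hypothetical per-step rewards for a 3-word sequence
loss = reward_mse([0.9, 0.8, 0.7], [0.6, 0.8, 0.5])
```

When the actual sequence already matches the target sequence's rewards, the error is zero and no further strategy adjustment is signalled.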
Comment conversion to code phase (the process is basically the same as the code conversion to comment phase, with input and output reversed):
step 1: the annotations are converted to word vectors for representation.
Step 2: sequence and structural information in the annotated word vector is feature extracted using an LSTM bidirectional neural network.
Step 3: use the attention mechanism (Attention) to assign a weight to each word in the word vector, obtaining the weight of each word.
Step 4: fuse each word vector and its weight in the Hybrid module.
Step 5: calculate the probability of each word being selected using the policy gradient method (PG), and apply a dual constraint to the weight of each word and the probability of each word being selected.
Step 6: select the word with the highest reward value (Reward) using the Monte Carlo algorithm (MC algorithm) of reinforcement learning, and treat the selection of the word as an action (Action) in the reinforcement learning. For each word, sample the words not yet generated after it; after sampling, explore n subsequent word sequences corresponding to the word (sequences formed by the series of words that should appear after it), calculate the degree of match between each sequence and the standard code in the dataset using the BLEU (bilingual evaluation understudy) evaluation method, and divide by n to obtain the average, which is used as the reward value of each word in the reinforcement learning. The reward value is calculated as follows (i denotes an explored word sequence):
$Reward = \frac{1}{n}\sum_{i=1}^{n} \mathrm{BLEU}_i$
Step 7: through the neural network and back propagation, update the parameters of the model according to the size of the reward value, and update the selection strategy by calculating the mean square error between the rewards of the target sequence and the actual sequence. The actual-sequence reward is the reward of the action sequence with the largest average BLEU score found through the MC algorithm's exploration, and the target-sequence reward is the reward of the action sequence with the largest probability generated by the model.
The construction process of the dual constraint is as follows:
step 1: the probabilities are passed into dual constraints.
When the policy gradient method PG is used for action selection, the code conversion to comment stage and the comment conversion to code stage follow the same process principle; their inputs and outputs are reciprocal, and the condition and result in the conditional probability are swapped. The two stages are identical in structure and differ only in input and output: the input of the code conversion to comment stage is the output of the comment conversion to code stage, and the input of the comment conversion to code stage is the output of the code conversion to comment stage.
Conditional probability of transcoding into annotation phase:
$P(y \mid x;\theta_{xy}) = \prod_{t=1}^{|y|} P(y_t \mid y_{<t}, x;\theta_{xy})$
conditional probability of annotation conversion to code phase:
$P(x \mid y;\theta_{yx}) = \prod_{t=1}^{|x|} P(x_t \mid x_{<t}, y;\theta_{yx})$
Both conditional probabilities are part of the joint probability and are constrained by it, so a probability-constraint regularization term can be added to the loss (Loss) function. The probability dual regularization term is:
$l_{dual} = \big[\log P(x) + \log P(y \mid x;\theta_{xy}) - \log P(y) - \log P(x \mid y;\theta_{yx})\big]^2$
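A sketch of how such a probability dual regularizer could be computed, under the assumption (consistent with the joint-probability constraint described above) that the two factorizations log P(x) + log P(y|x) and log P(y) + log P(x|y) are penalized by their squared difference. In practice the marginals P(x) and P(y) would come from separately trained language models; all names here are illustrative.

```python
import math

def probability_dual_term(log_px, log_py_given_x, log_py, log_px_given_y):
    """Squared gap between the two factorizations of the joint
    probability P(x, y); zero when they agree exactly."""
    return (log_px + log_py_given_x - log_py - log_px_given_y) ** 2

# toy check: a perfectly consistent pair of factorizations
# (0.2 * 0.5 == 0.4 * 0.25 == 0.1) incurs essentially no penalty
l = probability_dual_term(math.log(0.2), math.log(0.5),
                          math.log(0.4), math.log(0.25))
```

Any disagreement between the two factorizations produces a positive penalty, which is what lets the term constrain both directions of the dual model at once.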
step 2: attention weights are passed into the dual constraints.
There is a certain symmetry between the code conversion to annotation stage and the annotation conversion to code stage, and the alignment between the two can be balanced through the attention mechanism. The attention dual regularization term for the code conversion to annotation stage is:
$l_1 = \sum_{i} \mathrm{KL}\big(b_i \,\big\|\, b_i'\big)$
note conversion into a duality regularization term l for the code phase2The formula is the same as the above formula. biAnd bi' separately represent the weights corresponding to the ith words in the two models, and
$\mathrm{KL}(p \,\|\, q) = \sum_x p(x)\log\frac{p(x)}{q(x)}$
is the KL divergence, which measures the difference between one probability distribution p and another probability distribution q.
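For discrete distributions the KL divergence is a one-line computation; a minimal sketch (names illustrative):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) = sum_x p(x) * log(p(x) / q(x)) for discrete
    distributions given as equal-length lists of probabilities.
    Terms with p(x) == 0 contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

d_same = kl_divergence([0.5, 0.5], [0.5, 0.5])  # identical distributions
d_diff = kl_divergence([0.9, 0.1], [0.5, 0.5])  # skewed vs uniform
```

KL(p‖q) is zero exactly when the two distributions agree, which is why it suits the constraint that the attention weights b_i and b_i' of the two models should align.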
The overall attention dual term is:
$A_{dual} = l_1 + l_2$
Step 3: construct the loss function (Loss function) used in back propagation.
$LOSS = loss_1 + loss_2 + l_{dual} + A_{dual}$
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention, according to the technical solution and the inventive concept of the present invention, shall be covered by the protection scope of the present invention.

Claims (2)

1. A code-annotation conversion method based on dual reinforcement learning, comprising:
convert code to annotation phase:
establishing a code annotation generation model, converting codes into word vectors, and performing feature extraction on sequences and structural information in the code word vectors by using an LSTM bidirectional neural network;
assigning weights to all words in the word vector by using an attention mechanism to obtain the weight of each word;
fusing the word vectors and the weights thereof, and calculating the probability of each word being selected by using a policy gradient method;
carrying out dual constraint on the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value (Reward) by using a Monte Carlo algorithm in reinforcement learning, and taking the selection of the word as an action in the reinforcement learning; sampling and exploring the words not yet generated after each word until n subsequent word sequences corresponding to the word are obtained, calculating the degree of match between each sequence and the standard annotations in a dataset by using the BLEU (bilingual evaluation understudy) evaluation method, and dividing by n to obtain the average as the reward value of each word in the reinforcement learning;
updating parameters of the code-to-annotation generation process according to the size of the reward value through a neural network and a back-propagation mechanism, and updating the selection strategy by calculating the mean square error between the rewards of the target sequence and the actual sequence;
convert annotations into code phase:
converting the annotation into a word vector, and performing feature extraction on sequence and structure information in the annotation word vector by using an LSTM bidirectional neural network;
assigning weights to all words in the word vector by using an attention mechanism to obtain the weight of each word;
fusing the word vectors and the weights thereof, and calculating the probability of each word being selected by using a policy gradient method;
carrying out dual constraint on the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value by using a Monte Carlo algorithm in reinforcement learning, taking the selection of the word as an action in the reinforcement learning, sampling the words not yet generated after each word, exploring n subsequent word sequences corresponding to the word after the sampling is completed, calculating the degree of match between each sequence and the standard code in a dataset by using the BLEU (bilingual evaluation understudy) evaluation method, and dividing by n to obtain the average as the reward value of each word in the reinforcement learning;
parameters of the annotation-to-code generation process are updated according to the size of the reward value through a neural network and a back-propagation mechanism, and the selection strategy is updated by calculating the mean square error between the rewards of the target sequence and the actual sequence.
2. The method of claim 1, wherein the dual constraint on the weight of each word and the probability of each word being selected is constructed in the following manner:
passing the probability that each word is selected into a dual constraint;
when the policy gradient method PG is used for action selection, the code conversion to annotation stage and the annotation conversion to code stage follow the same process principle; their inputs and outputs are reciprocal, and the condition and result in the conditional probability are swapped;
conditional probability of transcoding into annotation phase:
$P(y \mid x;\theta_{xy}) = \prod_{t=1}^{|y|} P(y_t \mid y_{<t}, x;\theta_{xy})$
conditional probability of annotation conversion to code phase:
$P(x \mid y;\theta_{yx}) = \prod_{t=1}^{|x|} P(x_t \mid x_{<t}, y;\theta_{yx})$
the two conditional probabilities are both part of the joint probability and are constrained by it, and a probability-constraint regularization term is added to the loss function, wherein the probability dual regularization term is:
$l_{dual} = \big[\log P(x) + \log P(y \mid x;\theta_{xy}) - \log P(y) - \log P(x \mid y;\theta_{yx})\big]^2$
passing attention weights into dual constraints;
a certain symmetry exists between the code conversion to annotation stage and the annotation conversion to code stage, and their alignment is balanced through the attention mechanism; the attention dual regularization term of the code conversion to annotation stage is:
$l_1 = \sum_{i} \mathrm{KL}\big(b_i \,\big\|\, b_i'\big)$
note conversion into a duality regularization term l for the code phase2The formula is the same as the above formula. biAnd bi' separately represent the weights corresponding to the ith words in the two models, and
$\mathrm{KL}(p \,\|\, q) = \sum_x p(x)\log\frac{p(x)}{q(x)}$
is the KL divergence, which measures the difference between one probability distribution p and another probability distribution q;
the overall attention dual term is:
$A_{dual} = l_1 + l_2$
constructing a loss function used in back propagation:
$LOSS = loss_1 + loss_2 + l_{dual} + A_{dual}$
CN202010085043.XA 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning Active CN111290756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010085043.XA CN111290756B (en) 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning


Publications (2)

Publication Number Publication Date
CN111290756A true CN111290756A (en) 2020-06-16
CN111290756B CN111290756B (en) 2023-08-18

Family

ID=71026709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010085043.XA Active CN111290756B (en) 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning

Country Status (1)

Country Link
CN (1) CN111290756B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050027664A1 (en) * 2003-07-31 2005-02-03 Johnson David E. Interactive machine learning system for automated annotation of information in text
CN106021410A (en) * 2016-05-12 2016-10-12 中国科学院软件研究所 Source code annotation quality evaluation method based on machine learning
CN108491208A (en) * 2018-01-31 2018-09-04 中山大学 A kind of code annotation sorting technique based on neural network model
CN109799990A (en) * 2017-11-16 2019-05-24 中标软件有限公司 Source code annotates automatic generation method and system
CN109960506A (en) * 2018-12-03 2019-07-02 复旦大学 A kind of code annotation generation method based on structure perception
CN110427464A (en) * 2019-08-13 2019-11-08 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus of code vector generation
CN110705273A (en) * 2019-09-02 2020-01-17 腾讯科技(深圳)有限公司 Information processing method and device based on neural network, medium and electronic equipment


Non-Patent Citations (1)

Title
司念文; 王衡军; 李伟; 单义栋; 谢鹏程: "A Chinese part-of-speech tagging model based on an attention long short-term memory network" *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111857728A (en) * 2020-07-22 2020-10-30 中山大学 Code abstract generation method and device
CN111857728B (en) * 2020-07-22 2021-08-31 中山大学 Code abstract generation method and device

Also Published As

Publication number Publication date
CN111290756B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
Gad Pygad: An intuitive genetic algorithm python library
WO2023065545A1 (en) Risk prediction method and apparatus, and device and storage medium
CN113535984B (en) Knowledge graph relation prediction method and device based on attention mechanism
CN112016332B (en) Multi-modal machine translation method based on variational reasoning and multi-task learning
CN109857846B (en) Method and device for matching user question and knowledge point
US20230325687A1 (en) System and method for de novo drug discovery
WO2019154411A1 (en) Word vector retrofitting method and device
JP7430820B2 (en) Sorting model training method and device, electronic equipment, computer readable storage medium, computer program
WO2024032096A1 (en) Reactant molecule prediction method and apparatus, training method and apparatus, and electronic device
CN111353033A (en) Method and system for training text similarity model
JP2022169743A (en) Information extraction method and device, electronic equipment, and storage medium
CN116959613B (en) Compound inverse synthesis method and device based on quantum mechanical descriptor information
CN110990596A (en) Multi-mode hash retrieval method and system based on self-adaptive quantization
CN112925857A (en) Digital information driven system and method for predicting associations based on predicate type
CN112463989A (en) Knowledge graph-based information acquisition method and system
CN110597956A (en) Searching method, searching device and storage medium
CN111290756A (en) Code-annotation conversion method based on dual reinforcement learning
WO2022072237A1 (en) Lifecycle management for customized natural language processing
US20240013074A1 (en) Self-supervised self supervision by combining probabilistic logic with deep learning
CN112989803A (en) Entity link model based on topic vector learning
CN115510193B (en) Query result vectorization method, query result determination method and related devices
CN113297385B (en) Multi-label text classification system and method based on improved GraphRNN
WO2023112169A1 (en) Training method, estimation method, training device, estimation device, training program, and estimation program
CN117520665B (en) Social recommendation method based on generation of countermeasure network
WO2023097515A1 (en) Rna-protein interaction prediction method and apparatus, and medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant