CN111290756B - Code-annotation conversion method based on dual reinforcement learning - Google Patents


Info

Publication number
CN111290756B
CN111290756B (application CN202010085043.XA)
Authority
CN
China
Prior art keywords
word
annotation
code
probability
dual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010085043.XA
Other languages
Chinese (zh)
Other versions
CN111290756A (en)
Inventor
陈荣
唐文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202010085043.XA priority Critical patent/CN111290756B/en
Publication of CN111290756A publication Critical patent/CN111290756A/en
Application granted granted Critical
Publication of CN111290756B publication Critical patent/CN111290756B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Machine Translation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a code-annotation conversion method based on dual reinforcement learning, comprising a code-to-annotation phase with the following steps: establishing a code annotation generation model, converting the code into word vectors, and extracting features of the sequence and structural information in the code word vectors using a bidirectional LSTM neural network; using an attention mechanism to assign a weight to each word in the word vector, obtaining the weight of each word; fusing the word vectors with their weights, and calculating the probability of each word being selected using a policy gradient method; applying a dual constraint to the weight of each word and the probability of each word being selected; calculating the degree of match between each explored sequence and the standard annotations in the dataset using the BLEU evaluation method, and dividing by n to average, yielding each word's reward value in reinforcement learning.

Description

Code-annotation conversion method based on dual reinforcement learning
Technical Field
The application relates to the technical field of automatic software development, in particular to a code-annotation conversion method based on dual reinforcement learning.
Background
Converting annotations into code and converting code into annotations are two critical tasks in the field of automated software development. Annotation-to-code conversion generates code from a natural-language description, while code-to-annotation conversion automatically generates an annotation (comment) from code. Previous studies have proposed various neural-network-based approaches that address these two tasks separately. However, there is an intrinsic association between annotation-to-code and code-to-annotation conversion, and exploiting the relationship between the two tasks can improve the performance of both. In view of this duality, paper [1] proposed a dual training framework that trains annotation-to-code and code-to-annotation conversion jointly. The framework considers the duality of probabilities and attention weights, and designs corresponding regularization terms to constrain this duality.
However, previous studies used the seq2seq model to process the dual task, and seq2seq has certain limitations: it is prone to exposure bias. To reduce the influence of this problem, reinforcement learning can be used for the dual learning task: a Monte Carlo (MC) algorithm from reinforcement learning searches over the probability distribution during action selection, so that the quality of an action is judged from the complete sequence. Dual constraints built from the attention-weight duality and the probability duality further improve the performance of the dual learning model.
Most previous studies implemented annotation-to-code and code-to-annotation conversion separately, without considering that the inputs and outputs of the two processes are reciprocal: the input of annotation-to-code conversion is the output of code-to-annotation conversion, and vice versa. No prior study has used the link between the two tasks to enhance both simultaneously. The present application therefore considers a dual model to solve the code-annotation conversion problem; previous researchers used the seq2seq model for the dual problem, but seq2seq has the limitations noted above and is prone to exposure bias.
disclosure of Invention
In view of the problems in the prior art, the application discloses a code-annotation conversion method based on dual reinforcement learning, which comprises the following steps:
code-to-annotation phase:
establishing a code annotation generation model, converting the code into word vectors, and extracting features of the sequence and structural information in the code word vectors using a bidirectional LSTM neural network;
using an attention mechanism to assign a weight to each word in the word vector, obtaining the weight of each word;
fusing the word vectors with their weights, and calculating the probability of each word being selected using a policy gradient method;
applying a dual constraint to the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value using a Monte Carlo (MC) algorithm from reinforcement learning, and treating the selection of a word as one action in reinforcement learning; for each word, sampling and exploring the words not yet generated to obtain n subsequent word sequences for that word, calculating the degree of match between each sequence and the standard annotations in the dataset using the BLEU evaluation method, and dividing by n to average, yielding each word's reward value in reinforcement learning;
updating the parameters of the code-to-annotation process according to the magnitude of the reward value through the neural network and back-propagation, and updating the selection strategy by calculating the mean square error between the target-sequence reward and the actual-sequence reward;
annotation-to-code phase:
converting the annotation into word vectors, and extracting features of the sequence and structural information in the annotation word vectors using a bidirectional LSTM neural network;
using an attention mechanism to assign a weight to each word in the word vector, obtaining the weight of each word;
fusing each word vector with its weight, and calculating the probability of each word being selected using a policy gradient method;
applying a dual constraint to the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value using a Monte Carlo (MC) algorithm from reinforcement learning, treating the selection of a word as one action in reinforcement learning; sampling the words not yet generated after each word, and once sampling is complete, exploring n subsequent word sequences for that word, calculating the degree of match between each sequence and the standard code in the dataset using the BLEU evaluation method, and dividing by n to average, yielding each word's reward value in reinforcement learning;
updating the parameters of the annotation-to-code process according to the magnitude of the reward value through the neural network and back-propagation, and updating the selection strategy by calculating the mean square error between the target-sequence reward and the actual-sequence reward.
Further, the dual constraint is constructed as follows. The dual constraint on the weight of each word and the probability of each word being selected is specifically:
passing the probability of each word being selected into the dual constraint;
when the policy gradient (PG) method is used to select actions, the code-to-annotation phase and the annotation-to-code phase follow the same principle, with the conditions and results in their reciprocal conditional probabilities exchanged; that is, the two phases are structurally identical and differ only in input and output: the input of the code-to-annotation phase is the output of the annotation-to-code phase, and vice versa.
The conditional probability of the code-to-annotation phase:
The conditional probability of the annotation-to-code phase:
The two conditional probabilities are both part of the joint probability and are both constrained by it; a probability-constraint regularization term is added to the loss function, and the probability dual regularization term is:
passing the attention weights into the dual constraint;
there is a certain symmetry between the code-to-annotation and annotation-to-code phases, and the alignment between the two is balanced by the attention mechanism; the attention dual regularization term of the code-to-annotation phase is:
The attention dual regularization term A_dual2 of the annotation-to-code phase is defined in the same way. b_i and b_i' respectively denote the weight of the i-th word in the two models, and the KL divergence measures the difference between one probability distribution p and another probability distribution q.
The total attention dual term is:
Constructing the loss function used in back propagation:
LOSS = loss1 + loss2 + l_dual + A_dual
by adopting the technical scheme, the code-annotation conversion method based on dual reinforcement learning provided by the application can train the code annotation generation model and the code generation model simultaneously by utilizing the dual between the two models. The method considers the duality of probability and attention weight, designs corresponding regularization items to restrict the duality, and can further improve the conversion accuracy between codes and notes.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required by the embodiments or the description of the prior art are briefly described below. The drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of the method of the present application.
Detailed Description
To make the technical scheme and advantages of the present application clearer, the technical scheme in the embodiments of the present application is described clearly and completely below with reference to the accompanying drawings:
Fig. 1 shows the code-annotation conversion method based on dual reinforcement learning, which proceeds as follows.
Code-to-annotation phase:
Step 1: convert the code into word vectors for representation.
Step 2: extract features of the sequence and structural information in the code word vectors using a bidirectional LSTM neural network.
Step 3: the weight of each word is obtained by using an Attention mechanism (Attention) to assign weights to each word in the word vector.
Step 4: and fusing the word vectors and the weights thereof in Hybird.
Step 5: the probability that each word is selected is calculated using a gradient descent method (PG). And performing dual constraint on the weight of each word and the probability of each word being selected;
step 6: the Monte Carlo algorithm (MC algorithm) in reinforcement learning is used for selecting the word with the highest Reward value (Reward), and the selection of the word is taken as one Action (Action) in reinforcement learning. For each word, sampling the word that has not been generated yet, sampling is completed to find n subsequent word sequences (sequences of a series of words that should appear after each word) corresponding to the word, and the BLEU evaluation method is used to calculate the matching degree of each sequence and the standard annotation in the dataset, and dividing n by the average value as the reward value of each word in reinforcement learning. The prize value is calculated by (i represents a word sequence obtained by exploration):
step 7: and updating parameters of the model according to the magnitude of the rewarding value through a neural network and reverse transmission, and updating a selection strategy by calculating the mean square error of rewarding of the target sequence and the actual sequence. The actual sequence rewards are rewards corresponding to the action sequences with the maximum average BLEU estimated value obtained through MC algorithm searching, and the target sequence rewards are rewards corresponding to the action sequences.
Annotation-to-code phase (the process is essentially the same as the code-to-annotation phase, with input and output reversed):
Step 1: convert the annotation into word vectors for representation.
Step 2: extract features of the sequence and structural information in the annotation word vectors using a bidirectional LSTM neural network.
Step 3: use an attention mechanism (Attention) to assign a weight to each word in the word vector, obtaining the weight of each word.
Step 4: fuse the word vectors with their weights in the Hybrid module.
Step 5: calculate the probability of each word being selected using the policy gradient (PG) method, and apply the dual constraint to the weight of each word and the probability of each word being selected.
Step 6: use the Monte Carlo (MC) algorithm from reinforcement learning to select the word with the highest reward value (Reward), and treat the selection of a word as one action (Action) in reinforcement learning. For each word, sample the words not yet generated; once sampling is complete, explore n subsequent word sequences for that word (sequences of words that should follow it), calculate the degree of match between each sequence and the standard code in the dataset using the BLEU evaluation method, and divide by n to average, yielding each word's reward value in reinforcement learning. The reward value is calculated as follows (i denotes a word sequence obtained by exploration):
Step 7: update the model parameters according to the magnitude of the reward value through the neural network and back-propagation, and update the selection strategy by calculating the mean square error between the target-sequence reward and the actual-sequence reward. The actual-sequence reward is the reward of the action sequence with the largest average BLEU estimate found by the MC search; the target-sequence reward is the reward of the action sequence generated by the model with the highest probability.
The dual constraint is constructed as follows:
Step 1: pass the probabilities into the dual constraint.
When the policy gradient (PG) method is used to select actions, the code-to-annotation phase and the annotation-to-code phase follow the same principle, with the conditions and results in their reciprocal conditional probabilities exchanged; that is, the two phases are structurally identical and differ only in input and output: the input of the code-to-annotation phase is the output of the annotation-to-code phase, and vice versa.
The conditional probability of the code-to-annotation phase:
The conditional probability of the annotation-to-code phase:
Both are part of the joint probability and are constrained by it, so a probability-constraint regularization term can be added to the loss function. The probability dual regularization term is:
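The formula images did not survive extraction. Based on the standard dual supervised learning formulation that the surrounding text paraphrases, a plausible reconstruction (x denoting code, y its annotation, and P-hat the marginal language-model estimates) is:

```latex
% Conditional probabilities of the two directions:
P(y \mid x; \theta_{xy}) = \prod_{t} P(y_t \mid y_{<t}, x; \theta_{xy}), \qquad
P(x \mid y; \theta_{yx}) = \prod_{t} P(x_t \mid x_{<t}, y; \theta_{yx})

% Duality of the joint probability P(x, y):
P(x)\, P(y \mid x; \theta_{xy}) \;=\; P(x, y) \;=\; P(y)\, P(x \mid y; \theta_{yx})

% Probability dual regularization term added to the loss:
l_{dual} = \bigl( \log \hat{P}(x) + \log P(y \mid x; \theta_{xy})
        - \log \hat{P}(y) - \log P(x \mid y; \theta_{yx}) \bigr)^{2}
```

This reconstruction is an assumption; the patent's original equations may differ in notation or detail.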
step 2: the attention weights are passed into the dual constraint.
There is a certain symmetry between the code-to-annotation and annotation-to-code phases, and the alignment between the two can be balanced by the attention mechanism. The attention dual regularization term of the code-to-annotation phase is:
The attention dual regularization term A_dual2 of the annotation-to-code phase is defined in the same way. b_i and b_i' respectively denote the weight of the i-th word in the two models, and the KL divergence measures the difference between one probability distribution p and another probability distribution q.
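The attention dual term can be sketched as an explicit KL divergence between the two models' per-word attention weights. Symmetrising the two KL directions below is an assumption, since the extracted text does not show which direction (or combination) the patent uses.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) between two discrete attention distributions;
    eps guards against zero entries before normalisation."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def attention_dual_term(b, b_prime):
    """Penalise disagreement between the per-word attention weights b_i of the
    code-to-annotation model and b'_i of the annotation-to-code model."""
    return 0.5 * (kl(b, b_prime) + kl(b_prime, b))

b = [0.7, 0.2, 0.1]        # toy attention weights, model 1
bp = [0.6, 0.3, 0.1]       # toy attention weights, model 2
print(attention_dual_term(b, bp) >= 0.0)   # KL terms are non-negative
print(attention_dual_term(b, b) < 1e-9)    # identical attention: no penalty
```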
The total attention dual term is:
step 3: a Loss function (Loss function) used in back propagation is constructed.
LOSS = loss1 + loss2 + l_dual + A_dual
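The loss combination of Step 3 is then a simple sum of the four terms. In the sketch below, the optional weighting coefficients `lam` and `mu` are an assumption (the patent sums the terms directly):

```python
def total_loss(loss1, loss2, l_dual, a_dual, lam=1.0, mu=1.0):
    """Combined objective LOSS = loss1 + loss2 + l_dual + A_dual, where loss1
    and loss2 are the per-direction losses and the dual terms act as
    regularizers; lam/mu allow optional reweighting (an assumption)."""
    return loss1 + loss2 + lam * l_dual + mu * a_dual

print(total_loss(1.0, 1.0, 0.5, 0.5))  # → 3.0
```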
The foregoing is only a preferred embodiment of the present application, but the scope of protection is not limited thereto; any equivalent substitution or modification made by a person skilled in the art according to the technical scheme and inventive concept of the present application shall be covered by the scope of protection of the present application.

Claims (2)

1. A code-annotation conversion method based on dual reinforcement learning, characterized by comprising:
a code-to-annotation phase:
establishing a code annotation generation model, converting the code into word vectors, and extracting features of the sequence and structural information in the code word vectors using a bidirectional LSTM neural network;
using an attention mechanism to assign a weight to each word in the word vector, obtaining the weight of each word;
fusing the word vectors with their weights, and calculating the probability of each word being selected using a policy gradient method;
applying a dual constraint to the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value using a Monte Carlo algorithm from reinforcement learning, and treating the selection of a word as one action in reinforcement learning; for each word, sampling and exploring the words not yet generated to obtain n subsequent word sequences for that word, calculating the degree of match between each sequence and the standard annotations in the dataset using the BLEU evaluation method, and dividing by n to average, yielding each word's reward value in reinforcement learning;
updating the parameters of the code-to-annotation process according to the magnitude of the reward value through the neural network and back-propagation, and updating the selection strategy by calculating the mean square error between the target-sequence reward and the actual-sequence reward;
an annotation-to-code phase:
converting the annotation into word vectors, and extracting features of the sequence and structural information in the annotation word vectors using a bidirectional LSTM neural network;
using an attention mechanism to assign a weight to each word in the word vector, obtaining the weight of each word;
fusing the word vectors with their weights, and calculating the probability of each word being selected using a policy gradient method;
applying a dual constraint to the weight of each word and the probability of each word being selected;
selecting the word with the highest reward value using a Monte Carlo algorithm from reinforcement learning, treating the selection of a word as one action in reinforcement learning, sampling the words not yet generated after each word, and once sampling is complete, exploring n subsequent word sequences for that word, calculating the degree of match between each sequence and the standard code in the dataset using the BLEU evaluation method, and dividing by n to average, yielding each word's reward value in reinforcement learning;
updating the parameters of the annotation-to-code process according to the magnitude of the reward value through the neural network and back-propagation, and updating the selection strategy by calculating the mean square error between the target-sequence reward and the actual-sequence reward.
2. The method of claim 1, further characterized in that the dual constraint is constructed as follows; the dual constraint on the weight of each word and the probability of each word being selected is specifically:
passing the probability of each word being selected into the dual constraint;
when the policy gradient (PG) method is used to select actions, the code-to-annotation phase and the annotation-to-code phase follow the same principle, with the conditions and results in their reciprocal conditional probabilities exchanged;
the conditional probability of the code-to-annotation phase:
the conditional probability of the annotation-to-code phase:
the two conditional probabilities are both part of the joint probability and are both constrained by it; a probability-constraint regularization term is added to the loss function, and the probability dual regularization term is:
passing the attention weights into the dual constraint;
there is a certain symmetry between the code-to-annotation and annotation-to-code phases, and the alignment between the two is balanced by the attention mechanism; the attention dual regularization term of the code-to-annotation phase is:
the attention dual regularization term A_dual2 of the annotation-to-code phase is defined in the same way; b_i and b_i' respectively denote the weight of the i-th word in the two models, and the KL divergence measures the difference between one probability distribution p and another probability distribution q;
the total attention dual term is:
constructing the loss function used in back propagation:
LOSS = loss1 + loss2 + l_dual + A_dual
CN202010085043.XA 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning Active CN111290756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010085043.XA CN111290756B (en) 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010085043.XA CN111290756B (en) 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning

Publications (2)

Publication Number Publication Date
CN111290756A CN111290756A (en) 2020-06-16
CN111290756B true CN111290756B (en) 2023-08-18

Family

ID=71026709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010085043.XA Active CN111290756B (en) 2020-02-10 2020-02-10 Code-annotation conversion method based on dual reinforcement learning

Country Status (1)

Country Link
CN (1) CN111290756B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857728B (en) * 2020-07-22 2021-08-31 中山大学 Code abstract generation method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106021410A (en) * 2016-05-12 2016-10-12 中国科学院软件研究所 Source code annotation quality evaluation method based on machine learning
CN108491208A (en) * 2018-01-31 2018-09-04 中山大学 A kind of code annotation sorting technique based on neural network model
CN109799990A (en) * 2017-11-16 2019-05-24 中标软件有限公司 Source code annotates automatic generation method and system
CN109960506A (en) * 2018-12-03 2019-07-02 复旦大学 A kind of code annotation generation method based on structure perception
CN110427464A (en) * 2019-08-13 2019-11-08 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus of code vector generation
CN110705273A (en) * 2019-09-02 2020-01-17 腾讯科技(深圳)有限公司 Information processing method and device based on neural network, medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050027664A1 (en) * 2003-07-31 2005-02-03 Johnson David E. Interactive machine learning system for automated annotation of information in text

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106021410A (en) * 2016-05-12 2016-10-12 中国科学院软件研究所 Source code annotation quality evaluation method based on machine learning
CN109799990A (en) * 2017-11-16 2019-05-24 中标软件有限公司 Source code annotates automatic generation method and system
CN108491208A (en) * 2018-01-31 2018-09-04 中山大学 A kind of code annotation sorting technique based on neural network model
CN109960506A (en) * 2018-12-03 2019-07-02 复旦大学 A kind of code annotation generation method based on structure perception
CN110427464A (en) * 2019-08-13 2019-11-08 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus of code vector generation
CN110705273A (en) * 2019-09-02 2020-01-17 腾讯科技(深圳)有限公司 Information processing method and device based on neural network, medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Si Nianwen; Wang Hengjun; Li Wei; Shan Yidong; Xie Pengcheng. A Chinese part-of-speech tagging model based on attention long short-term memory networks. Computer Science, 2018, (04), full text. *

Also Published As

Publication number Publication date
CN111290756A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
US11501182B2 (en) Method and apparatus for generating model
US20210232948A1 (en) Question responding apparatus, question responding method and program
US11475220B2 (en) Predicting joint intent-slot structure
CN109857846B (en) Method and device for matching user question and knowledge point
CN108701253A (en) The target output training neural network of operating specification
JP6498095B2 (en) Word embedding learning device, text evaluation device, method, and program
CN114385178A (en) Code generation method based on abstract syntax tree structure information enhancement
CN112184391A (en) Recommendation model training method, medium, electronic device and recommendation model
JP7430820B2 (en) Sorting model training method and device, electronic equipment, computer readable storage medium, computer program
CN113128206B (en) Question generation method based on word importance weighting
JP2018147392A (en) Model learning device, score calculation device, method, data structure, and program
CN111291175B (en) Method for automatically generating submitted demand abstract based on strategy gradient algorithm
CN115082920A (en) Deep learning model training method, image processing method and device
CN112463989A (en) Knowledge graph-based information acquisition method and system
CN112000788B (en) Data processing method, device and computer readable storage medium
CN113791757A (en) Software requirement and code mapping method and system
CN112925857A (en) Digital information driven system and method for predicting associations based on predicate type
CN113254716A (en) Video clip retrieval method and device, electronic equipment and readable storage medium
CN111290756B (en) Code-annotation conversion method based on dual reinforcement learning
CN115408551A (en) Medical image-text data mutual detection method, device, equipment and readable storage medium
CN112989803B (en) Entity link prediction method based on topic vector learning
CN111161238A (en) Image quality evaluation method and device, electronic device, and storage medium
CN115631008B (en) Commodity recommendation method, device, equipment and medium
US20240005170A1 (en) Recommendation method, apparatus, electronic device, and storage medium
US20240020531A1 (en) System and Method for Transforming a Trained Artificial Intelligence Model Into a Trustworthy Artificial Intelligence Model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant