CN117806972B - Multi-scale time sequence analysis-based modified code quality assessment method - Google Patents

Multi-scale time sequence analysis-based modified code quality assessment method

Info

Publication number
CN117806972B
Authority
CN
China
Prior art keywords
sequence
integrated
code
versions
layer
Prior art date
Legal status
Active
Application number
CN202410009502.4A
Other languages
Chinese (zh)
Other versions
CN117806972A (en)
Inventor
李英玲
黄磊
王子翱
任书仪
杨海宁
Current Assignee
Southwest Minzu University
Original Assignee
Southwest Minzu University
Application filed by Southwest Minzu University
Priority to CN202410009502.4A
Publication of CN117806972A
Application granted
Publication of CN117806972B
Status: Active


Abstract

The invention discloses a modified code quality assessment method based on multi-scale time sequence analysis, and relates to the technical field of software testing. Firstly, the continuous-integration evolution track is divided in time into stable sequence segments and abnormal sequence segments, from which an integrated sequence sample set is constructed; secondly, on the basis of this sample set, a sequence similarity model is trained that uses dual attention to extract feature dependencies between sequences and within sequences; finally, the model performs a weighted comprehensive evaluation over multiple time scales to judge the quality of the changed code, namely whether it contains defects, and code reviews of defect-free changes are screened out, thereby reducing the code review workload and effectively improving code review efficiency.

Description

Multi-scale time sequence analysis-based modified code quality assessment method
Technical Field
The invention relates to the technical field of software testing, in particular to a modified code quality assessment method based on multi-scale time sequence analysis.
Background
With the widespread adoption of continuous integration and DevOps, a large number of code reviews are required to ensure integrated code quality. Studies have shown that over 70% of code submissions are free of defects, so reviewing all submissions without differentiation wastes a significant amount of time and resources and raises code review costs. Among existing studies, some use autoencoders and variational autoencoders in machine learning to predict the binary result of code review; others use machine learning to predict whether code submissions will be merged into the mainline code base. Later, researchers proposed transforming the abstract syntax tree of code into a simplified graph and evaluating the code through convolutional neural networks. In addition, in the last two years academia and industry have jointly proposed defect prediction at the finest granularity, namely just-in-time defect prediction, including just-in-time defect prediction based on semantic features of different granularities, effort-aware just-in-time defect prediction based on deep learning, just-in-time defect prediction based on supervised and unsupervised learning, and just-in-time defect prediction in within-project and cross-project scenarios. In short, quality assessment of changed code has gained widespread attention in academia and industry.
However, existing methods for evaluating the quality of changed code use only manually designed features from static analysis of the current version, or coarse-grained code semantic and structural features, and fail to model the dynamic characteristics of the changed code during its integrated evolution and its change patterns at different granularities. As a result, quality evaluation is inaccurate, unnecessary code reviews cannot be screened out reliably, and the code review workload increases.
Disclosure of Invention
Aiming at the above deficiencies in the prior art, the modified code quality assessment method based on multi-scale time sequence analysis addresses the problems that, because the prior art does not exploit the dynamic characteristics of the changed code during its integrated evolution or its change patterns at different granularities, quality evaluation is inaccurate, unnecessary code reviews cannot be screened out reliably, and the code review workload is increased.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
The method for evaluating the quality of the changed codes based on the multi-scale time sequence analysis comprises the following steps:
S1, acquiring historical integrated versions, and constructing an integrated version track based on the historical integrated versions;
S2, extracting positive and negative sample sequence pairs from the integrated version track and constructing an integrated sequence sample set;
S3, constructing a sequence similarity model, and inputting the integrated sequence sample set into the sequence similarity model for training to obtain a trained sequence similarity model;
S4, acquiring adjacent subsequences corresponding to project code change nodes, and performing multi-scale detection and weighted summarization by using the trained sequence similarity model to obtain corresponding code quality evaluation results; and judging the quality of the corresponding change codes according to the code quality evaluation result, and completing the evaluation of the quality of the change codes.
Further, step S1 further includes:
S1-1, processing the historical integrated versions along the time dimension to obtain integrated anomaly indexes; acquiring the corresponding evolution track based on the integrated anomaly indexes;
S1-2, carrying out segment identification on the evolution track to obtain an identified evolution track, namely the integrated version track.
Further, the formula of step S1-1 is:
AnomalyScore(i) = fault(t_i)
P = [AnomalyScore_1, AnomalyScore_2, ..., AnomalyScore_n]
Wherein AnomalyScore(i) represents the integrated anomaly index of the i-th historical integrated version, t_i represents the i-th historical integrated version in time order, fault(t_i) represents the integration error function, i.e., the number of integration errors of the historical integrated version t_i, P represents the evolution track, n represents the total number of historical integrated versions, and AnomalyScore_1, AnomalyScore_2, ..., AnomalyScore_n represent the integrated anomaly indexes corresponding to the respective historical integrated versions.
Further, the specific process of step S1-2 is as follows:
S1-2-1, segmenting the evolution track according to the integrated anomaly index and three key time points, namely the introduction of the code change problem, the end of the integration anomaly, and the elimination of the code change problem, to obtain a segmented evolution track;
S1-2-2, identifying the segmented evolution track to obtain an abnormal sequence segment and a stable sequence segment, namely the identified evolution track.
Further, the positive samples in step S2 are sequences of codes in a stable phase, and the negative samples are sequences of codes before and after defect introduction.
Further, the sequence similarity model in step S3 comprises an attention module and an average module which are sequentially connected in series; the attention module comprises a sequence segment attention layer and an intra-sequence attention layer which are connected in parallel; the sequence segment attention layer comprises a sequence segment embedded layer, a sequence segment multi-head attention layer and a sequence segment upsampling layer which are sequentially connected in series; the intra-sequence attention layer comprises an intra-sequence embedded layer, an intra-sequence multi-head attention layer and an intra-sequence upsampling layer which are sequentially connected in series; the average module includes parallel average layers.
Further, step S3 further includes:
S3-1, constructing a sequence sample pair set based on the integrated sequence sample set;
S3-2, respectively inputting the sequence sample pair set into the sequence segment embedded layer and the intra-sequence embedded layer to obtain the corresponding sequence segment embedded vector and intra-sequence embedded vector;
S3-3, respectively inputting the sequence segment embedded vector and the intra-sequence embedded vector into the sequence segment multi-head attention layer and the intra-sequence multi-head attention layer to obtain the corresponding feature vectors between integrated versions and feature vectors between time points of the integrated versions;
S3-4, respectively inputting the feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions into the sequence segment upsampling layer and the intra-sequence upsampling layer to obtain the sampled feature vectors between the integrated versions and feature vectors between the time points of the integrated versions;
S3-5, respectively inputting the sampled feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions to an average layer to obtain final feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions;
S3-6, according to the formula:
obtaining the loss function Loss{P', N'; X}; wherein D represents similarity, P' represents the feature vectors between the final integrated versions, N' represents the feature vectors between the time points of the final integrated versions, X represents the input sequence segments, namely the sequence sample pair set, and stopgrad(·) represents the stop-gradient (asynchronous training) operation;
S3-7, adjusting parameters of the sequence similarity model through the loss function to obtain the trained sequence similarity model.
Further, step S4 further includes:
S4-1, setting M windows with different scales;
S4-2, respectively inputting adjacent subsequences into the trained sequence similarity model under each scale detection window to obtain detection results corresponding to each scale;
S4-3, carrying out a weighted summarization operation on the detection results corresponding to each scale to obtain a code quality evaluation result;
S4-4, judging the quality of the corresponding changed code according to the code quality evaluation result, and completing the evaluation of the quality of the changed code.
The beneficial effects of the invention are as follows: the evaluation method considers the relationships between integrated versions and divides the integrated evolution track in time into stable sequence segments and abnormal sequence segments, so that the relationships among the features of stable sequence segments and among the features of abnormal sequence segments can be extracted accurately, effectively improving the model's accuracy in recognizing anomalies; dual multi-head self-attention mechanisms are used between sequences and within sequences, so that both inter-sequence dependencies and intra-sequence feature dependencies can be extracted; and the sequences are divided at multiple scales, which helps the model identify abnormal sequence segments of different scales, improves the accuracy of evaluating the quality of changed code, and further improves the efficiency of code review.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the evaluation process of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments, and all inventions that make use of the inventive concept fall within the spirit and scope of the present invention as defined in the appended claims.
As shown in fig. 1, a method for evaluating quality of a modified code based on multi-scale timing analysis includes the steps of:
S1, acquiring historical integrated versions, and constructing an integrated version track based on the historical integrated versions;
step S1 further comprises:
S1-1, processing the historical integrated versions along the time dimension to obtain integrated anomaly indexes; acquiring the corresponding evolution track based on the integrated anomaly indexes;
the formula of step S1-1 is:
AnomalyScore(i) = fault(t_i)
P = [AnomalyScore_1, AnomalyScore_2, ..., AnomalyScore_n]
Wherein AnomalyScore(i) represents the integrated anomaly index of the i-th historical integrated version, t_i represents the i-th historical integrated version in time order, fault(t_i) represents the integration error function, i.e., the number of integration errors of the historical integrated version t_i, P represents the evolution track, n represents the total number of historical integrated versions, and AnomalyScore_1, AnomalyScore_2, ..., AnomalyScore_n represent the integrated anomaly indexes corresponding to the respective historical integrated versions.
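For illustration only, the following is a minimal Python sketch of how the integrated anomaly index and evolution track could be computed from a chronologically ordered history; the data layout and the `integration_errors` field are assumptions for this sketch, not part of the patent.

```python
from typing import List, Dict

def build_evolution_track(history: List[Dict]) -> List[int]:
    """Compute P = [AnomalyScore_1, ..., AnomalyScore_n] for a chronologically
    ordered list of historical integrated versions.

    Each element of `history` is assumed to carry the number of integration
    errors observed for that version (field name is hypothetical).
    """
    track = []
    for version in history:                            # version t_i, in time order
        anomaly_score = version["integration_errors"]  # fault(t_i)
        track.append(anomaly_score)
    return track

# Example: five historical integrated versions
history = [
    {"id": "v1", "integration_errors": 0},
    {"id": "v2", "integration_errors": 0},
    {"id": "v3", "integration_errors": 2},   # errors introduced
    {"id": "v4", "integration_errors": 3},
    {"id": "v5", "integration_errors": 0},   # errors repaired
]
P = build_evolution_track(history)            # -> [0, 0, 2, 3, 0]
```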
S1-2, carrying out sectional identification on the evolution track to obtain an identified evolution track, namely an integrated version track.
The specific process of the step S1-2 is as follows:
S1-2-1, segmenting the evolution track according to the integrated anomaly index and three key time points, namely the introduction of the code change problem, the end of the integration anomaly, and the elimination of the code change problem, to obtain a segmented evolution track;
S1-2-2, identifying the segmented evolution track to obtain abnormal sequence segments and stable sequence segments, namely the identified evolution track. A stable sequence segment is a segment in which no integration error is introduced during continuous integration, the overall trend is stable, and repairs of integration errors reduce the integrated anomaly index; an abnormal sequence segment is a segment in which the integrated commit versions introduce more integration errors and the integrated anomaly index rises overall.
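A rough sketch of the segmentation, under the simplifying assumption that an abnormal segment runs from the point where the anomaly index starts to rise (problem introduction) to the point where it returns to zero (problem elimination); the patent's exact identification of the three key time points may differ.

```python
def segment_track(P):
    """Split an evolution track into stable and abnormal sequence segments.

    Simplifying assumption: an abnormal segment starts where AnomalyScore
    rises above the preceding value (problem introduction) and ends where it
    falls back to zero (problem elimination); everything else is stable.
    """
    segments = []          # list of (label, start_index, end_index), inclusive
    start, label = 0, "stable"
    for i in range(1, len(P)):
        rising = P[i] > P[i - 1]
        if label == "stable" and rising:          # problem introduced
            segments.append(("stable", start, i - 1))
            start, label = i, "abnormal"
        elif label == "abnormal" and P[i] == 0:   # problem eliminated
            segments.append(("abnormal", start, i - 1))
            start, label = i, "stable"
    segments.append((label, start, len(P) - 1))
    return segments

print(segment_track([0, 0, 2, 3, 0]))
# [('stable', 0, 1), ('abnormal', 2, 3), ('stable', 4, 4)]
```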
S2, extracting the integrated version track to obtain positive and negative sample sequence pairs and constructing an integrated sequence sample set;
The positive samples in step S2 are sequences in which the code is in a stable phase, namely <P[v_o, v_{o+w}], P[v_{i-w-1}, v_{i-1}]>; the negative samples are the sequences before and after the introduction of a code defect, namely <P[v_{i-w-1}, v_{i-1}], P[v_i, v_{i+w}]>; wherein w represents an observation window used to cover features at different time scales in the sequence, v_o and v_{o+w} represent the o-th and (o+w)-th versions after integration anomalies have been eliminated, respectively, and v_{i-w-1}, v_{i-1}, v_i and v_{i+w} represent the (i-w-1)-th, (i-1)-th, i-th and (i+w)-th historical integrated versions, respectively.
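A sketch of how one positive and one negative sequence pair could be cut out of the track P for a given observation window w; the choice of the stable-phase anchor index o and the defect index i are inputs assumed here for illustration.

```python
def build_sample_pairs(P, defect_idx, stable_idx, w):
    """Build one positive and one negative sequence pair from track P.

    P           : evolution track (list of anomaly scores)
    defect_idx  : index i of the version that introduced the defect
    stable_idx  : index o of a version inside a stable phase (assumption)
    w           : observation window length
    Positive pair: <P[o .. o+w],     P[i-w-1 .. i-1]>  (both from stable code)
    Negative pair: <P[i-w-1 .. i-1], P[i .. i+w]>      (before vs. after defect)
    """
    i, o = defect_idx, stable_idx
    positive = (P[o : o + w + 1], P[i - w - 1 : i])
    negative = (P[i - w - 1 : i], P[i : i + w + 1])
    return positive, negative

P = [0, 0, 0, 0, 1, 2, 3, 1, 0, 0]
pos, neg = build_sample_pairs(P, defect_idx=4, stable_idx=0, w=2)
# pos = ([0, 0, 0], [0, 0, 0]);  neg = ([0, 0, 0], [1, 2, 3])
```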
S3, constructing a sequence similarity model, and inputting an integrated sequence sample set into the sequence similarity model for training to obtain a trained sequence similarity model;
the sequence similarity model in step S3 comprises an attention module and an average module which are sequentially connected in series; the attention module comprises a sequence segment attention layer and an intra-sequence attention layer which are connected in parallel; the sequence segment attention layer comprises a sequence segment embedded layer, a sequence segment multi-head attention layer and a sequence segment upsampling layer which are sequentially connected in series; the intra-sequence attention layer comprises an intra-sequence embedded layer, an intra-sequence multi-head attention layer and an intra-sequence upsampling layer which are sequentially connected in series; the average module includes parallel average layers.
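To make the two-branch structure concrete, the following is a rough PyTorch sketch with the layers named above (segment embedding, multi-head attention and upsampling in one branch; intra-sequence embedding, multi-head attention and upsampling in the other; averaging at the end). The dimensions, the use of nn.MultiheadAttention, and linear-interpolation upsampling are assumptions for illustration, not the patent's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionSimilarityModel(nn.Module):
    """Sketch of the sequence similarity model: a sequence-segment branch and an
    intra-sequence branch in parallel, each embedding -> multi-head attention ->
    upsampling, followed by averaging over the window positions."""

    def __init__(self, n_points, d_model=64, n_heads=4):
        super().__init__()
        # Sequence-segment branch: one token per integrated version (segment).
        self.seg_embed = nn.Linear(n_points, d_model)
        self.seg_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Intra-sequence branch: one token per time point inside a version.
        self.intra_embed = nn.Linear(1, d_model)
        self.intra_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, n_versions, n_points) -- a window of integrated versions,
        # each described by n_points time-point features.
        b, n_versions, n_points = x.shape

        # Branch 1: dependencies BETWEEN integrated versions.
        seg = self.seg_embed(x)                          # (b, n_versions, d)
        seg, _ = self.seg_attn(seg, seg, seg)
        # Upsample back to n_versions * n_points positions (assumption).
        seg = F.interpolate(seg.transpose(1, 2),
                            size=n_versions * n_points,
                            mode="linear").transpose(1, 2)

        # Branch 2: dependencies between time points WITHIN each version.
        intra = self.intra_embed(x.reshape(b, n_versions * n_points, 1))
        intra, _ = self.intra_attn(intra, intra, intra)

        # Average module: mean over the window positions of each branch.
        inter_features = seg.mean(dim=1)                 # inter-version features
        intra_features = intra.mean(dim=1)               # intra-version features
        return inter_features, intra_features

model = DualAttentionSimilarityModel(n_points=8)
inter_f, intra_f = model(torch.randn(2, 5, 8))           # 2 samples, window of 5
```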
Step S3 further comprises:
S3-1, constructing a sequence sample pair set based on the integrated sequence sample set;
S3-2, respectively inputting the sequence sample pair set into the sequence segment embedded layer and the intra-sequence embedded layer to obtain the corresponding sequence segment embedded vector and intra-sequence embedded vector. The embedding between sequence segments works over a time window: one time window contains several integrated versions, each consisting of multiple time points, and the representation is learned across integrated versions by taking each integrated version (segment) as a unit, i.e., the dependency relationships between different integrated versions are learned. The embedding within a sequence segment learns the representation between the time points inside each integrated version, i.e., it embeds the dependencies within each integrated version.
S3-3, respectively inputting the sequence segment embedded vector and the sequence inner embedded vector into a sequence segment multi-head attention layer and a sequence inner multi-head attention layer to obtain corresponding feature vectors between integrated versions and feature vectors between time points of the integrated versions;
The formula of step S3-3 is as follows:
Attn_i = softmax(Q_i K_i^T / √d_model)
Attn = Concat(Attn_1, ..., Attn_H) W
Wherein X_i represents the i-th sequence in the input data X, namely the sequence segment embedded vector or the intra-sequence embedded vector, Q_i and K_i represent the query and key vectors of the i-th head of the sequence segment multi-head attention layer or the intra-sequence multi-head attention layer, K_i^T represents the transpose of the key vectors, d_model represents the dimension of the sequence segment multi-head attention layer and the intra-sequence multi-head attention layer, softmax(·) represents the normalization function, Attn_i represents the attention score of the i-th head, H represents the total number of heads, Attn represents the total attention score of the multi-head attention layer, Concat(·) represents the concatenation function, W represents the weight matrix of the multi-head attention layer, and Attn_1, ..., Attn_H represent the attention scores of the first to the H-th head.
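For readers who prefer code to notation, a small NumPy sketch of the formula as stated above (per-head scores from Q and K only, scaled by √d_model, then concatenated and multiplied by W); the head splitting and the shape chosen for W are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention_scores(Q, K, W, H):
    """Attn_i = softmax(Q_i K_i^T / sqrt(d_model)); Attn = Concat(Attn_1..Attn_H) W.

    Q, K : (seq_len, d_model) query/key matrices (assumed derived from X_i)
    W    : (H * seq_len, d_out) output weight matrix (shape is an assumption)
    H    : number of heads; each head uses one slice of the model dimension
    """
    seq_len, d_model = Q.shape
    d_head = d_model // H
    heads = []
    for i in range(H):
        Qi = Q[:, i * d_head:(i + 1) * d_head]
        Ki = K[:, i * d_head:(i + 1) * d_head]
        # Scale by d_model as in the formula above (not d_head).
        heads.append(softmax(Qi @ Ki.T / np.sqrt(d_model)))   # (seq_len, seq_len)
    return np.concatenate(heads, axis=1) @ W                  # Concat(...) W

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(6, 16)), rng.normal(size=(6, 16))
Attn = multi_head_attention_scores(Q, K, W=rng.normal(size=(4 * 6, 8)), H=4)
```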
S3-4, respectively inputting the feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions to an inter-sequence upsampling layer and an intra-sequence upsampling layer to obtain the sampled feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions; wherein, the up-sampling formula is as follows:
Wherein N represents the final sequence segment representation, i.e., the feature vectors between the sampled integrated versions, P represents the final intra-sequence representation, i.e., the feature vectors between the time points of the sampled integrated versions, Σ(·) represents the summation function, Upsampling(·) represents the upsampling function, PatchList represents the sequence pair list within a time window, and Attn_N and Attn_P represent the corresponding feature vectors between integrated versions and between the time points of the integrated versions, respectively.
For the sequence segment embedded vector, after passing through the multi-head self-attention module it carries only the dependency relationships among the integrated versions and no dependency relationships within an integrated version, and upsampling produces several similar integrated versions. Because these relationships among integrated versions are all generated from one integrated version, they can be regarded as the dependency relationships within that integrated version.
For the intra-sequence embedded vector, after passing through the multi-head self-attention module it carries only the dependency relationships within an integrated version and no dependency relationships among the integrated versions, and upsampling likewise produces several similar integrated versions. Although these are all generated by upsampling one integrated version, each of them is different, so they can be regarded as different integrated versions, from which the dependency relationships among integrated versions can be obtained.
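As a hedged illustration of the upsampling just described, the sketch below expands the attention output of the segment branch so that both branches end up with the same number of positions; plain repetition is used as a stand-in for the patent's Upsampling(·) function.

```python
import numpy as np

def upsample_by_repeat(attn_out, factor):
    """Expand attention output along the sequence axis by repeating each row.

    attn_out : (n, d) attention features for n positions
    factor   : how many positions each row should cover after upsampling
    Repetition is only a stand-in for the patent's Upsampling(.) function.
    """
    return np.repeat(attn_out, repeats=factor, axis=0)   # (n * factor, d)

attn_segments = np.random.rand(5, 64)       # one row per integrated version
attn_points = np.random.rand(40, 64)        # one row per time point (5 x 8)

# The segment branch is upsampled so both branches describe 40 positions.
upsampled_segments = upsample_by_repeat(attn_segments, factor=8)
assert upsampled_segments.shape == attn_points.shape
```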
S3-5, respectively inputting the sampled feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions to an average layer to obtain final feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions; the formula of the average layer is:
Avg(P) = (1/m) Σ_{i=1}^{m} P_i
Avg(N) = (1/m) Σ_{i=1}^{m} N_i
Where m represents the window length, P_i represents the feature vector between the i-th sampled integrated versions, N_i represents the feature vector between the time points of the i-th sampled integrated version, and Avg(P) and Avg(N) represent the final feature vectors between the integrated versions and between the time points of the integrated versions, respectively.
S3-6, according to the formula:
Obtaining the loss function Loss{P', N'; X}; wherein D represents similarity, P' represents the feature vectors between the final integrated versions, i.e., Avg(P), N' represents the feature vectors between the time points of the final integrated versions, i.e., Avg(N), X represents the input sequence segments, i.e., the sequence sample pair set, and stopgrad(·) represents the stop-gradient (asynchronous training) operation. The obtained loss function Loss{P', N'; X} is used to judge whether the input sequence sample pair is a normal sample pair: if Loss{P', N'; X} is larger than the similarity threshold δ, the sequence sample pair is a normal sample pair; otherwise, it is an abnormal sample pair. The similarity threshold δ is set according to actual requirements.
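The loss is described only in terms of a similarity D, a stop-gradient operation and the two branch outputs P' and N'; a symmetric cosine-similarity formulation with stop-gradient, in the spirit of SimSiam, is one plausible reading and is sketched below as an assumption, not as the patent's exact formula.

```python
import torch
import torch.nn.functional as F

def similarity_loss(p_prime, n_prime):
    """Symmetric stop-gradient similarity over the two branch outputs P' and N'.

    ASSUMPTION: D is taken as cosine similarity and the loss is a SimSiam-style
    symmetric form; the patent only states that D, stopgrad(.), P' and N' are used.
    """
    def D(a, b):
        # stopgrad(.) realised via detach(): no gradient flows through b.
        return F.cosine_similarity(a, b.detach(), dim=-1).mean()

    return 0.5 * D(p_prime, n_prime) + 0.5 * D(n_prime, p_prime)

# A pair is judged normal if the loss exceeds the similarity threshold delta.
delta = 0.5
p_prime, n_prime = torch.randn(4, 64), torch.randn(4, 64)
loss = similarity_loss(p_prime, n_prime)
is_normal_pair = loss.item() > delta
```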
S3-7, adjusting parameters of the sequence similarity model through a loss function to obtain the trained sequence similarity model.
S4, acquiring adjacent subsequences corresponding to project code change nodes, and performing multi-scale detection and weighted summarization by using the trained sequence similarity model to obtain corresponding code quality evaluation results; and judging the quality of the corresponding change codes according to the code quality evaluation result, and completing the evaluation of the quality of the change codes.
As shown in fig. 2, step S4 further includes:
S4-1, setting M windows with different scales, the i-th window being assigned a weight used in the subsequent weighted summarization;
S4-2, respectively inputting adjacent subsequences into the trained sequence similarity model under each scale detection window to obtain detection results corresponding to each scale;
and S4-3, carrying out a weighted summarization operation on the detection results corresponding to each scale to obtain a code quality evaluation result.
S4-4, judging the quality of the corresponding changed code according to the code quality evaluation result, namely judging whether the code quality evaluation result is larger than a threshold ε: if so, the changed code is judged to contain a defect, i.e., it is low-quality code; otherwise, the changed code is judged to contain no defect, i.e., it is high-quality code. Here a defect refers to an integration error.
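A sketch of step S4 as described: the change node's adjacent subsequences are scored under M windows of different scales and the per-scale results are combined by a weighted sum before thresholding. The weighting scheme (here proportional to window size) and the scoring callable are assumptions, since the window-weight formula is not reproduced above.

```python
def assess_change_quality(score_at_scale, window_sizes, epsilon):
    """Weighted multi-scale assessment of a changed-code node.

    score_at_scale : callable(window_size) -> detection result in [0, 1],
                     assumed to wrap the trained sequence similarity model
    window_sizes   : the M detection window scales
    epsilon        : decision threshold; above it the change is judged defective
    ASSUMPTION: window weights are proportional to window size and normalised.
    """
    total = float(sum(window_sizes))
    weights = [w / total for w in window_sizes]
    quality_score = sum(wgt * score_at_scale(w)
                        for wgt, w in zip(weights, window_sizes))
    is_defective = quality_score > epsilon
    return quality_score, is_defective

# Toy usage with a dummy scorer standing in for the trained model.
score, defective = assess_change_quality(
    score_at_scale=lambda w: 0.3 if w < 8 else 0.7,
    window_sizes=[4, 8, 16],
    epsilon=0.5,
)
```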
In summary, the method considers the relationships between integrated versions, divides the historical integrated track in time into stable sequence segments and abnormal sequence segments, and constructs the integrated sample sequence set, so that the relationships among the features of stable segments and among the features of abnormal segments can be extracted accurately, effectively improving the model's accuracy in recognizing anomalies; based on the integrated sample sequence set, a sequence similarity model with a dual multi-head self-attention mechanism is trained, which helps extract inter-sequence dependencies and intra-sequence feature dependencies; and the similarity model is used to evaluate the quality of the current code change comprehensively over multiple time scales, which helps the model identify abnormal sequence segments of different scales, improves the accuracy of screening out unnecessary code reviews, and effectively reduces the code review workload.

Claims (2)

1. A change code quality assessment method based on multi-scale time sequence analysis is characterized in that: the method comprises the following steps:
S1, acquiring historical integrated versions, and constructing an integrated version track based on the historical integrated versions;
S2, extracting positive and negative sample sequence pairs from the integrated version track and constructing an integrated sequence sample set;
S3, constructing a sequence similarity model, and inputting the integrated sequence sample set into the sequence similarity model for training to obtain a trained sequence similarity model;
S4, acquiring adjacent subsequences corresponding to project code change nodes, and performing multi-scale detection and weighted summarization by using the trained sequence similarity model to obtain corresponding code quality evaluation results; judging the quality of the corresponding changed codes according to the code quality evaluation result, and completing the evaluation of the quality of the changed codes;
The step S1 further includes:
S1-1, processing the historical integrated versions along the time dimension to obtain integrated anomaly indexes; acquiring the corresponding evolution track based on the integrated anomaly indexes;
S1-2, carrying out segment identification on the evolution track to obtain an identified evolution track, namely the integrated version track; the formula of step S1-1 is as follows:
AnomalyScore(i) = fault(t_i)
P = [AnomalyScore_1, AnomalyScore_2, ..., AnomalyScore_n]
Wherein AnomalyScore(i) represents the integrated anomaly index of the i-th historical integrated version, t_i represents the i-th historical integrated version in time order, fault(t_i) represents the integration error function, i.e., the number of integration errors of the historical integrated version t_i, P represents the evolution track, n represents the total number of historical integrated versions, and AnomalyScore_1, AnomalyScore_2, ..., AnomalyScore_n represent the integrated anomaly indexes corresponding to the respective historical integrated versions;
The specific process of the step S1-2 is as follows:
S1-2-1, segmenting the evolution track according to the integrated anomaly index and three key time points, namely the introduction of the code change problem, the end of the integration anomaly, and the elimination of the code change problem, to obtain a segmented evolution track;
S1-2-2, identifying the segmented evolution track to obtain an abnormal sequence segment and a stable sequence segment, namely the identified evolution track;
The sequence similarity model in the step S3 comprises an attention module and an average module which are sequentially connected in series; the attention module comprises a sequence segment attention layer and an intra-sequence attention layer which are connected in parallel; the sequence segment attention layer comprises a sequence segment embedded layer, a sequence segment multi-head attention layer and a sequence segment up-sampling layer which are sequentially connected in series; the intra-sequence attention layer comprises an intra-sequence embedded layer, an intra-sequence multi-head attention layer and an intra-sequence upsampling layer which are sequentially connected in series; the average module comprises parallel average layers;
The step S3 further includes:
S3-1, constructing a sequence sample pair set based on the integrated sequence sample set;
S3-2, respectively inputting the sequence sample pair set into the sequence segment embedded layer and the intra-sequence embedded layer to obtain the corresponding sequence segment embedded vector and intra-sequence embedded vector;
S3-3, respectively inputting the sequence segment embedded vector and the intra-sequence embedded vector into the sequence segment multi-head attention layer and the intra-sequence multi-head attention layer to obtain the corresponding feature vectors between integrated versions and feature vectors between time points of the integrated versions;
S3-4, respectively inputting the feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions into the sequence segment upsampling layer and the intra-sequence upsampling layer to obtain the sampled feature vectors between the integrated versions and feature vectors between the time points of the integrated versions;
S3-5, respectively inputting the sampled feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions to an average layer to obtain final feature vectors between the integrated versions and the feature vectors between the time points of the integrated versions;
S3-6, according to the formula:
Obtaining the loss function Loss{P', N'; X}; wherein D represents similarity, P' represents the feature vectors between the final integrated versions, N' represents the feature vectors between the time points of the final integrated versions, X represents the input sequence segments, namely the sequence sample pair set, and stopgrad(·) represents the stop-gradient (asynchronous training) operation;
S3-7, adjusting parameters of the sequence similarity model through a loss function to obtain a trained sequence similarity model;
The step S4 further includes:
S4-1, setting M windows with different scales;
S4-2, respectively inputting adjacent subsequences into the trained sequence similarity model under each scale detection window to obtain detection results corresponding to each scale;
S4-3, carrying out a weighted summarization operation on the detection results corresponding to each scale to obtain a code quality evaluation result;
S4-4, judging the quality of the corresponding changed code according to the code quality evaluation result, and completing the evaluation of the quality of the changed code.
2. The method for evaluating quality of altered code based on multi-scale timing analysis according to claim 1, wherein: the positive samples in the step S2 are sequences of codes in a stable stage, and the negative samples are sequences of codes before and after defect introduction.
CN202410009502.4A 2024-01-03 Multi-scale time sequence analysis-based modified code quality assessment method Active CN117806972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410009502.4A CN117806972B (en) 2024-01-03 Multi-scale time sequence analysis-based modified code quality assessment method


Publications (2)

Publication Number Publication Date
CN117806972A (en) 2024-04-02
CN117806972B (en) 2024-07-02


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110858176A (en) * 2018-08-24 2020-03-03 西门子股份公司 Code quality evaluation method, device, system and storage medium
CN114169394A (en) * 2021-11-04 2022-03-11 浙江大学 Multi-variable time series prediction method for multi-scale adaptive graph learning



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant