CN109885482A - Software defect prediction method based on few-sample data learning - Google Patents

Software defect prediction method based on few-sample data learning

Info

Publication number
CN109885482A
CN109885482A (application number CN201910040317.0A)
Authority
CN
China
Prior art keywords
sample
data
study
learning
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910040317.0A
Other languages
Chinese (zh)
Inventor
赵林畅
尚赵伟
赵灵
王敏全
周晔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910040317.0A priority Critical patent/CN109885482A/en
Publication of CN109885482A publication Critical patent/CN109885482A/en
Pending legal-status Critical Current

Abstract

The present invention relates to a software defect prediction method based on few-sample data learning and belongs to the field of software engineering. The method comprises: S1: constructing a deep learning network based on Siamese networks (Siamese Dense Neural Network, SDNN), i.e. a twin fully-connected network; S2: inputting positive-sample and negative-sample data, performing few-sample learning through the SDNN, and extracting the high-level deep features of the sample pairs; S3: performing comparative learning and probability output on the high-level deep features extracted in step S2 with a metric learning function, adjusting the positive/negative sample ratio and setting the learning parameter of the function so that the metric learning function focuses more on learning the features of defective data; S4: obtaining the prediction result. Compared with the prior art, the method of the invention obtains better prediction performance on limited, high-dimensional and imbalanced data sets, remains more stable under different imbalance ratios, and obtains better prediction results with less data and less time.

Description

Software defect prediction method based on few-sample data learning
Technical field
The invention belongs to the field of software engineering and relates to a software defect prediction method based on few-sample data learning.
Background art
Software defect prediction uses existing historical data to predict whether defects exist in software. It is a vital task in software maintenance and directly affects software cost and software quality. At present, machine learning algorithms are mainly used to construct, train and evaluate models on historical data, but these historical data are often limited, high-dimensional and class-imbalanced. Traditional machine learning algorithms not only require a large amount of data to train the constructed model, but also have difficulty learning effective deep representations from high-dimensional data, especially in the early stage of software testing.
For limited software defect data, Lin Chen et al. (L. Chen, B. Fang, Z. Shang, Y. Tang, Negative samples reduction in cross-company software defects prediction, Information and Software Technology 62 (1) (2015) 67-77) proposed a two-stage transfer-learning boosting algorithm that extracts the most similar samples from cross-company data as an enlarged training set, but it easily introduces new redundant samples. Yu et al. (Q. Yu, S. Jiang, Y. Zhang, A feature matching and transfer approach for cross-company defect prediction, Journal of Systems and Software 132 (2017) 366-378) used a feature-matching algorithm to convert heterogeneous features into matched features and improve the AUC value of the model, but the algorithm complexity is high. Ma et al. (Y. Ma, G. Luo, X. Zeng, A. Chen, Transfer learning for cross-company software defect prediction, Inform. Softw. Technol. 54 (3) (2012) 248-256) started from the features relevant to the distribution of the prediction data and proposed an instance-based feature-transfer Bayesian transfer-learning model that re-weights the training data according to the characteristics of software defect data, but the algorithm has many parameters to tune.
For high-dimensional software defect data, Tong et al. (H. Tong, B. Liu, S. Wang, Software defect prediction using stacked denoising autoencoders and two-stage ensemble learning, Information and Software Technology 96 (2018) 94-111) used stacked denoising autoencoders to learn deep representations of high-dimensional data; a stacked denoising autoencoder is a powerful deep learning model composed of multiple denoising autoencoders, i.e. a feed-forward neural network with an input layer, an output layer and hidden layers. Dam et al. (H. K. Dam, T. Pham, S. W. Ng, T. Tran, J. Grundy, A. Ghose, T. Kim, C. J. Kim, A deep tree-based model for software defect prediction, arXiv:1802.00921 (2018) 1-10) used long short-term memory networks (LSTM) to learn the abstract syntax trees of high-dimensional historical software data. Yuan et al. (J. Yuan, H. Guo, Z. Jin, H. Jin, X. Zhang, J. Luo, One-shot learning for fine-grained relation extraction via convolutional siamese neural network, in: IEEE International Conference on Big Data, 2017, pp. 2194-2199) used convolutional neural networks to obtain the semantic structure information of high-dimensional data and deep representations of high-dimensional attributes. Lu et al. (G. Lu, Z. Lie, L. Hang, Deep belief network software defect prediction model, Computer Science 44 (4) (2017) 229-233) used deep belief networks to perform deep representation learning and prediction model construction directly on high-dimensional historical data. However, all of the above deep learning models require a large amount of training time and data to obtain good results.
For class-imbalanced software defect data, Cieslak et al. (D. A. Cieslak and N. V. Chawla, Start globally, optimize locally, predict globally: Improving performance on imbalanced data, in: Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on. IEEE, 2008, pp. 143-152) argued that data are mostly multi-modal and that resampling methods for handling imbalanced data should be applied locally rather than globally, and therefore proposed a resampling method based on data partitioning. Liao et al. (Jui-Jung Liao, Ching-Hui Shih, Tai-Feng Chen, and Ming-Fu Hsu, "An ensemble-based model for two-class imbalanced financial problem", Economic Modelling, Vol. 37, pp. 175-183, 2014) proposed an ensemble learning method to handle software defect data: they first use SVM techniques to handle the imbalanced data, then select features with a BP neural network, and finally build the ensemble knowledge according to rough set theory, but the classification performance depends on the quality of the training data. Sun et al. (Sun Z, Song Q, Zhu X. Using coding-based ensemble learning to improve software defect prediction. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 2012, 42(6): 1806-1817) proposed a coding-based ensemble learning method that converts class-imbalanced defect data into multi-class balanced data and avoids the class-imbalance problem through a specific coding strategy, but the coding method is relatively complex. Mairal et al. (Mairal J, Bach F, Ponce J. Task-driven dictionary learning. IEEE Trans. Pattern Anal. Mach. Intell., 2012, 34(2): 791-804) integrated the sparse coding error and the classification error into the objective function by training the classification coder coefficients and proposed a dictionary-learning-based software defect prediction model; although the model achieves good prediction accuracy in its target fields, a severely imbalanced data ratio makes the learned dictionary biased toward defect-free data.
From the literature analysis, the prior art has the following problems: 1) high algorithm complexity; 2) model performance depends on a large amount of training sample data; 3) there is no effective software defect prediction model that can simultaneously handle limited, high-dimensional and imbalanced historical software defect data.
Summary of the invention
In view of this, the purpose of the present invention is to provide a software defect prediction method based on few-sample data learning. Since existing prediction models cannot uniformly handle limited, high-dimensional and imbalanced software defect data, the invention treats such data as a few-sample learning problem, designs a twin fully-connected neural network that uses one-shot or few-sample learning on limited samples to extract deep attribute features from high-dimensional data for model training and evaluation, and designs a metric learning function on the basis of the contrastive loss function as the objective function to handle the class-imbalance problem of the data.
In order to achieve the above objectives, the invention provides the following technical scheme:
A software defect prediction method based on few-sample data learning, which specifically includes the following steps:
S1: construct a deep learning network based on Siamese networks (Siamese Dense Neural Network, SDNN), i.e. a twin fully-connected network;
S2: input positive-sample and negative-sample data, perform few-sample learning through the SDNN, and extract the high-level deep features of the sample pairs;
S3: perform comparative learning and probability output on the high-level deep features extracted in step S2 with the metric learning function, adjust the positive/negative sample ratio and set the learning parameter of the function so that the metric learning function focuses more on learning the features of defective data;
S4: obtain the prediction result.
Further, in step S1, the twin fully-connected network is a pair of identical fully-connected networks with the same structural hierarchy and parameter settings, used for learning high-dimensional data features.
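As an illustration of step S1, the following is a minimal PyTorch sketch of such a twin fully-connected network; the layer sizes, activations and embedding dimension are assumptions chosen for illustration, since the description does not fix them.

```python
import torch
import torch.nn as nn

class SiameseDenseNet(nn.Module):
    def __init__(self, input_dim: int, hidden_dims=(64, 32), embed_dim: int = 16):
        super().__init__()
        layers, prev = [], input_dim
        for h in hidden_dims:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        layers.append(nn.Linear(prev, embed_dim))
        # A single encoder reused for both inputs: this is what makes all
        # weights, biases and training parameters shared between the twins.
        self.encoder = nn.Sequential(*layers)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Each branch maps its input to the feature space H_w(.)
        return self.encoder(x1), self.encoder(x2)
```

Reusing one encoder for both inputs is what realises the parameter sharing described for the twin network.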
Further, in step S3, the metric learning function includes a distance metric function and a cosine similarity distance metric function;
Distance learning: let the input vectors X_1 and X_2 denote a positive/negative sample data pair, let w denote the shared parameters of the SDNN, and let H_w(X_1) and H_w(X_2) denote the feature mappings of the input vector pair; then the distance metric function of the twin fully-connected network is defined as:
D_w(X_1, X_2) = ||H_w(X_1) - H_w(X_2)||
where D_w(X_1, X_2) is the Euclidean distance used to measure the distance between the input sample pair.
Further, in step S3, similarity learning: the Euclidean distance metric alone learns the between-class distance of a sample pair but ignores its within-class distance. Therefore, on the basis of the distance metric function, the invention introduces a within-class distance learning function, namely the cosine similarity distance metric function (cosine-proximity), to reinforce within-class similarity learning of sample pairs and enhance the discrimination between positive and negative sample pairs.
In the cosine similarity distance metric function L_i(w, y, X_1, X_2), i denotes the i-th network layer, N is the total number of samples, and y^(i) is the label value of the samples at the i-th layer; L_i(w, y, X_1, X_2) takes values in [-1, 1], and the closer its value is to 1, the higher the within-class similarity of the sample pair. To strengthen the discrimination between positive and negative sample pairs, the negative of the cosine similarity distance metric value is added to the metric learning function as a discrimination term, so the final metric learning function is defined as:
L_end = -α · L_i(w, y, X_1, X_2) + D_w(X_1, X_2)
where α is the positive/negative sample ratio adjustment factor, through which the amounts of positive and negative samples used for learning are controlled.
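A minimal PyTorch sketch of this objective, under stated assumptions: the pair label y is encoded as +1 for same-class pairs and -1 for cross-class pairs, and coupling the cosine term to y is an assumption, since the exact form of L_i is not reproduced in the text above.

```python
import torch
import torch.nn.functional as F

def metric_learning_loss(h1, h2, y, alpha=1.0):
    """Sketch of L_end = -alpha * L_i(w, y, X1, X2) + D_w(X1, X2).

    h1, h2 : embeddings H_w(X1), H_w(X2) of a sample pair (batch x dim)
    y      : pair labels, +1 for same-class pairs, -1 otherwise (assumed encoding)
    alpha  : positive/negative sample ratio adjustment factor
    """
    d_w = torch.norm(h1 - h2, p=2, dim=1)          # Euclidean distance D_w
    l_i = y * F.cosine_similarity(h1, h2, dim=1)   # similarity term in [-1, 1]
    return (-alpha * l_i + d_w).mean()
```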
Further, in step S3, if the output probability value is less than 0.5, the learned sample pair belongs to the same class; if the output probability value is greater than 0.5, the input sample pair belongs to different classes.
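A short sketch of this decision rule; the mapping from the pair distance to a probability (here p = 1 - exp(-D_w)) is an assumption, since the description only states the 0.5 threshold.

```python
import torch

def predict_pair(h1, h2):
    # h1, h2: embeddings H_w(X1), H_w(X2) of the pair (batch x dim)
    d_w = torch.norm(h1 - h2, p=2, dim=1)
    prob = 1.0 - torch.exp(-d_w)   # assumed distance-to-probability mapping
    same_class = prob < 0.5        # < 0.5: same class; > 0.5: different classes
    return prob, same_class
```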
The beneficial effects of the present invention are: compared with the prior art, the method of the invention obtains better prediction performance on limited, high-dimensional and imbalanced data sets, remains more stable under different imbalance ratios, and obtains better prediction results with less data and time, outperforming the prior art.
Brief description of the drawings
To make the purpose, technical scheme and beneficial effects of the present invention clearer, the following drawings are provided for illustration:
Fig. 1 is the block flow diagram of the method of the invention;
Fig. 2 is the SDNN network structure;
Fig. 3 is a scatter plot of PD versus PF for the different methods on the benchmark data sets;
Fig. 4 shows the performance curves of the different methods under different imbalance ratios.
Detailed description of the embodiments
A preferred embodiment of the present invention is described in detail below with reference to the drawings.
For limited historical software defect data, the invention proposes a deep learning network architecture based on Siamese networks (Siamese Dense Neural Network, SDNN). Limited, high-dimensional and imbalanced software defect data are treated as an instance of few-sample learning: two identical fully-connected networks perform comparative learning on defective data and defect-free data respectively, and the metric learning function serves as the objective function to handle the class-imbalance problem of the data. The overall method flow is shown in Fig. 1.
The purpose of software defect prediction is to construct a data-driven prediction model from existing software defect data and to use that model to identify and predict the characteristics of unknown data, thereby reducing manual testing and software maintenance costs and improving the efficiency of defect identification and prediction. SDNN is a twin fully-connected network (the network structure is shown in Fig. 2) composed of two identical fully-connected networks; each fully-connected network receives and processes a different batch of input data. The tops of the two networks are connected by the metric learning function, which handles the learning of the imbalanced high-level deep features. The SDNN network structure has the following characteristics:
(1) SDNN draws on the advantages of the one-shot and few-sample learning of Siamese networks. Like a human learning from a single example, a Siamese network can efficiently recognise unseen objects from existing similar knowledge, which reduces computational complexity and does not require a large amount of training data or a complicated network structure.
(2) The parameter-sharing property of Siamese networks is exploited. The sub-networks of the twin network have identical structures and share all parameters, i.e. all weights, biases and training parameters are shared, which makes it convenient to update and manage the twin network parameters uniformly through the underlying network and reduces the number of parameters and the computational complexity.
(3) The similarity property of Siamese networks is drawn upon. A Siamese network is a pair of identical deep neural networks that perform similarity discrimination learning through the comparison metric function connected at the top. On the basis of the comparison metric function, the present invention strengthens the identification of positive/negative sample similarity by adding a similarity metric learning function, so that positive and negative sample pairs can be better learned and discriminated.
(4) Deep learning techniques are used to mine high-dimensional data features. Deep neural networks are currently among the best tools for feature learning and extraction on high-dimensional data; according to the attribute characteristics of software defect data, the present invention designs a twin fully-connected network to perform high-level feature learning and extraction.
As shown in Fig. 2, the structural details of the SDNN network are as follows:
(1) H_1(x_1) and H_2(x_2) in the network are a pair of identical fully-connected networks with the same number of layers and the same parameter settings; they are mainly used to learn high-dimensional data features.
(2) Input 1 and input 2 of the network represent the positive-sample and negative-sample data; few-sample learning is carried out through the SDNN to extract the high-level deep features of the sample pairs (a pair-construction sketch is given after this list).
(3) The metric learning function performs comparative learning on the learned deep features; the positive/negative sample ratio is adjusted and the function's learning parameter is set so that the metric learning function focuses more on learning the features of defective data.
(4) Performance evaluation: the performance of the designed model is fully assessed with five indicators (PD, PF, F-measure, MCC, AUC).
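The pair-construction sketch referenced in item (2): one plausible way to form the paired inputs from a labelled defect data set. The sampling strategy and the neg_pairs_per_pos parameter are illustrative assumptions rather than the procedure fixed by the description.

```python
import itertools
import random

def build_pairs(defective, non_defective, neg_pairs_per_pos=1, seed=0):
    """Return (x1, x2, y) triples; y = +1 for same-class pairs, -1 for cross-class pairs."""
    rng = random.Random(seed)
    # Same-class pairs drawn from the scarce defective class.
    pairs = [(a, b, +1) for a, b in itertools.combinations(defective, 2)]
    # Cross-class pairs, capped to control the positive/negative pair ratio.
    n_neg = neg_pairs_per_pos * max(len(pairs), 1)
    for _ in range(n_neg):
        pairs.append((rng.choice(defective), rng.choice(non_defective), -1))
    rng.shuffle(pairs)
    return pairs
```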
The metric learning function is connected at the top of the SDNN. After a pair of sample data has been learned by the twin network, the metric learning function performs comparative learning and probability output on the extracted high-level features. If the output probability value is less than 0.5, the learned sample pair belongs to the same class; if the output probability value is greater than 0.5, the input sample pair belongs to different classes. The metric learning function includes the following:
A. Distance learning: let the input vectors X_1 and X_2 denote a positive/negative sample data pair, let w denote the shared parameters of the SDNN, and let H_w(X_1) and H_w(X_2) denote the feature mappings of the input vector pair; then the distance metric function of the twin fully-connected network is defined as:
D_w(X_1, X_2) = ||H_w(X_1) - H_w(X_2)||
where D_w(X_1, X_2) is the Euclidean distance used to measure the distance between the input sample pair.
B. Similarity learning: the Euclidean distance metric alone learns the between-class distance of a sample pair but ignores its within-class distance. Therefore, on the basis of the distance metric function, the invention introduces a within-class distance learning function, namely the cosine similarity distance metric function (cosine-proximity), to reinforce within-class similarity learning of sample pairs and enhance the discrimination between positive and negative sample pairs.
In the cosine similarity distance metric function L_i(w, y, X_1, X_2), i denotes the i-th network layer, N is the total number of samples, and y^(i) is the label value of the samples at the i-th layer; L_i(w, y, X_1, X_2) takes values in [-1, 1], and the closer its value is to 1, the higher the within-class similarity of the sample pair. To strengthen the discrimination between positive and negative sample pairs, the negative of the cosine similarity distance metric value is added to the metric learning function as a discrimination term, so the final metric learning function is defined as:
L_end = -α · L_i(w, y, X_1, X_2) + D_w(X_1, X_2)
where α is the positive/negative sample ratio adjustment factor, through which the amounts of positive and negative samples used for learning are controlled.
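Tying the above together, a self-contained sketch of one training step under the same assumptions (the 21 input features correspond to the smallest attribute dimension of the data sets used below; the optimiser and learning rate are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(21, 64), nn.ReLU(), nn.Linear(64, 16))  # shared branch H_w
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(x1, x2, y, alpha=1.0):
    # x1, x2: paired module feature tensors; y: +1 / -1 pair labels (assumed encoding)
    optimizer.zero_grad()
    h1, h2 = encoder(x1), encoder(x2)              # same encoder, so weights are shared
    d_w = torch.norm(h1 - h2, p=2, dim=1)          # D_w(X1, X2)
    l_i = y * F.cosine_similarity(h1, h2, dim=1)   # L_i(w, y, X1, X2)
    loss = (-alpha * l_i + d_w).mean()             # L_end = -alpha * L_i + D_w
    loss.backward()
    optimizer.step()
    return loss.item()
```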
Specific example:
(1) Experimental data
The NASA data repository contains many different software defect data sets; ten data sets were extracted from this repository for the experimental analysis: AR1, AR4, AR6, CM1, KC1, KC2, MW1, PC1, PC3 and PC4. The extraction principle is that all experimental data come from public machine learning databases, so that the method of the invention is easy to verify and apply, and that the selected data sets share the same measurement indicators, so the experimental data can be used directly during the experiments without losing any metric.
Note that the attribute dimensions of these data sets are not uniform: the smallest dimension is 21 and the largest is 57. Their classes are also imbalanced, with the smallest imbalance ratio being 3 and the largest 12. Moreover, the number of instances in each data set is limited, the smallest being 87 and the largest 2032. Therefore, it is difficult for conventional machine learning techniques to train an effective software defect prediction model from such data.
(2) Comparison methods
SDNN_One, DNN, LSTM, DBN, LR, BAG, NB, TNB, DTB and the SDNN of the present invention are compared on each performance indicator, where DNN, LSTM and LR are recent approaches proposed since 2018; DBN is a deep-learning-based software defect prediction method proposed in 2017; NB, TNB, DTB and BAG are methods with good results proposed in recent years; and SDNN_One is included to compare the effect of the model without the similarity learning function.
(3) Evaluation indicators
PD (Probability of Detection), also called recall, measures the proportion of correctly predicted defective data among all defects: PD = TP / (TP + FN).
PF (Probability of False alarm) measures the proportion of defect-free modules predicted incorrectly among all defect-free modules: PF = FP / (FP + TN).
F-measure is a comprehensive indicator of PD and precision and is distributed between 0 and 1; the higher PD and precision are, the larger the value of F-measure.
MCC (Matthews Correlation Coefficient) is also commonly used as a comprehensive evaluation indicator for recognition results on imbalanced data sets. It is the most complex to compute and involves all elements of the confusion matrix; its value is distributed between -1 and 1, and the classifier reaches the maximum value of 1 when all samples are classified correctly.
AUC (Area Under the ROC Curve) is computed from the ROC curve, which is drawn in the ROC plane from the (PD, PF) point pairs obtained by the classifier at different thresholds; AUC is the area under this curve and integrates the performance of PD and PF well.
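For reference, a sketch of these five indicators computed from confusion-matrix counts; the AUC helper relies on scikit-learn's roc_auc_score and is only evaluated when per-sample scores are supplied.

```python
import math
from sklearn.metrics import roc_auc_score

def defect_metrics(tp, fp, tn, fn, y_true=None, y_score=None):
    """PD, PF, F-measure and MCC from confusion-matrix counts; AUC from scores if given."""
    pd_ = tp / (tp + fn) if (tp + fn) else 0.0      # probability of detection (recall)
    pf = fp / (fp + tn) if (fp + tn) else 0.0       # probability of false alarm
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f_measure = (2 * precision * pd_ / (precision + pd_)) if (precision + pd_) else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = ((tp * tn - fp * fn) / denom) if denom else 0.0
    auc = roc_auc_score(y_true, y_score) if y_true is not None else None
    return {"PD": pd_, "PF": pf, "F-measure": f_measure, "MCC": mcc, "AUC": auc}
```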
(4) Experimental results
First, the performance of the SDNN method on few-sample data is verified by comparing it with the nine baseline methods in experiments with different data amounts on the same data sets; Table 1 and Fig. 3 give the experimental results of the different methods. As can be seen from Table 1, our method obtains the best mean PD and the lowest mean PF, and Fig. 3 also demonstrates the superiority of the proposed method, which has more points distributed in the lower-right corner. The two comprehensive indicators in Table 2 again confirm the superiority of the proposed method: in terms of F-measure, the SDNN method surpasses DNN, LSTM, DBN, NB and LR on all 10 data sets; compared with SDNN_One, TNB and LR, its performance is merely comparable on only one data set; and compared with the DTB method, its performance is lower than DTB on only one of the 10 data sets.
Table 1: Average PD and PF of all methods on the experimental data sets
Table 2: Comprehensive indicators F-measure and MCC of all methods on the experimental data sets
Next, whether the SDNN method can obtain better results within an acceptable time is analysed. The average training time and testing time of each model on the 10 benchmark data sets were computed, and Table 3 gives the time cost of each model. The data in Table 3 show that the average training time and testing time of the method of the invention are not the worst, and that it can obtain better prediction results within an acceptable time range.
Table 3: Average time of all methods on the test and validation sets (in seconds)
Finally, the stability of the proposed method under different class imbalances is demonstrated. Fig. 4 gives the performance curves of each method under different imbalance ratios. It can be seen that the five methods NB, DNN, DBN, LSTM and LR are unstable on most data sets and their prediction performance is strongly affected by the imbalance ratio; the three methods SDNN_One, TNB and BAG are affected by the imbalance ratio to a lesser degree and are more stable on part of the data sets; and the SDNN method proposed herein and the DTB method remain relatively stable on all data sets, while in terms of the evaluation results the proposed SDNN method predicts better than the DTB method.
Finally, it should be noted that the preferred embodiment above is only used to illustrate the technical scheme of the present invention and not to limit it. Although the present invention has been described in detail through the above preferred embodiment, those skilled in the art should understand that various changes may be made to it in form and detail without departing from the scope defined by the claims of the present invention.

Claims (5)

1. A software defect prediction method based on few-sample data learning, characterized in that the method specifically comprises the following steps:
S1: constructing a deep learning network based on Siamese networks (Siamese Dense Neural Network, SDNN), i.e. a twin fully-connected network;
S2: inputting positive-sample and negative-sample data, performing few-sample learning through the SDNN, and extracting the high-level deep features of the sample pairs;
S3: performing comparative learning and probability output on the high-level deep features extracted in step S2 with a metric learning function, adjusting the positive/negative sample ratio and setting the learning parameter of the function so that the metric learning function focuses more on learning the features of defective data;
S4: obtaining the prediction result.
2. The software defect prediction method based on few-sample data learning according to claim 1, characterized in that, in step S1, the twin fully-connected network is a pair of identical fully-connected networks with the same structural hierarchy and parameter settings, used for learning high-dimensional data features.
3. The software defect prediction method based on few-sample data learning according to claim 1, characterized in that, in step S3, the metric learning function comprises a distance metric function and a cosine similarity distance metric function;
Distance learning: let the input vectors X_1 and X_2 denote a positive/negative sample data pair, let w denote the shared parameters of the SDNN, and let H_w(X_1) and H_w(X_2) denote the feature mappings of the input vector pair; then the distance metric function of the twin fully-connected network is defined as:
D_w(X_1, X_2) = ||H_w(X_1) - H_w(X_2)||
where D_w(X_1, X_2) is the Euclidean distance used to measure the distance between the input sample pair.
4. The software defect prediction method based on few-sample data learning according to claim 3, characterized in that, in step S3, similarity learning: a within-class distance learning function, i.e. the cosine similarity distance metric function (cosine-proximity), is introduced to reinforce within-class similarity learning of sample pairs and enhance the discrimination between positive and negative sample pairs;
In the cosine similarity distance metric function L_i(w, y, X_1, X_2), i denotes the i-th network layer, N is the total number of samples, and y^(i) is the label value of the samples at the i-th layer; L_i(w, y, X_1, X_2) takes values in [-1, 1], and the closer its value is to 1, the higher the within-class similarity of the sample pair. To strengthen the discrimination between positive and negative sample pairs, the negative of the cosine similarity distance metric value is added to the metric learning function as a discrimination term; the final metric learning function is then defined as:
L_end = -α · L_i(w, y, X_1, X_2) + D_w(X_1, X_2)
where α is the positive/negative sample ratio adjustment factor, through which the amounts of positive and negative samples used for learning are controlled.
5. The software defect prediction method based on few-sample data learning according to claim 4, characterized in that, in step S3, if the output probability value is less than 0.5, the learned sample pair belongs to the same class, and if the output probability value is greater than 0.5, the input sample pair belongs to different classes.
CN201910040317.0A 2019-01-16 2019-01-16 Software Defects Predict Methods based on the study of few sample data Pending CN109885482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910040317.0A CN109885482A (en) 2019-01-16 2019-01-16 Software Defects Predict Methods based on the study of few sample data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910040317.0A CN109885482A (en) 2019-01-16 2019-01-16 Software Defects Predict Methods based on the study of few sample data

Publications (1)

Publication Number Publication Date
CN109885482A true CN109885482A (en) 2019-06-14

Family

ID=66926138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910040317.0A Pending CN109885482A (en) 2019-01-16 2019-01-16 Software Defects Predict Methods based on the study of few sample data

Country Status (1)

Country Link
CN (1) CN109885482A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570397A (en) * 2019-08-13 2019-12-13 创新奇智(重庆)科技有限公司 Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm
CN111209211A (en) * 2020-01-16 2020-05-29 华南理工大学 Cross-project software defect prediction method based on long-term and short-term memory neural network
CN111612763A (en) * 2020-05-20 2020-09-01 重庆邮电大学 Mobile phone screen defect detection method, device and system, computer equipment and medium
CN112579477A (en) * 2021-02-26 2021-03-30 北京北大软件工程股份有限公司 Defect detection method, device and storage medium
CN112598163A (en) * 2020-12-08 2021-04-02 国网河北省电力有限公司电力科学研究院 Grounding grid trenchless corrosion prediction model based on comparison learning and measurement learning
CN112668527A (en) * 2020-12-31 2021-04-16 华南理工大学 Ultrasonic guided wave semi-supervised imaging detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596958A (en) * 2018-05-10 2018-09-28 安徽大学 A kind of method for tracking target generated based on difficult positive sample
CN109213675A (en) * 2017-07-05 2019-01-15 瞻博网络公司 Software analysis platform

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213675A (en) * 2017-07-05 2019-01-15 瞻博网络公司 Software analysis platform
CN108596958A (en) * 2018-05-10 2018-09-28 安徽大学 A kind of method for tracking target generated based on difficult positive sample

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LINCHANG ZHAO et al.: "Siamese Dense Neural Network for Software Defect Prediction With Small Data", IEEE ACCESS *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570397A (en) * 2019-08-13 2019-12-13 创新奇智(重庆)科技有限公司 Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm
CN110570397B (en) * 2019-08-13 2020-12-04 创新奇智(重庆)科技有限公司 Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm
CN111209211A (en) * 2020-01-16 2020-05-29 华南理工大学 Cross-project software defect prediction method based on long-term and short-term memory neural network
CN111612763A (en) * 2020-05-20 2020-09-01 重庆邮电大学 Mobile phone screen defect detection method, device and system, computer equipment and medium
CN111612763B (en) * 2020-05-20 2022-06-03 重庆邮电大学 Mobile phone screen defect detection method, device and system, computer equipment and medium
CN112598163A (en) * 2020-12-08 2021-04-02 国网河北省电力有限公司电力科学研究院 Grounding grid trenchless corrosion prediction model based on comparison learning and measurement learning
CN112598163B (en) * 2020-12-08 2022-11-22 国网河北省电力有限公司电力科学研究院 Grounding grid trenchless corrosion prediction model based on comparison learning and measurement learning
CN112668527A (en) * 2020-12-31 2021-04-16 华南理工大学 Ultrasonic guided wave semi-supervised imaging detection method
CN112579477A (en) * 2021-02-26 2021-03-30 北京北大软件工程股份有限公司 Defect detection method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190614

RJ01 Rejection of invention patent application after publication