CN105335763A - Fabric defect classification method based on improved extreme learning machine - Google Patents

Fabric defect classification method based on improved extreme learning machine

Info

Publication number
CN105335763A
Authority
CN
China
Prior art keywords
node
elm
online
hidden
learning machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510894453.8A
Other languages
Chinese (zh)
Inventor
马强
陈亮
任正云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
National Dong Hwa University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201510894453.8A priority Critical patent/CN105335763A/en
Publication of CN105335763A publication Critical patent/CN105335763A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The invention relates to a fabric defect classification method based on an improved extreme learning machine. The method comprises the following steps: the defect images of the training samples are preprocessed; an adaptive wavelet is constructed and used for decomposition to detect fabric defects, and features are extracted by a multi-feature fusion method to obtain defect features; the defect features are classified, and during classification an online ELM algorithm is introduced and the hidden nodes are pruned online by a sensitivity analysis method. The method overcomes the shortcomings of the batch processing mode of the standard ELM on large data sets, reduces the dependence of algorithm performance on the number of hidden nodes, and prunes the hidden nodes on the basis of sensitivity analysis.

Description

Fabric defect classification method based on an improved extreme learning machine
Technical field
The present invention relates to the technical field of fabric defect classification, and in particular to a fabric defect classification method based on an improved extreme learning machine (ELM).
Background technology
The ELM algorithm was proposed on the basis of single-hidden-layer feedforward neural networks (SLFNs). An SLFN is a special class of feedforward neural network that contains only one hidden layer. The well-known universal approximation theorem shows that a single-hidden-layer feedforward network can approximate any given continuous function with arbitrary accuracy. With this theoretical guarantee, SLFNs have been studied extensively, both in theoretical analysis and in engineering applications.
Compared with traditional neural networks, ELM has many advantages: it learns fast and generalizes well, since the ELM algorithm was proposed precisely to improve the training speed and generalization ability of single-hidden-layer feedforward networks (SLFNs). Traditional learning algorithms are designed on the gradient-descent principle and suffer from slow learning and a tendency to get trapped in local minima, whereas ELM is based on the least-squares method: the network parameters are determined by a single matrix operation, so learning is fast, and ELM has been successfully applied to a large number of regression and classification problems.
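For readers less familiar with ELM, the following minimal NumPy sketch (not part of the patent; the function names, the sigmoid activation and the uniform weight initialization are illustrative assumptions) shows the batch training described above: random hidden-layer parameters followed by a single least-squares solve for the output weights.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Batch ELM training: random hidden layer + least-squares output weights.

    X: (N, n_in) training inputs, T: (N, n_out) one-hot targets.
    Returns (W, b, beta) so that predictions are sigmoid(X @ W + b) @ beta.
    """
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)            # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))               # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                          # minimum-norm least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The single pseudo-inverse solve is what gives ELM its training-speed advantage over gradient-based learning.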
In artificial neural network learning, two modes are commonly used: batch learning and online learning. ELM adopts batch learning, that is, all training data are processed together and added to the network at once, with a single corresponding adjustment of the weight vector. When handling large-scale or real-time data sets, this learning mode shows no real-time capability: training takes long and the results are unsatisfactory. The online sequential extreme learning machine (OS-ELM) adopts the online learning mode instead: data are added to the network for learning as they arrive in sequence, and data that have already been learned are discarded rather than learned repeatedly, which works well on large-scale and real-time data sets.
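The online mode can be illustrated with the standard OS-ELM recursive least-squares update, sketched below under the same assumptions as the previous snippet (sigmoid hidden layer, NumPy); the initialization chunk is assumed large enough that H0 has full column rank. This is a generic OS-ELM sketch, not the patent's exact formulation.

```python
import numpy as np

def hidden_output(X, W, b):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def oselm_init(X0, T0, W, b):
    """Initialization phase on a first data chunk (X0, T0)."""
    H0 = hidden_output(X0, W, b)
    P = np.linalg.inv(H0.T @ H0)          # assumes H0 has full column rank
    beta = P @ H0.T @ T0
    return beta, P

def oselm_update(beta, P, Xk, Tk, W, b):
    """Sequential phase: fold in a new chunk (Xk, Tk), then discard it."""
    Hk = hidden_output(Xk, W, b)
    S = np.linalg.inv(np.eye(Hk.shape[0]) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P         # recursive update of the inverse Gram matrix
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
    return beta, P
```

Each chunk is processed once and then discarded, which is what keeps the memory footprint constant on large or streaming data sets.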
In addition, the network structure is a principal factor limiting the real-time performance of ELM. In single-hidden-layer feedforward network learning, the number of hidden units is directly related to the generalization ability of the network: too many or too few hidden units both lead to large training errors and poor generalization. How to determine a suitable number of hidden units so as to reach a satisfactory result has therefore become a key problem in research on the ELM algorithm.
Summary of the invention
The technical problem to be solved by the present invention is to provide a fabric defect classification method based on an improved extreme learning machine that can perform defect detection on various plain-weave fabric defect images.
The technical solution adopted by the present invention to solve the above technical problem is a fabric defect classification method based on an improved extreme learning machine, comprising the following steps:
(1) preprocessing the defect images of the training samples;
(2) constructing an adaptive wavelet, performing decomposition to detect fabric defects, and extracting features by a multi-feature fusion method to obtain defect features;
(3) classifying the defect features, wherein an online ELM algorithm is introduced during classification and the hidden nodes are pruned online by a sensitivity analysis method.
In step (3), regularization is applied to the introduced online ELM algorithm.
The online ELM algorithm introduced in step (3) defines the sensitivity of the training residual with respect to each hidden node from the node's output and its corresponding output-layer weights, sorts the hidden nodes accordingly, computes the network scale fitness, and retains the specified number of hidden nodes with the highest sensitivity; the input-layer weights and biases of the deleted nodes are evenly redistributed, by superposition, to the retained nodes.
The preprocessing in step (1) comprises homomorphic filtering and histogram equalization.
Beneficial effects
Owing to the adoption of the above technical solution, the present invention has the following advantages and positive effects compared with the prior art: the present invention adopts an online ELM algorithm, which remedies the deficiency of the batch processing mode of ELM when handling large blocks of data; to improve the generalization ability of the algorithm and reduce the dependence of its performance on the number of hidden nodes, the hidden nodes are pruned on the basis of sensitivity analysis; and to ensure that the algorithm achieves good structural risk and empirical risk at the same time, the improved algorithm is regularized, so that fabric defects can be detected accurately and in real time.
Brief description of the drawings
Fig. 1 shows the prediction results of the SAOS-ELM algorithm;
Fig. 2 shows the prediction results of the RSAOS-ELM algorithm.
Detailed description of the embodiments
The present invention is further described below in conjunction with a specific embodiment. It should be understood that this embodiment is intended only to illustrate the invention and not to limit its scope. In addition, it should be understood that, after reading the teachings of the present invention, those skilled in the art may make various changes or modifications to the invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
To verify the effectiveness of the invention, the present embodiment performs defect detection on eight kinds of plain-weave fabric defect images, including knots, broken ends, oil stains, neps, missing ends, hanging ends, and looped wefts. Each defect class has 50 images, of which 20 are used as training samples and 30 as test samples, and the results are compared with ELM, OS-ELM, and SAOS-ELM to verify the performance of the algorithm.
The specific steps of this embodiment are as follows:
(1) Image preprocessing. The defect images of the training samples are processed by homomorphic filtering, histogram equalization, and similar operations.
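A possible implementation of this preprocessing step is sketched below; the Gaussian high-frequency-emphasis transfer function and its parameters (gamma_l, gamma_h, c, d0), as well as the 256-bin equalization, are illustrative assumptions, since the embodiment does not fix them.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Homomorphic filtering: log -> FFT -> high-frequency emphasis -> IFFT -> exp."""
    img = img.astype(np.float64) + 1.0
    log_img = np.log(img)
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    D2 = u[:, None] ** 2 + v[None, :] ** 2                     # squared distance from centre
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    out = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.exp(out) - 1.0

def hist_equalize(img, n_bins=256):
    """Histogram equalization of a grayscale image to the range [0, 255]."""
    img = img.astype(np.float64)
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), edges[:-1], cdf * 255.0).reshape(img.shape)
```

Homomorphic filtering suppresses the slowly varying illumination component while boosting texture detail, so the subsequent equalization and wavelet analysis see a more uniform fabric background.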
(2) Fabric defect detection and feature extraction. An adaptive wavelet is constructed and used for decomposition to detect fabric defects, and features are extracted by a multi-feature fusion method to obtain the defect features.
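The patent constructs an adaptive wavelet; as a simplified stand-in, the sketch below uses a fixed Daubechies wavelet from the PyWavelets package and fuses simple sub-band statistics (energy, mean, standard deviation) into a single feature vector. The wavelet choice, decomposition level and statistics are assumptions made only for illustration.

```python
import numpy as np
import pywt

def defect_features(img, wavelet="db2", level=2):
    """Multi-feature fusion from a 2-D wavelet decomposition of a defect image."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    feats = []
    # coeffs[0] is the approximation band; the rest are (cH, cV, cD) detail tuples
    for band in [coeffs[0]] + [c for detail in coeffs[1:] for c in detail]:
        feats.extend([
            np.sum(band ** 2) / band.size,   # mean energy of the sub-band
            band.mean(),                      # mean coefficient value
            band.std(),                       # standard deviation of the coefficients
        ])
    return np.asarray(feats)
```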
(3) Classifier design. The defect features are classified; during classification an online ELM algorithm is introduced, and the hidden nodes are pruned online by a sensitivity analysis method.
The introduced online ELM algorithm defines the sensitivity of the training residual with respect to each hidden node from the node's output and its corresponding output-layer weights, sorts the hidden nodes accordingly, computes the network scale fitness, and retains the specified number of hidden nodes with the highest sensitivity; the input-layer weights and biases of the deleted nodes are evenly redistributed, by superposition, to the retained nodes.
The details are as follows.
Definition of the sensitivity and the network scale fitness:
Let $k_{ij} = g_j(x_i) = g(\omega_j \cdot x_i + b_j)$, $i = 1, 2, \dots, N$, where $\omega_j$ denotes the input-layer weight vector of the $j$-th hidden node, $x_i$ the $i$-th input sample, $b_j$ the bias of the $j$-th hidden node, and $\beta_j$ the output-layer weight vector corresponding to the $j$-th hidden node. The network output for the $i$-th sample is
$t_i = k_{i1}\beta_1 + k_{i2}\beta_2 + \dots + k_{iL}\beta_L \qquad (1)$
Suppose node $j = 1$ is deleted. The network output becomes
$t_i' = k_{i2}\beta_2 + k_{i3}\beta_3 + \dots + k_{iL}\beta_L \qquad (2)$
From Eqs. (1) and (2), the residual is
$\|t_i - t_i'\| = \|k_{i1}\beta_1\| = |k_{i1}|\,\|\beta_1\| \qquad (3)$
Equation (3) shows that, for the $i$-th sample, the network output error caused by removing the first hidden node is the product of the absolute value of that node's output and the norm of the corresponding output-layer weight vector. The sensitivity of the learning residual with respect to the $j$-th hidden node is therefore defined as
$S_j = \frac{1}{N}\sum_{i=1}^{N} |k_{ij}|\,\|\beta_j\| \qquad (4)$
where $N$ is the number of training samples, $k_{ij}$ is the output of the $j$-th hidden node for the $i$-th sample, and $\beta_j$ is the output-layer weight vector of the $j$-th hidden node. A larger $S_j$ means that the $j$-th node is more important for the learning residual, so the hidden nodes can be sorted by sensitivity:
$S_1' \ge S_2' \ge S_3' \ge \dots \ge S_L' \qquad (5)$
The more hidden nodes are deleted, and the more important the deleted nodes are, the larger the learning residual becomes. The network scale fitness $M_k$ can therefore be defined from the sensitivities as
$M_k = \dfrac{\sum_{j=1}^{k} S_j'}{\sum_{j=1}^{L} S_j'}, \quad 1 \le k \le L \qquad (6)$
The larger $M_k$ is, the larger the retained network and the smaller the learning residual. The network size matched to the learning samples can thus be defined through the network scale fitness as
$M = \min\{\, k \mid M_k \ge \gamma,\ 1 \le k \le L \,\} \qquad (7)$
where $\gamma$ ($0 \le \gamma \le 1$) is the network scale fitness threshold, $M$ is the number of hidden nodes retained by the network, $L$ is the total number of hidden nodes, and $L - M$ is the number of redundant nodes in the network. The choice of $\gamma$ depends on the problem, but since the performance of ELM is not very sensitive to the number of hidden nodes, $\gamma$ does not need to be chosen very precisely and can be determined by trial, increasing or decreasing it as needed. To prevent sample information from being lost when nodes are deleted, the input weights of the retained nodes need to be further updated.
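Equations (4) to (7) can be computed directly from the hidden-layer output matrix H (N x L) and the output weight matrix beta (L x m) of an already trained network, as in the sketch below; the variable names and the default threshold value are illustrative.

```python
import numpy as np

def node_sensitivities(H, beta):
    """Eq. (4): S_j = (1/N) * sum_i |k_ij| * ||beta_j||."""
    return np.mean(np.abs(H), axis=0) * np.linalg.norm(beta, axis=1)

def select_nodes(H, beta, gamma=0.95):
    """Eqs. (5)-(7): sort nodes by sensitivity, keep the smallest M with M_k >= gamma."""
    S = node_sensitivities(H, beta)
    order = np.argsort(S)[::-1]                   # node indices, decreasing sensitivity
    S_sorted = S[order]
    M_k = np.cumsum(S_sorted) / S_sorted.sum()    # network scale fitness, Eq. (6)
    M = int(np.argmax(M_k >= gamma)) + 1          # Eq. (7): smallest k with M_k >= gamma
    keep_idx, drop_idx = order[:M], order[M:]
    return keep_idx, drop_idx
```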
Sensitivity-based weight update:
According to the sensitivities $S_j$, the $L$ hidden nodes are sorted as
$S_{j_1}^{1} \ge S_{j_2}^{2} \ge \dots \ge S_{j_M}^{M} \ge S_{j_{M+1}}^{M+1} \ge \dots \ge S_{j_L}^{L} \qquad (8)$
where the subscript is the index of a hidden node before sorting and the superscript is its rank after sorting. According to Eqs. (6) and (7), the first $M$ hidden nodes are retained and the last $L - M$ are deleted. Let $R = \{j_1, j_2, \dots, j_M\}$ be the set of retained nodes and $D = \{j_{M+1}, j_{M+2}, \dots, j_L\} = \{d_1, d_2, \dots, d_{L-M}\}$ the set of deleted nodes. To preserve the sample information carried by the deleted nodes while discarding over-fitting information, the input-layer weights of the retained nodes are updated by horizontally averaging and propagating the weights of the deleted nodes:
$\omega_{j_l,i}^{new} = \omega_{j_l,i}^{old} + \dfrac{1}{L-M}\sum_{k=1}^{L-M}\omega_{d_k,i} \qquad (9)$
where $1 \le i \le n$ and $1 \le l \le M$; $\omega_{j_l,i}^{old}$ and $\omega_{j_l,i}^{new}$ are the connection weights between the $i$-th input node and the $j_l$-th retained node before and after the update, and $\omega_{d_k,i}$ is the connection weight between the $i$-th input node and the $d_k$-th deleted node. After the redundant nodes are deleted, the output-layer weights of the network are obtained as the minimum-norm least-squares solution of the (generally inconsistent) linear system:
$\beta' = (H')^{+} T \qquad (10)$
where $H'$ is the hidden-layer output matrix of the network after node deletion and $\beta'$ is the new output-layer weight matrix. From the above analysis, the OS-ELM algorithm based on sensitivity analysis is obtained as follows:
(1) Construct a relatively large SLFN with $L < N$ hidden nodes; denote the hidden-layer output matrix by $H$ and the output-layer weight matrix by $\beta$; set the network scale fitness threshold $\gamma$; go to (2).
(2) From $H$ and $\beta$, compute the sensitivity $S_j$ ($1 \le j \le L$) of the learning residual with respect to the $j$-th hidden node by Eq. (4), and sort the hidden nodes as in Eq. (8), where the subscript is the index of a hidden node before sorting and the superscript is its rank after sorting; go to (3).
(3) From the residual sensitivities $S_j$, compute the network scale fitness $M_k$ ($1 \le k \le L$) by Eq. (6); go to (4).
(4) From $M_k$ and $\gamma$, select the nodes to be deleted by Eqs. (7) and (8); go to (5).
(5) Update the input weights of the retained hidden nodes according to Eq. (9).
(6) Compute the hidden-layer output matrix $H'$ of the pruned network and the output-layer weights $\beta'$ according to Eq. (10). By Eqs. (7) and (8), the deleted nodes are of relatively low importance, so, while still meeting the training accuracy requirement, the SA-ELM algorithm can obtain a neural network with a more compact structure. A minimal sketch of this pruning procedure is given below.
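Continuing the sketch, the pruning step of Eqs. (9) and (10) can be written as follows, given the index sets of retained and deleted nodes (for example as returned by a selection routine like select_nodes above). The sigmoid activation and the treatment of the biases by the same averaging rule as the weights are assumptions.

```python
import numpy as np

def prune_and_resolve(W, b, X, T, keep_idx, drop_idx):
    """Eqs. (9)-(10): redistribute deleted-node input weights, then re-solve beta.

    W: (n_in, L) input weights, b: (L,) biases, X: (N, n_in) samples, T: (N, m) targets.
    """
    W_keep = W[:, keep_idx].copy()
    b_keep = b[keep_idx].copy()
    if len(drop_idx) > 0:
        # Eq. (9): add the average of the deleted nodes' weights to every retained node
        W_keep += W[:, drop_idx].mean(axis=1, keepdims=True)
        b_keep += b[drop_idx].mean()             # biases handled the same way (assumption)
    H_new = 1.0 / (1.0 + np.exp(-(X @ W_keep + b_keep)))   # pruned hidden-layer output H'
    beta_new = np.linalg.pinv(H_new) @ T                    # Eq. (10): beta' = (H')^+ T
    return W_keep, b_keep, beta_new
```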
To ensure that the algorithm achieves good structural risk and empirical risk at the same time, regularization is applied to the introduced online ELM algorithm.
From a statistical point of view, the actual risk consists of the empirical risk and the structural risk. A network with good generalization ability should balance these two risks and reach the best trade-off between them. Therefore, to improve the stability of SAOS-ELM, a regularization factor is introduced into the computation of the weight vector, and the RSAOS-ELM algorithm is proposed on the basis of the R-ELM algorithm. The basic idea of RSAOS-ELM is similar to that of RELM and can be explained as follows.
The mathematical model of RELM can be expressed as
$\arg\min_{\beta} E(\beta) = \arg\min_{\beta}\left(\tfrac{1}{2}\|\beta\|^2 + \tfrac{\gamma}{2}\|\varepsilon\|^2\right),$
$\text{s.t. } \sum_{i=1}^{\tilde N}\beta_i\, g(a_i \cdot x_j + b_i) - t_j = \varepsilon_j, \quad j = 1, 2, \dots, N \qquad (11)$
where $\|\varepsilon\|^2$, the sum of squared errors, represents the empirical risk; $\|\beta\|^2$ represents the structural risk, with $\beta$ the network output weights; and $\gamma$ is the parameter that balances the two risks, determined by cross-validation so as to reach the best trade-off between them. To improve the robustness and noise immunity of the algorithm, the errors of different samples are weighted, so that $\|\varepsilon\|^2$ is extended to $\|D\varepsilon\|^2$, where $D$ is the diagonal matrix of error weights:
$D = \operatorname{diag}(v_1, v_2, v_3, \dots, v_N)$
Equation (11) can then be written as
$\arg\min_{\beta} E(\beta) = \arg\min_{\beta}\left(\tfrac{1}{2}\|\beta\|^2 + \tfrac{\gamma}{2}\|D\varepsilon\|^2\right),$
$\text{s.t. } \sum_{i=1}^{\tilde N}\beta_i\, g(a_i \cdot x_j + b_i) - t_j = \varepsilon_j, \quad j = 1, 2, \dots, N \qquad (12)$
Equation (12) is a constrained extremum problem, which can be converted into an unconstrained one and solved by means of the Lagrangian
$L(\beta, \varepsilon, \alpha) = \tfrac{\gamma}{2}\|D\varepsilon\|^2 + \tfrac{1}{2}\|\beta\|^2 - \sum_{j=1}^{N}\alpha_j\left(\sum_{i=1}^{\tilde N}\beta_i\, g(a_i \cdot x_j + b_i) - t_j - \varepsilon_j\right) = \tfrac{\gamma}{2}\|D\varepsilon\|^2 + \tfrac{1}{2}\|\beta\|^2 - \alpha(H\beta - T - \varepsilon) \qquad (13)$
where $\alpha = [\alpha_1, \alpha_2, \dots, \alpha_N]$, $\alpha_j \in \mathbb{R}^m$ ($j = 1, 2, \dots, N$), denotes the Lagrange multipliers. Setting the gradient of the Lagrangian to zero,
$\dfrac{\partial L}{\partial \beta} = 0, \quad \dfrac{\partial L}{\partial \varepsilon} = 0, \quad \dfrac{\partial L}{\partial \alpha} = 0 \qquad (14)$
gives
$\beta^{T} = \alpha H \ \ (a), \qquad \gamma \varepsilon^{T} D^{2} + \alpha = 0 \ \ (b), \qquad H\beta - T - \varepsilon = 0 \ \ (c) \qquad (15)$
Substituting (c) into (b) gives
$\alpha = -\gamma\,(H\beta - T)^{T} D^{2} \qquad (16)$
and substituting (16) into (a) gives
$\beta = \left(\tfrac{1}{\gamma} I + H^{T} D^{2} H\right)^{+} H^{T} D^{2} T \qquad (17)$
As can be seen from Eq. (17), the expression contains only one matrix inversion, so $\beta$ can be computed quickly.
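Equation (17) translates into a one-line regularized, error-weighted least-squares solve, sketched below; gamma and the per-sample error weights forming D are assumed to be chosen beforehand (for example by cross-validation, as stated above).

```python
import numpy as np

def regularized_beta(H, T, gamma, sample_weights=None):
    """Eq. (17): beta = (I/gamma + H^T D^2 H)^+ H^T D^2 T."""
    n_hidden = H.shape[1]
    if sample_weights is None:
        sample_weights = np.ones(H.shape[0])
    D2 = np.diag(np.asarray(sample_weights, dtype=float) ** 2)   # D^2, diagonal error weights
    A = np.eye(n_hidden) / gamma + H.T @ D2 @ H
    return np.linalg.pinv(A) @ H.T @ D2 @ T
```

The identity term keeps the system well conditioned even when H has nearly dependent columns, which is where the gain in stability over the unregularized solution comes from.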
In the classifier design, to ensure the reliability of the results, all reported classification results are averages over 50 test runs; the experimental results are shown in Tables 1 to 4.
Table 1: Test-set classification accuracy of the ELM method
Table 2: Test-set classification accuracy of the OS-ELM method
Table 3: Test-set classification accuracy of the SAOS-ELM method
Table 4: Test-set classification accuracy of the RSAOS-ELM method
(4) Linear regression. To verify the robustness of the proposed algorithm, that is, to illustrate the effect of regularization, linear regression is performed on a noisy fabric defect data set; the results are shown in Fig. 1 and Fig. 2.
The experiments show the following.
Learning time: the SAOS-ELM algorithm has the shortest learning time, mainly because the hidden nodes are reduced, which speeds up learning; it can also be concluded that, because redundant hidden nodes are pruned, the generalization ability is improved at the same time.
Classification accuracy: SAOS-ELM and RSAOS-ELM achieve the best accuracy, which confirms the soundness of the proposed algorithms.
Network structure: the classifier designed with the SAOS-ELM algorithm prunes the hidden-layer nodes, so its generalization ability is improved and relatively good classification results are obtained within a shorter classification time. Because the SAOS-ELM and RSAOS-ELM algorithms first generate and then select the network structure, the resulting network is relatively compact and achieves good classification results with fewer hidden nodes, which improves generalization.
Robustness: as can be seen from Fig. 1 and Fig. 2, when the noisy fabric data set is analysed, the regression prediction accuracy of the RSAOS-ELM algorithm is higher than that of SAOS-ELM. This shows that the regularized algorithm has a certain degree of noise immunity and that the stability of the algorithm is guaranteed.

Claims (4)

1. A fabric defect classification method based on an improved extreme learning machine, characterized by comprising the following steps:
(1) preprocessing the defect images of the training samples;
(2) constructing an adaptive wavelet, performing decomposition to detect fabric defects, and extracting features by a multi-feature fusion method to obtain defect features;
(3) classifying the defect features, wherein an online ELM algorithm is introduced during classification and the hidden nodes are pruned online by a sensitivity analysis method.
2. The fabric defect classification method based on an improved extreme learning machine according to claim 1, characterized in that in step (3) regularization is applied to the introduced online ELM algorithm.
3. The fabric defect classification method based on an improved extreme learning machine according to claim 1, characterized in that the online ELM algorithm introduced in step (3) defines the sensitivity of the training residual with respect to each hidden node from the node's output and its corresponding output-layer weights, sorts the hidden nodes accordingly, computes the network scale fitness, and retains the specified number of hidden nodes with the highest sensitivity; and the input-layer weights and biases of the deleted nodes are evenly redistributed, by superposition, to the retained nodes.
4. The fabric defect classification method based on an improved extreme learning machine according to claim 1, characterized in that the preprocessing in step (1) comprises homomorphic filtering and histogram equalization.
CN201510894453.8A 2015-12-07 2015-12-07 Fabric defect classification method based on improved extreme learning machine Pending CN105335763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510894453.8A CN105335763A (en) 2015-12-07 2015-12-07 Fabric defect classification method based on improved extreme learning machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510894453.8A CN105335763A (en) 2015-12-07 2015-12-07 Fabric defect classification method based on improved extreme learning machine

Publications (1)

Publication Number Publication Date
CN105335763A true CN105335763A (en) 2016-02-17

Family

ID=55286278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510894453.8A Pending CN105335763A (en) 2015-12-07 2015-12-07 Fabric defect classification method based on improved extreme learning machine

Country Status (1)

Country Link
CN (1) CN105335763A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140187988A1 (en) * 2010-03-15 2014-07-03 Nanyang Technological University Method of predicting acute cardiopulmonary events and survivability of a patient
CN103914711A (en) * 2014-03-26 2014-07-09 Institute of Computing Technology, Chinese Academy of Sciences Improved extreme learning machine model and pattern classification method using the same
CN104537391A (en) * 2014-12-23 2015-04-22 Tianjin University Meta-learning method for extreme learning machines
CN104616030A (en) * 2015-01-21 2015-05-13 Beijing University of Technology Recognition method based on the extreme learning machine algorithm

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LI Fanjun et al., "Pruning algorithm for ELM based on sensitivity analysis" (基于灵敏度分析法的ELM剪枝算法), Control and Decision (控制与决策) *
DU Zhanlong et al., "Improved sensitivity-based pruning extreme learning machine" (改进的灵敏度剪枝极限学习机), Control and Decision (控制与决策) *
DENG Wanyu et al., "Research on extreme learning of neural networks" (神经网络极速学习方法研究), Chinese Journal of Computers (计算机学报) *
MA Qiang et al., "Research on fabric defect detection based on multi-feature fusion" (基于多特征融合的织物瑕疵检测研究), Image and Multimedia (图像与多媒体) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341499A (en) * 2017-05-26 2017-11-10 Kunming University of Science and Technology Fabric defect detection and classification method based on unsupervised segmentation and ELM
CN107341499B (en) * 2017-05-26 2021-01-05 Kunming University of Science and Technology Fabric defect detection and classification method based on unsupervised segmentation and ELM
CN109145706A (en) * 2018-06-19 2019-01-04 Xuzhou Medical University Sensitive feature selection and dimensionality reduction method for vibration signal analysis
CN109816631A (en) * 2018-12-25 2019-05-28 Hohai University Image segmentation method based on a new cost function
CN111260614A (en) * 2020-01-13 2020-06-09 South China University of Technology Convolutional neural network fabric defect detection method based on an extreme learning machine

Similar Documents

Publication Publication Date Title
WO2020125668A1 Method and system for automatically identifying surrounding rock level by applying while-drilling parameters
CN103226741B Pipe burst prediction method for public water supply networks
CN110542819B Transformer fault type diagnosis method based on semi-supervised DBNC
CN110097755A Freeway traffic flow state identification method based on deep neural networks
CN109948647A Electrocardiogram classification method and system based on a deep residual network
CN109002845A Fine-grained image classification method based on deep convolutional neural networks
CN106897821A Feature selection method and device for transient state assessment
CN109697469A Self-learning small-sample remote sensing image classification method based on consistency constraints
CN104155574A Power distribution network fault classification method based on adaptive neuro-fuzzy inference system
CN104050242A Feature selection and classification method and device based on the maximum information coefficient
CN109165743A Semi-supervised network representation learning algorithm based on a deep compression autoencoder
CN105184316A Support vector machine power grid business classification method based on feature weight learning
CN105335763A Fabric defect classification method based on improved extreme learning machine
CN109271374A Database health scoring method and scoring system based on machine learning
CN110363349A LSTM neural network hydrological forecasting method and system based on ASCS
CN110363230A Stacking-ensemble sewage treatment fault diagnosis method based on weighted base classifiers
CN110689069A Transformer fault type diagnosis method based on semi-supervised BP network
CN113901977A Deep learning-based power consumer electricity stealing identification method and system
CN103886030B Cost-sensitive decision tree based data classification method for cyber-physical fusion systems
CN105893876A Chip hardware Trojan detection method and system
CN110008853A Pedestrian detection network and model training method, detection method, medium, and device
CN110009030A Sewage treatment fault diagnosis method based on a stacking meta-learning strategy
CN110941902A Lightning stroke fault early warning method and system for power transmission lines
CN112150304A Power grid operating state trajectory stability pre-judgment method, system, and storage medium
Chu et al. Co-training based on semi-supervised ensemble classification approach for multi-label data stream

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160217

RJ01 Rejection of invention patent application after publication