CN113177563B - Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine - Google Patents

Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine

Info

Publication number
CN113177563B
CN113177563B (application CN202110494529.3A)
Authority
CN
China
Prior art keywords
formula
model
generation
learning machine
extreme learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110494529.3A
Other languages
Chinese (zh)
Other versions
CN113177563A (en)
Inventor
崔欣
杨婷婷
雷世怡
吴雨豪
林子越
赵浩冰
周子云
金兢
夏娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Xingbei Intelligent Control Technology Co ltd
Original Assignee
Anhui Shuaier Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Shuaier Information Technology Co ltd filed Critical Anhui Shuaier Information Technology Co ltd
Priority to CN202110494529.3A priority Critical patent/CN113177563B/en
Publication of CN113177563A publication Critical patent/CN113177563A/en
Application granted granted Critical
Publication of CN113177563B publication Critical patent/CN113177563B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a post-chip anomaly detection method integrating a CMA-ES algorithm and a sequential extreme learning machine, which comprises the following steps: 1, obtaining a sample training set from a database of normally soldered PCBs; 2, carrying out image enhancement on the training set to obtain an enhanced training set; 3, extracting training-set features by Haar transform; 4, constructing a sequential extreme learning machine model using a single-hidden-layer feedforward neural network; 5, completing the initial training of the model; 6, obtaining the optimal parameters of the sequential extreme learning machine using the CMA-ES algorithm; 7, designing a post-placement anomaly detection algorithm based on the sequential extreme learning machine model; 8, performing online training of the sequential extreme learning machine model; 9, using the model to detect whether a patch anomaly has occurred. The method can effectively detect anomalies after surface mounting, has good accuracy and real-time performance, requires no additional auxiliary information, is suitable for anomaly detection after SMT placement, can be widely applied on SMT production lines, and has broad application prospects.

Description

Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a post-chip anomaly detection method integrating a CMA-ES algorithm and a sequential extreme learning machine.
Background
With the development of the semiconductor industry, embedded systems based on PCB circuits are widely used, and a post-patch anomaly detection method that can ensure the normal operation of the whole embedded system has great research value. Post-patch anomaly detection aims to detect whether phenomena such as missed placements, flying (thrown) components, and skewed placements have occurred before the PCB is put into production.
The main idea of post-patch anomaly detection is to inspect the circuit board in real time after placement so as to know its placement condition. Existing vision-based solutions for this task are generally divided into two modules: feature-vector extraction and classifier design. The features commonly used for post-patch anomaly detection are (1) Haar-like features and (2) HOG features; common classification methods include hand-designed shallow approaches such as Adaboost ensemble learning and deep convolutional neural networks (CNN). These methods are targeted but also limited and lack robustness: their generalization ability is often weak, a model cannot be modified or optimized once it has been built, computation is slow, and some salient features are lost as the network deepens layer by layer. For a patch anomaly detection system with high requirements on accuracy and detection speed, these disadvantages are significant.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a post-SMT anomaly detection method which integrates a CMA-ES algorithm and a sequential extreme learning machine, so that the post-SMT anomaly detection can be completed by a reinforcement learning method, the detection result is more accurate, and the detection efficiency is higher.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a post-chip anomaly detection method integrating a CMA-ES algorithm and a sequential extreme learning machine, which is characterized by comprising the following steps of:
step 1, obtaining a gray level histogram of N normally welded PCBs from a normally welded PCB database and forming a sample training set;
step 2, carrying out image enhancement processing on the training set to obtain an enhanced training set;
2.1, converting the N gray level histograms in the training set into a uniform distribution map;
step 2.2, modifying the gray value of each pixel in the uniform distribution map by utilizing a gray stretching algorithm to obtain an enhanced training set;
step 3, utilizing Haar transformation to extract features of the enhanced training set to obtain the difference between pixel sums of white areas and black areas of the images in the training set, and using the difference as a feature vector of the training set;
step 4, constructing a single hidden layer feedforward neural network and initializing a sequential extreme learning machine model;
step 4.1, taking a Sigmoid function as a hidden layer activation function;
step 4.2, randomly generating input weights {W_i | i = 1, 2, …, L} and hidden-layer biases {b_i | i = 1, 2, …, L} to determine the input-output relationship; wherein W_i represents the i-th input weight and b_i represents the i-th hidden-layer bias;
step 5, performing initialization training on the sequential extreme learning machine model;
according to the feature vectors of the training set, an initial neuron matrix H and an output vector T are constructed, and the least-squares solution of Hβ = T is solved to obtain the model parameters {β_i | i = 1, 2, …, L}, where β_i represents the i-th model parameter and L represents the total number of model parameters, thereby completing the initial training of the model;
step 6, performing iterative optimization on the initialized sequential extreme learning machine model by utilizing a CMA-ES algorithm so as to obtain the optimal parameters of the sequential extreme learning machine model;
step 6.1, setting the minimum iteration number as eta, defining the current iteration number as g, and initializing g =1;
utilizing the input weights {W_i | i = 1, 2, …, L} and hidden-layer biases {b_i | i = 1, 2, …, L} of the initialized sequential extreme learning machine model obtained in step 5 to form the g-th generation population;
taking the model parameters {β_i | i = 1, 2, …, L} as the model parameters {β_i^(g) | i = 1, 2, …, L} corresponding to the g-th generation population, where β_i^(g) represents the model parameter corresponding to the i-th individual of the g-th generation population; and taking the initial neuron matrix H as the g-th generation neuron matrix H^(g);
Step 6.2, calculating the fitness of the ith individual in the population of the g generation by using the formula (1)
Figure GDA0003763343410000023
Thus obtaining the fitness of each individual of the g-th generation population, and ranking each individual according to the fitness from high to low:
Figure GDA0003763343410000024
in the formula (1), the reaction mixture is,
Figure GDA0003763343410000025
is the average output vector;
Figure GDA0003763343410000026
is a model parameter corresponding to the ith individual in the g generation population
Figure GDA0003763343410000027
Calculating a predicted feature vector;
step 6.3, iterating and mutating the g-th generation population using formula (2), thereby obtaining the i-th individual x_i^(g+1) of the (g+1)-th generation population, forming the (g+1)-th generation population, updating the g-th generation neuron matrix H^(g), and obtaining the (g+1)-th generation neuron matrix H^(g+1):
x_i^(g+1) = m^(g) + ε^(g) N_i(0, C^(g))   (2)
In formula (2), m^(g) is the mean vector of the μ top-ranked individuals (by fitness) of the g-th generation population; N_i(0, C^(g)) is the Gaussian distribution obeyed by the i-th individual of the g-th generation population; ε^(g) is the step size of the g-th generation population evolution, initialized as ε^(1) = 1; and C^(g) is the covariance matrix of the g-th generation population;
step 6.4, calculating the mean vector m^(g+1), the evolution step size ε^(g+1), and the model parameters {β_i^(g+1) | i = 1, 2, …, L} of the (g+1)-th generation population;
step 6.4.1, obtaining the mean vector m^(g+1) of the (g+1)-th generation population using formula (3):
m^(g+1) = Σ_{n=1}^{μ} ω_n x_n^(g)   (3)
In formula (3), ω_n is the optimization weight of the n-th of the μ top-ranked individuals (by fitness), with Σ_{n=1}^{μ} ω_n = 1, and x_n^(g) is the n-th of the μ individuals of the g-th generation population;
step 6.4.2, obtaining the evolution step size ε^(g+1) of the (g+1)-th generation population using formula (4):
[formula (4), shown as an image in the original]
In formula (4), c^(g) is the update parameter of the step size ε^(g);
step 6.4.3, from the (g+1)-th generation neuron matrix H^(g+1), obtaining the model parameters {β_i^(g+1) | i = 1, 2, …, L} corresponding to the (g+1)-th generation population according to the procedure of step 5;
Step 6.5, obtaining a convergence criterion S by using the formula (5), if the S is less than theta for continuous lambda times, stopping iteration, obtaining a sequential extreme learning machine model of the optimal parameter, and simultaneously outputting the optimal parameter, wherein the step comprises the following steps: optimal input weights
Figure GDA0003763343410000038
Optimal hidden layer biasing
Figure GDA0003763343410000039
And corresponding optimal model parameters
Figure GDA00037633434100000310
Wherein the content of the first and second substances,
Figure GDA00037633434100000311
represents the ith optimal input weight,
Figure GDA00037633434100000312
indicating the ith optimal hidden layer bias,
Figure GDA00037633434100000313
and (3) representing the ith optimal model parameter, otherwise, returning to the step 6.2, wherein theta is a convergence boundary:
Figure GDA00037633434100000314
in the formula (5), the reaction mixture is,
Figure GDA0003763343410000041
is the fitness of the ith individual in the g-1 generation population, and is initialized when g =1
Figure GDA0003763343410000042
Step 7, anomaly detection after mounting:
step 7.1, obtaining the mean σ of the feature vectors of the training set using formula (6):
σ = (1/c) Σ_{j=1}^{c} X_j   (6)
In formula (6), c is the total number of feature vectors in the training set and X_j is the j-th input feature vector;
step 7.2, defining the current time as t and initializing t = 0;
taking the optimal model parameters {β_i* | i = 1, 2, …, L} as the model parameters {β_i^t | i = 1, 2, …, L} at the current time t, where β_i^t represents the i-th model parameter at the current time t;
obtaining the predicted feature vector output by the model at time t+1 using formula (7):
[formula (7), shown as an image in the original]
In formula (7), g(·) represents the hidden-layer activation function;
step 7.3, from the predicted feature vector obtained in step 7.2, obtaining the prediction variance at time t+1 using formula (8):
[formula (8), shown as an image in the original]
Formula (8) also involves the prediction variance at time t; when t = 0, this variance is assigned an initial value;
step 7.4, obtaining the detection threshold ω using formula (9):
[formula (9), shown as an image in the original]
In formula (9), T_{t+1} represents the actual feature vector at time t+1;
step 7.5, calculating the prediction distance difference φ_{t+1} at time t+1 using formula (10):
[formula (10), shown as an image in the original]
If φ_{t+1} is larger than ω, a patch anomaly has occurred at time t+1; otherwise the patch is normal at time t+1;
step 8, acquiring the patch data of the welded PCB in real time and processing the data according to the steps 1 to 3 to obtain the patch characteristic vector of the welded PCB; sequentially inputting the patch characteristic vectors of the welded PCB into the sequential extreme learning machine model of the optimal parameters for on-line training so as to update the optimal parameters, thereby obtaining the sequential extreme learning machine model with strong adaptability;
step 9, scanning the area needing patch detection by adopting a multi-scale sliding window, and processing the test image in the area where each sliding window is located according to the steps 1-3 to obtain a feature vector to be detected;
step 10, inputting the feature vector to be detected into a strong-adaptability sequential extreme learning machine model for anomaly detection, thereby obtaining the areas of all windows with the patch anomalies;
and 11, screening all window areas with the patch abnormity, so as to obtain the positions of the patch abnormity in the test image.
Compared with the prior art, the invention has the beneficial effects that:
1. the method is based on the sequential extreme learning machine, the sequential extreme learning machine is a high-efficiency classifier with high classification speed and few learning parameters, further optimization of a model can be realized by continuously adding new samples, active learning is realized, feedback is accurately obtained, the method has strong self-learning and large-scale parallel processing capacity, compared with the traditional neural network structure, the method has the characteristics of being updatable and optimizable, meanwhile, on the basis of the extreme learning machine, the concept of time is introduced into network training, and the SMT post-paster anomaly detection method with high training speed, strong generalization capability and high precision is established, so that the method is suitable for detection and repair of post-paster anomalies;
2. according to the invention, the sequential extreme learning machine model is improved based on the CMA-ES algorithm, the CMA-ES algorithm has the characteristics of good overall performance, high optimization efficiency and high convergence rate, and the parameters of the sequential extreme learning machine model are improved by the CMA-ES algorithm, so that the sequential extreme learning machine model has better generalization, avoids falling into local optimization, better promotes the accuracy and reliability of abnormal detection after SMT (surface mount technology) chip mounting, and improves the production efficiency;
3. the method is based on the single hidden layer feedforward neural network, the output weight connecting the hidden layer and the output layer is determined by an analytic method, the parameters are simple and convenient to select, iteration is not needed, the learning speed is high, and the detection and repair of the abnormity after the surface mounting are realized;
4. in the detection stage, the whole image is not scanned any more when the multi-scale sliding window is adopted, and only the marked salient region of the test image needs to be scanned, so that the generation of a plurality of candidate regions is reduced, and the detection speed is further improved.
5. The invention has wide application range: additional auxiliary information is not required to be added, the method can be widely applied to abnormality detection after SMT patch, and has wide application prospect.
Drawings
FIG. 1 is a flow chart of a core algorithm of the method;
FIG. 2 is a schematic diagram of a single hidden layer feedforward neural network structure;
FIG. 3 is a flow chart of the CMA-ES algorithm.
Detailed Description
In this embodiment, as shown in fig. 1, a post-patch anomaly detection method that combines the CMA-ES (covariance matrix adaptation evolution strategy) algorithm and a sequential extreme learning machine is carried out according to the following steps:
step 1, obtaining a gray level histogram of N normally welded PCBs from a normally welded PCB database to form a sample training set;
step 2, carrying out image enhancement on the training set to obtain an enhanced training set;
step 2.1, converting the N gray-level histograms in the training set into uniform distributions, i.e. making the number of pixels at each gray level the same, thereby increasing the dynamic range of values. In a gray-level histogram, the abscissa is the gray level of the image and the ordinate is the frequency with which pixels of that gray level appear in the image; the gray-level range is [l_1, l_2]. The gray-level histogram function is:
h(r_k) = n_k, k ∈ [0, L-1]   (1)
In formula (1), r_k is the k-th gray level of the image and n_k is the number of pixels with gray level r_k in the image;
step 2.2, modifying the gray value of each pixel in the uniform distribution map by using a gray stretching algorithm to obtain an enhanced training set;
Let the gray range of the original images in the training set be [l_1, l_2]. To obtain images with the gray range [l_3, l_4], the required transfer function is:
f(r) = ((l_4 - l_3) / (l_2 - l_1)) · (r - l_1) + l_3   (2)
In formula (2), r is the gray level of the image;
after the linear stretching function is adopted, the gray value of the pixel points of all the digital images is subjected to gray level transformation, so that the contrast of the images can be improved, the images are enhanced, and the corresponding characteristic vectors are obtained, thereby facilitating model training by using a sequential extreme learning machine;
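For illustration, a minimal NumPy sketch of this enhancement step (histogram equalization followed by the linear gray stretch of formula (2)) might look as follows; the function names and the target range [l_3, l_4] = [0, 255] are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Map the gray levels of an 8-bit grayscale image to an (approximately) uniform distribution."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic histogram-equalization mapping based on the cumulative distribution.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def gray_stretch(img: np.ndarray, l3: int = 0, l4: int = 255) -> np.ndarray:
    """Linearly stretch gray values from [l1, l2] (image min/max) to [l3, l4], cf. formula (2)."""
    l1, l2 = int(img.min()), int(img.max())
    stretched = (l4 - l3) / max(l2 - l1, 1) * (img.astype(np.float32) - l1) + l3
    return np.clip(stretched, 0, 255).astype(np.uint8)

def enhance(img: np.ndarray) -> np.ndarray:
    """Image enhancement of step 2: equalization followed by gray stretching."""
    return gray_stretch(equalize_histogram(img))
```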
step 3, utilizing Haar transformation to extract features of the enhanced training set to obtain the difference between pixel sums of white areas and black areas of images in the training set, and using the difference as a feature vector of the training set; the difference between the pixel sums of the white area and the black area is calculated in a mode of constructing an integral graph, the integral graph is constructed in a mode of scanning an image line by line, the accumulated sum of the row direction of each pixel (i, j) is calculated in a recursion mode, and the formula is as follows:
s(i,j)=s(i,j-1)+f(i,j) (3)
ii(i,j)=ii(i-1,j)+s(i,j) (4)
In formulas (3) and (4), s(i, j) is the cumulative sum of pixels in the row direction at (i, j), f(i, j) is the pixel value at (i, j), and ii(i, j) is the value of the integral image.
The following types of image features can be extracted through the Haar transform: edge features, line features, center features, and diagonal features;
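As a rough illustration of formulas (3) and (4) and of a simple two-rectangle (edge) Haar-like feature, the following sketch computes the integral image and the difference between the pixel sums of a white and a black rectangle; the rectangle layout and function names are assumptions:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """ii(i, j) per formulas (3)-(4): cumulative sum over rows and columns."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Sum of pixels in the rectangle [top, top+h) x [left, left+w) using the integral image."""
    ii = np.pad(ii, ((1, 0), (1, 0)))  # pad with zeros so the corner formula also works on the border
    return int(ii[top + h, left + w] - ii[top, left + w] - ii[top + h, left] + ii[top, left])

def haar_edge_feature(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Two-rectangle edge feature: pixel sum of the white (left) half minus the black (right) half."""
    half = w // 2
    white = rect_sum(ii, top, left, h, half)
    black = rect_sum(ii, top, left + half, h, half)
    return white - black
```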
step 4, constructing a single hidden layer feedforward neural network, as shown in fig. 2, initializing a sequential extreme learning machine (OS-ELM) model
Step 4.1, taking a Sigmoid function as a hidden layer activation function:
Figure GDA0003763343410000071
in the formula (5), W i =[w i,1 ,w i,2 ,…,w i,n ] T To input the weight, X i =[x i,1 ,x i,2 ,…,x i,n ] T As input feature vectors, b i The offset value of the ith hidden layer unit which is randomly output is a random number in the range of (0, 1). W is a group of i ·X j Represents W i And X j The inner product of (d).
Step 4.2, randomly generating input weights {W_i | i = 1, 2, …, L} and hidden-layer biases {b_i | i = 1, 2, …, L}, thereby determining the input-output relationship; W_i represents the i-th input weight and b_i the i-th hidden-layer bias. The input-output relationship can be expressed as:
Σ_{i=1}^{L} β_i g(W_i · X_j + b_i) = T_j,  j = 1, 2, …, N   (6)
In formula (6), g(W_i · X_j + b_i) is the activation function; n is the number of input nodes, with n = 4; L is the number of hidden-layer nodes, with L = 30 to obtain better results; T_j is the network output, i.e. the feature vector of the (j+n)-th image in the training set.
The loss function minimized during model initialization is:
[formula (7), shown as an image in the original]
In formula (7), N is the total number of neuron nodes.
Step 5, performing initialization training on the sequential extreme learning machine model; constructing an initial neuron matrix H and an output vector T, solving the least square solution of the initial neuron matrix H and the output vector T to obtain a model parameter { beta [ [ beta ] ] i |i=1,2,…,L},β i Representing the ith model parameter, and L representing the total number of the model parameters, thereby completing the initial training of the model; the method comprises the following specific steps:
writing equation (6) in matrix form:
Hβ=T (8)
in formula (8), H is a neuron matrix:
Figure GDA0003763343410000074
solving the least square solution of the formula (9) can obtain the output weight:
β=PH T T (10)
in formula (10):
P=(H T H) -1 (11)
due to the model parameter beta i The solution of (2) adopts an analytic method, and the learning speed is high, so the method is an extreme learning machine.
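A compact sketch of this initial training step (random hidden layer plus least-squares output weights, in the spirit of formulas (8)-(11)) is given below; the sigmoid activation and the variable names follow the standard ELM recipe and are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_elm(X: np.ndarray, T: np.ndarray, L: int = 30):
    """Initial OS-ELM training.
    X: (N, n) input feature vectors; T: (N, m) targets; L: hidden nodes (L = 30 as in the text)."""
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(L, n))   # random input weights W_i
    b = rng.uniform(0.0, 1.0, size=L)          # random hidden-layer biases b_i in (0, 1)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # neuron matrix H via the sigmoid g, cf. formula (9)
    P = np.linalg.inv(H.T @ H)                 # P = (H^T H)^-1, formula (11)
    beta = P @ H.T @ T                         # beta = P H^T T, formula (10)
    return W, b, beta, P
```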
Step 6, carrying out iterative optimization on the initialized sequential extreme learning machine model by utilizing a CMA-ES algorithm so as to obtain the optimal parameters of the sequential extreme learning machine model;
step 6.1, setting the minimum iteration number as eta, defining the current iteration number as g, and initializing g =1;
utilizing the input weights {W_i | i = 1, 2, …, L} and hidden-layer biases {b_i | i = 1, 2, …, L} of the initialized sequential extreme learning machine model obtained in step 5 to form the g-th generation population;
taking the model parameters {β_i | i = 1, 2, …, L} as the model parameters {β_i^(g) | i = 1, 2, …, L} corresponding to the g-th generation population, where β_i^(g) represents the model parameter corresponding to the i-th individual of the g-th generation population; and taking the initial neuron matrix H as the g-th generation neuron matrix H^(g);
Step 6.2, calculating the fitness of the ith individual in the population of the g generation by using the formula (12)
Figure GDA0003763343410000083
Thus obtaining the fitness of each individual of the g-th generation population, and ranking each individual according to the fitness from high to low:
Figure GDA0003763343410000084
in the formula (12), the reaction mixture is,
Figure GDA0003763343410000085
is the average output vector;
Figure GDA0003763343410000086
is a model parameter corresponding to the ith individual in the g generation population
Figure GDA0003763343410000087
Calculating a predicted feature vector;
step 6.3, iterating and mutating the g-th generation population using formula (13), thereby obtaining the i-th individual x_i^(g+1) of the (g+1)-th generation population, forming the (g+1)-th generation population, updating the g-th generation neuron matrix H^(g), and obtaining the (g+1)-th generation neuron matrix H^(g+1):
x_i^(g+1) = m^(g) + ε^(g) N_i(0, C^(g))   (13)
In formula (13), m^(g) is the mean vector of the μ top-ranked individuals (by fitness) of the g-th generation population; N_i(0, C^(g)) is the Gaussian distribution obeyed by the i-th individual of the g-th generation population; ε^(g) is the step size of the g-th generation population evolution, initialized as ε^(1) = 1; and C^(g) is the covariance matrix of the g-th generation population;
step 6.4, calculating the mean vector m^(g+1), the evolution step size ε^(g+1), and the model parameters {β_i^(g+1) | i = 1, 2, …, L} of the (g+1)-th generation population;
step 6.4.1, obtaining the mean vector m^(g+1) of the (g+1)-th generation population using formula (14):
m^(g+1) = Σ_{n=1}^{μ} ω_n x_n^(g)   (14)
In formula (14), ω_n is the optimization weight of the n-th of the μ top-ranked individuals (by fitness), with Σ_{n=1}^{μ} ω_n = 1, and x_n^(g) is the n-th of the μ individuals of the g-th generation population;
step 6.4.2, obtaining the evolution step size ε^(g+1) of the (g+1)-th generation population using formula (15):
[formula (15), shown as an image in the original]
In formula (15), c^(g) is the update parameter of the step size ε^(g);
step 6.4.3, from the (g+1)-th generation neuron matrix H^(g+1), obtaining the model parameters {β_i^(g+1) | i = 1, 2, …, L} corresponding to the (g+1)-th generation population according to the procedure of step 5;
Step 6.5, obtaining the convergence criterion S using formula (16); if S < θ for λ consecutive times and g > η, stop the iteration, obtain the sequential extreme learning machine model with the optimal parameters, and output the optimal parameters, namely the optimal input weights {W_i* | i = 1, 2, …, L}, the optimal hidden-layer biases {b_i* | i = 1, 2, …, L}, and the corresponding optimal model parameters {β_i* | i = 1, 2, …, L}, where W_i* represents the i-th optimal input weight, b_i* the i-th optimal hidden-layer bias, and β_i* the i-th optimal model parameter; otherwise return to step 6.2. θ is the convergence boundary:
[formula (16), shown as an image in the original]
Formula (16) involves the fitness of the i-th individual in the (g-1)-th generation population, which is assigned an initial value when g = 1;
In summary, the flow of the CMA-ES algorithm is shown in FIG. 3.
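The following is only a schematic sketch of the evolutionary loop of FIG. 3, using the sampling rule of formula (13) and a weighted-mean recombination as in formula (14); the fitness function (an error to be minimized) and the step-size and covariance updates shown here are simplifying assumptions, since formulas (12), (15) and (16) appear only as images in the original:

```python
import numpy as np

rng = np.random.default_rng(0)

def cma_es_optimize(fitness, dim, mu=5, lam=10, eta=20, max_gen=200):
    """Schematic (mu, lambda) evolution loop in the spirit of FIG. 3.
    fitness: callable mapping a parameter vector to an error to be minimized (assumed)."""
    m = rng.uniform(-1.0, 1.0, size=dim)    # mean vector m^(g)
    eps = 1.0                                # evolution step size eps^(g), initialized to 1
    C = np.eye(dim)                          # covariance matrix C^(g)
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()                 # recombination weights, summing to 1 (formula (14))
    for g in range(1, max_gen + 1):
        # Sampling / mutation, formula (13): x_i^(g+1) = m^(g) + eps^(g) * N_i(0, C^(g))
        pop = m + eps * rng.multivariate_normal(np.zeros(dim), C, size=lam)
        errs = np.array([fitness(x) for x in pop])
        order = np.argsort(errs)             # rank individuals (lowest error = best fitness)
        best = pop[order[:mu]]
        m = weights @ best                   # weighted mean of the mu best, formula (14)
        # Simplified step-size and covariance adaptation (assumption; formula (15) is an image).
        eps *= 0.95
        C = np.cov(best, rowvar=False) + 1e-8 * np.eye(dim)
        if g > eta and errs[order[0]] < 1e-6:
            break
    return m
```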
Step 7, anomaly detection after mounting:
step 7.1, obtaining the mean σ of the feature vectors of the training set using formula (17):
σ = (1/c) Σ_{j=1}^{c} X_j   (17)
In formula (17), c is the total number of feature vectors in the training set and X_j is the j-th input feature vector;
step 7.2, defining the current time as t and initializing t = 0;
taking the optimal model parameters {β_i* | i = 1, 2, …, L} as the model parameters {β_i^t | i = 1, 2, …, L} at the current time t, where β_i^t represents the i-th model parameter at the current time t;
obtaining the predicted feature vector output by the model at time t+1 using formula (18):
[formula (18), shown as an image in the original]
In formula (18), g(·) represents the hidden-layer activation function;
step 7.3, from the predicted feature vector obtained in step 7.2, obtaining the prediction variance at time t+1 using formula (19):
[formula (19), shown as an image in the original]
Formula (19) also involves the prediction variance at time t; when t = 0, this variance is assigned an initial value;
step 7.4, obtaining the detection threshold ω using formula (20):
[formula (20), shown as an image in the original]
In formula (20), T_{t+1} represents the actual feature vector at time t+1;
step 7.5, calculating the prediction distance difference φ_{t+1} at time t+1 using formula (21):
[formula (21), shown as an image in the original]
If φ_{t+1} is larger than ω, a patch anomaly has occurred at time t+1; otherwise the patch is normal at time t+1;
because the prediction model is trained by adopting a large number of PCBs with normal patches, whether the PCBs are abnormal or not can be reversely deduced.
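Purely as an illustration of the decision rule in step 7.5 (the exact forms of formulas (18)-(21) appear only as images in the original), a hedged sketch is given below in which the prediction distance is assumed to be the Euclidean distance between the actual and predicted feature vectors and the threshold is assumed to be derived from the running prediction variance:

```python
import numpy as np

def is_patch_abnormal(t_actual: np.ndarray, t_pred: np.ndarray,
                      pred_var: float, k: float = 3.0) -> bool:
    """Assumed decision rule: flag an anomaly when the prediction distance phi
    exceeds a variance-based threshold omega. Both quantities are assumptions here,
    since the patent gives formulas (19)-(21) only as images."""
    phi = float(np.linalg.norm(t_actual - t_pred))   # assumed prediction distance difference
    omega = k * np.sqrt(pred_var)                    # assumed variance-scaled threshold
    return phi > omega
```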
Step 8, acquiring the patch data of the welded PCB in real time and processing the data according to the steps 1 to 3 to obtain the patch characteristic vector of the welded PCB; sequentially inputting the patch characteristic vectors of the welded PCB into a sequential extreme learning machine model with optimal parameters for on-line training so as to update the optimal parameters, thereby obtaining the sequential extreme learning machine model with strong adaptability; the new training sample is input into the model in a sequential mode, and the model parameter beta is updated in a recursion mode, and the method comprises the following specific steps:
model parameter β at time t +1 t+1 Obtained according to the following formula:
Figure GDA0003763343410000111
Figure GDA0003763343410000112
in the formulae (21) and (22),
h t+1 =[g(W i ,b i ,X j ) g(W i ,b i ,X j ) ... g(W i ,b i ,X j )] (24)
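A sketch of one online update step, written in the standard OS-ELM recursive form assumed above for formulas (22)-(24); the variable names are illustrative:

```python
import numpy as np

def os_elm_update(beta, P, W, b, x_new, t_new):
    """One OS-ELM online step: fold the new sample (x_new, t_new) into beta and P."""
    h = 1.0 / (1.0 + np.exp(-(W @ x_new + b)))   # hidden-layer row vector h_{t+1}, formula (24)
    h = h.reshape(1, -1)                          # shape (1, L)
    t_new = np.atleast_2d(t_new)                  # shape (1, m)
    # Formula (23): P_{t+1} = P_t - P_t h^T (I + h P_t h^T)^{-1} h P_t
    P_new = P - P @ h.T @ np.linalg.inv(np.eye(1) + h @ P @ h.T) @ h @ P
    # Formula (22): beta_{t+1} = beta_t + P_{t+1} h^T (t_new - h beta_t)
    beta_new = beta + P_new @ h.T @ (t_new - h @ beta)
    return beta_new, P_new
```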
step 9, scanning the region requiring patch detection with multi-scale sliding windows, and processing the test image within each sliding window according to steps 1 to 3 to obtain the feature vectors to be detected; at each sliding step, the region covered by the window is scaled to an image with a resolution of 160×90;
step 10, inputting the feature vector to be detected into a strong-adaptability sequential extreme learning machine model for anomaly detection, thereby obtaining the areas of all windows with the patch anomalies;
in the method, according to the size of a region to be detected in a picture, 10 different windows are set to scan the test picture, and the sliding step length is set to be 10 pixels;
and 11, screening all window areas with the patch abnormity, so as to obtain the positions of the patch abnormity in the test image.
When a multi-scale sliding window is used to detect patch-abnormal regions, most detection windows are larger than the target to be detected, so window merging is needed. The merging rule is: if the ratio of the intersection area of two overlapping detection windows to the smaller of the two window areas is greater than 0.5, the window with the higher output score is kept.
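A sketch of the multi-scale sliding-window scan and of the window-merging rule just described (overlap measured against the smaller window area, threshold 0.5, keep the higher-scoring window); the window sizes, step length and scores are placeholders:

```python
def slide_windows(region, window_sizes, step=10):
    """Yield (top, left, h, w) windows over the region to be inspected."""
    H, W = region.shape[:2]
    for (h, w) in window_sizes:
        for top in range(0, max(H - h, 0) + 1, step):
            for left in range(0, max(W - w, 0) + 1, step):
                yield top, left, h, w

def merge_windows(detections):
    """detections: list of (top, left, h, w, score) flagged as abnormal.
    Keep the higher-scoring window whenever the intersection of two windows,
    measured against the smaller window's area, exceeds 0.5."""
    keep = []
    for det in sorted(detections, key=lambda d: d[4], reverse=True):
        t, l, h, w, _ = det
        suppressed = False
        for kt, kl, kh, kw, _ in keep:
            iw = max(0, min(l + w, kl + kw) - max(l, kl))
            ih = max(0, min(t + h, kt + kh) - max(t, kt))
            if iw * ih / min(h * w, kh * kw) > 0.5:
                suppressed = True
                break
        if not suppressed:
            keep.append(det)
    return keep
```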

Claims (1)

1. A post-chip anomaly detection method integrating a CMA-ES algorithm and a sequential extreme learning machine is characterized by comprising the following steps:
step 1, obtaining a gray level histogram of N normally welded PCBs from a normally welded PCB database and forming a sample training set;
step 2, carrying out image enhancement processing on the training set to obtain an enhanced training set;
2.1, converting the N gray level histograms in the training set into a uniform distribution map;
step 2.2, modifying the gray value of each pixel in the uniform distribution map by utilizing a gray stretching algorithm to obtain an enhanced training set;
step 3, extracting the features of the enhanced training set by utilizing Haar transformation to obtain the difference between the pixel sum of a white area and a black area of an image in the training set, and taking the difference as a feature vector of the training set;
step 4, constructing a single hidden layer feedforward neural network and initializing a sequential extreme learning machine model;
step 4.1, taking a Sigmoid function as a hidden layer activation function;
step 4.2, randomly generating input weights {W_i | i = 1, 2, …, L} and hidden-layer biases {b_i | i = 1, 2, …, L}, thereby determining the input-output relationship; wherein W_i represents the i-th input weight and b_i represents the i-th hidden-layer bias;
step 5, performing initialization training on the sequential extreme learning machine model;
according to the feature vectors of the training set, an initial neuron matrix H and an output vector T are constructed, and the least-squares solution of Hβ = T is solved to obtain the model parameters {β_i | i = 1, 2, …, L}, where β_i represents the i-th model parameter and L represents the total number of model parameters, thereby completing the initial training of the model;
step 6, performing iterative optimization on the initialized sequential extreme learning machine model by utilizing a CMA-ES algorithm so as to obtain the optimal parameters of the sequential extreme learning machine model;
step 6.1, setting the minimum iteration number as eta, defining the current iteration number as g, and initializing g =1;
utilizing the input weights {W_i | i = 1, 2, …, L} and hidden-layer biases {b_i | i = 1, 2, …, L} of the initialized sequential extreme learning machine model obtained in step 5 to form the g-th generation population;
taking the model parameters {β_i | i = 1, 2, …, L} as the model parameters {β_i^(g) | i = 1, 2, …, L} corresponding to the g-th generation population, where β_i^(g) represents the model parameter corresponding to the i-th individual of the g-th generation population; and taking the initial neuron matrix H as the g-th generation neuron matrix H^(g);
Step 6.2, calculating the fitness of the ith individual in the g-th generation population by using the formula (1)
Figure FDA0003763343400000013
Thus obtaining the fitness of each individual of the g-th generation population, and ranking each individual according to the fitness from high to low:
Figure FDA0003763343400000021
in the formula (1), the reaction mixture is,
Figure FDA0003763343400000022
is the average output vector;
Figure FDA0003763343400000023
is a model parameter corresponding to the ith individual in the g generation population
Figure FDA0003763343400000024
Calculating a predicted feature vector;
step 6.3, iterating and mutating the g-th generation population using formula (2), thereby obtaining the i-th individual x_i^(g+1) of the (g+1)-th generation population, forming the (g+1)-th generation population, updating the g-th generation neuron matrix H^(g), and obtaining the (g+1)-th generation neuron matrix H^(g+1):
x_i^(g+1) = m^(g) + ε^(g) N_i(0, C^(g))   (2)
In formula (2), m^(g) is the mean vector of the μ top-ranked individuals (by fitness) of the g-th generation population; N_i(0, C^(g)) is the Gaussian distribution obeyed by the i-th individual of the g-th generation population; ε^(g) is the step size of the g-th generation population evolution, initialized as ε^(1) = 1; and C^(g) is the covariance matrix of the g-th generation population;
step 6.4, calculating the mean vector m^(g+1), the evolution step size ε^(g+1), and the model parameters {β_i^(g+1) | i = 1, 2, …, L} of the (g+1)-th generation population;
step 6.4.1, obtaining the mean vector m^(g+1) of the (g+1)-th generation population using formula (3):
m^(g+1) = Σ_{n=1}^{μ} ω_n x_n^(g)   (3)
In formula (3), ω_n is the optimization weight of the n-th of the μ top-ranked individuals (by fitness), with Σ_{n=1}^{μ} ω_n = 1, and x_n^(g) is the n-th of the μ individuals of the g-th generation population;
step 6.4.2, obtaining the evolution step size ε^(g+1) of the (g+1)-th generation population using formula (4):
[formula (4), shown as an image in the original]
In formula (4), c^(g) is the update parameter of the step size ε^(g);
step 6.4.3, from the (g+1)-th generation neuron matrix H^(g+1), obtaining the model parameters {β_i^(g+1) | i = 1, 2, …, L} corresponding to the (g+1)-th generation population according to the procedure of step 5;
Step 6.5, obtaining the convergence criterion S using formula (5); if S < θ for λ consecutive times, stop the iteration, obtain the sequential extreme learning machine model with the optimal parameters, and output the optimal parameters, namely the optimal input weights {W_i* | i = 1, 2, …, L}, the optimal hidden-layer biases {b_i* | i = 1, 2, …, L}, and the corresponding optimal model parameters {β_i* | i = 1, 2, …, L}, where W_i* represents the i-th optimal input weight, b_i* the i-th optimal hidden-layer bias, and β_i* the i-th optimal model parameter; otherwise return to step 6.2. θ is the convergence boundary:
[formula (5), shown as an image in the original]
Formula (5) involves the fitness of the i-th individual in the (g-1)-th generation population, which is assigned an initial value when g = 1;
Step 7, anomaly detection after mounting:
step 7.1, obtaining the mean σ of the feature vectors of the training set using formula (6):
σ = (1/c) Σ_{j=1}^{c} X_j   (6)
In formula (6), c is the total number of feature vectors in the training set and X_j is the j-th input feature vector;
step 7.2, defining the current time as t and initializing t = 0;
taking the optimal model parameters {β_i* | i = 1, 2, …, L} as the model parameters {β_i^t | i = 1, 2, …, L} at the current time t, where β_i^t represents the i-th model parameter at the current time t;
obtaining the predicted feature vector output by the model at time t+1 using formula (7):
[formula (7), shown as an image in the original]
In formula (7), g(·) represents the hidden-layer activation function;
step 7.3, from the predicted feature vector obtained in step 7.2, obtaining the prediction variance at time t+1 using formula (8):
[formula (8), shown as an image in the original]
Formula (8) also involves the prediction variance at time t; when t = 0, this variance is assigned an initial value;
step 7.4, obtaining the detection threshold ω using formula (9):
[formula (9), shown as an image in the original]
In formula (9), T_{t+1} represents the actual feature vector at time t+1;
step 7.5, calculating the prediction distance difference φ_{t+1} at time t+1 using formula (10):
[formula (10), shown as an image in the original]
If φ_{t+1} is larger than ω, a patch anomaly has occurred at time t+1; otherwise the patch is normal at time t+1;
step 8, acquiring the paster data of the welded PCB in real time and processing according to the steps 1-3 to obtain the paster characteristic vector of the welded PCB; sequentially inputting the patch characteristic vectors of the welded PCB into the sequential extreme learning machine model of the optimal parameters for on-line training so as to update the optimal parameters, thereby obtaining the sequential extreme learning machine model with strong adaptability;
step 9, scanning the area needing patch detection by adopting a multi-scale sliding window, and processing the test image in the area where each sliding window is located according to the steps 1-3 to obtain a feature vector to be detected;
step 10, inputting the feature vector to be detected into a strong-adaptability sequential extreme learning machine model for anomaly detection, so as to obtain the areas of all windows where the patch anomaly occurs;
and 11, screening all window areas with the patch abnormity, so as to obtain the positions of the patch abnormity in the test image.
CN202110494529.3A 2021-05-07 2021-05-07 Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine Active CN113177563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110494529.3A CN113177563B (en) 2021-05-07 2021-05-07 Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110494529.3A CN113177563B (en) 2021-05-07 2021-05-07 Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine

Publications (2)

Publication Number Publication Date
CN113177563A CN113177563A (en) 2021-07-27
CN113177563B true CN113177563B (en) 2022-10-14

Family

ID=76928270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110494529.3A Active CN113177563B (en) 2021-05-07 2021-05-07 Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine

Country Status (1)

Country Link
CN (1) CN113177563B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820613B (en) * 2022-06-29 2022-10-28 深圳市瑞亿科技电子有限公司 Incoming material measuring and positioning method for SMT (surface mount technology) patch processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825581A (en) * 2010-04-16 2010-09-08 广东工业大学 Model-based method for detecting PCB defects
CN106779066A (en) * 2016-12-02 2017-05-31 上海无线电设备研究所 A kind of radar circuit plate method for diagnosing faults
WO2017197626A1 (en) * 2016-05-19 2017-11-23 江南大学 Extreme learning machine method for improving artificial bee colony optimization
CN112729826A (en) * 2020-12-21 2021-04-30 湘潭大学 Bearing fault diagnosis method for artificial shoal-frog leaping optimization extreme learning machine

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201704373D0 (en) * 2017-03-20 2017-05-03 Rolls-Royce Ltd Surface defect detection
CN107590538B (en) * 2017-08-28 2021-04-27 南京航空航天大学 Danger source identification method based on online sequence learning machine
CN109615056A (en) * 2018-10-09 2019-04-12 天津大学 A kind of visible light localization method based on particle group optimizing extreme learning machine

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825581A (en) * 2010-04-16 2010-09-08 广东工业大学 Model-based method for detecting PCB defects
WO2017197626A1 (en) * 2016-05-19 2017-11-23 江南大学 Extreme learning machine method for improving artificial bee colony optimization
CN106779066A (en) * 2016-12-02 2017-05-31 上海无线电设备研究所 A kind of radar circuit plate method for diagnosing faults
CN112729826A (en) * 2020-12-21 2021-04-30 湘潭大学 Bearing fault diagnosis method for artificial shoal-frog leaping optimization extreme learning machine

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Yuk E. H. et al.; "Surf Based Fault Image Detection for Printed Circuit Board Inspection"; IASTEM International Conference; 2016-09-30; pp. 31-35 *
Zakaria S. S. et al.; "Automated Detection of Printed Circuit Boards (PCB) Defects by U"; IOP Conference Series: Materials Science and Engineering; 2021-03-31; pp. 1-7 *
Wang Yongli et al.; "PCB defect detection and recognition algorithm based on convolutional neural network"; Journal of Electronic Measurement and Instrumentation; 2019-12-31; pp. 78-84 *
Yin Gang et al.; "Fault diagnosis method using online sequential extreme learning machine"; Journal of Vibration, Measurement & Diagnosis; 2013-12-31; pp. 325-329 *

Also Published As

Publication number Publication date
CN113177563A (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN110097053B (en) Improved fast-RCNN-based electric power equipment appearance defect detection method
CN109272500B (en) Fabric classification method based on adaptive convolutional neural network
CN111583263A (en) Point cloud segmentation method based on joint dynamic graph convolution
Wan et al. Ceramic tile surface defect detection based on deep learning
CN112818969B (en) Knowledge distillation-based face pose estimation method and system
CN110322445B (en) Semantic segmentation method based on maximum prediction and inter-label correlation loss function
CN112561910A (en) Industrial surface defect detection method based on multi-scale feature fusion
CN110197205A (en) A kind of image-recognizing method of multiple features source residual error network
CN113221787A (en) Pedestrian multi-target tracking method based on multivariate difference fusion
CN111145145B (en) Image surface defect detection method based on MobileNet
CN112819063B (en) Image identification method based on improved Focal loss function
KR20210127069A (en) Method of controlling performance of fusion model neural network
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN108596044B (en) Pedestrian detection method based on deep convolutional neural network
CN113177563B (en) Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine
Iivarinen et al. A defect detection scheme for web surface inspection
CN114818826A (en) Fault diagnosis method based on lightweight Vision Transformer module
CN110349119B (en) Pavement disease detection method and device based on edge detection neural network
CN115409822A (en) Industrial part surface anomaly detection method based on self-supervision defect detection algorithm
CN114821098A (en) High-speed pavement damage detection algorithm based on gray gradient fusion characteristics and CNN
CN114596433A (en) Insulator identification method
CN109887005B (en) TLD target tracking method based on visual attention mechanism
CN111079750A (en) Power equipment fault region extraction method based on local region clustering
CN111639206A (en) Effective fine image classification method based on optimized feature weight

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240104

Address after: No. 369 Huayuan Avenue, Baohe Economic Development Zone, Hefei City, Anhui Province, 230041, F326

Patentee after: HEFEI SSTARS MONITORING INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room c503, feicui science and education building, Hefei University of technology, 485 Danxia Road, University Town, Hefei Economic and Technological Development Zone, 230000, Anhui Province

Patentee before: Anhui shuaier Information Technology Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240527

Address after: Room F320, Intelligent Manufacturing Technology Research Institute, Hefei University of Technology, No. 369 Huayuan Avenue, Baohe District, Hefei City, Anhui Province, 230051

Patentee after: Hefei Xingbei Intelligent Control Technology Co.,Ltd.

Country or region after: China

Address before: No. 369 Huayuan Avenue, Baohe Economic Development Zone, Hefei City, Anhui Province, 230041, F326

Patentee before: HEFEI SSTARS MONITORING INFORMATION TECHNOLOGY Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right