CN115829942A - Electronic circuit defect detection method based on non-negative constraint sparse self-encoder - Google Patents

Electronic circuit defect detection method based on non-negative constraint sparse self-encoder

Info

Publication number
CN115829942A
Authority
CN
China
Prior art keywords
image
defect
electronic circuit
encoder
matrix
Prior art date
Legal status
Pending
Application number
CN202211408638.XA
Other languages
Chinese (zh)
Inventor
付丽辉
石跃
吴文昊
蒋舟
李轶旻
Current Assignee
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202211408638.XA
Publication of CN115829942A


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an electronic circuit defect detection method based on a non-negative constraint sparse self-encoder, which comprises the following steps: collecting and cropping electronic circuit images, enhancing the image data and extracting features, and determining the defect data set and defect types. The electronic circuit images are denoised with an image denoising method based on a group-based graph model and the nuclear norm; the graph Laplacian matrix of the image is obtained through an optimized learning strategy, and the graph structure is encapsulated by this Laplacian matrix. A deep learning model based on a non-negative constraint sparse self-encoder is proposed and used to extract the defect regions. The successfully trained self-encoder model predicts a circuit image, the original defective input image is subtracted from the predicted image to generate a defect detection map, and a suitable threshold is set to obtain the correctly classified defect type. Compared with the prior art, the method can effectively detect both true defects of the electronic circuit, which show larger color changes, and false defects, which show smaller color changes, improving the reliability and accuracy of electronic circuit defect identification.

Description

Electronic circuit defect detection method based on non-negative constraint sparse self-encoder
Technical Field
The invention relates to the technical field of electronic circuit defect classification, in particular to an electronic circuit defect detection method based on a non-negative constraint sparse self-encoder.
Background
It is well known that electronic circuit boards mechanically support and electrically connect electronic components through conductive traces, pads and solder joints. With the progress of science and technology and the rapid development of the mobile electronic product market, electronic circuit boards have become more diversified and complicated: more electronic devices are integrated onto them, their layouts grow ever denser, and the problems this brings are becoming more obvious. Typically, a defect in a signal transmission trace strongly affects the signals of the whole system and degrades the performance of the connected electronic components; since those components are critical to the function of the whole system, the result is ultimately a circuit fault and a loss of system performance. At present, the tiny defects of an electronic circuit board are difficult to distinguish by conventional visual inspection with the human eye, so there is an urgent need for research on automatic defect detection of electronic circuit boards, which is one of the most important ways to control circuit quality.
Defect detection methods for electronic circuit boards can generally be classified into two types: direct detection approaches and camera-based machine vision methods. The direct detection approach is inspection by manual operation, in which an operator performs a visual check. However, the repetitive work easily fatigues the operator, and the detection results differ from person to person, which is a fundamental limitation of human judgment. To overcome these limitations, researchers have investigated machine-vision-based defect detection, which typically uses a camera, a light source and an operating system. This approach is intuitive and easy to understand, but it requires high shooting alignment precision and a carefully controlled lighting environment. Beyond these methods, developers have also applied various machine vision and image processing algorithms to the defect detection of electronic circuits. Generally, such methods require that all defect types be specified in advance. However, there is no guarantee that the inspection system will only encounter the defects determined beforehand. In an actual production environment, various unexpected defects often occur that cannot be correctly detected by conventional machine-vision-based methods. In this case, the defect inspection system must be able to recalibrate with new sample data whenever the circuit manufacturing conditions change. This is a major drawback of conventional machine vision inspection systems.
Disclosure of Invention
The purpose of the invention is as follows: in view of the fact that electronic circuit boards are increasingly diversified and complicated and that their tiny defects are difficult to distinguish by conventional human visual inspection, the invention provides an electronic circuit defect detection method based on a non-negative constraint sparse self-encoder.
The technical scheme is as follows: the electronic circuit defect detection method based on a non-negative constraint sparse self-encoder disclosed by the invention comprises the following steps:
(1) Preprocessing the electronic circuit defect data set: collecting and cropping electronic circuit images, determining the defect data set and defect types of the electronic circuit, and completing image data enhancement and feature selection;
(2) Completing the image denoising of the electronic circuit based on the group-based graph model and nuclear norm method GNN, and encapsulating the graph structure with the graph Laplacian matrix and nuclear norm of the image;
(3) Constructing a deep learning model based on the non-negative constraint sparse self-encoder FFSAE, and using it to extract the defect regions;
(4) Generating a defect detection map: predicting a high-quality circuit image with the successfully trained self-encoder FFSAE model, subtracting the original defective input image from the predicted image to generate the defect detection map, and finally highlighting the defect positions by setting a suitable threshold on the defect detection map, so as to complete the correct classification of the electronic circuit defect types.
Further, the step (1) of preprocessing the electronic circuit defect data set comprises the following specific steps:
Step (1)a: collecting and cropping electronic circuit images, and determining the electronic circuit defect data set;
Step (1)b: determining the defect types of the electronic circuit, with two kinds of defects being set, one being true defects and the other being false defects; the true defects are defects caused by a change in the shape of the leads, specifically set as disconnection defects, connection defects, protrusion defects and crack defects; the false defects are characterized by a change of color only, without any change in the shape features of the leads and basic components, and are specifically divided into oxidation defects and dust defects;
Step (1)c: completing image data enhancement and feature selection
Data enhancement: data enhancement is accomplished by applying geometric transformations and adding noise: a random rotation is applied to overcome positional deviations of the image data, and a random matrix with a noise distribution is multiplied with the original data;
Feature selection: the feature parameters are determined as color information and shape information; the color information comprises 30 features in total, obtained by extracting the following 15 features from each of the RGB and HSV color models: 1) maximum value, 2) minimum value, 3) mean value, 4) proportion of high values, 5) ratio of the lead region to the candidate region, 6) ratio of the base part to the candidate part, 7) position difference between the numerical center of gravity and the maximum value, 8) variance, 9) standard deviation, 10) kurtosis, 11) skewness, 12) entropy, 13) difference between the maximum and minimum values, 14) median value, and 15) correlation between the test image and the reference image; the shape information includes 8 features in total: 1) area, 2) perimeter, 3) x-direction dimension, 4) y-direction dimension, 5) aspect ratio, 6) diagonal length, 7) complexity, and 8) roundness.
Further, the step (2) of denoising the circuit images based on the group-based graph model and nuclear norm method GNN is specifically performed as follows:
Step (2)a: representing the collected patch images with the graph Laplacian matrix L
(1) Obtaining the weighted adjacency matrix W from the image data
The undirected weighted adjacency matrix W is non-negative and symmetric, i.e. W_ij = W_ji and W_ij ≥ 0; the edge weight matrix W is formed with a thresholded Gaussian kernel as follows:

W_ij = exp(-d(v_i, v_j)^2 / (2σ^2)) if d(v_i, v_j) ≤ ε, and W_ij = 0 otherwise

where d(v_i, v_j) is the Euclidean distance between the image vertices v_i and v_j, σ is a decay control parameter that controls how fast the weight decreases as the distance increases, and ε is a threshold parameter defining an ε-neighborhood graph;
(2) Obtaining the graph Laplacian representation L of the image

L = Δ - W

where Δ is the diagonal degree matrix satisfying Δ_ii = Σ_j W_ij;
Step (2)b: establishing the combined optimization formula of the group-based graph model and the nuclear norm
(1) Constructing the basic optimization formula
Let x^T L x be the regularization term associated with the Laplacian matrix; the basic optimization formula for image denoising is then:

x̂ = arg min_x ||x - y||_2^2 + θ x^T L x

where x and y are both n×1 vectors representing image blocks, L is the n×n Laplacian matrix, and θ is a regularization parameter;
(2) Constructing the optimization formula based on the grouped dual graph
Regarding each group as a matrix, a dual graph T_{m×n} comprising a row graph and a column graph is constructed; the optimization expression of the dual graph model is then defined as follows:

X̂ = arg min_X ||X - Y||_F^2 + θ_r Θ_r(X) + θ_c Θ_c(X)

where X and Y are m×n matrices of image row and column data, and θ_r and θ_c are regularization control parameters that determine the influence of the row-graph regularization term Θ_r(X) and the column-graph regularization term Θ_c(X); the group-based row-graph regularization term is defined using the similarity of the pixel intensities at the same position of all similar patch images, specifically:

Θ_r(X) = tr(X^T L_r X)

and the group-based column-graph regularization term is defined using the similarity of the pixel intensities at all corresponding positions within each patch image, specifically:

Θ_c(X) = tr(X L_c X^T)

where L_r and L_c are the row Laplacian matrix and the column Laplacian matrix, respectively;
(3) Constructing the nuclear norm optimization term
Low-rank optimization is introduced; the conventional convex surrogate for the rank of a low-rank data matrix X is called the nuclear norm (or trace norm) ||X||_*, specifically defined as:

||X||_* = tr((X X^T)^{1/2}) = Σ_k σ_k

where σ_k are the singular values of X;
(4) Constructing the combined optimization formula of the group-based graph model and the nuclear norm, specifically defined as:

X̂ = arg min_X ||X - Y||_F^2 + θ_n ||X||_* + θ_r Θ_r(X) + θ_c Θ_c(X)

where θ_n, θ_r and θ_c are the control parameters of the nuclear norm, the row graph and the column graph; the regularization terms reflect non-local self-similarity, while the nuclear norm reflects the low-rank characteristic of the grouped image data, which contains a large amount of redundant information.
Further, the combined optimization formula of the group-based graph model and the nuclear norm in step (2) is solved with a KNN algorithm, which specifically comprises the following steps:
(1) Calculating the optimization formula values between the current patch image and all patch images;
(2) Sorting them in ascending order of the optimization value;
(3) Selecting the K patch images with the closest optimization values;
(4) Counting the frequency of the categories of the K patch images, and taking the category with the highest frequency among them for the denoised result image.
Further, the deep learning model based on the non-negative constraint sparse self-encoder (FFSAE) in step (3) is specifically constructed as follows:
Step (3)a: encoding the input data with an encoder
First, the encoder parameters θ_E = {W_E, b_E} are used to convert the input data into image features in a "compressed" representation; the input signal X_m ∈ R^d is transformed into the hidden-layer feature vector h_m ∈ R^s by the following formula:

h_m = E(X_m, θ_E) = sigm(W_E X_m + b_E)

where θ_E denotes the encoder parameters consisting of the weight matrix W_E and the bias vector b_E, and the encoder is a nonlinear transformation function E(·): R^d → R^s (d > s);
Step (3)b: defining the cost function η_AE(W, b)
The average reconstruction error over all training samples is defined as the cost function η_AE(W, b), to which a weight-decay penalty term weighted by α is added; it is specifically defined as follows:

η_AE(W, b) = (1/M) Σ_{m=1}^{M} ||X_m - X̂_m||^2 + α (||W_E||_F^2 + ||W_D||_F^2)

where W = {W_E, W_D}, b = {b_E, b_D}, M is the number of training samples, X̂_m is the reconstruction of X_m, and α is the regularization coefficient controlling the shrinkage of the weights;
Step (3)c: building the sparse self-encoder
(1) Solving for the average activation value ε̂_j of the hidden-layer units
A sparse self-encoder is constructed by imposing sparsity on the hidden-layer units of the self-encoder; the sparse self-encoder expects the average activation value of each hidden-layer unit to be close to zero. Let [h_m]_j be the activation value of the j-th hidden unit for the input X_m; the average activation value of the j-th hidden unit over the whole training set is calculated as follows:

ε̂_j = (1/M) Σ_{m=1}^{M} [h_m]_j

(2) Defining the penalty term Σ_{j=1}^{s} KL(ε || ε̂_j)
The sparsity constraint of the sparse self-encoder is enforced through ε̂_j, where ε is a predefined sparsity parameter; an additional penalty term is added to penalize significant deviations of ε̂_j from ε. The penalty term is defined as the Kullback-Leibler (KL) divergence, as shown by the following equation:

Σ_{j=1}^{s} KL(ε || ε̂_j) = Σ_{j=1}^{s} [ ε log(ε / ε̂_j) + (1 - ε) log((1 - ε) / (1 - ε̂_j)) ]

where ε̂ is the average activation vector of the hidden units, s is the number of hidden units, and KL(· || ·) is a standard function for measuring the difference between two distributions. When ε̂_j = ε, KL(ε || ε̂_j) reaches its minimum value of 0, and it grows as ε̂_j deviates from ε; therefore, minimizing this penalty term drives ε̂_j close to ε;
(3) Defining the sparse cost function η_SAE(W, b)
The training objective of the sparse self-encoder is to minimize both the average reconstruction error η_AE(W, b) and the sparsity penalty term Σ_{j=1}^{s} KL(ε || ε̂_j); thus, the sparse cost function η_SAE(W, b) is defined as:

η_SAE(W, b) = η_AE(W, b) + β Σ_{j=1}^{s} KL(ε || ε̂_j)

where β is a weight used to control the sparsity penalty term; since ε̂ represents the average activations of the hidden units, and the activation of the hidden units depends on the parameters {W, b}, the ε̂ term also depends on {W, b};
Step (3)d: establishing the cost function η_FFSAE(W, b) of the non-negative constraint sparse self-encoder FFSAE
A non-negative constraint self-encoder FFSAE is proposed; the cost function is modified to η_FFSAE(W, b) as follows:

η_FFSAE(W, b) = (1/M) Σ_{m=1}^{M} ||X_m - X̂_m||^2 + β Σ_{j=1}^{s} KL(ε || ε̂_j) + (α/2) Σ_{i,j} f(W_ij)

where

f(W_ij) = W_ij^2 if W_ij < 0, and f(W_ij) = 0 if W_ij ≥ 0;

Step (3)e: updating W and b with the gradient descent method
First, the parameters W and b are initialized to random values close to zero; training is then performed with a gradient-descent optimization algorithm, and the parameters W and b are updated in each iteration as follows:

W := W - λ ∂η_FFSAE(W, b)/∂W
b := b - λ ∂η_FFSAE(W, b)/∂b

where λ > 0 is the learning rate;
Step (3)f: reconstructing the input signal with the decoder
The decoder parameters θ_D = {W_D, b_D} are used to recover the hidden features and thereby reconstruct the input signal; specifically, the decoder D(·): R^s → R^d recovers the hidden feature vector h_m into a reconstructed vector X̂_m with a structure similar to the input, implemented by the following formula:

X̂_m = D(h_m, θ_D) = sigm(W_D h_m + b_D)

where θ_D denotes the decoder parameters consisting of the weight matrix W_D and the bias vector b_D.
Further, in step (4), when determining the defect type from the defect detection map, a performance evaluation index needs to be defined; the structural similarity index SSI is adopted as the evaluation index to measure the degree of degradation of the structural information of one image relative to another, and is specifically calculated as:

SSI(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

where μ and σ^2 denote the mean and variance of the corresponding image, σ_xy is the covariance, and c_1, c_2 are constants that prevent the denominator from being zero.
Beneficial effects:
1. The invention realizes automatic and effective detection of electronic circuit defect types by using a non-negative constraint sparse self-encoder (FFSAE) together with a group-based graph model and nuclear norm image denoising method (GNN). Effective detection of circuit defects is achieved by capturing images of the electronic circuit with an image sensor (industrial camera), using these images to train a deep self-encoder model that decodes the original defect-free image from a defective circuit image, and then comparing the decoded circuit image with the input circuit image to determine the location of the defect. In this way, the method can overcome the problem of small and unbalanced data sets in early manufacturing stages, without prior knowledge of the defect types or of an expert system's normal/defect evaluation criteria.
2. Through a suitable preprocessing process, a data set suitable for training can be designed, thereby improving the performance of the classification model. Preprocessing techniques such as cropping and enhancement of the circuit images are completed, and the images are denoised with the group-based graph model and nuclear norm method (GNN). The graph Laplacian matrix of the image is obtained through an optimized learning strategy, and the graph structure of the data matrix is encapsulated by this Laplacian matrix, so that the topological structure of the image is reflected. This ensures a good effect on image smoothing and denoising, effectively improves the quality of the circuit images, and guarantees that a more complete defect data set is established. The invention adopts the non-negative constraint sparse self-encoder (FFSAE) to extract the defect regions. The FFSAE algorithm enforces non-negative neuron weights, converts the input into a low-dimensional coded representation, and reconstructs the original data through decoding. The method allows useful features to be extracted from unlabeled data, and the original input data can be reproduced from the latent vectors. Based on the non-negativity of the neuron weights, the FFSAE algorithm improves the interpretability of the recognition network, enhances the discriminability of the learned features, and further improves the reliability and accuracy of electronic circuit defect identification.
Drawings
FIG. 1 is the processing flow of the electronic circuit defect detection method based on the non-negative constraint sparse self-encoder (FFSAE);
FIG. 2 shows true defect images;
FIG. 3 shows false defect images;
FIG. 4 is the electronic circuit image denoising process based on the group-based graph model and nuclear norm method (GNN);
FIG. 5 is the image denoising process based on the group-based graph model and nuclear norm;
FIG. 6 is the structure of a self-encoder in an image denoising and restoration application;
FIG. 7 is the standard self-encoder structure;
FIG. 8 shows the result images of the FFSAE-based circuit defect detection experiment.
Detailed Description
To better explain the present invention and facilitate understanding, the technical solutions of the present invention are described in detail below. The following examples are illustrative of the present invention, and the present invention is not limited to them.
The invention provides an electronic circuit defect detection method based on a non-negative constraint sparse self-encoder (FFSAE). The electronic circuit defect detection process has two main purposes. The first is to preprocess the electronic circuit defect data set. Because the amount of training data determines the performance of the defect detection classification model, a data set suitable for training can be designed through a suitable preprocessing process, thereby improving the performance of the classification model. For this purpose, the method completes preprocessing techniques such as cropping and enhancement of the circuit images, and denoises the images with the group-based graph model and nuclear norm method (GNN). The graph Laplacian matrix of the image is obtained through an optimized learning strategy, and the graph structure of the data matrix is encapsulated by this Laplacian matrix, so that the topological structure of the image is reflected; this ensures a good effect on image smoothing and denoising, effectively improves the quality of the circuit images, and guarantees that a more complete defect data set is established. The second purpose is to extract the defect regions and identify them correctly. This patent adopts the non-negative constraint sparse self-encoder (FFSAE) to extract the defect regions. The FFSAE algorithm enforces non-negative neuron weights, converts the input into a low-dimensional coded representation, and reconstructs the original data through decoding. The method allows useful features to be extracted from unlabeled data, and the original input data can be reproduced from the latent vectors. Based on the non-negativity of the neuron weights, the FFSAE algorithm improves the interpretability of the recognition network, enhances the discriminability of the learned features, and further improves the reliability and accuracy of electronic circuit defect identification.
The whole electronic circuit defect detection process mainly comprises a model training stage and a defect identification stage. In the model training stage, the images are first preprocessed: the original electronic circuit images are divided into patch images of 500 × 500 pixels through a cropping operation, and noise suppression is then performed on each image using the group-based graph model and nuclear norm method (GNN) to improve the quality of the original electronic circuit image data set. Next, data enhancement is performed on the denoised electronic circuit images. In the data enhancement module, the image data set is enhanced by applying operations such as random rotation, flipping and noise addition to the defective patch images, which effectively alleviates the class imbalance caused by insufficient data. Finally, the enhanced patch images are used to train the non-negative constraint sparse self-encoder model (FFSAE) to predict the corresponding defect-free patch images. After the model has been trained successfully, the electronic circuit defect identification stage begins. The defective electronic circuit image to be inspected is fed into the successfully trained model; once a high-quality image is predicted from the defective electronic circuit image by the trained model, the defective input image is subtracted from the predicted image to generate a defect detection map. Finally, the threshold of the defect detection map is set appropriately, so that the defect positions are highlighted and an effective, intuitive identification of the defects is completed.
Fig. 1 shows a processing procedure of an electronic circuit defect detection method based on a non-negative constraint sparse self-encoder (FFSAE) in an embodiment of the present invention, and with reference to fig. 1, the electronic circuit defect detection method based on a non-negative constraint sparse self-encoder (FFSAE) disclosed in the present invention specifically includes the following steps:
Step 1: preprocess the electronic circuit defect images. The specific steps are as follows:
step (1) a: collecting and cutting electronic circuit image to determine electronic circuit defect data set
To verify the effectiveness of the electronic circuit defect detection method, an electronic circuit defect image data set must first be created. In practice, ten reference electronic circuit image data sets are acquired, each captured by a megapixel high-definition industrial camera equipped with a CMOS sensor. The original images are 4608 × 3456 pixels, and the data set contains 600 electronic circuit board defect images, which can be divided into color images and grayscale images. Because the electronic circuit images collected by the industrial camera are high-resolution, the data volume is large, the computational cost of processing them is very high, and training the classification model takes a long time. To reduce the computational burden, the electronic circuit images are cropped during preprocessing, adjusted according to the size of each circuit, so that each defect area is cropped into a patch image of 500 × 500 pixels.
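A minimal sketch of this cropping step is given below; it assumes the images are loaded as NumPy arrays and that non-overlapping patches are taken with any leftover border discarded (the patent only fixes the 500 × 500 patch size, so the rest is illustrative).

```python
import numpy as np

def crop_patches(image: np.ndarray, patch_size: int = 500) -> list:
    """Split a large circuit image into non-overlapping patch images.

    Border regions smaller than patch_size are discarded in this sketch;
    in practice the cropping is adjusted to the size of each circuit.
    """
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches

# A 4608 x 3456 image yields 9 x 6 = 54 patches of 500 x 500 pixels
# (the remaining border pixels are dropped in this simplified sketch).
```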
Step (1) b: determining defect types of electronic circuits
After the electronic circuit images are collected, the defect images need to be processed manually to determine the defect types present in the electronic circuit boards. In practice, two types of defects are set, one being true defects and the other being false defects. True defects are defects caused by a change in the shape of the leads; the true defect images are shown in fig. 2, specifically disconnection defects (see fig. 2 (a)), connection defects (see fig. 2 (b)), protrusion defects (see fig. 2 (c)) and crack defects (see fig. 2 (d)). False defects are characterized by a change of color only, with no change in the shape features of the leads and base components; the false defect images are shown in fig. 3, specifically oxidation defects (see fig. 3 (a)) and dust defects (see fig. 3 (b)).
Step (1) c: completing image data enhancement and feature selection
Typically, training deep classification algorithms requires a large amount of training data. However, in the manufacturing process of electronic circuits, the probability of producing circuit defects is generally small, and the defect types also vary across mass production. This data imbalance is a fundamental problem limiting the application of electronic circuit defect detection systems; if such data are applied directly to a deep defect detection model, overfitting and performance degradation result. To avoid these problems, data enhancement is applied to supplement the small amount of defect data and thereby improve model performance. To enrich the image data, geometric transformation and noise addition are applied to complete the data enhancement. Geometric transformation makes effective use of the shape, orientation or feature positions of a part; a random rotation is applied to overcome positional deviations of the image data. Noise addition adds noise to the original image by multiplying the original data with a random matrix drawn from a noise distribution. These data enhancement techniques help the classification model learn more robust features.
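The augmentation step can be sketched as follows; the rotation range and the noise level are assumptions made for the example, since the text above only specifies random rotation and multiplication of the original data with a random noise matrix (SciPy is used here for the rotation).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment_patch(patch: np.ndarray) -> np.ndarray:
    """Illustrative augmentation: random rotation plus multiplicative noise."""
    angle = rng.uniform(-15.0, 15.0)              # assumed rotation range (degrees)
    rotated = ndimage.rotate(patch.astype(np.float64), angle,
                             reshape=False, mode="reflect")
    noise = rng.normal(loc=1.0, scale=0.05, size=rotated.shape)  # assumed noise level
    return rotated * noise                        # multiply in the random noise matrix
```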
After preprocessing, image features need to be extracted from the defect candidate regions and fed into the non-negative constraint sparse self-encoder (FFSAE) for learning and classification. In the implementation, the feature parameters are determined as color information and shape information. The first is color information, comprising 30 features in total; the following 15 features are extracted from each of the RGB and HSV color models: (1) maximum value, (2) minimum value, (3) mean value, (4) proportion of high values, (5) ratio of the lead region to the candidate region, (6) ratio of the base part to the candidate part, (7) position difference between the numerical center of gravity and the maximum value, (8) variance, (9) standard deviation, (10) kurtosis, (11) skewness, (12) entropy, (13) difference between the maximum and minimum values, (14) median value, and (15) correlation between the test image and the reference image. The second is shape information, which includes 8 features in total: (1) area, (2) perimeter, (3) x-direction dimension, (4) y-direction dimension, (5) aspect ratio, (6) diagonal length, (7) complexity, and (8) roundness.
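A few of the per-channel color features listed above can be computed as in the sketch below; features that require segmentation masks, such as the lead-region and base-part ratios, are omitted because their exact definitions are not given here.

```python
import numpy as np
from scipy import stats

def color_channel_features(channel: np.ndarray) -> dict:
    """Compute a subset of the listed color features for one color channel
    (applied in turn to each RGB and HSV channel of a patch)."""
    flat = channel.astype(np.float64).ravel()
    hist, _ = np.histogram(flat, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins before the log
    return {
        "max": float(flat.max()),
        "min": float(flat.min()),
        "mean": float(flat.mean()),
        "variance": float(flat.var()),
        "std": float(flat.std()),
        "kurtosis": float(stats.kurtosis(flat)),
        "skewness": float(stats.skew(flat)),
        "entropy": float(-(p * np.log2(p)).sum()),
        "max_min_difference": float(flat.max() - flat.min()),
        "median": float(np.median(flat)),
    }
```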
Step 2: complete the denoising of the electronic circuit images based on the group-based graph model and nuclear norm method (GNN).
For image denoising, an image denoising method based on the group-based graph model and the nuclear norm (GNN) is proposed. First, the original noisy image is converted into patch image vectors by block sampling, and these patch vectors are then processed with GNN. A grouping-based dual graph and nuclear norm optimization formula is constructed, and similar patches are found with a KNN algorithm, i.e. block matching is performed on each noisy patch image to find the m patch images most similar to the original one; finally, all similar patch images are stacked to construct the group-based graph model. In constructing the grouped graph model, a Laplacian matrix graph strategy is proposed: a learning strategy is established through the optimization formula, and the Laplacian matrix is then obtained with this optimized learning strategy. The Laplacian weighted graph treats each pixel in the data matrix as a node and explores the similarity between patch image pixels, so that the topological structure of the image is reflected and the image is smoothed effectively, further enhancing the denoising and smoothing effect on the electronic circuit image.
Fig. 4 shows the electronic circuit image denoising process based on the group-based graph model and nuclear norm method (GNN), which comprises the following steps:
Step (2)a: represent the collected patch images with the graph Laplacian matrix L.
(1) Obtain the weighted adjacency matrix W from the image data.
An undirected weighted graph can be represented by G = (V, E, W), where V is a set of N vertices, E is the edge set consisting of the weighted edges between vertices, and W is the weighted adjacency matrix. The weighted adjacency matrix W reflects the similarity between vertices v_i and v_j; in general, the undirected weighted adjacency matrix W is non-negative and symmetric, i.e. W_ij = W_ji and W_ij ≥ 0.
Based on this, the edge weight matrix W is constructed with a thresholded Gaussian kernel, specifically:

W_ij = exp(-d(v_i, v_j)^2 / (2σ^2)) if d(v_i, v_j) ≤ ε, and W_ij = 0 otherwise   (1)

where d(v_i, v_j) is the Euclidean distance between the image vertices v_i and v_j, σ is a decay control parameter that controls how fast the weight decreases as the distance increases, and ε is a threshold parameter defining an ε-neighborhood graph.
(2) Obtain the graph Laplacian representation L of the image.
The Laplacian matrix L plays a crucial role in describing the data characteristics of the graph: L encapsulates the graph structure of the data matrix. In the undirected weighted graph, L is determined by W as follows:

L = Δ - W   (2)

where Δ is the diagonal degree matrix satisfying Δ_ii = Σ_j W_ij.
Step (2)b: establish the combined optimization formula of the group-based graph model and the nuclear norm.
(1) Construct the basic optimization formula.
Let x^T L x be the regularization term associated with the Laplacian matrix. The basic optimization formula for image denoising is then:

x̂ = arg min_x ||x - y||_2^2 + θ x^T L x   (3)

where x and y are both n×1 vectors representing image blocks, L is the n×n Laplacian matrix, and θ is a regularization parameter.
(2) Construct the optimization formula based on the grouped dual graph.
To construct the dual graph, the conventional approach is to view each data matrix X_{m×n} either as n column vectors, X = (x_1, …, x_n), or as m row vectors, X = ((x'_1)^T, …, (x'_m)^T)^T. Thus, each vector of the matrix (each column or each row) is regarded as a node for computing a weighted adjacency matrix. The adjacency matrix W_{m×m} is then a function of the Euclidean distances between rows, d(x'_i, x'_j), and W_{n×n} is a function of the Euclidean distances between columns, d(x_i, x_j). Using W_{m×m} and W_{n×n}, the row Laplacian matrix L_r and the column Laplacian matrix L_c can be obtained.
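Reusing the graph construction sketched above, the row and column Laplacians of a group matrix can be obtained as follows (treating first the rows and then the columns of X as the graph vertices).

```python
import numpy as np

def dual_graph_laplacians(X: np.ndarray, sigma: float, eps: float):
    """Build the row Laplacian L_r (rows of X as vertices) and the column
    Laplacian L_c (columns of X as vertices) for the dual-graph model,
    reusing thresholded_gaussian_graph from the sketch above."""
    _, L_r = thresholded_gaussian_graph(X, sigma, eps)      # L_r is m x m
    _, L_c = thresholded_gaussian_graph(X.T, sigma, eps)    # L_c is n x n
    return L_r, L_c
```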
Regarding each group as a matrix, a dual graph T_{m×n} comprising a row graph and a column graph is constructed; the optimization expression of the dual graph model is then defined as follows:

X̂ = arg min_X ||X - Y||_F^2 + θ_r Θ_r(X) + θ_c Θ_c(X)   (4)

where X and Y are m×n matrices of image row and column data, and θ_r and θ_c are regularization control parameters that determine the influence of the row-graph regularization term Θ_r(X) and the column-graph regularization term Θ_c(X).
The group-based row-graph regularization term is defined using the similarity of the pixel intensities at the same position of all similar patch images, specifically:

Θ_r(X) = tr(X^T L_r X)   (5)

The group-based column-graph regularization term is defined using the similarity of the pixel intensities at all corresponding positions within each patch image, specifically:

Θ_c(X) = tr(X L_c X^T)   (6)

where L_r and L_c are the row Laplacian matrix and the column Laplacian matrix, respectively.
(3) Construct the nuclear norm optimization term.
Considering the strong low-rank property of the grouped images, low-rank optimization is further introduced. The conventional convex surrogate for the rank of a low-rank data matrix X is called the nuclear norm (or trace norm) ||X||_*, specifically defined as:

||X||_* = tr((X X^T)^{1/2}) = Σ_k σ_k   (7)

where σ_k are the singular values of X.
(4) Construct the combined optimization formula of the group-based graph model and the nuclear norm, specifically defined as:

X̂ = arg min_X ||X - Y||_F^2 + θ_n ||X||_* + θ_r Θ_r(X) + θ_c Θ_c(X)   (8)

where θ_n, θ_r and θ_c are the control parameters of the nuclear norm, the row graph and the column graph. In formula (8), the regularization terms reflect non-local self-similarity, while the nuclear norm reflects the low-rank characteristic of the grouped image data, which contains a large amount of redundant information.
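The combined objective of formula (8) can be evaluated for a candidate denoised group X against the noisy group Y as sketched below; how the minimizer is actually found (for example by alternating or proximal updates) is not spelled out in this sketch.

```python
import numpy as np

def gnn_objective(X, Y, L_r, L_c, theta_n, theta_r, theta_c):
    """Value of the combined optimization formula (8): data fidelity +
    nuclear norm + row-graph and column-graph regularization terms."""
    fidelity = np.linalg.norm(X - Y, ord="fro") ** 2
    nuclear = np.linalg.norm(X, ord="nuc")        # sum of singular values, ||X||_*
    row_term = np.trace(X.T @ L_r @ X)            # Theta_r(X) = tr(X^T L_r X)
    col_term = np.trace(X @ L_c @ X.T)            # Theta_c(X) = tr(X L_c X^T)
    return fidelity + theta_n * nuclear + theta_r * row_term + theta_c * col_term
```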
Step (2) c: carrying out optimization processing solving by using a KNN algorithm, and specifically comprising the following steps:
(1) Calculate the optimization formula values between the current patch image and all patch images;
(2) Sort them in ascending order of the optimization value;
(3) Select the K patch images with the closest optimization values;
(4) Count the frequency of the categories of the K patch images, and take the category with the highest frequency among them for the denoised result image.
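The KNN step above can be sketched as follows; `score_fn` stands for the optimization-formula value between two patches and `labels` for the category of each candidate patch, both of which are assumed inputs of the sketch.

```python
import numpy as np

def knn_match(current, patches, labels, k, score_fn):
    """Select the K patches closest to the current patch by optimization value
    and return the majority category together with the matched indices."""
    scores = np.array([score_fn(current, p) for p in patches])
    nearest = np.argsort(scores)[:k]              # ascending order, K smallest values
    votes = np.asarray(labels)[nearest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)], nearest
```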
FIG. 5 shows the image denoising process based on the group-based graph model and nuclear norm.
Step 3: implement the deep learning model based on the non-negative constraint sparse self-encoder (FFSAE) and use it to extract the defect regions.
A self-encoder is a coding model that transforms the input into a low-dimensional space, from which the original data can be reconstructed by a decoder. This approach allows useful features to be extracted from unlabeled data, so that the original input data can be reproduced from the latent vectors. It is commonly used for data compression, denoising, anomaly detection, image restoration and similar tasks.
Fig. 6 is a structure of an automatic encoder in an image denoising recovery application.
A self-encoder network is an unsupervised learning algorithm. Essentially, the self-encoder learns a function M_{W,b}(X) ≈ X; in other words, by learning an estimate of this mapping, a reconstruction X̂ similar to X can be obtained. In the specific implementation, the self-encoder enforces that the number of hidden units is smaller than the input dimension, which guarantees that the network learns a compressed representation of the input data; this compressed representation can be used to discover feature structures in the input data. The standard self-encoder consists of an encoder and a decoder.
Fig. 7 shows the standard self-encoder structure. In this implementation, a non-negative constraint sparse self-encoder (FFSAE) is proposed to extract and correctly identify the defect regions. The FFSAE algorithm enforces non-negative neuron weights, converts the input into a low-dimensional coded representation, and reconstructs the original data through decoding. The method allows useful features to be extracted from unlabeled data, and the original input data can be reproduced from the latent vectors. Based on the non-negativity of the neuron weights, the FFSAE algorithm improves the interpretability of the recognition network, enhances the discriminability of the learned features, and further improves the reliability and accuracy of electronic circuit defect identification. The FFSAE algorithm is specifically implemented as follows:
step (3) a: encoding input data using an encoder
First, the encoder parameters θ_E = {W_E, b_E} are used to convert the input data into image features in a "compressed" representation. The encoder can be understood as a nonlinear transformation function E(·): R^d → R^s (d > s), which transforms the input signal X_m ∈ R^d into the hidden-layer feature vector h_m ∈ R^s according to formula (9):

h_m = E(X_m, θ_E) = sigm(W_E X_m + b_E)   (9)

where θ_E denotes the encoder parameters consisting of the weight matrix W_E and the bias vector b_E.
Step (3) b: defining a cost function η AE (W,b)
The essence of the self-encoder is to learn a compressed representation of the images at the hidden layer while reconstructing the inputs with the minimum average error over all training samples. Thus, the average reconstruction error over all training samples is defined as the cost function η_AE(W, b), and a weight-decay penalty term weighted by α is added to reduce the risk of overfitting and improve the generalization ability of the algorithm. It is specifically defined as formula (10):

η_AE(W, b) = (1/M) Σ_{m=1}^{M} ||X_m - X̂_m||^2 + α (||W_E||_F^2 + ||W_D||_F^2)   (10)

where W = {W_E, W_D}, b = {b_E, b_D}, M is the number of training samples, X̂_m is the reconstruction of X_m, and α is the regularization coefficient that controls the shrinkage of the weights.
Step (3) c: building sparse autoencoders
(1) Solve for the average activation value ε̂_j of the hidden-layer units.
A sparse self-encoder can be constructed by imposing sparsity on the hidden-layer units of the self-encoder; the sparse self-encoder expects the average activation value of each hidden-layer unit to be close to zero. Let [h_m]_j be the activation value of the j-th hidden unit for the input X_m; the average activation value of the j-th hidden unit over the whole training set is then calculated as formula (11):

ε̂_j = (1/M) Σ_{m=1}^{M} [h_m]_j   (11)
(2) Define the penalty term Σ_{j=1}^{s} KL(ε || ε̂_j).
The sparsity constraint of the sparse self-encoder is enforced through ε̂_j, where ε is a predefined sparsity parameter, typically a small value close to 0 (e.g. 0.05). To satisfy the sparsity constraint, most activations of the hidden-layer units must be close to zero. To achieve this, an additional penalty term is added to penalize significant deviations of ε̂_j from ε. The penalty term is defined as the Kullback-Leibler (KL) divergence, as shown in formula (12):

Σ_{j=1}^{s} KL(ε || ε̂_j) = Σ_{j=1}^{s} [ ε log(ε / ε̂_j) + (1 - ε) log((1 - ε) / (1 - ε̂_j)) ]   (12)

where ε̂ is the average activation vector of the hidden units, s is the number of hidden units, and KL(· || ·) is a standard function for measuring the difference between two distributions.
It can be seen that when ε̂_j = ε, KL(ε || ε̂_j) reaches its minimum value of 0, and it grows as ε̂_j deviates from ε; therefore, minimizing this penalty term drives ε̂_j close to ε.
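The sparsity penalty of formula (12) can be computed as in the sketch below; the clipping is only a numerical safeguard for activations that reach exactly 0 or 1.

```python
import numpy as np

def kl_sparsity_penalty(hidden_mean: np.ndarray, eps: float = 0.05) -> float:
    """Kullback-Leibler sparsity penalty of formula (12), given the vector of
    average hidden-unit activations and the target sparsity eps."""
    rho = np.clip(hidden_mean, 1e-8, 1.0 - 1e-8)  # keep the logarithms finite
    return float(np.sum(eps * np.log(eps / rho)
                        + (1.0 - eps) * np.log((1.0 - eps) / (1.0 - rho))))
```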
(3) Define the sparse cost function η_SAE(W, b).
The training objective of the sparse self-encoder is to minimize both the average reconstruction error η_AE(W, b) in formula (10) and the sparsity penalty term in formula (12); thus, the sparse cost function η_SAE(W, b) is defined as formula (13):

η_SAE(W, b) = η_AE(W, b) + β Σ_{j=1}^{s} KL(ε || ε̂_j)   (13)

where β is the weight used to control the sparsity penalty term. Since ε̂ represents the average activations of the hidden units, and the activation of the hidden units depends on the parameters {W, b}, the ε̂ term also depends on {W, b}.
Step (3)d: establish the cost function η_FFSAE(W, b) of the non-negative constraint sparse self-encoder (FFSAE).
Enforcing non-negative neuron weights improves the interpretability of the network operation and enhances the discriminability of the features. Therefore, a non-negative constraint self-encoder (FFSAE) is proposed; to achieve non-negativity of the weights, the cost function of formula (13) is modified to η_FFSAE(W, b), specifically formula (14):

η_FFSAE(W, b) = (1/M) Σ_{m=1}^{M} ||X_m - X̂_m||^2 + β Σ_{j=1}^{s} KL(ε || ε̂_j) + (α/2) Σ_{i,j} f(W_ij)   (14)

where

f(W_ij) = W_ij^2 if W_ij < 0, and f(W_ij) = 0 if W_ij ≥ 0   (15)

The goal of FFSAE training is to minimize η_FFSAE(W, b) as a function of W and b. According to formula (15), the penalty assigned to a negative weight is the square of the corresponding term, while a non-negative weight is assigned 0; thus, minimizing the cost function η_FFSAE(W, b) reduces the number of negative weights. In addition, as the regularized objective of the non-negative constraint self-encoder (FFSAE), formula (14) also reduces the reconstruction error and encourages the learning of sparse features while suppressing negative weights.
Step (3) e: updating using a gradient descent method
To accomplish the above optimization objective, the parameters W and b are first initialized to random values close to zero; training is then performed with a gradient-descent optimization algorithm, and the parameters W and b are updated in each iteration according to formulas (16) and (17):

W := W - λ ∂η_FFSAE(W, b)/∂W   (16)
b := b - λ ∂η_FFSAE(W, b)/∂b   (17)

where λ > 0 is the learning rate.
Step (3) f: reconstruction of input signal with decoder
The decoder parameters θ_D = {W_D, b_D} are used to recover the hidden features and thereby reconstruct the input signal. Specifically, the decoder D(·): R^s → R^d recovers the hidden feature vector h_m into a reconstructed vector X̂_m with a structure similar to the input, implemented by the following formula:

X̂_m = D(h_m, θ_D) = sigm(W_D h_m + b_D)

where θ_D denotes the decoder parameters consisting of the weight matrix W_D and the bias vector b_D.
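The encoder of formula (9) and the decoder of step (3)f can be sketched together as below; the sigmoid decoder activation and the column-per-sample layout are assumptions of the sketch.

```python
import numpy as np

def sigm(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def encode(X, W_E, b_E):
    """Formula (9): map inputs (one sample per column, d x M) to s x M hidden features."""
    return sigm(W_E @ X + b_E[:, None])

def decode(H, W_D, b_D):
    """Step (3)f: reconstruct d x M inputs from the s x M hidden features."""
    return sigm(W_D @ H + b_D[:, None])

# Round trip: with W_E of shape s x d and W_D of shape d x s,
# X_hat = decode(encode(X, W_E, b_E), W_D, b_D) has the same shape as X.
```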
Step 4: generate the defect detection map.
In the proposed method, a defect detection map finally needs to be generated to obtain the correct defect type. In the specific implementation, a defect-free output image is generated with the trained model, and the input image is then subtracted from this defect-free output image to obtain the defect detection map. This image subtraction is a pixel-wise numerical operation that subtracts the values of one image from those of the other; in this way, the changes between the two images can be detected and used for the identification of circuit defects. The specific steps are as follows:
step (4) a: predicting circuit images
The defective electronic circuit image to be inspected is fed into the trained self-encoder (FFSAE) model, and a high-quality circuit image is predicted from the defective circuit image by the trained model.
Step (4)b: generate the defect detection map.
The original defective input image is subtracted from the predicted image to generate the defect detection map. Finally, the defect positions are highlighted by setting a suitable threshold on the defect detection map, thereby completing the correct classification of the electronic circuit defect types.
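The subtraction and thresholding of step (4) can be sketched as follows; using the absolute difference and a single global threshold are simplifying assumptions of the sketch.

```python
import numpy as np

def defect_detection_map(defective: np.ndarray, predicted: np.ndarray,
                         threshold: float) -> np.ndarray:
    """Subtract the predicted defect-free image from the defective input and
    threshold the difference to highlight the defect positions."""
    diff = np.abs(defective.astype(np.float64) - predicted.astype(np.float64))
    return diff > threshold                       # boolean mask of defect pixels
```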
When determining the defect type from the defect detection map, a performance evaluation index needs to be defined. The performance index measures the degree of similarity between the predicted image and the target image. The method adopts the structural similarity index (SSI) as the evaluation index to measure the degree of degradation of the structural information of one image relative to another; the SSI compares brightness, contrast and structure to calculate the similarity of two images. It is specifically calculated as:

SSI(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

where μ and σ^2 denote the mean and variance of the corresponding image, σ_xy is the covariance, and c_1, c_2 are constants that prevent the denominator from being zero.
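The SSI of the formula above can be computed globally over a pair of images as sketched below; evaluating it over the whole image rather than local windows, and the particular values of c_1 and c_2, are simplifications of the sketch.

```python
import numpy as np

def ssi(x: np.ndarray, y: np.ndarray, c1: float = 1e-4, c2: float = 9e-4) -> float:
    """Structural similarity index between two images of equal size."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```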
Step 5: based on the above techniques, an experiment platform is built to complete the specific implementation of electronic circuit defect classification, and the following test experiments are mainly carried out:
(1) FFSAE-based circuit defect detection experiment
Subsets are randomly sampled from the electronic circuit image data set, color features and shape features are randomly selected and fed into the non-negative constraint self-encoder (FFSAE), and the detection results shown in fig. 8 are finally obtained. It can be seen that the method can detect samples whose defects show large color changes, such as disconnection and oxidation defects, and in addition the proposed method can also detect dust false defects with small color changes.
(2) True defect and false defect detection comparison experiment
To verify the correctness of the proposed method, FFSAE is compared with conventional algorithms (SVM, BP, RBF). The electronic circuit data set used comprises 500 defect images in total, of which 300 are true defects and 200 are false defects. Both gray and color images are used for each method; for the gray images the brightness value is used in place of the RGB channels of the color image, and 15 features are extracted. Finally, the true-defect and false-defect test results of each method are obtained as shown in Table 1. During the test, the procedure was repeated 10 times, with the subset and feature selection randomized each time, and the average was taken as the final result.
Table 1. Classification results of the different methods
The results in Table 1 show that, compared with the existing methods, the proposed method gives better discrimination between true defects and false defects.
(3) Defect detection comparison experiment of color image and gray image
In the experiment, 200 color defect images and 200 gray defect images were selected, and defect measurements were performed with each of the different methods, yielding the experimental results in Table 2.
Table 2. Classification results for different image types
Table 2 shows that when color images are used alone the measurement results are better than those in Table 1, while when gray images are used alone they are worse than those in Table 1. Moreover, with color images the accuracy of all methods improves over the gray images as a whole. That is, a color image is effectively represented by a combination of features over multiple color channels, such as the ratios and entropy in RGB and the correlation between the test image and the reference image, and this combination of color features is more effective for defect classification, which further illustrates the important role of color images in defect measurement and classification.
The above embodiments are merely illustrative of the technical concepts and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (6)

1. A method for detecting defects of an electronic circuit based on a non-negative constraint sparse self-encoder is characterized by comprising the following steps of:
(1) Preprocessing the electronic circuit defect data set: collecting and cropping electronic circuit images, determining the defect data set and defect types of the electronic circuit, and completing image data enhancement and feature selection;
(2) Completing the image denoising of the electronic circuit based on the group-based graph model and nuclear norm method GNN, and encapsulating the graph structure with the graph Laplacian matrix and nuclear norm of the image;
(3) Constructing a deep learning model based on the non-negative constraint sparse self-encoder FFSAE, and using it to extract the defect regions;
(4) Generating a defect detection map: predicting a high-quality circuit image with the successfully trained self-encoder FFSAE model, subtracting the original defective input image from the predicted image to generate the defect detection map, and finally highlighting the defect positions by setting a suitable threshold on the defect detection map, so as to complete the correct classification of the electronic circuit defect types.
2. The electronic circuit defect detection method based on the non-negative constraint sparse self-encoder according to claim 1, wherein the step (1) of preprocessing the electronic circuit defect data set comprises the following specific steps:
Step (1)a: collecting and cropping electronic circuit images, and determining the electronic circuit defect data set;
Step (1)b: determining the defect types of the electronic circuit, with two kinds of defects being set, one being true defects and the other being false defects; the true defects are defects caused by a change in the shape of the leads, specifically set as disconnection defects, connection defects, protrusion defects and crack defects; the false defects are characterized by a change of color only, without any change in the shape features of the leads and basic components, and are specifically divided into oxidation defects and dust defects;
Step (1)c: completing image data enhancement and feature selection
Data enhancement: data enhancement is accomplished by applying geometric transformations and adding noise: a random rotation is applied to overcome positional deviations of the image data, and a random matrix with a noise distribution is multiplied with the original data;
Feature selection: the feature parameters are determined as color information and shape information; the color information comprises 30 features in total, obtained by extracting the following 15 features from each of the RGB and HSV color models: 1) maximum value, 2) minimum value, 3) mean value, 4) proportion of high values, 5) ratio of the lead region to the candidate region, 6) ratio of the base part to the candidate part, 7) position difference between the numerical center of gravity and the maximum value, 8) variance, 9) standard deviation, 10) kurtosis, 11) skewness, 12) entropy, 13) difference between the maximum and minimum values, 14) median value, and 15) correlation between the test image and the reference image; the shape information includes 8 features in total: 1) area, 2) perimeter, 3) x-direction dimension, 4) y-direction dimension, 5) aspect ratio, 6) diagonal length, 7) complexity, and 8) roundness.
3. The method for detecting the defect of the electronic circuit based on the non-negative constraint sparse self-encoder as claimed in claim 1, wherein the step (2) of denoising the circuit image based on the group-based graph model and the kernel normalization method GNN is implemented by the following specific steps:
step (2) a: representing the collected patch images with the Laplacian matrix L
(1) Obtaining the weighted adjacency matrix W from the image data
The undirected weighted adjacency matrix W of the graph is non-negative and symmetric, i.e. W_ij = W_ji and W_ij ≥ 0; the edge weight matrix W is formed with a thresholded Gaussian kernel:
W_ij = exp(−dist(v_i, v_j)^2 / (2σ^2)) if dist(v_i, v_j) ≤ ε, and W_ij = 0 otherwise,
where dist(v_i, v_j) is the Euclidean distance between the image vertices v_i and v_j, σ is a rate parameter controlling how quickly the weight decays as the distance increases, and ε is a threshold parameter defining an ε-neighbourhood graph;
(2) Obtaining the Laplacian matrix L of the image
L = Δ − W
where Δ is the diagonal degree matrix satisfying Δ_ii = Σ_j W_ij;
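A minimal Python sketch of step (2) a under the definitions above, building the thresholded-Gaussian adjacency matrix W and the Laplacian L = Δ − W; treating each patch vector as a graph vertex, and the chosen values of σ and ε, are assumptions made for illustration.

    import numpy as np

    def patch_laplacian(patches, sigma=1.0, eps=1.5):
        # patches: array of shape (n, d), one patch vector per graph vertex v_i
        dist = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=-1)
        W = np.where(dist <= eps, np.exp(-dist**2 / (2.0 * sigma**2)), 0.0)  # epsilon-neighbourhood Gaussian weights
        np.fill_diagonal(W, 0.0)                                             # no self-loops
        Delta = np.diag(W.sum(axis=1))                                       # diagonal degree matrix
        return Delta - W                                                     # L = Delta - W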
step (2) b: establishing the combined optimization formula of the group-based graph model and the nuclear norm
(1) Constructing a basic optimization formula
Let x^T L x be the graph-Laplacian regularization term; associating the regularization term with the Laplacian matrix, the basic optimization formula for image denoising is:
x̂ = argmin_x ||y − x||_2^2 + θ · x^T L x
where x and y are n×1 vectors representing the image patch (x the estimate and y the noisy observation), L is the n×n Laplacian matrix, and θ is a regularization parameter;
(2) Constructing an optimization formula based on the group dual graph
Each group is treated as a matrix, i.e. a dual graph comprising a row graph and a column graph is constructed on the m×n group matrix T; the optimization formula of the dual graph model is then defined as:
X̂ = argmin_X ||Y − X||_F^2 + θ_r · tr(X^T L_r X) + θ_c · tr(X L_c X^T)
where X and Y are m×n matrices of the grouped image row and column data, and θ_r and θ_c are regularization control parameters that determine the influence of the regularization terms; the group-based row-graph regularization term tr(X^T L_r X) is defined using the similarity of the pixel intensities at the same position of all similar patch images, and the group-based column-graph regularization term tr(X L_c X^T) is defined using the similarity of the pixel intensities at corresponding positions within each patch image; L_r and L_c are the row and column Laplacian matrices, respectively;
(3) Constructing the nuclear norm optimization formula
Low-rank optimization is introduced; the conventional surrogate used for the low-rank data matrix X is called the nuclear norm or trace norm ||X||_*, specifically defined as:
||X||_* = tr((X X^T)^{1/2}) = Σ_k σ_k
where σ_k are the singular values of X;
(4) Constructing the combined optimization formula of the group-based graph model and the nuclear norm
The specific definition is as follows:
X̂ = argmin_X ||Y − X||_F^2 + θ_n ||X||_* + θ_r · tr(X^T L_r X) + θ_c · tr(X L_c X^T)
where θ_n, θ_r and θ_c are the control parameters of the nuclear norm, the row graph and the column graph, respectively; the graph regularization terms capture the non-local self-similarity of the image, while the nuclear norm captures its low-rank character, allowing the large amount of redundant information in the image to be exploited.
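A small Python sketch that evaluates the combined objective of step (2) b for a candidate group matrix X is given below; it only computes the cost (the minimization itself is not shown), and the default parameter values are assumptions.

    import numpy as np

    def gnn_objective(X, Y, L_r, L_c, theta_n=0.1, theta_r=0.1, theta_c=0.1):
        # X, Y: m x n group matrices (estimate and noisy data); L_r: m x m row Laplacian, L_c: n x n column Laplacian
        data_term = np.linalg.norm(Y - X, "fro") ** 2
        nuclear = np.sum(np.linalg.svd(X, compute_uv=False))  # ||X||_* = sum of singular values
        row_reg = np.trace(X.T @ L_r @ X)                      # group-based row-graph regularizer
        col_reg = np.trace(X @ L_c @ X.T)                      # group-based column-graph regularizer
        return data_term + theta_n * nuclear + theta_r * row_reg + theta_c * col_reg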
4. The electronic circuit defect detection method based on the non-negative constraint sparse self-encoder as claimed in claim 3, wherein the combined optimization formula of the group-based graph model and the nuclear norm in step (2) is solved with a KNN algorithm, comprising the following specific steps:
(1) Calculating the optimization formula value between the current patch image and all other patch images;
(2) Sorting the patch images in ascending order of these values;
(3) Selecting the K patch images whose optimization values are nearest;
(4) Counting the occurrence frequency of the categories to which the K patch images belong, and taking the category with the highest occurrence frequency among the K patch images as the denoised result image.
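A Python sketch of the KNN selection in claim 4; the inputs (per-patch objective values, category labels and patch images) and the final averaging over the majority category are illustrative assumptions about how the selected patches are turned into a result image.

    import numpy as np

    def knn_denoise(current_value, patch_values, patch_labels, patch_images, k=5):
        order = np.argsort(np.abs(patch_values - current_value))  # step (2): ascending order of objective values
        nearest = order[:k]                                       # step (3): the K nearest patches
        labels, counts = np.unique(patch_labels[nearest], return_counts=True)
        best = labels[np.argmax(counts)]                          # step (4): most frequent category
        members = nearest[patch_labels[nearest] == best]
        return patch_images[members].mean(axis=0)                 # assumed aggregation into a denoised patch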
5. The electronic circuit defect detection method based on the non-negative constraint sparse self-encoder according to claim 1, wherein the deep learning model based on the non-negative constraint sparse self-encoder FFSAE in step (3) is specifically constructed as follows:
step (3) a: encoding input data using an encoder
First, the encoder parameters θ_E = {W_E, b_E} are used to convert the input data into image features in a "compressed" representation; the input signal X_m ∈ R^d is transformed into the hidden-layer feature vector h_m ∈ R^s by the following formula:
h_m = E(X_m, θ_E) = sigm(W_E X_m + b_E)
where θ_E denotes the encoder parameters composed of the weight matrix W_E and the bias vector b_E, and the encoder is a nonlinear transformation E(·): R^d → R^s (d > s);
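A minimal Python sketch of the encoder of step (3) a; the shapes are assumptions (W_E of shape s×d, b_E of length s, with s < d).

    import numpy as np

    def sigm(z):
        return 1.0 / (1.0 + np.exp(-z))

    def encode(X_m, W_E, b_E):
        # h_m = sigm(W_E X_m + b_E): compress a d-dimensional input into s hidden features
        return sigm(W_E @ X_m + b_E)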
Step (3) b: defining a cost function η AE (W,b)
Defining the average reconstruction error of all training samples as a cost function eta AE (W, b), adding a weight attenuation penalty term alpha, and specifically defining as follows:
Figure FDA0003937225320000041
wherein W = { W E ,W D },b={b E ,b D M is the number of training samples, and alpha is a normalized penalty term for controlling the reduction of weight;
step (3) c: building the sparse self-encoder
(1) Solving for the average activation value ε̂_j of the hidden-layer units
A sparse self-encoder can be constructed by imposing sparsity on the hidden-layer units of the self-encoder; the sparse self-encoder expects the average activation value of each hidden-layer unit to be close to zero. Let [h_m]_j be the activation value of the j-th hidden unit for the input X_m; the average activation value of the j-th hidden unit over the entire training set is then:
ε̂_j = (1/M) Σ_{m=1}^{M} [h_m]_j
(2) Defining the penalty term
The sparsity constraint ε̂_j = ε of the sparse self-encoder is enforced, where ε is a predefined sparsity parameter; an additional penalty term is added to penalize ε̂_j for deviating significantly from ε. The penalty term is defined as the Kullback-Leibler (KL) divergence, as shown by the following equation:
Σ_{j=1}^{s} KL(ε || ε̂_j) = Σ_{j=1}^{s} [ ε·log(ε/ε̂_j) + (1−ε)·log((1−ε)/(1−ε̂_j)) ]
where ε̂ = (ε̂_1, …, ε̂_s) is the average activation vector of the hidden units, s is the number of hidden units, and KL(ε || ε̂_j) is a standard function for measuring the difference between two distributions; when ε̂_j = ε, KL(ε || ε̂_j) reaches its minimum value of 0, and it increases as ε̂_j deviates from ε, so minimizing this penalty term drives ε̂_j close to ε;
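The sparsity penalty of step (3) c can be computed as in the following Python sketch; H is assumed to hold the hidden activations of all M training samples, the default ε is illustrative, and the clipping is only for numerical safety.

    import numpy as np

    def kl_sparsity_penalty(H, eps=0.05):
        # H: (M, s) hidden activations; eps: predefined sparsity parameter
        eps_hat = np.clip(H.mean(axis=0), 1e-8, 1.0 - 1e-8)  # average activation of each hidden unit
        return np.sum(eps * np.log(eps / eps_hat)
                      + (1.0 - eps) * np.log((1.0 - eps) / (1.0 - eps_hat)))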
(3) Defining the sparse cost function η_SAE(W, b)
The training objective of the sparse self-encoder is to minimize both the average reconstruction error η_AE(W, b) and the sparsity penalty term Σ_{j=1}^{s} KL(ε || ε̂_j); thus, the sparse cost function η_SAE(W, b) is defined as:
η_SAE(W, b) = η_AE(W, b) + β Σ_{j=1}^{s} KL(ε || ε̂_j)
where β is the weight controlling the sparsity penalty term; since ε̂ represents the average activation of the hidden units, and the activation of the hidden units depends on the parameters {W, b}, the penalty term also depends on {W, b};
step (3) d: establishing the cost function η_FFSAE(W, b) of the non-negative constraint sparse self-encoder FFSAE
A non-negative-constrained self-encoder FFSAE is proposed by modifying the sparse cost function to η_FFSAE(W, b), which adds a non-negativity constraint on the weights [the formula is given in figure FDA0003937225320000061, with the constraint term defined in figure FDA0003937225320000062];
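Because the exact definition of η_FFSAE appears only in the patent figures, the following Python sketch uses one common reading of a non-negativity constraint: the quadratic penalty is applied only to negative entries of the encoder and decoder weights, on top of the reconstruction and KL terms (reusing kl_sparsity_penalty from the sketch after step (3) c). This specific penalty is an assumption, not the patent's formula.

    import numpy as np

    def ffsae_cost(X, X_hat, H, W_E, W_D, alpha=1e-3, beta=3.0, eps=0.05):
        # X, X_hat: (M, d) inputs and reconstructions; H: (M, s) hidden activations
        M = X.shape[0]
        recon = np.sum((X_hat - X) ** 2) / M               # average reconstruction error
        neg_w = np.concatenate([W_E.ravel(), W_D.ravel()])
        neg_penalty = np.sum(np.minimum(neg_w, 0.0) ** 2)  # assumed penalty on negative weights only
        return recon + alpha * neg_penalty + beta * kl_sparsity_penalty(H, eps)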
step (3) e: updating W_E, W_D, b_E and b_D with a gradient descent method
First, the parameters W_E, W_D, b_E and b_D are initialized to random values close to zero; a gradient descent optimization algorithm is then applied for training, and the parameters are updated in each iteration according to the following formulas:
W ← W − λ · ∂η_FFSAE(W, b)/∂W
b ← b − λ · ∂η_FFSAE(W, b)/∂b
wherein λ > 0 is the learning rate;
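A minimal sketch of the parameter update in step (3) e; the gradients are assumed to be computed elsewhere (for example by back-propagation, which is not shown), and the default learning rate is illustrative.

    def gradient_step(params, grads, lam=0.01):
        # params, grads: dicts with matching keys such as "W_E", "b_E", "W_D", "b_D"
        return {name: value - lam * grads[name] for name, value in params.items()}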
step (3) f: reconstructing the input signal with the decoder
The decoder parameters θ_D = {W_D, b_D} are used to recover the hidden features in the reverse direction and thereby reconstruct the input signal; specifically, the decoder D(·): R^s → R^d recovers the hidden feature vector h_m into the reconstructed vector X̂_m with a structure similar to the input, implemented by the following formula:
X̂_m = D(h_m, θ_D) = sigm(W_D h_m + b_D)
where θ_D denotes the decoder parameters composed of the weight matrix W_D and the bias vector b_D.
6. The method as claimed in claim 1, wherein in step (4), when determining the defect type with the defect detection map, a performance evaluation index is defined; the structural similarity index SSI is used as the evaluation index to measure the degree to which the structural information of one image is degraded relative to that of another image, and it is specifically calculated as follows:
SSI(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))
where μ_x, μ_y and σ_x^2, σ_y^2 are the means and variances of the pixel intensities of the two images, σ_xy is their covariance, and c_1, c_2 are constants that prevent the denominator from being zero.
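The SSI of claim 6 can be computed globally over two images as in the following Python sketch; c_1 and c_2 are small illustrative constants, since the claim does not give their values.

    import numpy as np

    def ssi(x, y, c1=1e-4, c2=9e-4):
        # x, y: two images as float arrays of the same shape
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov_xy = ((x - mu_x) * (y - mu_y)).mean()
        return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))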
CN202211408638.XA 2022-11-10 2022-11-10 Electronic circuit defect detection method based on non-negative constraint sparse self-encoder Pending CN115829942A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211408638.XA CN115829942A (en) 2022-11-10 2022-11-10 Electronic circuit defect detection method based on non-negative constraint sparse self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211408638.XA CN115829942A (en) 2022-11-10 2022-11-10 Electronic circuit defect detection method based on non-negative constraint sparse self-encoder

Publications (1)

Publication Number Publication Date
CN115829942A true CN115829942A (en) 2023-03-21

Family

ID=85527648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211408638.XA Pending CN115829942A (en) 2022-11-10 2022-11-10 Electronic circuit defect detection method based on non-negative constraint sparse self-encoder

Country Status (1)

Country Link
CN (1) CN115829942A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117474836A (en) * 2023-09-27 2024-01-30 深圳市长盈精密技术股份有限公司 Deformation defect detection method and device
CN117853453A (en) * 2024-01-10 2024-04-09 苏州矽行半导体技术有限公司 Defect filtering method based on gradient lifting tree

Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN106529447B (en) Method for identifying face of thumbnail
Yin et al. Hot region selection based on selective search and modified fuzzy C-means in remote sensing images
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN106599854B (en) Automatic facial expression recognition method based on multi-feature fusion
CN111553929A (en) Mobile phone screen defect segmentation method, device and equipment based on converged network
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN115829942A (en) Electronic circuit defect detection method based on non-negative constraint sparse self-encoder
CN111257341A (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN115841447A (en) Detection method for surface defects of magnetic shoe
CN111275686A (en) Method and device for generating medical image data for artificial neural network training
Lin et al. Determination of the varieties of rice kernels based on machine vision and deep learning technology
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN111814852A (en) Image detection method, image detection device, electronic equipment and computer-readable storage medium
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN115861226A (en) Method for intelligently identifying surface defects by using deep neural network based on characteristic value gradient change
CN111145145A (en) Image surface defect detection method based on MobileNet
CN117036243A (en) Method, device, equipment and storage medium for detecting surface defects of shaving board
CN115239672A (en) Defect detection method and device, equipment and storage medium
CN109919150A (en) A kind of non-division recognition sequence method and system of 3D pressed characters
CN117292117A (en) Small target detection method based on attention mechanism
CN115546171A (en) Shadow detection method and device based on attention shadow boundary and feature correction
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN110910497A (en) Method and system for realizing augmented reality map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination