CN115482452A - PCB welding spot identification method based on deep convolutional neural network - Google Patents

PCB welding spot identification method based on deep convolutional neural network

Info

Publication number
CN115482452A
Authority
CN
China
Prior art keywords
image
brightness
welding
convolution
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211156120.1A
Other languages
Chinese (zh)
Inventor
周昌军
张志聪
魏子麒
朱东林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN202211156120.1A priority Critical patent/CN115482452A/en
Publication of CN115482452A publication Critical patent/CN115482452A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06 Recognition of objects for industrial automation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a PCB welding spot identification method based on a deep convolutional neural network, which specifically comprises the following steps: S1: performing a threshold segmentation operation based on brightness balance on an input image; S2: cropping the segmented image into individual welding spots based on morphological operations; S3: performing data enhancement on the segmented welding spot images, and normalizing; S4: feeding the enhanced data into multi-path convolution and pooling operations to extract multi-scale convolution features, then reducing the dimension of the multi-scale features through GAP (Global Average Pooling), and finally outputting the corresponding class of each welding spot through a classifier; S5: displaying the corresponding category of each welding spot in the original image. The invention can identify and classify the welding spots of a PCB solder board, is fast and accurate, and is helpful for automatic evaluation in quality inspection equipment such as PCB welding spot detection and PCB defect detection.

Description

PCB welding spot identification method based on deep convolutional neural network
Technical Field
The invention relates to the technical field of deep learning and pattern recognition, in particular to a PCB welding spot recognition method based on a deep convolutional neural network.
Background
With the rapid development of science and technology, computers have become more and more powerful and can handle increasingly complex computing tasks. Over the past decades, computers have successfully replaced human labor in many areas, completing tasks automatically and even intelligently. Compared with manual work, computer automation has the advantages of speed and accuracy. With the rapid development of artificial intelligence, not only can computers become intelligent, but industrial production can as well.
At present, PCB assembly technology is developing rapidly toward higher precision, making the traditional manual visual inspection method increasingly difficult: it is easily influenced by subjective factors, causing false or missed inspections, and its efficiency is low.
Disclosure of Invention
The invention discloses a PCB welding spot identification method based on a deep convolutional neural network, aiming to solve the technical problems described in the background: as PCB assembly technology develops toward higher precision, the traditional manual visual inspection method becomes increasingly difficult, is easily influenced by subjective factors causing false or missed inspections, and has low efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
a PCB welding spot identification method based on a deep convolutional neural network specifically comprises the following steps:
s1: performing threshold segmentation operation based on brightness balance on an input image;
s2: cropping the segmented image into individual welding spots based on morphological operations;
s3: performing data enhancement on the segmented welding spot image, and normalizing;
s4: inputting the enhanced data into multi-path convolution and pooling operations and extracting multi-scale convolution features; then reducing the dimension of the multi-scale features through GAP (Global Average Pooling), and finally outputting the corresponding category of the welding spots through a classifier;
s5: and displaying the corresponding category of each welding point in the original image.
In a preferred embodiment, the step S1 specifically includes the following steps:
s11: firstly, loading the image and solving the global average brightness of the image;
s12: dividing an image into subblocks with the same size, scanning each subblock of the image to obtain the average brightness of the subblock, obtaining an average brightness matrix of the subblocks according to the distribution of each small subblock, and subtracting the global average brightness from each value in the brightness matrix of the subblocks to obtain a brightness difference matrix of the subblocks;
s13: expanding the luminance difference matrix of the sub-blocks to be the same as the size of the original image through interpolation operation to obtain a full-image luminance difference matrix; subtracting the corresponding value in the full-image brightness difference matrix from each pixel brightness value of the original image, so that the area with high image brightness is attenuated at the same time, and the area with low brightness is enhanced;
s14: adjusting the brightness of each sub-block pixel according to the lowest and highest brightness in the original image so that it conforms to the overall brightness range, segmenting the image using a common threshold selection method, and finally outputting a brightness-balanced PCB solder board image; in step S1, the original image is divided into 32x32 image blocks to obtain the sub-block brightness matrix, which is then expanded to the same size as the original image by the bicubic interpolation method.
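The steps S11 to S14 above can be sketched in code as follows (a minimal NumPy sketch under stated assumptions: the image sides are multiples of the block size, and nearest-neighbour expansion via `np.repeat` stands in for the bicubic interpolation named in the text):

```python
import numpy as np

def equalize_brightness(img, block=32):
    """Steps S11-S13: subtract per-block brightness deviation from the image.

    `img` is a 2-D grayscale array whose sides are multiples of `block`.
    Nearest-neighbour expansion stands in for bicubic interpolation here.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    global_mean = img.mean()                  # S11: global average brightness
    # S12: sub-block average-brightness matrix and its difference to the global mean
    block_means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    diff = block_means - global_mean
    # S13: expand the difference matrix to full image size and subtract it,
    # attenuating bright regions and boosting dark ones
    diff_full = np.repeat(np.repeat(diff, block, axis=0), block, axis=1)
    return img - diff_full
```

The equalized output can then be rescaled into the original brightness range and segmented with a common threshold selection method, as step S14 describes.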
In a preferred embodiment, the step S2 specifically includes the following steps:
s21: cropping the segmented images one by one using OpenCV image morphological operations;
s22: converting the RGB three-channel image into HSV space using OpenCV, performing morphological operations on the saturation (S) axis of the HSV space, and determining the positions of candidate welding spots in the original image by means of erosion, dilation, binarization and opening operations to obtain image Img1;
s23: determining the noise position which is not a welding point in a welding plate by using morphological operation on a gray-scale image of an original image to obtain an image Img2;
s24: removing noise positions of the two images obtained above by using mask operation in OpenCV, and determining positions of all welding points to obtain an image Img3;
s25: determining the positions of welding points in the original image and Img3 by using mask operation, and intercepting the image by using a minimum rectangular frame corresponding to the positions of the welding points to obtain an RGB color image corresponding to each welding point;
In step S2, the threshold for Otsu binarization is obtained by the between-class variance method.
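The between-class variance (Otsu) threshold mentioned here can be computed as in the following sketch (plain NumPy, 8-bit grayscale input assumed):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0      # mean of the dark class
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1 # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

In practice this is what OpenCV's `THRESH_OTSU` flag computes internally.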
In a preferred scheme, in step S3, data enhancement is applied to the cropped color images of the welding spots. Based on the translation-invariance property of images, data enhancement is performed by rotation, random cropping and color-gamut transformation, extending the data volume to 3 times the original and improving the generalization capability and robustness of the trained model; the data set is then normalized to obtain uniformly distributed data.
In a preferred embodiment, step S4 scales the normalized image data to a fixed size and inputs it into the multilayer convolutional neural network, and specifically includes the following steps:
s41: extracting features of the input image through one convolution and average pooling to obtain a C1 feature extraction layer;
s42: performing feature extraction on the feature map output by S41 through two different convolutions and one pooling operation; convolution is applied twice in succession, the first raising the dimension and the second extracting features, giving the C2 feature extraction layer;
s43: combining the three feature maps output by S42, performing feature fusion with one convolution layer, and extracting features through two further rounds of convolution and pooling to obtain the C3 and C4 feature extraction layers;
s44: performing a GAP (Global Average Pooling) dimension-reduction operation on the feature map output by step S43 to output lower-dimensional convolution features; in step S5, the convolution features output in step S4 are classified by a classifier into four categories (normal, insufficient tin, excess tin, and missing solder), and the corresponding areas in the original image are labeled with rectangular frames of different colors according to the previously recorded position information;
the step S42 uses three paths of non-convolution kernels, and is characterized in that: firstly, dimension is raised through three (1 x 1) × 32 convolution, and (5 x 5) × 32 convolution layers, (3 x 3) × 32 convolution layers, (2 x 2) × 32 Average Pooling layers are respectively carried out on three data paths, in the step S43, the Pooling operation of the C4 feature extraction layer uses the maximum Pooling, the convolution kernel size is (7 x 7) × 256, and in the step S44, the convolution kernel size of the GAP (Global Average Pooling) operation is (2 x 2) × 512.
From the above, a PCB welding spot identification method based on a deep convolutional neural network specifically comprises the following steps: S1: performing a threshold segmentation operation based on brightness balance on an input image; S2: cropping the segmented image into individual welding spots based on morphological operations; S3: performing data enhancement on the segmented welding spot images, and normalizing; S4: feeding the enhanced data into multi-path convolution and pooling operations to extract multi-scale convolution features, then reducing the dimension of the multi-scale features through GAP (Global Average Pooling), and finally outputting the corresponding category of each welding spot through a classifier; S5: displaying the corresponding category of each welding spot in the original image. The PCB welding spot identification method based on the deep convolutional neural network has the following technical effects:
(1) In terms of image segmentation, the illumination influence in the image is corrected using an image threshold segmentation technique based on brightness balance, so that input images captured under different illumination intensities tend to be consistent.
(2) In terms of model structure, three different convolution modes are adopted for feature extraction and down-sampling, and the multi-scale feature information is concatenated for further feature extraction, which enhances network generalization and improves model performance.
(3) From the perspective of product implementation and application, the method can be applied to PCB inspection and teaching scoring: the solder board can be scored, or judged against the production standard, according to the proportion of qualified welding spots on the PCB, greatly reducing labor cost.
(4) Regarding the accuracy of welding spot identification, the deep convolutional neural network has powerful feature extraction and feature learning capability, avoiding the limitations of traditional methods and the complexity of manually designing image features.
Drawings
Fig. 1 is a diagram of an example of each type of solder joint of a PCB solder joint identification method based on a deep convolutional neural network according to the present invention.
FIG. 2 is a graph before and after the PCB welding spot image brightness equalization segmentation of the PCB welding spot recognition method based on the deep convolutional neural network.
Fig. 3 is a model diagram of a deep convolutional neural network of a PCB welding spot identification method based on the deep convolutional neural network according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to fig. 1-3, a method for identifying a solder joint of a PCB based on a deep convolutional neural network specifically includes the following steps:
the method comprises the following steps of 1, performing segmentation operation on an input image based on a threshold value of brightness balance, namely loading the image and solving the global average brightness of the image; dividing the image into subblocks (32 x32 or 64x 64) with the same size, scanning each small block of the image to obtain the average brightness of the block, obtaining a subblock average brightness matrix according to the distribution of each small subblock, and subtracting the global average brightness from each value in the subblock brightness matrix to obtain a subblock brightness difference matrix; expanding the luminance difference matrix of the sub-blocks to be the same as the size of the original image through interpolation operation to obtain a full-image luminance difference matrix; subtracting the corresponding value in the full-image brightness difference matrix from each pixel brightness value of the original image, so that the area with high image brightness is attenuated at the same time, and the area with low brightness is enhanced; adjusting the brightness of each sub-block pixel according to the lowest brightness and the highest brightness in the original image to enable the brightness to be in accordance with the whole brightness range, segmenting the image by using a common threshold selection method, and finally outputting the image to obtain a PCB welding plate image with balanced brightness;
Solder joints occupy only a small portion of a PCB image, while the circuit-board background occupies most of the field of view; the solder joints appear as white specular highlights in the image. If the photographed image is recognized directly, uneven or strong lighting introduces other noise and causes false or missed detections. The image can therefore first be brightness-balanced, keeping the lighting across the whole image as uniform as possible without large gaps, so that regions resembling the white specular highlights of solder joints can be removed.
The algorithm principle is as follows: an image of size N x N is input, with brightness values quantized to the range (0, ..., L). The average brightness is then:

Lum_av = (1/N^2) * Σ_{i=1}^{N} Σ_{j=1}^{N} p(i, j)
where p (i, j) is the luminance value corresponding to each pixel of the image at coordinates (i, j).
If a sub-block of size n x n is taken from the image, the average brightness of the corresponding sub-block is:

Lum_av_bn = (1/n^2) * Σ_{i=1}^{n} Σ_{j=1}^{n} p(i, j)
the difference between the luminance sub-block mean and the global luminance mean is Δ lum =Lum av_bn -Lum av
From the above, Δ_lum of a high-brightness sub-block in the image is greater than 0, and Δ_lum of a low-brightness sub-block is less than 0. The brightness of each sub-block needs to be adjusted to equalize the image, but the same adjustment value cannot simply be added to or subtracted from every sub-block if smoothness is to be preserved. Instead, the matrix of per-sub-block Δ_lum values is expanded to the size of the original image by bicubic interpolation. In bicubic interpolation (also called cubic convolution interpolation), the value of the function f at a point (x, y) is obtained as a weighted average of the sixteen nearest samples on the grid, using a cubic interpolation polynomial in each of the two directions:

f(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} f(x_i, y_j) * W(x - x_i) * W(y - y_j)

where the weight W(x) is expressed as:

W(x) = (a + 2)|x|^3 - (a + 3)|x|^2 + 1, for |x| <= 1
W(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a, for 1 < |x| < 2
W(x) = 0, otherwise

with the parameter a commonly taken as -0.5.
the extended delta is then subtracted from the pixel values of the original image lum The brightness balance of the whole image can be realized.
Step 2: crop the segmented image welding spot by welding spot using OpenCV image morphological operations. Convert the RGB three-channel image into HSV space using OpenCV, then perform morphological operations on the saturation (S) axis of the HSV space, and determine the positions of candidate welding spots in the original image by means of erosion, dilation, binarization and the opening operation, obtaining image Img1. In addition, determine the positions of noise points that are not welding spots on the solder board by applying morphological operations to the grayscale version of the original image, obtaining image Img2. Remove the noise positions from the two images obtained above using the mask operation in OpenCV and determine the positions of all welding spots, obtaining image Img3. Determine the positions of the welding spots using a mask operation on the original image and Img3, and crop the image with the minimum rectangular frame corresponding to each welding spot position to obtain an RGB color image of each welding spot;
Perform color-gamut conversion on the image output by the brightness-balanced segmentation: one HSV conversion determines the positions of candidate welding spots using the S axis, one grayscale conversion determines the positions of useless points, the useless points are removed by a mask operation, the positions of all welding spots are determined, pictures of each individual welding spot are obtained by image cropping, and the corresponding welding spot position information is stored. The specific execution steps are as follows:
a) First convert the RGB three-channel image into the HSV color space. The calculation is as follows: R, G and B are first normalized to the [0, 1] space,

R' = R/255, G' = G/255, B' = B/255

The maximum Cmax and minimum Cmin of the three channels, and their difference, are then obtained:

Cmin = min(R', G', B'); Cmax = max(R', G', B'); Δ = Cmax - Cmin

The Hue value is calculated as:

H = 0°, if Δ = 0
H = 60° x (((G' - B')/Δ) mod 6), if Cmax = R'
H = 60° x ((B' - R')/Δ + 2), if Cmax = G'
H = 60° x ((R' - G')/Δ + 4), if Cmax = B'

The Saturation is calculated as:

S = 0 if Cmax = 0, otherwise S = Δ/Cmax

The Value, i.e. the brightness, is calculated as: V = Cmax
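The conversion in a) can be sketched for a single 8-bit pixel as follows (H returned in degrees):

```python
def rgb_to_hsv(r, g, b):
    """Convert one 8-bit RGB pixel to (H in degrees, S, V) per the formulas above."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0       # normalize to [0, 1]
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)
    delta = cmax - cmin
    if delta == 0:
        h = 0.0                                         # hue undefined on gray
    elif cmax == rp:
        h = 60 * (((gp - bp) / delta) % 6)
    elif cmax == gp:
        h = 60 * ((bp - rp) / delta + 2)
    else:
        h = 60 * ((rp - gp) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax              # saturation
    v = cmax                                            # value (brightness)
    return h, s, v
```

OpenCV's `cv2.cvtColor` performs the same conversion on whole images (note that OpenCV stores H as H/2 to fit 8 bits).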
b) The RGB three-channel image is converted into a single-channel gray scale image, and the calculation formula is as follows:
Gray(i, j) = 0.299 * R(i, j) + 0.587 * G(i, j) + 0.114 * B(i, j)
wherein (i, j) is the coordinate point of the corresponding pixel position in the image, and Gray is the pixel value of the corresponding Gray level image.
c) Morphological operations such as erosion, dilation, and the opening and closing operations are performed on the color-gamut-converted image to determine the positions of the welding spots and remove noise points. Small noise points are first removed by erosion, which suppresses small non-welding-spot noise via a local minimum; the erosion function is expressed as:

E(F, K) = F ⊖ K = min{F(m - a, n - b) - K(a, b)}

where F represents the original image, K represents the structuring element of the erosion operation, ⊖ denotes the erosion operation, and E is the resulting grayscale image pixel value after erosion.
Then, dilation is used to restore the size of normal welding spots and to increase the difference between normal welding spots and noise points; the dilation function is expressed as:

D(F, K) = F ⊕ K = max{F(m - a, n - b) + K(a, b)}

where F represents the original image, K represents the structuring element of the dilation operation, ⊕ denotes the dilation operation, and D is the resulting grayscale image pixel value after dilation.
Finally, the opening operation (erosion followed by dilation) is used to eliminate noise other than correct welding spots, and the closing operation (dilation followed by erosion) is used to fill gaps in welding spots.
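The erosion, dilation and opening operations above can be mirrored in a small NumPy sketch (a flat, all-zero structuring element, so the min{F - K} and max{F + K} formulas reduce to neighbourhood minimum and maximum; in practice the patent uses OpenCV's built-in morphology functions):

```python
import numpy as np

def erode(f, k=3):
    """Grayscale erosion with a flat k x k structuring element (K = 0),
    i.e. E(m, n) = min over the neighbourhood of F."""
    h, w = f.shape
    pad = k // 2
    fp = np.pad(f, pad, mode="edge")
    out = np.empty_like(f)
    for m in range(h):
        for n in range(w):
            out[m, n] = fp[m:m + k, n:n + k].min()
    return out

def dilate(f, k=3):
    """Grayscale dilation: D(m, n) = max over the neighbourhood (K = 0)."""
    h, w = f.shape
    pad = k // 2
    fp = np.pad(f, pad, mode="edge")
    out = np.empty_like(f)
    for m in range(h):
        for n in range(w):
            out[m, n] = fp[m:m + k, n:n + k].max()
    return out

def opening(f, k=3):
    """Opening = erosion then dilation; removes small bright noise points."""
    return dilate(erode(f, k), k)
```

Opening removes isolated bright pixels (noise) while solid bright regions (welding spots) survive, which is exactly the filtering role it plays here.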
After the above operations are completed, an HSV-gamut S-axis grayscale map of candidate welding spots and a binary grayscale map of useless points are obtained. Combining the two through OpenCV's mask operation yields the correct welding spot information of the image; a contour detection algorithm then obtains the position information of the correct welding spots from this image, and the position information corresponding to the remaining welding spots is extracted and stored.
And segmenting the original image according to the minimum rectangular frame corresponding to the position information to obtain color images of the welding points one by one.
Step 3: expand the individual welding spot color images obtained by segmentation using data enhancement. Owing to the translation invariance of images, data enhancement is performed by rotation, random cropping, color-gamut transformation and similar methods, expanding the data volume to 3 times the original and improving the generalization capability and robustness of the trained model. The data set is then normalized to obtain uniformly distributed data;
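The three-fold expansion of step 3 can be sketched as follows (the two extra transforms here, a 90-degree rotation and a horizontal flip, are illustrative stand-ins for the rotation, random-cropping and color-gamut methods named in the text):

```python
import numpy as np

def augment_3x(images):
    """Return the originals plus two transformed copies each (3x the data)."""
    out = []
    for img in images:
        out.append(img)             # original
        out.append(np.rot90(img))   # rotation; the class label is unchanged
        out.append(np.fliplr(img))  # horizontal flip as a second cheap transform
    return out

def normalize(img):
    """Scale an 8-bit image into [0, 1] for network input."""
    return img.astype(np.float32) / 255.0
```

Label-preserving transforms like these rely on the translation/rotation invariance of the welding spot classes mentioned above.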
and 4, step 4: all solder joint categories are classified into four categories. The four conditions are normal, less tin, more tin and missing welding. The image size is scaled to 112x112, a deep convolutional neural network training model is input, wherein C1 is a first feature extraction layer, convolution operation is firstly carried out on input data, the convolution kernel size is (3 x 3) × 16,3x3 is the convolution kernel size, 16 is the number of convolution kernels, the step size is 1, and the feature map size output after convolution calculation is (112 x 112) × 16. After convolution, one downsampling operation is performed using average pooling, the average pooled convolution kernel size is (2 x 2) × 16, the step size is 2, and the signature size output after pooling calculation is (56 x 56) × 16.
C2 is the second feature extraction layer, where the image is processed by three different operations. First, (1 x 1) x 32 convolutions raise the dimension of the feature map in all three paths; features are then extracted by three different operations. The rightmost path extracts features with a (5 x 5) convolution, kernel size (5 x 5) x 32, stride 1; the output feature map is (56 x 56) x 32. The middle path uses a (3 x 3) convolution, kernel size (3 x 3) x 32, stride 1; the output is (56 x 56) x 32. The leftmost path uses a (2 x 2) x 32 average pooling operation with stride 1; the output after pooling is (56 x 56) x 32. Max-pooling down-sampling is then applied to each of the three outputs, with pooling size (2 x 2) x 32 and stride 2. Concatenating (Concat) the three pooled outputs gives a fused feature map of size (28 x 28) x 96. Multi-scale feature extraction is performed on the fused feature map with one convolution operation, kernel size (3 x 3) x 128, stride 1; the output feature map is (28 x 28) x 128.
C3 is the third feature extraction layer. Features are first extracted using a convolution with kernel size (3 x 3) x 256 and stride 1; the output feature map is (28 x 28) x 256. Down-sampling is then performed with an average pooling operation, kernel size (2 x 2) x 256, stride 2; the output after pooling is (14 x 14) x 256.
C4 is the fourth feature extraction layer. Features are first extracted using a convolution with kernel size (3 x 3) x 512 and stride 1; the output feature map is (14 x 14) x 512. Down-sampling is then performed with a max pooling operation, kernel size (7 x 7) x 512, stride 7; the output after pooling is (2 x 2) x 512.
The GAP layer mainly serves to reduce dimensionality. It replaces the fully connected layer of the original convolutional neural network structure, greatly reducing the overall parameter count. Using a kernel of size (2 x 2) x 512, it obtains one value per feature map and outputs a one-dimensional vector of length 512; a fully connected layer then maps this 512-length vector to the 4 corresponding categories.
and 5, classifying the characteristic vectors output in the step 4 by a Softmax function classifier, classifying according to four categories of normal, less tin, more tin and missing welding, marking corresponding category welding points by using rectangular frames with different colors according to the position information recorded in the step 2, and finally obtaining all PCB welding point classification results.
In a preferred embodiment, an example of each type of solder joint is shown in FIG. 1.
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art according to the technical solutions and inventive concept of the present invention shall be considered to fall within the scope of the present invention.

Claims (10)

1. A PCB welding spot identification method based on a deep convolution neural network is characterized by comprising the following steps:
s1: performing threshold segmentation operation based on brightness balance on an input image;
s2: cropping the segmented image into individual welding spots based on morphological operations;
s3: performing data enhancement on the segmented welding spot image, and normalizing;
s4: inputting the enhanced data into multi-path convolution and pooling operations and extracting multi-scale convolution features; then reducing the dimension of the multi-scale features through GAP (Global Average Pooling), and finally outputting the corresponding category of the welding spots through a classifier;
s5: and displaying the corresponding category of each welding point in the original image.
2. The PCB solder joint identification method based on the deep convolutional neural network according to claim 1, wherein step S1 specifically comprises the following steps:
S11: loading the image and computing its global average brightness;
S12: dividing the image into sub-blocks of equal size, scanning each sub-block to obtain its average brightness, arranging these values by sub-block position into a sub-block average brightness matrix, and subtracting the global average brightness from each value of the matrix to obtain a sub-block brightness difference matrix;
S13: expanding the sub-block brightness difference matrix to the size of the original image by interpolation to obtain a full-image brightness difference matrix, and subtracting the corresponding value of the full-image brightness difference matrix from the brightness of each pixel of the original image, so that bright regions of the image are attenuated while dark regions are enhanced;
S14: rescaling the pixel brightness according to the minimum and maximum brightness of the original image so that it spans the full brightness range, segmenting the image with a common threshold selection method, and finally outputting a brightness-balanced image of the PCB pads.
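The steps S11 to S14 above can be sketched as follows (a minimal NumPy version; the nearest-neighbour expansion of the difference matrix is a simplification of the bicubic interpolation named in claim 7, and the image sides are assumed to be multiples of the block size):

```python
import numpy as np

def balance_brightness(gray, block=32):
    """Brightness-balance a grayscale image whose sides are multiples of `block`."""
    h, w = gray.shape
    global_mean = gray.mean()                                 # S11
    # S12: per-block average brightness and its difference from the global mean
    block_mean = gray.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    diff = block_mean - global_mean
    # S13: expand the difference matrix to full size (nearest-neighbour stand-in
    # for bicubic interpolation) and subtract it pixel-wise
    balanced = gray.astype(np.float64) - np.kron(diff, np.ones((block, block)))
    # S14: rescale back to the original brightness range
    lo, hi = float(gray.min()), float(gray.max())
    span = balanced.max() - balanced.min()
    return (balanced - balanced.min()) / (span + 1e-9) * (hi - lo) + lo

# A dark left half and a bright right half are pulled toward a common level
img = np.hstack([np.full((64, 64), 60.0), np.full((64, 64), 190.0)])
out = balance_brightness(img)
```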
3. The PCB solder joint identification method based on the deep convolutional neural network according to claim 1, wherein step S2 specifically comprises the following steps:
S21: cropping the segmented image joint by joint using OpenCV image morphological operations;
S22: converting the RGB three-channel image into HSV space with OpenCV, performing morphological operations on the saturation (S) channel, and determining the positions of candidate solder joints in the original image by erosion, dilation, binarization, and opening operations, to obtain image Img1;
S23: determining the positions of non-joint noise on the board by applying morphological operations to the grayscale version of the original image, to obtain image Img2;
S24: removing the noise positions from the two images obtained above by a mask operation in OpenCV, thereby determining the positions of all solder joints, to obtain image Img3;
S25: locating each solder joint in the original image from Img3 by a mask operation, and cropping the image with the minimal bounding rectangle of each solder joint position, to obtain an RGB color image of each solder joint.
4. The PCB solder joint identification method based on the deep convolutional neural network according to claim 1, wherein in step S3 the cropped solder joint color images are augmented, using rotation, random cropping, and color gamut transformation based on the translation invariance of the images, so that the amount of data is expanded to 3 times the original; the data set is then normalized to obtain uniformly distributed data.
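One possible reading of this 3x expansion, sketched in NumPy (the crop size, gain range, and edge-padding strategy are assumptions, not parameters from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return three variants of a (32, 32, 3) patch: rotated, cropped, color-shifted."""
    rotated = np.rot90(img).copy()                       # rotation
    y, x = rng.integers(0, 5, size=2)                    # random 28x28 crop,
    cropped = np.pad(img[y:y + 28, x:x + 28],            # padded back to 32x32
                     ((2, 2), (2, 2), (0, 0)), mode="edge")
    gain = rng.uniform(0.8, 1.2, size=3)                 # per-channel color shift
    shifted = np.clip(img * gain, 0, 255)
    return [rotated, cropped, shifted]

def normalize(images):
    """Normalize the augmented set to zero mean and unit variance."""
    batch = np.asarray(images, dtype=np.float64)
    return (batch - batch.mean()) / (batch.std() + 1e-9)

patch = (np.arange(32 * 32 * 3).reshape(32, 32, 3) % 251).astype(np.float64)
batch = normalize(augment(patch))
```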
5. The PCB solder joint identification method based on the deep convolutional neural network according to claim 1, wherein in step S4 the normalized image data are scaled to a fixed size and input to a multilayer convolutional neural network, specifically comprising the following steps:
S41: extracting features from the input image by one convolution and one average pooling, forming the C1 feature extraction layer;
S42: extracting features from the feature map output by S41 through two different convolutions and one pooling, each path consisting of two successive operations, a dimension-raising operation first and feature extraction second, forming the C2 feature extraction layer;
S43: merging the three feature maps output by S42, fusing them with one convolution layer, and extracting features through two further convolution-and-pooling stages, forming the C3 and C4 feature extraction layers;
S44: applying a GAP operation to the feature map output by S43 to reduce its dimensionality, outputting lower-dimensional convolutional features.
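A sketch of the S41 to S44 network in PyTorch (the channel counts loosely follow claims 9 and 10; the activations, padding, and exact layer widths are assumptions, and the 2x2 pooling path is zero-padded so the three branches can be concatenated):

```python
import torch
import torch.nn as nn

class SolderNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # S41 (C1): one convolution + average pooling
        self.c1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2))
        # S42 (C2): three parallel paths, each a dimension-raising 1x1 convolution
        # followed by a 5x5 convolution, a 3x3 convolution, or a 2x2 average pooling
        self.p5 = nn.Sequential(nn.Conv2d(32, 32, 1), nn.Conv2d(32, 32, 5, padding=2), nn.ReLU())
        self.p3 = nn.Sequential(nn.Conv2d(32, 32, 1), nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.pp = nn.Sequential(nn.Conv2d(32, 32, 1), nn.ZeroPad2d((0, 1, 0, 1)),
                                nn.AvgPool2d(2, stride=1))
        # S43 (C3, C4): fuse the three paths with one convolution, then two conv+pool stages
        self.fuse = nn.Conv2d(96, 64, 1)
        self.c3 = nn.Sequential(nn.Conv2d(64, 256, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2))
        self.c4 = nn.Sequential(nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # S44: global average pooling, then the classifier (softmax applied at inference)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):
        x = self.c1(x)
        x = self.fuse(torch.cat([self.p5(x), self.p3(x), self.pp(x)], dim=1))
        x = self.c4(self.c3(x))
        return self.fc(self.gap(x).flatten(1))   # logits for the four joint classes

logits = SolderNet()(torch.zeros(2, 3, 64, 64))
```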
6. The PCB solder joint identification method based on the deep convolutional neural network according to claim 5, wherein in step S5, the convolutional features output in step S4 are classified by the classifier into four categories (normal, insufficient solder, excess solder, and missing solder), and the corresponding regions are marked in the original image with rectangular boxes of different colors according to the previously recorded position information.
7. The PCB solder joint identification method based on the deep convolutional neural network according to claim 2, wherein in step S1 the original image is divided into 32x32 sub-blocks to obtain the sub-block brightness matrix, which is then expanded to the size of the original image by bicubic interpolation.
8. The PCB solder joint identification method based on the deep convolutional neural network according to claim 3, wherein in step S2 the threshold for binarization is obtained by Otsu's method, i.e., by the between-class variance criterion.
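Otsu's between-class-variance criterion can be sketched as follows (equivalent in effect to OpenCV's cv2.THRESH_OTSU flag; the bimodal test image is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximizing the between-class variance."""
    p = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p /= p.sum()
    mu_total = (np.arange(256) * p).sum()
    best_t, best_var = 0, -1.0
    w, mu = 0.0, 0.0
    for t in range(256):
        w += p[t]        # weight of the background class [0, t]
        mu += t * p[t]   # unnormalized mean of the background class
        if w < 1e-12 or 1.0 - w < 1e-12:
            continue
        # between-class variance: w0*w1*(mu0 - mu1)^2, rearranged
        var_between = (mu_total * w - mu) ** 2 / (w * (1.0 - w))
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal image: the threshold lands between the two modes
gray = np.array([[50] * 8 + [200] * 8] * 16, dtype=np.uint8)
t = otsu_threshold(gray)
```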
9. The PCB solder joint identification method based on the deep convolutional neural network according to claim 5, wherein step S42 uses three parallel paths with different kernels: the dimensionality is first raised by three (1x1)x32 convolutions, and the three paths are then processed by a (5x5)x32 convolution layer, a (3x3)x32 convolution layer, and a (2x2)x32 average pooling layer, respectively.
10. The PCB solder joint identification method based on the deep convolutional neural network according to claim 5, wherein in step S43 the pooling operation of the C4 feature extraction layer uses max pooling with a kernel size of (7x7)x256, and in step S44 the GAP operation uses a kernel size of (2x2)x512.
CN202211156120.1A 2022-09-22 2022-09-22 PCB welding spot identification method based on deep convolutional neural network Pending CN115482452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211156120.1A CN115482452A (en) 2022-09-22 2022-09-22 PCB welding spot identification method based on deep convolutional neural network


Publications (1)

Publication Number Publication Date
CN115482452A true CN115482452A (en) 2022-12-16

Family

ID=84392738


Country Status (1)

Country Link
CN (1) CN115482452A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination