CN109859207B - Defect detection method of high-density flexible substrate - Google Patents

Defect detection method of high-density flexible substrate

Info

Publication number
CN109859207B
CN109859207B · Application CN201910166760.2A
Authority
CN
China
Prior art keywords
network
neural network
convolutional neural
fics
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910166760.2A
Other languages
Chinese (zh)
Other versions
CN109859207A (en)
Inventor
罗家祥
吴冬冬
林宗沛
胡跃明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910166760.2A
Publication of CN109859207A
Application granted
Publication of CN109859207B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a defect detection method for a high-density flexible substrate. FICS images with appearance defects are collected, preprocessed, and unified to a standard size; the position and type of each defect are labeled, and the labeled images serve as training samples for a Faster R-CNN convolutional neural network model. The labeled samples are used as the model's input, with the position and type of each FICS defect as output, yielding a trained Faster R-CNN convolutional neural network model. An FICS image to be inspected is then input into the trained model, which outputs whether the image contains a defect and, if so, the defect's position and type. The invention achieves rapid localization and type recognition of appearance defects on high-density flexible substrates, overcoming the low speed and low accuracy of conventional defect detection methods.

Description

Defect detection method of high-density flexible substrate
Technical Field
The invention relates to the technical field of machine vision surface defect detection, in particular to a defect detection method of a high-density flexible substrate.
Background
A high-density flexible substrate (Flexible Integrated Circuit Substrate, FICS for short) is a high-density flexible printed circuit board that can serve as an IC package substrate. In high-density FICS production, appearance defects inevitably arise from limits in process-control accuracy. High-precision visual inspection, with rapid localization and type recognition of the various appearance defects of FICS, is therefore key to quality control in high-density FICS manufacturing.
Flexible-substrate manufacturers rely mainly on manual visual inspection to detect appearance defects in high-density FICS. Manual inspection is inefficient, wastes labor, and has a high false-detection rate, making detection quality hard to guarantee. Some researchers have proposed appearance-defect detection methods based on image features, but these are too slow to meet the practical requirements of FICS inspection; moreover, each such method detects only one specific defect type and cannot identify the various appearance defects on an FICS simultaneously.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a defect detection method of a high-density flexible substrate.
The invention adopts the following technical scheme:
a defect detection method of a high-density flexible substrate, comprising:
training a deep neural network model, which specifically comprises the following steps:
FICS images with appearance defects are collected, preprocessed, and unified to a standard size; the position and category of each defect are labeled, and the labeled images serve as training samples for a Faster R-CNN convolutional neural network model;
the labeled training samples are used as input to the Faster R-CNN convolutional neural network model, with the position and type of each FICS defect as output, yielding a trained Faster R-CNN convolutional neural network model.
the defect detection step specifically comprises the following steps:
inputting the FICS image to be inspected into the trained Faster R-CNN convolutional neural network model, outputting whether the image contains a defect, and, if so, outputting the defect's position and type.
The Faster R-CNN convolutional neural network model comprises:
a shared convolutional neural network, which takes the preprocessed FICS image to be inspected as input and outputs a feature map;
a parallel spatial transformation network, which computes sampling-point coordinates on the output feature map, evaluates them by bilinear interpolation, and outputs the corresponding target-point values of the feature map;
a candidate-frame generation network, which takes the target-point values of the feature map as input and outputs candidate frames;
and a region-of-interest pooling classification network, which takes the candidate frames as input and outputs the position and category of each defect.
The shared convolutional neural network is composed of 5 convolutional layers and 2 pooling layers.
The parallel spatial transformation network comprises:
a positioning network composed of three fully connected layers, which derives a transformation matrix θ from the feature map;
a pixel positioner, which uses the transformation matrix θ to locate each sampling point of the output feature map on the input feature map and then transforms the feature map;
and a sampler, which evaluates the obtained sampling-point coordinates by bilinear interpolation and outputs the target-point values on the feature map.
The pixel positioner includes a transformation of scaling, translation, and rotation.
The candidate-frame generation network uses a sliding window combined with an anchor mechanism to determine whether each sliding-window position contains a target.
The region of interest pooling classification network comprises a region of interest layer and two fully connected layers.
The loss function of the region-of-interest pooling classification network is:
L_hard = L_class(p, u) + δ·L_regre(t, v)
where L_class(p, u) is the classification log loss and L_regre(t, v) is the regression loss of the predicted-frame coordinates; p is the predicted probability that the predicted frame contains an object of its true category u, and t and v are, respectively, the vector of the predicted frame's 4 coordinates and the vector of the ground-truth position coordinates;
the 4 coordinates are (x, y, w, h), where x and y are the relative abscissa and ordinate of the candidate frame, and w and h its width and height; δ balances the classification loss against the coordinate regression loss;
during training, the 128 candidate frames with the largest loss are treated as hard samples and back-propagated with mini-batch stochastic gradient descent to update the network parameters.
The unified standard size of the invention is specifically 224 × 224 pixels.
The invention has the beneficial effects that:
(1) The visual detection method for appearance defects of high-density flexible substrates, based on an improved Faster R-CNN, can be applied to rapid inspection of appearance defects during flexible-substrate manufacturing; it replaces manual visual inspection, avoids the associated waste of human resources, and greatly improves working efficiency;
(2) The method automatically and rapidly detects different types of defects on a high-density flexible substrate and marks their positions and types, solving the problem of fast defect localization and detection at arbitrary positions, which traditional image methods find difficult;
(3) No image-feature analysis is needed: once the image to be inspected is fed into the trained neural network model, the position and type of each defect are output directly, overcoming the low speed of traditional feature-analysis-based image processing and its inability to detect multiple defect types simultaneously.
Drawings
FIG. 1 is a workflow diagram of the present invention;
FIG. 2 is a block diagram of the deep neural network model based on Faster R-CNN of the present invention;
FIG. 3 is a schematic diagram of the shared convolutional neural network of FIG. 2;
FIG. 4 is a schematic diagram of the parallel spatial transformation network of FIG. 2;
FIG. 5 is a schematic diagram of a candidate block generation network of FIG. 2;
FIG. 6 is a schematic diagram of the region of interest pooling classification network of FIG. 2;
fig. 7 (a) to 7 (f) show detection results on FICS images in the present invention, with short-circuit, open-circuit, scratch, pinhole, oxidation, and broken-hole defects, respectively.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
As shown in FIG. 1, the defect detection method of the high-density flexible substrate comprises two steps of deep neural network model training and defect detection.
The deep neural network model training step, which in this embodiment uses Faster R-CNN, is as follows:
s1: collecting a plurality of FICS images with different types of appearance defects;
s2: preprocessing the sizes of the collected images to be 224×224 standard sizes;
s3: marking the position and the category of the defect in each image, and taking the marked image as a training sample of a next-step master R-CNN convolutional neural network;
s4: and taking the marked training sample image as the input of the deep convolutional neural network model, taking the position and type information of the FICS defect as the model output, and training the neural network detection model based on the improved master R-CNN to obtain the deep neural network model for positioning the appearance defect of the FICS image and identifying the type.
The defect detection step comprises the following steps:
unifying all FICS images to be inspected to the 224 × 224 standard size;
inputting the images in sequence into the trained convolutional neural network model, outputting whether defects exist together with their positions and types, and saving the results to a database;
and detecting the next input image until the detection is finished.
Fig. 2 shows the deep convolutional neural network model based on the improved Faster R-CNN in this embodiment. It consists of four parts: the first is a shared convolutional neural network, the second a parallel spatial transformation network, the third a candidate-frame generation network, and the fourth a region-of-interest pooling classification network.
As shown in fig. 3, the first part is the shared convolutional neural network. It consists of 5 convolutional layers and 2 pooling layers; the input is the preprocessed 224 × 224 image, and the output is a feature map. The pooling layers use max pooling to reduce the number of parameter computations and to prevent overfitting.
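As a quick sanity check, the spatial size of the feature map can be traced through such a stack. The patent gives only the layer counts, so the kernel sizes below (3 × 3 same-padded convolutions, 2 × 2 stride-2 max pools after the first two convolutions) are illustrative assumptions, not the patented configuration:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard output-size formula for a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 224
for i in range(5):
    size = conv_out(size, kernel=3, stride=1, pad=1)  # 'same' conv keeps size
    if i < 2:
        size = conv_out(size, kernel=2, stride=2)      # 2x2 max pool halves it
print(size)  # 56: two halvings of 224
```

Under these assumptions the shared network would emit a 56 × 56 feature map; different kernel or stride choices would of course change the result.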
As shown in fig. 4, the second part is the parallel spatial transformation network. Each spatial transformation network comprises three parts: 1) a positioning network of 3 fully connected layers, whose input is a feature map I of width w, height h, and channel count c, and whose output is a transformation matrix θ; 2) a pixel positioner, which uses θ to locate each point of the output feature map on the input feature map; by specializing θ, different kinds of feature-map transformation are produced, the parallel spatial transformation network supporting scaling, translation, and rotation; 3) a sampler, which, given the sampling-point coordinates obtained in the previous step, computes their values by bilinear interpolation and so determines the value of each target point on the output feature map.
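A minimal NumPy sketch of the sampler's bilinear interpolation, together with an affine grid built from a 2 × 3 transformation matrix θ. The function names are our own, and the θ shown (a pure translation) is chosen only for illustration:

```python
import numpy as np

def bilinear_sample(feat, xs, ys):
    """Bilinearly interpolate a feature map `feat` (H x W) at float coords."""
    h, w = feat.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    dx, dy = xs - x0, ys - y0
    # Weighted sum of the four neighbouring pixels.
    return (feat[y0, x0] * (1 - dx) * (1 - dy)
            + feat[y0, x0 + 1] * dx * (1 - dy)
            + feat[y0 + 1, x0] * (1 - dx) * dy
            + feat[y0 + 1, x0 + 1] * dx * dy)

# Affine grid from a 2x3 transform theta (here: translate by +1 in x).
theta = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
feat = np.arange(16, dtype=float).reshape(4, 4)
ys_out, xs_out = np.mgrid[0:4, 0:4].astype(float)
coords = theta @ np.stack([xs_out.ravel(), ys_out.ravel(), np.ones(16)])
vals = bilinear_sample(feat, np.clip(coords[0], 0, 3), np.clip(coords[1], 0, 3))
```

At integer coordinates the interpolation reduces to a direct lookup, which makes the sketch easy to check by hand.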
As shown in figs. 5 and 6, the candidate-frame generation network takes an image and outputs a batch of high-scoring candidate frames. Candidate frames are generated by a sliding window combined with an anchor mechanism that decides whether each window position contains a target. Because targets vary in length and width, windows of multiple scales are needed for coverage: the anchor mechanism generates candidate frames of different sizes and aspect ratios from a reference window. The invention uses three scales (8, 16, 32) and three aspect ratios (0.5, 1, 2), giving 9 anchors of different shapes.
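The anchor construction can be sketched as follows. Keeping the area of the scaled reference window constant while varying the aspect ratio is the convention of the original Faster R-CNN and is assumed here, since the patent does not spell it out; the base window size of 16 is likewise an assumption:

```python
import numpy as np

def make_anchors(base=16, scales=(8, 16, 32), ratios=(0.5, 1, 2)):
    """9 anchors (w, h) per position: the base window is scaled by `scales`,
    then reshaped to aspect ratios h/w in `ratios` at constant area."""
    anchors = []
    for s in scales:
        area = (base * s) ** 2          # area of the scaled reference window
        for r in ratios:
            w = round(np.sqrt(area / r))  # solve w*h = area with h = r*w
            h = round(w * r)
            anchors.append((w, h))
    return anchors

anchors = make_anchors()
print(len(anchors))  # 9
```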
The fourth part is the region-of-interest pooling classification network, which incorporates hard-example mining. It consists of one region-of-interest layer and two fully connected layers. Its input is the series of candidate frames generated by the candidate-frame generation network of the previous part, and its output is the position and category of each defect; the loss function is as follows:
L_hard = L_class(p, u) + δ·L_regre(t, v)
where L_class(p, u) is the classification log loss and L_regre(t, v) is the regression loss of the predicted-frame coordinates; p is the predicted probability that the predicted frame contains an object of its true category, and t and v are, respectively, the vector of the predicted frame's 4 coordinates and the vector of the ground-truth position coordinates. The 4 coordinates are (x, y, w, h): x and y are the relative abscissa and ordinate of the candidate frame, and w and h its width and height. δ balances the classification loss against the coordinate regression loss. During training, the 128 candidate frames with the largest loss are treated as hard samples and back-propagated with mini-batch stochastic gradient descent to update the network parameters.
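The hard-example selection can be sketched in NumPy as follows. The per-candidate loss combination and the top-128 rule follow the text above, while the array names and the uniform random losses are purely illustrative:

```python
import numpy as np

def select_hard_examples(class_losses, reg_losses, delta=1.0, k=128):
    """Per-candidate loss L_hard = L_class + delta * L_regre; keep the k
    candidates with the largest loss for the backward pass (hard mining)."""
    total = class_losses + delta * reg_losses
    hard_idx = np.argsort(total)[::-1][:k]   # indices of the k largest losses
    return hard_idx, total[hard_idx].mean()  # mean loss over the hard samples

rng = np.random.default_rng(0)
cls = rng.random(2000)   # hypothetical per-candidate classification losses
reg = rng.random(2000)   # hypothetical per-candidate regression losses
idx, mean_hard_loss = select_hard_examples(cls, reg)
print(len(idx))  # 128
```

Only the selected 128 candidates would then contribute gradients in the mini-batch SGD update.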
The training steps of the deep convolutional neural network model based on the improved Faster R-CNN in this embodiment are as follows:
1) Respectively initializing convolutional neural networks of the four parts;
2) Training the shared convolutional neural network, the parallel spatial transformation network, and the candidate-frame generation network to obtain a series of candidate frames;
3) Training the shared convolutional neural network, the parallel spatial transformation network, and the region-of-interest pooling classification network with the candidate frames generated in step 2, selecting the 128 candidate frames with the largest loss each time and updating the network parameters by mini-batch stochastic gradient descent;
4) Fixing the shared convolutional neural network and the parallel spatial transformation network, and training the candidate-frame generation network alone to obtain a new series of candidate frames;
5) Training the shared convolutional neural network, the parallel spatial transformation network, and the region-of-interest pooling classification network with the candidate frames generated in step 4, again selecting the 128 candidate frames with the largest loss each time and updating the network parameters by mini-batch stochastic gradient descent.
the appearance defect detection system and method of the present invention are used for detecting defects such as short circuit, open circuit, breakage, broken hole defect and the like of a circuit in FICS images, and the detection results are shown in fig. 7 (a), 7 (b), 7 (c), 7 (d), 7 (e) and 7 (f). According to the method, the depth convolution network model based on deep learning is trained, the FICS image to be detected is input into the trained model, and the position and the type information of the defect on the FICS image can be rapidly output.
The embodiments described above are preferred embodiments of the present invention, but embodiments of the invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the invention shall be an equivalent replacement and is included within the scope of the invention.

Claims (3)

1. A defect detection method of a high-density flexible substrate, comprising:
training a deep neural network model, which specifically comprises the following steps:
FICS images with appearance defects are collected, preprocessed, and unified to a standard size; the position and category of each defect are labeled, and the labeled images serve as training samples for a Faster R-CNN convolutional neural network model;
the labeled training samples are used as input to the Faster R-CNN convolutional neural network model, with the position and type of each FICS defect as output, yielding a trained Faster R-CNN convolutional neural network model;
the defect detection step specifically comprises the following steps:
inputting the FICS image to be detected into a trained Faster R-CNN-based convolutional neural network model, outputting whether the FICS image is defective, and outputting the position and type of the defect if the FICS image is defective;
the Faster R-CNN convolutional neural network model comprises four parts:
the first part is a shared convolutional neural network, which takes the preprocessed FICS image to be inspected as input and outputs a feature map;
the second part is a parallel spatial transformation network, which computes sampling-point coordinates on the output feature map, evaluates them by bilinear interpolation, and outputs the corresponding target-point values of the feature map;
the third part is a candidate-frame generation network, which takes the target-point values of the feature map as input and outputs candidate frames;
the fourth part is a region-of-interest pooling classification network, which takes the candidate frames as input and outputs the position and category of each defect;
the shared convolutional neural network consists of 5 convolutional layers and 2 pooling layers; the input is the preprocessed 224 × 224 image, the output is a feature map, and the pooling layers use max pooling to reduce the number of parameter computations;
the parallel space transformation network comprises three parts, namely a positioning network, a pixel positioner and a sampler;
the positioning network consists of three fully connected layers; its input is a feature map I of width w, height h, and channel count c, and its output is a transformation matrix θ;
the pixel positioner uses the transformation matrix θ to locate each sampling point of the output feature map on the input feature map and then transforms the feature map;
the sampler evaluates the obtained sampling-point coordinates by bilinear interpolation and outputs the target-point values on the feature map;
in the candidate-frame generation network, a sliding window combined with an anchor mechanism determines whether each window position contains a target; because targets vary in length and width, windows of multiple scales are needed for coverage; the anchor mechanism generates candidate frames of different sizes and aspect ratios from a reference window, using three scales (8, 16, 32) and three aspect ratios (0.5, 1, 2) to obtain 9 anchors of different shapes;
the region-of-interest pooling classification network consists of one region-of-interest layer and two fully connected layers; its input is the series of candidate frames generated by the candidate-frame generation network of the previous part, and its output is the position and category of each defect; the loss function is as follows:
L_hard = L_class(p, u) + δ·L_regre(t, v)
where L_class(p, u) is the classification log loss and L_regre(t, v) is the regression loss of the predicted-frame coordinates; p is the predicted probability that the predicted frame contains an object of its true category, and t and v are, respectively, the vector of the predicted frame's 4 coordinates and the vector of the ground-truth position coordinates; the 4 coordinates are (x, y, w, h), where x and y are the relative abscissa and ordinate of the candidate frame, and w and h its width and height; δ balances the classification loss against the coordinate regression loss; during training, the 128 candidate frames with the largest loss are treated as hard samples and back-propagated with mini-batch stochastic gradient descent to update the network parameters;
the training steps of the deep convolutional neural network model based on the improved Faster R-CNN are as follows:
1) Respectively initializing convolutional neural networks of the four parts;
2) Training the shared convolutional neural network, the parallel spatial transformation network, and the candidate-frame generation network to obtain a series of candidate frames;
3) Training the shared convolutional neural network, the parallel spatial transformation network, and the region-of-interest pooling classification network with the candidate frames generated in step 2, selecting the 128 candidate frames with the largest loss each time and updating the network parameters by mini-batch stochastic gradient descent;
4) Fixing the shared convolutional neural network and the parallel spatial transformation network, and training the candidate-frame generation network alone to obtain a new series of candidate frames;
5) Training the shared convolutional neural network, the parallel spatial transformation network, and the region-of-interest pooling classification network with the candidate frames generated in step 4, again selecting the 128 candidate frames with the largest loss each time and updating the network parameters by mini-batch stochastic gradient descent.
2. The defect detection method of claim 1, wherein the pixel positioner comprises a transformation of scaling, translation, and rotation.
3. The defect detection method of claim 1, wherein the uniform size is specifically a standard size of 224 x 224 pixels.
CN201910166760.2A 2019-03-06 2019-03-06 Defect detection method of high-density flexible substrate Active CN109859207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910166760.2A CN109859207B (en) 2019-03-06 2019-03-06 Defect detection method of high-density flexible substrate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910166760.2A CN109859207B (en) 2019-03-06 2019-03-06 Defect detection method of high-density flexible substrate

Publications (2)

Publication Number Publication Date
CN109859207A CN109859207A (en) 2019-06-07
CN109859207B (en) 2023-06-23

Family

ID=66899962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910166760.2A Active CN109859207B (en) 2019-03-06 2019-03-06 Defect detection method of high-density flexible substrate

Country Status (1)

Country Link
CN (1) CN109859207B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349145B (en) * 2019-07-09 2022-08-16 京东方科技集团股份有限公司 Defect detection method, defect detection device, electronic equipment and storage medium
CN110400296A (en) * 2019-07-19 2019-11-01 重庆邮电大学 The scanning of continuous casting blank surface defects binocular and deep learning fusion identification method and system
CN110560376B (en) * 2019-07-19 2021-06-22 华瑞新智科技(北京)有限公司 Product surface defect detection method and device
CN111415325B (en) * 2019-11-11 2023-04-25 杭州电子科技大学 Copper foil substrate defect detection method based on convolutional neural network
CN111080615B (en) * 2019-12-12 2023-06-16 创新奇智(重庆)科技有限公司 PCB defect detection system and detection method based on convolutional neural network
CN111462043B (en) * 2020-03-05 2023-10-24 维库(厦门)信息技术有限公司 Defect detection method, device, equipment and medium based on Internet
CN111563179A (en) * 2020-03-24 2020-08-21 维库(厦门)信息技术有限公司 Method and system for constructing defect image rapid classification model
CN111429431B (en) * 2020-03-24 2023-09-19 深圳市振邦智能科技股份有限公司 Element positioning and identifying method based on convolutional neural network
CN113362277A (en) * 2021-04-26 2021-09-07 辛米尔视觉科技(上海)有限公司 Workpiece surface defect detection and segmentation method based on deep learning
CN114862845B (en) * 2022-07-04 2022-09-06 深圳市瑞桔电子有限公司 Defect detection method, device and equipment for mobile phone touch screen and storage medium
CN117670876B (en) * 2024-01-31 2024-05-03 成都数之联科技股份有限公司 Panel defect severity level judging method, system, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952250B (en) * 2017-02-28 2021-05-07 北京科技大学 Metal plate strip surface defect detection method and device based on fast R-CNN network
GB201704373D0 (en) * 2017-03-20 2017-05-03 Rolls-Royce Ltd Surface defect detection
CN107274451A (en) * 2017-05-17 2017-10-20 北京工业大学 Isolator detecting method and device based on shared convolutional neural networks
CN109142371A (en) * 2018-07-31 2019-01-04 华南理工大学 High density flexible exterior substrate defect detecting system and method based on deep learning
CN109190668A (en) * 2018-08-01 2019-01-11 福州大学 The detection of multiclass certificate and classification method based on Faster-RCNN
CN109360204B (en) * 2018-11-28 2021-07-16 燕山大学 Inner defect detection method of multilayer metal lattice structure material based on Faster R-CNN

Also Published As

Publication number Publication date
CN109859207A (en) 2019-06-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant