CN111861995B - CNN-based high-density micro led chip visual detection method - Google Patents

CNN-based high-density micro led chip visual detection method Download PDF

Info

Publication number
CN111861995B
CN111861995B CN202010570395.4A
Authority
CN
China
Prior art keywords
led chip
micro led
chip
sample
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010570395.4A
Other languages
Chinese (zh)
Other versions
CN111861995A (en)
Inventor
蔡觉平
阮文长
温凯林
褚洁
张呈凯
王武壮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Honghu Qiji Electronic Technology Co ltd
Original Assignee
Suzhou Honghu Qiji Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Honghu Qiji Electronic Technology Co ltd filed Critical Suzhou Honghu Qiji Electronic Technology Co ltd
Priority to CN202010570395.4A priority Critical patent/CN111861995B/en
Publication of CN111861995A publication Critical patent/CN111861995A/en
Application granted granted Critical
Publication of CN111861995B publication Critical patent/CN111861995B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the fields of chip testing and artificial-intelligence neural networks, in particular to a CNN-based visual detection method for high-density micro led chips. The method comprises the steps of binarizing the micro led chip image; dividing the chip into a plurality of regions by rows and columns according to a connected-domain division method; taking the divided regions as the input of a convolutional neural network; and detecting whether each chip is good by means of the fully connected layer of the convolutional neural network and a softmax function, the number of neurons in the output layer being set to 2 to represent the good and bad results, and the output probability of each neuron being computed by the softmax function to judge sample quality. The detection results show that using a convolutional neural network for chip detection reduces detection time while effectively improving detection accuracy.

Description

CNN-based high-density micro led chip visual detection method
Technical Field
The invention relates to the fields of chip testing and artificial-intelligence neural networks, in particular to a CNN-based visual detection method for high-density micro led chips.
Background
An artificial neural network is a computational model that works like a biological neural network. Because of its inherently parallel architecture, artificial neural networks are widely used to increase execution speed.
With the development of integration technology, more and more chip samples are integrated at high density on a single silicon wafer, and the spacing between chips keeps shrinking, which makes chip detection very challenging. Many detection methods have been proposed to improve the detection effect, but they are slow and time-consuming. It is therefore significant to improve detection accuracy while reducing detection time.
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides a CNN-based high-density micro led chip visual detection method, which reduces the chip detection time and improves the detection accuracy.
In order to achieve the above object, the present invention adopts the following technical scheme: a CNN-based high-density micro led chip visual detection method comprises the following steps:
step one: binarizing the micro led chip, and converting the chip into a black-and-white image, wherein the silicon chip is black, and the micro led chip sample is white;
step two: dividing a chip into a plurality of areas in a row and column mode according to a connected domain dividing method, wherein each area comprises a micro-led chip sample, and the number of the areas depends on the number of samples designed in the micro-led chip;
step three: taking the divided regions as the input of a convolutional neural network, adopting convolutional layers with different structures to extract relevant features from each region, and adopting a pooling filter of size 2×2 to implement pooling, reduce the dimension of the sample image, and obtain feature maps;
step four: detecting whether the chip is good by using the fully connected layer of the convolutional neural network and a softmax function; the number of neurons in the output layer of the convolutional neural network is set to 2 to represent the two results, good and bad, and the output probability of each neuron is computed by the softmax function to judge the quality of the sample: a sample identified as good outputs [1 0], and a sample identified as bad outputs [0 1].
In the first step, a global threshold T is set and the micro led silicon wafer image is converted into a black-and-white image: all pixels brighter than or equal to the threshold are converted to white and all pixels darker than the threshold are converted to black, satisfying I(x_1, y_1) ≥ T > I(x_2, y_2), where I(x_1, y_1) is a pixel inside the micro led chip sample area and I(x_2, y_2) is a pixel outside the micro led chip sample area.
In the second step, the connected domain is the region formed by adjacent pixels with the same pixel value in the image; each white module is one connected domain and different modules are different connected domains, so the image is divided into a plurality of regions by connected domain, the number of regions satisfying S = Σ_{i=1}^{N} N_i, where N is the number of rows of micro led chips on the silicon wafer, N_i is the number of samples in row i, and i is a natural number.
In step three, the value of a new M×M convolution feature map Conv(x_i, k_ij)(m, n) at pixel (m, n), or of a pooling feature map aver_pool(m, n), is determined respectively by

Conv(x_i, k_ij)(m, n) = f( Σ_{w=0}^{k-1} Σ_{h=0}^{k-1} x_i(m·s + w, n·s + h) · k_ij(w, h) )

aver_pool(m, n) = (1/l²) Σ_{p=0}^{l-1} Σ_{q=0}^{l-1} x(m·s + p, n·s + q)

where x_i is the i-th image of the previous network layer, k_ij is the k×k convolution kernel used to compute the j-th output feature map from the i-th input image, f(·) is the activation function of the convolutional neural network, s is the step length, m and n are integers with 1 ≤ m, n ≤ M, w and h (0 ≤ w, h ≤ k-1) are the row and column positions in the convolution kernel, p and q are the row and column positions in the pooling filter, and l×l is the pooling filter size.
In the fourth step, the sample features are transmitted to the output layer through the fully connected layer of the convolutional neural network; the softmax function is defined by

f_softmax(x_i) = e^{x_i} / Σ_{j=1}^{n} e^{x_j}

where f_softmax is the probability assigned to each output neuron's result, x_i is the output of the i-th neuron of the output layer, x_j is the output of the j-th neuron of the output layer, and n = 2 is the number of output-layer neurons, representing the two results, good and bad, of chip detection. The result with the highest softmax probability determines the detection result of the sample.
The convolutional neural network adopts a parameter-sharing strategy in computation: all neurons in the same layer share one set of parameters, and multiple neurons can be computed at the same time, which increases the detection speed of the network.
And a plurality of convolutional neural networks are used for detecting the divided chip samples simultaneously, so that the detection speed of the micro led chip is improved.
The convolutional neural network is used for effectively improving the accuracy of chip detection.
The invention has the following beneficial effects: the convolutional neural network is applied to micro led chip detection; the chip is divided into a plurality of image blocks, each containing one chip sample. The image blocks are fed to the convolutional neural network: feature extraction is performed by the convolution and pooling layers, chip classification by the fully connected layer and the softmax function, and the detection speed is further improved by running several convolutional neural networks simultaneously. The detection results show that using a convolutional neural network for chip detection reduces detection time while effectively improving detection accuracy.
Drawings
FIG. 1 is an image effect obtained after binarization of a micro led chip;
FIG. 2 is a micro led chip segmentation result;
FIG. 3 is a flowchart of micro led chip detection.
FIG. 4 is a convolutional neural network model used for micro led chip detection.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
A CNN-based high-density micro led chip visual detection method comprises the following steps:
step one: binarizing the micro led chip, and converting the chip into a black-and-white image, wherein the silicon chip is black, and the micro led chip sample is white.
According to the set global threshold T, the micro led chip image is converted into a black-and-white image: all pixels brighter than or equal to the threshold are converted to white, and all pixels darker than the threshold are converted to black. The pixel values of the silicon wafer image satisfy the formula

I(x_1, y_1) ≥ T > I(x_2, y_2)

where (x_1, y_1) denotes a pixel at or above the threshold and (x_2, y_2) a pixel below it. As shown in fig. 1, all pixels of the micro led chip areas are converted to white, and all pixels of the gap areas between chips are converted to black.
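The global-threshold binarization of step one can be sketched in a few lines of NumPy. This is a minimal illustration: the threshold value T = 128 and the toy 4×4 image are assumptions for the example, not values from the patent.

```python
import numpy as np

def binarize(image, T=128):
    """Global-threshold binarization: pixels >= T become white (255),
    pixels < T become black (0), satisfying I(x1, y1) >= T > I(x2, y2)."""
    image = np.asarray(image)
    return np.where(image >= T, 255, 0).astype(np.uint8)

# Toy 4x4 grayscale "wafer" image: bright chip pixels vs. dark gaps.
img = np.array([[200, 210,  30,  40],
                [190, 220,  20,  35],
                [ 25,  30, 205, 215],
                [ 15,  10, 198, 230]])
bw = binarize(img, T=128)
print(bw)
```

In practice T would be chosen from the image histogram (e.g. by Otsu's method) rather than fixed by hand.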
Step two: according to the connected domain division method, the chip is divided into a plurality of areas in a row and column mode, each area contains a sample of the micro led chip, and the number of the areas depends on the number of samples designed in the micro led chip.
The connected domain is the region formed by adjacent pixels with the same pixel value in the image. Starting from the first white pixel I(x, y), its neighbouring pixels are searched; if I(x, y) ≤ I(x, y)_adjacent, the two pixels belong to the same connected domain, and the new pixel value of I(x, y) is

I(x, y) = min{ I(x, y)_adjacent }

If the pixel values of all neighbours of I(x, y) are smaller than I(x, y), the connected domain is interrupted there and a new connected domain begins. As shown in fig. 1, each white block is one connected-domain sample, and different samples are different connected domains.
The micro led chip image is divided according to the connected domains, each connected domain being divided into one module, as shown in fig. 2. The number of divided regions is calculated by the formula

S = Σ_{i=1}^{N} N_i

where N is the number of rows in which the samples are arranged in the chip and N_i is the number of samples in row i.
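The connected-domain division above can be illustrated with a minimal flood-fill labelling sketch. This is an assumption-laden stand-in for the patent's actual division procedure: the 4-connectivity choice, the `count_regions` name, and the toy image are illustrative only.

```python
import numpy as np
from collections import deque

def count_regions(bw):
    """Count 4-connected white regions in a binary image -- a minimal
    stand-in for the patent's connected-domain segmentation step."""
    visited = np.zeros(bw.shape, dtype=bool)
    h, w = bw.shape
    regions = 0
    for y in range(h):
        for x in range(w):
            if bw[y, x] == 255 and not visited[y, x]:
                regions += 1                      # new connected domain found
                q = deque([(y, x)])
                visited[y, x] = True
                while q:                          # flood-fill the whole domain
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and bw[ny, nx] == 255 and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
    return regions

# Two white chip samples separated by a black gap column.
bw = np.array([[255, 255, 0, 255],
               [255, 255, 0, 255]], dtype=np.uint8)
print(count_regions(bw))  # two connected domains
```

In production, a library routine such as OpenCV's connected-components labelling would typically replace this hand-rolled loop.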
Step three: and taking the divided areas as input of a convolutional neural network, extracting relevant features from each area by adopting convolutional layers with different structures, and adopting a pooling filter with the size of 2 multiplied by 2 to realize pooling so as to reduce the dimension of a sample image and obtain a feature map.
As shown in fig. 3, the segmented sample images are processed by the convolution and pooling algorithms of the convolutional neural network to perform feature extraction and image dimensionality reduction, yielding several feature maps containing different features. Fig. 4 shows the structure of the convolutional neural network model used by the invention for the micro led chip size. The value of a new M×M convolution feature map Conv(x_i, k_ij)(m, n) at pixel (m, n) is determined by

Conv(x_i, k_ij)(m, n) = f( Σ_{w=0}^{k-1} Σ_{h=0}^{k-1} x_i(m·s + w, n·s + h) · k_ij(w, h) ),   M = (I - k + 2P)/s + 1

where x_i is the i-th image of the previous network layer, k_ij is the k×k convolution kernel used to compute the j-th output feature map from the i-th input image, f(·) is the activation function of the convolutional neural network, m and n are integers with 1 ≤ m, n ≤ M, w and h (0 ≤ w, h ≤ k-1) are the row and column positions in the convolution kernel, I is the size of the input image, s is the convolution sliding step length, and P is the padding value. The value of a new M×M pooling feature map aver_pool(m, n) at pixel (m, n) is determined by

aver_pool(m, n) = (1/l²) Σ_{p=0}^{l-1} Σ_{q=0}^{l-1} x(m·s + p, n·s + q)

where m and n are integers with 1 ≤ m, n ≤ M, p and q are the row and column positions in the pooling filter, l×l is the size of the pooling filter, and s is the pooling step length.
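The convolution and average-pooling computations above can be sketched with plain NumPy loops. This is a minimal illustration, not the patent's network: ReLU is assumed as the activation f(·), and the kernel and input values are made up for the example.

```python
import numpy as np

def conv2d(x, k, s=1, p=0):
    """2-D convolution (cross-correlation form) with stride s and zero
    padding p; output size M = (I - k + 2p) / s + 1."""
    if p:
        x = np.pad(x, p)
    ksz = k.shape[0]
    M = (x.shape[0] - ksz) // s + 1
    out = np.zeros((M, M))
    for m in range(M):
        for n in range(M):
            out[m, n] = np.sum(x[m*s:m*s+ksz, n*s:n*s+ksz] * k)
    return out

def avg_pool(x, l=2, s=2):
    """Average pooling with an l x l filter and stride s (the patent uses 2x2)."""
    M = (x.shape[0] - l) // s + 1
    out = np.zeros((M, M))
    for m in range(M):
        for n in range(M):
            out[m, n] = x[m*s:m*s+l, n*s:n*s+l].mean()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 input image
k = np.ones((3, 3)) / 9.0                     # 3x3 mean kernel, illustrative
fmap = np.maximum(conv2d(x, k), 0)            # ReLU assumed as f()
print(avg_pool(fmap, l=2, s=1))
```

With I = 4, k = 3, P = 0, s = 1, the convolution output size is M = (4 - 3 + 0)/1 + 1 = 2, matching the size formula in the text.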
Step four: and (3) detecting whether the chip is good or not by adopting a full-connection layer of the convolutional neural network and a softmax function, setting the quantity of neurons of an output layer of the convolutional neural network to be 2 to represent two results of good and bad, and calculating the output probability of each neuron through the softmax function to judge the quality of the sample, wherein the output of the sample is identified as good sample output [1 0] and the output of the sample is identified as bad sample output [0 1].
The sample features are transmitted to the output layer through full connection; the softmax function is defined by

f_softmax(x_i) = e^{x_i} / Σ_{j=1}^{n} e^{x_j}

where f_softmax is the probability assigned to each output neuron's result, x_i is the output of the i-th neuron of the output layer, x_j is the output of the j-th neuron of the output layer, and n = 2 is the number of output-layer neurons, representing the two results, good and bad, of chip detection. The result with the highest softmax probability determines the detection result of the sample.
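The softmax decision over the two output neurons can be sketched as follows. The logit values are illustrative; the [1 0] / [0 1] encoding follows the patent.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the output-layer values."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - np.max(z))
    return e / e.sum()

def classify(logits):
    """Return [1, 0] for a good sample and [0, 1] for a bad one, picking
    the output neuron with the highest softmax probability."""
    p = softmax(logits)
    return [1, 0] if p[0] >= p[1] else [0, 1]

print(softmax([2.0, 1.0]))   # two probabilities summing to 1
print(classify([2.0, 1.0]))  # good sample
print(classify([0.3, 1.7]))  # bad sample
```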
Because the convolutional neural network adopts a parameter sharing strategy in calculation, a plurality of neurons can be calculated at the same time, and meanwhile, as shown in fig. 3, the convolutional neural network can detect the divided chip samples at the same time, so that the micro led chip detection algorithm based on the convolutional neural network accelerates the chip detection speed compared with other detection methods.
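The simultaneous detection of divided chip samples can be sketched with Python's standard concurrency tools. The `detect` function below is a hypothetical placeholder standing in for one CNN inference pass (its good/bad rule is arbitrary), not the patent's network.

```python
from concurrent.futures import ThreadPoolExecutor

def detect(region_id):
    """Placeholder per-region classifier standing in for one CNN pass;
    in the patent, each segmented chip sample is inspected independently,
    so the regions can be processed in parallel."""
    return region_id, "good" if region_id % 2 == 0 else "bad"

regions = range(6)  # six segmented chip-sample regions, illustrative
with ThreadPoolExecutor(max_workers=3) as ex:
    results = dict(ex.map(detect, regions))
print(results)
```

For CPU-bound inference, a process pool (or batched inference on one accelerator) would usually be the better choice; the thread pool here only illustrates the fan-out structure.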
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (7)

1. A CNN-based high-density micro led chip visual detection method, characterized in that it comprises the following steps:
Step one: binarizing the micro led chip, and converting the chip into a black-and-white image, wherein the silicon chip is black, and the micro led chip sample is white;
step two: dividing a chip into a plurality of areas in a row and column mode according to a connected domain dividing method, wherein each area comprises a micro-led chip sample, and the number of the areas depends on the number of samples designed in the micro-led chip;
step three: taking the divided regions as the input of a convolutional neural network, adopting convolutional layers with different structures to extract relevant features from each region, and adopting a pooling filter of size 2×2 to implement pooling, reduce the dimension of the sample image, and obtain feature maps;
step four: detecting whether the chip is good by using the fully connected layer of the convolutional neural network and a softmax function; the number of neurons in the output layer of the convolutional neural network is set to 2 to represent the two results, good and bad, and the output probability of each neuron is computed by the softmax function to judge the quality of the sample: a sample identified as good outputs [1 0], and a sample identified as bad outputs [0 1].
2. The CNN-based high-density micro led chip visual detection method according to claim 1, characterized in that: in the first step, a global threshold T is set and the micro led silicon wafer image is converted into a black-and-white image; all pixels brighter than or equal to the threshold are converted to white and all pixels darker than the threshold are converted to black, satisfying I(x_1, y_1) ≥ T > I(x_2, y_2), where I(x_1, y_1) is a pixel inside the micro led chip sample area and I(x_2, y_2) is a pixel outside the micro led chip sample area.
3. The CNN-based high-density micro led chip visual detection method according to claim 1, characterized in that: in the second step, the connected domain is the region formed by adjacent pixels with the same pixel value in the image; each white module is one connected domain and different modules are different connected domains, so the image is divided into a plurality of regions by connected domain, the number of regions satisfying S = Σ_{i=1}^{N} N_i, where N is the number of rows of micro led chips on the silicon wafer, N_i is the number of samples in row i, and i is a natural number.
4. The CNN-based high-density micro led chip visual detection method according to claim 1, characterized in that: in step three, the value of a new M×M convolution feature map Conv(x_i, k_ij)(m, n) at pixel (m, n), or of a pooling feature map aver_pool(m, n), is determined respectively by

Conv(x_i, k_ij)(m, n) = f( Σ_{w=0}^{k-1} Σ_{h=0}^{k-1} x_i(m·s + w, n·s + h) · k_ij(w, h) )

aver_pool(m, n) = (1/l²) Σ_{p=0}^{l-1} Σ_{q=0}^{l-1} x(m·s + p, n·s + q)

where x_i is the i-th image of the previous network layer, k_ij is the k×k convolution kernel used to compute the j-th output feature map from the i-th input image, f(·) is the activation function of the convolutional neural network, s is the step length, m and n are integers with 1 ≤ m, n ≤ M, w and h (0 ≤ w, h ≤ k-1) are the row and column positions in the convolution kernel, p and q are the row and column positions in the pooling filter, and l×l is the pooling filter size.
5. The CNN-based high-density micro led chip visual detection method according to claim 1, characterized in that: in the fourth step, the sample features are transmitted to the output layer through the fully connected layer of the convolutional neural network; the softmax function is defined by

f_softmax(x_i) = e^{x_i} / Σ_{j=1}^{n} e^{x_j}

where f_softmax is the probability assigned to each output neuron's result, x_i is the output of the i-th neuron of the output layer, x_j is the output of the j-th neuron of the output layer, and n = 2 is the number of output-layer neurons, representing the two results, good and bad, of chip detection. The result with the highest softmax probability determines the detection result of the sample.
6. The CNN-based high-density micro led chip visual detection method according to claim 1, characterized in that: the convolutional neural network adopts a parameter-sharing strategy in computation, all neurons in the same layer sharing one set of parameters, so that multiple neurons can be computed at the same time, which increases the detection speed of the network.
7. The CNN-based high-density micro led chip visual detection method according to claim 1, characterized in that: a plurality of convolutional neural networks are used to detect the divided chip samples simultaneously, improving the detection speed of the micro led chip.
CN202010570395.4A 2020-06-19 2020-06-19 CNN-based high-density micro led chip visual detection method Active CN111861995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010570395.4A CN111861995B (en) 2020-06-19 2020-06-19 CNN-based high-density micro led chip visual detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010570395.4A CN111861995B (en) 2020-06-19 2020-06-19 CNN-based high-density micro led chip visual detection method

Publications (2)

Publication Number Publication Date
CN111861995A CN111861995A (en) 2020-10-30
CN111861995B (en) 2024-01-23

Family

ID=72986240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010570395.4A Active CN111861995B (en) 2020-06-19 2020-06-19 CNN-based high-density micro led chip visual detection method

Country Status (1)

Country Link
CN (1) CN111861995B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563999A (en) * 2017-09-05 2018-01-09 华中科技大学 A kind of chip defect recognition methods based on convolutional neural networks
CN108038843A (en) * 2017-11-29 2018-05-15 英特尔产品(成都)有限公司 A kind of method, apparatus and equipment for defects detection
CN109829914A (en) * 2019-02-26 2019-05-31 视睿(杭州)信息科技有限公司 The method and apparatus of testing product defect
CN111047583A (en) * 2019-12-23 2020-04-21 大连理工大学 Underwater netting system damage detection method based on machine vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11010888B2 (en) * 2018-10-29 2021-05-18 International Business Machines Corporation Precision defect detection based on image difference with respect to templates


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on image-based quality detection method for biochip spotting (基于图像的生物芯片点样质量检测方法研究); 陈曦; 赵佳敏; 许雪; 张自力; 李永猛; Packaging Engineering (19); full text *

Also Published As

Publication number Publication date
CN111861995A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN109934200B (en) RGB color remote sensing image cloud detection method and system based on improved M-Net
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN111797712B (en) Remote sensing image cloud and cloud shadow detection method based on multi-scale feature fusion network
CN112561910A (en) Industrial surface defect detection method based on multi-scale feature fusion
CN109840483B (en) Landslide crack detection and identification method and device
CN112508090A (en) External package defect detection method
Yang et al. Automatic pixel-level crack detection for civil infrastructure using Unet++ and deep transfer learning
CN110084302B (en) Crack detection method based on remote sensing image
CN114820625B (en) Automobile top block defect detection method
CN114782391A (en) Method, system and device for constructing defect detection model of few-sample industrial image
CN113326846B (en) Rapid bridge apparent disease detection method based on machine vision
CN113591617B (en) Deep learning-based water surface small target detection and classification method
CN111105389A (en) Detection method for pavement crack by fusing Gabor filter and convolutional neural network
CN116342894B (en) GIS infrared feature recognition system and method based on improved YOLOv5
CN105825219A (en) Machine vision detection system
CN112784757A (en) Marine SAR ship target significance detection and identification method
CN116309485A (en) Pavement crack detection method for improving UNet network structure
CN111062381A (en) License plate position detection method based on deep learning
CN111861995B (en) CNN-based high-density micro led chip visual detection method
CN109284752A (en) A kind of rapid detection method of vehicle
CN103065296B (en) High-resolution remote sensing image residential area extraction method based on edge feature
CN117372853A (en) Underwater target detection algorithm based on image enhancement and attention mechanism
CN110349119B (en) Pavement disease detection method and device based on edge detection neural network
CN114155246B (en) Deformable convolution-based power transmission tower pin defect detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant