CN116228678A - Automatic identification and processing method for chip packaging defects - Google Patents

Automatic identification and processing method for chip packaging defects

Info

Publication number
CN116228678A
CN116228678A (application CN202310047087.7A)
Authority
CN
China
Prior art keywords
image
defect
chip
detection
single chip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310047087.7A
Other languages
Chinese (zh)
Inventor
柯佳键
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Kexin Electronics Co ltd
Original Assignee
Guangdong Kexin Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Kexin Electronics Co ltd filed Critical Guangdong Kexin Electronics Co ltd
Priority to CN202310047087.7A
Publication of CN116228678A
Legal status: Pending

Classifications

    • G06T7/001 Industrial image inspection using an image reference approach
    • G01N21/8851 Scan or image signal processing specially adapted for investigating the presence of flaws or contamination, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N21/956 Inspecting patterns on the surface of objects
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/446 Local feature extraction by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/763 Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V10/764 Image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G01N2021/8887 Scan or image signal processing based on image processing techniques
    • G01N2021/95638 Inspecting patterns on the surface of objects for PCB's
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30148 Semiconductor; IC; Wafer (industrial image inspection)
    • G06V2201/06 Recognition of objects for industrial automation
    • G06V2201/07 Target detection
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Analytical Chemistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to the technical field of chip detection and discloses a method for automatically identifying and processing chip packaging defects, which comprises the following steps: an industrial CCD camera is used to collect an initial image of the packaged chips; target chip detection is performed on the initial image with a trained YOLOv3 algorithm model, and the spacing between detection frames is calculated to obtain single chip images; preprocessing, Otsu threshold segmentation of the pins and Harris-SIFT feature point matching are sequentially carried out on each single chip image to determine whether a defect is detected in it; when a defect is detected in a single chip image, a signal is sent to an alarm device to raise a defect alarm and the single chip image is determined to be a defect image; the defect image is associated with its defect detection time and stored in a defect image library so that the number of defects in a predetermined future time period can be predicted. The invention provides an automatic identification and processing method for chip packaging defects, which solves the technical problems of low detection speed and low detection precision of the existing manual inspection approach.

Description

Automatic identification and processing method for chip packaging defects
Technical Field
The invention relates to the technical field of chip detection, and in particular to a method for automatically identifying and processing chip packaging defects.
Background
Chip packaging is advancing rapidly toward miniaturization, diversified chip types and high performance, and the requirements on defect detection of chip packages are increasing accordingly. Defect detection of the chip pins is a necessary precondition for correct packaging, and detection of other appearance defects, such as the model number and production date markings of the chip, likewise guarantees the quality of the packaged component. At present, many chip packaging production lines still rely on traditional manual visual inspection to detect packaging defects. Although manual visual inspection is convenient and direct, it has significant problems. First, the working intensity is high and easily causes visual fatigue, leading to false inspections that directly reduce the reliability of product testing. Second, the labor cost of manual visual inspection is high and requires continuous investment. In addition, the quality criteria of manual inspection are difficult to quantify, so the stability of the detection results is poor. Finally, because the chips are small, the limits of human vision keep both the speed and the precision of manual inspection low.
Disclosure of Invention
The invention provides an automatic identification and processing method for chip packaging defects, which solves the technical problems of lower detection speed and lower detection precision in the existing manual detection mode.
The invention provides a method for automatically identifying and processing chip packaging defects, which comprises the following steps:
an industrial CCD camera is adopted to collect an initial image after chip encapsulation;
the super parameters of the YOLOv3 algorithm model are adjusted by adopting a pre-collected training image after chip encapsulation, so that a trained YOLOv3 algorithm model is obtained;
performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing from the coordinate information returned with the detection result, judging missing-chip defects, and segmenting according to the detection frame spacing to obtain single chip images;
preprocessing, Otsu threshold segmentation of the pins and Harris-SIFT feature point matching are sequentially carried out on the single chip image so as to determine whether defects are detected in the single chip image;
when the defect is detected in the single chip image, sending a signal to an alarm device for defect alarm, and determining the single chip image as a defect image;
and correlating the defect image with the defect detection time of the defect image, and storing the defect image into a defect image library so as to predict the defect number in a preset time period in the future according to the defect history data.
Further, the step of performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing from the coordinate information returned with the detection result, judging missing-chip defects, and segmenting according to the detection frame spacing to obtain single chip images comprises the following steps:
Performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, and returning the category, confidence score and bounding box coordinate information obtained from the initial image;
determining the coordinate set of the detection frames as b_i = {[x_1, y_1, w_1, h_1], [x_2, y_2, w_2, h_2], …, [x_i, y_i, w_i, h_i]}, sorting b_i in increasing order of the x_i coordinates and renumbering to locate the chips; where i is the detected chip number, (x_i, y_i) is the upper-left corner coordinate of the ith bounding box, and (w_i, h_i) are the width and height of the ith bounding box;
calculating the distance between the coordinates of two adjacent bounding boxes as the spacing between adjacent chips, according to the formula

d_i = √((x_{i+1} - x_i)² + (y_{i+1} - y_i)²)

where (x_i, y_i) is the upper-left corner coordinate of the ith bounding box; when d_i is greater than the maximum preset gap value, it is determined that a missing-chip defect has occurred between the ith chip and the (i+1)th chip;
and segmenting according to the detection frame spacing to obtain the single chip images after the missing-chip defect has been judged.
Further, in the step of sequentially performing preprocessing, Otsu threshold segmentation of the pins, and Harris-SIFT feature point matching on the single chip image to determine whether a defect is detected in the single chip image, the step of performing Otsu threshold segmentation of the pins on the single chip image includes:
conducting pin area blocking on the single chip image to obtain a plurality of pin areas of the single chip image;
Acquiring a three-channel image of the single chip image, and converting the three-channel image into a gray level image;
according to the proportional position relation between pins and a chip main body in the single chip image, transversely dividing the gray level image into a left pin area, a chip main body and a right pin area according to an input proportional coefficient k, and respectively storing the left pin area and the right pin area; wherein the width pixel range
for each of the three areas is determined by the input proportional coefficient k and the image width;
dividing the left pin area and the right pin area each into n+1 equal parts transversely to obtain a left pin block area set and a right pin block area set;
traversing each area, and each pixel of each area, in sequence-number order, and, if a pixel lies in the image edge region, replacing its value with the mean gray value of its 3×3 neighborhood in the original image;
traversing each region successively according to the sequence number, respectively counting and normalizing gray histograms in each region to obtain average gray values in each region, testing a plurality of thresholds to obtain optimal segmentation thresholds of each region, and carrying out image binarization segmentation according to the optimal thresholds of different regions so as to update a left pin block region set and a right pin block region set;
And re-splicing the areas in the left and right pin block area sets back into the left and right pin areas according to their sequence numbers, splicing the left and right pin areas together, and generating the segmented chip pin binary image.
Further, after the step of re-splicing each area in the left and right pin block area sets into the left and right pins according to the sequence numbers, and splicing the left and right pin areas to generate the segmented chip pin binary image, the method further comprises the steps of:
using the four edge points of the segmented pin image as corner points, fitting four straight lines by the least-squares method to form a quadrilateral as the minimum circumscribed rectangle of the pin image edge, and determining the geometric center position of the image by the center method; where the starting coordinate of the upper-left corner of the image is set as P(0, 0) and the lower-right corner as Q(m, n), the center point coordinates (x_0, y_0) are calculated as

x_0 = (Σ_x Σ_y x·f(x, y)) / (Σ_x Σ_y f(x, y))

y_0 = (Σ_x Σ_y y·f(x, y)) / (Σ_x Σ_y f(x, y))
where m and n are the number of rows and columns of pin image pixels, respectively, and f (x, y) is the gray value of the image at point (x, y).
Further, in the step of sequentially performing preprocessing, Otsu threshold segmentation of the pins, and Harris-SIFT feature point matching on the single chip image to determine whether a defect is detected in the single chip image, performing Harris-SIFT feature point matching on the single chip image includes:
Detecting image corner points, scanning an object by establishing a window, wherein the calculation formula of a corresponding corner point response function R is as follows:
R = det(M) - k·(trace M)²

det(M) = λ_1·λ_2

trace M = λ_1 + λ_2

where det(M) is the determinant of M, trace M is the trace of M, λ_1 and λ_2 are the two eigenvalues of the autocorrelation matrix, and k is a constant;
determining corner feature vectors: a corner set is generated, and the SIFT algorithm assigns to each corner a gradient magnitude m(x, y) and orientation θ(x, y) that reflect the corner features, according to the formulas

m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)

θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))

where L(x, y) denotes the scale space in which the feature point lies, obtained by convolving a Gaussian function with the original image;
generating SIFT feature description vectors: the neighborhood is rotated about the principal orientation of the keypoint, 8 gradient orientations are computed for each of the 16 seed points in 4×4 windows, and each feature point thus generates a 128-dimensional SIFT feature vector;
and matching the feature points: a K-nearest-neighbor algorithm is used to find the nearest neighbor and the second-nearest neighbor in Euclidean distance, and a pair is determined to be a matching point when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is smaller than a set value T.
Further, the step of associating the defect image with the defect detection time thereof and storing the defect image in a defect image library to predict the number of defects in a predetermined time period in the future based on the defect history data includes:
Counting the number of the defect images in a set time period before the current time according to the defect image library;
calculating a plurality of defect rates of unit time in a set time period according to the defect image number and the total number of chips in the set time period before the current time;
taking the average value of the defect rates in the unit time as the defect rate in the unit time;
and multiplying the defect rate per unit time by the expected number of chips to be produced per unit time in the future predetermined time period to obtain the number of defects in the future predetermined time period.
The invention also provides a device for automatically identifying and processing chip packaging defects, which comprises:
the acquisition module is used for acquiring an initial image of the packaged chip by adopting an industrial CCD camera;
the training module is used for adjusting the super parameters of the YOLOv3 algorithm model by adopting the pre-acquired training image after chip encapsulation so as to obtain a trained YOLOv3 algorithm model;
the detection module is used for performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing according to the coordinate information returned with the detection result, judging missing-chip defects, and segmenting according to the detection frame spacing to obtain single chip images;
The processing module is used for sequentially preprocessing the single chip image, otsu threshold segmentation pins and Harris-SIFT feature point matching so as to determine whether defects are detected in the single chip image;
the alarm module is used for sending a signal to the alarm device to carry out defect alarm when the defect is detected in the single chip image, and determining the single chip image as a defect image;
and the prediction module is used for correlating the defect image with the defect detection time of the defect image and storing the defect image into a defect image library so as to predict the defect number in a preset time period in the future according to the defect history data.
The invention also provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
The invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
The beneficial effects of the invention are as follows:
according to the invention, a deep learning YOLOv3 algorithm is combined with an image processing technology to be applied to automatic recognition and processing of chip packaging, a CCD industrial camera is installed in an original workshop, an initial image after chip packaging is obtained, chip target positioning is carried out through a target detection YOLOv3 algorithm, a single chip image is segmented, defect classification judgment is carried out on the obtained single chip image, a Harris operator is combined with a scale-invariant feature transformation feature description method to match a template image with various defect images, a feature clustering method is adopted for the matching points, mismatching points are removed, the matching accuracy is improved, the judgment of various defective products of the chip is realized, the defect of the packaged chip is avoided, the labor intensity of detection workers is reduced, the false detection caused by visual fatigue of detection workers is avoided, the reliability of product detection is improved, and the product qualification rate is improved.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
Fig. 2 is a schematic diagram of an apparatus structure according to an embodiment of the invention.
Fig. 3 is a schematic diagram illustrating an internal structure of a computer device according to an embodiment of the invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, the present invention provides a method for automatically identifying and processing a chip package defect, which includes:
s1, acquiring an initial image of a chip after packaging by using an industrial CCD camera; the industrial CCD camera is adopted to collect images, the optical images received by the image sensor are converted into electrical signals which can be processed by a computer, when shooting is carried out, the chip is transmitted through the conveyor belt, and when the chip is transmitted to the position right below the industrial CCD camera, the chip is fixed through the conveyor belt baffle of the upper production line, so that the position deviation caused by motion inertia is prevented, and the industrial CCD camera can shoot clear chip packaging images conveniently.
S2, adjusting the hyperparameters of the YOLOv3 algorithm model with pre-collected training images of packaged chips to obtain a trained YOLOv3 algorithm model. The deep-learning algorithm YOLOv3 (You Only Look Once, version 3) is a one-stage algorithm: after the features of the input image are extracted, the category and position of the target can be predicted rapidly, so the detection speed is high and can meet the requirement of real-time defect detection in industrial production. The backbone network Darknet-53 of YOLOv3 extracts features downsampled by 32, 16 and 8 times respectively, realizing target detection at different sizes; a multi-scale fusion method is adopted for local feature interaction, and fused prediction is performed on the three feature maps Y1, Y2 and Y3 of different scales extracted from the backbone network. Feature maps of different scales divide the input image into different numbers of cells; the greater the number, the easier it is to detect small target objects. YOLOv3 clusters 9 prior boxes, and the feature map of each scale is assigned 3 prior boxes of different sizes. Training continues from the model parameters darknet53.conv.74 pre-trained on ImageNet. Rotation, translation, scaling and horizontal flipping are selected as the main data augmentation methods to increase the number of training samples, improve the generalization ability of the model and avoid overfitting.
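As an illustrative sketch, the four augmentation operations named above (rotation, translation, scaling, horizontal flipping) could be applied with OpenCV roughly as follows; the angle, shift and scale ranges are assumed values chosen for the example, and the corresponding label boxes would of course need the same transforms.

```python
import cv2
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate rotated, translated, scaled and flipped variants of one training image."""
    h, w = image.shape[:2]
    samples = []

    # Rotation by a small random angle about the image center
    angle = np.random.uniform(-10, 10)
    M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    samples.append(cv2.warpAffine(image, M_rot, (w, h)))

    # Translation by up to 5% of the image size
    tx, ty = np.random.uniform(-0.05, 0.05, 2) * (w, h)
    M_shift = np.float32([[1, 0, tx], [0, 1, ty]])
    samples.append(cv2.warpAffine(image, M_shift, (w, h)))

    # Scaling (zoom in or out), resized back to the original resolution
    scale = np.random.uniform(0.9, 1.1)
    scaled = cv2.resize(image, None, fx=scale, fy=scale)
    samples.append(cv2.resize(scaled, (w, h)))

    # Horizontal flip
    samples.append(cv2.flip(image, 1))
    return samples
```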
S3, performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing from the coordinate information returned with the detection result, judging missing-chip defects, and segmenting according to the detection frame spacing to obtain single chip images;
the step S3 specifically comprises the following steps:
s31, performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, and returning the category, confidence score and boundary frame coordinate information to which the initial image belongs;
s32, determining the coordinate set of the detection frames as b_i = {[x_1, y_1, w_1, h_1], [x_2, y_2, w_2, h_2], …, [x_i, y_i, w_i, h_i]}, sorting b_i in increasing order of the x_i coordinates and renumbering to locate the chips; where i is the detected chip number, (x_i, y_i) is the upper-left corner coordinate of the ith bounding box, and (w_i, h_i) are the width and height of the ith bounding box;
s33, calculating the distance between two adjacent boundary frame coordinates as the distance between adjacent chips, wherein the formula is as follows:
d_i = √((x_{i+1} - x_i)² + (y_{i+1} - y_i)²)

where (x_i, y_i) is the upper-left corner coordinate of the ith bounding box; when d_i is greater than the maximum preset gap value, it is determined that a missing-chip defect has occurred between the ith chip and the (i+1)th chip;
s34, dividing according to the detection frame spacing to obtain a single chip image with the defect of part missing.
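A minimal sketch of the spacing check in steps S32 to S34 is given below, under the assumption that the detector returns (x, y, w, h) boxes as described; the max_gap value and the integer cropping are illustrative only.

```python
import numpy as np

def split_chips(image, boxes, max_gap=120.0):
    """boxes: (x, y, w, h) tuples returned by the detector for one initial image.
    Returns the cropped single chip images and the indices i where a
    missing-chip defect is flagged between chip i and chip i+1."""
    boxes = sorted(boxes, key=lambda b: b[0])          # sort by x of the upper-left corner
    missing = []
    for i in range(len(boxes) - 1):
        (x1, y1, _, _), (x2, y2, _, _) = boxes[i], boxes[i + 1]
        d = np.hypot(x2 - x1, y2 - y1)                 # spacing d_i between adjacent boxes
        if d > max_gap:                                # larger than the preset gap value
            missing.append(i)
    chips = [image[int(y):int(y + h), int(x):int(x + w)] for (x, y, w, h) in boxes]
    return chips, missing
```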
S4, sequentially performing preprocessing, Otsu threshold segmentation of the pins and Harris-SIFT feature point matching on the single chip image to determine whether a defect is detected in the single chip image. For defect detection, an image segmentation algorithm, namely Otsu region splitting, extracts the pin part; straight lines are fitted to the 4 edge points of the binary image by the least-squares method to judge chip skew; template matching is performed with an improved Harris-SIFT algorithm; and defect classification is judged from the distribution region of the matched feature points and the proportion of matched point pairs, identifying reversed chips, blurred printing and imprecise plastic packaging, so that the chip is finally judged to be a qualified product or one of the various defective products.
Since chip images acquired at industrial sites often contain noise due to factors such as sensor material properties, photographing illumination angles, etc., the resulting image data is first preprocessed, such as light source correction, image denoising, etc. The histogram equalization method can be used for processing some collected images which are too dark and too bright, enrich the details of the images, enhance the quality of the images and reduce the influence of bright areas generated by the reflection of the light source.
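A possible preprocessing routine of the kind described (histogram equalization plus denoising) is sketched below; the use of a median filter and its kernel size are assumptions, not requirements of the method.

```python
import cv2

def preprocess(bgr_image):
    """Reduce illumination artefacts and sensor noise in a captured chip image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)       # histogram equalization enriches image detail
    return cv2.medianBlur(equalized, 3)      # light denoising
```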
The Otsu threshold segmentation of the pins for the single chip image comprises the following steps:
the input is single chip image number, single chip three-channel image, single chip coordinate position information, single chip image height, single chip image width, pin number, width proportion coefficient occupied by chip main body area and pin area, and output is divided chip pin binary image.
S401, conducting pin area blocking on the single chip image to obtain a plurality of pin areas of the single chip image;
s402, acquiring a three-channel image of the single chip image, and converting the three-channel image into a gray level image;
s403, transversely dividing the gray level image into a left pin area, a chip main body and a right pin area according to an input proportionality coefficient k according to the proportional position relation of the pins in the single chip image and the chip main body, and respectively storing the left pin area and the right pin area; wherein the width pixel range
for each of the three areas is determined by the input proportionality coefficient k and the image width;
s404, transversely equally dividing n+1 equal parts of the left pin area and the right pin area respectively to obtain a left pin block area set and a right pin block area set;
s405, traversing each area, and each pixel of each area, in sequence-number order, and, if a pixel lies in the image edge region, replacing its value with the mean gray value of its 3×3 neighborhood in the original image;
s406, traversing each region successively according to the sequence number, respectively counting and normalizing gray level histograms in each region to obtain average gray level values in each region, testing a plurality of thresholds to obtain optimal segmentation thresholds of each region, and carrying out image binarization segmentation according to the optimal thresholds of different regions so as to update a left pin block region set and a right pin block region set;
s407, re-splicing the areas in the left and right pin block area sets back into the left and right pin areas according to their sequence numbers, splicing the left and right pin areas together, and generating the segmented chip pin binary image. The segmented pin part may be truncated owing to illumination and other problems, so morphological operations are applied to the noise in the pin image. The closing operation removes foreground noise and fills holes in the binary image: the image is first dilated, which connects adjacent scattered pin points and effectively solves the pin-truncation problem, and the dilated image is then eroded to remove the redundant noise inside the chip.
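The block-wise Otsu segmentation and closing of steps S404 to S407 could look roughly like this sketch, assuming an 8-bit gray-scale pin region has already been cut out; the split axis, the block count n and the 3×3 structuring element are assumptions made for illustration.

```python
import cv2
import numpy as np

def segment_pin_region(pin_gray: np.ndarray, n: int = 3) -> np.ndarray:
    """Binarize one pin region block by block with per-block Otsu thresholds,
    re-splice the blocks and apply a morphological closing."""
    blocks = np.array_split(pin_gray, n + 1, axis=0)   # n+1 equal parts (axis assumed)
    binarized = []
    for block in blocks:
        # Each block gets its own optimal Otsu threshold
        _, bw = cv2.threshold(block, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binarized.append(bw)
    stitched = np.concatenate(binarized, axis=0)       # re-splice by sequence number
    # Closing = dilation (reconnects truncated pins) followed by erosion (removes noise)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(stitched, cv2.MORPH_CLOSE, kernel)
```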
S408, using the four edge points of the segmented pin image as corner points, fitting four straight lines by the least-squares method to form a quadrilateral as the minimum circumscribed rectangle of the pin image edge, and determining the geometric center position of the image by the center method; where the starting coordinate of the upper-left corner of the image is set as P(0, 0) and the lower-right corner as Q(m, n), the center point coordinates (x_0, y_0) are calculated as

x_0 = (Σ_x Σ_y x·f(x, y)) / (Σ_x Σ_y f(x, y))

y_0 = (Σ_x Σ_y y·f(x, y)) / (Σ_x Σ_y f(x, y))
where m and n are the number of rows and columns of pin image pixels, respectively, and f (x, y) is the gray value of the image at point (x, y).
For the angle parameter, an extension line is drawn taking the center point coordinate P as the origin; the intersection with the boundary of the straight line fitted by the least-squares method is marked as O(0, 0), a relative rectangular coordinate system is established with O as the origin, and the coordinate of the marked point Q is (x, y). The chip deflection angle θ is then calculated as

θ = arctan(y / x)
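As a sketch of the center method and skew check just described (assuming the gray-level-weighted centroid formula above and measuring the angle of Q relative to the origin O):

```python
import numpy as np

def center_and_skew(pin_image: np.ndarray, origin_o, point_q):
    """Return the geometric center (x0, y0) of the pin image and the chip
    deflection angle theta (degrees) of the marked point Q relative to O."""
    ys, xs = np.indices(pin_image.shape)
    f = pin_image.astype(float)                       # gray value f(x, y)
    x0 = (xs * f).sum() / f.sum()
    y0 = (ys * f).sum() / f.sum()
    dx, dy = point_q[0] - origin_o[0], point_q[1] - origin_o[1]
    theta = np.degrees(np.arctan2(dy, dx))            # theta = arctan(y / x)
    return (x0, y0), theta
```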
the Harris-SIFT feature point matching of the single chip image comprises the following steps:
and mapping the images to a small scale space after detecting the angular points in the large scale space by adopting a Harris operator, establishing SIFT feature descriptors, and carrying out accurate matching on the images by adopting an angular point set type vector clustering algorithm on a correction part at matching points so as to provide incorrect matching points. And carrying out characteristic point detection and characteristic point description on the template image and the image to be detected by a Harris-SIFT image matching algorithm to obtain respective target characteristic point sets, further carrying out characteristic point matching on the target characteristic point set of the template image and the target characteristic point set of the image to be detected, and finally correcting the matching points.
S411, detecting an image corner, wherein the Harris corner scans an object by establishing a window, and the calculation formula of a corresponding corner response function R is as follows:
R = det(M) - k·(trace M)²

det(M) = λ_1·λ_2

trace M = λ_1 + λ_2

where det(M) is the determinant of M, trace M is the trace of M, λ_1 and λ_2 are the two eigenvalues of the autocorrelation matrix, and k is a constant, usually taken between 0.04 and 0.06;
s412, determining corner feature vectors, generating a corner set after feature point detection is completed, and distributing gradient amplitude m (x, y) and direction theta (x, y) capable of reflecting corner features to each corner by adopting a SIFT algorithm, wherein the formula is as follows:
m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)

θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))

where L(x, y) denotes the scale space in which the feature point lies, obtained by convolving a Gaussian function with the original image;
s413, generating SIFT feature description vectors: the neighborhood is rotated about the principal orientation of the keypoint, 8 gradient orientations are computed for each of the 16 seed points in 4×4 windows, each feature point thus generates a 128-dimensional SIFT feature vector, and the SIFT feature vectors are normalized to eliminate the interference of illumination variation;
s414, matching the feature points: a K-nearest-neighbor algorithm (K is usually set to 2) is used to find the nearest neighbor and the second-nearest neighbor in Euclidean distance, and a pair is determined to be a matching point when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is smaller than a set value T.
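An illustrative sketch of steps S411 to S414 using OpenCV (SIFT is available in opencv-python 4.4 and later); the Harris parameters, the corner-response threshold, the keypoint size and the ratio T are assumed values.

```python
import cv2
import numpy as np

def harris_sift_match(template_gray, test_gray, T=0.7, k=0.04):
    """Harris corners described with SIFT descriptors, matched with the
    nearest / second-nearest neighbour ratio test (K = 2)."""
    sift = cv2.SIFT_create()

    def detect_and_describe(gray):
        # Corner response R = det(M) - k * trace(M)^2, thresholded at 1% of its maximum
        response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=k)
        ys, xs = np.where(response > 0.01 * response.max())
        keypoints = [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]
        return sift.compute(gray, keypoints)           # -> (keypoints, 128-D descriptors)

    kp1, des1 = detect_and_describe(template_gray)
    kp2, des2 = detect_and_describe(test_gray)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)          # nearest and second-nearest
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < T * p[1].distance]
    return kp1, kp2, good
```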
Among the matched point pairs, the pair with the greatest similarity is not necessarily a correct match. Usually a random sample consensus (RANSAC) algorithm is applied to the matched point pairs: a homography matrix is computed from a group of inlier points, the mapping error is computed from the homography matrix, the inlier set is re-estimated iteratively according to an error threshold, and once the optimal inlier set is found the outliers are judged to be mismatched points. Therefore, correcting the matching points using the K-means clustering algorithm includes:
the input is single-chip image number, single-chip three-channel image, single-chip coordinate position information, single-chip image width and height, initial matching point pair set and clustering initial center point number, and the output is corrected matching point pair set.
T1, for k = 1 → N, take the Euclidean distance d_k between the kth matched point pair as the first feature and the angle θ_k between the line connecting the matched points and the horizontal as the second feature, and add (d_k, θ_k) to the category vector set F.
T2, perform K-means clustering on the category vector set F with the number of clusters set to K = 2, generating 2 cluster center points; compute the distance of every category vector F_k to the cluster centers, update the center points, and iterate the clustering, stopping when the change of the center points meets the convergence requirement.
T3, since the correct matching point pairs are concentrated, the category containing the most point pairs corresponds to the correct matches; the point pairs in the other, mismatched category are removed from the initial matching point pair set C to obtain the corrected matching point pair set P.
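Steps T1 to T3 could be realised, for example, with scikit-learn's KMeans as in the sketch below, operating on the matches produced by the previous sketch; using arctan2 for θ_k and majority-cluster selection are assumptions consistent with the description.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_matches(kp1, kp2, matches):
    """Cluster the (d_k, theta_k) features of the matched pairs with K = 2
    and keep the larger cluster as the correct matches (set P)."""
    features = []
    for m in matches:
        (x1, y1), (x2, y2) = kp1[m.queryIdx].pt, kp2[m.trainIdx].pt
        d_k = np.hypot(x2 - x1, y2 - y1)          # Euclidean distance of the pair
        theta_k = np.arctan2(y2 - y1, x2 - x1)    # angle of the connecting line
        features.append([d_k, theta_k])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(features))
    keep = np.argmax(np.bincount(labels))         # the category with the most point pairs
    return [m for m, lab in zip(matches, labels) if lab == keep]
```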
S5, when the defect is detected in the single chip image, sending a signal to an alarm device for defect alarm, and determining the single chip image as a defect image;
and S6, correlating the defect image with the defect detection time of the defect image, and storing the defect image into a defect image library so as to predict the defect number in a preset time period in the future according to the defect history data.
The step S6 specifically comprises the following steps:
s61, counting the number of the defect images in a set time period before the current time according to the defect image library; the set period of time may be set to one month, one quarter, half year, etc., and the allocation adjustment is performed according to the specific situation, which is not limited herein. Each defect image has detection time, and the defect images can be classified in time according to the detection time to obtain the number of the defect images in a set time period.
S62, calculating a plurality of defect rates in unit time in a set time period according to the defect image number and the total number of chips in the set time period before the current time; the calculation method is as follows: defect rate per unit time = number of defective images/total number of chips within a set period of time; the unit time is set to a time smaller than the set time period, and if the set time period is one month, the unit time may be set to one week; setting the time period to be one quarter, and setting the unit time to be one week or one month; the set time period is half a year, and the unit time may be one week, one month, or one quarter.
S63, taking an average value of the defect rates in a plurality of unit time as the defect rate in the unit time; for example, when the set time period is one month and the unit time is one week, there are four unit times in the set time period, each unit time has a defect rate, and the four defect rates are averaged to be the defect rate of one week.
S64, multiplying the defect rate in unit time by the expected production quantity of chips in unit time in a preset time period in the future to obtain the defect quantity in the preset time period in the future; as described above, after the defect rate of one week is obtained, how many unit times are in the future predetermined period of time are obtained, and how much the expected production quantity is per unit time, and then the defect rate in the unit time is multiplied by the expected production quantity of chips in the unit time to obtain the defect quantity of one unit time, and the defect quantities of a plurality of unit times in the future predetermined period of time are added to obtain the defect quantity in the future predetermined period of time.
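A numerical sketch of S61 to S64, reading the defect rate as per-unit-time defect images divided by per-unit-time production (one possible interpretation of the formula in S62); the example numbers are invented.

```python
def predict_defect_count(defect_counts, chip_counts, expected_per_unit, future_units):
    """defect_counts / chip_counts: per-unit-time history inside the set time period.
    Returns the predicted number of defects in the future predetermined period."""
    rates = [d / c for d, c in zip(defect_counts, chip_counts)]   # defect rate per unit time
    mean_rate = sum(rates) / len(rates)                           # averaged defect rate
    return mean_rate * expected_per_unit * future_units

# e.g. four weekly rates, five future weeks at an expected 20,000 chips per week:
# predict_defect_count([12, 9, 15, 10], [20000] * 4, 20000, 5)
```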
The method applies the deep-learning YOLOv3 algorithm combined with image-processing technology to the automatic identification and processing of chip packaging. A CCD industrial camera is installed in the existing workshop to obtain an initial image of the packaged chips; chip targets are located with the YOLOv3 target-detection algorithm and single chip images are segmented out; defect classification is then performed on each single chip image. A Harris operator combined with the scale-invariant feature transform feature description method matches the template image against the various defect images, and a feature clustering method removes mismatched points from the matching points, improving the matching accuracy and enabling the various defective chip products to be judged. Finally, the chip defect rate is determined from the results of chip defect detection, the number of defects in a predetermined time period is predicted from the defect rate, and machine faults are analyzed or production workers are reminded about production quality. At the same time, the drawbacks of identifying packaged chips by human eye are avoided: the labor intensity of inspection workers is reduced, false detections caused by their visual fatigue are avoided, the reliability of product detection is improved, and the product qualification rate is raised.
As shown in fig. 2, the present invention further provides an apparatus for automatically identifying and processing a chip package defect, including:
the acquisition module 1 is used for acquiring an initial image of a chip after being packaged by an industrial CCD camera;
the training module 2 is used for adjusting the super parameters of the YOLOv3 algorithm model by adopting a pre-acquired training image after chip encapsulation so as to obtain a trained YOLOv3 algorithm model;
the detection module 3 is used for performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing according to the coordinate information returned with the detection result, judging missing-chip defects, and segmenting according to the detection frame spacing to obtain single chip images;
the processing module 4 is used for sequentially preprocessing the single chip image, otsu threshold segmentation pins and Harris-SIFT feature point matching so as to determine whether defects are detected in the single chip image;
an alarm module 5, configured to send a signal to an alarm device to perform defect alarm when a defect is detected in the single chip image, and determine the single chip image as a defect image;
and the prediction module 6 is used for correlating the defect image with the defect detection time of the defect image and storing the defect image into a defect image library so as to predict the defect number in a preset future time period according to the defect history data.
In one embodiment, the detection module 3 comprises:
the return unit is used for detecting the target chip of the initial image by utilizing the trained YOLOv3 algorithm model and returning the category, the confidence score and the boundary frame coordinate information to which the initial image belongs;
a determining unit for determining the coordinate set of the detection frames as b_i = {[x_1, y_1, w_1, h_1], [x_2, y_2, w_2, h_2], …, [x_i, y_i, w_i, h_i]}, sorting b_i in increasing order of the x_i coordinates and renumbering to locate the chips; where i is the detected chip number, (x_i, y_i) is the upper-left corner coordinate of the ith bounding box, and (w_i, h_i) are the width and height of the ith bounding box;
the calculating unit is used for calculating the distance between two adjacent boundary frame coordinates as the distance between adjacent chips, and the formula is as follows:
d_i = √((x_{i+1} - x_i)² + (y_{i+1} - y_i)²)

where (x_i, y_i) is the upper-left corner coordinate of the ith bounding box; when d_i is greater than the maximum preset gap value, it is determined that a missing-chip defect has occurred between the ith chip and the (i+1)th chip;
the dividing unit is used for dividing and obtaining a single chip image with the defect of part missing according to the distance between the detection frames.
In one embodiment, in the processing module 4, performing Otsu thresholding pins on the single chip image includes:
the blocking unit is used for blocking the pin areas of the single chip image to obtain a plurality of pin areas of the single chip image;
The acquisition unit is used for acquiring three-channel images of the single chip image and converting the three-channel images into gray images;
the dividing unit is used for transversely dividing the gray level image into a left pin area, a chip main body and a right pin area according to an input proportion coefficient k according to the proportion position relation of the pins in the single chip image and the chip main body, and respectively storing the left pin area and the right pin area; wherein the width pixel range
for each of the three areas is determined by the input proportion coefficient k and the image width;
the equally dividing unit is used for equally dividing the left pin area and the right pin area into n+1 equal parts transversely to obtain a left pin block area set and a right pin block area set;
the first traversing unit is used for traversing each area, and each pixel of each area, in sequence-number order, and, if a pixel lies in the image edge region, replacing its value with the mean gray value of its 3×3 neighborhood in the original image;
the second traversing unit is used for traversing each region successively according to the sequence number, respectively counting the gray level histogram in each region and normalizing to obtain the average gray level value in each region, testing a plurality of thresholds to obtain the optimal segmentation threshold value of each region, and carrying out image binarization segmentation according to the optimal threshold values of different regions so as to update the left pin block region set and the right pin block region set;
And the splicing unit is used for re-splicing the areas in the left and right pin block area sets back into the left and right pin areas according to their sequence numbers, splicing the left and right pin areas together, and generating the segmented chip pin binary image.
In one embodiment, the processing module 4 further comprises:
the center position determining unit is used for using the four edge points of the segmented pin image as corner points, fitting four straight lines by the least-squares method to form a quadrilateral as the minimum circumscribed rectangle of the pin image edge, and determining the geometric center position of the image by the center method; where the starting coordinate of the upper-left corner of the image is set as P(0, 0) and the lower-right corner as Q(m, n), the center point coordinates (x_0, y_0) are calculated as
x_0 = (Σ_x Σ_y x·f(x, y)) / (Σ_x Σ_y f(x, y))

y_0 = (Σ_x Σ_y y·f(x, y)) / (Σ_x Σ_y f(x, y))
where m and n are the number of rows and columns of pin image pixels, respectively, and f (x, y) is the gray value of the image at point (x, y).
In one embodiment, the processing module 4 performs Harris-SIFT feature point matching on the single chip image, including:
the scanning unit is used for detecting image corner points, scanning objects by establishing windows, and the calculation formula of the corresponding corner point response function R is as follows:
R = det(M) - k·(trace M)²

det(M) = λ_1·λ_2

trace M = λ_1 + λ_2

where det(M) is the determinant of M, trace M is the trace of M, λ_1 and λ_2 are the two eigenvalues of the autocorrelation matrix, and k is a constant;
the distribution unit is used for determining corner feature vectors, generating corner sets, distributing gradient amplitude values m (x, y) and directions theta (x, y) capable of reflecting corner features for each corner by adopting a SIFT algorithm, and the formula is as follows:
m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)

θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))

where L(x, y) denotes the scale space in which the feature point lies, obtained by convolving a Gaussian function with the original image;
the rotation unit is used for generating SIFT feature description vectors: the neighborhood is rotated about the principal orientation of the keypoint, 8 gradient orientations are computed for each of the 16 seed points in 4×4 windows, and each feature point thus generates a 128-dimensional SIFT feature vector;
the matching unit is used for matching the feature points: a K-nearest-neighbor algorithm is used to find the nearest neighbor and the second-nearest neighbor in Euclidean distance, and a pair is determined to be a matching point when the ratio of its nearest-neighbor distance to its second-nearest-neighbor distance is smaller than a set value T.
In one embodiment, prediction module 6 includes:
a statistics unit, configured to count, according to the defect image library, the number of defect images in a set period of time before a current time;
a first calculating unit, configured to calculate a plurality of defect rates per unit time in a set time period according to the number of defect images and a total number of chips in the set time period before a current time;
An average unit for averaging the defect rates in a plurality of unit time as the defect rate in the unit time;
and the second calculation unit is used for multiplying the defect rate per unit time by the expected number of chips to be produced per unit time in the future predetermined time period to obtain the number of defects in the future predetermined time period.
The above modules and units are used for correspondingly executing each step in the method for automatically identifying and processing the chip package defect, and specific implementation manners thereof are described with reference to the above method embodiments and are not repeated herein.
As shown in fig. 3, the present invention also provides a computer device, which may be a server, and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used for storing all the data required by the automatic identification and processing method for chip packaging defects. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the method for automatically identifying and processing chip packaging defects.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of a portion of the architecture in connection with the present application and is not intended to limit the computer device to which the present application is applied.
An embodiment of the present application further provides a computer readable storage medium, on which a computer program is stored, where the computer program when executed by a processor implements any one of the above-mentioned methods for automatically identifying and processing a chip package defect.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by hardware associated with a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database or another medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises that element.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; all equivalent structures or equivalent process transformations made by using the contents of the description and drawings of the present invention, or applied directly or indirectly in other related technical fields, are likewise included in the scope of protection of the invention.

Claims (9)

1. The automatic identification and processing method for the chip packaging defects is characterized by comprising the following steps:
collecting an initial image of the packaged chip by using an industrial CCD camera;
adjusting the hyperparameters of a YOLOv3 algorithm model by using pre-collected training images of packaged chips, so as to obtain a trained YOLOv3 algorithm model;
performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing from the coordinate information returned in the detection result, judging missing-component defects, and dividing the image according to the detection frame spacing to obtain single chip images;
sequentially performing preprocessing, Otsu threshold segmentation of the pins, and Harris-SIFT feature point matching on the single chip image, so as to determine whether a defect is detected in the single chip image;
when the defect is detected in the single chip image, sending a signal to an alarm device for defect alarm, and determining the single chip image as a defect image;
and associating the defect image with its defect detection time and storing it into a defect image library, so as to predict the number of defects in a preset future time period according to the defect history data.
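For illustration, a minimal sketch of the detection step in this method is given below, using OpenCV's DNN module to run a trained YOLOv3 model; the configuration and weight file names, the 416×416 input size, and the confidence threshold are placeholders, not values taken from the patent.

```python
import cv2
import numpy as np

# A trained YOLOv3 network loaded through OpenCV's DNN module; the file names
# below are placeholders for whatever configuration/weights the training step produced.
net = cv2.dnn.readNetFromDarknet("yolov3_chip.cfg", "yolov3_chip.weights")


def detect_chips(image_bgr, conf_threshold=0.5):
    """Return a list of (x, y, w, h) bounding boxes for detected chips."""
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes = []
    for output in outputs:
        for det in output:
            scores = det[5:]
            confidence = float(scores[np.argmax(scores)])
            if confidence > conf_threshold:
                # YOLO returns normalized center coordinates and box size.
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```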
2. The method for automatically identifying and processing chip packaging defects according to claim 1, wherein the step of performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing from the coordinate information returned in the detection result, judging missing-component defects, and dividing according to the detection frame spacing to obtain single chip images comprises:
performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, and returning the category, the confidence score, and the bounding box coordinate information of the targets detected in the initial image;
determining the detection frame coordinate set as b_i = {[x_1, y_1, w_1, h_1], [x_2, y_2, w_2, h_2], …, [x_i, y_i, w_i, h_i]}, and sorting and renumbering b_i in increasing order of the x_i coordinate to locate the chips; where i is the detected chip number, (x_i, y_i) is the upper left corner coordinate of the i-th bounding box, and (w_i, h_i) are the width and height of the i-th bounding box;
calculating the distance between the coordinates of two adjacent bounding boxes as the spacing between adjacent chips, with the formula:
d_i = √((x_{i+1} - x_i)² + (y_{i+1} - y_i)²)
wherein (x_i, y_i) denotes the upper left corner coordinate of the i-th bounding box; when d_i is larger than the maximum preset gap value, it is determined that a missing-component defect has occurred between the i-th chip and the (i+1)-th chip;
and dividing according to the detection frame spacing to obtain the single chip image in which the missing-component defect occurs.
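A minimal sketch of the spacing check described in this claim is given below; the input format (a list of (x, y, w, h) tuples) and the function name are illustrative assumptions.

```python
import math


def find_missing_chip_gaps(boxes, max_gap):
    """boxes: list of (x, y, w, h) tuples returned by the detector.
    Returns the indices i for which the spacing between box i and box i+1
    exceeds max_gap, i.e. a chip is assumed to be missing between them."""
    # Sort and renumber the boxes in increasing order of the x coordinate
    # of the top-left corner, as required before computing the spacing.
    boxes = sorted(boxes, key=lambda b: b[0])
    gaps = []
    for i in range(len(boxes) - 1):
        x_i, y_i = boxes[i][0], boxes[i][1]
        x_next, y_next = boxes[i + 1][0], boxes[i + 1][1]
        # d_i = sqrt((x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2)
        d_i = math.hypot(x_next - x_i, y_next - y_i)
        if d_i > max_gap:
            gaps.append(i)
    return gaps
```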
3. The method for automatically identifying and processing chip packaging defects according to claim 1, wherein, in the step of sequentially performing preprocessing, Otsu threshold segmentation of the pins, and Harris-SIFT feature point matching on the single chip image to determine whether a defect is detected in the single chip image, performing Otsu threshold segmentation of the pins on the single chip image comprises:
dividing the pin area of the single chip image into blocks to obtain a plurality of pin regions of the single chip image;
acquiring a three-channel image of the single chip image, and converting the three-channel image into a gray level image;
according to the proportional position relation between the pins and the chip main body in the single chip image, transversely dividing the gray level image into a left pin area, a chip main body and a right pin area according to an input proportional coefficient k, and storing the left pin area and the right pin area respectively; wherein, with W denoting the image width, the width pixel range [0, k·W] is the left pin area, (k·W, (1-k)·W) is the chip main body, and [(1-k)·W, W] is the right pin area;
transversely dividing the left pin area and the right pin area into n+1 equal blocks respectively, so as to obtain a left pin block area set and a right pin block area set;
traversing each region, and each pixel point of each region, successively according to the sequence numbers, and if a pixel point is located in the image edge region, replacing the pixel value of that point with the average gray value of its 3×3 neighborhood in the original image;
traversing each region successively according to the sequence numbers, counting and normalizing the gray histogram of each region to obtain the average gray value of each region, testing a plurality of thresholds to obtain the optimal segmentation threshold of each region, and performing binarization segmentation of the image according to the optimal threshold of each region, so as to update the left pin block area set and the right pin block area set;
and re-stitching the regions in the left and right pin block area sets back into the left and right pin areas according to the sequence numbers, stitching the left and right pin areas together, and generating a segmented chip pin binary image.
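A minimal OpenCV sketch of the block-wise Otsu segmentation described in this claim is given below; the default values of k and n, the direction of the blocking, and the function names are assumptions made for illustration.

```python
import cv2
import numpy as np


def segment_pins_blockwise(chip_bgr, k=0.2, n=4):
    """Split a single chip image into left/right pin strips using the
    proportional coefficient k, run Otsu thresholding block by block,
    and stitch the blocks back into one binary pin image."""
    gray = cv2.cvtColor(chip_bgr, cv2.COLOR_BGR2GRAY)
    width = gray.shape[1]
    left_strip = gray[:, : int(k * width)]
    right_strip = gray[:, int((1 - k) * width):]

    def otsu_blocks(strip):
        # Divide the strip into n + 1 blocks along the pin direction and
        # binarize each block with its own Otsu-optimal threshold.
        blocks = np.array_split(strip, n + 1, axis=0)
        binarized = []
        for block in blocks:
            block = np.ascontiguousarray(block)  # OpenCV expects contiguous data
            _, binary = cv2.threshold(block, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            binarized.append(binary)
        return np.vstack(binarized)

    # Re-stitch the left and right pin strips into a single binary image.
    return np.hstack([otsu_blocks(left_strip), otsu_blocks(right_strip)])
```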
4. The method for automatically identifying and processing chip packaging defects according to claim 3, wherein, after the steps of re-stitching the regions in the left and right pin block area sets back into the left and right pin areas according to the sequence numbers, stitching the left and right pin areas together, and generating the segmented chip pin binary image, the method further comprises:
using the four edge points of the segmented pin image as corner points, fitting four straight lines respectively by the least squares method to form a quadrangle as the minimum bounding rectangle of the pin image edge, and determining the geometric center position of the image by the center method; wherein the starting coordinate of the upper left corner of the image is set as P(0, 0) and the lower right corner is set as Q(m, n), and the center point coordinate (x_0, y_0) is calculated as:
x_0 = ( Σ_{x=0..m} Σ_{y=0..n} x·f(x, y) ) / ( Σ_{x=0..m} Σ_{y=0..n} f(x, y) )
y_0 = ( Σ_{x=0..m} Σ_{y=0..n} y·f(x, y) ) / ( Σ_{x=0..m} Σ_{y=0..n} f(x, y) )
where m and n are the number of rows and columns of pin image pixels, respectively, and f (x, y) is the gray value of the image at point (x, y).
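A small NumPy sketch of a gray-value-weighted center computation is given below; reading the center formula as a gray-weighted centroid, as well as the function name and the use of the whole image rather than the fitted rectangle, are assumptions of this example.

```python
import numpy as np


def gray_centroid(pin_image):
    """Compute the geometric center (x0, y0) of a segmented pin image,
    weighting each pixel coordinate by its gray value f(x, y)."""
    f = pin_image.astype(np.float64)
    total = f.sum()
    if total == 0:
        # Blank image: fall back to the middle of the frame.
        rows, cols = f.shape
        return (cols - 1) / 2.0, (rows - 1) / 2.0
    ys, xs = np.indices(f.shape)   # row (y) and column (x) index grids
    x0 = (xs * f).sum() / total
    y0 = (ys * f).sum() / total
    return x0, y0
```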
5. The method for automatically identifying and processing chip packaging defects according to claim 1, wherein, in the step of sequentially performing preprocessing, Otsu threshold segmentation of the pins, and Harris-SIFT feature point matching on the single chip image to determine whether a defect is detected in the single chip image, performing Harris-SIFT feature point matching on the single chip image comprises:
detecting image corner points by scanning the image with a sliding window, wherein the corresponding corner response function R is calculated as:
R = det(M) - k·(trace M)²
det(M) = λ_1·λ_2
trace M = λ_1 + λ_2
where det(M) is the determinant of M, trace M is the trace of M, λ_1 and λ_2 are the two eigenvalues of the autocorrelation matrix M, and k is a constant;
determining the corner feature vectors and generating a corner set, and assigning to each corner a gradient magnitude m(x, y) and direction θ(x, y) that reflect the corner features by using the SIFT algorithm, with the formulas:
m(x, y) = √((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))
where L denotes the scale space in which the feature point is located, obtained by convolving a Gaussian function with the original image;
generating SIFT feature description vectors: rotating the neighborhood of each key point to its main direction, computing 8 gradient directions for each seed point in the 16 windows of 4×4 pixels, and generating a 128-dimensional SIFT feature vector for each feature point;
and matching the feature points: using the K-nearest-neighbor algorithm to obtain the nearest neighbor and the next-nearest neighbor in terms of Euclidean distance, and determining a pair as matching points when the ratio of the nearest-neighbor distance to the next-nearest-neighbor distance is smaller than a set value T.
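An OpenCV-based sketch of the corner detection and ratio-test matching steps in this claim is given below; it substitutes OpenCV's built-in Harris detector (via goodFeaturesToTrack) and SIFT descriptor for the hand-rolled Harris-SIFT combination described above, and all parameter values are illustrative.

```python
import cv2


def harris_sift_match(gray1, gray2, ratio_T=0.7):
    """Match feature points between a template pin image and a test pin image:
    Harris corners described with SIFT descriptors, followed by a nearest /
    next-nearest neighbour Euclidean-distance ratio test."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def detect_and_describe(gray):
        # Harris corner detection via goodFeaturesToTrack(useHarrisDetector=True).
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                          minDistance=5, useHarrisDetector=True, k=0.04)
        if corners is None:
            return [], None
        keypoints = [cv2.KeyPoint(float(x), float(y), 10) for [[x, y]] in corners]
        # SIFT descriptors (gradient magnitude/orientation histograms) at the corners.
        return sift.compute(gray, keypoints)

    _, des1 = detect_and_describe(gray1)
    _, des2 = detect_and_describe(gray2)
    if des1 is None or des2 is None:
        return []

    # K-nearest-neighbour matching with the ratio test: keep a match only when
    # the nearest distance is below ratio_T times the next-nearest distance.
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio_T * pair[1].distance:
            good.append(pair[0])
    return good
```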
6. The method for automatically identifying and processing the chip package defects according to claim 1, wherein the step of associating the defect image with the defect detection time thereof and storing the defect image in a defect image library to predict the number of defects in a predetermined period of time in the future based on the defect history data comprises:
Counting the number of the defect images in a set time period before the current time according to the defect image library;
calculating a plurality of per-unit-time defect rates within the set time period according to the number of defect images and the total number of chips in the set time period before the current time;
taking the average of the per-unit-time defect rates as the defect rate per unit time;
and multiplying the defect rate per unit time by the expected number of chips to be produced per unit time in a future reserved time period, so as to obtain the predicted number of defects in the future reserved time period.
7. An apparatus for automatically identifying and handling chip package defects, comprising:
the acquisition module is used for acquiring an initial image of the packaged chip by adopting an industrial CCD camera;
the training module is used for adjusting the hyperparameters of the YOLOv3 algorithm model by using pre-collected training images of packaged chips, so as to obtain a trained YOLOv3 algorithm model;
the detection module is used for performing target chip detection on the initial image by using the trained YOLOv3 algorithm model, calculating the detection frame spacing according to the coordinate information returned in the detection result, judging missing-component defects, and dividing according to the detection frame spacing to obtain single chip images;
the processing module is used for sequentially performing preprocessing, Otsu threshold segmentation of the pins, and Harris-SIFT feature point matching on the single chip image, so as to determine whether a defect is detected in the single chip image;
the alarm module is used for sending a signal to the alarm device to carry out defect alarm when the defect is detected in the single chip image, and determining the single chip image as a defect image;
and the prediction module is used for correlating the defect image with the defect detection time of the defect image and storing the defect image into a defect image library so as to predict the defect number in a preset time period in the future according to the defect history data.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202310047087.7A 2023-01-31 2023-01-31 Automatic identification and processing method for chip packaging defects Pending CN116228678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310047087.7A CN116228678A (en) 2023-01-31 2023-01-31 Automatic identification and processing method for chip packaging defects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310047087.7A CN116228678A (en) 2023-01-31 2023-01-31 Automatic identification and processing method for chip packaging defects

Publications (1)

Publication Number Publication Date
CN116228678A true CN116228678A (en) 2023-06-06

Family

ID=86574224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310047087.7A Pending CN116228678A (en) 2023-01-31 2023-01-31 Automatic identification and processing method for chip packaging defects

Country Status (1)

Country Link
CN (1) CN116228678A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116559183A (en) * 2023-07-11 2023-08-08 钛玛科(北京)工业科技有限公司 Method and system for improving defect judging efficiency
CN116559183B (en) * 2023-07-11 2023-11-03 钛玛科(北京)工业科技有限公司 Method and system for improving defect judging efficiency
CN116703912A (en) * 2023-08-07 2023-09-05 深圳市鑫赛科科技发展有限公司 Mini-host network port integrity visual detection method
CN116703912B (en) * 2023-08-07 2023-11-24 深圳市鑫赛科科技发展有限公司 Mini-host network port integrity visual detection method
CN117686516A (en) * 2024-01-29 2024-03-12 江苏优众微纳半导体科技有限公司 Automatic chip appearance defect detection system based on machine vision
CN117686516B (en) * 2024-01-29 2024-05-10 江苏优众微纳半导体科技有限公司 Automatic chip appearance defect detection system based on machine vision

Similar Documents

Publication Publication Date Title
CN116228678A (en) Automatic identification and processing method for chip packaging defects
CN111474184B (en) AOI character defect detection method and device based on industrial machine vision
CN109658584B (en) Bill information identification method and device
CN111476827B (en) Target tracking method, system, electronic device and storage medium
CN111507976B (en) Defect detection method and system based on multi-angle imaging
CN113160192A (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
US11392787B2 (en) Method for grasping texture-less metal parts based on bold image matching
CN111739020B (en) Automatic labeling method, device, equipment and medium for periodic texture background defect label
CN105447512A (en) Coarse-fine optical surface defect detection method and coarse-fine optical surface defect detection device
CN104464079A (en) Multi-currency-type and face value recognition method based on template feature points and topological structures of template feature points
CN113724231A (en) Industrial defect detection method based on semantic segmentation and target detection fusion model
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN110766016A (en) Code spraying character recognition method based on probabilistic neural network
CN114241469A (en) Information identification method and device for electricity meter rotation process
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111738979B (en) Certificate image quality automatic checking method and system
CN111814740A (en) Pointer instrument reading identification method and device, computer equipment and storage medium
CN111898659A (en) Target detection method and system
CN117037132A (en) Ship water gauge reading detection and identification method based on machine vision
CN116342525A (en) SOP chip pin defect detection method and system based on Lenet-5 model
CN114332622A (en) Label detection method based on machine vision
CN109726722B (en) Character segmentation method and device
CN112308061B (en) License plate character recognition method and device
US20230138821A1 (en) Inspection method for inspecting an object and machine vision system
CN114943738A (en) Sensor packaging curing adhesive defect identification method based on visual identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination