CN115527049A - High-precision measurement method for lead frame pin spacing - Google Patents

High-precision measurement method for lead frame pin spacing

Info

Publication number
CN115527049A
CN115527049A (application CN202211256870.6A)
Authority
CN
China
Prior art keywords
image
pyramid
pin
template
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211256870.6A
Other languages
Chinese (zh)
Inventor
张小国
王士强
史志豪
刘复铭
王慧青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202211256870.6A priority Critical patent/CN115527049A/en
Publication of CN115527049A publication Critical patent/CN115527049A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-precision measurement method for the pin spacing of a lead frame, belonging to the field of machine vision. The method comprises the following steps: after a reference image and an image to be measured are acquired, the reference image is preprocessed and rotated to obtain multi-angle high-level pyramid rotation template images and their corresponding masks; template matching is performed for coarse matching, and the matching result is processed to obtain a pin image containing only the lead frame pin region; contours are extracted from the pin image and processed one by one to obtain pin mask images, which are fused with the image to be measured to obtain a coarse pin-positioning image; sub-pixel edge detection is performed on this coarse pin-positioning image to obtain sub-pixel edge point sets; and the minimum distance between the edge point sets of adjacent contours is computed to obtain the spacing between adjacent lead frame pins. Template matching with coarse matching improves speed; measuring the spacing from images reduces labor cost; and sub-pixel edge detection is introduced to improve precision, giving the method high computational accuracy.

Description

High-precision measurement method for lead frame pin spacing
Technical Field
The invention belongs to the field of machine vision, and particularly relates to a high-precision measurement method for lead frame pin spacing.
Background
Semiconductor chips are now widely used in computing equipment, network communications, automotive electronics, aerospace, and other fields, and underpin modern life. With the large-scale development of integrated circuits, the chip packaging process is receiving more and more attention. A semiconductor lead frame is the frame that connects the contact points of the chip core to the metal leads; it is currently produced by either etching or stamping. Owing to limitations of the production process, a lead frame may exhibit defects such as deformed leads or too-small spacing between adjacent leads. These defects cause problems such as inaccurate positioning of the metal leads during bonding and poor conduction between the chip and the frame, so inspecting lead frame product quality is an important link in chip packaging technology.
In recent years, machine vision has been applied ever more widely in manufacturing and automated inspection. Conventional optical and dimensional inspection cannot be used directly once chip packaging is complete, whereas X-ray imaging, exploiting the penetrating power of X-rays and the differing absorption rates of the materials inside the package, readily yields ultrahigh-resolution structural images. Although some professional software offers dimension-measurement functions, it still requires manual, point-by-point annotation. Because a lead frame has many pins and the precision requirements are high, the conventional manual approach of sampling inspection or full inspection raises labor costs in complex production environments, is inefficient, and is prone to false detections and missed detections. Moreover, the pins are small, so current pixel-level dimension measurement has large errors, cannot meet high-precision requirements, and makes the quality of lead frame pins difficult to guarantee.
Disclosure of Invention
To solve these problems, the invention discloses a high-precision measurement method for lead frame pin spacing that measures the spacing by machine-vision image processing, replaces manual sampling inspection or full inspection, improves efficiency, and meets the requirement of sub-pixel-level high-precision measurement.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a high-precision measurement method for lead frame pin spacing comprises the following specific steps:
Step 1: acquire a reference image and an image to be measured, both imaged by X-rays and containing the lead frame and the chip core;
Step 2: preprocess the reference image from step 1 to obtain a core template image at the center of the reference image, and downsample the core template image 2 to 4 times in succession to obtain a high-level pyramid template image;
Step 3: apply image preprocessing and 2 to 4 successive rounds of downsampling to the image to be measured from step 1 to obtain a bottom pyramid image to be measured and a high-level pyramid image to be measured;
Step 4: make a mask for the high-level pyramid template image from step 2, and rotate the mask and the template image within a range whose lower bound lies between -20° and -10° and whose upper bound lies between 10° and 20°, obtaining multi-angle high-level pyramid rotation template images and the corresponding multi-angle high-level pyramid rotation mask images;
Step 5: coarse matching: sequentially template-match the multi-angle high-level pyramid rotation template images from step 4 against the high-level pyramid image to be measured from step 3, and fuse the best matching result with the bottom pyramid image to be measured from step 3 to obtain a pin image containing only the lead frame pins;
Step 6: process the pin image from step 5 and extract its contours to obtain a contour set containing all pin contours in the pin image, then process each contour in the set in turn to obtain a contour mask image for each contour;
Step 7: combining the contour mask images, perform edge detection on the image to be measured from step 1 with a sub-pixel edge detection method combining an adaptive threshold with polynomial interpolation, obtaining a sub-pixel coordinate point set for each pin edge;
Step 8: from the sub-pixel coordinate point sets of step 7, calculate the spacing between adjacent pins using the two-dimensional Euclidean distance.
Further, the reference image and the image to be measured in step 1 are high-resolution grayscale images.
Further, the specific method of step 2 is as follows:
Step 2-1: threshold the reference image from step 1, obtaining its gray threshold with the maximum between-class variance method; pixels whose gray value is above the threshold are set to 255 and pixels at or below it are set to 0, yielding a reference threshold-segmentation image;
Step 2-2: crop the center of the reference threshold-segmentation image from step 2-1, the crop extending 200 to 300 pixels beyond the chip core on each side, to obtain a core template image;
Step 2-3: apply 2 to 4 successive rounds of downsampling to the core template image from step 2-2, specifically: take the core template image as the bottom pyramid template image and keep only its odd rows and odd columns, giving a first-layer pyramid template image whose rows and columns are 1/2 those of the bottom pyramid template image; keep only the odd rows and odd columns of the first-layer pyramid template image, giving a second-layer pyramid template image whose rows and columns are 1/2 those of the first layer; repeating this 2 to 4 times yields the high-level pyramid template image (a minimal sketch of steps 2-1 and 2-3 follows).
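As an illustration only, here is a minimal Python/OpenCV sketch of the thresholding and pyramid construction described in steps 2-1 to 2-3. The choice of OpenCV, the function name, and the center-crop simplification are assumptions; the patent does not specify an implementation. Keeping odd rows and odd columns (1-indexed) is expressed as slicing with a step of 2.

```python
import cv2
import numpy as np

def build_high_level_template(reference: np.ndarray, levels: int = 2) -> np.ndarray:
    """Threshold the reference image (Otsu), crop the core, and downsample."""
    # Step 2-1: Otsu (maximum between-class variance) threshold;
    # pixels above the threshold become 255, the rest 0.
    _, binary = cv2.threshold(reference, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Step 2-2: crop around the image center; the 230-pixel margin is the
    # embodiment's value and the centered crop is an assumption here.
    margin = 230
    h, w = binary.shape
    template = binary[h // 2 - margin:h // 2 + margin,
                      w // 2 - margin:w // 2 + margin]

    # Step 2-3: keep only odd rows and odd columns (1-indexed), halving
    # each dimension per pyramid level.
    for _ in range(levels):
        template = template[::2, ::2]
    return template
```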
Further, in step 3 the image to be measured from step 1 undergoes image preprocessing and downsampling; the preprocessing is the same as the threshold segmentation of step 2-1, the processed image serves as the bottom pyramid image to be measured, and the downsampling follows the procedure of step 2-3, yielding the high-level pyramid image to be measured.
Further, the specific method for making the mask and rotating the high-level pyramid template image in step 4 is as follows:
Step 4-1: make a mask for the high-level pyramid template image from step 2, defining the area occupied by the template image as the region of interest; in the mask, the gray value at region-of-interest positions is 1 and at non-region-of-interest positions is 0;
Step 4-2: in the image rotation of step 4, the rotation range is as given above (lower bound between -20° and -10°, upper bound between 10° and 20°); the rotation targets are the high-level pyramid template image and the mask made in step 4-1, rotated in one-to-one correspondence with the template's rotation angles; after rotation, non-region-of-interest positions in the mask have gray value 0 and take no part in the template-matching computation; the rotation-angle step is (layer + 1) degrees, where layer is the pyramid layer of the current rotation target; the unrotated high-level pyramid template image serves as the rotation template image for angle 0° (a sketch of the rotation follows).
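Purely as an illustration, the following Python/OpenCV sketch rotates a template and its mask in lockstep over the angle range used in the embodiment ([-15°, 15°] with a step of layer + 1 degrees). The OpenCV calls and the whole-template ROI are assumptions, not the patent's code.

```python
import cv2
import numpy as np

def rotated_templates(template: np.ndarray, layer: int,
                      lo: int = -15, hi: int = 15):
    """Yield (angle, rotated template, rotated mask) for every angle step."""
    mask = np.ones_like(template, dtype=np.uint8)  # ROI = whole template
    h, w = template.shape
    center = (w / 2.0, h / 2.0)
    step = layer + 1  # rotation-angle step in degrees, per step 4-2
    for angle in range(lo, hi + 1, step):
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        rot_t = cv2.warpAffine(template, m, (w, h))
        # Pixels rotated in from outside the ROI get mask value 0 and are
        # therefore excluded from the matching computation.
        rot_m = cv2.warpAffine(mask, m, (w, h), borderValue=0)
        yield angle, rot_t, rot_m
```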
Further, the specific steps of the coarse matching in step 5 are as follows:
Step 5-1: sequentially match the high-level pyramid rotation template images from step 4 against the high-level pyramid image to be measured from step 3 over the rotation range given in step 4, with a rotation-angle step of (layer + 1) degrees; the high-level pyramid rotation template image slides as a window over the high-level pyramid image to be measured, and the matching degree at each position is computed, giving the best high-level matching position (x0, y0) and rotation angle θ;
the matching degree calculation adopts a normalized correlation coefficient method, and the formula is as follows:
R(x, y) = Σ_{(u,v)} [T(u, v) - m_T] · [S(x+u, y+v) - m_S] / sqrt( Σ_{(u,v)} [T(u, v) - m_T]² · Σ_{(u,v)} [S(x+u, y+v) - m_S]² )
where R is the matching degree with range [-1, 1], T is the template image, S is the image to be measured, m with a subscript is the gray mean of the corresponding image, and (u, v) runs over the coordinate points of the ROI;
Step 5-2: from the best high-level matching position (x0, y0) and rotation angle θ of step 5-1, the best bottom-level matching position on the bottom pyramid image to be measured from step 3 is (2^L · x0, 2^L · y0) with rotation angle θ, where L is the layer number of the high-level pyramid template image;
Step 5-3: from the best bottom-level matching position (2^L · x0, 2^L · y0) and rotation angle θ of step 5-2, the rectangular region of the matching result on the bottom pyramid image to be measured is obtained; setting the image gray level within this rectangular region to 255 yields the pin image containing only the lead frame pins (a matching sketch follows).
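A minimal sketch of the coarse-matching loop, again assuming OpenCV. TM_CCORR_NORMED is used here because OpenCV documents mask support for that method; the patent's normalized correlation coefficient corresponds more closely to TM_CCOEFF_NORMED, so this is a named substitution rather than the patent's exact formula.

```python
import cv2
import numpy as np

def coarse_match(pyr_image: np.ndarray, rot_templates, L: int = 2):
    """Return the best bottom-level position and rotation angle.

    rot_templates: iterable of (angle, template, mask), e.g. from the
    rotated_templates sketch above; pyr_image is the high-level pyramid
    image to be measured.
    """
    best = (-1.0, (0, 0), 0)  # (score, position, angle)
    for angle, tmpl, mask in rot_templates:
        res = cv2.matchTemplate(pyr_image, tmpl, cv2.TM_CCORR_NORMED,
                                mask=mask)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, angle)
    (x0, y0), theta = best[1], best[2]
    # Map the high-level match back to the bottom pyramid layer (step 5-2).
    return (2 ** L * x0, 2 ** L * y0), theta
```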
Further, the image-processing and contour-processing operations in step 6 are as follows:
Step 6-1: the image processing of step 6 applies a morphological erosion to the pin image from step 5-3, narrowing the white regions and widening the black regions of the pin image, with a 5 × 5 erosion kernel; the gray value of the image's outer boundary, one pixel wide, is then set to 255; contours are extracted from the processed image, giving a contour set containing all pin contours in the pin image;
Step 6-2: the contour-set processing of step 6-1 sets the gray level inside each contour to 0 and the gray level outside to a uniform value between 220 and 240, so the exterior gray level is uniformly distributed and step changes in gray level are avoided, yielding a contour mask image for each contour; all pin contours are then ordered clockwise by the center position of each contour (a contour-extraction sketch follows).
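The sketch below illustrates step 6-1 with OpenCV; the erosion kernel, the border handling, the pin polarity, and the contour-retrieval flags are assumptions chosen to mirror the description.

```python
import cv2
import numpy as np

def extract_pin_contours(pin_image: np.ndarray):
    """Erode the pin image, whiten a 1-pixel border, and extract contours."""
    # Morphological erosion with a 5x5 kernel: white areas shrink and
    # black areas grow, separating pins that nearly touch.
    eroded = cv2.erode(pin_image, np.ones((5, 5), np.uint8))

    # Set the one-pixel-wide outer boundary to 255 so contours touching
    # the image edge close properly.
    eroded[0, :] = eroded[-1, :] = 255
    eroded[:, 0] = eroded[:, -1] = 255

    # Assume pins are dark on a white background, so invert before
    # findContours, which traces white blobs.
    contours, _ = cv2.findContours(255 - eroded, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```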
Further, in step 7 each pin region of the image to be measured from step 1 is extracted separately using the contour mask images, and sub-pixel edge detection is then performed on the pin regions; the specific steps are:
Step 7-1: construct Sobel partial-derivative operator templates in 4 directions: horizontal, vertical, 45°, and 135°; convolve each template with the pin region in turn to obtain gradient images in the 4 directions, then search the 4 gradient images pixel by pixel for the maximum gradient and its gradient direction, yielding a pseudo-edge image;
the Sobel partial derivative operator templates are as follows:
S_1 = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]   (horizontal)
S_2 = [ 1 2 1 ; 0 0 0 ; -1 -2 -1 ]   (vertical)
S_3 = [ 0 1 2 ; -1 0 1 ; -2 -1 0 ]   (45°)
S_4 = [ 2 1 0 ; 1 0 -1 ; 0 -1 -2 ]   (135°)
(rows ordered y-1, y, y+1 and columns x-1, x, x+1, matching the gradient formulas below)
With P(x, y) denoting the pixel gray value at a point (x, y), the gradient in each direction is computed as:
g_1(x, y) = [P(x+1, y-1) + 2P(x+1, y) + P(x+1, y+1)] - [P(x-1, y-1) + 2P(x-1, y) + P(x-1, y+1)]
g_2(x, y) = [P(x-1, y-1) + 2P(x, y-1) + P(x+1, y-1)] - [P(x-1, y+1) + 2P(x, y+1) + P(x+1, y+1)]
g_3(x, y) = [P(x, y-1) + 2P(x+1, y-1) + P(x+1, y)] - [P(x-1, y) + 2P(x-1, y+1) + P(x, y+1)]
g_4(x, y) = [P(x, y-1) + 2P(x-1, y-1) + P(x-1, y)] - [P(x, y+1) + 2P(x+1, y+1) + P(x+1, y)]
where g_i(x, y) is the gradient at a point (x, y) after applying the i-th partial-derivative operator template;
Step 7-2: because step 7-1 uses Sobel partial-derivative operator templates in four directions when computing the maximum gradient and its direction, the edges in the pseudo-edge image are more than a single pixel wide; sub-pixel edge detection requires single-pixel edges, so each pixel point (x, y) is tested for being an optimal edge point, giving the optimal edge point set of the pseudo-edge image; the test is:
g(x0, y0) ≥ g(x1, y1) and g(x0, y0) ≥ g(x2, y2)
where the point (x0, y0) is an optimal edge point, and the points (x1, y1) and (x2, y2) are its two neighbors along the gradient direction at (x0, y0);
Before testing whether a pseudo-edge point is an optimal edge point, step 7-2 first screens the pseudo-edge image from step 7-1 with an adaptive-threshold method and non-maximum suppression: a minimum and a maximum threshold filter out isolated interference points and weak edges, the maximum threshold being determined adaptively from the pseudo-edge image by the maximum between-class variance method and the minimum threshold being 0.4 to 0.6 times the maximum; the adaptive threshold makes the screening of each contour's edge points more targeted, so local pin regions of differing brightness and contrast get correspondingly local thresholds;
Step 7-3: fit the optimal edge point set from step 7-2 with Lagrange polynomial interpolation to obtain the sub-pixel edge point set; the interpolation function is:
L(x) = Σ_{k=0}^{n} y_k · Π_{i=0, i≠k}^{n} (x - x_i) / (x_k - x_i)
where x_k is an interpolation node, y_k the discrete function value, and (x_i, y_i) an edge point;
The sub-pixel coordinates are then solved as:
x = x_i + cos θ · (R_1 - R_2) / (2 · (R_1 - 2R_0 + R_2))
y = y_i + sin θ · (R_1 - R_2) / (2 · (R_1 - 2R_0 + R_2))
(θ being the gradient direction at the point (x_i, y_i))
where (x, y) is the sub-pixel edge point, R_0 is the gray amplitude at the point (x_i, y_i), R_1 and R_2 are the gray amplitudes of its neighbors along the gradient direction, and the gray amplitudes are approximated by the gradient amplitudes (a sketch of steps 7-1 to 7-3 follows).
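To make steps 7-1 to 7-3 concrete, here is a minimal NumPy sketch of the four-direction gradients, the non-maximum suppression, and a three-point parabola-vertex refinement, which is one standard way to realize the polynomial interpolation. The threshold stand-in (mean plus one standard deviation instead of the Otsu-derived maximum) and all names are assumptions; treat this as an illustration, not the patent's exact procedure.

```python
import numpy as np

# 4-direction Sobel partial-derivative templates (rows y-1..y+1,
# columns x-1..x+1), matching g_1..g_4 in the text.
SOBEL = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),     # horizontal
    np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]]),   # vertical
    np.array([[ 0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),    # 45 degrees
    np.array([[ 2, 1, 0], [ 1, 0, -1], [ 0, -1, -2]]),  # 135 degrees
]
# A unit step (dx, dy) along each template's gradient direction.
STEPS = [(1, 0), (0, 1), (1, -1), (-1, -1)]

def subpixel_edges(region: np.ndarray, k_min: float = 0.5):
    """Return sub-pixel (x, y) edge points of one pin region."""
    region = region.astype(np.float64)
    h, w = region.shape
    # Correlate the region with each template; wraparound at the borders
    # is harmless because the loop below skips the outermost pixels.
    grads = np.abs(np.stack([
        sum(kern[j + 1, i + 1] * np.roll(region, (-j, -i), axis=(0, 1))
            for j in (-1, 0, 1) for i in (-1, 0, 1))
        for kern in SOBEL
    ]))
    g = grads.max(axis=0)      # maximum gradient magnitude per pixel
    d = grads.argmax(axis=0)   # index of the winning direction
    # Stand-in for the Otsu-derived maximum threshold (an assumption);
    # the minimum threshold is 0.4-0.6 times the maximum, per step 7-2.
    t_max = g.mean() + g.std()
    t_min = k_min * t_max
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if g[y, x] < t_min:
                continue
            dx, dy = STEPS[d[y, x]]
            r1, r0, r2 = g[y - dy, x - dx], g[y, x], g[y + dy, x + dx]
            if r0 < r1 or r0 < r2:   # non-maximum suppression
                continue
            denom = r1 - 2 * r0 + r2
            off = 0.0 if denom == 0 else (r1 - r2) / (2 * denom)
            # Vertex of the parabola through the three samples gives the
            # sub-pixel offset along the gradient direction.
            points.append((x + off * dx, y + off * dy))
    return points
```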
Further, the two-dimensional Euclidean distance method of step 8 is as follows:

D = min{d_1, d_2, d_3, ...},  d_i = sqrt( (x_1 - x_2)² + (y_1 - y_2)² )

where d_i is the distance between two edge points taken respectively from the point sets of adjacent contours;
When finding the shortest distance between two edge point sets in step 8, because the pins are numerous and each pin's sub-pixel edge point set contains many points, the shortest distance between two sub-pixel edge point sets is found with a divide-and-conquer method, computed in turn for the sub-pixel edge point sets of adjacent pins (a sketch follows).
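For the adjacent-pin spacing itself, the sketch below computes the minimum distance between two sub-pixel point sets. In place of the patent's divide-and-conquer routine it uses a KD-tree query, a plainly named substitution with comparable O(n log n) behavior; the SciPy dependency is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_pair_distance(points_a, points_b) -> float:
    """Shortest Euclidean distance between two sub-pixel edge point sets."""
    tree = cKDTree(np.asarray(points_b, dtype=np.float64))
    dists, _ = tree.query(np.asarray(points_a, dtype=np.float64))
    return float(dists.min())

# Usage: spacing of each pair of adjacent pins, with the contours
# pre-sorted clockwise as in step 6-2.
# spacings = [min_pair_distance(edges[i], edges[i + 1])
#             for i in range(len(edges) - 1)]
```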
The beneficial effects of the invention are:
The application provides a high-precision measurement method for lead frame pin spacing. After the reference image and the image to be measured are acquired, the reference image is processed and rotated to obtain multi-angle high-level pyramid rotation template images and the corresponding masks, which are then template-matched against the high-level pyramid image to be measured by normalized cross-correlation. The matching result is fused with the bottom pyramid image to be measured to obtain a pin image containing only the lead frame pin region. Contours are extracted from the pin image and each contour is edge-expanded to obtain pin mask images, which are fused with the image to be measured to obtain a coarse pin-positioning image. Sub-pixel edge detection combining an adaptive threshold with polynomial interpolation is performed on this image to obtain sub-pixel edge point sets, and the minimum distance between the edge point sets of adjacent contours gives the spacing between adjacent lead frame pins. Template matching serves only to determine the region of interest, and matching on the high-level pyramid image raises the matching speed; measuring the lead frame pin spacing by machine-vision image processing replaces manual sampling inspection or full inspection and improves efficiency; and sub-pixel edge detection is introduced to improve precision, giving the method high computational accuracy.
Drawings
FIG. 1 is a flow chart of a method for high-precision measurement of lead frame pin pitch according to the present invention;
FIG. 2 is a partially enlarged schematic view of a pin image including only lead frame pins;
FIG. 3 is a schematic enlarged partial view of the contour mask image, selected from the same area as FIG. 2;
FIG. 4 shows the result of extracting the image to be measured with the mask region of FIG. 3;
FIG. 5 is a schematic diagram of an enlarged image of a pin sub-pixel coordinate point location.
Detailed Description
The present invention will be further illustrated below with reference to the accompanying drawings and specific embodiments, which should be understood as merely illustrative and not limiting the scope of protection.
As shown in FIG. 1, the present invention provides a high-precision measurement method for lead frame pin spacing that measures the spacing by machine-vision image processing, replaces manual sampling inspection or full inspection, improves efficiency, and meets the requirement of sub-pixel-level high-precision measurement. The specific steps are as follows:
Step 1: acquire a reference image and an image to be measured, both imaged by X-rays and containing the lead frame and the chip core; both are 8-bit grayscale images with a resolution of 1536 × 1536;
Step 2: preprocess the reference image from step 1 to obtain the core template image at the center of the reference image, and downsample the core template image twice in succession to obtain the high-level pyramid template image;
Step 3: apply image preprocessing and two successive rounds of downsampling to the image to be measured from step 1 to obtain the bottom pyramid image to be measured and the high-level pyramid image to be measured;
Step 4: make a mask for the high-level pyramid template image from step 2, and rotate the mask and the template image within [-15°, 15°] to obtain the multi-angle high-level pyramid rotation template images and the corresponding multi-angle high-level pyramid rotation mask images;
Step 5: coarse matching: sequentially template-match the multi-angle high-level pyramid rotation template images from step 4 against the high-level pyramid image to be measured from step 3, and process the bottom pyramid image to be measured from step 3 according to the position coordinates and rotation angle of the maximum matching similarity, obtaining the pin image containing only the lead frame pins; a partial enlargement of the pin image is shown in FIG. 2;
Step 6: process the pin image from step 5 and extract its contours to obtain a contour set containing all pin contours in the pin image, then process each contour in the set in turn, including contour edge expansion, to obtain a contour mask image for each contour; a schematic of the FIG. 2 area within the contour mask image is shown in FIG. 3;
Step 7: combining the contour mask images, extract each pin of the image to be measured from step 1 separately, and detect the edge of each pin with a sub-pixel edge detection method combining an adaptive threshold with polynomial interpolation, obtaining a sub-pixel coordinate point set for each pin edge;
Step 8: from the sub-pixel coordinate point sets of step 7, calculate the spacing between adjacent pins using the two-dimensional Euclidean distance.
Further, the specific method of step 2 is as follows:
Step 2-1: threshold segmentation is performed on the reference image from step 1, and its gray threshold is obtained with the maximum between-class variance method; in this embodiment the gray threshold is 128; pixels whose gray value is above the threshold are set to 255 and pixels at or below it are set to 0, yielding the reference threshold-segmentation image;
Step 2-2: in this embodiment, the center of the reference threshold-segmentation image from step 2-1 is cropped, the crop extending 230 pixels beyond the chip core on each side, giving the core template image;
Step 2-3: the core template image from step 2-2 is downsampled twice, specifically: the core template image is taken as the bottom pyramid template image and only its odd rows and odd columns are kept, giving a first-layer pyramid template image whose rows and columns are 1/2 those of the bottom pyramid template image; only the odd rows and odd columns of the first-layer pyramid template image are kept, giving a second-layer pyramid template image whose rows and columns are 1/2 those of the first layer; this second-layer pyramid template image is the high-level pyramid template image of this embodiment.
Further, in step 3 the image to be measured from step 1 undergoes image preprocessing and downsampling; the preprocessing is the same as the threshold segmentation of step 2-1 (the gray threshold computed by the maximum between-class variance method is 120), the processed image serves as the bottom pyramid image to be measured, and the downsampling follows the procedure of step 2-3, yielding the high-level pyramid image to be measured.
Further, the specific method for making the mask and rotating the high-level pyramid template image in step 4 is as follows:
Step 4-1: a mask is made for the high-level pyramid template image from step 2, the area occupied by the template image is defined as the region of interest, the gray value at region-of-interest positions in the mask is 1 and at non-region-of-interest positions is 0, and non-region-of-interest positions take no part in the matching computation;
Step 4-2: in this embodiment the rotation range of step 4 is [-15°, 15°]; the rotation targets are the high-level pyramid template image and the mask made in step 4-1, rotated in one-to-one correspondence with the template's rotation angles; after rotation, non-region-of-interest positions in the mask have gray value 0 and take no part in the template-matching computation; the rotation-angle step is (layer + 1) degrees, where layer is the pyramid layer of the current rotation target (at the first layer, for example, the step is 2°); the unrotated high-level pyramid template image serves as the rotation template image for angle 0°, giving the multi-angle high-level pyramid rotation template images and rotation mask images.
Further, the specific steps of the coarse matching in step 5 are as follows:
Step 5-1: the high-level pyramid rotation template images from step 4 are sequentially matched against the high-level pyramid image to be measured from step 3 over the rotation range [-15°, 15°] with a rotation-angle step of 3°; the rotation template image slides as a window over the high-level pyramid image to be measured, and the matching degree at each position is computed, giving the best high-level matching position (x0, y0) and rotation angle θ;
The matching degree is computed with the normalized correlation coefficient method:
R(x, y) = Σ_{(u,v)} [T(u, v) - m_T] · [S(x+u, y+v) - m_S] / sqrt( Σ_{(u,v)} [T(u, v) - m_T]² · Σ_{(u,v)} [S(x+u, y+v) - m_S]² )
where R is the matching degree with range [-1, 1], T is the template image, S is the image to be measured, m with a subscript is the gray mean of the corresponding image, and (u, v) runs over the coordinate points of the ROI;
Step 5-2: from the best high-level matching position (x0, y0) and rotation angle θ of step 5-1 and the two rounds of downsampling, the best bottom-level matching position on the bottom pyramid image to be measured from step 3 is (4 · x0, 4 · y0) with rotation angle θ;
Step 5-3: from the best bottom-level matching position (4 · x0, 4 · y0) and rotation angle θ of step 5-2, the rectangular region of the matching result on the bottom pyramid image to be measured is obtained; the image gray level within this rectangular region is 255 and that of the remainder of the image is 0, giving the pin image containing only the lead frame pins; FIG. 2 shows a partial enlargement, where the gray value of the blank area is 255.
Further, extracting contours from the pin image in step 6 yields the number of contours in the image and a coordinate point set for each contour; in this embodiment there are 104 contours, each of which is edge-expanded in turn to obtain 104 contour mask images, and these are fused with the image to be measured from step 1 to obtain 104 pin regions of interest.
Further, the image-processing and contour-processing operations in step 6 are as follows:
Step 6-1: the image processing of step 6 applies a morphological erosion to the pin image from step 5-3, narrowing the white regions and widening the black regions of the pin image, with a 5 × 5 erosion kernel; the gray value of the image's outer boundary, one pixel wide, is then set to 255; contours are extracted from the processed image, giving a contour set containing all 104 pin contours in the pin image;
Step 6-2: each contour from step 6-1 is processed by setting the gray level inside the contour to 0 and the gray level outside to 230, so the exterior gray level is uniformly distributed and step changes in gray level are avoided, yielding contour mask images for the 104 contours; a schematic of the FIG. 2 area within the contour mask image is shown in FIG. 3, where the gray value of the blank area is 230;
In this embodiment, to ensure that adjacent contours can be accurately paired later, the 104 contours are also sorted clockwise by the center position of each contour (one plausible sorting sketch follows).
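Purely as an illustration of the clockwise ordering (the patent gives no code for it), one plausible approach sorts contour centroids by their angle about the image center; the function name and the use of cv2.moments are assumptions.

```python
import math
import cv2

def sort_contours_clockwise(contours, image_shape):
    """Order contours clockwise by centroid angle about the image center."""
    cy0, cx0 = image_shape[0] / 2.0, image_shape[1] / 2.0

    def angle(contour):
        m = cv2.moments(contour)
        cx = m["m10"] / m["m00"]  # centroid x
        cy = m["m01"] / m["m00"]  # centroid y
        # atan2 increases counter-clockwise; negate for clockwise order.
        return -math.atan2(cy - cy0, cx - cx0)

    return sorted(contours, key=angle)
```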
Further, in step 7 each pin region of the image to be measured from step 1 is extracted separately using the contour mask images; FIG. 4 shows the result of extraction with FIG. 3 as the mask, the gray value of the white region being 230; sub-pixel edge detection is then performed on the pin region, with the following specific steps:
Step 7-1: Sobel partial-derivative operator templates are constructed in 4 directions: horizontal, vertical, 45°, and 135°; each template is convolved with the pin region in turn to obtain gradient images in the 4 directions, and the 4 gradient images are searched pixel by pixel for the maximum gradient and its gradient direction, yielding a pseudo-edge image;
the Sobel partial derivative operator templates are as follows:
S_1 = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]   (horizontal)
S_2 = [ 1 2 1 ; 0 0 0 ; -1 -2 -1 ]   (vertical)
S_3 = [ 0 1 2 ; -1 0 1 ; -2 -1 0 ]   (45°)
S_4 = [ 2 1 0 ; 1 0 -1 ; 0 -1 -2 ]   (135°)
(rows ordered y-1, y, y+1 and columns x-1, x, x+1, matching the gradient formulas below)
With P(x, y) denoting the pixel gray value at a point (x, y), the gradient in each direction is computed as:
g_1(x, y) = [P(x+1, y-1) + 2P(x+1, y) + P(x+1, y+1)] - [P(x-1, y-1) + 2P(x-1, y) + P(x-1, y+1)]
g_2(x, y) = [P(x-1, y-1) + 2P(x, y-1) + P(x+1, y-1)] - [P(x-1, y+1) + 2P(x, y+1) + P(x+1, y+1)]
g_3(x, y) = [P(x, y-1) + 2P(x+1, y-1) + P(x+1, y)] - [P(x-1, y) + 2P(x-1, y+1) + P(x, y+1)]
g_4(x, y) = [P(x, y-1) + 2P(x-1, y-1) + P(x-1, y)] - [P(x, y+1) + 2P(x+1, y+1) + P(x+1, y)]
where g_i(x, y) is the gradient at a point (x, y) after applying the i-th partial-derivative operator template;
Step 7-2: because step 7-1 uses Sobel partial-derivative operator templates in four directions when computing the maximum gradient and its direction, the edges in the pseudo-edge image are more than a single pixel wide; sub-pixel edge detection requires single-pixel edges, so each pixel point (x, y) is tested for being an optimal edge point, giving the optimal edge point set of the pseudo-edge image; the test is:
g(x0, y0) ≥ g(x1, y1) and g(x0, y0) ≥ g(x2, y2)
where the point (x0, y0) is an optimal edge point, and the points (x1, y1) and (x2, y2) are its two neighbors along the gradient direction at (x0, y0);
In this embodiment, before testing whether a pseudo-edge point is an optimal edge point, the pseudo-edge image from step 7-1 is further screened with an adaptive-threshold method and non-maximum suppression: a minimum and a maximum threshold filter out isolated interference points and weak edges, the maximum threshold being adaptively determined as 161 from the pseudo-edge image by the maximum between-class variance method and the minimum threshold being 0.5 times the maximum; the adaptive threshold makes the screening of each contour's edge points more targeted, so local pin regions of differing brightness and contrast get correspondingly local thresholds;
Step 7-3: the optimal edge point set from step 7-2 is fitted with Lagrange polynomial interpolation to obtain the sub-pixel edge point set; the interpolation function is:
L(x) = Σ_{k=0}^{n} y_k · Π_{i=0, i≠k}^{n} (x - x_i) / (x_k - x_i)
where x_k is an interpolation node, y_k the discrete function value, and (x_i, y_i) an edge point;
The sub-pixel coordinates are then solved as:
x = x_i + cos θ · (R_1 - R_2) / (2 · (R_1 - 2R_0 + R_2))
y = y_i + sin θ · (R_1 - R_2) / (2 · (R_1 - 2R_0 + R_2))
(θ being the gradient direction at the point (x_i, y_i))
where (x, y) is the sub-pixel edge point, R_0 is the gray amplitude at the point (x_i, y_i), R_1 and R_2 are the gray amplitudes of its neighbors along the gradient direction, and the gray amplitudes are approximated by the gradient amplitudes.
In this embodiment, for the pin region shown in FIG. 4, the following table compares the calculated sub-pixel coordinate point set with the coordinate point set obtained without sub-pixel edge detection, taking the y-axis coordinate as an example:
                          P1       P2       P3       P4       P5       P6       P7
x-axis                    1311     1312     1313     1314     1319     1327     1336
y-axis, pixel level       820      819      819      819      820      821      822
y-axis, sub-pixel level   819.146  818.833  818.899  818.967  819.655  820.455  821.674
In the high-resolution image of this example, the sub-pixel edge detection method yields more accurate coordinate point positions; for ease of observation the image is magnified 10 times, and a schematic of the pin sub-pixel coordinate point positions is shown in FIG. 5.
Further, the two-dimensional Euclidean distance method of step 8 is as follows:

D = min{d_1, d_2, d_3, ...},  d_i = sqrt( (x_1 - x_2)² + (y_1 - y_2)² )

where d_i is the distance between two edge points taken respectively from the point sets of adjacent contours;
In this embodiment, when the shortest distance between two edge point sets is sought in step 8, because the pins are numerous and each pin's sub-pixel edge point set contains many points, the shortest distance between two sub-pixel edge point sets is found with a divide-and-conquer method, computed in turn for the sub-pixel edge point sets of adjacent pins;
The divide-and-conquer method splits a complex problem into two or more identical or similar sub-problems, and splits those again until the final sub-problems can be solved simply and directly. In this embodiment, each point is first labeled with the index of the contour it came from; the point sets of adjacent contours are then processed in turn with the divide-and-conquer method, and the point pair attaining the shortest distance, together with the distance itself, is recorded.
It should be noted that the above merely illustrates the technical idea of the present invention and does not thereby limit its scope of protection; those skilled in the art can make various modifications and refinements without departing from the principle of the invention, and such modifications and refinements fall within the scope of the claims of the present invention.

Claims (9)

1. A high-precision measurement method for lead frame pin spacing, characterized by comprising the following steps:
step 1: acquiring a reference image and an image to be measured, both imaged by X-rays and comprising a lead frame and a chip core;
step 2: performing image preprocessing on the reference image of step 1 to obtain a core template image at the center of the reference image, and downsampling the core template image 2 to 4 times in succession to obtain a high-level pyramid template image;
step 3: performing image preprocessing and 2 to 4 successive rounds of downsampling on the image to be measured of step 1 to obtain a bottom pyramid image to be measured and a high-level pyramid image to be measured;
step 4: making a mask of the high-level pyramid template image of step 2, and rotating the mask and the high-level pyramid template image within a range whose lower bound lies between -20° and -10° and whose upper bound lies between 10° and 20°, to obtain multi-angle high-level pyramid rotation template images and corresponding multi-angle high-level pyramid rotation mask images;
step 5: coarse matching: sequentially template-matching the multi-angle high-level pyramid rotation template images of step 4 against the high-level pyramid image to be measured of step 3, and fusing the best matching result with the bottom pyramid image to be measured of step 3 to obtain a pin image containing only the lead frame pins;
step 6: performing image processing on the pin image of step 5 and extracting contours to obtain a contour set containing all pin contours in the pin image, and processing each contour in the contour set in turn to obtain a contour mask image of each contour;
step 7: combining the contour mask images, performing edge detection on the image to be measured of step 1 with a sub-pixel edge detection method combining an adaptive threshold with polynomial interpolation, to obtain a sub-pixel coordinate point set of each pin edge;
step 8: calculating the spacing between adjacent pins from the sub-pixel coordinate point sets of step 7 using the two-dimensional Euclidean distance.
2. The method for high-precision measurement of lead frame pin spacing according to claim 1, wherein the reference image and the image to be measured in step 1 are high-resolution grayscale images imaged by X-rays and comprising the lead frame and the chip core.
3. The method for high-precision measurement of lead frame pin spacing according to claim 1, wherein the specific method of step 2 is as follows:
step 2-1: thresholding the reference image of step 1, obtaining its gray threshold with the maximum between-class variance method; pixels whose gray value is above the threshold are set to 255 and pixels at or below it are set to 0, yielding a reference threshold-segmentation image;
step 2-2: cropping the center of the reference threshold-segmentation image of step 2-1, the crop extending 200 to 300 pixels beyond the chip core on each side, to obtain a core template image;
step 2-3: downsampling the core template image of step 2-2 through 2 to 4 successive rounds, specifically: taking the core template image as the bottom pyramid template image and keeping only its odd rows and odd columns to obtain a first-layer pyramid template image whose rows and columns are 1/2 those of the bottom pyramid template image; keeping only the odd rows and odd columns of the first-layer pyramid template image to obtain a second-layer pyramid template image whose rows and columns are 1/2 those of the first layer; and repeating this 2 to 4 times to obtain the high-level pyramid template image.
4. The method for high-precision measurement of lead frame pin spacing according to claim 3, wherein in step 3 the image to be measured of step 1 undergoes image preprocessing and downsampling, the image preprocessing being consistent with the threshold segmentation of step 2-1, the processed image serving as the bottom pyramid image to be measured, and the downsampling being consistent with the procedure of step 2-3, to obtain the high-level pyramid image to be measured.
5. The method for high-precision measurement of lead frame pin spacing according to claim 1, wherein the mask making and the rotation of the high-level pyramid template image in step 4 are specifically:
step 4-1: making a mask of the high-level pyramid template image of step 2, defining the area occupied by the template image as the region of interest, the gray value at region-of-interest positions in the mask being 1 and at non-region-of-interest positions being 0;
step 4-2: in the image rotation of step 4, the rotation range being as defined in claim 1 (lower bound between -20° and -10°, upper bound between 10° and 20°); the rotation targets being the high-level pyramid template image and the mask made in step 4-1, rotated in one-to-one correspondence with the template's rotation angles; after rotation, non-region-of-interest positions in the mask having gray value 0 and taking no part in the template-matching computation; the rotation-angle step being (layer + 1) degrees, where layer is the pyramid layer of the current rotation target; and the unrotated high-level pyramid template image serving as the high-level pyramid rotation template image for rotation angle 0°.
6. The method for high-precision measurement of lead frame pin spacing according to claim 1, wherein the specific steps of the coarse matching of step 5 are:
step 5-1: sequentially matching the high-level pyramid rotation template images of step 4 against the high-level pyramid image to be measured of step 3 over the rotation range defined in claim 1, with a rotation-angle step of (layer + 1) degrees; the high-level pyramid rotation template image sliding as a window over the high-level pyramid image to be measured and the matching degree at each position being computed, to obtain the best high-level matching position (x0, y0) and rotation angle θ;
the matching degree calculation adopts a normalized correlation coefficient method, and the formula is as follows:
R(x, y) = Σ_{(u,v)} [T(u, v) - m_T] · [S(x+u, y+v) - m_S] / sqrt( Σ_{(u,v)} [T(u, v) - m_T]² · Σ_{(u,v)} [S(x+u, y+v) - m_S]² )
where R is the matching degree with range [-1, 1], T is the template image, S is the image to be measured, m with a subscript is the gray mean of the corresponding image, and (u, v) runs over the coordinate points of the ROI;
step 5-2: from the best high-level matching position (x0, y0) and rotation angle θ of step 5-1, obtaining the best bottom-level matching position (2^L · x0, 2^L · y0) and rotation angle θ on the bottom pyramid image to be measured of step 3, where L is the layer number of the high-level pyramid template image;
step 5-3: from the best bottom-level matching position (2^L · x0, 2^L · y0) and rotation angle θ of step 5-2, obtaining the rectangular region of the matching result on the bottom pyramid image to be measured, and setting the image gray level within this rectangular region to 255 to obtain the pin image containing only the lead frame pins.
7. The method for high-precision measurement of lead frame lead spacing according to claim 6, wherein the image processing and contour processing operation in step 6 is as follows:
step 6-1: the image processing of step 6 applying a morphological erosion to the pin image of step 5-3 so that white regions of the pin image narrow and black regions widen, the erosion kernel being 5 × 5; then setting the gray value of the image's outer boundary, one pixel wide, to 255; and extracting contours from the processed image to obtain a contour set containing all pin contours in the pin image;
step 6-2: the contour-set processing of step 6-1 setting the gray level inside each contour to 0 and the gray level outside to a value between 220 and 240 so that the exterior gray level is uniformly distributed and step changes in gray level are avoided, to obtain a contour mask image of each contour; all pin contours being ordered clockwise by the center position of each contour.
8. The method for high-precision measurement of lead frame pin spacing according to claim 1, wherein in step 7 each pin region of the image to be measured of step 1 is extracted separately using the contour mask images, and sub-pixel edge detection is then performed on the pin regions, the specific steps of the sub-pixel edge detection being:
step 7-1: constructing Sobel partial-derivative operator templates in 4 directions: horizontal, vertical, 45°, and 135°; convolving each template with the pin region in turn to obtain gradient images in the 4 directions, and searching the 4 gradient images pixel by pixel for the maximum gradient and its gradient direction, to obtain a pseudo-edge image;
the Sobel partial derivative operator template is as follows:
Figure FDA0003889932430000031
the pixel gray value of a certain point (x, y) is expressed by P (x, y), and the calculation formula of each directional gradient is as follows:
g_1(x, y) = [P(x+1, y-1) + 2P(x+1, y) + P(x+1, y+1)] - [P(x-1, y-1) + 2P(x-1, y) + P(x-1, y+1)]
g_2(x, y) = [P(x-1, y-1) + 2P(x, y-1) + P(x+1, y-1)] - [P(x-1, y+1) + 2P(x, y+1) + P(x+1, y+1)]
g_3(x, y) = [P(x, y-1) + 2P(x+1, y-1) + P(x+1, y)] - [P(x-1, y) + 2P(x-1, y+1) + P(x, y+1)]
g_4(x, y) = [P(x, y-1) + 2P(x-1, y-1) + P(x-1, y)] - [P(x, y+1) + 2P(x+1, y+1) + P(x+1, y)]
where g_i(x, y) is the gradient at a point (x, y) after applying the i-th partial-derivative operator template;
step 7-2: because step 7-1 uses Sobel partial-derivative operator templates in four directions when computing the maximum gradient and its direction, the edges in the pseudo-edge image are more than a single pixel wide; sub-pixel edge detection requires single-pixel edges, so each pixel point (x, y) is tested for being an optimal edge point, giving the optimal edge point set of the pseudo-edge image; the specific test being:
g(x0, y0) ≥ g(x1, y1) and g(x0, y0) ≥ g(x2, y2)
where the point (x0, y0) is an optimal edge point, and the points (x1, y1) and (x2, y2) are its two neighbors along the gradient direction at (x0, y0);
before testing whether a pseudo-edge point is an optimal edge point, step 7-2 first screens the pseudo-edge image of step 7-1 with an adaptive-threshold method and non-maximum suppression: a minimum and a maximum threshold filter out isolated interference points and weak edges, the maximum threshold being determined adaptively from the pseudo-edge image by the maximum between-class variance method and the minimum threshold being 0.4 to 0.6 times the maximum; the adaptive threshold makes the screening of each contour's edge points more targeted, so local pin regions of differing brightness and contrast get correspondingly local thresholds;
step 7-3: fitting the optimal edge point set of step 7-2 with Lagrange polynomial interpolation to obtain the sub-pixel edge point set, the interpolation function being:
L(x) = Σ_{k=0}^{n} y_k · Π_{i=0, i≠k}^{n} (x - x_i) / (x_k - x_i)
where x_k is an interpolation node, y_k the discrete function value, and (x_i, y_i) an edge point;
the sub-pixel coordinates being solved as:
x = x_i + cos θ · (R_1 - R_2) / (2 · (R_1 - 2R_0 + R_2))
y = y_i + sin θ · (R_1 - R_2) / (2 · (R_1 - 2R_0 + R_2))
(θ being the gradient direction at the point (x_i, y_i))
where (x, y) is the sub-pixel edge point, R_0 is the gray amplitude at the point (x_i, y_i), R_1 and R_2 are the gray amplitudes of its neighbors along the gradient direction, and the gray amplitudes are approximated by the gradient amplitudes.
9. The method for high-precision measurement of lead frame pin spacing according to claim 1, wherein the two-dimensional Euclidean distance of step 8 is:

D = min{d_1, d_2, d_3, ...},  d_i = sqrt( (x_1 - x_2)² + (y_1 - y_2)² )

where d_i is the distance between two edge points taken respectively from the point sets of adjacent contours;
in step 8, when finding the shortest distance between two edge point sets, because the pins are numerous and each pin's sub-pixel edge point set contains many points, the shortest distance between two sub-pixel edge point sets is found with a divide-and-conquer method, computed in turn for the sub-pixel edge point sets of adjacent pins.
CN202211256870.6A 2022-10-14 2022-10-14 High-precision measurement method for lead frame pin spacing Pending CN115527049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211256870.6A CN115527049A (en) 2022-10-14 2022-10-14 High-precision measurement method for lead frame pin spacing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211256870.6A CN115527049A (en) 2022-10-14 2022-10-14 High-precision measurement method for lead frame pin spacing

Publications (1)

Publication Number Publication Date
CN115527049A (en) 2022-12-27

Family

ID=84701547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211256870.6A Pending CN115527049A (en) 2022-10-14 2022-10-14 High-precision measurement method for lead frame pin spacing

Country Status (1)

Country Link
CN (1) CN115527049A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894841A (en) * 2023-09-08 2023-10-17 山东天鼎舟工业科技有限公司 Visual detection method for quality of alloy shell of gearbox
CN116894841B (en) * 2023-09-08 2023-11-28 山东天鼎舟工业科技有限公司 Visual detection method for quality of alloy shell of gearbox
CN117152144A (en) * 2023-10-30 2023-12-01 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing
CN117152144B (en) * 2023-10-30 2024-01-30 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination