CN114708451A - Detection method and system for punching and molding of lead frame plastic package integrated circuit - Google Patents


Info

Publication number
CN114708451A
CN114708451A (application CN202111493585.1A)
Authority
CN
China
Prior art keywords
gaussian distribution
density map
distribution density
dimensional gaussian
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111493585.1A
Other languages
Chinese (zh)
Inventor
金宣黄
李运勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruian Hele Electronic Technology Co ltd
Original Assignee
Ruian Hele Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruian Hele Electronic Technology Co ltd filed Critical Ruian Hele Electronic Technology Co ltd
Priority to CN202111493585.1A priority Critical patent/CN114708451A/en
Publication of CN114708451A publication Critical patent/CN114708451A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of lead frame plastic package integrated circuits, and specifically discloses a detection method and system for punch forming of a lead frame plastic package integrated circuit. A convolutional neural network model extracts high-dimensional features from the sensing matrices of an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor; a Gaussian density map fuses the resulting first and second feature maps; a Gaussian mixture model then progressively simplifies the three-dimensional Gaussian density map; and the convolutional neural network is trained jointly with density-map simplification loss function values and a classification loss function value, helping the model learn a consistent feature representation within the high-dimensional features. In this way, abnormalities can be detected more reliably, protecting the equipment and ensuring product yield.

Description

Detection method and system for punching and forming of lead frame plastic package integrated circuit
Technical Field
The present disclosure relates to the field of lead frame plastic package integrated circuits, and more particularly, to a method and a system for detecting die-cut molding of a lead frame plastic package integrated circuit.
Background
After the integrated circuit is molded, the residual gate epoxy molding compound on the plastic package body must be trimmed off and the pins bent into shape, among other operations, to meet the circuit's mounting requirements. In the subsequent process steps the pins must be cut free from the lead frame, leaving the frame and the circuit joined only by the tie bars, so the circuit is mechanically weak; moreover, uncontrolled factors in the preceding molding process may have already damaged the tie bars, so the molding compound of the integrated circuit may break off during the subsequent punch forming, scrapping the product and damaging the associated processing equipment.
Accordingly, during punch forming of the leads of a lead frame plastic package integrated circuit, abnormalities such as cracking or detachment of the plastic package body and pin breakage can occur, and if they are not found in time they can cause batch scrapping of products or damage to the forming dies. In the conventional inspection process, however, only the surface of the plastic package is inspected, so detection is difficult and accuracy is low. To better detect these abnormalities, protect the equipment, and ensure product yield, a detection method for punch forming of lead frame plastic package integrated circuits is desired.
Disclosure of Invention
The present application is proposed to solve the above technical problems. Embodiments of the application provide a detection method and system for punch forming of a lead frame plastic package integrated circuit. A convolutional neural network model extracts high-dimensional features from the sensing matrices of an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor; a Gaussian density map fuses the first and second feature maps; a Gaussian mixture model then progressively simplifies the three-dimensional Gaussian density map; and the convolutional neural network is trained jointly with density-map simplification loss function values and a classification loss function value, helping the model learn a consistent feature representation within the high-dimensional features. In this way, abnormalities can be detected more reliably, protecting the equipment and ensuring product yield.
According to one aspect of the application, a method for detecting punching forming of a lead frame plastic package integrated circuit is provided, which comprises the following steps:
a training phase comprising:
respectively obtaining a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit through an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor;
spatially encoding the first and second sensing matrices using a convolutional neural network to obtain a first signature corresponding to the first sensing matrix and a second signature corresponding to the second sensing matrix;
constructing a three-dimensional Gaussian distribution density map by using the mean value of the feature values at the respective positions in the first feature map and the second feature map as the mean value of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution
[formula image not reproduced in this text]
Simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein the simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model comprises weighting and summing the mean and variance of each Gaussian distribution in the width dimension W, and the sum of the weights is 1;
simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, which comprises: carrying out weighted summation of the mean and variance of each Gaussian distribution over the height dimension H, wherein the sum of the weights is 1;
performing Gaussian discretization on the Gaussian distribution of each position in the one-dimensional Gaussian distribution density map to obtain a classification matrix;
passing the classification matrix through a classifier to obtain a classification loss function value;
calculating a first density map reduction loss function value that reduces the three-dimensional gaussian distribution density map to the two-dimensional gaussian distribution density map and calculating a second density map reduction loss function value that reduces the two-dimensional gaussian distribution density map to the one-dimensional gaussian distribution density map; and
computing a weighted sum between the first density map reduction loss function value, the second density map reduction loss function value, and the classification loss function value as a loss function value to train the convolutional neural network; and
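The fusion and reduction steps in the training phase above (construct a per-position Gaussian from the two feature maps, then collapse the W and H dimensions by weighted sums of means and variances) can be sketched in NumPy as follows; the shapes, variable names, and random weights are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gaussian_fuse(f1, f2):
    """Per-position Gaussian parameters: the mean of the two feature values
    is the Gaussian mean, their variance is the Gaussian variance (S130)."""
    mu = (f1 + f2) / 2.0
    var = ((f1 - mu) ** 2 + (f2 - mu) ** 2) / 2.0
    return mu, var

def reduce_axis(mu, var, weights, axis):
    """Weighted sum of the per-position means and variances along one axis;
    the weights are normalized so they sum to 1 (S140 / S150)."""
    w = weights / weights.sum()
    shape = [1] * mu.ndim
    shape[axis] = -1
    w = w.reshape(shape)
    return (w * mu).sum(axis=axis), (w * var).sum(axis=axis)

# Illustrative shapes: C channels, height H, width W (assumed layout C x H x W).
rng = np.random.default_rng(0)
C, H, W = 8, 5, 6
f1 = rng.normal(size=(C, H, W))   # stand-in for the first feature map
f2 = rng.normal(size=(C, H, W))   # stand-in for the second feature map

mu3, var3 = gaussian_fuse(f1, f2)                # 3-D Gaussian density map
pi = rng.random(W)                               # first weighting coefficients
mu2, var2 = reduce_axis(mu3, var3, pi, axis=2)   # -> 2-D map, shape (C, H)
rho = rng.random(H)                              # second weighting coefficients
mu1, var1 = reduce_axis(mu2, var2, rho, axis=1)  # -> 1-D map, shape (C,)
```

Each entry of `mu1`/`var1` then carries one Gaussian per channel, which is what the later Gaussian-discretization step consumes.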
an inference phase comprising:
respectively obtaining a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit through an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor;
spatially encoding the first and second sensing matrices using the convolutional neural network trained in a training phase to obtain a first signature corresponding to the first sensing matrix and a second signature corresponding to the second sensing matrix;
constructing a three-dimensional Gaussian distribution density map by using the mean value of the feature values at the respective positions in the first feature map and the second feature map as the mean value of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution
[formula image not reproduced in this text]
Simplifying the three-dimensional Gaussian distribution density map by using a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein the simplification of the three-dimensional Gaussian distribution density map by using the Gaussian mixture model comprises the step of carrying out weighted summation on the mean value and the variance of each Gaussian distribution in the width dimension W, and the sum of the weights is 1;
simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, which comprises: carrying out weighted summation of the mean and variance of each Gaussian distribution over the height dimension H, wherein the sum of the weights is 1;
performing Gaussian discretization on the Gaussian distribution of each position in the one-dimensional Gaussian distribution density map to obtain a classification matrix; and
and passing the classification matrix through a classifier to obtain a classification result for indicating whether the punching forming is normal or not.
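The final inference step passes the classification matrix through a classifier. The excerpt does not specify the classifier's form, so the sketch below assumes a simple logistic (linear) head purely for illustration; `weights`, `bias`, and the 0.5 threshold are all hypothetical:

```python
import numpy as np

def classify(class_matrix, weights, bias):
    """Hypothetical linear classifier head: flatten the classification
    matrix, score it, and map the score to a normal / abnormal label."""
    logit = float(class_matrix.ravel() @ weights + bias)
    prob_normal = 1.0 / (1.0 + np.exp(-logit))
    return "normal" if prob_normal >= 0.5 else "abnormal"

rng = np.random.default_rng(1)
cm = rng.normal(size=(8, 11))        # stand-in classification matrix
w = rng.normal(size=cm.size) * 0.01  # hypothetical trained weights
label = classify(cm, w, bias=0.0)
```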
According to the detection method and system for punch forming of the lead frame plastic package integrated circuit described above, a convolutional neural network model extracts high-dimensional features from the sensing matrices of the array-type transmission photoelectric sensor and the array-type retro-reflective photoelectric sensor; a Gaussian density map fuses the first and second feature maps; a Gaussian mixture model then progressively simplifies the three-dimensional Gaussian density map; and the convolutional neural network is trained jointly with density-map simplification loss function values and a classification loss function value, helping the model learn a consistent feature representation within the high-dimensional features. In this way, abnormalities can be detected more reliably, protecting the equipment and ensuring product yield.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1A is one of schematic diagrams of a photo sensor formed by punching a lead frame plastic package integrated circuit according to an embodiment of the present application.
Fig. 1B is a second schematic diagram of a photo sensor formed by die cutting a lead frame plastic package integrated circuit according to the embodiment of the present application.
Fig. 2 is a flowchart of a training phase in the detection method for die cutting and molding of the lead frame plastic package integrated circuit according to the embodiment of the application.
Fig. 3 is a flowchart of an inference stage in a detection method for die cutting and molding of a lead frame plastic-packaged integrated circuit according to an embodiment of the application.
Fig. 4 is a schematic diagram of an architecture in a training stage in a detection method for die cutting and molding of a lead frame plastic package integrated circuit according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an architecture of an inference stage in a detection method for die cutting and molding of a lead frame plastic package integrated circuit according to an embodiment of the application.
Fig. 6 is a block diagram of a detection system for die cutting and molding of a lead frame plastic package integrated circuit according to an embodiment of the application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Overview of a scene
As mentioned above, during punch forming of the leads of a lead frame plastic package integrated circuit, abnormalities such as cracking or detachment of the plastic package body and pin breakage can occur, and if they are not found in time they can cause batch scrapping of products or damage to the forming dies. In the conventional inspection process, however, only the surface of the plastic package is inspected, so detection is difficult and inaccurate; to better detect abnormalities, protect the equipment, and ensure product yield, a method for inspecting the punch forming of lead frame plastic package integrated circuits is desired.
A photoelectric sensor detects an object with light: reflection, transmission, or absorption by the object changes the optical signal emitted by the sensor's light-projecting section, and the light-receiving section detects this change and generates a corresponding output signal. Three types are common: the transmission-type photoelectric sensor, the retro-reflective photoelectric sensor, and the reflection-type photoelectric sensor, as shown in fig. 1A.
In view of this, the present invention considers that detection should not be limited to the surface condition, and therefore both the transmission-type and retro-reflective photoelectric sensors are selected to perform detection simultaneously.
Specifically, a first sensing matrix is obtained by the array-type transmission photoelectric sensor and a second sensing matrix by the array-type retro-reflective photoelectric sensor, and these are passed through convolutional neural networks to obtain a first feature map and a second feature map, respectively.
Then, the first feature map and the second feature map are fused using a Gaussian density map, which is widely used as a learning objective for convolutional neural networks: the mean of the feature values at each position of the first and second feature maps is taken as the mean of a Gaussian distribution, and the variance of those feature values as its variance, yielding a three-dimensional Gaussian density map
[formula image not reproduced in this text]
Next, the three-dimensional gaussian density map is simplified using a gaussian mixture model, i.e. first to a two-dimensional gaussian density map:
[formula image not reproduced in this text]
that is, the mean and variance of each Gaussian distribution are weighted and summed over the width dimension W, where πkIs denoted as a first weighting coefficient and the sum is one.
Then, the two-dimensional gaussian density map is reduced to a one-dimensional gaussian density map:
[formula image not reproduced in this text]
that is, the mean and variance of each Gaussian distribution are weighted over the width dimension H, where ρlDenoted as the second weighting coefficient and the sum of which is one.
In this way, a gaussian vector having a gaussian distribution at each position in the channel dimension is obtained, so that the gaussian distribution at each position can be subjected to gaussian discretization to obtain a classification matrix.
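The excerpt does not define "Gaussian discretization" precisely. One plausible reading, sketched below under that assumption, is to evaluate each per-channel Gaussian N(mu_c, sigma_c^2) on a fixed grid of points, giving one row of the classification matrix per channel; the grid size and span are arbitrary choices here:

```python
import numpy as np

def gaussian_discretize(mu, var, n_points=11, span=3.0):
    """Discretize each Gaussian N(mu[c], var[c]) by sampling its density on
    a grid of n_points spanning +/- span standard deviations around the mean.
    Returns a (C, n_points) matrix: one row per channel Gaussian."""
    mu = np.asarray(mu, dtype=float)
    sd = np.sqrt(np.asarray(var, dtype=float))
    t = np.linspace(-span, span, n_points)        # grid in sd units
    x = mu[:, None] + sd[:, None] * t[None, :]    # (C, n_points) sample points
    return np.exp(-0.5 * ((x - mu[:, None]) / sd[:, None]) ** 2) / (
        sd[:, None] * np.sqrt(2.0 * np.pi))

cm = gaussian_discretize(mu=[0.0, 1.0], var=[1.0, 4.0])  # (2, 11) matrix
```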
In the training process, besides inputting the classification matrix into the classifier to obtain the classification loss function value, the method further calculates the first and second density-map simplification loss function values corresponding to π_k and ρ_l, expressed as:
[formula image not reproduced in this text]
Then, the weighted sum of the first and second density-map simplification loss function values and the classification loss function value is used as the loss function to train the convolutional neural network. In this way, the fused Gaussian density map can correct the target-domain offset between the first feature map and the second feature map, helping the convolutional neural network model learn a consistent feature representation within the high-dimensional features.
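The overall training objective is a weighted sum of the two density-map simplification losses and the classification loss. The simplification-loss formula itself is an equation image not reproduced in this text, so the sketch below substitutes a hypothetical penalty on the weighting coefficients; the softmax parameterization is one convenient way to keep π_k and ρ_l summing to 1:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def total_loss(cls_loss, pi_logits, rho_logits, reduction_penalty, alphas):
    """Weighted sum of the two density-map simplification losses and the
    classification loss.  `reduction_penalty` stands in for the patent's
    unreproduced simplification-loss formula."""
    pi, rho = softmax(pi_logits), softmax(rho_logits)  # each sums to 1
    l1 = reduction_penalty(pi)    # first density-map simplification loss
    l2 = reduction_penalty(rho)   # second density-map simplification loss
    a1, a2, a3 = alphas
    return a1 * l1 + a2 * l2 + a3 * cls_loss

# Hypothetical penalty: negative entropy of the weight vector.
neg_entropy = lambda w: float(np.sum(w * np.log(w + 1e-12)))
loss = total_loss(0.7, np.zeros(6), np.zeros(5), neg_entropy, (0.3, 0.3, 0.4))
```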
Based on this, the present application proposes a detection method for punch forming of a lead frame plastic package integrated circuit, which includes a training phase comprising: respectively obtaining a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit through an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor; spatially encoding the first and second sensing matrices using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix; and constructing a three-dimensional Gaussian distribution density map by using the mean of the feature values at each position of the first and second feature maps as the mean of the Gaussian distribution and the variance between those feature values as the variance of the Gaussian distribution
[formula image not reproduced in this text]
simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, which comprises carrying out weighted summation of the mean and variance of each Gaussian distribution over the width dimension W, the sum of the weights being 1; simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, which comprises carrying out weighted summation of the mean and variance of each Gaussian distribution over the height dimension H, the sum of the weights being 1; performing Gaussian discretization on the Gaussian distribution at each position of the one-dimensional Gaussian distribution density map to obtain a classification matrix; passing the classification matrix through a classifier to obtain a classification loss function value; calculating a first density-map simplification loss function value for simplifying the three-dimensional Gaussian distribution density map into the two-dimensional one, and a second density-map simplification loss function value for simplifying the two-dimensional Gaussian distribution density map into the one-dimensional one; and calculating a weighted sum of the first density-map simplification loss function value, the second density-map simplification loss function value, and the classification loss function value as the loss function value with which to train the convolutional neural network; and an inference phase comprising: respectively obtaining a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit through an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor; spatially encoding the first and second sensing matrices using the convolutional neural network trained in the training phase to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix; and constructing a three-dimensional Gaussian distribution density map by using the mean of the feature values at each position of the first and second feature maps as the mean of the Gaussian distribution and the variance between those feature values as the variance of the Gaussian distribution
[formula image not reproduced in this text]
simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, which comprises carrying out weighted summation of the mean and variance of each Gaussian distribution over the width dimension W, the sum of the weights being 1; simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, which comprises carrying out weighted summation of the mean and variance of each Gaussian distribution over the height dimension H, the sum of the weights being 1; performing Gaussian discretization on the Gaussian distribution at each position of the one-dimensional Gaussian distribution density map to obtain a classification matrix; and passing the classification matrix through a classifier to obtain a classification result indicating whether the punch forming is normal.
Fig. 1B illustrates an application scenario of the detection method for punch forming of a lead frame plastic package integrated circuit according to an embodiment of the application. As shown in fig. 1B, in the training phase, a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit (e.g., H as illustrated in fig. 1B) are first obtained respectively by an array-type transmission photoelectric sensor (e.g., P1 as illustrated in fig. 1B) and an array-type retro-reflective photoelectric sensor (e.g., P2 as illustrated in fig. 1B). The obtained first and second sensing matrices are then input into a server (e.g., S as illustrated in fig. 1B) on which a detection algorithm for punch forming of lead frame plastic package integrated circuits is deployed, and the server trains the convolutional neural network for punch-forming detection with the first and second sensing matrices based on that algorithm.
After training is completed, in the inference stage, the first and second sensing matrices of the punch-formed integrated circuit (e.g., H as illustrated in fig. 1B) are likewise obtained by the array-type transmission photoelectric sensor (e.g., P1 as illustrated in fig. 1B) and the array-type retro-reflective photoelectric sensor (e.g., P2 as illustrated in fig. 1B), respectively. The first and second sensing matrices are then input into the server (e.g., S as illustrated in fig. 1B), which processes them with the detection algorithm to generate a classification result indicating whether the punch forming is normal.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary method
Fig. 2 illustrates a flow chart of the training phase in the detection method for punch forming of the lead frame plastic package integrated circuit according to the embodiment of the application. Fig. 3 illustrates a flow chart of the inference stage in the same method. As shown in fig. 2, the detection method according to the embodiment of the application includes a training phase comprising: S110, respectively obtaining a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit through an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor; S120, spatially encoding the first and second sensing matrices using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix; S130, constructing a three-dimensional Gaussian distribution density map by taking the mean of the feature values at each position of the first and second feature maps as the mean of the Gaussian distribution and the variance between those feature values as the variance of the Gaussian distribution
[formula image not reproduced in this text]
S140, simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, which comprises carrying out weighted summation of the mean and variance of each Gaussian distribution over the width dimension W, the sum of the weights being 1; S150, simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, which comprises carrying out weighted summation of the mean and variance of each Gaussian distribution over the height dimension H, the sum of the weights being 1; S160, performing Gaussian discretization on the Gaussian distribution at each position of the one-dimensional Gaussian distribution density map to obtain a classification matrix; S170, passing the classification matrix through a classifier to obtain a classification loss function value; S180, calculating a first density-map simplification loss function value for simplifying the three-dimensional Gaussian distribution density map into the two-dimensional one and a second density-map simplification loss function value for simplifying the two-dimensional Gaussian distribution density map into the one-dimensional one; and S190, calculating a weighted sum of the first density-map simplification loss function value, the second density-map simplification loss function value, and the classification loss function value as the loss function value with which to train the convolutional neural network.
As shown in fig. 3, the detection method for punch forming of the lead frame plastic package integrated circuit according to the embodiment of the application further includes an inference phase comprising: S210, respectively obtaining a first sensing matrix and a second sensing matrix of the punch-formed integrated circuit through an array-type transmission photoelectric sensor and an array-type retro-reflective photoelectric sensor; S220, spatially encoding the first and second sensing matrices using the convolutional neural network trained in the training phase to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix; S230, constructing a three-dimensional Gaussian distribution density map by taking the mean of the feature values at each position of the first and second feature maps as the mean of the Gaussian distribution and the variance between those feature values as the variance of the Gaussian distribution
N(μ, σ²)
S240, simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model comprises carrying out weighted summation on the mean and the variance of each Gaussian distribution in the width dimension W, the sum of the weights being 1; S250, simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, which comprises: carrying out weighted summation on the mean and the variance of each Gaussian distribution in the height dimension H, the sum of the weights being 1; S260, carrying out Gaussian discretization on the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map to obtain a classification matrix; and S270, passing the classification matrix through a classifier to obtain a classification result indicating whether the die cutting and molding is normal.
Fig. 4 illustrates an architecture diagram of the training phase in the detection method for die cutting and molding of the lead frame plastic package integrated circuit according to the embodiment of the application. As shown in fig. 4, in the training phase, first, in the network architecture, the obtained first sensing matrix (e.g., M1 as illustrated in fig. 4) and the second sensing matrix (e.g., M2 as illustrated in fig. 4) are spatially encoded using a convolutional neural network (e.g., CNN1 as illustrated in fig. 4) to obtain a first feature map (e.g., F1 as illustrated in fig. 4) and a second feature map (e.g., F2 as illustrated in fig. 4); then, a three-dimensional Gaussian distribution density map (e.g., GD1 as illustrated in fig. 4) is constructed with the mean of the feature values at the respective positions in the first feature map and the second feature map as the mean of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution; then, the three-dimensional Gaussian distribution density map is simplified with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map (e.g., GD2 as illustrated in fig. 4); then, the two-dimensional Gaussian distribution density map is simplified into a one-dimensional Gaussian distribution density map (e.g., GD3 as illustrated in fig. 4); then, the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map is Gaussian-discretized to obtain a classification matrix (e.g., M as illustrated in fig. 4); then, the classification matrix is passed through a classifier (e.g., the classifier as illustrated in fig. 4) to obtain a classification loss function value (e.g., CV as illustrated in fig. 4); then, a first density map simplification loss function value (e.g., SV1 as illustrated in fig. 4) for simplifying the three-dimensional Gaussian distribution density map into the two-dimensional Gaussian distribution density map is calculated, and a second density map simplification loss function value (e.g., SV2 as illustrated in fig. 4) for simplifying the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map is calculated; finally, a weighted sum of the first density map simplification loss function value, the second density map simplification loss function value, and the classification loss function value is calculated as a loss function value (e.g., LV as illustrated in fig. 4) to train the convolutional neural network.
Fig. 5 illustrates an architecture diagram of the inference phase in the detection method for die cutting and molding of a lead frame plastic package integrated circuit according to an embodiment of the application. As shown in fig. 5, in the inference phase, in the network structure, first, the first sensing matrix (e.g., MI1 as illustrated in fig. 5) and the second sensing matrix (e.g., MI2 as illustrated in fig. 5) are spatially encoded using the convolutional neural network (e.g., CNN2 as illustrated in fig. 5) trained in the training phase to obtain a first feature map (e.g., F3 as illustrated in fig. 5) and a second feature map (e.g., F4 as illustrated in fig. 5); then, a three-dimensional Gaussian distribution density map (e.g., G1 as illustrated in fig. 5) is constructed with the mean of the feature values at the respective positions in the first feature map and the second feature map as the mean of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution; then, the three-dimensional Gaussian distribution density map is simplified with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map (e.g., G2 as illustrated in fig. 5); then, the two-dimensional Gaussian distribution density map is simplified into a one-dimensional Gaussian distribution density map (e.g., G3 as illustrated in fig. 5); then, the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map is Gaussian-discretized to obtain a classification matrix (e.g., MC as illustrated in fig. 5); and, finally, the classification matrix is passed through a classifier (e.g., the classifier as illustrated in fig. 5) to obtain a classification result indicating whether the die cutting and molding is normal.
More specifically, in the training phase, in steps S110 and S120, a first sensing matrix and a second sensing matrix of the die-cut integrated circuit are obtained by an array-type transmissive photoelectric sensor and an array-type retro-reflective photoelectric sensor, respectively, and the first sensing matrix and the second sensing matrix are spatially encoded using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix. As described above, in the present invention, considering that detecting the surface condition alone is insufficient, both a transmissive photoelectric sensor and a retro-reflective photoelectric sensor are selected for simultaneous detection, so that abnormalities such as breakage or falling of the plastic package body and breakage of the leads during die cutting and molding of the lead frame plastic-encapsulated integrated circuit can be detected more reliably, protecting the equipment and ensuring the yield of the product. That is, in one specific example, first, a first sensing matrix and a second sensing matrix of the integrated circuit after die cutting and molding are obtained by an array-type transmissive photoelectric sensor and an array-type retro-reflective photoelectric sensor, respectively. Then, the first sensing matrix and the second sensing matrix are input into the convolutional neural network for spatial encoding, thereby obtaining the first feature map and the second feature map.
More specifically, in the embodiment of the present application, spatially encoding the first sensing matrix and the second sensing matrix using a convolutional neural network to obtain a first signature corresponding to the first sensing matrix and a second signature corresponding to the second sensing matrix includes: the convolutional neural network spatially encodes the first sensing matrix and the second sensing matrix in the following formula to obtain the first characteristic diagram and the second characteristic diagram;
wherein the formula is:
fᵢ = active(Nᵢ × fᵢ₋₁ + Bᵢ)

where fᵢ₋₁ is the input of the i-th layer of the convolutional neural network, fᵢ is the output of the i-th layer, Nᵢ is the filter of the i-th layer, Bᵢ is the bias matrix of the i-th layer, and active denotes a nonlinear activation function.
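The per-layer encoding above can be sketched as follows; this is a minimal NumPy illustration in which the "filter" Nᵢ is applied as a dense matrix product on a flattened input and ReLU stands in for the unspecified nonlinear activation (both choices are assumptions for illustration, not the patent's exact architecture):

```python
import numpy as np

def active(x):
    # ReLU as one possible nonlinear activation; the patent does not fix one
    return np.maximum(x, 0.0)

def encode_layer(f_prev, n_filter, bias):
    """One encoding layer: f_i = active(N_i x f_{i-1} + B_i).

    Here N_i acts as a dense matrix on a flattened input for simplicity;
    a real CNN would apply a 2-D convolution instead.
    """
    return active(n_filter @ f_prev + bias)

rng = np.random.default_rng(0)
f0 = rng.standard_normal(16)        # flattened sensing-matrix input (assumed size)
N1 = rng.standard_normal((8, 16))   # layer-1 filter weights
B1 = np.zeros(8)                    # layer-1 bias matrix
f1 = encode_layer(f0, N1, B1)       # output feature vector of layer 1
```

Stacking several such layers, each taking the previous layer's output as input, yields the spatial encoding of the sensing matrices.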
More specifically, in the training phase, in step S130, a three-dimensional gaussian distribution density map is constructed with the mean value of the feature values of the respective positions in the first feature map and the second feature map as the mean value of the gaussian distribution and the variance between the feature values of the respective positions in the first feature map and the second feature map as the variance of the gaussian distribution
N(μ, σ²)
It should be understood that, in order to better fuse the information in the first feature map and the second feature map, in the technical solution of the present application, the first feature map and the second feature map are fused using a Gaussian density map, which is widely used as a learning objective function of convolutional neural networks. That is, the mean of the feature values at each position in the first feature map and the second feature map is taken as the mean of the Gaussian distribution, and the variance between the feature values at each position in the first feature map and the second feature map is taken as the variance of the Gaussian distribution, so as to obtain a three-dimensional Gaussian density map
N(μ, σ²)
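The construction of the per-position Gaussian parameters from the two feature maps can be sketched as follows (a minimal NumPy illustration; the feature-map shapes are assumed for the example):

```python
import numpy as np

def gaussian_density_map(f1, f2):
    """Fuse two feature maps into per-position Gaussian parameters.

    mean: per-position mean of the two feature values;
    var:  per-position variance between the two feature values.
    """
    stacked = np.stack([f1, f2])      # shape (2, C, H, W)
    mu = stacked.mean(axis=0)         # mean of the Gaussian at each position
    var = stacked.var(axis=0)         # variance of the Gaussian at each position
    return mu, var

rng = np.random.default_rng(1)
F1 = rng.standard_normal((4, 5, 6))   # assumed (C, H, W) feature map
F2 = rng.standard_normal((4, 5, 6))
mu, var = gaussian_density_map(F1, F2)
```

Each position of the resulting three-dimensional map thus carries one Gaussian N(μ, σ²) rather than a single scalar feature value.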
More specifically, in the training phase, in steps S140 and S150, the three-dimensional Gaussian distribution density map is simplified with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model includes carrying out weighted summation on the mean and the variance of each Gaussian distribution in the width dimension W, the sum of the weights being 1; the two-dimensional Gaussian distribution density map is then simplified into a one-dimensional Gaussian distribution density map, which includes: carrying out weighted summation on the mean and the variance of each Gaussian distribution in the height dimension H, the sum of the weights being 1. That is, the obtained three-dimensional Gaussian density map is simplified using a Gaussian mixture model. In the technical solution of the present application, the three-dimensional Gaussian density map is first simplified into a two-dimensional Gaussian density map; accordingly, in a specific example, the mean and the variance of each Gaussian distribution are weighted and summed in the width dimension W, where the sum of the weights is one. The two-dimensional Gaussian density map is then simplified into a one-dimensional Gaussian density map; accordingly, in a specific example, the mean and the variance of each Gaussian distribution are weighted and summed in the height dimension H, where the sum of the weights is one.
More specifically, in the embodiment of the present application, simplifying the three-dimensional gaussian distribution density map with a gaussian mixture model to obtain a two-dimensional gaussian distribution density map includes: simplifying the three-dimensional Gaussian distribution density map by using a Gaussian mixture model according to the following formula to obtain a two-dimensional Gaussian distribution density map; wherein the formula is:
μ = Σₖ πₖ μₖ,  σ² = Σₖ πₖ σₖ²  (k running over the width dimension W, with Σₖ πₖ = 1)
That is, the means and variances of the Gaussian distributions are weighted and summed over the width dimension W, where πₖ denotes the first weighting coefficients, whose sum is one.
More specifically, in the embodiment of the present application, the reducing the two-dimensional gaussian distribution density map into a one-dimensional gaussian distribution density map includes: simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map by the following formula:
μ = Σₗ ρₗ μₗ,  σ² = Σₗ ρₗ σₗ²  (l running over the height dimension H, with Σₗ ρₗ = 1)
That is, the means and variances of the Gaussian distributions are weighted and summed over the height dimension H, where ρₗ denotes the second weighting coefficients, whose sum is one.
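Both reduction steps are the same operation applied along different axes and can be sketched as follows (a minimal NumPy illustration; uniform weights stand in for the learned πₖ and ρₗ):

```python
import numpy as np

def reduce_axis(mu, var, weights, axis):
    """Weighted sum of per-position means and variances along one axis.

    The weights are normalized so they sum to one, playing the role of
    pi_k (width reduction) or rho_l (height reduction) in the text.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # enforce sum-to-one constraint
    shape = [1] * mu.ndim
    shape[axis] = w.size
    w = w.reshape(shape)                  # broadcast along the reduced axis
    return (w * mu).sum(axis=axis), (w * var).sum(axis=axis)

rng = np.random.default_rng(2)
mu3 = rng.standard_normal((4, 5, 6))      # assumed (C, H, W) mean map
var3 = rng.random((4, 5, 6))              # assumed (C, H, W) variance map
mu2, var2 = reduce_axis(mu3, var3, np.ones(6), axis=2)  # collapse W: 3-D -> 2-D
mu1, var1 = reduce_axis(mu2, var2, np.ones(5), axis=1)  # collapse H: 2-D -> 1-D
```

After both reductions, one Gaussian remains per channel position, matching the one-dimensional Gaussian distribution density map described above.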
More specifically, in the training phase, in step S160 and step S170, the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map is subjected to Gaussian discretization to obtain a classification matrix, and the classification matrix is passed through a classifier to obtain a classification loss function value. That is, in the technical solution of the present application, after Gaussian vectors are obtained in which each position along the channel dimension follows a Gaussian distribution, the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map is Gaussian-discretized to obtain the classification matrix, and the classification matrix is then passed through the classifier to obtain the classification loss function value for the subsequent training of the convolutional neural network.
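One plausible reading of the Gaussian discretization step is to convert each position's continuous Gaussian into a discrete probability vector by evaluating its CDF over fixed bins; the patent does not define the operation precisely, so the bin edges below are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt

def discretize_gaussian(mu, sigma, edges):
    """Discretize N(mu, sigma^2) into probability mass per bin via its CDF.

    This is one plausible interpretation of 'Gaussian discretization';
    the exact operation is not specified in the source text.
    """
    cdf = np.array([0.5 * (1.0 + erf((e - mu) / (sigma * sqrt(2.0))))
                    for e in edges])
    return np.diff(cdf)                   # probability mass in each bin

edges = np.linspace(-3.0, 3.0, 7)         # assumed bin edges
row = discretize_gaussian(0.0, 1.0, edges)
# stacking one such row per channel position would yield the classification matrix
```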
More specifically, in the embodiment of the present application, the process of passing the classification matrix through a classifier to obtain a classification loss function value includes: first, the classification matrix is fully-connected encoded using at least one fully connected layer of the classifier to obtain a classification feature vector. Then, the classification feature vector is input into a Softmax classification function of the classifier to obtain a classification result. In a specific example, the classification feature vector is input into the Softmax classification function of the classifier to obtain a first probability that the die cutting and molding is normal and a second probability that the die cutting and molding is abnormal; further, when the first probability is greater than the second probability, the classification result is that the die cutting and molding is normal, and when the first probability is smaller than the second probability, the classification result is that the die cutting and molding is abnormal. Then, the cross entropy value between the classification result and the true value is calculated as the classification loss function value.
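The classifier stage described above can be sketched as follows (a minimal NumPy illustration; the single fully connected layer, the label convention, and the random weights are assumptions):

```python
import numpy as np

def softmax(z):
    # numerically stable Softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_vec, weights, bias, true_label):
    """Fully connected encoding -> Softmax -> cross entropy.

    Assumed label convention: class 0 = die cutting normal,
    class 1 = die cutting abnormal.
    """
    logits = weights @ feature_vec + bias   # at least one fully connected layer
    probs = softmax(logits)                 # [first probability, second probability]
    result = "normal" if probs[0] > probs[1] else "abnormal"
    loss = -np.log(probs[true_label])       # cross entropy against the true value
    return result, loss

rng = np.random.default_rng(3)
x = rng.standard_normal(10)                 # assumed classification feature vector
W = rng.standard_normal((2, 10))            # assumed classifier weights
b = np.zeros(2)
result, loss = classify(x, W, b, true_label=0)
```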
More specifically, in the training phase, in step S180 and step S190, a first density map simplification loss function value for simplifying the three-dimensional Gaussian distribution density map into the two-dimensional Gaussian distribution density map is calculated, a second density map simplification loss function value for simplifying the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map is calculated, and a weighted sum of the first density map simplification loss function value, the second density map simplification loss function value, and the classification loss function value is calculated as a loss function value to train the convolutional neural network. That is, in the technical solution of the present application, during training, in addition to inputting the classification matrix into the classifier to obtain the classification loss function value, the first and second density map simplification loss function values corresponding to πₖ and ρₗ are further calculated. The convolutional neural network is then trained with the weighted sum of the first density map simplification loss function value, the second density map simplification loss function value, and the classification loss function value as the loss function, so that the fused Gaussian density map can correct the target-domain offset between the first feature map and the second feature map, helping the convolutional neural network model learn a consistent feature representation in the high-dimensional features.
More specifically, in the embodiment of the present application, calculating a first density map reduction loss function value that reduces the three-dimensional gaussian distribution density map to the two-dimensional gaussian distribution density map includes: calculating a first density map reduction loss function value for reducing the three-dimensional Gaussian distribution density map into the two-dimensional Gaussian distribution density map by using the following formula:
[formula for the first density map simplification loss function value]
more specifically, in the embodiment of the present application, calculating a second density map reduction loss function value that reduces the two-dimensional gaussian distribution density map to the one-dimensional gaussian distribution density map includes: calculating a second density map reduction loss function value for reducing the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map by using the following formula:
[formula for the second density map simplification loss function value]
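The final training objective is the weighted sum of the three loss values, which can be sketched as follows (the weight values α, β, γ are assumptions; the source does not specify them):

```python
def total_loss(sv1, sv2, cv, alpha=0.3, beta=0.3, gamma=0.4):
    """Weighted sum of the two density map simplification losses (SV1, SV2)
    and the classification loss (CV), used as the training loss LV.
    The weights alpha/beta/gamma here are illustrative assumptions."""
    return alpha * sv1 + beta * sv2 + gamma * cv

# example values for the three loss components (illustrative only)
lv = total_loss(sv1=0.5, sv2=0.25, cv=1.0)
```

In a training loop, this scalar would be backpropagated through the classifier, the weighting coefficients πₖ and ρₗ, and the convolutional encoder.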
After training is completed, the inference phase is entered. Similarly, first, a first sensing matrix and a second sensing matrix of the die-cut integrated circuit are obtained by an array-type transmissive photoelectric sensor and an array-type retro-reflective photoelectric sensor, respectively. The first and second sensing matrices are then spatially encoded using the convolutional neural network trained in the training phase to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix. Then, a three-dimensional Gaussian distribution density map is constructed with the mean of the feature values at the respective positions in the first feature map and the second feature map as the mean of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution
N(μ, σ²)
Then, the three-dimensional Gaussian distribution density map is simplified with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model includes carrying out weighted summation on the mean and the variance of each Gaussian distribution in the width dimension W, the sum of the weights being 1. Then, the two-dimensional Gaussian distribution density map is simplified into a one-dimensional Gaussian distribution density map, which includes: carrying out weighted summation on the mean and the variance of each Gaussian distribution in the height dimension H, the sum of the weights being 1. Then, the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map is subjected to Gaussian discretization to obtain a classification matrix. Finally, the classification matrix is passed through a classifier to obtain a classification result indicating whether the die cutting and molding is normal.
In summary, the detection method for die cutting and molding of a lead frame plastic package integrated circuit according to the embodiment of the present application has been illustrated. A convolutional neural network model is adopted to extract high-dimensional features of the sensing matrices from the array-type transmissive photoelectric sensor and the array-type retro-reflective photoelectric sensor, a Gaussian density map is adopted to fuse the first feature map and the second feature map, and a Gaussian mixture model is further used to gradually simplify the three-dimensional Gaussian density map, so that the convolutional neural network is comprehensively trained with the density map simplification loss function values and the classification loss function value, thereby helping the convolutional neural network model learn a consistent feature representation in the high-dimensional features. In this way, abnormalities can be detected better, protecting the equipment and ensuring the yield of the product.
Exemplary System
Fig. 6 illustrates a block diagram of a detection system for die cutting and molding of a lead frame plastic package integrated circuit according to an embodiment of the application. As shown in fig. 6, a system 600 for detecting die cutting and molding of a lead frame plastic package integrated circuit according to an embodiment of the present application includes: a training module 610 and an inference module 620.
As shown in fig. 6, the training module 610 includes: a sensing matrix obtaining unit 611, configured to obtain a first sensing matrix and a second sensing matrix of the integrated circuit after die cutting and molding through an array-type transmissive photoelectric sensor and an array-type retro-reflective photoelectric sensor, respectively; a convolutional neural network unit 612, configured to spatially encode the first sensing matrix obtained by the sensing matrix obtaining unit 611 and the second sensing matrix obtained by the sensing matrix obtaining unit 611 using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix; a three-dimensional Gaussian distribution density map construction unit 613, configured to take the mean of the feature values at the respective positions in the first feature map obtained by the convolutional neural network unit 612 and the second feature map obtained by the convolutional neural network unit 612 as the mean of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution, to construct a three-dimensional Gaussian distribution density map
N(μ, σ²)
A three-dimensional Gaussian distribution density map simplification unit 614, configured to simplify the three-dimensional Gaussian distribution density map obtained by the three-dimensional Gaussian distribution density map construction unit 613 with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model comprises carrying out weighted summation on the mean and the variance of each Gaussian distribution in the width dimension W, the sum of the weights being 1; a two-dimensional Gaussian distribution density map simplification unit 615, configured to simplify the two-dimensional Gaussian distribution density map obtained by the three-dimensional Gaussian distribution density map simplification unit 614 into a one-dimensional Gaussian distribution density map, which comprises: carrying out weighted summation on the mean and the variance of each Gaussian distribution in the height dimension H, the sum of the weights being 1; a Gaussian discretization unit 616, configured to perform Gaussian discretization on the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map obtained by the two-dimensional Gaussian distribution density map simplification unit 615 to obtain a classification matrix; a classifier processing unit 617, configured to pass the classification matrix obtained by the Gaussian discretization unit 616 through a classifier to obtain a classification loss function value; a simplification loss function value calculation unit 618, configured to calculate a first density map simplification loss function value for simplifying the three-dimensional Gaussian distribution density map obtained by the three-dimensional Gaussian distribution density map construction unit 613 into the two-dimensional Gaussian distribution density map obtained by the three-dimensional Gaussian distribution density map simplification unit 614, and to calculate a second density map simplification loss function value for simplifying the two-dimensional Gaussian distribution density map obtained by the three-dimensional Gaussian distribution density map simplification unit 614 into the one-dimensional Gaussian distribution density map obtained by the two-dimensional Gaussian distribution density map simplification unit 615; and a training unit 619, configured to calculate a weighted sum of the first density map simplification loss function value obtained by the simplification loss function value calculation unit 618, the second density map simplification loss function value obtained by the simplification loss function value calculation unit 618, and the classification loss function value obtained by the classifier processing unit 617 as a loss function value to train the convolutional neural network.
As shown in fig. 6, the inference module 620 includes: a matrix generating unit 621 configured to obtain a first sensing matrix and a second sensing matrix of the integrated circuit after the die-cut molding by using the array type transmissive photosensor and the array type retro-reflective photosensor, respectively; a feature map generating unit 622, configured to perform spatial encoding on the first sensing matrix obtained by the matrix generating unit 621 and the second sensing matrix obtained by the matrix generating unit 621 using the convolutional neural network trained in the training phase to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix; a gaussian distribution density map generating unit 623 configured to construct a three-dimensional gaussian distribution density map by using the mean value of the feature values at the respective positions in the first feature map obtained by the feature map generating unit 622 and the second feature map obtained by the feature map generating unit 622 as the mean value of the gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the gaussian distribution
N(μ, σ²)
A two-dimensional Gaussian distribution density map generating unit 624, configured to simplify the three-dimensional Gaussian distribution density map obtained by the Gaussian distribution density map generating unit 623 with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model includes carrying out weighted summation on the mean and the variance of each Gaussian distribution in the width dimension W, the sum of the weights being 1; a one-dimensional Gaussian distribution density map generating unit 625, configured to simplify the two-dimensional Gaussian distribution density map obtained by the two-dimensional Gaussian distribution density map generating unit 624 into a one-dimensional Gaussian distribution density map, which includes: carrying out weighted summation on the mean and the variance of each Gaussian distribution in the height dimension H, the sum of the weights being 1; a classification matrix generating unit 626, configured to perform Gaussian discretization on the Gaussian distribution at each position in the one-dimensional Gaussian distribution density map obtained by the one-dimensional Gaussian distribution density map generating unit 625 to obtain a classification matrix; and a classification unit 627, configured to pass the classification matrix obtained by the classification matrix generating unit 626 through a classifier to obtain a classification result indicating whether the die cutting and molding is normal.
Here, it will be understood by those skilled in the art that the detailed functions and operations of the respective units and modules in the above-described inspection system 600 for die cutting of lead frame plastic-encapsulated integrated circuits have been described in detail in the above description of the inspection method for die cutting of lead frame plastic-encapsulated integrated circuits with reference to fig. 1 to 5, and thus, a repetitive description thereof will be omitted.
As described above, the inspection system 600 for die cutting and molding of the lead frame plastic packaged integrated circuit according to the embodiment of the present application can be implemented in various terminal devices, such as a server for an inspection algorithm for die cutting and molding of the lead frame plastic packaged integrated circuit. In one example, the lead frame plastic package integrated circuit die cutting detection system 600 according to the embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the detection system 600 for die cutting and molding of the lead frame plastic package integrated circuit may be a software module in the operating system of the terminal device, or may be an application program developed for the terminal device; of course, the detecting system 600 for die cutting and molding of the lead frame plastic package integrated circuit can also be one of numerous hardware modules of the terminal device.
Alternatively, in another example, the lead frame plastic package integrated circuit die-cut detection system 600 and the terminal device may be separate devices, and the lead frame plastic package integrated circuit die-cut detection system 600 may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to the agreed data format.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising", "having", and the like are open-ended words that mean "including, but not limited to", and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A detection method for die cutting and forming of a lead frame plastic package integrated circuit is characterized by comprising the following steps:
a training phase comprising:
respectively obtaining a first sensing matrix and a second sensing matrix of the integrated circuit after die cutting and forming through an array-type through-beam photoelectric sensor and an array-type retro-reflective photoelectric sensor;
spatially encoding the first and second sensing matrices using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix;
constructing a three-dimensional Gaussian distribution density map by using the mean value of the feature values at the respective positions in the first feature map and the second feature map as the mean value of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution
[formula image FDA0003400138840000011]
Simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein the simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model comprises weighting and summing the mean and variance of each Gaussian distribution in the width dimension W, and the sum of the weights is 1;
simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, wherein the simplifying the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map comprises: carrying out weighted summation on the mean value and the variance of each Gaussian distribution on the height dimension H, wherein the sum of the weights is 1;
performing Gaussian discretization on the Gaussian distribution of each position in the one-dimensional Gaussian distribution density map to obtain a classification matrix;
passing the classification matrix through a classifier to obtain a classification loss function value;
calculating a first density map reduction loss function value that reduces the three-dimensional gaussian distribution density map to the two-dimensional gaussian distribution density map and calculating a second density map reduction loss function value that reduces the two-dimensional gaussian distribution density map to the one-dimensional gaussian distribution density map; and
computing a weighted sum between the first density map reduction loss function value, the second density map reduction loss function value, and the classification loss function value as a loss function value to train the convolutional neural network; and
an inference phase comprising:
respectively obtaining a first sensing matrix and a second sensing matrix of the integrated circuit after die cutting and forming through an array-type through-beam photoelectric sensor and an array-type retro-reflective photoelectric sensor;
spatially encoding the first and second sensing matrices using the convolutional neural network trained in the training phase to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix;
constructing a three-dimensional Gaussian distribution density map by using the mean value of the feature values at the respective positions in the first feature map and the second feature map as the mean value of the Gaussian distribution and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution
[formula image FDA0003400138840000021]
Simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map, wherein the simplifying the three-dimensional Gaussian distribution density map with the Gaussian mixture model comprises weighting and summing the mean and variance of each Gaussian distribution in the width dimension W, and the sum of the weights is 1;
simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map, wherein the simplifying the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map comprises: carrying out weighted summation on the mean value and the variance of each Gaussian distribution on the height dimension H, wherein the sum of the weights is 1;
performing Gaussian discretization on the Gaussian distribution of each position in the one-dimensional Gaussian distribution density map to obtain a classification matrix; and
passing the classification matrix through a classifier to obtain a classification result indicating whether the die cutting and forming is normal.
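The reduction chain recited in claim 1 (three-dimensional density map, weighted sum over W, weighted sum over H, Gaussian discretization) can be sketched as follows. This is an illustrative reading only, not the patented implementation: the array shapes, the uniform weights, and the fixed discretization offsets are all assumptions, since the patent's own formulas appear only as images in the original publication.

```python
import numpy as np

# Hypothetical shapes: C channels, H x W spatial positions per feature map.
C, H, W = 3, 4, 5
rng = np.random.default_rng(0)
f1 = rng.normal(size=(C, H, W))   # first feature map (through-beam branch)
f2 = rng.normal(size=(C, H, W))   # second feature map (retro-reflective branch)

# Three-dimensional density map: per-position mean and variance of the pair.
mu = (f1 + f2) / 2.0
var = ((f1 - mu) ** 2 + (f2 - mu) ** 2) / 2.0

# Reduce over the width dimension W with weights summing to 1.
w_weights = np.full(W, 1.0 / W)
mu_2d, var_2d = mu @ w_weights, var @ w_weights        # shape (C, H)

# Reduce over the height dimension H, again with weights summing to 1.
h_weights = np.full(H, 1.0 / H)
mu_1d, var_1d = mu_2d @ h_weights, var_2d @ h_weights  # shape (C,)

# Gaussian discretization (assumed form): evaluate each position's Gaussian
# at fixed standard-deviation offsets to obtain a classification matrix.
offsets = np.array([-1.0, 0.0, 1.0])
classification_matrix = mu_1d[:, None] + np.sqrt(var_1d)[:, None] * offsets[None, :]
print(classification_matrix.shape)  # (3, 3)
```

The uniform weights are only a placeholder; the claims require nothing beyond the weights summing to 1, so they could equally be learned parameters.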
2. The detection method for die cutting and forming of the lead frame plastic package integrated circuit according to claim 1, wherein spatially encoding the first sensing matrix and the second sensing matrix using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix comprises:
spatially encoding, by the convolutional neural network, the first sensing matrix and the second sensing matrix according to the following formula to obtain the first feature map and the second feature map;
wherein the formula is:
f_i = active(N_i × f_{i-1} + B_i)
where f_{i-1} is the input of the i-th layer of the convolutional neural network, f_i is the output of the i-th layer, N_i is the filter of the i-th layer, B_i is the bias matrix of the i-th layer, and active denotes a nonlinear activation function.
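The layer formula of claim 2 can be illustrated with a minimal sketch. Interpreting × as a valid two-dimensional cross-correlation and active as ReLU are assumptions, and conv_layer, f0, N1, and B1 are hypothetical names; the claim itself fixes none of these choices.

```python
import numpy as np

def active(x):
    # ReLU, one common choice for the nonlinear activation "active"
    return np.maximum(x, 0.0)

def conv_layer(f_prev, N_i, B_i):
    """One layer f_i = active(N_i × f_{i-1} + B_i), reading × as valid
    2-D cross-correlation (an assumption about the claimed operator)."""
    kh, kw = N_i.shape
    H, W = f_prev.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(f_prev[r:r + kh, c:c + kw] * N_i)
    return active(out + B_i)

f0 = np.arange(16.0).reshape(4, 4)   # toy stand-in for a sensing matrix
N1 = np.ones((2, 2)) / 4.0           # 2x2 averaging filter
B1 = np.zeros((3, 3))                # bias matrix for this layer
f1 = conv_layer(f0, N1, B1)
print(f1.shape)  # (3, 3)
```

With the averaging filter, each output entry is the mean of a 2×2 window of the input, e.g. f1[0, 0] = (0 + 1 + 4 + 5) / 4 = 2.5.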
3. The detection method for die cutting and forming of the lead frame plastic package integrated circuit according to claim 2, wherein simplifying the three-dimensional Gaussian distribution density map with a Gaussian mixture model to obtain a two-dimensional Gaussian distribution density map comprises:
simplifying the three-dimensional Gaussian distribution density map by using a Gaussian mixture model according to the following formula to obtain a two-dimensional Gaussian distribution density map;
wherein the formula is:
[formula image FDA0003400138840000031]
4. The detection method for die cutting and forming of the lead frame plastic package integrated circuit according to claim 3, wherein simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map comprises:
simplifying the two-dimensional Gaussian distribution density map into a one-dimensional Gaussian distribution density map by the following formula:
[formula image FDA0003400138840000032]
5. The detection method for die cutting and forming of the lead frame plastic package integrated circuit according to claim 4, wherein passing the classification matrix through a classifier to obtain a classification loss function value comprises:
fully connecting the classification matrix using at least one fully connected layer of the classifier to obtain a classification feature vector;
inputting the classification feature vector into a Softmax classification function of the classifier to obtain a classification result; and
calculating a cross entropy value between the classification result and the true value as the classification loss function value.
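The classifier step of claim 5 can be sketched as follows, assuming a single fully connected layer, a two-class Softmax (normal vs. abnormal forming), and cross entropy against an integer label; Wfc, the feature size, and the class count are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, true_class):
    # cross entropy against a one-hot target reduces to -log p[true_class]
    return -np.log(probs[true_class])

rng = np.random.default_rng(1)
features = rng.normal(size=6)   # flattened classification matrix (assumed size)
Wfc = rng.normal(size=(2, 6))   # fully connected weights (hypothetical)
logits = Wfc @ features
probs = softmax(logits)
loss = cross_entropy(probs, true_class=0)
print(round(float(probs.sum()), 6))  # 1.0
```

The Softmax output sums to 1 by construction, and the cross entropy value is nonnegative, which is what makes it usable as one term of the weighted training loss in claim 1.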
6. The detection method for die cutting and forming of the lead frame plastic package integrated circuit according to claim 5, wherein calculating a first density map reduction loss function value for reducing the three-dimensional Gaussian distribution density map into the two-dimensional Gaussian distribution density map comprises:
calculating a first density map reduction loss function value for reducing the three-dimensional Gaussian distribution density map into the two-dimensional Gaussian distribution density map by using the following formula:
[formula image FDA0003400138840000041]
7. The detection method for die cutting and forming of the lead frame plastic package integrated circuit according to claim 6, wherein calculating a second density map simplification loss function value for simplifying the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map comprises:
calculating a second density map simplification loss function value for simplifying the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map by using the following formula:
[formula image FDA0003400138840000042]
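Claims 6 and 7 give the reduction loss formulas only as images, so their exact form is not recoverable here. Purely as a hypothetical stand-in, a loss of this kind could measure how far the reduced Gaussian drifts from the pre-reduction Gaussians, for example via the closed-form KL divergence between univariate Gaussians; every name and formula below is an assumption, not the patented loss.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    # Closed-form KL(N(mu_p, var_p) || N(mu_q, var_q)) for univariate Gaussians
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Mean KL between each pre-reduction Gaussian and the weighted-sum reduction,
# as one plausible shape for a "density map reduction loss".
mus = np.array([0.1, -0.2, 0.3])
vars_ = np.array([1.0, 0.8, 1.2])
w = np.full(3, 1.0 / 3.0)                  # reduction weights, summing to 1
mu_r, var_r = mus @ w, vars_ @ w           # reduced Gaussian parameters
loss = float(np.mean(kl_gaussian(mus, vars_, mu_r, var_r)))
print(loss >= 0.0)  # True
```

KL divergence is nonnegative and vanishes only when the two Gaussians coincide, which makes it a reasonable penalty for information lost in the reduction; the actual patented formulas may differ.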
8. A detection system for die cutting and forming of a lead frame plastic package integrated circuit, characterized by comprising:
a training module comprising:
a sensing matrix obtaining unit, configured to respectively obtain a first sensing matrix and a second sensing matrix of the integrated circuit after die cutting and forming through an array-type through-beam photoelectric sensor and an array-type retro-reflective photoelectric sensor;
a convolutional neural network unit, configured to spatially encode the first sensing matrix and the second sensing matrix obtained by the sensing matrix obtaining unit using a convolutional neural network to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix;
a three-dimensional Gaussian distribution density map construction unit, configured to construct a three-dimensional Gaussian distribution density map by using the mean value of the feature values at the respective positions in the first feature map and the second feature map obtained by the convolutional neural network unit as the mean value of the Gaussian distribution, and the variance between the feature values at the respective positions in the first feature map and the second feature map as the variance of the Gaussian distribution
[formula image FDA0003400138840000043]
A three-dimensional gaussian distribution density map simplifying unit configured to simplify the three-dimensional gaussian distribution density map obtained by the three-dimensional gaussian distribution density map constructing unit by using a gaussian mixture model to obtain a two-dimensional gaussian distribution density map, wherein the simplification of the three-dimensional gaussian distribution density map by using the gaussian mixture model includes weighted summation of a mean value and a variance of each gaussian distribution in a width dimension W, and the sum of the weights is 1;
a two-dimensional Gaussian distribution density map simplifying unit, configured to simplify the two-dimensional Gaussian distribution density map obtained by the three-dimensional Gaussian distribution density map simplifying unit into a one-dimensional Gaussian distribution density map, wherein the simplification of the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map comprises: carrying out weighted summation on the mean value and the variance of each Gaussian distribution on the height dimension H, wherein the sum of the weights is 1;
a gaussian discretization unit configured to perform gaussian discretization on a gaussian distribution at each position in the one-dimensional gaussian distribution density map obtained by the two-dimensional gaussian distribution density map simplifying unit to obtain a classification matrix;
the classifier processing unit is used for enabling the classification matrix obtained by the Gaussian discretization unit to pass through a classifier so as to obtain a classification loss function value;
a simplified loss function value calculation unit configured to calculate a first density map simplified loss function value for simplifying the three-dimensional gaussian distribution density map obtained by the three-dimensional gaussian distribution density map construction unit into the two-dimensional gaussian distribution density map obtained by the three-dimensional gaussian distribution density map simplification unit, and calculate a second density map simplified loss function value for simplifying the two-dimensional gaussian distribution density map obtained by the three-dimensional gaussian distribution density map simplification unit into the one-dimensional gaussian distribution density map obtained by the two-dimensional gaussian distribution density map simplification unit; and
a training unit configured to calculate a weighted sum of the first density map simplified loss function value obtained by the simplified loss function value calculation unit, the second density map simplified loss function value obtained by the simplified loss function value calculation unit, and the classification loss function value obtained by the classifier processing unit as a loss function value to train the convolutional neural network; and
an inference module comprising:
a matrix generation unit, configured to respectively obtain a first sensing matrix and a second sensing matrix of the integrated circuit after die cutting and forming through the array-type through-beam photoelectric sensor and the array-type retro-reflective photoelectric sensor;
a feature map generation unit, configured to perform spatial encoding on the first sensing matrix obtained by the matrix generation unit and the second sensing matrix obtained by the matrix generation unit using the convolutional neural network trained in the training phase to obtain a first feature map corresponding to the first sensing matrix and a second feature map corresponding to the second sensing matrix;
a gaussian distribution density map generating unit configured to construct a three-dimensional gaussian distribution density map by using a mean value of feature values of respective positions in the first feature map obtained by the feature map generating unit and the second feature map obtained by the feature map generating unit as a mean value of a gaussian distribution and a variance between feature values of respective positions in the first feature map and the second feature map as a variance of the gaussian distribution
[formula image FDA0003400138840000061]
A two-dimensional gaussian distribution density map generation unit, configured to simplify the three-dimensional gaussian distribution density map obtained by the gaussian distribution density map generation unit with a gaussian mixture model to obtain a two-dimensional gaussian distribution density map, wherein simplifying the three-dimensional gaussian distribution density map with the gaussian mixture model includes performing weighted summation on a mean and a variance of each gaussian distribution in a width dimension W, and a sum of weights is 1;
a one-dimensional Gaussian distribution density map generating unit, configured to simplify the two-dimensional Gaussian distribution density map obtained by the two-dimensional Gaussian distribution density map generation unit into a one-dimensional Gaussian distribution density map, wherein the simplification of the two-dimensional Gaussian distribution density map into the one-dimensional Gaussian distribution density map comprises: carrying out weighted summation on the mean value and the variance of each Gaussian distribution on the height dimension H, wherein the sum of the weights is 1;
a classification matrix generating unit configured to perform gaussian discretization on the gaussian distribution at each position in the one-dimensional gaussian distribution density map obtained by the one-dimensional gaussian distribution density map generating unit to obtain a classification matrix; and
a classification unit, configured to pass the classification matrix obtained by the classification matrix generation unit through a classifier to obtain a classification result indicating whether the die cutting and forming is normal.
9. The detection system for die cutting and forming of the lead frame plastic package integrated circuit according to claim 8, wherein the convolutional neural network unit is further configured to:
spatially encode, by the convolutional neural network, the first sensing matrix and the second sensing matrix according to the following formula to obtain the first feature map and the second feature map;
wherein the formula is:
f_i = active(N_i × f_{i-1} + B_i)
where f_{i-1} is the input of the i-th layer of the convolutional neural network, f_i is the output of the i-th layer, N_i is the filter of the i-th layer, B_i is the bias matrix of the i-th layer, and active denotes a nonlinear activation function.
10. The detection system for die cutting and forming of the lead frame plastic package integrated circuit according to claim 8, wherein the three-dimensional Gaussian distribution density map simplifying unit is further configured to:
simplifying the three-dimensional Gaussian distribution density map by using a Gaussian mixture model according to the following formula to obtain a two-dimensional Gaussian distribution density map; wherein the formula is:
[formula images FDA0003400138840000062 and FDA0003400138840000063]
CN202111493585.1A 2021-12-08 2021-12-08 Detection method and system for punching and molding of lead frame plastic package integrated circuit Withdrawn CN114708451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111493585.1A CN114708451A (en) 2021-12-08 2021-12-08 Detection method and system for punching and molding of lead frame plastic package integrated circuit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111493585.1A CN114708451A (en) 2021-12-08 2021-12-08 Detection method and system for punching and molding of lead frame plastic package integrated circuit

Publications (1)

Publication Number Publication Date
CN114708451A true CN114708451A (en) 2022-07-05

Family

ID=82167461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111493585.1A Withdrawn CN114708451A (en) 2021-12-08 2021-12-08 Detection method and system for punching and molding of lead frame plastic package integrated circuit

Country Status (1)

Country Link
CN (1) CN114708451A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972327A (en) * 2022-07-12 2022-08-30 爱尔达电气有限公司 Semiconductor package test system and test method thereof
CN115861246A (en) * 2022-12-09 2023-03-28 马鞍山远昂科技有限公司 Product quality abnormity detection method and system applied to industrial Internet
CN115861246B (en) * 2022-12-09 2024-02-27 唐山旭华智能科技有限公司 Product quality abnormality detection method and system applied to industrial Internet

Similar Documents

Publication Publication Date Title
CN114708451A (en) Detection method and system for punching and molding of lead frame plastic package integrated circuit
JP6742554B1 (en) Information processing apparatus and electronic apparatus including the same
CN115456789B (en) Abnormal transaction detection method and system based on transaction pattern recognition
CN115834433B (en) Data processing method and system based on Internet of things technology
CN114724386A (en) Short-time traffic flow prediction method and system under intelligent traffic and electronic equipment
CN115512166B (en) Intelligent preparation method and system of lens
CN112232309A (en) Method, electronic device and storage medium for thermographic face recognition
CN114065693A (en) Method and system for optimizing layout of super-large-scale integrated circuit structure and electronic equipment
CN115600140A (en) Fan variable pitch system fault identification method and system based on multi-source data fusion
US20210073635A1 (en) Quantization parameter optimization method and quantization parameter optimization device
CN115470857A (en) Panoramic digital twin system and method for transformer substation
CN115374822A (en) Fault diagnosis method and system based on multi-level feature fusion
CN115744084A (en) Belt tensioning control system and method based on multi-sensor data fusion
CN114821519B (en) Traffic sign recognition method and system based on coordinate attention
CN116700008A (en) Uniform discharging control system controlled by activated coal feeder
CN115761900A (en) Internet of things cloud platform for practical training base management
CN117595504A (en) Intelligent monitoring and early warning method for power grid running state
CN116320459B (en) Computer network communication data processing method and system based on artificial intelligence
CN116385365A (en) Coal flow monitoring system and method based on scanning image
CN116001579A (en) Emergency power-off method and system for new energy vehicle
TW202123054A (en) Building device and building method of prediction model and monitoring system for product quality
TWI636276B (en) Method of determining earthquake with artificial intelligence and earthquake detecting system
CN116124448A (en) Fault diagnosis system and method for wind power gear box
US11875559B2 (en) Systems and methodologies for automated classification of images of stool in diapers
US20220114383A1 (en) Image recognition method and image recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220705