CN116256366A - Chip defect detection method, detection system and storage medium - Google Patents

Chip defect detection method, detection system and storage medium

Info

Publication number
CN116256366A
CN116256366A (application CN202310134291.2A)
Authority
CN
China
Prior art keywords
training
chip
deep learning
image
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310134291.2A
Other languages
Chinese (zh)
Inventor
刘文斌
曹广忠
梁芳萍
胡海
李国泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Raybow Optoelectronics Co ltd
Shenzhen University
Original Assignee
Shenzhen Raybow Optoelectronics Co ltd
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Raybow Optoelectronics Co., Ltd. and Shenzhen University
Priority to CN202310134291.2A
Publication of CN116256366A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, based on image processing techniques
    • G01N2021/8893 Scan or image signal processing specially adapted therefor, based on image processing techniques providing a video image and a processed signal for helping visual decision
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses a chip defect detection method, a chip defect detection system, and a storage medium. The method is applied to a detection system comprising an image acquisition module, a motion control module, and a detection device, and includes the following steps: placing at least one chip to be detected on the motion control module; controlling, by the motion control module, the at least one chip to move to a preset position; acquiring, by the image acquisition module, a surface image of the chip at the preset position; and detecting the surface image with the detection device to obtain a detection result. By replacing manual visual inspection with machine vision, detection quality and efficiency are effectively improved while labor and cost are reduced.

Description

Chip defect detection method, detection system and storage medium
Technical Field
The present disclosure relates to the field of semiconductor laser chip detection, and in particular, to a method and system for detecting a chip defect, and a storage medium.
Background
Currently, most enterprises in the semiconductor laser industry rely on manual visual inspection to detect surface defects on semiconductor laser chips. Because the chips are millimeter-scale while their defects are typically only microns in size, inspection requires close coordination between the human eye and a microscope, and both accuracy and throughput are limited by the inspector's experience. Manual inspection is therefore costly and labor-intensive, and it is prone to missed detections and false detections.
Disclosure of Invention
In view of the above, the present application provides a chip defect detection method, a detection system, and a storage medium to solve the above technical problems.
The first aspect of the present application provides a method for detecting a chip defect, which is applied to a detection system, wherein the detection system includes an image acquisition module, a motion control module and a detection device, and the detection method includes: placing at least one chip to be detected through the motion control module; controlling at least one chip to be detected to move to a preset position through the motion control module; acquiring a surface image of the chip to be detected located in the preset position through the image acquisition module; and detecting the surface image of the chip to be detected through the detection equipment to obtain a detection result.
In some embodiments, the step of detecting, by the detection device, the surface image of the chip to be detected to obtain a detection result includes: building a deep learning framework; acquiring a plurality of sample images and dividing them into training sample images and verification sample images based on a preset ratio; setting corresponding training sample image labels based on the training sample images, and inputting the training sample images and their labels into the deep learning framework; setting preset parameters in the deep learning framework and performing deep learning on the training sample images to obtain a final training weight and final training parameters, where the preset parameters include a first preset parameter and a second preset parameter; based on the final training parameters, performing deep learning on the verification sample images through the deep learning framework to obtain a verification weight; judging whether the verification weight is the same as the final training weight; and, in response to the verification weight being the same as the final training weight, setting the final training parameters as preset test parameters.
In some embodiments, the step of setting preset parameters in the deep learning framework and performing deep learning on the training sample images to obtain a final training weight and final training parameters includes: setting the first preset parameter in the deep learning framework and performing deep learning on the training sample images to obtain first training labels corresponding to the training sample images; obtaining a first training weight by comparing the first training labels with the training sample image labels; setting the second preset parameter in the deep learning framework and performing deep learning on the training sample images to obtain second training labels corresponding to the second preset parameter; obtaining a second training weight by comparing the second training labels with the training sample image labels; judging whether the first training weight is greater than the second training weight; and, in response to the first training weight being greater than the second training weight, setting the first training weight as the final training weight and the first training parameter corresponding to the first training weight as the final training parameters.
In some embodiments, the step of determining whether the first training weight is greater than the second training weight further comprises: and setting the second training weight as a final training weight and setting a second training parameter corresponding to the second training weight as a final training parameter in response to the first training weight being smaller than or equal to the second training weight.
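The compare-and-keep logic of the two embodiments above can be sketched in a few lines of Python. This is only an illustration of the selection rule (the first candidate is kept only when its weight is strictly greater, otherwise the second becomes final); all names are hypothetical:

```python
def pick_final(first, second):
    """Compare two (training weight, training parameters) pairs and keep
    the better one. The first pair wins only on a strictly greater weight;
    on a tie, the second pair becomes the final one, as in the claims."""
    (w1, p1), (w2, p2) = first, second
    return (w1, p1) if w1 > w2 else (w2, p2)
```

For example, `pick_final((0.9, "params_a"), (0.8, "params_b"))` keeps the first pair, while equal weights fall through to the second.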
In some embodiments, the step of obtaining the verification weight by performing deep learning on the verification sample images through the deep learning framework based on the final training parameters includes: setting corresponding verification sample image labels based on the verification sample images, and inputting the verification sample images and their labels into the deep learning framework; setting the final training parameters in the deep learning framework and performing deep learning on the verification sample images to obtain verification labels corresponding to the final training parameters; and comparing the verification labels with the verification sample image labels to obtain the verification weight.
In some embodiments, the step of setting the final training parameters as preset test parameters in response to the verification weight being the same as the final training weight includes: acquiring a plurality of test sample images; setting the preset test parameters in the deep learning framework and performing deep learning on the test sample images to obtain a test result; calculating the accuracy of the test result and judging whether the accuracy is greater than a first threshold; setting the preset test parameters as final test parameters in response to the accuracy of the test result being greater than the first threshold; and detecting the surface image of the chip to be detected through the deep learning framework based on the final test parameters to obtain the detection result.
In some embodiments, the step of acquiring a plurality of sample images includes: acquiring a plurality of original images; and correcting the original images into the sample images via brightness adjustment and projection adjustment.
A second aspect of the present application provides a detection system comprising: the image acquisition module is used for acquiring an image of the chip to be detected; the motion control module is arranged on one side of the image acquisition module and is used for moving at least one chip to be detected to a preset position; the detection equipment is respectively connected with the motion control module and the image acquisition module and is used for controlling the motion control module to move at least one chip to be detected to a preset position; the detection device acquires the image from the image acquisition module to obtain a detection result based on the image.
In some embodiments, the image acquisition module comprises: a telecentric lens module for acquiring images of the chips to be detected at the same magnification; a camera module, arranged on one side of the telecentric lens module along a first direction, for acquiring and storing the image of the chip to be detected; a coaxial point light source module, arranged on one side of the telecentric lens module along a second direction, for emitting an illumination light source, where the first direction and the second direction are perpendicular to each other; and an annular light source module, arranged on the side of the telecentric lens module far from the camera module along the first direction, for emitting the illumination light source.
In some embodiments, the motion control module comprises: the first moving guide rail is used for driving the first moving sliding block to reciprocate on the first moving guide rail; the first moving slide block is arranged on the first moving guide rail along a first direction and is used for driving the loading platform to reciprocate along the first moving guide rail; the first servo motor is arranged on one side of the first moving slide block along the second direction and used for driving the first moving guide rail, so that the first moving slide block moves back and forth on the first moving guide rail, and the second direction is perpendicular to the first direction.
In some embodiments, the motion control module further comprises: the mounting bracket is fixed on one side of the first moving slide block, which is far away from the first moving guide rail, and is perpendicular to the first moving slide block and used for fixing the second moving guide rail; the second moving guide rail is fixed on one side of the mounting bracket, which is far away from the first moving slide block, and is used for driving the second moving slide block to reciprocate on the second moving guide rail; the second moving slide block is arranged on the second moving guide rail along a third direction and is used for driving the loading platform to reciprocate along the second moving guide rail; the second servo motor is arranged on one side of the second moving slide block along the fourth direction and is used for driving the second moving guide rail to enable the second moving slide block to reciprocate on the second moving guide rail, and the third direction is perpendicular to the fourth direction; the loading platform is fixed on one side, far away from the second moving guide rail, of the second moving slide block and is used for placing at least one chip to be detected.
A third aspect of the present application provides a detection system, including a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory, so as to implement the method for detecting a chip defect in the first aspect.
A fourth aspect of the present application provides a non-transitory computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the method for detecting a chip defect in the first aspect described above.
According to the chip defect detection method, detection system, and storage medium of the present application, at least one chip to be detected is placed on the motion control module, which moves it to a preset position; the image acquisition module then acquires a surface image of the chip at the preset position, and the surface image is detected to obtain a detection result. Replacing manual visual inspection with machine vision effectively improves detection quality and efficiency while reducing labor and cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; other drawings may be derived from them by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a flow chart of a method for detecting chip defects according to the present application;
FIG. 2 is a flow chart of an embodiment of the step S14 in FIG. 1;
FIG. 3 is a flowchart of the step S24 in FIG. 2;
FIG. 4 is a flowchart of an embodiment of step S25 in FIG. 2;
FIG. 5 is a flowchart illustrating an embodiment after the step S27 in FIG. 2;
FIG. 6 is a schematic diagram of an embodiment of a detection system of the present application;
FIG. 7 is a schematic diagram of another embodiment of the detection system of the present application;
FIG. 8 is a schematic structural view of an embodiment of an image acquisition module of the present application;
FIG. 9 is a schematic diagram of an embodiment of a motion control module of the present application;
FIG. 10 is a schematic diagram of a frame of another embodiment of the detection system of the present application;
FIG. 11 is a schematic diagram of a framework of one embodiment of the non-transitory computer readable storage medium of the present application.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings. It is specifically noted that the following examples are only for illustration of the present application, but do not limit the scope of the present application. Likewise, the following embodiments are only some, but not all, of the embodiments of the present application, and all other embodiments obtained by one of ordinary skill in the art without inventive effort are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the present application, it should be noted that, unless explicitly stated and limited otherwise, the terms "mounted," "disposed," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or made via an intermediate medium. The specific meanings of these terms in this application will be understood by those skilled in the art according to the specific circumstances.
In the description of the present application, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects. Further, "a plurality" herein means two or more. The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set formed by A, B, and C.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
Currently, most enterprises in the semiconductor laser industry rely on manual visual inspection to detect surface defects on semiconductor laser chips. Because the chips are millimeter-scale while their defects are typically only microns in size, inspection requires close coordination between the human eye and a microscope, and both accuracy and throughput are limited by the inspector's experience. Manual inspection is therefore costly and labor-intensive, and it is prone to missed detections and false detections.
In order to solve the above-mentioned problems, the present application proposes a method for detecting a chip defect, please refer to fig. 1, fig. 1 is a flow chart of a method for detecting a chip defect of the present application. The method can be applied to a detection system, and the detection system comprises an image acquisition module, a motion control module, a multifunctional control module and detection equipment.
In some possible implementations, the method may be implemented by way of a processor invoking computer readable instructions stored in a memory. Specifically, the method may include the steps of:
step S11: at least one chip to be detected is placed by a motion control module.
Optionally, the motion control module may be provided with a loading platform on which at least one chip to be detected is placed. In some embodiments, the loading platform may hold a plurality of chip cartridges, each containing a plurality of chips to be detected. For example, the loading platform may hold 4 chip cartridges, each containing 30 chips. It should be noted that these numbers are merely exemplary, and the present application is not limited thereto.
Step S12: and controlling at least one chip to be detected to move to a preset position through the motion control module.
Optionally, the motion control module may include a first motion slider, a second motion slider, a first motion guide rail, and a second motion guide rail. The first motion slider may be disposed on the first motion guide rail along a first direction and drives the loading platform to reciprocate along the first motion guide rail; the second motion slider may be disposed on the second motion guide rail along a third direction and drives the loading platform to reciprocate along the second motion guide rail, where the first direction and the third direction may be perpendicular to each other. In an application scenario, the detection device may establish a coordinate system based on the first and third directions and calculate the position coordinates of the preset position: the coordinate along the first direction is the first position coordinate, and the coordinate along the third direction is the second position coordinate. The detection device calculates the first and second position coordinates of the preset position; the motion control module then moves the first motion slider along the first direction to the first position coordinate and the second motion slider along the third direction to the second position coordinate, thereby moving the at least one chip to be detected to the preset position.
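The coordinate calculation above can be illustrated with a toy layout. Everything here is a hypothetical assumption for illustration only: the patent does not specify a grid layout, pitch, or cartridge geometry.

```python
def preset_coordinates(chip_index, origin=(0.0, 0.0), pitch=(2.0, 2.0), cols=30):
    """Hypothetical layout: chips sit on a regular grid, `cols` per row,
    with a fixed pitch (mm) along the first and third directions.
    Returns (first position coordinate, second position coordinate)."""
    row, col = divmod(chip_index, cols)
    first = origin[0] + col * pitch[0]   # target along the first direction
    second = origin[1] + row * pitch[1]  # target along the third direction
    return first, second
```

The motion control module would then drive the first slider to `first` and the second slider to `second` for the selected chip.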
Step S13: and acquiring the surface image of the chip to be detected positioned in the preset position through an image acquisition module.
Optionally, the image acquisition module may include a telecentric lens module, a camera module, a coaxial point light source module, and an annular light source module. The telecentric lens module is used for acquiring images of the chips to be detected with the same magnification; the camera module is arranged at one side of the telecentric lens module and is used for acquiring and storing images of the chip to be detected; the coaxial point light source module is arranged on one side of the telecentric lens module, which is far away from the camera module, and is used for emitting an illumination light source; the annular light source module is arranged on one side of the coaxial point light source module, which is far away from the telecentric lens module, and is used for emitting an illumination light source. The coaxial point light source module and the annular light source module jointly emit light sources to provide illumination of surface images of the chip to be detected.
In some embodiments, the detection device may collect the surface image of the chip to be detected by controlling the image collection module, and store the collected surface image.
Step S14: and detecting the surface image of the chip to be detected through detection equipment to obtain a detection result.
Optionally, the detection device may comprise a computer processing system, specifically one including a memory and a processor coupled to each other. In some embodiments, the computer processing system may be connected to the image acquisition module, receive the image data acquired by the image acquisition module, process and analyze the image data, perform defect detection, and identify the serial number of the chip to be detected.
In this embodiment, at least one chip to be detected is placed on the motion control module, which moves it to the preset position; the image acquisition module acquires the surface image of the chip at the preset position, and the surface image is then detected to obtain a detection result. Replacing manual visual inspection with machine vision effectively improves detection quality and efficiency while reducing labor and cost.
Referring to fig. 2, fig. 2 is a flow chart illustrating an embodiment of step S14 in fig. 1. Step S14 includes the steps of:
step S21: and (5) constructing a deep learning frame.
Optionally, the deep learning framework may include a YOLOv5 network structure.
The YOLOv5 network structure comprises four parts: an input end, a Backbone, a Neck, and an output end. At the input end, multiple sample images can be spliced together by random scaling, random cropping, and random arrangement, which greatly enriches the learning samples for deep learning and makes the detection results more accurate.
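The input-end splicing can be shown with a minimal numpy sketch. This is not the actual YOLOv5 Mosaic implementation: random scaling and cropping are reduced to a fixed nearest-neighbour resize, only the random arrangement is kept, and the 64-pixel tile size is an arbitrary assumption.

```python
import numpy as np

def mosaic(imgs, size=64, rng=None):
    """Tile four sample images into one 2x2 mosaic after a crude
    nearest-neighbour resize; the tile order is randomised."""
    if rng is None:
        rng = np.random.default_rng(0)
    canvas = np.zeros((2 * size, 2 * size), dtype=np.uint8)
    for slot, idx in enumerate(rng.permutation(4)):
        img = imgs[idx]
        rows = np.arange(size) * img.shape[0] // size  # nearest-neighbour row map
        cols = np.arange(size) * img.shape[1] // size  # nearest-neighbour column map
        y, x = (slot // 2) * size, (slot % 2) * size
        canvas[y:y + size, x:x + size] = img[np.ix_(rows, cols)]
    return canvas
```

Feeding four distinct chip images produces a single 128x128 training canvas containing all four, so each training sample exposes the network to several chips at once.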
The Backbone and the Neck comprise several convolution modules and function models; the convolution modules contain a number of preset parameters, and continually adjusting these parameters makes the detection results more accurate. In some embodiments, the Backbone may include a CSP (Cross Stage Partial) structure, which can enhance the learning ability of the network, eliminate computational bottlenecks, and reduce memory cost.
Optionally, in the YOLOv5 network structure, the preset parameters in the network can be adjusted by combining the backpropagation algorithm with stochastic gradient descent to obtain an optimized YOLOv5 network model.
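The parameter-adjustment idea can be shown in miniature. This is a hedged sketch of a single gradient-descent update applied to a toy scalar loss, not the actual YOLOv5 optimiser or its hyperparameters:

```python
def sgd_step(w, grad, lr=0.1):
    """One stochastic-gradient-descent update: step the weight against
    the gradient that backpropagation computed."""
    return w - lr * grad

# Toy illustration: minimise the scalar loss (w - 3)^2, whose gradient is 2(w - 3).
w = 0.0
for _ in range(200):
    w = sgd_step(w, 2.0 * (w - 3.0))
```

After 200 updates the weight converges to the minimiser w = 3; a real network repeats exactly this update for every parameter, with gradients supplied by backpropagation.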
Optionally, the deep learning framework may also include a VGG-16 network. In VGG-16 network, transfer learning can be performed to extract features of sample images.
Step S22: a plurality of sample images are acquired and the sample images are divided into training sample images and verification sample images based on a preset ratio.
The sample images may include surface images of defective chips and surface images of normal chips. In some embodiments, a plurality of original images may be acquired first and then corrected into sample images by brightness adjustment and projection adjustment.
Optionally, an image binarization method can be used for brightness adjustment. Image binarization converts a color image into a black-and-white image: the gray value of each pixel in the original image is set to either 0 or 255, so the whole image shows a distinct black-and-white effect. Binarization greatly reduces the amount of data in the original image and highlights the outline of the target.
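A minimal numpy sketch of the binarization described above; the threshold of 128 is an arbitrary assumption, and in practice a library routine such as OpenCV's `cv2.threshold` would typically be used:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set every pixel to 0 or 255 by comparing it with the threshold,
    so the whole image shows a pure black-and-white effect."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)
```

Applied to a grayscale chip image, dark background pixels collapse to 0 and bright chip pixels to 255, leaving only the outline information the detector needs.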
Optionally, a perspective transformation method can be selected when performing projection adjustment. Perspective transformation uses the condition that the perspective center, image point, and target point are collinear to rotate the perspective plane around the perspective axis by a certain angle according to the law of perspective rotation; this changes the original bundle of projecting rays while keeping the projected geometric figure on the perspective plane unchanged. In an application scene, when the image acquisition module captures the surface image of the chip to be detected, the chip is not strictly placed in the charging box and always has a certain, irregular inclination angle, so the acquired image can be projection-adjusted using the perspective transformation method.
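The perspective correction can be sketched by solving the 3x3 transform that maps four reference points of the tilted chip image to their upright positions; in practice a library routine (e.g. OpenCV's getPerspectiveTransform) would typically be used, so the direct linear transformation below is only illustrative:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 perspective-transform matrix mapping four source
    points to four target points via the direct linear transformation
    (minimal sketch; assumes the four correspondences are exact)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)          # null-space vector of A
    return H / H[2, 2]                # normalize scale

def warp_point(H, pt):
    """Apply the perspective transform to one point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```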
In some embodiments, a background noise removal algorithm may be employed to remove background noise from the brightness-adjusted and projection-adjusted sample image. For example, a median filtering algorithm may be employed to remove background noise from the brightness-adjusted and projection-adjusted sample image.
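A naive median filter of the kind mentioned above might look like this; real systems would use an optimized library routine, and the 3x3 kernel size is an illustrative assumption:

```python
import numpy as np

def median_filter(img, k=3):
    """Remove salt-and-pepper background noise with a k x k median
    filter; edge pixels are handled by replicate padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```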
In one implementation scenario, step S21 and step S22 may be performed sequentially, for example, step S21 first and then step S22, or step S22 first and then step S21. In another implementation scenario, step S21 and step S22 may be performed simultaneously. The order may be set according to the practical application and is not limited herein.
Step S23: and setting corresponding training sample image labels based on the training sample images, and inputting the training sample images and the training sample image labels into the deep learning framework.
Optionally, marking tools can be selected to set various defect labels for the training sample images. Specifically, corresponding chip defects may be marked in the surface image of the defective chip, wherein the types of chip defects include cracking, contamination, scratches, gold wire breakage, and collapse. In some embodiments, feature information corresponding to the training sample image may be obtained, the database is queried according to the feature information, and a corresponding training sample image tag is set for the feature information.
Step S24: setting preset parameters in a deep learning frame, and performing deep learning on the training sample image based on the deep learning frame to obtain final training weights and final training parameters, wherein the preset parameters comprise a first preset parameter and a second preset parameter.
In some embodiments, preset parameters are input into the deep learning framework, deep learning is performed on the training sample images through the framework, and corresponding training labels are set; the training labels are then compared with the training sample image labels, and the accuracy of the training labels, namely the training weight, is calculated.
Optionally, in the optimized deep learning framework, the preset parameters may be input multiple times to obtain the corresponding training weights. For example, the preset parameters may include a first preset parameter and a second preset parameter. Obtaining a first training weight by inputting a first preset parameter; and obtaining a second training weight by inputting a second preset parameter.
After the corresponding training weights are obtained, they are compared, the optimal training weight is obtained by screening, and the final training parameters corresponding to the optimal training weight are then obtained. For example, if comparing the first training weight with the second training weight shows that the first training weight is optimal, then the first training weight is the final training weight and the first preset parameter is the final training parameter.
It should be noted that, the number of preset parameters and the number of training weights in the present application are not limited, and the "first" and the "second" are only exemplary contents.
Step S25: and based on the final training parameters, performing deep learning on the verification sample image through a deep learning framework to obtain verification weights.
Before the verification sample image is deep-learned by the deep-learning framework, a corresponding verification sample image tag may be set based on the verification sample image. Optionally, marking tools can be selected to set various defect labels for verifying the sample image. Specifically, corresponding chip defects may be marked in the surface image of the defective chip, wherein the types of chip defects include cracking, contamination, scratches, gold wire breakage, and collapse. In some embodiments, feature information corresponding to the verification sample image may be obtained, the database is queried according to the feature information, and a corresponding verification sample image tag is set for the feature information.
In some embodiments, the final training parameters are input into a deep learning framework, the deep learning framework is used for deep learning of the verification sample image, and corresponding verification tags are set; and comparing the verification label with the verification sample image label, and calculating to obtain the accuracy of the final training parameters, namely the verification weight.
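The "weight" used throughout this method is an accuracy computed by comparing predicted labels with ground-truth labels; a minimal sketch (the function name is illustrative):

```python
def label_accuracy(predicted, expected):
    """Compare labels produced by the deep learning framework with the
    ground-truth sample image labels and return the fraction that
    match, i.e. the accuracy used as the 'weight' in this method."""
    if not expected:
        return 0.0
    hits = sum(p == e for p, e in zip(predicted, expected))
    return hits / len(expected)
```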
Step S26: and judging whether the verification weight and the final training weight are the same.
And determining whether the final training parameters can be set as preset test parameters by comparing the verification weights and the final training weights.
Optionally, a verification threshold may be set, and a verification difference is obtained by calculating the difference between the verification weight and the final training weight. The verification difference is compared with the verification threshold: if the verification difference is less than or equal to the verification threshold, the verification weight and the training weight are treated as the same; if the verification difference is greater than the verification threshold, they are treated as different.
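The threshold comparison described above can be sketched as follows; the 0.02 verification threshold is an illustrative assumption:

```python
def weights_match(verify_weight, train_weight, threshold=0.02):
    """Treat the verification weight and the final training weight as
    'the same' when their absolute difference is within the
    verification threshold."""
    return abs(verify_weight - train_weight) <= threshold
```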
Step S27: and setting the final training parameters as preset test parameters in response to the same verification weight and training weight.
In some embodiments, preset test parameters are set in a deep learning framework, and the test sample image is subjected to deep learning based on the deep learning framework to obtain a test result.
Referring to fig. 3, fig. 3 is a flow chart illustrating an embodiment of step S24 in fig. 2. Step S24 includes the steps of:
step S31: setting a first preset parameter in a deep learning frame, and performing deep learning on the training sample image based on the deep learning frame to obtain a first training label corresponding to the training sample image.
The preset parameters can be set manually according to actual application scenes, and can also be calculated according to related functions.
Step S32: and comparing the first training label with the training sample image label to obtain first training weight.
Specifically, a labeling tool can be selected to label the training sample image, so as to obtain a training sample image label corresponding to the training sample image.
Step S33: setting a second preset parameter in the deep learning frame, and performing deep learning on the training sample image based on the deep learning frame to obtain a second training label corresponding to the second preset parameter.
Step S33 in this embodiment is performed in the same manner as step S31 and will not be described here again.
Step S34: and comparing the second training label with the training sample image label to obtain second training weight.
Step S34 in this embodiment is performed in the same manner as step S32 and will not be described here again.
Step S35: and judging whether the first training weight is larger than the second training weight.
If the first training weight is greater than the second training weight, go to step S36; if the first training weight is less than or equal to the second training weight, the process proceeds to step S37.
Step S36: and in response to the first training weight being greater than the second training weight, setting the first training weight as a final training weight, and setting a first training parameter corresponding to the first training weight as a final training parameter.
Step S37: and setting the second training weight as a final training weight and setting a second training parameter corresponding to the second training weight as a final training parameter in response to the first training weight being less than or equal to the second training weight.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of step S25 in fig. 2. Step S25 includes the steps of:
step S41: and setting a corresponding verification sample image label based on the verification sample image, and inputting the verification sample image and the verification sample image label into the deep learning framework.
Optionally, marking tools can be selected to set various defect labels for verifying the sample image. Specifically, corresponding chip defects may be marked in the surface image of the defective chip, wherein the types of chip defects include cracking, contamination, scratches, gold wire breakage, and collapse. In some embodiments, feature information corresponding to the verification sample image may be obtained, the database is queried according to the feature information, and a corresponding verification sample image tag is set for the feature information.
Step S42: and setting final training parameters in the deep learning framework, and performing deep learning on the verification sample image based on the deep learning framework to obtain a verification tag corresponding to the final training parameters.
In some embodiments, the verification sample image may include a first verification sample image and a second verification sample image. For example, there are 200 verification sample images, and the verification sample images are divided into two groups, namely a first verification sample image and a second verification sample image. Wherein, the first verification sample image has 100 pieces, and the second verification sample image has 100 pieces.
Optionally, final training parameters are set in a deep learning frame, deep learning is performed on the first verification sample image based on the deep learning frame, and corresponding first verification tags are set; and performing deep learning on the second verification sample image based on the deep learning framework, and setting a corresponding second verification tag.
It should be noted that, the number of preset verification sample images and the number of verification tags in the present application are not limited, and the "first" and the "second" are only exemplary contents.
Step S43: and comparing the verification tag with the verification sample image tag to obtain verification weight.
In some embodiments, the accuracy of the final training parameter in the first verification sample image, that is, the first verification weight, is calculated by comparing the first verification label with the first verification sample image label; and comparing the second verification label with the second verification sample image label, and calculating to obtain the accuracy of the final training parameters in the second verification sample image, namely the second verification weight. In some embodiments, an average of the first validation weight and the second validation weight is calculated and taken as the validation weight.
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment after step S27 in fig. 2. Step S27 is followed by the steps of:
step S51: a plurality of test sample images is acquired.
The test sample image may include a surface image of a defective chip and a surface image of a normal chip, among others. In some embodiments, a plurality of original test images may be acquired first, brightness adjustment and projection adjustment may be performed on the original test images, and the original test image samples may be corrected to test sample images.
Optionally, a perspective transformation method can be selected when projection adjustment is performed, and the perspective transformation method uses the condition that three points of a perspective center, an image point and a target point are collinear, so that a perspective surface rotates around a perspective axis by a certain angle according to a perspective rotation law, an original projection light beam is damaged, and transformation of a projection geometric figure on the perspective surface can be kept unchanged. In an application scene, when the image acquisition module detects the surface image of the chip to be detected, the chip to be detected is not strictly placed in the charging box and always has a certain and irregular inclination angle, so that the acquired image can be subjected to projection adjustment by using a perspective transformation method.
In some embodiments, a background noise removal algorithm may be employed to remove background noise from the brightness-adjusted and projection-adjusted test sample image. For example, a median filtering algorithm may be employed to remove background noise from the brightness-adjusted and projection-adjusted test sample image.
Step S52: and setting preset test parameters in the deep learning framework, and performing deep learning on the test sample image based on the deep learning framework to obtain a test result.
In some embodiments, preset test parameters are input into a deep learning framework, and the deep learning framework is used for deep learning of the test sample images to obtain various defects of the corresponding test sample images, namely test results.
Step S53: and calculating the precision of the test result, and judging whether the test result is larger than a first threshold value.
If the test result is greater than the first threshold, the process proceeds to step S54.
For example, based on the test sample image, it is determined that the accuracy of the test result is 96%, and the first threshold is 90%, and the test result is greater than the first threshold, and the process proceeds to step S54.
Step S54: and setting the preset test parameters as final test parameters in response to the detection accuracy of the test result being greater than a first threshold.
For example, if, based on the test sample image, the accuracy of the test result is 96% and the first threshold is 90%, the accuracy is greater than the first threshold and the preset test parameters are set as the final test parameters.
Step S55: and detecting the surface image of the chip to be detected through the deep learning framework based on the final training parameters to obtain a detection result.
Optionally, the detection result includes an image of the chip to be detected, the various defects of the chip, and the serial number of the chip. In some embodiments, the various defects may be marked with different colors on the image of the chip to be detected, where a first color marks defects with first-level priority that need to be treated first, for example chip cracking and contamination, and a second color marks defects with second-level priority, such as scratches and gold wire breakage.
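The color marking by priority might be organized as a small lookup. The specific colors, and the priority assigned to "collapse" (which the text does not state), are illustrative assumptions:

```python
# Illustrative mapping of defect types to priority levels and marking
# colors; all values here are assumptions for demonstration.
DEFECT_PRIORITY = {
    "cracking": 1, "contamination": 1,       # first color: treat first
    "scratch": 2, "gold_wire_breakage": 2,   # second color
    "collapse": 2,                           # priority assumed
}
PRIORITY_COLOR = {1: (255, 0, 0), 2: (255, 255, 0)}  # placeholder RGB values

def mark_color(defect):
    """Return the marking color for a detected defect type."""
    return PRIORITY_COLOR[DEFECT_PRIORITY[defect]]
```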
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the detection system of the present application. The detection system 60 includes an image acquisition module 61 and a motion control module 62. The image acquisition module 61 is used for acquiring an image of a chip to be detected, and the motion control module 62 is arranged on one side of the image acquisition module 61 and used for moving at least one chip to be detected to a preset position.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another embodiment of the detection system of the present application. The detection system 60 includes an image acquisition module 61, a motion control module 62, a detection device 63, a light source controller 64, a first servo motor controller 65, a second servo motor controller 66, a motion control board card 67, and a system power supply 68.
Wherein the image acquisition module 61 is arranged at one side of the motion control module 62; the detection device 63 is connected with the light source controller 64, the image acquisition module 61 and the motion control board 67; the light source controller 64 is connected with the image acquisition module 61; the first servo motor controller 65 is connected with the motion control module 62; the second servo motor controller 66 is connected with the motion control module 62; the motion control board 67 is connected with the first servo motor controller 65 and the second servo motor controller 66; the system power supply 68 is connected to the motion control board card 67.
Specifically, the motion control module 62 is used for moving at least one chip to be detected to a preset position; the image acquisition module 61 is used for acquiring an image of the chip to be detected; the detection device 63 is used for controlling the motion control module 62 to move at least one chip to be detected to the preset position; the motion control board card 67 is used for driving the first servo motor controller 65 and the second servo motor controller 66; the light source controller 64 is used for driving the coaxial point light source module and the annular light source module; the first servo motor controller 65 is used for controlling the operation of the first servo motor; and the second servo motor controller 66 is used for controlling the operation of the second servo motor.
In the present embodiment, the detection apparatus 63 controls the motion control module 62 to move at least one chip to be detected to a preset position, and acquires an image from the image acquisition module 61 to obtain a detection result based on the image.
Alternatively, a coordinate system may be established, and a position coordinate of the preset position may be calculated, where the position coordinate based on the first direction may be a first position coordinate, and the position coordinate based on the second direction may be a second position coordinate.
In some embodiments, the motion control module 62 includes a first servomotor for driving the chip to be inspected to move to the first position coordinates and a second servomotor; the second servo motor is used for driving the chip to be detected to move to a second position coordinate. Specifically, the system power supply 68 provides a driving power supply for the motion control board 67, the detection device 63 sends an operation instruction to the motion control board 67, and the first servo motor controller 65 and the second servo motor controller 66 are driven by the motion control function drive in the motion control board 67, so that the first servo motor is driven to move the chip to be detected to the first position coordinate, and the second servo motor is driven to move the chip to be detected to the second position coordinate.
In some embodiments, the image acquisition module 61 acquires an image of the chip to be detected, and sends the acquired image data of the chip to be detected to the detection device 63, and the detection device 63 analyzes the image data to obtain a detection result.
Optionally, the image acquisition module 61 may include an on-axis point light source module and an annular light source module, and the light source controller 64 is connected to the on-axis point light source module and the annular light source module, and may control the brightness of the light source in the on-axis point light source module and the annular light source module and control the illumination state of the light source to be illumination or extinction.
The detection device 63 may be a host computer, and may include an industrial personal computer 69 and a display screen 601. The industrial personal computer 69 is connected with the light source controller 64, the image acquisition module 61, and the motion control board 67 and is used for controlling them; the display screen 601 is connected with the industrial personal computer 69 and is used for displaying detection results.
Referring to fig. 6 and 8, fig. 8 is a schematic structural diagram of an embodiment of an image capturing module of the present application. The image acquisition module 61 includes a telecentric lens module 72, a camera module 71, a coaxial point light source module 73, and an annular light source module 74.
The camera module 71 is disposed on one side of the telecentric lens module 72 along a first direction, the annular light source module 74 is disposed on one side of the telecentric lens module 72 far away from the camera module 71 along the first direction, and the coaxial point light source module 73 is disposed on one side of the telecentric lens module 72 along a second direction, wherein the first direction and the second direction are perpendicular.
Specifically, the telecentric lens module 72 is used to obtain an image of the chip to be detected at a constant magnification. It is designed mainly to correct the parallax of a traditional industrial lens and can ensure that the magnification of the obtained image does not change within a certain object-distance range, which is a very important property when the measured object is not on a single object plane. Optionally, a coaxial point light source interface may be provided on one side of the telecentric lens module 72 for mounting the coaxial point light source module 73.
The camera module 71 is used to acquire and store images of the chip to be inspected. Optionally, the camera module 71 may use a CCD (Charge-Coupled Device) camera; a CCD is a semiconductor device capable of converting an optical image into a digital signal. The tiny photosensitive elements implanted on the CCD are called pixels; the greater the number of pixels a CCD contains, the higher the resolution of the picture it provides. The CCD acts like film, but converts image pixels into digital signals: it contains many orderly arranged capacitors that sense light and convert the image into digital signals, and under the control of an external circuit each small capacitor can transfer its charge to an adjacent capacitor.
The coaxial point light source module 73 and the annular light source module 74 are both used to emit illumination and together provide illumination of the full image. The annular light source module 74 may be a high-density LED (Light-Emitting Diode) array that irradiates the surface of the chip to be detected in a conical pattern, illuminating a small area by diffuse reflection to highlight the edges, height variations, and surface details of the chip. In some embodiments, the annular light source module 74 may include a diffuse reflection plate to make the light more uniform. The coaxial point light source module 73 can eliminate shadows caused by uneven object surfaces, thereby reducing interference. In some embodiments, the coaxial point light source module 73 may be designed with a beam splitter to reduce light loss and improve imaging definition. The beam splitter is a piece of coated glass: one or more thin films are coated on the surface of optical glass, and when a beam of light is projected onto the coated glass, it is split into two or more beams by reflection and refraction.
Referring to fig. 6 and 9, fig. 9 is a schematic structural diagram of an embodiment of a motion control module of the present application. The motion control module 62 includes a first motion rail 81, a first motion slide 82, a first servo motor 83, a mounting bracket 84, a second motion rail 85, a second motion slide 86, a second servo motor 87, and a loading platform 88.
Wherein, the first moving slide 82 is disposed on the first moving guide rail 81 along the first direction; the first servo motor 83 is disposed on one side of the first moving slide 82 along the second direction; the mounting bracket 84 is fixed on the side of the first moving slide 82 away from the first moving guide rail 81; the second moving guide rail 85 is fixed on the side of the mounting bracket 84 away from the first moving slide 82; the second moving slide 86 is disposed on the second moving guide rail 85 along the third direction; the second servo motor 87 is disposed on one side of the second moving slide 86 along the fourth direction; and the loading platform 88 is fixed on the side of the second moving slide 86 away from the second moving guide rail 85.
Specifically, the first moving rail 81 is used for driving the first moving slider 82 to reciprocate on the first moving rail 81; the first moving slide 82 is used for driving the loading platform 88 to reciprocate along the direction of the first moving guide rail 81; the first servo motor 83 is used for driving the first moving guide rail 81 to enable the first moving sliding block 82 to reciprocate on the first moving guide rail 81, wherein the second direction is perpendicular to the first direction; the mounting bracket 84 is arranged perpendicular to the first moving slide 82 and is used for fixing the second moving guide rail 85; the second moving guide rail 85 is used for driving the second moving slide block 86 to reciprocate on the second moving guide rail 85; the second moving slide block 86 is used for driving the loading platform 88 to reciprocate along the second moving guide rail 85; the second servo motor 87 is used for driving the second moving guide rail 85 to enable the second moving slide block 86 to reciprocate on the second moving guide rail 85, wherein the third direction is perpendicular to the fourth direction; the loading platform 88 is used for placing at least one chip to be tested.
In an application scenario, the detection device 63 may establish a coordinate system based on the first direction and the third direction and calculate the position coordinates of the preset position, where the position coordinate along the first direction may be a first position coordinate and the position coordinate along the third direction may be a second position coordinate. The detection device 63 calculates the first position coordinate and the second position coordinate of the preset position; the motion control module 62 controls the first moving slide 82 to move to the first position coordinate along the first direction and controls the second moving slide 86 to move to the second position coordinate along the third direction, so that the motion control module 62 moves the at least one chip to be detected to the preset position.
Referring to fig. 10, fig. 10 is a schematic diagram of a frame of another embodiment of the detection system of the present application. The detection system 90 includes a memory 91 and a processor 92 coupled to each other, the processor 92 being configured to execute program instructions stored in the memory 91 to implement the steps of any of the above-described embodiments of the method for detecting chip defects. In one particular implementation, the detection system 90 may include, but is not limited to, mobile devices such as a notebook computer or a tablet computer, which is not limited herein.
Specifically, the processor 92 is configured to control itself and the memory 91 to implement the steps of any of the above-described embodiments of the method for detecting a chip defect. The processor 92 may also be referred to as a CPU (Central Processing Unit). The processor 92 may be an integrated circuit chip with signal processing capabilities. The processor 92 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 92 may be jointly implemented by integrated circuit chips.
Referring to FIG. 11, FIG. 11 is a schematic diagram illustrating an embodiment of a non-volatile computer-readable storage medium of the present application. The non-transitory computer readable storage medium 100 stores program instructions 1001 executable by a processor, the program instructions 1001 for implementing the steps of the method embodiment for detecting a chip defect of any one of the above.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of various embodiments is intended to highlight differences between the various embodiments, which may be the same or similar to each other by reference, and is not repeated herein for the sake of brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all or part of the technical solution contributing to the prior art or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.

Claims (13)

1. A method for detecting chip defects, characterized in that the method is applied to a detection system, wherein the detection system comprises an image acquisition module, a motion control module and detection equipment, and the detection method comprises the following steps:
placing at least one chip to be detected through the motion control module;
controlling at least one chip to be detected to move to a preset position through the motion control module;
acquiring a surface image of the chip to be detected located in the preset position through the image acquisition module;
and detecting the surface image of the chip to be detected through the detection equipment to obtain a detection result.
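The flow of claim 1 can be sketched as a simple pipeline. This is a minimal illustration only: the `MotionControl`, `ImageAcquisition` and `Detector` classes below are hypothetical stand-ins for the hardware modules, not an actual implementation from the application.

```python
# Hypothetical stand-ins for the motion control, image acquisition
# and detection modules named in claim 1.

class MotionControl:
    def place(self, chip_id):
        pass  # place the chip to be detected on the loading platform

    def move_to_preset(self, chip_id):
        pass  # move the chip to the preset position

class ImageAcquisition:
    def capture(self, chip_id):
        return [[0] * 4 for _ in range(4)]  # dummy 4x4 surface image

class Detector:
    def detect(self, image):
        # toy rule: defective if any pixel deviates from background value 0
        return any(p != 0 for row in image for p in row)

def run_detection(chips, motion, camera, detector):
    """Claim-1 flow: place, move to preset position, image, detect."""
    results = {}
    for chip_id in chips:
        motion.place(chip_id)
        motion.move_to_preset(chip_id)
        image = camera.capture(chip_id)
        results[chip_id] = detector.detect(image)
    return results
```

The loop mirrors the four claimed steps in order; in a real system each stand-in would wrap the servo controller, the camera driver and the trained model respectively.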
2. The method for detecting chip defects according to claim 1, wherein the step of detecting the surface image of the chip to be detected by the detection equipment to obtain a detection result comprises the following steps:
building a deep learning framework;
acquiring a plurality of sample images, and dividing the sample images into training sample images and verification sample images based on a preset proportion;
setting a corresponding training sample image label based on the training sample image, and inputting the training sample image and the training sample image label into the deep learning framework;
setting preset parameters in the deep learning framework, and performing deep learning on the training sample image based on the deep learning framework to obtain final training weights and final training parameters, wherein the preset parameters comprise a first preset parameter and a second preset parameter;
based on the final training parameters, performing deep learning on the verification sample image through the deep learning framework to obtain verification weights;
judging whether the verification weight is the same as the final training weight;
and setting the final training parameters as preset test parameters in response to the verification weight being the same as the final training weight.
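The claim-2 procedure (split samples by a preset ratio, train, then verify that the verification score matches the training score) can be sketched as below. Here "weight" is read as the accuracy-style score the claim compares; the function names and the 80/20 ratio are illustrative assumptions, not taken from the application.

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Divide sample images into training and verification sets by a preset ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def score(predicted_labels, true_labels):
    """Fraction of predicted labels that match the sample image labels."""
    hits = sum(p == t for p, t in zip(predicted_labels, true_labels))
    return hits / len(true_labels)
```

With these pieces, the claimed judgment is simply `score(verify_preds, verify_labels) == final_training_weight`: the final training parameters are promoted to preset test parameters only when the two scores agree.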
3. The method for detecting chip defects according to claim 2, wherein the step of setting preset parameters in the deep learning framework, and performing deep learning on the training sample image based on the deep learning framework to obtain the final training weights and the final training parameters comprises the following steps:
setting a first preset parameter in the deep learning framework, and performing deep learning on the training sample image based on the deep learning framework to obtain a first training label corresponding to the training sample image;
obtaining a first training weight by comparing the first training label with the training sample image label;
setting a second preset parameter in the deep learning framework, and performing deep learning on the training sample image based on the deep learning framework to obtain a second training label corresponding to the second preset parameter;
obtaining a second training weight by comparing the second training label with the training sample image label;
judging whether the first training weight is greater than the second training weight;
and in response to the first training weight being greater than the second training weight, setting the first training weight as the final training weight and setting the first training parameter corresponding to the first training weight as the final training parameter.
4. The method for detecting chip defects according to claim 3, wherein the step of judging whether the first training weight is greater than the second training weight further comprises:
and setting the second training weight as a final training weight and setting a second training parameter corresponding to the second training weight as a final training parameter in response to the first training weight being smaller than or equal to the second training weight.
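The selection rule of claims 3 and 4 reduces to a comparison between the two scored runs, with ties resolved in favor of the second parameter set (since claim 4 covers "smaller than or equal to"). A minimal sketch, with hypothetical `(weight, params)` pairs:

```python
def select_final(first, second):
    """Claims 3-4: pick the (training_weight, training_params) pair with
    the larger weight; on a tie, keep the second pair as claim 4 specifies."""
    first_weight, first_params = first
    second_weight, second_params = second
    if first_weight > second_weight:
        return first_weight, first_params
    return second_weight, second_params
```

In practice each pair would come from one full training run under one preset parameter set, scored against the training sample image labels.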
5. The method for detecting chip defects according to claim 2, wherein the step of obtaining the verification weight comprises the following steps:
setting a corresponding verification sample image label based on the verification sample image, and inputting the verification sample image and the verification sample image label into the deep learning framework;
setting the final training parameters in the deep learning framework, and performing deep learning on the verification sample image based on the deep learning framework to obtain verification labels corresponding to the final training parameters;
and comparing the verification labels with the verification sample image labels to obtain the verification weight.
6. The method for detecting chip defects according to claim 2, wherein the step of setting the final training parameters as preset test parameters in response to the verification weight being the same as the final training weight comprises the following steps:
acquiring a plurality of test sample images;
setting the preset test parameters in the deep learning framework, and performing deep learning on the test sample images based on the deep learning framework to obtain a test result;
calculating the precision of the test result, and judging whether the precision is greater than a first threshold;
setting the preset test parameters as final test parameters in response to the precision of the test result being greater than the first threshold;
and detecting the surface image of the chip to be detected through the deep learning framework based on the final test parameters to obtain a detection result.
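The acceptance step of claim 6 is a precision check against a threshold. A minimal sketch, assuming precision is the fraction of test predictions that match the true labels; the threshold value below is purely illustrative:

```python
def accept_test_params(test_labels, true_labels, threshold=0.95):
    """Claim 6: promote preset test parameters to final test parameters
    only when test precision exceeds the first threshold."""
    precision = sum(p == t for p, t in zip(test_labels, true_labels)) / len(true_labels)
    return precision > threshold, precision
```

Note the claimed comparison is strictly greater-than, so a run that exactly meets the threshold is not accepted.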
7. The method of claim 2, wherein the step of acquiring a plurality of sample images comprises:
acquiring a plurality of original images;
the original image is corrected to the sample image based on brightness adjustment and projection adjustment.
8. A detection system, the detection system comprising:
the image acquisition module is used for acquiring an image of the chip to be detected;
the motion control module is arranged on one side of the image acquisition module and is used for moving at least one chip to be detected to a preset position;
the detection equipment is connected to the motion control module and the image acquisition module respectively, and is used for controlling the motion control module to move at least one chip to be detected to a preset position;
the detection equipment acquires the image from the image acquisition module and obtains a detection result based on the image.
9. The detection system of claim 8, wherein the image acquisition module comprises:
the telecentric lens module is used for acquiring images of the chips to be detected at the same magnification;
the camera module is arranged on one side of the telecentric lens module along the first direction and is used for acquiring and storing the image of the chip to be detected;
the coaxial point light source module is arranged on one side of the telecentric lens module along a second direction and is used for emitting an illumination light source, wherein the first direction and the second direction are perpendicular to each other;
the annular light source module is arranged on one side, far away from the camera module, of the telecentric lens module along a first direction and is used for emitting the illumination light source.
10. The detection system of claim 8, wherein the motion control module comprises:
the first moving guide rail is used for driving the first moving slide block to reciprocate on the first moving guide rail;
the first moving slide block is arranged on the first moving guide rail along a first direction and is used for driving the loading platform to reciprocate along the first moving guide rail;
the first servo motor is arranged on one side of the first moving slide block along the second direction and is used for driving the first moving guide rail, so that the first moving slide block reciprocates on the first moving guide rail, wherein the second direction is perpendicular to the first direction.
11. The detection system of claim 10, wherein the motion control module further comprises:
the mounting bracket is fixed on one side of the first moving slide block, which is far away from the first moving guide rail, and is perpendicular to the first moving slide block and used for fixing the second moving guide rail;
the second moving guide rail is fixed on one side of the mounting bracket, which is far away from the first moving slide block, and is used for driving the second moving slide block to reciprocate on the second moving guide rail;
the second moving slide block is arranged on the second moving guide rail along a third direction and is used for driving the loading platform to reciprocate along the second moving guide rail;
the second servo motor is arranged on one side of the second moving slide block along the fourth direction and is used for driving the second moving guide rail to enable the second moving slide block to reciprocate on the second moving guide rail, and the third direction is perpendicular to the fourth direction;
the loading platform is fixed on one side, far away from the second moving guide rail, of the second moving slide block and is used for placing at least one chip to be detected.
12. A detection system comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method of detecting a chip defect according to any one of claims 1 to 7.
13. A non-transitory computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method of detecting a chip defect according to any one of claims 1 to 7.
CN202310134291.2A 2023-02-10 2023-02-10 Chip defect detection method, detection system and storage medium Pending CN116256366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310134291.2A CN116256366A (en) 2023-02-10 2023-02-10 Chip defect detection method, detection system and storage medium

Publications (1)

Publication Number Publication Date
CN116256366A true CN116256366A (en) 2023-06-13

Family

ID=86678897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310134291.2A Pending CN116256366A (en) 2023-02-10 2023-02-10 Chip defect detection method, detection system and storage medium

Country Status (1)

Country Link
CN (1) CN116256366A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116953486A (en) * 2023-09-18 2023-10-27 深圳华海达科技有限公司 Chip testing jig and chip detection method
CN116953486B (en) * 2023-09-18 2023-12-05 深圳华海达科技有限公司 Chip testing jig and chip detection method

Similar Documents

Publication Publication Date Title
US11774735B2 (en) System and method for performing automated analysis of air samples
CN105021628B (en) Method for detecting surface defects of optical fiber image inverter
CN110044405B (en) Automatic automobile instrument detection device and method based on machine vision
CN111612737B (en) Artificial board surface flaw detection device and detection method
CN102590218A (en) Device and method for detecting micro defects on bright and clean surface of metal part based on machine vision
KR20080080998A (en) Defect inspection device for inspecting defect by image analysis
SG173068A1 (en) Methods for examining a bonding structure of a substrate and bonding structure inspection devices
KR102108956B1 (en) Apparatus for Performing Inspection of Machine Vision and Driving Method Thereof, and Computer Readable Recording Medium
CN114136975A (en) Intelligent detection system and method for surface defects of microwave bare chip
CN104777172A (en) Quick and intelligent defective optical lens detection device and method
CN109932369A (en) A kind of abnormity display panel testing method and device
CN116256366A (en) Chip defect detection method, detection system and storage medium
CN112333443A (en) Lens performance detection system and method
CN113030095A (en) Polaroid appearance defect detecting system
CN111426693A (en) Quality defect detection system and detection method thereof
CN102854195B (en) Method for detecting defect coordinates on color filter
CN206311047U (en) A kind of product profile tolerance testing equipment
CN110148141B (en) Silk-screen optical filter small piece detection counting method and device
CN109622404B (en) Automatic sorting system and method for micro-workpieces based on machine vision
Du et al. An automated optical inspection (AOI) platform for three-dimensional (3D) defects detection on glass micro-optical components (GMOC)
CN114219758A (en) Defect detection method, system, electronic device and computer readable storage medium
CN212031322U (en) Panel material edge defect detecting system
CN219799258U (en) Chip defect detection system
CN111330869A (en) Visual detection method and system for on-line grading of lens
CN114286078A (en) Camera module lens appearance inspection method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination