CN111050088B - Mechanism to calibrate imaging brightness of camera for detecting die defects - Google Patents

Mechanism to calibrate imaging brightness of camera for detecting die defects

Info

Publication number
CN111050088B
Authority
CN
China
Prior art keywords
die image
image
camera
die
gray scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911393769.3A
Other languages
Chinese (zh)
Other versions
CN111050088A (en)
Inventor
彭义
袁也
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Products Chengdu Co Ltd
Intel Corp
Original Assignee
Intel Products Chengdu Co Ltd
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Products Chengdu Co Ltd, Intel Corp filed Critical Intel Products Chengdu Co Ltd
Priority to CN201911393769.3A
Publication of CN111050088A
Application granted
Publication of CN111050088B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details

Abstract

Mechanisms for calibrating the imaging brightness of a camera that detects die defects are disclosed herein. According to one aspect of the present disclosure, a method for calibrating the imaging brightness of a camera detecting die defects includes: acquiring a die image taken by the camera; determining whether the die image matches a specified reference die image in brightness; and adjusting a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.

Description

Mechanism to calibrate imaging brightness of camera for detecting die defects
Technical Field
The present disclosure relates generally to semiconductor manufacturing processes and, more particularly, to mechanisms for calibrating the imaging brightness of a camera that detects die defects.
Background
In a typical semiconductor manufacturing process, after a wafer is fabricated, individual dies are separated from the wafer by dicing for subsequent testing and packaging. For each die, in addition to one-by-one functional and performance testing for classification, possible defects must be detected at one or more steps; finding defects as early as possible allows countermeasures to be taken and avoids adverse effects on subsequent processes. Die defect detection is usually performed by image analysis of a photograph of the die taken by a camera, but a simple and efficient means of ensuring the accuracy and reliability of such detection is still lacking.
Disclosure of Invention
In this summary, selected concepts are presented in a simplified form and are further described below in the detailed description. This summary is not intended to identify any key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
According to an aspect of the present disclosure, there is provided a method for calibrating the imaging brightness of a camera detecting die defects, the method comprising: acquiring a die image taken by the camera; determining whether the die image matches a specified reference die image in brightness; and adjusting a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
According to another aspect of the present disclosure, there is provided a computing device comprising: a memory for storing instructions; and at least one processor coupled to the memory, wherein the instructions, when executed by the at least one processor, cause the at least one processor to: acquire a die image taken by a camera; determine whether the die image matches a specified reference die image in brightness; and adjust a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
According to still another aspect of the present disclosure, there is provided an apparatus for calibrating the imaging brightness of a camera detecting die defects, the apparatus comprising: means for acquiring a die image taken by the camera; means for determining whether the die image matches a specified reference die image in brightness; and means for adjusting a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
According to yet another aspect of the disclosure, there is provided a computer-readable storage medium having instructions stored thereon, which when executed by at least one processor, cause the at least one processor to perform any of the methods described in the disclosure.
Drawings
Implementations of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to the same or similar parts:
FIG. 1 illustrates a block diagram of an exemplary system in accordance with some implementations of the present disclosure;
FIG. 2 illustrates a schematic diagram of an overall process flow in accordance with some implementations of the present disclosure;
FIG. 3 illustrates a flow diagram of an example method in accordance with some implementations of the present disclosure;
FIG. 4 illustrates a flowchart of example operations according to some implementations of the present disclosure;
FIG. 5 illustrates a block diagram of an example apparatus in accordance with some implementations of the present disclosure; and
FIG. 6 illustrates a block diagram of an example computing device, in accordance with some implementations of the present disclosure.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth. However, it is understood that implementations of the disclosure may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
Reference throughout this specification to "one implementation," "an example implementation," "some implementations," "various implementations," or the like, means that the implementation of the disclosure described may include a particular feature, structure, or characteristic, however, it is not necessary for every implementation to include the particular feature, structure, or characteristic. In addition, some implementations may have some, all, or none of the features described for other implementations.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, the operations may be performed out of the order presented. In other implementations, various additional operations may be performed and/or various operations that have been described may be omitted.
In the specification and claims, the phrase "A and/or B" may be used to denote one of the following: (A), (B), or (A and B). Similarly, the phrase "A, B and/or C" may be used to denote one of the following: (A), (B), (C), (A and B), (A and C), (B and C), or (A and B and C).
In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular implementations, "connected" is used to indicate that two or more elements are in direct physical or electrical contact with each other, and "coupled" is used to indicate that two or more elements cooperate or interact with each other, but they may or may not be in direct physical or electrical contact.
After the wafer is diced, each singulated die is inspected for possible defects at one or more operational stages. Die defects may include contamination of the die surface, such as glue or dust remaining from previous processes; chipping of the die edges, which may be produced during wafer dicing; and offset of the die relative to a designated placement location on a carrier, among others. These and other die defects need to be discovered as early as possible so that countermeasures can be taken, such as removing contamination, adjusting placement, or even discarding unsatisfactory dies, before they adversely affect or waste subsequent processing.
Die defect detection is usually performed with computer vision techniques, by image analysis of a photograph of the die taken with a camera, operating on the grayscale image corresponding to that photograph. A grayscale image can be regarded as the result of measuring the brightness of light at each pixel location, with the value of each pixel representing the corresponding gray level. Image brightness is therefore one of the crucial factors influencing die defect detection.
However, the prior art offers no effective solution for keeping the brightness of camera-captured images stable so as to ensure the accuracy and reliability of image-based die defect detection.
Fig. 1 illustrates a block diagram of an example system 100 in accordance with some implementations of the present disclosure. Exemplary system 100 may typically be a system or machine for picking up dies. As shown in fig. 1, the system 100 may include a pickup mechanism 110, a camera 120, and a control unit 130. These and other components of system 100 may be communicatively coupled to one another by wire and/or wirelessly for the transmission of various desired control and data signals.
As shown in fig. 1, system 100 may include a pick-up mechanism 110. The pick-up mechanism 110 may include electromechanical components such as a pick-up head, a flipping device, etc. for picking up a die from a source location, moving it and placing it on a specified target location as desired (e.g., under the control of the control unit 130 shown in fig. 1). For example, the pick mechanism 110 may be used to pick individual dies from the diced wafer onto designated target placement locations on a tray, the dies on the tray may later be sent to another machine for functional and performance testing one by one, and so on. For example, the pick mechanism 110 may be used to pick individual dies from a tray into designated target cavities on a carrier tape, which may then be transferred to a subsequent process, such as packaging the dies, and the like.
The system 100 may also include a camera 120. The camera 120 is used to take a picture of a die according to specified parameters/settings after the pick-up mechanism 110 has placed the die on a specified target location, and the taken picture will be used to detect die defects.
Furthermore, the system may also comprise a control unit 130, which may be implemented for example as a computing device such as a computer. The control unit 130 may be used to control the operation of the pick-up mechanism 110. Furthermore, in some implementations, the control unit 130 may also be used to control the operation of the camera 120, and may perform image analysis processing on the die image taken by the camera 120 based on computer vision techniques to detect die defects.
As previously described, the camera 120 takes photographs according to specified parameters/settings. These parameters/settings may initially be chosen to best satisfy the requirements of the computer vision algorithms for die defect detection. However, shooting locations/environments may differ, and the components of the camera 120 may age over time. For these and other reasons, parameters/settings originally considered optimal cannot guarantee that the brightness of captured die images always meets the requirements of the computer vision algorithms for detecting die defects, which in turn may cause defects to be missed or falsely reported. For example, pixels in the die image with grayscale values below a certain threshold may be considered defective by the computer vision algorithm; in that case, if the image brightness is too high, some genuinely defective pixel locations may be missed, and if the image brightness is too low, some normal pixel locations may be mistaken for defects.
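To make this sensitivity concrete, below is a minimal Python sketch of the kind of fixed-threshold rule described above; the threshold value, the synthetic image values, and the `defect_mask` helper are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def defect_mask(gray_image: np.ndarray, threshold: int = 60) -> np.ndarray:
    """Illustrative rule: pixels darker than `threshold` count as defects."""
    return gray_image < threshold

# Synthetic 8-bit die image with one artificially dark (defective) spot.
rng = np.random.default_rng(0)
die = rng.integers(150, 170, size=(64, 64), dtype=np.uint8)
die[30:34, 30:34] = 40                      # 16-pixel artificial defect

print(defect_mask(die).sum())               # 16 -> defect found

# If imaging brightness drifts upward, the same rule misses the defect:
brighter = np.clip(die.astype(int) + 40, 0, 255).astype(np.uint8)
print(defect_mask(brighter).sum())          # 0 -> defect missed
```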
According to some implementations of the present disclosure, a calibration mechanism may be deployed in the control unit 130 for calibrating the imaging brightness of the camera 120. Based on the die images that the camera 120 takes for defect detection, the calibration mechanism can promptly discover image brightness problems and compensate the imaging brightness of the camera 120 to keep it stable, thereby effectively ensuring the accuracy and reliability of image-based die defect detection. In some implementations, the calibration mechanism may be implemented by a program disposed in the control unit 130 (e.g., executed by a processor of the control unit 130).
It should be noted that the system architecture shown in fig. 1 is merely exemplary and not limiting. For example, in some implementations the control unit 130 may be deployed at the same location as the pick-up mechanism 110 and the camera 120; in other implementations, the control unit 130 may be deployed remotely with respect to the pick-up mechanism 110 and the camera 120; and so on. Further, in some implementations, the control unit 130 may be implemented as a plurality of discrete units, communicatively coupled to one another as desired, for controlling the operation of the pick-up mechanism 110, controlling the operation of the camera 120, analyzing die images captured by the camera 120 based on computer vision techniques to detect die defects, calibrating the imaging brightness of the camera 120, and so forth. Other implementations of the system architecture are possible, and the disclosure is not limited thereto.
Fig. 2 illustrates a schematic diagram of an overall process flow 200 according to some implementations of the present disclosure. As shown in fig. 2, process flow 200 may generally be a cyclical process in which, at stage 210, a camera captures a die image that will be used to detect die defects. Then, at stage 220, a brightness match check is performed using the captured die image (specifically, against a specified reference die image, as described in further detail below). For reasons of overhead and efficiency, the die image to be checked may be one die image out of a group of die images taken in the preceding stage 210, although the present disclosure is not limited thereto. Next, at stage 230, depending on the result of the check at stage 220, the camera's imaging-brightness-related settings are adjusted to achieve compensation. Thereafter, the process flow returns to stage 210, where the camera (with adjusted settings) captures die images for the next group of dies, followed again by the check of stage 220, the adjustment of stage 230, and so on in a loop. Continuous calibration and monitoring of the camera's imaging-brightness-related settings are thus realized simply and efficiently, ensuring the accuracy and reliability of die defect detection. A minimal sketch of this loop follows.
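The Python sketch below models the capture-check-adjust cycle of fig. 2 with deterministic stand-ins; `capture_die_image` and its linear exposure-to-brightness relation are hypothetical simplifications for illustration only, not the machine's actual interfaces.

```python
def capture_die_image(exposure_ms: int) -> list[int]:
    """Hypothetical capture model: mean gray level tracks exposure linearly."""
    level = min(max(150 + exposure_ms, 0), 255)
    return [level] * 1000                       # idealized, noise-free pixels

def dominant_bin(pixels: list[int], bin_size: int = 10) -> int:
    """Index of the 10-level gray-scale interval holding the most pixels."""
    counts = [0] * (255 // bin_size + 1)
    for p in pixels:
        counts[p // bin_size] += 1
    return counts.index(max(counts))

REFERENCE_BIN = 16                              # reference peak lies in [160-169]
exposure_ms = 35                                # drifted setting: images too bright
while True:
    peak = dominant_bin(capture_die_image(exposure_ms))   # stages 210 + 220
    if peak == REFERENCE_BIN:
        break                                   # brightness matches the reference
    exposure_ms -= peak - REFERENCE_BIN         # stage 230: compensate exposure
print(exposure_ms)                              # settles at 19, inside the match range
```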
Referring next to fig. 3, a flow diagram of an exemplary method 300 in accordance with some implementations of the present disclosure is shown. The exemplary method 300 may be implemented in the control unit 130 shown in fig. 1 or any similar or related entity.
The method 300 may be used to calibrate the imaging brightness of a camera that detects die defects. As shown in fig. 3, the method 300 begins at step 310, where an image of a die taken by a camera (e.g., the camera 120 shown in fig. 1) is acquired.
In some implementations, the camera 120 may be a CMOS-based camera, a CCD-based camera, or the like, although the disclosure is not so limited. The die image taken by the camera 120 will be used to detect die defects based on computer vision techniques, which typically perform detection on grayscale images. In some implementations, the camera 120 may directly capture grayscale images; alternatively, the camera 120 may capture color images, which may then be converted to grayscale for subsequent use.
In some examples, the camera 120 is used to take a picture of a die placed on a designated target location, such as a target placement location on a pallet, a target hole on a carrier tape, and so forth. Die defects detected using computer vision techniques may include, but are not limited to: smudging of the die surface, chipping of the die edges, shifting of the die relative to a designated placement location on a carrier such as a tray or carrier tape, etc., although the disclosure is not so limited.
In some implementations, an image of each die taken by the camera 120 for detecting die defects may be acquired and used for calibration as described in the present disclosure. For example, each time camera 120 takes an image of a die, the image may be used to implement the calibration mechanism described in this disclosure. In some implementations, for cost and efficiency considerations in production practice, a die image may be selected from a corresponding number of die images taken by the camera 120 for a group of dies to implement the calibration mechanism described in the present disclosure. The specific number of dies in a set as described herein may be set depending on the actual requirements, e.g. considering the number of dies per production lot, etc. Further, in some implementations, the selection described herein may be random, or may be pre-set (e.g., selecting an image in a fixed sequential position in a group of images), and so forth. The present disclosure is not limited to the specific implementations described above.
Further, in some implementations, acquiring the die image taken by the camera at step 310 may include pre-processing the die image. The pre-processing may include converting the die image to grayscale to obtain a corresponding grayscale image, which can then be used to implement the calibration mechanism described in this disclosure. The value of each pixel in the grayscale image represents a corresponding gray level. For example, when 8 bits are used to store each pixel value, 256 gray levels can be represented, denoted 0-255, where 0 may represent black (the lowest luminance) and 255 may represent white (the highest luminance).
Further, in some implementations, the pre-processing of the die image may include cropping the die image to retain an active area whose size (measured in pixels) corresponds to the size of the specified reference die image (described in further detail below), e.g., 2048x2048 pixels, thereby avoiding the effect of the remaining area (which may be regarded as noise) on the calibration mechanism according to the present disclosure. In some implementations, the reference location for the cropping operation can be any suitable point, such as the geometric center of the die in the die image, or a fixed location on a carrier (e.g., tray or carrier tape) in the die image. The present disclosure is not limited to these specific implementations.
Those skilled in the art will appreciate that the pre-processing of the die image may also include other image processing operations, and any combination of the above and other operations is possible, depending on the particular implementation requirements.
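As one possible reading of the pre-processing above, here is a short Python sketch; the ITU-R BT.601 luma weights and the geometric-center crop are illustrative choices, since the patent leaves both open.

```python
import numpy as np

def preprocess(die_image: np.ndarray, size: int = 2048) -> np.ndarray:
    """Gray a color image and crop a size x size active area about its center."""
    if die_image.ndim == 3:                        # H x W x 3 color image
        weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
        die_image = (die_image @ weights).astype(np.uint8)
    h, w = die_image.shape
    top, left = (h - size) // 2, (w - size) // 2   # crop about the geometric center
    return die_image[top:top + size, left:left + size]

color = np.random.randint(0, 256, (2448, 2448, 3), dtype=np.uint8)
print(preprocess(color).shape)                     # (2048, 2048)
```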
The method 300 then proceeds to step 320, where it is determined whether the die image matches a specified reference die image in brightness. In some implementations, the image brightness of the specified reference die image conforms to the design requirements of the computer vision algorithm for die defect detection; that is, the reference die image is prepared so as to best meet the algorithm's requirements. For example, some die defects may be artificially created with reference to the requirements and capabilities of the computer vision algorithm, and the die then photographed using specified parameters/settings of the camera 120 (e.g., settings considered optimal under current conditions), while verifying that such defects in the captured die image can be accurately identified by the computer vision algorithm. Such a die image may be selected as the specified reference die image. It should be noted that the specific factors considered in preparing the reference die image are not limited to the above examples and may be determined depending on implementation requirements and/or operator experience.
Turning now to fig. 4, a flowchart of exemplary operations 400 according to some implementations of the present disclosure is shown. Exemplary operation 400 may correspond to step 320 of the previously described exemplary method 300, i.e., determining whether the die image matches a specified reference die image in brightness.
More specifically, as shown in fig. 4, operation 400 may include step 410, in which the image gray scale range is divided into a plurality of consecutive gray scale intervals of equal size. Continuing with the previous example of storing each pixel value with 8 bits, the image gray scale range covers 256 levels, and in some implementations it may be divided with every 10 gray levels as one interval: e.g., gray levels [0-9] form the first interval, [10-19] the second, [20-29] the third, and so on. It will be appreciated by those skilled in the art that other interval sizes are possible.
Next, operation 400 may include step 420, in which the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals is counted. Taking a die image of 2048 × 2048 pixels as an example, the counting covers a total of 4194304 pixels. In some implementations of the present disclosure, for the case of storing each pixel value with 8 bits, counting the pixels falling into each 10-level gray scale interval reflects the gray scale distribution of the die image well, avoiding the adverse effects that too-large or too-small intervals would have on the calibration mechanism according to the present disclosure.
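A compact way to realize steps 410-420 is sketched below in Python, using bin size 10 as in the example above; the `interval_counts` helper is illustrative.

```python
import numpy as np

def interval_counts(gray: np.ndarray, bin_size: int = 10) -> np.ndarray:
    """Count pixels per gray-scale interval ([0-9], [10-19], ..., [250-255])."""
    # Note the last interval is slightly shorter, since 256 is not a multiple of 10.
    n_bins = -(-256 // bin_size)                  # ceiling division -> 26 bins
    return np.bincount(gray.ravel() // bin_size, minlength=n_bins)

gray = np.random.default_rng(1).integers(0, 256, (2048, 2048), dtype=np.uint8)
counts = interval_counts(gray)
print(counts.sum())                               # 4194304: every pixel counted once
```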
Operation 400 may also include step 430, in which one or more gray scale intervals of the die image containing the largest number of pixels are determined from the count of step 420. For example, for a particular die image, the gray scale interval with the largest number of pixels may be the interval with gray levels [150-159].
Operation 400 may then further include step 440, in which it is checked whether the one or more gray scale intervals of the die image determined in step 430 coincide with the one or more gray scale intervals of the reference die image that contain the largest number of pixels. Here, the number of pixels in each gray scale interval of the reference die image may be determined in the same manner as described above, and the description is not repeated. In addition, in some implementations, for reasons of efficiency, the per-interval pixel counts of the reference die image and its corresponding interval(s) containing the largest number of pixels may be predetermined, stored in memory, and fetched directly each time the check of step 440 is performed, thereby avoiding the waste of repeated computation. For example, for the specified reference die image, the gray scale interval containing the largest number of pixels may be the interval with gray levels [170-179].
Continuing the above example for the case of checking only the single gray scale interval containing the largest number of pixels: the dominant interval of the die image is [150-159], while that of the reference die image is [170-179]. The two intervals are not the same, so the check of step 440 finds them inconsistent, and it can accordingly be determined that the die image does not match the reference die image in brightness.
In addition, in some implementations, even if the gray scale intervals of the die image and the reference die image containing the largest number of pixels are not identical, it may be considered whether the difference between them is within an allowable range; if so, they may still be treated as consistent, and it may accordingly be determined that the die image matches the reference die image in brightness. For example, if the dominant gray scale intervals of the two images are not the same but are directly adjacent, it may still be determined in some cases that the die image and the reference die image match in brightness. It is to be understood that the above scenarios are exemplary only and not limiting.
Although only the case of checking the single gray scale interval containing the largest number of pixels of each of the die image and the reference die image is discussed here, a similar mechanism is equally applicable to checking more than one interval. For example, it may be checked whether the gray scale interval of the die image containing the largest number of pixels falls within the range of the two gray scale intervals of the reference die image containing the largest numbers of pixels (i.e., the intervals ranked first and second by pixel count); or it may be checked whether one of the two gray scale intervals of the die image containing the largest numbers of pixels is the same as the gray scale interval of the reference die image containing the largest number of pixels; and so on. These and other variations are also within the scope of the present disclosure.
Further, in some implementations, pixels of the die image and the reference die image whose gray scale values do not satisfy a specified condition may be ignored when determining in step 320 whether the die image matches the specified reference die image in brightness. Depending on the particular implementation, the specified condition may be that a pixel's gray scale value lies within a certain range (which may be, for example, gray levels 42-253); pixels whose values fall outside this range may be regarded as noise pixels for the calibration mechanism and thus disregarded during the counting process (e.g., in step 420), so as to avoid interference with the calibration.
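Putting steps 420-440 and the noise condition together, here is a sketch of the match check; the valid range (42-253) echoes the example above, and `tolerance_bins` realizes the adjacent-interval allowance as an assumption rather than a prescribed rule.

```python
import numpy as np

def brightness_matches(die_gray: np.ndarray, ref_counts: np.ndarray,
                       bin_size: int = 10, tolerance_bins: int = 0,
                       valid_range: tuple = (42, 253)) -> bool:
    """Compare the dominant gray-scale intervals of die and reference images."""
    vals = die_gray.ravel()
    keep = (vals >= valid_range[0]) & (vals <= valid_range[1])  # drop noise pixels
    counts = np.bincount(vals[keep] // bin_size, minlength=len(ref_counts))
    die_peak = int(counts.argmax())               # dominant interval of the die image
    ref_peak = int(ref_counts.argmax())           # precomputed for the reference image
    return abs(die_peak - ref_peak) <= tolerance_bins

ref_counts = np.zeros(26, dtype=int)
ref_counts[17] = 1                                # reference peak in [170-179]
die = np.full((2048, 2048), 155, dtype=np.uint8)  # die image peaks in [150-159]
print(brightness_matches(die, ref_counts))        # False -> settings need adjusting
```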
Further, in some implementations, when determining in step 320 whether the die image matches the specified reference die image in brightness, the gray value distribution of the die image determined from the count of step 420 may also be presented in visual form (e.g., on a graphical user interface shown on a display device). In some implementations, the gray value distribution of the pixels of the die image may be presented alongside that of the pixels of the reference die image, allowing an operator to observe the difference between the two directly and facilitating additional processing measures. The presentation may take one or more of a variety of possible forms, such as a table, a bar chart, or a pie chart, although the present disclosure is not so limited.
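The bar-chart variant of this presentation could be sketched with matplotlib as below; this is an illustration only, as the patent does not prescribe any plotting library or layout.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_distributions(die_counts: np.ndarray, ref_counts: np.ndarray) -> None:
    """Show the two interval histograms side by side for operator review."""
    x = np.arange(len(die_counts))
    plt.bar(x - 0.2, die_counts, width=0.4, label="die image")
    plt.bar(x + 0.2, ref_counts, width=0.4, label="reference die image")
    plt.xlabel("gray-scale interval index (10 levels each)")
    plt.ylabel("pixel count")
    plt.legend()
    plt.show()
```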
Although one particular implementation of step 320 of method 300 is described above in connection with fig. 4, it is to be understood that it is not the only implementation. For example, in some implementations, determining whether the die image matches a specified reference die image in brightness in step 320 may additionally or alternatively take into account the average brightness level of each of the two images, and so forth.
Returning to fig. 3, if it is determined in step 320 that the die image matches the reference die image in brightness, method 300 may end this round of execution without adjusting the settings associated with the imaging brightness of the camera. On the other hand, in response to determining in step 320 that the die image does not match the reference die image in brightness, method 300 proceeds to step 330, where a setting associated with the imaging brightness of the camera is adjusted.
In some implementations, the setting associated with the imaging brightness of the camera may include the exposure time of the camera. For example, when the brightness of the die image is determined to be high relative to the reference die image, the exposure time of the camera may be reduced accordingly; when it is determined to be low, the exposure time may be increased accordingly. The adjustment may be made according to the difference in brightness between the die image and the reference die image, which in some implementations depends on the distance between the gray scale interval(s) of the die image containing the largest number of pixels and those of the reference die image, measured in number of gray levels. In some implementations according to the present disclosure, the calibration mechanism may employ a preset mapping table that stores the correlation between the relative or absolute value of this distance and the adjustment amount of the camera's exposure time; the specific adjustment magnitude is determined by looking up the table, and an adjustment signal is sent to the camera accordingly.
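A sketch of the mapping-table lookup follows; the table entries and the sign convention (positive lengthens exposure when the die image is too dark) are assumptions for illustration, since the patent does not give concrete values.

```python
# Hypothetical table: |distance| in gray levels -> exposure-time change in ms.
EXPOSURE_ADJUST_MS = {10: 2, 20: 4, 30: 7}

def exposure_correction(die_peak: int, ref_peak: int, bin_size: int = 10) -> int:
    """Signed exposure change from the distance between dominant intervals."""
    distance = (ref_peak - die_peak) * bin_size   # positive -> die image too dark
    step = EXPOSURE_ADJUST_MS.get(min(abs(distance), 30), 0)
    return step if distance > 0 else -step

# Die image peaks in [150-159] (index 15), reference in [170-179] (index 17):
print(exposure_correction(die_peak=15, ref_peak=17))   # +4 -> lengthen exposure
```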
Further, in some implementations, the settings associated with the imaging brightness of the camera may also include other settings, such as: an aperture size of the camera, a sensitivity of the camera, and/or a brightness of a fill light provided with or coupled to the camera, among others. Similar mechanisms as previously described can be employed to determine the magnitude of adjustment for each of these other settings. Further, in some implementations, adjustments to any combination of the above settings are also possible.
Referring now to fig. 5, shown is a block diagram of an example apparatus 500 in accordance with some implementations of the present disclosure. The example apparatus 500 may be implemented in the control unit 130 shown in fig. 1 or any similar or related entity.
The apparatus 500 may be used to calibrate the imaging brightness of a camera that detects die defects. As shown in fig. 5, apparatus 500 may include a module 510 for acquiring a die image taken by the camera. The apparatus 500 may also include a module 520 for determining whether the die image matches a specified reference die image in brightness. Additionally, apparatus 500 may further include a module 530 for adjusting a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
Further, in some implementations, one or more of the above-described modules of apparatus 500 may include further modules, and/or apparatus 500 may include additional modules to perform other operations that have been described in the specification, such as described in connection with the flowchart of exemplary method 300 of fig. 3 and the flowchart of exemplary operation 400 of fig. 4. Further, in some implementations, the various modules of the apparatus 500 may also be combined or split depending on actual needs, which also fall within the scope of the present disclosure.
Those skilled in the art will appreciate that the exemplary apparatus 500 may be implemented in software, hardware, firmware, or any combination thereof.
Fig. 6 illustrates a block diagram of an exemplary computing device 600 in accordance with some implementations of the present disclosure. The exemplary computing device 600 may be implemented in the control unit 130 shown in fig. 1 or any similar or related entity.
The computing device 600 may be used to calibrate the imaging brightness of a camera that detects die defects. As shown in fig. 6, the computing device 600 may include at least one processor 610. The processor 610 may include any type of general purpose processing unit (e.g., CPU, GPU, etc.), special purpose processing unit, core, circuit, controller, etc. Computing device 600 may also include memory 620. Memory 620 may include any type of media that can be used to store data. In some implementations, the memory 620 is configured to store instructions that, when executed, cause the at least one processor 610 to perform the operations described herein, e.g., as described in connection with the flowchart of the example method 300 of fig. 3 and the flowchart of the example operation 400 of fig. 4.
In addition, in some implementations, computing device 600 may also be coupled to or equipped with one or more peripheral components, which may include, but are not limited to, a display, speakers, a mouse, a keyboard, and so forth. In addition, in some implementations, computing device 600 may also be equipped with a communication interface that may support various types of wired/wireless communication protocols to communicate with a communication network. Examples of communication networks may include, but are not limited to: Local Area Networks (LANs), Metropolitan Area Networks (MANs), Wide Area Networks (WANs), public telephone networks, the Internet, intranets, the Internet of Things, infrared networks, Bluetooth networks, Near Field Communication (NFC) networks, ZigBee networks, and the like.
Further, in some implementations, the above and other components may communicate with each other via one or more buses/interconnects, which may support any suitable bus/interconnect protocol, including Peripheral Component Interconnect (PCI), PCI Express, Universal Serial Bus (USB), Serial Attached SCSI (SAS), Serial ATA (SATA), Fibre Channel (FC), System Management Bus (SMBus), or another suitable protocol.
Those skilled in the art will appreciate that the above description of the architecture of computing device 600 is merely exemplary and not limiting, and that devices of other architectures are possible, provided that they can be used to implement the functionality described herein.
Various implementations of the disclosure may include or operate on multiple components, units, modules, instances, or mechanisms, which may be implemented in hardware, software, firmware, or any combination thereof. Examples of hardware may include, but are not limited to: devices, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include, but are not limited to: software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, Application Programming Interfaces (APIs), instruction sets, computer code segments, words, values, symbols, or any combination thereof. Whether an implementation uses hardware, software, and/or firmware may vary depending on factors such as the desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation.
Some implementations described herein may include an article of manufacture. The article of manufacture may comprise a storage medium. Examples of storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Storage media may include, but are not limited to: random Access Memory (RAM), Read Only Memory (ROM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, Compact Discs (CD), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of being used to store information. In some implementations, an article of manufacture may store executable computer program instructions that, when executed by one or more processing units, cause the processing units to perform the operations described herein. The executable computer program instructions may include any suitable type of code, for example, source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
Some exemplary implementations of the present disclosure are described below.
Example 1 may include a method for calibrating the imaging brightness of a camera detecting die defects, the method comprising: acquiring a die image taken by the camera; determining whether the die image matches a specified reference die image in brightness; and adjusting a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
Example 2 may include the subject matter of example 1, wherein determining whether the die image matches a specified reference die image in brightness comprises: dividing an image gray scale range into a plurality of consecutive gray scale intervals of equal size; counting the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals; determining, according to the count, one or more gray scale intervals of the die image containing the largest number of pixels; and checking whether the determined one or more gray scale intervals of the die image are consistent with one or more gray scale intervals of the reference die image containing the largest number of pixels.
Example 3 may include the subject matter of example 2, wherein pixels of the die image and the reference die image whose gray scale values do not satisfy a specified condition are ignored in determining whether the die image matches the specified reference die image in brightness.
Example 4 may include the subject matter of example 2, wherein determining whether the die image matches a specified reference die image in brightness further comprises: visually presenting the gray value distribution of the pixels of the die image determined from the count alongside the gray value distribution of the pixels of the reference die image.
Example 5 may include the subject matter of example 2, wherein the setting associated with the imaging brightness of the camera comprises an exposure time of the camera, and wherein adjusting the setting associated with the imaging brightness of the camera comprises: adjusting the exposure time of the camera according to a difference in brightness between the die image and the reference die image, wherein the difference depends on a distance between the determined one or more gray scale intervals of the die image and the one or more gray scale intervals of the reference die image containing the largest number of pixels.
Example 6 may include the subject matter of example 1, wherein acquiring the die image taken by the camera includes pre-processing the die image, the pre-processing including at least one of: converting the die image to grayscale to obtain a grayscale image; and cropping the die image to retain an active area therein, the active area having a size corresponding to the size of the reference die image.
Example 7 may include the subject matter of example 1, wherein acquiring die images taken by the camera includes selecting one die image from a corresponding number of die images taken by the camera for a group of dies.
Example 8 may include a computing device comprising: a memory for storing instructions; and at least one processor coupled to the memory, wherein the instructions, when executed by the at least one processor, cause the at least one processor to: acquire a die image taken by a camera; determine whether the die image matches a specified reference die image in brightness; and adjust a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
Example 9 may include the subject matter of example 8, wherein determining whether the die image matches a specified reference die image in brightness comprises: dividing an image gray scale range into a plurality of consecutive gray scale intervals of equal size; counting the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals; determining, according to the count, one or more gray scale intervals of the die image containing the largest number of pixels; and checking whether the determined one or more gray scale intervals of the die image are consistent with one or more gray scale intervals of the reference die image containing the largest number of pixels.
Example 10 may include the subject matter of example 9, wherein pixels of the die image and the reference die image whose gray scale values do not satisfy a specified condition are ignored in determining whether the die image matches the specified reference die image in brightness.
Example 11 may include the subject matter of example 9, wherein determining whether the die image matches a specified reference die image in brightness further comprises: visually presenting the gray value distribution of the pixels of the die image determined from the count alongside the gray value distribution of the pixels of the reference die image.
Example 12 may include the subject matter of example 9, wherein the setting associated with the imaging brightness of the camera comprises an exposure time of the camera, and wherein adjusting the setting associated with the imaging brightness of the camera comprises: adjusting the exposure time of the camera according to a difference in brightness between the die image and the reference die image, wherein the difference depends on a distance between the determined one or more gray scale intervals of the die image and the one or more gray scale intervals of the reference die image containing the largest number of pixels.
Example 13 may include the subject matter of example 8, wherein acquiring the die image taken by the camera includes pre-processing the die image, the pre-processing including at least one of: converting the die image to grayscale to obtain a grayscale image; and cropping the die image to retain an active area therein, the active area having a size corresponding to the size of the reference die image.
Example 14 may include the subject matter of example 8, wherein acquiring die images taken by the camera includes selecting one die image from a corresponding number of die images taken by the camera for a group of dies.
Example 15 may include an apparatus for calibrating the imaging brightness of a camera detecting die defects, the apparatus comprising: means for acquiring a die image taken by the camera; means for determining whether the die image matches a specified reference die image in brightness; and means for adjusting a setting associated with the imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness.
Example 16 may include the subject matter of example 15, wherein the means for determining whether the die image matches the specified reference die image in brightness comprises: means for dividing an image gray scale range into a plurality of consecutive gray scale intervals of equal size; means for counting the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals; means for determining, according to the count, one or more gray scale intervals of the die image containing the largest number of pixels; and means for checking whether the determined one or more gray scale intervals of the die image are consistent with one or more gray scale intervals of the reference die image containing the largest number of pixels.
Example 17 may include the subject matter of example 16, wherein pixels of the die image and the reference die image whose gray scale values do not satisfy a specified condition are ignored in determining whether the die image matches the specified reference die image in brightness.
Example 18 may include the subject matter of example 16, wherein the means for determining whether the die image matches the specified reference die image in brightness further comprises: means for visually presenting the gray value distribution of the pixels of the die image determined from the count alongside the gray value distribution of the pixels of the reference die image.
Example 19 may include the subject matter of example 16, wherein the setting associated with the imaging brightness of the camera comprises an exposure time of the camera, and wherein the means for adjusting the setting associated with the imaging brightness of the camera comprises: means for adjusting the exposure time of the camera according to a difference in brightness between the die image and the reference die image, wherein the difference depends on a distance between the determined one or more gray scale intervals of the die image and the one or more gray scale intervals of the reference die image containing the largest number of pixels.
Example 20 may include the subject matter of example 15, wherein the means for acquiring the die image taken by the camera comprises means for pre-processing the die image, the pre-processing comprising at least one of: converting the die image to grayscale to obtain a grayscale image; and cropping the die image to retain an active area therein, the active area having a size corresponding to the size of the reference die image.
Example 21 may include the subject matter of example 15, wherein the means for acquiring die images taken by the camera comprises means for selecting one die image from a corresponding number of die images taken by the camera for a group of dies.
Example 22 may include a computer-readable storage medium having instructions stored thereon which, when executed by at least one processor, cause the at least one processor to perform any of the methods described in this disclosure.
Example 22 may include a computer-readable storage medium having instructions stored thereon, which when executed by at least one processor, cause the at least one processor to perform any of the methods described in this disclosure.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.

Claims (13)

1. A method for calibrating imaging brightness of a camera detecting die defects, comprising:
acquiring a die image taken by the camera;
determining whether the die image matches a specified reference die image in brightness; and
in response to determining that the die image does not match the reference die image in brightness, adjusting a setting associated with an imaging brightness of the camera,
wherein determining whether the die image matches a specified reference die image in brightness comprises:
dividing an image gray scale range into a plurality of consecutive gray scale intervals of equal size;
counting the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals;
determining, according to the count, the gray scale interval of the die image containing the largest number of pixels; and
checking whether the determined gray scale interval of the die image is the same as the gray scale interval of the reference die image containing the largest number of pixels,
wherein the setting associated with the imaging brightness of the camera comprises an exposure time of the camera, and wherein adjusting the setting associated with the imaging brightness of the camera comprises: adjusting the exposure time of the camera according to a difference in brightness between the die image and the reference die image, wherein the difference depends on a distance between the determined gray scale interval of the die image and the gray scale interval of the reference die image containing the largest number of pixels.
2. The method of claim 1, wherein pixels of the die image and the reference die image whose gray scale values do not satisfy a specified condition are ignored in determining whether the die image matches the specified reference die image in brightness.
3. The method of claim 1, wherein determining whether the die image matches a specified reference die image in brightness further comprises:
visually presenting the gray value distribution of pixels of the die image determined from the count alongside the gray value distribution of pixels of the reference die image.
4. The method of claim 1, wherein acquiring the die image taken by the camera comprises pre-processing the die image, the pre-processing comprising at least one of:
converting the die image to grayscale to obtain a grayscale image; and
cropping the die image to retain an active area therein, the active area having a size corresponding to the size of the reference die image.
5. The method of claim 1, wherein acquiring die images taken by the camera comprises selecting one die image from a corresponding number of die images taken by the camera for a group of dies.
6. A computing device, comprising:
a memory for storing instructions; and
at least one processor coupled to the memory, wherein the instructions, when executed by the at least one processor, cause the at least one processor to:
acquire a die image taken by a camera;
determine whether the die image matches a specified reference die image in brightness; and
in response to determining that the die image does not match the reference die image in brightness, adjust a setting associated with an imaging brightness of the camera,
wherein determining whether the die image matches a specified reference die image in brightness comprises:
dividing an image gray scale range into a plurality of consecutive gray scale intervals of equal size;
counting the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals;
determining, according to the count, the gray scale interval of the die image containing the largest number of pixels; and
checking whether the determined gray scale interval of the die image is the same as the gray scale interval of the reference die image containing the largest number of pixels,
wherein the setting associated with the imaging brightness of the camera comprises an exposure time of the camera, and wherein adjusting the setting associated with the imaging brightness of the camera comprises: adjusting the exposure time of the camera according to a difference in brightness between the die image and the reference die image, wherein the difference depends on a distance between the determined gray scale interval of the die image and the gray scale interval of the reference die image containing the largest number of pixels.
7. The computing device of claim 6, wherein pixels of the die image and the reference die image whose gray scale values do not satisfy a specified condition are ignored in determining whether the die image matches the specified reference die image in brightness.
8. The computing device of claim 6, wherein determining whether the die image matches a specified reference die image in brightness further comprises:
visually presenting the gray value distribution of pixels of the die image determined from the count alongside the gray value distribution of pixels of the reference die image.
9. The computing device of claim 6, wherein acquiring the die image taken by the camera comprises pre-processing the die image, the pre-processing comprising at least one of:
converting the die image to grayscale to obtain a grayscale image; and
cropping the die image to retain an active area therein, the active area having a size corresponding to the size of the reference die image.
10. The computing device of claim 6, wherein acquiring die images taken by the camera comprises selecting one die image from a corresponding number of die images taken by the camera for a group of dies.
11. An apparatus for calibrating imaging brightness of a camera detecting die defects, the apparatus comprising:
means for acquiring a die image taken by the camera;
means for determining whether the die image matches a specified reference die image in brightness; and
means for adjusting a setting associated with an imaging brightness of the camera in response to determining that the die image does not match the reference die image in brightness,
wherein the means for determining whether the die image matches a specified reference die image in brightness comprises:
means for dividing an image gray scale range into a plurality of consecutive gray scale intervals of equal size;
means for counting the number of pixels of the die image whose gray scale values fall into each of the plurality of gray scale intervals;
means for determining, according to the count, the gray scale interval of the die image containing the largest number of pixels; and
means for checking whether the determined gray scale interval of the die image is the same as the gray scale interval of the reference die image containing the largest number of pixels,
wherein the setting associated with the imaging brightness of the camera comprises an exposure time of the camera, and wherein the means for adjusting the setting associated with the imaging brightness of the camera comprises: means for adjusting the exposure time of the camera according to a difference in brightness between the die image and the reference die image, wherein the difference depends on a distance between the determined gray scale interval of the die image and the gray scale interval of the reference die image containing the largest number of pixels.
12. A computer-readable storage medium having stored thereon instructions, which when executed by at least one processor, cause the at least one processor to perform the method of any one of claims 1-5.
13. A computer program product comprising instructions which, when executed by at least one processor, cause the at least one processor to carry out the method according to any one of claims 1-5.
CN201911393769.3A 2019-12-30 2019-12-30 Mechanism to calibrate imaging brightness of camera for detecting die defects Active CN111050088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911393769.3A CN111050088B (en) 2019-12-30 2019-12-30 Mechanism to calibrate imaging brightness of camera for detecting die defects

Publications (2)

Publication Number Publication Date
CN111050088A CN111050088A (en) 2020-04-21
CN111050088B 2021-12-21

Family

ID=70241733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911393769.3A Active CN111050088B (en) 2019-12-30 2019-12-30 Mechanism to calibrate imaging brightness of camera for detecting die defects

Country Status (1)

Country Link
CN (1) CN111050088B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418540B (en) * 2022-01-19 2022-11-15 揭阳市科和电子实业有限公司 Triode manufacturing and crystal fixing process thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101013250A (en) * 2006-01-30 2007-08-08 索尼株式会社 Exposure control apparatus and image pickup apparatus
CN101783888A (en) * 2010-03-23 2010-07-21 中国科学院西安光学精密机械研究所 Automatic exposure method based on analogous column diagram
CN104580925A (en) * 2014-12-31 2015-04-29 安科智慧城市技术(中国)有限公司 Image brightness controlling method, device and camera
CN105847708A (en) * 2016-05-26 2016-08-10 武汉大学 Image-histogram-analysis-based automatic exposure adjusting method and system for linear array camera
CN109725499A (en) * 2017-10-30 2019-05-07 台湾积体电路制造股份有限公司 Defect inspection method and defect detecting system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726540B2 (en) * 2017-10-17 2020-07-28 International Business Machines Corporation Self-similarity analysis for defect detection on patterned industrial objects


Also Published As

Publication number Publication date
CN111050088A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
EP3785603B1 (en) Machine learning-based fundus image detection method, apparatus, and system
US10758119B2 (en) Automatic fundus image capture system
CN106920219B (en) Article defect detection method, image processing system and computer readable recording medium
US11501428B2 (en) Method, apparatus and system for detecting fundus image based on machine learning
US20130202188A1 (en) Defect inspection method, defect inspection apparatus, program product and output unit
US10701352B2 (en) Image monitoring device, image monitoring method, and recording medium
CN111630568B (en) Electronic device and control method thereof
KR101941585B1 (en) Embedded system for examination based on artificial intelligence thereof
JP2012142935A (en) Night scene light source detection device and night scene light source detection method
US20150116543A1 (en) Information processing apparatus, information processing method, and storage medium
CN111050088B (en) Mechanism to calibrate imaging brightness of camera for detecting die defects
CN109489560B (en) Linear dimension measuring method and device and intelligent terminal
KR20190067439A (en) Control value setting apparatus and method of semiconductor vision inspection system based on deep learning
US8797443B2 (en) Method for checking camera
WO2008120182A2 (en) Method and system for verifying suspected defects of a printed circuit board
KR20150068884A (en) Semiconductor inspecting method, semiconductor inspecting apparatus and semiconductor manufacturing method
KR101559338B1 (en) System for testing camera module centering and method for testing camera module centering using the same
JP2006229626A (en) Defective pixel detecting method
CN111161211B (en) Image detection method and device
JP2009017158A (en) Camera inspection device
CN114387249A (en) Detection method, detection device, equipment and storage medium
EP3979637A1 (en) Calibration method
KR100605226B1 (en) Apparatus and method for detecting foreign material in digital camera module
JP2007059858A (en) Chip image inspection method and its system
JP2020162116A (en) Method of classifying and correcting image sensor defects utilizing defective-pixel information from color channels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant