Disclosure of Invention
The application provides a method for detecting weak defects on an imaging chip, aiming to solve the problem that weak defects cannot be accurately detected in the prior art. By eliminating the influence of the imaging chip and the lens on weak-defect detection, and by amplifying the weak defects using the imaging angle and a gray-level stretching technique, the weak defects can be detected with a normal defect detection method, and the detection result is more intuitive.
The application provides a method for detecting weak defects on an imaging chip, which comprises the following steps:
acquiring flat field calibration coefficients FPN and PRNU;
storing flat field calibration coefficients FPN and PRNU in a camera RAM;
shooting an imaging chip to be detected, and acquiring an acquired image Img;
carrying out gray stretching on the collected image Img;
and comparing the defects according to the image after the gray stretching to obtain a final defect detection image.
Optionally, shooting the imaging chip to be detected and acquiring the collected image Img includes:
adjusting the angle between the camera imaging surface and the light-emitting surface of the light source, and acquiring images.
Optionally, adjusting the angle between the camera imaging surface and the light-emitting surface of the light source to acquire the images includes:
fixing the camera on a horizontal table, adjusting the light-emitting surface of the light source so that the included angle between it and the camera imaging surface is 45 degrees, and collecting an image Img1;
rotating the light source around the camera, adjusting the included angle between the light-emitting surface of the light source and the camera imaging surface back to 45 degrees after each 90-degree rotation, and sequentially collecting images Img2, Img3 and Img4;
calculating the acquired image Img, where Img = (Img1 + Img2 + Img3 + Img4)/4.
Optionally, the collected images Img1, Img2, Img3 and Img4 are each obtained by averaging three images captured consecutively in time.
Optionally, the gray stretching of the collected image Img is calculated by using the following formula:
new_Img = (Img − min(Img)) × 255/(max(Img) − min(Img))
where max(·) represents the maximum-value calculation, min(·) represents the minimum-value calculation, and new_Img represents the image after gray stretching.
Optionally, the step of comparing defects according to the gray-stretched image to obtain a final defect detection image includes:
calculating the average value of the gray values of the image after gray stretching;
comparing, point by point, the deviation between the gray value of each pixel in the gray-stretched image and the average value; if the deviation is greater than 8% of the average value of the stretched image, the point is considered a defect point and its gray value is marked as 255; if the deviation is less than or equal to 8% of the average value, the point is considered a normal point and its gray value is marked as 0;
and forming a final defect detection image by all the pixel points marked by the gray value.
Optionally, the obtaining flat-field calibration coefficients FPN and PRNU includes:
executing an image acquisition process to obtain a dark field image and a bright field image;
calculating a flat field calibration coefficient through the dark field image and the bright field image;
performing flat field calibration on all pixel points by using the formula Output = (Input − FPN) × PRNU, and then outputting the image; here, Input and Output represent the input data and output data of the image, respectively.
Optionally, the flat-field calibration coefficients FPN and PRNU are calculated by the following formulas:
FPN = I_dark
PRNU = max(I_light − I_dark)/(I_light − I_dark)
where I_dark and I_light represent the dark-field image and the bright-field image, respectively, and max(·) represents the maximum operation.
Optionally, executing the image acquisition process to obtain the dark field image and the bright field image includes:
respectively shooting at least three dark field images and at least three bright field images;
calculating the gray-level average of all the captured dark field images, and using the result as the dark field image for the next step;
calculating the gray-level average of all the captured bright field images, and using the result as the bright field image for the next step.
Optionally, the dark field image is an image acquired when the camera is in a darkroom, the light source is completely turned off, and the exposure time is set to its minimum; the bright field image is an image acquired when the camera shoots a flat light source at a normal exposure time such that the gray value of the image reaches 80% of its saturation value.
According to the above technical scheme, the method for detecting weak defects on an imaging chip comprises: acquiring flat field calibration coefficients FPN and PRNU; storing the flat field calibration coefficients FPN and PRNU in the camera RAM; shooting the imaging chip to be detected and acquiring the collected image Img; performing gray stretching on the collected image Img; and comparing defects according to the gray-stretched image to obtain a final defect detection image. The method eliminates the photoelectric response non-uniformity of the imaging chip and lens shading through flat field calibration; it then acquires images while adjusting the angle between the camera imaging surface and the light source, which increases the deviation between the gray values of weak defects and their surroundings in the original image; finally, gray stretching amplifies that difference, so that the defects can be detected accurately.
Detailed Description
Referring to fig. 1, a flowchart of a method for detecting weak defects on an imaging chip according to the present disclosure is shown;
as can be seen from fig. 1, an embodiment of the present application provides a method for detecting a weak defect on an imaging chip, including:
S10: acquiring flat field calibration coefficients FPN and PRNU; here, FPN represents the fixed pattern noise value and PRNU represents the photoelectric response non-uniformity coefficient. In this embodiment, step S10 performs one flat field calibration on the camera under test after the lens is mounted, so that when the camera subsequently shoots objects of the same gray level, the gray values of the output images are substantially consistent. This is a calibration of the overall performance of the camera and lens, which avoids the influence of the photoelectric non-uniformity of the imaging chip and of lens shading on image acquisition. When the camera under test acquires images again after step S10, the data is output after flat field calibration, making the data of the subsequent steps more accurate.
Referring to fig. 2, an exploded step diagram of step S10 in the method provided by the present application under one embodiment is shown;
as shown in fig. 2, further, in a possible embodiment, the step S10 can be specifically divided into the following three steps:
S11: executing an image acquisition process to obtain a dark field image and a bright field image. In this embodiment, a two-point method is used for the image acquisition process of the flat field calibration, and the dark field image and the bright field image can be acquired in various ways. For example, in one feasible embodiment, the dark field image is acquired while the camera is in a darkroom with the light source completely turned off and the exposure time set to its minimum; the bright field image is acquired by shooting a flat light source at a normal exposure time such that the gray value of the image reaches 80% of its saturation value. If the saturation value is 255, the brightness of the acquired image should be about 80% of 255, i.e., about 204. It should be noted that the uniformity of the flat light source should be at least 95% in order to acquire a high-quality bright field image.
Referring to fig. 3, an exploded step diagram of step S11 in the method provided by the present application under one embodiment is provided.
Further, as shown in fig. 3, in order to reduce the temporal noise of the image, in a preferred embodiment, the step S11 can be subdivided into the following three sub-steps:
S111: respectively shooting at least three dark field images and at least three bright field images. Although the interference of temporal noise can be suppressed more effectively as the number of shots increases, taking many shots lowers operating efficiency, so three shots of each are taken in this embodiment as a trade-off.
S112: calculating the gray-level average of all the captured dark field images, and using the result as the dark field image for the next step;
S113: calculating the gray-level average of all the captured bright field images, and using the result as the bright field image for the next step.
The purpose of acquiring images multiple times and averaging is therefore to reduce temporal noise, so that the image response curve finally used for flat field calibration is more accurate and the processing precision is improved.
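The frame averaging of steps S111 to S113 can be sketched as follows (a minimal illustration assuming NumPy is available; the function name and the 2×2 sample frames are hypothetical):

```python
import numpy as np

def average_frames(frames):
    """Average several captured frames pixel by pixel to suppress
    temporal (random) noise, as in steps S111-S113."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Three hypothetical 2x2 dark-field shots; the averaged result serves
# as the dark field image I_dark for the next step.
shots = [[[10, 12], [11, 9]],
         [[12, 10], [9, 11]],
         [[11, 11], [10, 10]]]
I_dark = average_frames(shots)  # pixel-wise mean of the three shots
```

The same helper would be applied to the bright-field shots to obtain I_light.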
As shown in fig. 2, after the dark field image and the bright field image are acquired, step S12 is executed: calculating the flat field calibration coefficients from the dark field image and the bright field image. In this step, since the response of each photosensitive unit of the image sensor is essentially linear, the response of each unit can be represented as a straight line in a coordinate graph for the purpose of flat field calibration.
specifically, the flat-field calibration coefficients FPN and PRNU are calculated by the following formulas:
wherein, IdarkAnd IlightThe dark-field image and the bright-field image are represented, respectively, and max (-) represents the maximum operation.
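Computing the per-pixel coefficients can be sketched as below. This is a minimal sketch assuming the two-point model described above, where FPN is the dark-field intercept and PRNU is a per-pixel gain that equalizes the response slopes; the function name and the 2×2 sample values are hypothetical:

```python
import numpy as np

def flat_field_coefficients(I_dark, I_light):
    """Two-point flat-field calibration coefficients (per-pixel):
        FPN  = I_dark
        PRNU = max(I_light - I_dark) / (I_light - I_dark)
    """
    I_dark = np.asarray(I_dark, dtype=np.float64)
    I_light = np.asarray(I_light, dtype=np.float64)
    span = I_light - I_dark        # per-pixel response slope (fig. 4)
    FPN = I_dark                   # intercept: dark-level offset
    PRNU = span.max() / span       # gain that equalizes all slopes
    return FPN, PRNU

# Hypothetical 2x2 dark-field and bright-field averages.
I_dark = np.array([[2.0, 4.0], [3.0, 2.0]])
I_light = np.array([[202.0, 104.0], [103.0, 202.0]])
FPN, PRNU = flat_field_coefficients(I_dark, I_light)
```

With these values, a bright-field input is mapped by Output = (Input − FPN) × PRNU to the same gray value at every pixel, which is exactly the slope-equalizing behavior described for fig. 4.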
Referring to fig. 4, a response diagram of the photosensitive unit in the process of calculating the flat field calibration coefficient according to the method provided by the present application is shown;
As can be seen from fig. 4, the fixed pattern noise value FPN characterizes the dark-field performance of the image: without illumination, the gray value of a photosensitive unit should ideally be 0, but in practice it may not be. The FPN is used to adjust the gray value of the dark field image to 0 and corresponds to the intercept of the straight line in the figure. The photoelectric response non-uniformity coefficient PRNU corrects the response inconsistency between photosensitive units and corresponds to the slope of the straight line; under the effect of PRNU, the response slopes of all photosensitive units are made equal during flat field calibration. To ensure a linear response for all photosensitive units, each unit therefore corresponds to its own set of flat field calibration coefficients.
S13: performing flat field calibration on all pixel points by using the formula Output = (Input − FPN) × PRNU, and then outputting the image; here, Input and Output represent the input data and output data of the image, respectively.
After the flat field calibration of the camera under test, step S20 is executed: storing the flat field calibration coefficients FPN and PRNU in the camera RAM. In each subsequent image acquisition, the output data of the image sensor is substituted as the Input data of step S13, yielding the final Output data of the camera for defect detection.
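Applying the stored coefficients in step S13 can be sketched as follows (assuming NumPy; the coefficient values carry over the hypothetical 2×2 example, and a uniform scene is chosen so the corrected output is flat):

```python
import numpy as np

def flat_field_correct(raw, FPN, PRNU):
    """Step S13: Output = (Input - FPN) * PRNU, applied per pixel."""
    return (np.asarray(raw, dtype=np.float64) - FPN) * PRNU

# Hypothetical coefficients stored in camera RAM after step S10.
FPN = np.array([[2.0, 4.0], [3.0, 2.0]])
PRNU = np.array([[1.0, 2.0], [2.0, 1.0]])

# A raw frame of a uniform scene; after correction every pixel agrees.
raw = np.array([[102.0, 54.0], [53.0, 102.0]])
corrected = flat_field_correct(raw, FPN, PRNU)
```

After correction the non-uniform raw response becomes a uniform gray level, which is what makes the weak-defect comparison in the later steps meaningful.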
S30: shooting the imaging chip to be detected and acquiring the collected image Img. Defects such as pits and scratches make the protective layer (e.g., glass) on the surface of the imaging chip uneven; when the chip is shot with supplementary light from different angles, this unevenness appears in the camera as differences in image gray values, so the gray value at the edge of a defect is lower than that at a normal position. A defect can therefore be identified by increasing the difference between the gray values at the defect and around it. Images Img can be captured from different angles in more than one way, such as by adjusting the position of the light source or the position of the camera; in one feasible embodiment, the images are acquired by adjusting the position of the light source, so that the camera position remains stable.
referring to fig. 5, an exploded step diagram of step S30 in one embodiment of the method provided herein is shown;
As can be seen from fig. 5, step S30 can be implemented as step S31, which can be further divided into the following sub-steps:
S311: fixing the camera on a horizontal table, adjusting the light-emitting surface of the light source so that the included angle between it and the camera imaging surface is 45 degrees, and collecting an image Img1;
S312: rotating the light source around the camera, adjusting the included angle between the light-emitting surface of the light source and the camera imaging surface back to 45 degrees after each 90-degree rotation, and sequentially collecting images Img2, Img3 and Img4;
S313: calculating the acquired image Img, where Img = (Img1 + Img2 + Img3 + Img4)/4; specifically, the gray values of the corresponding pixels in the four acquired images are averaged, and the averaged pixels form the acquired image Img.
The operation of steps S311 to S313 can be illustrated by figs. 6 and 7. Before the image Img1 is acquired, the camera is fixed, imaging surface up, on a horizontal camera platform, and the position of the light source is adjusted so that the included angle between its light-emitting surface and the camera imaging surface is 45 degrees, as shown in fig. 6. After the image Img1 is acquired, the light source is rotated from light source position 1 to light source position 2, keeping the included angle between the light-emitting surface and the camera imaging surface at 45 degrees, and the image Img2 is then acquired. By analogy, after the light source rotates through a full circle, images from the four positions are obtained. It should be noted that during rotation, light source positions 1, 2, 3 and 4 should be kept on the same horizontal plane as far as possible, so that the distance between the light source and the camera remains unchanged; otherwise, excessive differences in the gray values of the acquired images would affect the detection accuracy.
Further, in order to increase the detection accuracy, in a preferred example, the collected images Img1, Img2, Img3 and Img4 are each obtained by averaging three images captured consecutively in time, which effectively reduces the interference of random noise on the detection result.
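The acquisition of steps S311 to S313, including the preferred three-shot temporal averaging, can be sketched as one function (assuming NumPy; the function name and the tiny 1×2 sample frames are hypothetical):

```python
import numpy as np

def acquire_img(position_shots):
    """Steps S311-S313: each of the four light-source positions
    contributes the temporal average of three consecutive shots, and
    Img is the pixel-wise mean of the four position images."""
    per_position = [np.mean(np.asarray(shots, dtype=np.float64), axis=0)
                    for shots in position_shots]
    return np.mean(per_position, axis=0)  # Img = (Img1+Img2+Img3+Img4)/4

# Hypothetical 1x2 frames: 4 light-source positions x 3 shots each.
shots = [
    [[[100, 96]], [[102, 98]], [[98, 97]]],      # position 1 -> Img1
    [[[104, 100]], [[104, 100]], [[104, 100]]],  # position 2 -> Img2
    [[[96, 95]], [[96, 95]], [[96, 95]]],        # position 3 -> Img3
    [[[100, 96]], [[100, 96]], [[100, 96]]],     # position 4 -> Img4
]
Img = acquire_img(shots)
```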
S40: performing gray stretching on the collected image Img. In this embodiment, the purpose of gray stretching is to enlarge the slight gray differences of the defects in the image: the smallest gray value in the original image is mapped to 0 and the largest to 255, so that gray-value differences can be observed more intuitively.
Specifically, the gray stretching of the collected image Img is calculated by the following formula:
new_Img = (Img − min(Img)) × 255/(max(Img) − min(Img))
where max(·) represents the maximum-value calculation, min(·) represents the minimum-value calculation, and new_Img represents the image after gray stretching.
S50: performing defect detection on the gray-stretched image to obtain a final defect detection image. There are various defect detection methods, and the specific method is not limited in this embodiment.
referring to fig. 8, an exploded step diagram of step S50 in one embodiment of the method provided herein is shown;
as can be seen from fig. 8, in a possible embodiment, step S50 can be decomposed as:
s51: calculating the average value of the gray values of the image after gray stretching;
S52: comparing, point by point, the deviation between the gray value of each pixel in the gray-stretched image and the average value; if the deviation is greater than 8% of the average value of the stretched image, the point is considered a defect point and its gray value is marked as 255; if the deviation is less than or equal to 8% of the average value, the point is considered a normal point and its gray value is marked as 0. In the resulting marked image, defect points are highlighted more intuitively, which helps to locate defect positions quickly.
S53: and forming a final defect detection image by all the pixel points marked by the gray value.
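Steps S51 to S53 can be sketched as follows (assuming NumPy; the function name and the 3×3 sample image with one dark defect pixel are hypothetical):

```python
import numpy as np

def defect_map(new_Img, threshold=0.08):
    """Steps S51-S53: mark a pixel as a defect (255) when its deviation
    from the image mean exceeds 8% of that mean; otherwise mark it as
    a normal point (0)."""
    new_Img = np.asarray(new_Img, dtype=np.float64)
    mean = new_Img.mean()                    # S51: image-wide average
    deviation = np.abs(new_Img - mean)       # S52: point-by-point deviation
    return np.where(deviation > threshold * mean, 255, 0)  # S53: marked map

# Hypothetical stretched image: uniform background with one dark pixel.
new_Img = np.array([
    [200.0, 200.0, 200.0],
    [200.0, 200.0, 200.0],
    [200.0, 200.0, 120.0],
])
result = defect_map(new_Img)
```

Only the outlier pixel exceeds the 8% band around the mean, so the final detection image is black everywhere except the defect location.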
According to the above technical scheme, the method for detecting weak defects on an imaging chip comprises: acquiring flat field calibration coefficients FPN and PRNU; storing the flat field calibration coefficients FPN and PRNU in the camera RAM; shooting the imaging chip to be detected and acquiring the collected image Img; performing gray stretching on the collected image Img; and comparing defects according to the gray-stretched image to obtain a final defect detection image. The method eliminates the photoelectric response non-uniformity of the imaging chip and lens shading through flat field calibration; it then acquires images while adjusting the angle between the camera imaging surface and the light source, which increases the deviation between the gray values of weak defects and their surroundings in the original image; finally, gray stretching amplifies that difference, so that the defects can be detected accurately.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.