CN109525840B - Method for detecting weak defects on imaging chip - Google Patents

Method for detecting weak defects on imaging chip

Info

Publication number
CN109525840B
Authority
CN
China
Prior art keywords
image
gray
camera
img
light source
Prior art date
Legal status
Active
Application number
CN201811548239.7A
Other languages
Chinese (zh)
Other versions
CN109525840A (en)
Inventor
郭慧
姚毅
Current Assignee
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Priority date
Filing date
Publication date
Application filed by Luster LightTech Co Ltd filed Critical Luster LightTech Co Ltd

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach


Abstract

The application discloses a method for detecting weak defects on an imaging chip. The method comprises: obtaining flat field calibration coefficients FPN and PRNU; storing the coefficients in the camera RAM; shooting the imaging chip to be detected and acquiring a collected image Img; performing gray stretching on the collected image Img; and comparing defects in the gray-stretched image to obtain a final defect detection image. The method first eliminates the photoelectric response non-uniformity of the imaging chip and the lens shading through flat field calibration, then acquires images while adjusting the angle between the camera imaging surface and the light source, which increases the gray deviation between weak defects and their surroundings in the original image, and finally amplifies that difference through gray stretching, so that the defects can be detected accurately.

Description

Method for detecting weak defects on imaging chip
Technical Field
The application relates to the technical field of defect detection, in particular to a method for detecting weak defects on an imaging chip.
Background
The core device of a camera is its imaging chip, essentially an image sensor. During use, the image sensor exploits the photoelectric conversion of its photosensitive devices to convert the optical signal on the photosensitive surface into a proportional electrical signal, which is then turned into a digital image by analog-to-digital conversion. The surface of the imaging chip is usually covered with a layer of transparent material, such as glass, to protect the circuitry.
However, owing to the production process, transportation, material aging and other causes, this protective layer often carries tiny defects such as pits and scratches, which are hard to perceive yet directly degrade the imaging quality. Because the gray value of such a defect differs only slightly from that of its surroundings, and because the imaging chip exhibits photoelectric response non-uniformity, the detection methods and devices in the prior art cannot detect these weak defects accurately and effectively; in addition, the influence of the lens on image uniformity greatly increases the difficulty of defect detection.
Disclosure of Invention
The application provides a method for detecting weak defects on an imaging chip, aiming to solve the problem that weak defects cannot be accurately detected in the prior art. By eliminating the influence of the imaging chip and the lens on weak-defect detection, and by amplifying the weak defects through the imaging angle and gray stretching, the defects can then be detected with an ordinary defect detection method, and the detection result is more intuitive.
The application provides a method for detecting weak defects on an imaging chip, which comprises the following steps:
acquiring flat field calibration coefficients FPN and PRNU;
storing flat field calibration coefficients FPN and PRNU in a camera RAM;
shooting an imaging chip to be detected, and acquiring an acquired image Img;
carrying out gray stretching on the collected image Img;
and comparing the defects according to the image after the gray stretching to obtain a final defect detection image.
Optionally, the shooting of the imaging chip to be detected and the acquiring of the collected image Img comprise:
adjusting the angle between the camera imaging surface and the light-emitting surface of the light source to acquire images.
Optionally, the adjusting the angle between the camera imaging surface and the light emitting surface of the light source to acquire the image includes:
fixing the camera on a horizontal table, adjusting the light-emitting surface of the light source so that it makes a 45-degree angle with the camera imaging surface, and collecting an image Img1;
rotating the light source around the camera in 90-degree steps, keeping the 45-degree angle between the light-emitting surface and the camera imaging surface at each position, and sequentially collecting images Img2, Img3 and Img4;
calculating the collected image Img, where Img = (Img1 + Img2 + Img3 + Img4)/4.
Optionally, the captured images Img1, Img2, Img3 and Img4 are obtained by averaging three images continuously captured in time, respectively.
Optionally, the gray stretching of the collected image Img is calculated using the following formula:
new_Img = (Img - min(Img)) / (max(Img) - min(Img)) * 255
here, max(·) represents the maximum value calculation, min(·) represents the minimum value calculation, and new_Img represents the image after gray stretching.
Optionally, the step of comparing the defects according to the gray-level stretched image to obtain a final defect detection image includes:
calculating the average value of the gray values of the image after gray stretching;
comparing, point by point, the deviation of the gray value of each pixel in the gray-stretched image from the average value; if the deviation is greater than 8% of the mean of the stretched image, the point is considered a defect point and its gray value is marked as 255; if the deviation is less than or equal to 8% of the image mean, the point is considered a normal point and its gray value is marked as 0;
and forming a final defect detection image by all the pixel points marked by the gray value.
Optionally, the obtaining flat-field calibration coefficients FPN and PRNU includes:
executing an image acquisition process to obtain a dark field image and a bright field image;
calculating a flat field calibration coefficient through the dark field image and the bright field image;
performing flat field calibration on all pixel points using the formula Output = (Input - FPN) * PRNU, and then outputting the image; here, Input and Output represent the input data and output data of the image, respectively.
Optionally, the flat field calibration coefficients FPN and PRNU are calculated by the following formulas:
FPN = I_dark, PRNU = max(I_light - I_dark) / (I_light - I_dark)
wherein I_dark and I_light represent the dark field image and the bright field image, respectively, and max(·) represents the maximum operation.
Optionally, the executing the image collecting process, and the obtaining the dark field image and the bright field image includes:
respectively shooting at least three dark field images and at least three bright field images;
calculating the gray average value of all the shot dark field images to be used as the dark field image for executing the next step;
and calculating the gray level average value of all the shot bright field images to be used as the bright field image for executing the next step.
Optionally, the dark field image is acquired with the camera in a darkroom, the light source completely turned off, and the minimum exposure time; the bright field image is acquired by shooting a flat light source at normal exposure time so that the gray value of the image reaches 80% of the saturation value of the image.
According to the technical scheme, the method for detecting weak defects on an imaging chip provided by the application comprises: obtaining flat field calibration coefficients FPN and PRNU; storing the coefficients in the camera RAM; shooting the imaging chip to be detected and acquiring the collected image Img; performing gray stretching on the collected image Img; and comparing defects in the gray-stretched image to obtain the final defect detection image. The method first eliminates the photoelectric response non-uniformity of the imaging chip and the lens shading through flat field calibration, then acquires images while adjusting the angle between the camera imaging surface and the light source, which increases the gray deviation between weak defects and their surroundings in the original image, and finally amplifies that difference through gray stretching, so that the defects can be detected accurately.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; for those skilled in the art, other drawings can be derived from these drawings without creative effort.
FIG. 1 is a flow chart of a method for detecting weak defects on an imaging chip according to the present application;
FIG. 2 is a diagram illustrating an exploded step of step S10 in the method provided herein, in one embodiment;
FIG. 3 is a diagram illustrating an exploded step of step S11 in the method provided herein, in one embodiment;
FIG. 4 is a schematic diagram illustrating the response of the photosensitive unit in calculating the flat-field calibration coefficients according to the method provided by the present application;
FIG. 5 is a diagram illustrating an exploded step of step S30 in the method provided herein, in one embodiment;
fig. 6 and 7 are explanatory diagrams illustrating the processes of steps S311 to S313 in the method provided by the present application;
fig. 8 is an exploded step diagram of step S50 in the method provided by the present application, according to an embodiment.
Detailed Description
Referring to fig. 1, a flowchart of the method for detecting weak defects on an imaging chip according to the present application is shown.
As can be seen from fig. 1, an embodiment of the present application provides a method for detecting weak defects on an imaging chip, comprising:
S10: acquiring flat field calibration coefficients FPN and PRNU, where FPN denotes the fixed pattern noise value and PRNU denotes the photo-electric response non-uniformity coefficient. In this embodiment, step S10 performs one flat field calibration of the camera under test after its lens is mounted, so that when the camera subsequently shoots an object of uniform gray level, the gray values of the output image are substantially consistent. It calibrates the combined performance of camera and lens, removing the influence of the chip's photoelectric non-uniformity and of lens shading on image acquisition; after step S10, every later acquisition outputs flat-field-calibrated data, which makes the data of the subsequent steps more accurate.
Referring to fig. 2, an exploded step diagram of step S10 in the method provided by the present application, under one embodiment, is shown.
As shown in fig. 2, in a feasible embodiment, step S10 can be divided into the following three steps:
S11: executing an image acquisition process to obtain a dark field image and a bright field image. In this embodiment, the flat field calibration uses the two-point method, and the dark field and bright field images can be acquired in various ways. For example, in one feasible embodiment, the dark field image is acquired with the camera in a darkroom, the light source completely turned off, and the minimum exposure time; the bright field image is acquired by shooting a flat light source at normal exposure time so that the image gray value reaches 80% of the saturation value (with a saturation value of 255, the target brightness is about 204). Note that the flat light source should have a uniformity of at least 95% so that a high-quality bright field image is obtained.
Referring to fig. 3, an exploded step diagram of step S11 in the method provided by the present application under one embodiment is provided.
Further, as shown in fig. 3, in order to reduce the temporal noise of the image, in a preferred embodiment, step S11 can be subdivided into the following three sub-steps:
S111: shooting at least three dark field images and at least three bright field images. Although more shots suppress temporal-noise interference more effectively, every extra shot lowers throughput, so this embodiment preferably takes three of each.
S112: calculating the gray average value of all the shot dark field images to be used as the dark field image for executing the next step;
s113: and calculating the gray level average value of all the shot bright field images to be used as the bright field image for executing the next step.
Therefore, the purpose of acquiring several images and averaging them is to reduce temporal noise, so that the image response curve finally used for flat field calibration is more accurate and the processing precision is improved.
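The multi-shot averaging of steps S111 to S113 can be sketched as follows; this is an illustrative NumPy sketch, not code from the patent, and the function name and synthetic frames are our own.

```python
import numpy as np

def temporal_average(frames):
    # Pixel-wise mean of several frames of the same scene, used here to
    # suppress temporal (random) noise in the dark/bright field shots.
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Three simulated dark field shots that differ only by noise.
shots = [np.full((2, 2), 2.0), np.full((2, 2), 3.0), np.full((2, 2), 4.0)]
dark = temporal_average(shots)  # every pixel averages to 3.0
```

The same helper would serve for the bright field shots before the coefficients are computed.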
As shown in fig. 2, after the dark field image and the bright field image are acquired, step S12 is executed: calculating the flat field calibration coefficients from the dark field image and the bright field image. Since the response of each photosensitive unit of the image sensor is essentially linear, the unit's response can be represented as a straight line in a coordinate graph for the purpose of flat field calibration.
specifically, the flat-field calibration coefficients FPN and PRNU are calculated by the following formulas:
Figure BDA0001909924410000041
wherein, IdarkAnd IlightThe dark-field image and the bright-field image are represented, respectively, and max (-) represents the maximum operation.
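Under the two-point model described here, the coefficient computation and its per-pixel application can be sketched as below; an illustrative NumPy sketch with our own function names and sample arrays, not the patent's code.

```python
import numpy as np

def flat_field_coefficients(i_dark, i_light):
    # Two-point flat field calibration: FPN is the dark response (the
    # line intercept); PRNU is the per-pixel gain that equalizes the
    # bright field response (the line slope).
    fpn = i_dark.astype(np.float64)
    span = i_light.astype(np.float64) - fpn
    prnu = span.max() / span
    return fpn, prnu

def flat_field_correct(img, fpn, prnu):
    # Output = (Input - FPN) * PRNU, applied per pixel.
    return (img.astype(np.float64) - fpn) * prnu

i_dark = np.array([[1.0, 2.0], [0.0, 1.0]])
i_light = np.array([[101.0, 102.0], [50.0, 81.0]])
fpn, prnu = flat_field_coefficients(i_dark, i_light)
flat = flat_field_correct(i_light, fpn, prnu)  # uniform 100.0 everywhere
```

Correcting the bright field image with its own coefficients yields a uniform response, which is exactly the goal of the calibration.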
Referring to fig. 4, a response diagram of the photosensitive unit in the calculation of the flat field calibration coefficients according to the method provided by the present application is shown.
As can be seen from fig. 4, the fixed pattern noise value FPN characterizes the dark field behavior: without incident light, the gray value of a photosensitive unit should be 0, but in practice it may not be, and FPN adjusts the dark field gray value to 0; it corresponds to the intercept of the straight line in the figure. The photo-electric response non-uniformity coefficient PRNU corrects the response inconsistency between photosensitive units and corresponds to the slope of the line; under the effect of PRNU, the response slopes of all photosensitive units are equalized in flat field calibration. Since the response of every photosensitive unit must be linearized individually, each photosensitive unit has its own set of flat field calibration coefficients.
S13: performing flat field calibration on all pixel points using the formula Output = (Input - FPN) * PRNU, and then outputting the image; here, Input and Output represent the input data and output data of the image, respectively.
After the camera under test has been flat field calibrated once, step S20 is executed: storing the flat field calibration coefficients FPN and PRNU in the camera RAM. In each subsequent image acquisition, the output data of the image sensor is substituted as the Input of step S13, and the camera's final Output is used for defect detection.
S30: shooting the imaging chip to be detected and acquiring the collected image Img. Defects such as pits and scratches leave the protective layer (e.g. glass) on the chip surface uneven; when the chip is lit from different angles, this unevenness appears in the camera as gray value differences, the edge of a defect imaging darker than a normal position, so a defect can be identified by increasing the gray difference between the defect position and its surroundings. The collected images Img at different angles can be obtained in more than one way, for example by adjusting the position of the light source or of the camera; in one feasible embodiment the position of the light source is adjusted, so that the camera remains fixed.
referring to fig. 5, an exploded step diagram of step S30 in one embodiment of the method provided herein is shown;
as can be seen from fig. 5, the method of step S30 can be implemented by step S31: the step S31 can be further divided into the following steps:
s311: fixing the camera on a horizontal table, adjusting the light emitting surface of the light source to enable the included angle between the light emitting surface of the light source and the imaging surface of the camera to be 45 degrees, and collecting an image Img 1;
s312: rotating the light source around the camera, respectively adjusting the included angle between the light emitting surface of the light source and the imaging surface of the camera to be 45 degrees when the light source rotates by 90 degrees, and sequentially collecting images Img2, Img3 and Img 4;
s313: calculating an acquired image Img; wherein Img ═ (Img1+ Img2+ Img3+ Img 4)/4; specifically, the gray value of each corresponding pixel point in the four acquired images is averaged, and then the pixel points with the gray value average value form the acquired image Img.
The operation of steps S311 to S313 can be illustrated by figs. 6 and 7. Before acquiring the image Img1, the camera is fixed stably on a horizontal camera table with its imaging surface facing upward, and the position of the light source is adjusted so that its light-emitting surface makes a 45-degree angle with the camera imaging surface, as shown in fig. 6. After Img1 is acquired, the light source is rotated from light source position 1 to light source position 2, the 45-degree angle between the light-emitting surface and the camera imaging surface is maintained, and the image Img2 is acquired; and so on, until images from all four positions have been obtained after a full rotation. Note that during the rotation, light source positions 1, 2, 3 and 4 should be kept on the same horizontal plane as far as possible, so that the distance between the light source and the camera remains unchanged and excessive gray value differences between the acquired images do not degrade the detection accuracy.
Further, in order to increase the detection accuracy, in a preferred example, each of the collected images Img1, Img2, Img3 and Img4 is itself the average of three temporally consecutive shots, which effectively reduces the interference of random noise on the detection result.
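The acquisition of Img in steps S311 to S313, including the per-angle averaging of three consecutive frames, might be sketched as follows; the helper name and the synthetic constant frames are assumptions of this illustrative sketch.

```python
import numpy as np

def acquire_img(angle_shots):
    # angle_shots: four lists of three consecutive frames, one list per
    # light source position. Each Img1..Img4 is the mean of its three
    # frames; Img = (Img1 + Img2 + Img3 + Img4) / 4.
    angle_imgs = [np.mean(np.stack(shots), axis=0) for shots in angle_shots]
    return np.mean(np.stack(angle_imgs), axis=0)

# Synthetic example: each angle contributes a constant image.
shots = [[np.full((2, 2), float(v))] * 3 for v in (10, 20, 30, 40)]
img = acquire_img(shots)  # every pixel averages to 25.0
```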
S40: carrying out gray stretching on the collected image Img; in this embodiment, the purpose of performing gray stretching on the image Img is to enlarge the slight gray difference of the defect in the image, and by the gray stretching, the smallest gray value in the original image is assigned to 0, and the largest gray value is assigned to 255, so that the gray value difference can be observed more intuitively.
Specifically, the gray stretching of the collected image Img is calculated using the following formula:
new_Img = (Img - min(Img)) / (max(Img) - min(Img)) * 255
here, max(·) represents the maximum value calculation, min(·) represents the minimum value calculation, and new_Img represents the image after gray stretching.
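The gray stretch of step S40, which maps the minimum gray value to 0 and the maximum to 255, can be sketched as below; an illustrative NumPy sketch, not the patent's code.

```python
import numpy as np

def gray_stretch(img):
    # Linear gray stretch: min(Img) -> 0, max(Img) -> 255.
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * 255.0

img = np.array([[10.0, 20.0], [30.0, 10.0]])
stretched = gray_stretch(img)  # [[0.0, 127.5], [255.0, 0.0]]
```

Because the stretch is linear, a small gray deviation around a defect grows in proportion to 255 divided by the original dynamic range.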
S50: and carrying out defect detection according to the image after gray stretching to obtain a final defect detection image. There are various defect detection methods, and the specific method is not limited in this embodiment;
referring to fig. 8, an exploded step diagram of step S50 in one embodiment of the method provided herein is shown;
as can be seen from fig. 8, in a possible embodiment, step S50 can be decomposed as:
s51: calculating the average value of the gray values of the image after gray stretching;
s52: comparing the deviation of the gray value of each pixel point in the image after gray stretching with the average value point by point; if the deviation is larger than 8% of the mean value of the stretched image, the point is considered as a defect point, and the gray value is marked as 255; if the deviation is less than or equal to 8% of the image mean value, the point is considered as a normal point, and the gray value is marked as 0; therefore, in the image marked by the gray value, the defect point is more visually highlighted, and the defect position is favorably and quickly locked.
S53: and forming a final defect detection image by all the pixel points marked by the gray value.
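Steps S51 to S53 amount to a relative threshold against the image mean; a minimal sketch, assuming the 8% threshold stated above (the function name and sample data are ours):

```python
import numpy as np

def mark_defects(stretched, rel_threshold=0.08):
    # Mark pixels whose gray value deviates from the image mean by more
    # than 8% of that mean: defect -> 255, normal -> 0.
    mean = stretched.mean()
    deviation = np.abs(stretched - mean)
    return np.where(deviation > rel_threshold * mean, 255, 0).astype(np.uint8)

stretched = np.array([[100.0, 100.0], [100.0, 120.0]])
mask = mark_defects(stretched)  # only the 120.0 pixel is marked as 255
```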
According to the technical scheme, the method for detecting weak defects on an imaging chip provided by the application comprises: obtaining flat field calibration coefficients FPN and PRNU; storing the coefficients in the camera RAM; shooting the imaging chip to be detected and acquiring the collected image Img; performing gray stretching on the collected image Img; and comparing defects in the gray-stretched image to obtain the final defect detection image. The method first eliminates the photoelectric response non-uniformity of the imaging chip and the lens shading through flat field calibration, then acquires images while adjusting the angle between the camera imaging surface and the light source, which increases the gray deviation between weak defects and their surroundings in the original image, and finally amplifies that difference through gray stretching, so that the defects can be detected accurately.
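Putting the steps together, an end-to-end sketch of the method might look like this; illustrative only: the coefficients follow the two-point formulas, the physical acquisition is replaced by synthetic arrays, and all names are ours.

```python
import numpy as np

def inspect_chip(angle_imgs, fpn, prnu, rel_threshold=0.08):
    # Flat field correct each angle shot (S20), average into Img (S30),
    # gray stretch (S40), then threshold against the mean (S50).
    corrected = [(i.astype(np.float64) - fpn) * prnu for i in angle_imgs]
    img = np.mean(corrected, axis=0)
    lo, hi = img.min(), img.max()
    stretched = (img - lo) / (hi - lo) * 255.0
    mean = stretched.mean()
    return np.where(np.abs(stretched - mean) > rel_threshold * mean, 255, 0)

# Synthetic chip: uniform background with one pit and one bright defect.
base = np.full((10, 10), 110.0)
base[0, 0] = 100.0   # hypothetical pit (dark defect)
base[9, 9] = 150.0   # hypothetical bright defect
mask = inspect_chip([base] * 4, np.zeros((10, 10)), np.ones((10, 10)))
```

With identity calibration coefficients, only the two outlier pixels deviate from the stretched mean by more than the relative threshold.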
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (9)

1. A method for detecting weak defects on an imaging chip, the method comprising:
acquiring flat field calibration coefficients FPN and PRNU;
storing flat field calibration coefficients FPN and PRNU in a camera RAM;
using the camera in which the flat field calibration coefficients FPN and PRNU are stored, and the imaging chip to be detected, acquiring a collected image Img obtained by supplementary-light shooting of the camera at different angles; the acquiring of the collected image Img obtained by supplementary-light shooting at different angles comprises: adjusting the angle between the camera imaging surface and the light-emitting surface of the light source to acquire images;
carrying out gray stretching on the collected image Img;
and comparing the defects according to the image after the gray stretching to obtain a final defect detection image.
2. The method of claim 1, wherein the adjusting the angle between the imaging surface of the camera and the light emitting surface of the light source to capture the image comprises:
fixing the camera on a horizontal table, adjusting the light-emitting surface of the light source so that it makes a 45-degree angle with the camera imaging surface, and collecting an image Img1;
rotating the light source around the camera in 90-degree steps, keeping the 45-degree angle between the light-emitting surface and the camera imaging surface at each position, and sequentially collecting images Img2, Img3 and Img4;
calculating the collected image Img, where Img = (Img1 + Img2 + Img3 + Img4)/4.
3. The method for detecting weak defects on an imaging chip as claimed in claim 2, wherein the captured images Img1, Img2, Img3 and Img4 are obtained by averaging three images continuously captured in time, respectively.
4. The method for detecting weak defects on an imaging chip as claimed in claim 1, wherein the gray scale stretching of the captured image Img is calculated by using the following formula:
new_Img = (Img - min(Img)) / (max(Img) - min(Img)) * 255
here, max(·) represents the maximum value calculation, min(·) represents the minimum value calculation, and new_Img represents the image after gray stretching.
5. The method for detecting weak defects on an imaging chip according to claim 1, wherein the step of comparing the defects according to the gray-scale stretched image to obtain a final defect detection image comprises:
calculating the average value of the gray values of the image after gray stretching;
comparing, point by point, the deviation of the gray value of each pixel in the gray-stretched image from the average value; if the deviation is greater than 8% of the mean of the stretched image, the point is considered a defect point and its gray value is marked as 255; if the deviation is less than or equal to 8% of the image mean, the point is considered a normal point and its gray value is marked as 0;
and forming a final defect detection image by all the pixel points marked by the gray value.
6. The method of claim 1, wherein the obtaining of flat field calibration coefficients FPN and PRNU comprises:
executing an image acquisition process to obtain a dark field image and a bright field image;
calculating a flat field calibration coefficient through the dark field image and the bright field image;
performing flat field calibration on all pixel points using the formula Output = (Input - FPN) * PRNU, and then outputting the image; here, Input and Output represent the input data and output data of the image, respectively.
7. The method of claim 6, wherein the flat field calibration coefficients FPN and PRNU are calculated by the following equations:
FPN = I_dark, PRNU = max(I_light - I_dark) / (I_light - I_dark)
wherein I_dark and I_light represent the dark field image and the bright field image, respectively, and max(·) represents the maximum operation.
8. The method as claimed in claim 6, wherein said performing an image acquisition process to obtain dark-field and bright-field images comprises:
respectively shooting at least three dark field images and at least three bright field images;
calculating the gray average value of all the shot dark field images to be used as the dark field image for executing the next step;
and calculating the gray level average value of all the shot bright field images to be used as the bright field image for executing the next step.
9. The method for detecting weak defects on an imaging chip as claimed in claim 6, wherein the dark field pattern is an image captured when the camera is in a dark room, the light source is turned off and the exposure time is minimum; the bright field image is an image acquired when the camera shoots the flat light source in normal exposure time and the gray value of the image reaches 80% of the saturation value of the image.
CN201811548239.7A 2018-12-18 2018-12-18 Method for detecting weak defects on imaging chip Active CN109525840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811548239.7A CN109525840B (en) 2018-12-18 2018-12-18 Method for detecting weak defects on imaging chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811548239.7A CN109525840B (en) 2018-12-18 2018-12-18 Method for detecting weak defects on imaging chip

Publications (2)

Publication Number Publication Date
CN109525840A CN109525840A (en) 2019-03-26
CN109525840B true CN109525840B (en) 2021-04-09

Family

ID=65796111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811548239.7A Active CN109525840B (en) 2018-12-18 2018-12-18 Method for detecting weak defects on imaging chip

Country Status (1)

Country Link
CN (1) CN109525840B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110166648A (en) * 2019-06-06 2019-08-23 杭州国翌科技有限公司 A kind of camera detection locking means and device based on optical imagery
CN110996095B (en) * 2019-12-03 2021-09-14 哈尔滨工程大学 Multiplication CCD multiplication gain fitting measurement method
CN111141746B (en) * 2020-02-10 2022-07-15 上海工程技术大学 Method and system for automatically detecting length of refill tail oil
CN113447485A (en) * 2020-03-26 2021-09-28 捷普电子(新加坡)公司 Optical detection method
CN114723651A (en) * 2020-12-22 2022-07-08 东方晶源微电子科技(北京)有限公司 Defect detection model training method, defect detection method, device and equipment
CN113379835B (en) * 2021-06-29 2024-06-04 深圳中科飞测科技股份有限公司 Calibration method, device and equipment of detection equipment and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206725466U (en) * 2017-04-20 2017-12-08 图麟信息科技(上海)有限公司 Cover-plate glass defect detecting device based on multi-angle combination dark field imaging

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819162A (en) * 2010-05-13 2010-09-01 山东大学 Empty bottle wall defect detection method and device
JP5676419B2 (en) * 2011-11-24 2015-02-25 株式会社日立ハイテクノロジーズ Defect inspection method and apparatus
CN104101611A (en) * 2014-06-06 2014-10-15 华南理工大学 Mirror-like object surface optical imaging device and imaging method thereof
CN104730079B (en) * 2015-03-10 2018-09-07 盐城市圣泰阀门有限公司 Defect detecting system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206725466U (en) * 2017-04-20 2017-12-08 图麟信息科技(上海)有限公司 Cover-plate glass defect detecting device based on multi-angle combination dark field imaging

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DALSA camera flat-field correction procedure; Machine Vision 001; URL: https://blog.csdn.net/liubing8609/article/details/42386747; 2015-01-04; full text *
Fully automatic machine-vision-based automobile parts sorting system; Shen Wei, Pang Quan, Fan Yingle, et al.; Instrument Technique and Sensor; 2009-09-15 (No. 9); pp. 1-4 *

Also Published As

Publication number Publication date
CN109525840A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN109525840B (en) Method for detecting weak defects on imaging chip
WO2020253827A1 (en) Method and apparatus for evaluating image acquisition accuracy, and electronic device and storage medium
CN105651203B (en) A kind of high dynamic range 3 D measuring method of adaptive striped brightness
CN108088845B (en) Imaging correction method and device based on weak information retention
US20160234489A1 (en) Method for measuring performance parameters and detecting bad pixels of an infrared focal plane array module
CN109856164B (en) Optimization device for acquiring large-range images by machine vision and detection method thereof
WO2009147821A1 (en) Resin material detection testing device and memory recording medium
CN110570411A (en) mura detection method and device based on coefficient of variation
CN112033965A (en) 3D arc surface defect detection method based on differential image analysis
CN111025701B (en) Curved surface liquid crystal screen detection method
WO2015158024A1 (en) Image processing method and apparatus, and automatic optical detector
US8481918B2 (en) System and method for improving the quality of thermal images
CN107833223B (en) Fruit hyperspectral image segmentation method based on spectral information
CN116934833A (en) Binocular vision-based underwater structure disease detection method, equipment and medium
CN116668831A (en) Consistency adjusting method and device for multi-camera system
CN107454388B (en) Image processing method and apparatus using the same
JP2012028987A (en) Image processing apparatus
CN115022610A (en) Flat field correction method for linear array camera
CN114964032A (en) Blind hole depth measuring method and device based on machine vision
CN109186941A (en) A kind of detection method and system of light source uniformity
CN111062887B (en) Image definition judging method based on improved Retinex algorithm
JP2012059213A (en) Binarization processing method and image processing apparatus
TWI818715B (en) A method for visual inspection of curved objects
Bedrich et al. Electroluminescence imaging of pv devices: Determining the image quality
CN116634285B (en) Automatic white balance method of linear array camera for raw material detection equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant after: Lingyunguang Technology Co.,Ltd.

Address before: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant before: Beijing lingyunguang Technology Group Co.,Ltd.

Address after: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant after: Beijing lingyunguang Technology Group Co.,Ltd.

Address before: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant before: LUSTER LIGHTTECH GROUP Co.,Ltd.

GR01 Patent grant