CN111292344A - Detection method of camera module - Google Patents
- Publication number
- CN111292344A (application number CN201811494065.0A)
- Authority
- CN
- China
- Prior art keywords
- camera module
- center
- image
- photosensitive element
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 7/13: Image analysis; segmentation; edge detection
- G06T 7/66: Image analysis; analysis of geometric attributes of image moments or centre of gravity
- H04N 17/002: Diagnosis, testing or measuring for television cameras
Abstract
The invention provides a detection method for a camera module having a camera lens and a photosensitive element, comprising the following steps: (A) acquiring an original image using the camera lens and the photosensitive element; (B) converting the original image into a gray scale image; (C) converting the gray scale image into a binary image according to a threshold gray value; (D) obtaining a boundary contour of the pixels in the binary image whose gray values are greater than or equal to the threshold gray value; (E) obtaining a contour center of the boundary contour; and (F) judging, according to the imaging center of the photosensitive element and the contour center, whether the optical axis of the camera lens is aligned with the imaging center of the photosensitive element.
Description
Technical Field
The invention relates to the field of optics, in particular to a detection method of a camera module.
Background
In recent years, with the evolution of the electronics industry and the rapid development of industrial technology, electronic devices have increasingly been designed to be portable and easy to carry, so that users can apply them to mobile commerce, entertainment, leisure and other purposes anytime and anywhere. Camera modules in particular are widely applied in many fields. In portable electronic devices such as smart phones and wearable devices, their small size and portability allow people to capture and store images at any time, or further upload them to the internet through a mobile network; this has important commercial value and makes everyday life more colorful. Camera modules are also widely used beyond portable electronics, for example in vehicle electronics, where safety is paramount.
Please refer to fig. 1, which is a conceptual diagram of a conventional camera module. The camera module 1 includes a camera lens 11 and a photosensitive element 12. The photosensitive element 12 senses the external light beam that passes through the camera lens 11 and is projected onto it, thereby obtaining an image. Whether the optical axis 111 of the camera lens 11 is aligned with the imaging center 121 of the photosensitive element 12 is critical to the imaging quality of the camera module 1. How to effectively detect, during the production and assembly of the camera module 1, whether the optical axis 111 of the camera lens 11 is aligned with the imaging center 121 of the photosensitive element 12 has therefore become an issue worth studying.
Disclosure of Invention
The present invention provides a detection method for a camera module, and more particularly a method for detecting whether the optical axis of a camera lens is aligned with the imaging center of a photosensitive element.
In a preferred embodiment, the present invention provides a method for detecting a camera module, which is applied to a camera module having a camera lens and a photosensitive element, and includes:
(A) acquiring an original image by using the camera lens and the photosensitive element;
(B) converting the original image into a gray scale image;
(C) converting the gray scale image into a binary image according to a threshold gray value;
(D) obtaining a boundary contour of the pixels in the binary image that are greater than or equal to the threshold gray value;
(E) obtaining a contour center of the boundary contour; and
(F) judging whether an optical axis of the camera lens is aligned with an imaging center of the photosensitive element according to the imaging center of the photosensitive element and the contour center.
Drawings
FIG. 1: is a conceptual diagram of a conventional camera module.
FIG. 2: is a preferred method flowchart of the detection method of the camera module of the present invention.
FIG. 3: is a conceptual diagram of the original image acquired through step S1 shown in fig. 2.
FIG. 4: is a conceptual diagram illustrating the gray scale image obtained from the original image shown in fig. 3 through the step S2 shown in fig. 2.
FIG. 5: is a preferred execution flow chart of step S3 shown in fig. 2.
FIG. 6A: is a conceptual diagram of each gray scale value and the corresponding number of pixels in the gray scale image shown in fig. 4.
FIG. 6B: is a preferred cumulative distribution function plot for the gray scale image shown in fig. 4.
FIG. 7: is a conceptual diagram of converting the gray scale image shown in fig. 4 into a binary image through steps S31 and S32 shown in fig. 5.
FIG. 8: is a conceptual diagram of obtaining the boundary contour from the binary image shown in fig. 7 through step S4 shown in fig. 2.
FIG. 9: is a conceptual diagram of the contour center obtained from the boundary contour shown in fig. 8 through step S5 shown in fig. 2.
Wherein the reference numerals are as follows:
1 Camera Module
2 original image
3 Gray scale image
4 binary image
11 camera lens
12 photosensitive element
41 boundary profile
42 optical circle
43 center of contour
44 center of binary image
111 optical axis
121 center of imaging
Step S1
Step S2
Step S3
Step S4
Step S5
Step S6
Step S31
Step S32
Detailed Description
First, it should be noted that the detection method of the present disclosure can be applied to the camera module 1 shown in fig. 1, on a production line of the camera module 1. In general, the light density on the photosensitive element 12 is higher the closer a point is to the optical axis 111 of the camera lens 11. The detection method therefore uses the luminance values (intensity) of the pixels on the photosensitive element 12 to find the position on the photosensitive element 12 corresponding to the optical axis 111 of the camera lens 11 (the optical center), and then compares the distance between that position and the imaging center 121 of the photosensitive element 12 to determine whether the optical axis 111 of the camera lens 11 is aligned with the imaging center 121 of the photosensitive element 12. In a preferred embodiment, the photosensitive element 12 can be a Complementary Metal-Oxide-Semiconductor (CMOS) sensor or a Charge Coupled Device (CCD), and the imaging center 121 is located at the center of the photosensitive element 12, but the disclosure is not limited thereto.
Please refer to fig. 2, which is a flowchart of a preferred embodiment of the detection method of the camera module according to the present invention. The detection method comprises the following steps. Step S1: acquire an original image using the camera lens and the photosensitive element. Step S2: convert the original image into a gray scale image. Step S3: convert the gray scale image into a binary image according to a threshold gray value. Step S4: obtain a boundary contour of the pixels in the binary image that are greater than or equal to the threshold gray value. Step S5: obtain the contour center of the boundary contour. Step S6: determine whether the optical axis of the camera lens is aligned with the imaging center of the photosensitive element according to the imaging center of the photosensitive element and the contour center of the boundary contour.
The following describes steps S1 to S6 with a preferred embodiment, with reference to figs. 3 to 9. Fig. 3 shows the original image 2 obtained in step S1 shown in fig. 2, which may be an RGB color image or a CMYK color image. Fig. 4 shows the gray scale image 3 obtained from the original image shown in fig. 3 through step S2 shown in fig. 2; each pixel in the gray scale image 3 is represented by a gray value from 0 to 255, with different gray values representing different brightness.
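As a minimal sketch of step S2 (the patent does not specify the RGB-to-gray conversion formula; the ITU-R BT.601 luma weights are assumed here for illustration):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to an 8-bit gray scale image.

    Uses the ITU-R BT.601 luma weights; the patent does not fix a
    particular conversion, so these weights are an assumption.
    """
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb[..., :3].astype(np.float64) @ weights
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

# A 1x2 demo image: one pure-white pixel, one pure-black pixel.
demo = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
print(to_grayscale(demo).tolist())  # [[255, 0]]
```

Any standard luma formula would serve here; the only property the method relies on is that brighter pixels map to higher gray values.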
Furthermore, fig. 5 illustrates a preferred execution flowchart of step S3 shown in fig. 2, which includes: step S31, obtaining the threshold gray value corresponding to a specific coverage rate using a Cumulative Distribution Function (CDF); and step S32, classifying each pixel in the gray scale image that is greater than or equal to (≧) the threshold gray value as a high-luminance pixel and each pixel that is less than (<) the threshold gray value as a low-luminance pixel, so as to binarize the gray scale image.
Further, referring to figs. 6A and 6B, fig. 6A illustrates the number of pixels (vertical axis) corresponding to each gray value (horizontal axis) in the gray scale image, and the cumulative distribution function shown in fig. 6B is obtained by performing step S31. The cumulative distribution function is defined as F_X(x) = P(X ≤ x), where P is the coverage rate, x is the gray value, and X is a random variable. In the preferred embodiment the specific coverage rate is set to 0.4, although practical applications are not limited thereto; as shown in fig. 6B, the threshold gray value corresponding to the coverage rate 0.4 is 120, i.e., in the preferred embodiment the threshold gray value 120 is obtained through step S31 shown in fig. 5.
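Step S31 can be sketched as follows. The histogram-based CDF and the 0.4 coverage rate follow the preferred embodiment; everything else (function name, return convention) is an illustrative assumption:

```python
import numpy as np

def threshold_from_cdf(gray: np.ndarray, coverage: float = 0.4) -> int:
    """Return the smallest gray value t with F_X(t) = P(X <= t) >= coverage.

    Step S31: build the cumulative distribution function of the gray
    levels and pick the threshold at the given coverage rate (0.4 in the
    patent's preferred embodiment).
    """
    hist = np.bincount(gray.ravel(), minlength=256)  # pixel count per gray level
    cdf = np.cumsum(hist) / gray.size                # cumulative fraction of pixels
    return int(np.searchsorted(cdf, coverage))       # first level reaching coverage

# Synthetic image: 40% of pixels at gray value 10, 60% at gray value 200,
# so coverage 0.4 is first reached at gray value 10.
img = np.array([10] * 40 + [200] * 60, dtype=np.uint8)
print(threshold_from_cdf(img, 0.4))  # 10
```

The concrete threshold (120 in the patent's fig. 6B) depends entirely on the histogram of the captured image; the synthetic data above is only for demonstration.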
In addition, the cumulative distribution function is the integral of the probability density function and completely describes the probability distribution of a random variable X; this is well known to those skilled in the art and is not described further here.
In the present preferred embodiment, step S32 converts the gray scale image 3 shown in fig. 4 into the binary image 4 shown in fig. 7 by setting each pixel in the gray scale image 3 that is greater than or equal to (≧) the threshold gray value 120 to the maximum gray value (e.g., 255) and each pixel that is less than (<) the threshold gray value 120 to the minimum gray value (e.g., 0). For clarity, the pixels in the binary image 4 shown in fig. 7 that are set to the maximum gray value (e.g., 255) are indicated by black dots.
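Step S32 reduces to a single comparison per pixel; a minimal sketch (the patent fixes only the comparison rule, not an implementation):

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int) -> np.ndarray:
    """Step S32: pixels >= threshold become 255 (high-luminance),
    pixels < threshold become 0 (low-luminance)."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

# With threshold 120, values 120 and above go to 255, the rest to 0.
gray = np.array([[0, 119, 120], [121, 200, 255]], dtype=np.uint8)
print(binarize(gray, 120).tolist())  # [[0, 0, 255], [255, 255, 255]]
```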
Please refer to fig. 8, which shows the boundary contour obtained from the binary image shown in fig. 7 through step S4 shown in fig. 2. In the preferred embodiment, step S4 obtains the boundary contour 41 of the pixels set to the maximum gray value (e.g., 255) in the binary image 4 by using an Active Contour Model. The active contour model, also called "snakes", is a framework for extracting object contour lines from a possibly noisy two-dimensional image; it is well known in the art and is not described further here. Of course, the present disclosure is not limited to obtaining the boundary contour with an active contour model, and those skilled in the art can make any equivalent design changes according to actual application requirements.
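The patent obtains the boundary contour with an active contour model, but as noted above any equivalent boundary extraction may be substituted. The stand-in sketched below is an assumption-level simplification, not the snakes algorithm: it keeps each high-luminance pixel that has at least one low-luminance 4-neighbour, which for a clean binary image yields the same rim of the bright region:

```python
import numpy as np

def boundary_pixels(binary: np.ndarray) -> np.ndarray:
    """Return an (N, 2) array of (row, col) boundary points of the
    high-luminance region of a binary image.

    Simplified stand-in for step S4: a foreground pixel is interior only
    if all four 4-neighbours are also foreground; everything else on the
    foreground is boundary.
    """
    fg = binary > 0
    padded = np.pad(fg, 1, constant_values=False)
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &   # up and down neighbours
        padded[1:-1, :-2] & padded[1:-1, 2:]     # left and right neighbours
    )
    return np.argwhere(fg & ~interior)

# A 5x5 image with a filled 3x3 square: its 8 rim pixels form the boundary.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 255
print(len(boundary_pixels(img)))  # 8
```

On noisy real images an active contour model (or a contour-following routine) would be preferable, since the morphological rim is sensitive to isolated bright pixels.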
Please refer to fig. 9, which illustrates the contour center obtained from the boundary contour shown in fig. 8 through step S5 shown in fig. 2. In the preferred embodiment, step S5 obtains the optical circle 42 fitting the boundary contour 41 using an Ellipse Fitting Algorithm and takes the center of the optical circle 42 as the contour center 43. Since the size of the binary image 4 corresponds to the size of the photosensitive element 12, the contour center 43 obtained in step S5 represents the position on the photosensitive element 12 corresponding to the optical axis 111 of the camera lens 11 (the optical center). Ellipse fitting algorithms are likewise well known to those skilled in the art and are not described further here. Of course, the disclosure is not limited to using an ellipse fitting algorithm to obtain the contour center of the boundary contour, and those skilled in the art can make any equivalent design changes according to actual application requirements.
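Step S5 fits the boundary contour and takes the fitted center. As a hedged simplification, the sketch below substitutes a least-squares circle fit (the Kåsa method) for a full ellipse fit; for a near-circular optical spot the fitted center serves as the contour center 43:

```python
import numpy as np

def fit_circle_center(points: np.ndarray) -> tuple:
    """Least-squares (Kasa) circle fit to (row, col) boundary points.

    The patent fits an ellipse; a circle fit is a simpler stand-in that
    gives the same center for a circular optical spot. It solves
    x^2 + y^2 + D*x + E*y + F = 0 for D, E, F, where the circle center
    is (col, row) = (-D/2, -E/2).
    """
    y, x = points[:, 0].astype(float), points[:, 1].astype(float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (-E / 2, -D / 2)  # (row, col) of the fitted center

# Points sampled exactly on a circle centered at (row=10, col=20), radius 5.
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
pts = np.column_stack([10 + 5 * np.sin(t), 20 + 5 * np.cos(t)])
cy, cx = fit_circle_center(pts)
print(round(cy, 3), round(cx, 3))  # 10.0 20.0
```

If the spot is visibly elongated (e.g. due to lens tilt), a proper conic/ellipse fit should replace the circle model, as the patent specifies.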
Likewise, since the size of the binary image 4 corresponds to the size of the photosensitive element 12, the center 44 of the binary image 4 represents the imaging center 121 of the photosensitive element 12. In the preferred embodiment, when the distance between the center 44 of the binary image 4 (representing the imaging center 121 of the photosensitive element 12) and the contour center 43 of the boundary contour 41 (representing the position on the photosensitive element 12 corresponding to the optical axis 111 of the camera lens 11) is within a predetermined distance, the imaging center 121 is deemed to overlap or be adjacent to the position corresponding to the optical axis 111, and the optical axis 111 of the camera lens 11 is judged to be aligned with the imaging center 121 of the photosensitive element 12. Conversely, when that distance exceeds the predetermined distance, the optical axis 111 of the camera lens 11 is judged not to be aligned with the imaging center 121 of the photosensitive element 12, and the camera lens 11 and the photosensitive element 12 need to be reassembled or corrected.
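The alignment decision of step S6 can be sketched as follows. The predetermined distance is process-specific; the `tolerance` value used below is only an illustrative assumption:

```python
import numpy as np

def is_axis_aligned(image_shape, contour_center, tolerance: float) -> bool:
    """Step S6: the optical axis is deemed aligned with the imaging center
    when the contour center lies within `tolerance` pixels of the binary
    image's geometric center (which stands in for the imaging center 121).
    """
    h, w = image_shape
    center = np.array([(h - 1) / 2, (w - 1) / 2])  # (row, col) image center
    dist = float(np.linalg.norm(np.asarray(contour_center) - center))
    return dist <= tolerance

# 480x640 image: its center is at (239.5, 319.5).
print(is_axis_aligned((480, 640), (239.5, 319.5), tolerance=2.0))  # True
print(is_axis_aligned((480, 640), (250.0, 340.0), tolerance=2.0))  # False
```

A failing check on the production line would trigger reassembly or correction of the lens and sensor, as the description above states.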
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; other equivalent changes or modifications that do not depart from the spirit of the present invention are intended to fall within the appended claims.
Claims (9)
1. A detection method of a camera module, applied to a camera module having a camera lens and a photosensitive element, comprising the following steps:
(A) acquiring an original image by using the camera lens and the photosensitive element;
(B) converting the original image into a gray scale image;
(C) converting the gray scale image into a binary image according to a threshold gray value;
(D) obtaining a boundary contour of the pixels in the binary image that are greater than or equal to the threshold gray value;
(E) obtaining a contour center of the boundary contour; and
(F) judging whether an optical axis of the camera lens is aligned with an imaging center of the photosensitive element according to the imaging center of the photosensitive element and the contour center.
2. The method for detecting a camera module according to claim 1, wherein the step (C) comprises:
(C1) obtaining the threshold gray value corresponding to a specific coverage rate using a Cumulative Distribution Function (CDF).
3. The method for detecting a camera module according to claim 2, wherein the specific coverage rate is 0.4.
4. The method for detecting a camera module according to claim 2, wherein the step (C) further comprises:
(C2) classifying each pixel in the gray scale image that is greater than or equal to (≧) the threshold gray value as a high-luminance pixel, and each pixel in the gray scale image that is less than (<) the threshold gray value as a low-luminance pixel.
5. The method for detecting a camera module according to claim 1, wherein the step (D) comprises:
the boundary contour is obtained using an Active Contour Model.
6. The method for detecting a camera module according to claim 1, wherein the step (E) comprises:
an Ellipse Fitting Algorithm is used to obtain an optical circle fitting the boundary contour, and a center of the optical circle is used as the contour center.
7. The method according to claim 1, wherein in step (F), when the imaging center overlaps or is adjacent to the contour center, the optical axis is determined to be aligned with the imaging center.
8. The method for detecting a camera module according to claim 1, which is applied to a production line of the camera module.
9. The method as claimed in claim 1, wherein the photosensitive element is a Complementary Metal-Oxide-Semiconductor (CMOS) or a Charge Coupled Device (CCD).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811494065.0A CN111292344A (en) | 2018-12-07 | 2018-12-07 | Detection method of camera module |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111292344A true CN111292344A (en) | 2020-06-16 |
Family
ID=71023078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811494065.0A Pending CN111292344A (en) | 2018-12-07 | 2018-12-07 | Detection method of camera module |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111292344A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678739A (en) * | 2015-12-29 | 2016-06-15 | 中国兵器科学研究院宁波分院 | Resolution test method for three-dimensional image of cone beam CT system |
WO2017080441A1 (en) * | 2015-11-09 | 2017-05-18 | 宁波舜宇光电信息有限公司 | Method for finding optical centre of lens, device for selecting shadow computation region for lens and testing surround view video-shooting module, method for testing white balance of surround view video-shooting module, and wide-angle integrating sphere |
EP3264361A1 (en) * | 2015-02-13 | 2018-01-03 | BYD Company Limited | Method and device for calculating ridge distance |
CN108683907A (en) * | 2018-05-31 | 2018-10-19 | 歌尔股份有限公司 | Optics module picture element flaw detection method, device and equipment |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- WD01: Invention patent application deemed withdrawn after publication (application publication date: 20200616)