CN110800294A - Method, equipment and system for detecting camera module and machine-readable storage medium - Google Patents


Publication number
CN110800294A
CN110800294A (application CN201880039292.6A)
Authority
CN
China
Prior art keywords: camera module, threshold, image, dirty, value
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880039292.6A
Other languages
Chinese (zh)
Inventor
赵超
常坚
任伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Shenzhen DJ Innovation Industry Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN110800294A
Legal status: Pending

Classifications

    • H04N17/00 — Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 — Diagnosis, testing or measuring for television cameras
    • G06T7/00 — Image analysis
    • H04N5/222 — Studio circuitry; studio devices; studio equipment
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects


Abstract

A method, a device, a system, and a machine-readable storage medium for detecting a camera module are provided. The method comprises: acquiring a specific image captured by the camera module; acquiring the brightness distribution and/or shape features of the specific image; and determining, according to the brightness distribution and/or the shape features, whether the camera module is dirty. Because the method requires no manual inspection, the detection result is independent of subjective judgment, which improves its accuracy. In addition, the method determines the detection result far faster than manual inspection, improving the efficiency of detecting contamination of the camera module.

Description

Method, equipment and system for detecting camera module and machine-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a system, and a machine-readable storage medium for detecting a camera module.
Background
Most existing electronic devices are equipped with a camera module. During assembly, foreign matter such as dust or hair may fall onto the photosensitive chip, or an uneven or greasy mirror surface may contaminate the image, so the camera module needs to be inspected for contamination before deployment.
At present, contamination detection of camera modules is done manually: inspectors use a microscope to examine the mirror surface and other parts of the camera module and judge whether contamination exists. However, this approach has low detection efficiency, and manual judgment is highly subjective, so the detection results contain large errors.
Disclosure of Invention
The invention provides a method, equipment and a system for detecting a camera module and a machine-readable storage medium.
According to a first aspect of the present invention, there is provided a method for detecting a camera module, including:
acquiring a specific image shot by a camera module;
acquiring the brightness distribution and/or the shape characteristic of the specific image;
and judging whether the camera module is dirty or not according to the brightness distribution and/or the shape characteristics.
Optionally, the specific image includes M specific shapes, where M is a natural number.
Optionally, the specific shape includes at least one of a rectangle, a triangle, and a circle.
Optionally, the brightness distribution is a parameter value of a designated parameter of each pixel point in the specific image, where the designated parameter at least includes a brightness value or a gray value.
Optionally, the shape feature is an amount of deformation of the particular shape.
Optionally, the determining whether the camera module is dirty according to the brightness distribution includes:
acquiring a first threshold value and a second threshold value of the designated parameter;
and judging whether the camera module is dirty or not according to the brightness distribution, the first threshold and the second threshold.
Optionally, the first threshold and the second threshold are learned from large amounts of data or preset from empirical values.
Optionally, the determining whether the camera module is dirty according to the brightness distribution, the first threshold and the second threshold includes:
acquiring the number A of pixel points of which the parameter values of the specified parameters in the specific image are between the first threshold and the second threshold; the first threshold is greater than the second threshold;
and judging whether the camera module is dirty or not according to the quantity A.
Optionally, determining whether the camera module is dirty according to the quantity A includes:
comparing the quantity A with a preset quantity threshold value AA;
if A is greater than or equal to AA, determining that the camera module is dirty; and if A is less than AA, determining that the camera module is not dirty.
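As a sketch of the count-based decision above, assuming a grayscale image held in a NumPy array; the concrete values and the names `first_threshold`, `second_threshold`, and `count_threshold_aa` are illustrative placeholders, since the patent leaves them open:

```python
import numpy as np

def is_dirty_by_count(gray, first_threshold, second_threshold, count_threshold_aa):
    """Flag contamination when too many pixels fall strictly between the
    two brightness thresholds (first_threshold > second_threshold)."""
    between = (gray > second_threshold) & (gray < first_threshold)
    a = int(between.sum())          # the quantity A in the patent's terms
    return a >= count_threshold_aa  # dirty if A >= AA

# A bright test image with a dim smudge of mid-range pixels:
img = np.full((100, 100), 250, dtype=np.uint8)
img[40:50, 40:50] = 120             # 100 pixels between the thresholds
print(is_dirty_by_count(img, 200, 50, 80))   # True: A = 100 >= AA = 80
```

A fully bright (uncontaminated) image yields A = 0 and is classified as clean for any positive AA.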
Optionally, determining whether the camera module is dirty according to the quantity A includes:
acquiring the number B of pixel points of which the parameter values of the designated parameters in the specific image are greater than the first threshold;
calculating the ratio C of the quantity A to the quantity B;
if the ratio C is greater than or equal to a preset first ratio threshold CC, determining that the camera module is dirty; and if C is less than CC, determining that the camera module is not dirty.
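The ratio-based variant above might look like the following; the threshold names and values are again illustrative assumptions, and the `b == 0` guard is an added safeguard not specified in the patent:

```python
import numpy as np

def is_dirty_by_ratio(gray, first_threshold, second_threshold, ratio_threshold_cc):
    """Dirty when C = A / B >= CC, where A counts mid-range pixels and
    B counts pixels brighter than the first threshold."""
    a = int(((gray > second_threshold) & (gray < first_threshold)).sum())
    b = int((gray > first_threshold).sum())
    if b == 0:
        return True  # no bright pixels at all: treat as dirty (assumption)
    return (a / b) >= ratio_threshold_cc

img = np.full((10, 10), 250, dtype=np.uint8)  # mostly bright light-emitting area
img[0, :5] = 120                              # 5 mid-range "smudge" pixels
print(is_dirty_by_ratio(img, 200, 50, 0.05))  # True: C = 5/95 >= 0.05
```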
Optionally, the determining whether the camera module is dirty according to the brightness distribution, the first threshold and the second threshold includes:
acquiring a first segmentation image of the specific image according to the brightness distribution and the first threshold value, and acquiring a second segmentation image of the specific image according to the brightness distribution and the second threshold value;
and judging whether the camera module is dirty or not according to the first segmentation image and the second segmentation image.
Optionally, the obtaining a first segmented image of the specific image according to the brightness distribution and the first threshold comprises:
for each pixel point in the specific image, comparing the parameter value of the designated parameter of the pixel point with the first threshold;
if the parameter value is larger than or equal to the first threshold value, updating the parameter value of the pixel point to a first set value; if the parameter value is smaller than the first threshold value, updating the parameter value of the pixel point to a second set value;
and forming a first segmentation image according to the updated parameter value of each pixel point.
Optionally, acquiring a second segmented image of the specific image according to the brightness distribution and the second threshold comprises:
for each pixel point in the specific image, comparing the parameter value of the designated parameter of the pixel point with the preset second threshold;
if the parameter value is smaller than or equal to the second threshold value, updating the parameter value of the pixel point to a second set value; if the parameter value is larger than the second threshold value, updating the parameter value of the pixel point to a first set value;
and forming a second segmentation image according to the updated parameter value of each pixel point.
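The two threshold segmentations described above can be sketched as follows, assuming 255 and 0 as the first and second set values (the patent leaves the concrete set values unspecified):

```python
import numpy as np

FIRST_SET_VALUE = 255   # illustrative choice for the "first set value"
SECOND_SET_VALUE = 0    # illustrative choice for the "second set value"

def first_segmentation(gray, first_threshold):
    """Pixels at or above the first threshold become FIRST_SET_VALUE,
    the rest SECOND_SET_VALUE."""
    return np.where(gray >= first_threshold,
                    FIRST_SET_VALUE, SECOND_SET_VALUE).astype(np.uint8)

def second_segmentation(gray, second_threshold):
    """Pixels at or below the second threshold become SECOND_SET_VALUE,
    the rest FIRST_SET_VALUE."""
    return np.where(gray <= second_threshold,
                    SECOND_SET_VALUE, FIRST_SET_VALUE).astype(np.uint8)

gray = np.array([[10, 120, 250]], dtype=np.uint8)
print(first_segmentation(gray, 200))    # [[  0   0 255]]
print(second_segmentation(gray, 50))    # [[  0 255 255]]
```

Mid-range pixels (here, 120) are the ones the two segmentations disagree on, which is what the comparison in the next step exploits.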
Optionally, the determining whether the camera module is dirty according to the first segmentation image and the second segmentation image includes:
acquiring the number D1 of pixel points in the first segmentation image whose designated parameter takes the first set value, and the number D2 of pixel points in the second segmentation image whose designated parameter takes the first set value;
obtaining the difference D between D2 and D1;
obtaining the ratio E of D to D1;
and if E is greater than or equal to a preset second ratio threshold EE, determining that the camera module is dirty; if E is less than EE, determining that the camera module is not dirty.
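A sketch of the decision based on comparing the two segmentation images; the set value 255 and the threshold EE are illustrative assumptions:

```python
import numpy as np

def is_dirty_by_segmentation_diff(seg1, seg2, ratio_threshold_ee,
                                  first_set_value=255):
    """D1/D2 count first-set-value pixels in the first/second segmentation
    images; dirty when E = (D2 - D1) / D1 >= EE."""
    d1 = int((seg1 == first_set_value).sum())
    d2 = int((seg2 == first_set_value).sum())
    e = (d2 - d1) / d1
    return e >= ratio_threshold_ee

seg1 = np.array([[255, 0, 0, 0]], dtype=np.uint8)      # D1 = 1
seg2 = np.array([[255, 255, 255, 0]], dtype=np.uint8)  # D2 = 3
print(is_dirty_by_segmentation_diff(seg1, seg2, 1.5))  # True: E = 2.0 >= 1.5
```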
Optionally, the shape characteristic refers to a degree of deformation of the particular shape.
Optionally, the degree of deformation is characterized by a fill rate of the particular shape; the filling rate is a ratio of the area of the connected domain of the specific shape to the area of the minimum circumscribed figure.
Optionally, the acquiring the shape feature of the specific image comprises:
acquiring N connected domains in the first segmentation image, N being a natural number;
acquiring the minimum circumscribed figure of each of the N connected domains, the minimum circumscribed figure having the same shape as the specific shape;
and determining the shape features of the specific image according to the connected domains and their corresponding minimum circumscribed figures.
Optionally, the obtaining N connected regions within the first segmented image includes:
acquiring M1 connected domains in the first segmentation image; m1 is a positive integer and is greater than or equal to M;
acquiring the attribute of each connected domain in the M1 connected domains;
filtering out connected domains which do not meet preset conditions from the M1 connected domains according to the attribute of each connected domain to obtain N connected domains;
the preset condition is that the distance between the center of the connected domain and the center of the specific image is smaller than or equal to a preset distance threshold.
Optionally, the attributes of a connected domain include at least one of: its center position, its area, its minimum circumscribed figure, the area of that figure, its aspect ratio, and the relative position of the connected domain boundary and the boundary of the first segmentation image.
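The center-distance filtering described above might be sketched as follows; the `(row, col)` tuple representation of a connected domain's center is an assumption made for illustration, since the patent fixes no data structure:

```python
import math

def filter_by_center_distance(centers, image_shape, distance_threshold):
    """Keep only the connected domains whose center lies within
    distance_threshold of the image center (the 'preset condition')."""
    height, width = image_shape
    img_cy, img_cx = height / 2.0, width / 2.0
    kept = []
    for (cy, cx) in centers:
        if math.hypot(cy - img_cy, cx - img_cx) <= distance_threshold:
            kept.append((cy, cx))
    return kept

centers = [(50, 50), (5, 5)]   # one central domain, one corner domain
print(filter_by_center_distance(centers, (100, 100), 20))  # [(50, 50)]
```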
Optionally, determining the shape feature of the specific image according to the connected domain and the corresponding minimum circumscribed graph includes:
respectively calculating the area of each connected domain and the area of the corresponding minimum circumscribed graph;
and calculating the ratio F of the area of each connected domain to the area of its corresponding minimum circumscribed figure; this ratio F is the filling rate of the specific shape.
Optionally, the determining whether the camera module is dirty according to the shape feature includes:
if the ratio F for each of the N connected domains is greater than or equal to a preset deformation threshold FF, determining that the camera module is not dirty;
and if the ratio F for any one of the N connected domains is less than the deformation threshold FF, determining that the camera module is dirty.
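Putting the shape-feature steps together, a minimal sketch of the fill-rate computation and the deformation decision. Using the axis-aligned bounding box as the minimum circumscribed figure is a simplification that suits the square marks used as the running example, and the threshold FF = 0.9 is an illustrative placeholder:

```python
import numpy as np
from collections import deque

def fill_rates(binary, foreground=255):
    """Fill rate F for each 4-connected component: component area divided
    by the area of its axis-aligned bounding box."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    rates = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] != foreground or seen[y, x]:
                continue
            # BFS flood fill over one connected component
            q = deque([(y, x)])
            seen[y, x] = True
            ys, xs = [y], [x]
            while q:
                cy, cx = q.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and binary[ny, nx] == foreground
                            and not seen[ny, nx]):
                        seen[ny, nx] = True
                        q.append((ny, nx))
                        ys.append(ny)
                        xs.append(nx)
            area = len(ys)
            box_area = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
            rates.append(area / box_area)
    return rates

def is_dirty_by_shape(binary, deformation_threshold_ff=0.9):
    """Dirty if any connected domain's fill rate F falls below FF."""
    return any(f < deformation_threshold_ff for f in fill_rates(binary))

seg = np.zeros((20, 20), dtype=np.uint8)
seg[2:7, 2:7] = 255                      # intact square mark: F = 1.0
print(is_dirty_by_shape(seg))            # False
seg[10:15, 10:12] = 255                  # deformed, L-shaped mark:
seg[10:12, 10:15] = 255                  # F = 16/25 = 0.64 < 0.9
print(is_dirty_by_shape(seg))            # True
```

A production pipeline would more likely use a library routine such as OpenCV's connected-component labeling, but the pure-Python BFS keeps the sketch self-contained.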
According to a second aspect of the present invention, there is provided an apparatus for detecting a camera module, including a processor and a memory, where the memory stores a plurality of instructions, and the processor reads the instructions from the memory to implement the steps of the method of the first aspect.
According to a third aspect of the present invention, there is provided a system for detecting a camera module, comprising the apparatus for detecting a camera module according to the second aspect, a light source module, and a sealed box; wherein:
the light source module is configured to provide uniform illumination;
the sealed box is disposed outside the light source module and provides the camera module with a detection environment in which the only light comes from the light source module;
before detection, the camera module is placed inside the sealed box, with the normal of the mirror surface in the camera module perpendicular to the light-emitting surface of the light source module;
and the apparatus is connected to the camera module and configured to control the camera module to capture a specific image and to detect, according to the specific image, whether the camera module is dirty.
Optionally, the light source module includes: surface light source and sign light-transmitting plate;
the surface light source is provided with a smooth light emitting surface; the mark light-transmitting plate is attached to the light-emitting surface;
the sign light-transmitting plate is made of black light-absorbing materials, and a plurality of holes in specific shapes are formed in the sign light-transmitting plate.
Optionally, the light source module includes: a surface light source; the surface light source is provided with a light-emitting surface made of black light absorption materials, and a plurality of holes in specific shapes are formed in the light-emitting surface.
Optionally, the specific shape comprises: at least one of rectangular, triangular and circular.
Optionally, the plurality of specially shaped holes are regularly distributed.
Optionally, the system further comprises a position adjustment module consisting of a movable assembly and a stationary assembly; the movable assembly is fixed to the light source module or to the camera module, and the distance between the light source module and the camera module depends on the relative position of the movable assembly and the stationary assembly.
Optionally, the distance between the light source module and the camera module is less than or equal to a first distance; the first distance is the distance at which the image captured by the camera module is exactly filled by the light-emitting surface of the light source module.
Optionally, the system further comprises a display module, wherein the display module is connected with the equipment and at least used for displaying the dirty detection result of the camera module.
According to a fourth aspect of the present invention, there is provided a machine-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of the first aspect.
According to the above technical solution, a specific image captured by the camera module is acquired; the brightness distribution and/or deformation features of the specific image are then acquired; and finally, whether the camera module is dirty is determined according to the brightness distribution and/or the deformation features. Since no manual inspection is needed in this embodiment, the detection result is independent of subjective judgment, which helps improve its accuracy. In addition, this embodiment determines the detection result far faster than manual inspection, improving the efficiency of detecting contamination of the camera module.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a block diagram of a system for detecting a camera module according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a light source module according to an embodiment of the invention;
fig. 3 is a schematic structural diagram of a light source module according to another embodiment of the present invention;
FIG. 4 is a diagram illustrating a specific image including M specific shapes according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a position relationship between a light-sensing area of an image sensor and a light-emitting surface of a light source module in a camera module according to an embodiment of the present invention; fig. 5(a) shows a scene in which the light-emitting surface is located inside the photosensitive area; fig. 5(b) is a scene with a photosensitive area inside the light-emitting surface;
fig. 6 is a schematic structural diagram of a position adjustment module according to an embodiment of the present invention; FIG. 6(a) is a top view of the sealed container of FIG. 1 with the top cover removed (top of FIG. 6); FIG. 6(b) is a front view of the sealed enclosure of FIG. 1 with the front side (front view in FIG. 6) removed;
fig. 7 is a block diagram of a system for detecting a camera module according to another embodiment of the present invention;
fig. 8 is a block diagram of an apparatus for detecting a camera module according to an embodiment of the present invention;
fig. 9 is a schematic flowchart illustrating a method for detecting a camera module according to an embodiment of the present invention;
fig. 10 is a schematic flowchart illustrating a method for detecting a camera module according to another embodiment of the present invention;
fig. 11 is a schematic flowchart illustrating a method for detecting a camera module according to another embodiment of the present invention;
fig. 12 is a schematic flowchart illustrating a method for detecting a camera module according to yet another embodiment of the present invention;
fig. 13 is a schematic flowchart illustrating a method for detecting a camera module according to yet another embodiment of the present invention;
fig. 14 is a schematic flowchart illustrating a method for detecting a camera module according to another embodiment of the present invention;
FIG. 15 is a schematic diagram of a first segmentation image acquisition process provided by an embodiment of the present invention;
FIG. 16 is a schematic diagram of a second segmentation image acquisition process provided by an embodiment of the present invention;
fig. 17 is a schematic flowchart of a method for detecting a camera module according to another embodiment of the present invention;
fig. 18 is a schematic flowchart of a method for detecting a camera module according to another embodiment of the present invention;
FIG. 19 is a diagram of connected domains for a particular image provided by an embodiment of the invention;
FIG. 20 is a flowchart illustrating a method for acquiring a connected domain according to an embodiment of the present invention;
FIG. 21 is a diagram illustrating a minimum bounding graph of a connected domain provided by an embodiment of the present invention;
FIG. 22 is a schematic flow chart illustrating the process of obtaining the fill factor according to an embodiment of the present invention;
fig. 23 is a schematic flowchart illustrating a method for detecting a camera module according to another embodiment of the present invention;
fig. 24 is a flowchart illustrating a method for detecting a camera module according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
An existing camera module comprises at least an image sensor and a lens. When a specific image is captured, light from the light source reaches the image sensor through the lens, and the image sensor converts the received light into a corresponding image. If no contamination exists, the lens, and the air between the lens and the image sensor, can be regarded as uniform media, so all light rays reaching the image sensor travel equal optical paths.
In practice, the lens of the camera module may pick up oil stains, fingerprints, dust, and other contaminants during production, manufacturing, and testing, and the image sensor may likewise collect dust. The lens, or the air between the lens and the image sensor, is then no longer a uniform medium: light is refracted and arrives where it should not, so the image captured by a contaminated module differs from that of a clean one. Contamination detection of the camera module is therefore required prior to deployment.
At present, contamination detection of camera modules is done manually: inspectors use a microscope to examine the mirror surface and other parts of the camera module and judge whether contamination exists. However, this approach has low detection efficiency, and manual judgment is highly subjective, so the detection results contain large errors.
To this end, an embodiment of the present invention provides a system for detecting a camera module that solves the above problems. Fig. 1 is a block diagram of such a system. Referring to fig. 1, the system comprises: a light source module 11, a sealed box 12, and a device 13 for detecting the camera module. Wherein:
The sealed box 12 is disposed outside the light source module 11, and the camera module 14 to be inspected is placed inside the sealed box 12 before inspection, with the normal 141 of its mirror surface perpendicular to the light-emitting surface 112 of the light source module 11. The shape of the light-emitting surface is not limited; it may be rectangular, circular, polygonal, etc., and a circular light-emitting surface is used in the description here. The light source module 11 may be fixed to the sealed box by adhesion, bolts, or the like, or fixed to the movable assembly 151 of the position adjustment module 15 described later. The camera module 14 may likewise be detachably fixed inside the sealed box 12 by adhesion, bolts, or the like, or detachably fixed to the movable assembly 151 of the position adjustment module 15.
In this embodiment, the sealed box 12 can prevent light outside the box from entering the inside thereof, that is: when the light source module 11 is turned off (i.e. no light is emitted), the interior of the sealed box 12 is in a dark state; after the light source module 11 is turned on, only the light emitted from the light source module 11 exists in the sealed box 12.
In this embodiment, the light source module 11 can provide uniform light emission. When looking at the light emitting surface of the light source module 11 from the perspective of the camera module 14, M specific shapes can be seen, where M is a natural number.
In an embodiment, the specific shape may include at least one of a rectangle, a triangle, and a circle. Of course, a skilled person may instead select regular polygons such as regular pentagons or regular hexagons; the solution of the present application can be implemented equally well.
In this embodiment, M may be 1, i.e., a single specific shape may be used. To improve detection accuracy, M may be larger, in which case multiple specific shapes are provided; they may be arranged in regular patterns, for example in rows, columns, or other layouts, so as to place as many holes as possible. For convenience of description, the following embodiments use a square as an example.
In another embodiment, the structure of the light source module 11 may be adapted to form the specific shapes in the specific image in either of the following two ways:
in a first mode, referring to fig. 2, the light source module 11 may include: a surface light source 113 and a logo light-transmitting plate 114. Wherein, the surface light source 113 is provided with a smooth light emitting surface; the light-emitting surface is attached with a mark transparent plate 114. The sign light-transmitting panel 114 may also be referred to as a Chart table, in general. The sign transparent plate 114 may be provided with M (natural number) holes 115 having a specific shape, so that light can pass through the holes 115 and the light projected onto the sign transparent plate 114 can not pass through. For this purpose, the mark light-transmitting plate 114 can be made of black light-absorbing material, such as black cloth, carbon nanotube black material, or graphene. A technician can select a suitable material to make the sign light-transmitting plate 114 according to a specific scene, and on the basis that the holes 115 can transmit light and the areas outside the holes 115 do not transmit light, the material selected by the technician to make the sign light-transmitting plate 114 also falls within the protection scope of the present application.
In an embodiment, the light source module 11 may further include an adhesive layer 116. In this way, the mark light-transmitting plate 114 can be bonded to the light-emitting surface 112 side of the surface light source 113 via the adhesive layer 116. The adhesive layer 116 may be made of a solid material, and the surface light source 113 and the mark light-transmitting plate 114 are respectively placed on both sides of the adhesive layer 116 and then compressed. Of course, the adhesive layer 116 may be formed by applying a liquid material uniformly on the surface light source 113, pressing the mark light-transmitting plate 114 on the surface light source 113, and curing the liquid material. It can be understood that the adhesive layer 116 needs to have a better light transmittance, so as to ensure that the light emitted from the surface light source 113 reaches the mark light-transmitting plate 114 as much as possible, and thus the light emitting surface with M specific shapes can be seen at the position of the camera module 14.
In a second way, referring to fig. 3, the light source module 11 may include: a surface light source 113. The surface light source 113 is provided with an exit surface 112 made of a black light absorbing material, and the exit surface 112 is provided with M holes 115 with specific shapes, so that light emitted by the surface light source 113 can only exit through the holes 115, and the exit surface with M specific shapes can be seen at the position of the camera module 14.
In this embodiment, the device 13 for detecting the camera module is connected to the camera module 14, the connection mode may include wired connection and wireless connection, and the technician may select a suitable connection mode according to a specific scene, which is not limited herein.
A physical or virtual button may be provided on the device 13, which an inspector triggers to control the camera module 14 to capture a specific image; the specific image may contain specific shapes as shown in fig. 4. The camera module 14 may transmit the captured specific image to the device 13, which then detects whether the camera module is dirty according to the specific image. The contamination may include external contamination such as stains, oil, fingerprints, or dust, as well as intrinsic defects of the camera module 14 such as a non-uniform lens, an uneven mirror surface, or image sensor sensitivity problems. That is, the system provided by the present invention can detect both intrinsic and external contamination of the camera module 14; the detection manner is described in detail in the following method embodiments and is not explained here.
In this embodiment, the positional relationship between the light emitting surfaces of the camera module 14 and the light source module 11, as shown in fig. 5(a) and 5(b), may include:
referring to fig. 5(a), a technician may also adjust the position of the camera module 14, the focal length of the lens in the camera module 14, or the position of the light source module 11, so that the light-sensing area 142 of the image sensor in the camera module 14 is larger than or externally connected to the light-emitting surface of the light source module 11. Thus, the specific image may include the area around the light emitting surface of the light source module 11.
In this embodiment, the device 13 for detecting the camera module can shoot at least one specific image at different angles by adjusting the lens angle of the camera module 14, so as to obtain a plurality of specific images. Based on at least one specific image at each angle, the device 13 calls an image recognition algorithm to recognize the light-emitting surface area, so that whether the area corresponding to the camera module 14 is dirty or not can be determined. Then, the degree of contamination at a plurality of angles can be combined to determine whether the camera module 14 is contaminated. The detailed description is given later, and will not be explained herein.
Referring to fig. 5(b), a technician may adjust the position of the camera module 14 or the focal length of the lens in the camera module 14, so that the photosensitive area 142 of the image sensor in the camera module 14 is smaller than or internally tangent to the light-emitting surface of the light source module 11. In this way, the specific image may not include the area around the light-emitting surface of the light source module 11, that is, the specific image may correspond to the effective detection area of the light-emitting surface of the light source module 11, so that only one specific image is needed to detect whether the camera module 14 is dirty, thereby reducing the data calculation amount of the device 13 for detecting the camera module and improving the real-time performance of subsequent detection of dirt.
It should be noted that the skilled person may select the arrangement shown in fig. 5(a) or fig. 5(b) according to the specific scenario. Of course, the technician may also adjust the positional relationship between the camera module 14 and the light-emitting surface in another way; as long as a specific image can be obtained and used to detect whether the camera module is dirty, the corresponding scheme also falls within the scope of the present application.
Since camera modules have different sizes, the positional relationship between the camera module 14 and the light-emitting surface of the light source module 11 varies. Referring to fig. 6(a) and 6(b), in an embodiment, the system for detecting a camera module further includes a position adjustment module 15 composed of a movable assembly 151 and a stationary assembly 152. The movable assembly 151 may be fixed to the light source module 11 or to the camera module 14 (fig. 6 shows the case where it is fixed to the camera module 14 as an example). The distance L between the light source module 11 and the camera module 14 is related to the relative position between the movable assembly 151 and the stationary assembly 152. It can be understood that, by moving the movable assembly 151, the relative position between the movable assembly 151 and the stationary assembly 152 can be adjusted, thereby adjusting where the light-emitting surface of the light source module 11 falls within the light-sensing area 142 of the image sensor in the camera module 14.
In another embodiment, before detecting the camera module 14, a technician may adjust the position between the camera module 14 and the light source module 11 in advance by means of the position adjustment module 15, so that the light-emitting surface of the light source module 11 appears at the position of the photosensitive area 142 as shown in fig. 5(a) or fig. 5(b), and record the distance between the light source module 11 and the camera module 14 in this scenario. In an embodiment, the distance at which the photosensitive area 142 is exactly inscribed in the light-emitting surface of the light source module 11 is referred to as a first distance. For example, when the distance L between the light source module 11 and the camera module 14 is less than or equal to the first distance, the light-emitting surface of the light source module 11 can fill the photosensitive area 142; when the distance is greater than the first distance, it cannot.
It can thus be seen that, by providing the position adjustment module, the system of this embodiment can detect camera modules of different sizes, thereby improving the adaptability of the system.
In an embodiment, referring to fig. 7, the system for detecting a camera module may further include a display module 16. The display module 16 is connected to the device 13 for detecting the camera module and is used at least for displaying the dirt detection result of the camera module. Of course, the display module 16 may also display the specific image captured by the camera module 14 or previous detection results; a technician may set the display content according to the specific scene, which is not limited herein.
An embodiment of the present invention further provides an apparatus for detecting a camera module, and fig. 8 is a block diagram of an apparatus for detecting a camera module according to an embodiment of the present invention. Referring to fig. 8, an apparatus 800 for detecting a camera module includes a processor 801 and a memory 802. The memory 802 stores a plurality of instructions, and the processor 801 reads the instructions from the memory 802, so as to implement the method shown in fig. 9, including:
901, acquiring a specific image captured by the camera module;

902, acquiring the brightness distribution and/or shape features of the specific image;

903, determining whether the camera module is dirty according to the brightness distribution and/or shape features of the specific image.
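The three steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: grayscale images are assumed to be nested lists of 0–255 values, and the dirt criterion (fraction of mid-gray pixels) anticipates the two-threshold scheme described in scene one below; all function names and the cutoff are illustrative.

```python
# Hypothetical sketch of steps 901-903. Thresholds 65/180 and the 4%
# fraction are the example values given later in the text.

def brightness_distribution(image):
    """Step 902: histogram of gray values over all pixels."""
    hist = [0] * 256
    for row in image:
        for g in row:
            hist[g] += 1
    return hist

def is_dirty(image, lo=65, hi=180, max_fraction=0.04):
    """Step 903: dirty if too many pixels fall strictly between lo and hi."""
    hist = brightness_distribution(image)
    total = sum(hist)
    between = sum(hist[lo + 1:hi])   # gray values strictly between thresholds
    return between / total > max_fraction

# Step 901 would supply the captured image; here we fake two 2x2 frames.
assert is_dirty([[255, 255], [0, 0]]) is False    # crisp shape on black
assert is_dirty([[255, 120], [100, 0]]) is True   # stray mid-gray points
```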
It can thus be seen that the device for detecting a camera module provided in this embodiment can automatically determine whether the camera module is dirty, without manual inspection, so that the detection result is independent of subjective judgment, which helps improve its accuracy. In addition, the detection result is determined far faster than by manual inspection, which improves the efficiency of detecting dirt on camera modules.
The following describes, with reference to the system for detecting a camera module shown in figs. 1 to 7, the device shown in fig. 8, and the flow chart shown in fig. 9, the process by which the device determines whether the camera module is dirty. The process may include scene one, scene two, and scenes three and four, which are formed from scene one and scene two. It can be understood that each scene can detect one camera module, or two or more camera modules simultaneously. The detection process for two or more camera modules is similar to that for one; for convenience of description, the following takes the detection of one camera module as an example.
Scene one
In this scene, the processor determines whether the camera module is dirty according to the brightness distribution of the specific image. If the camera module is not dirty, the light passes only through the holes in the marked light-transmitting plate without being refracted, forming the specific shape. If the camera module is dirty, the light passing through the holes is refracted by the dirt and reaches the background region (the region outside the specific shape) that should be black, so that white points in the image increase and the background brightness becomes uneven.
It should be noted that each pixel in the specific image may carry parameter values of several parameters, such as a brightness value and a gray value. In this embodiment, the gray value of each pixel is used as the designated parameter for description.
Referring to fig. 10, the processor may obtain the luminance distribution of the specific image by obtaining the gray value of each pixel point in the specific image, for example, the value range of the gray value may be 0 to 255 (corresponding to step 902).
Ideally, the gray values of the pixels inside the specific shape in the captured specific image are all the maximum gray value (e.g., 255), and the gray values of the pixels in the region outside the specific shape, i.e., the background region, are all the minimum gray value (e.g., 0). In practice, due to tolerable objective errors such as non-uniformity of the light emitted by the light source module, the air, and the lens, the gray values of pixels inside the specific shape are not necessarily the maximum value; and because light energy is lost during transmission, a white point formed where light from the light source module reaches the background region does not have the maximum gray value either (it may be, for example, 200).
To ensure that white points located in the background region when the background is abnormally bright can be picked out, two gray-value thresholds, a first threshold and a second threshold, may be preset in this embodiment, the first threshold being greater than the second threshold. Ideally, the gray values of pixels inside the specific shape are all greater than or equal to the first threshold, the gray values of pixels outside the specific shape are all less than or equal to the second threshold, and no pixel has a gray value between the two thresholds. In practical applications, the light intensity varies across the light-emitting surface of the light source module, and the marked light-transmitting plate is not perfectly black, so light reaching the plate may still produce some brightness; that is, the gray value of a pixel in the region outside the specific shape may exceed the second threshold. Likewise, attenuation losses before the light reaches the photosensitive region may make the gray value of a pixel inside the specific shape fall below the first threshold. As a result, a few pixels with gray values between the first threshold and the second threshold may exist.
In addition, if the camera module is dirty, part of the light that should reach the specific shape is attenuated and refracted by the dirt and lands outside the specific shape, so that the gray values of the affected pixels (white points) in the background region exceed the second threshold. As a result, the number of pixels whose gray value is greater than the second threshold becomes far greater than the number of pixels whose gray value is greater than the first threshold.
In other words, ideally, the number of pixels with gray values between the first threshold and the second threshold should be 0; within the allowed error, this number is small (for example, 1%-4% of the pixels); in the case of dirt, it increases sharply (for example, to more than 10%).
The first threshold and the second threshold may be learned from big data; for specific learning methods, reference may be made to the related art, which is not limited herein.
Of course, the first threshold and the second threshold may also be preset based on empirical values. For example, the first threshold may be set to 180 and the second threshold to 65. The skilled person may select the first threshold and the second threshold according to the specific scenario; as long as the solution of the present application can be implemented, the corresponding choice also falls within the scope of protection of the present application.
Thus, the processor may read the first threshold value and the second threshold value of the gradation value from the memory (corresponding to step 1001).
Finally, the processor can determine whether the camera module is dirty or not according to the brightness distribution, the first threshold and the second threshold (corresponding to step 1002).
In this embodiment, the manners for determining whether the camera module is dirty according to the brightness distribution, the first threshold and the second threshold may include a first manner and a second manner. Wherein:
The principle of the first mode is as follows: if the number of pixels in the specific image with gray values between the first threshold and the second threshold is 0 or small (for example, 1%-4%), the camera module is determined to be clean; if that number is large, the camera module is determined to be dirty.
Referring to fig. 11, for each pixel in the specific image, the processor compares its gray value with the first threshold and the second threshold, and counts the number A of pixels with gray values between the two thresholds (corresponding to step 1101). The processor can then determine whether the camera module is dirty according to the number A (corresponding to step 1102).

In one embodiment, referring to fig. 12, the processor obtains a preset number threshold AA and compares the number A with it (corresponding to step 1201). If the number A is greater than or equal to the number threshold AA, the processor determines that the camera module is dirty (corresponding to step 1202).
The number threshold AA may be learned from big data or preset based on empirical values. For example, a model relating the number threshold AA to the dirt result of the camera module may be established; under the condition that the accuracy of the dirt result is ensured, the value range of AA is determined, and a value is picked at random from that range, or its minimum or maximum is taken, as the final value of AA.
In another embodiment, referring to fig. 13, while counting the number A of pixels with gray values between the first and second thresholds, the processor also counts the number B of pixels in the specific image with gray values greater than the first threshold (corresponding to step 1301). The processor then calculates the ratio C of the number A to the number B (corresponding to step 1302). Next, the processor obtains a preset first ratio threshold CC and compares C with CC: if C is greater than or equal to CC, the camera module is determined to be dirty; if C is less than CC, it is determined not to be dirty (corresponding to step 1303).
It should be noted that the first ratio threshold CC may be learned based on big data or may be preset based on empirical values, and the setting mode may refer to the setting mode of the first threshold and the second threshold, which is not limited herein.
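Both variants of the first mode can be sketched as follows. This is an illustration under assumptions, not the patent's implementation: images are nested lists of gray values, and the thresholds (65/180), the number threshold AA, and the ratio threshold CC are the example values from the text.

```python
# Illustrative sketch of mode one (Figs. 11-13).

def count_between(image, lo=65, hi=180):
    """Number A: pixels with gray values strictly between the thresholds."""
    return sum(1 for row in image for g in row if lo < g < hi)

def count_above(image, hi=180):
    """Number B: pixels with gray values greater than the first threshold."""
    return sum(1 for row in image for g in row if g > hi)

def dirty_by_count(image, aa=2):
    """Variant 1 (steps 1101-1202): dirty when A >= AA."""
    return count_between(image) >= aa

def dirty_by_ratio(image, cc=0.05):
    """Variant 2 (steps 1301-1303): dirty when C = A / B >= CC."""
    a, b = count_between(image), count_above(image)
    return b > 0 and a / b >= cc

frame = [[255, 255, 90], [255, 0, 0], [130, 0, 0]]  # two mid-gray points
assert count_between(frame) == 2 and count_above(frame) == 3
assert dirty_by_count(frame, aa=2) is True
assert dirty_by_ratio(frame, cc=0.05) is True       # C = 2/3 >= 0.05
```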
The principle of the second mode extends that of the first: if the number of pixels in the specific image with gray values between the first threshold and the second threshold is 0 or small, the camera module is clean; if it is large, the camera module is dirty. Concretely, for the same frame of the specific image, the first threshold is taken as the critical gray value and the gray value of each pixel is adjusted: if a pixel's gray value is greater than the first threshold, it is set to the maximum gray value (e.g., 255); if it is less than or equal to the first threshold, it is set to the minimum gray value (e.g., 0). This yields a binary image corresponding to the specific image, i.e., the first segmented image. Similarly, using the second threshold as the critical gray value yields another binary image, i.e., the second segmented image. If the camera module is clean, the difference between the number of pixels with gray value 255 in the second segmented image and the number in the first segmented image should be 0 or small. If the camera module is dirty, this difference is large.

It should be noted that the above determines whether the camera module is dirty from the numbers of pixels with gray value 255 in the first and second segmented images; a technician may equally use the numbers of pixels with gray value 0. Since 0 and 255 are the only two gray values in each segmented image, the corresponding pixel counts are complementary, so the change between the number of pixels with gray value 0 in the first segmented image and the number in the second segmented image can also be used: if the camera module is clean, this difference should be 0 or small; if the camera module is dirty, it is large.
Referring to fig. 14, the processor acquires a first divided image and a second divided image according to the luminance distribution of the specific image, the first threshold value, and the second threshold value (corresponding to step 1401).
First, referring to fig. 15, the step of acquiring the first segmentation image according to the brightness distribution and the first threshold by the processor includes:
fig. 15(a) is a schematic diagram of the luminance distribution of a specific image. For each pixel point in the specific image, assuming that the first threshold is 180, the processor compares the gray value of the pixel point with the first threshold to obtain a comparison result as shown in fig. 15(b), where the bold and underlined numbers are the gray values of the pixel points greater than or equal to the first threshold, and the other numbers are the gray values of the pixel points smaller than the first threshold. Finally, the processor updates the gray value of the pixel point greater than or equal to the first threshold to the first set value, and updates the gray value of the pixel point whose gray value is less than the first threshold to the second set value, so as to obtain the first divided image as shown in fig. 15 (c).
The first set value may be set to 255, i.e., the maximum gray value, and the second set value to 0, i.e., the minimum gray value. Of course, the first set value may instead be set to 1 and the second set value to 0. The technician can choose according to the actual scene, which is not limited herein.
Referring to fig. 16, the step of acquiring the second segmentation image according to the brightness distribution and the second threshold by the processor includes:
fig. 16(a) is a schematic diagram of the luminance distribution of a specific image. For each pixel point in the specific image, assuming that the second threshold is 65, the processor compares the gray value of the pixel point with the second threshold to obtain a comparison result as shown in fig. 16(b), where the bold and underlined numbers are the gray values of the pixel points smaller than or equal to the second threshold, and the other numbers are the gray values of the pixel points larger than the second threshold. Finally, the processor updates the gray value of the pixel point less than or equal to the second threshold to the second set value, and updates the gray value of the pixel point whose gray value is greater than the second threshold to the first set value, so as to obtain the second divided image as shown in fig. 16 (c).
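The two binarizations in figs. 15 and 16 can be sketched as follows, assuming the example thresholds (first = 180, second = 65) and the set values 255/0 from the text; the function names are ours.

```python
# Binarization into the first and second segmented images.
FIRST_SET, SECOND_SET = 255, 0

def first_segmentation(image, first_threshold=180):
    """Gray >= first threshold -> 255, otherwise -> 0 (Fig. 15)."""
    return [[FIRST_SET if g >= first_threshold else SECOND_SET for g in row]
            for row in image]

def second_segmentation(image, second_threshold=65):
    """Gray <= second threshold -> 0, otherwise -> 255 (Fig. 16)."""
    return [[SECOND_SET if g <= second_threshold else FIRST_SET for g in row]
            for row in image]

image = [[200, 100], [60, 240]]
assert first_segmentation(image) == [[255, 0], [0, 255]]
assert second_segmentation(image) == [[255, 255], [0, 255]]
```

The mid-gray pixel (100) is background in the first segmented image but foreground in the second, which is exactly the count difference the next steps exploit.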
Then, the processor may determine whether the camera module is dirty or not according to the first and second divided images (corresponding to step 1402), referring to fig. 17, including:
first, the processor may obtain the number D1 of pixels in the first divided image whose gray-level value is the first set value (255), where D1 is 13 in fig. 15 (c). Meanwhile, the processor may further obtain the number D2 of the pixels in the second divided image whose gray-level value is the first set value (255), where D2 is 14 in fig. 16(c) (corresponding to step 1701).
Second, the processor obtains the difference D between D2 and D1, where fig. 15(c) and fig. 16(c) correspond to the difference D being 1 (corresponding to step 1702).
Thirdly, the processor obtains the ratio E between the difference D and the number D1 of pixels in the first segmented image whose gray value is the first set value (corresponding to step 1702); for fig. 15, the ratio E is 1/13 × 100% ≈ 7.69%.
Fourthly, the processor obtains a preset second ratio threshold EE and compares E with EE. If E is greater than or equal to EE, the camera module is determined to be dirty; if E is less than EE, it is determined not to be dirty. For example, with the second ratio threshold EE at 5%, the ratio E above exceeds EE, so the processor determines that the camera module is dirty.
It should be noted that the second ratio threshold EE may be learned based on big data or may be preset based on empirical values, and the setting mode may refer to the setting mode of the first threshold and the second threshold, which is not limited herein.
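Steps 1701–1702 and the final comparison can be sketched as follows, assuming 255 as the first set value and the example EE = 5%; a nonzero D1 is assumed (a valid capture contains at least one bright pixel).

```python
# Brightness check from the two segmented images (Figs. 14-17).

def count_value(binary_image, value=255):
    return sum(1 for row in binary_image for g in row if g == value)

def dirty_by_segmentation(first_seg, second_seg, ee=0.05):
    d1 = count_value(first_seg)    # D1: bright pixels, first threshold
    d2 = count_value(second_seg)   # D2: bright pixels, second threshold
    ratio_e = (d2 - d1) / d1       # E = D / D1, assuming D1 > 0
    return ratio_e >= ee

# Mirrors the worked example: D1 = 13, D2 = 14, E = 1/13 ≈ 7.69% >= 5%.
first_seg = [[255] * 13 + [0] * 7]
second_seg = [[255] * 14 + [0] * 6]
assert dirty_by_segmentation(first_seg, second_seg) is True
```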
According to the technical scheme of this embodiment, the specific image captured by the camera module is obtained; the brightness distribution of the specific image is then obtained; and finally, whether the camera module is dirty is determined according to the brightness distribution. Manual inspection is therefore not needed, so the detection result is independent of subjective judgment, which helps improve its accuracy. In addition, the detection result is determined far faster than by manual inspection, which improves the efficiency of detecting dirt on camera modules.
Scene two
In this scene, the processor determines whether the camera module is dirty according to the shape features of the specific image. If the camera module is not dirty, the specific shape in the specific image is regular and identical to the shape of the hole, because the camera module is perpendicular to the light-emitting surface of the light source module. If the camera module is dirty, the light is refracted and the edge of the specific shape changes, e.g., becomes concave or convex, so the specific shape is deformed. Therefore, in this embodiment, the shape feature of the specific shape refers to its degree of deformation. In one embodiment, the degree of deformation is characterized by the fill rate of the specific shape, i.e., the ratio of the area of the connected domain of the specific shape to the area of its minimum circumscribed figure.
First, the processor in this embodiment acquires the shape feature of the specific image, referring to fig. 18, including:
1801, the processor acquires N connected components in the first segmented image, and obtains a connected component distribution map as shown in fig. 19. Wherein N is a natural number.
In this embodiment, referring to fig. 20, the processor may call a preset connected component algorithm to obtain M1 connected components in the first segmented image (corresponding to step 2001); wherein M1 is a positive integer and is greater than or equal to the number M of particular shapes. The processor may then obtain the attributes for each of the M1 connected domains (corresponding to step 2002). Wherein the attributes include: at least one of a center position, an area, a minimum circumscribed figure, an area of the minimum circumscribed figure, an aspect ratio, a mutual position of a connected domain boundary and a first split image boundary. Then, the processor filters out the connected domains which do not meet the preset condition from the M1 connected domains according to the attribute of each connected domain, and obtains N connected domains (corresponding to step 2003).
It should be noted that, in this embodiment, the connected component algorithm may be implemented by a four-connected method or an eight-connected method in the related art, and the present invention is not limited thereto.
It should be noted that, in an embodiment, the attributes of a connected domain may be its center position and aspect ratio, and the preset condition may be that the distance between the center position of the connected domain and the center of the specific image is less than or equal to a preset distance threshold. By setting this condition, connected domains that arise where the edge of the photosensitive region 142 cuts into a specific shape (when the photosensitive region 142 of the image sensor is inscribed in the light-emitting surface) can be filtered out, leaving only complete connected domains around the center of the specific image and improving the accuracy of determining whether the camera module is dirty.
1802, a processor acquires a minimum circumscribed graph of each connected domain in N connected domains; the minimum circumscribed figure is the same as the specific shape. Referring to fig. 21, the connected component 20 in the lower right dotted circle has the same shape as the specific shape, so the minimum circumscribed figure 22 is the same as the specific shape, and therefore, the minimum circumscribed figure is not labeled for the connected component having the same shape as the minimum circumscribed figure in the present embodiment. With continued reference to fig. 21, the connected component 21 in the upper right dotted circle is deformed, and the minimum circumscribed figure 22 is still the same as the specific shape.
1803, the processor determines the shape features of the specific image from the connected domains and their corresponding minimum circumscribed figures. In one embodiment, referring to fig. 22, the processor calculates the area of each connected domain and the area of its minimum circumscribed figure (corresponding to step 2201). For example, for the connected domain 21 in the upper-right dotted circle of fig. 21, the processor may calculate its area as 0.0945 square millimeters and the area of its minimum circumscribed figure as 0.10 square millimeters; the connected domain 20 has an area of 0.10 square millimeters. Finally, the processor calculates the ratio F of the area of each connected domain to the area of its minimum circumscribed figure: for the connected domain 20, F is 100%; for the connected domain 21, F is 94.5%. The ratio F is thus the fill rate of the connected domain with respect to its minimum circumscribed figure.
Since the degree of deformation of the specific image is characterized by the fill rate in this embodiment, the degree of deformation of the specific image is thereby obtained; here it is 94.5%.
Secondly, the processor determines whether the camera module is dirty according to the shape features. Referring to fig. 23, the processor obtains a preset deformation threshold FF (corresponding to step 2301), which may be learned from big data or preset based on empirical values. The processor then compares the ratio F of each connected domain with the deformation threshold FF: if the ratio F of every one of the N connected domains is greater than or equal to FF, the camera module is determined not to be dirty; if the ratio F of any one of the N connected domains is less than FF, the camera module is determined to be dirty (corresponding to step 2302).
For example, the deformation threshold FF may be set to 95%. Since the ratio F corresponding to the presence of a connected component in a specific image is 94.5% smaller than the deformation threshold FF (95%), it can be determined that the camera module is dirty based on the specific image.
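The fill-rate check (steps 2201–2302) can be sketched as follows. Assumptions are ours: 4-connectivity for the connected domain algorithm, and an axis-aligned bounding box standing in for the minimum circumscribed figure (adequate when the hole shape is a square, as in this toy example; other hole shapes would need the true circumscribed figure). The filtering of step 2003 is omitted for brevity.

```python
# Shape check: connected domains, bounding box as circumscribed figure.

def connected_components(binary, fg=255):
    """Label 4-connected foreground regions; return lists of (row, col)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] == fg and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] == fg and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def fill_rate(comp):
    """Ratio F: component area over bounding-box area."""
    rows = [p[0] for p in comp]
    cols = [p[1] for p in comp]
    box = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return len(comp) / box

def dirty_by_shape(binary, ff=0.95):
    """Dirty if any connected domain has fill rate F below FF."""
    return any(fill_rate(c) < ff for c in connected_components(binary))

square = [[255, 255, 0], [255, 255, 0], [0, 0, 0]]   # F = 1.0
notched = [[255, 255, 0], [255, 0, 0], [0, 0, 0]]    # F = 3/4 < 0.95
assert dirty_by_shape(square) is False
assert dirty_by_shape(notched) is True
```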
According to the technical scheme of this embodiment, the specific image captured by the camera module is obtained; its deformation features are then obtained; and finally, whether the camera module is dirty is determined according to the deformation features. Manual inspection is therefore not needed, so the detection result is independent of subjective judgment, which helps improve its accuracy. In addition, the detection result is determined far faster than by manual inspection, which improves the efficiency of detecting dirt on camera modules.
Scene three
It is understood that the features of scenario one and scenario two can be combined with each other without conflict to form different solutions, which also fall within the scope of protection of the present application. One combination scheme is described below, see fig. 24:
first, the processor may first obtain a specific image captured by the camera module, and obtain a brightness distribution of the specific image, where the brightness distribution may be as shown in fig. 15 (a).
Then, the processor acquires a first threshold and a second threshold of the gray-scale value, and the processor may acquire a first divided image of the specific image according to the luminance distribution and the first threshold, where the manner of acquiring the first divided image may refer to fig. 15(a) to 15(c), and the first divided image may refer to fig. 15 (c); and acquiring a second divided image of the specific image based on the luminance distribution and the second threshold, the manner of acquiring the second divided image may refer to fig. 16(a) to 16(c), and the second divided image may refer to fig. 16 (c).
Then, the processor may count the number D1 of pixels with the first set value in the first segmented image and the number D2 of pixels with the first set value in the second segmented image.
Further, the processor calculates the difference D between the numbers D2 and D1, and from it the ratio E between the difference D and the number D1. The processor then obtains a preset second ratio threshold EE and compares the ratio E with it. If E is greater than or equal to EE, the camera module is determined to be dirty; if E is less than EE, the processor continues by acquiring the first segmented image of the specific image according to the brightness distribution and the first threshold, or by reusing the previously acquired first segmented image.
Then, the processor may call a preset connected component algorithm to obtain N connected components in the first segmented image, wherein the step of filtering out the connected components may refer to fig. 20. The connected domain algorithm can be realized by adopting a four-connection method or an eight-connection method in the related art. The processor obtains the minimum circumscribed graph of each connected domain in the N connected domains, wherein the minimum circumscribed graph is the same as the specific shape in the specific image, and the connected domain and the minimum circumscribed graph thereof can be seen in fig. 21.
Continuing, the processor calculates the area of each connected domain and the area of the minimum circumscribed figure corresponding to each connected domain, and calculates a ratio F based on the two areas, wherein the ratio F is the filling rate of the specific image.
Finally, the processor obtains the deformation threshold FF and compares the ratio F of each connected domain with it. If every ratio F is greater than or equal to FF, the camera module is determined not to be dirty; if any ratio F is less than FF, the camera module is determined to be dirty.
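The scene-three pipeline can be sketched compactly as follows, combining the brightness check (ratio E vs EE) with the shape check (fill rate F vs FF). The thresholds are the running examples from the text, and the axis-aligned bounding box again stands in for the minimum circumscribed figure; the connected-domain filtering step is omitted.

```python
# Hypothetical combined detector for scene three.

def detect_dirty(image, t1=180, t2=65, ee=0.05, ff=0.95):
    seg1 = [[255 if g >= t1 else 0 for g in row] for row in image]
    seg2 = [[255 if g > t2 else 0 for g in row] for row in image]
    d1 = sum(row.count(255) for row in seg1)
    d2 = sum(row.count(255) for row in seg2)
    if d1 and (d2 - d1) / d1 >= ee:       # brightness check: E >= EE
        return True
    # Shape check: fill rate of each 4-connected bright region in seg1.
    h, w = len(seg1), len(seg1[0])
    seen = set()
    for r in range(h):
        for c in range(w):
            if seg1[r][c] == 255 and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           seg1[ny][nx] == 255 and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                rows = [p[0] for p in comp]; cols = [p[1] for p in comp]
                box = (max(rows)-min(rows)+1) * (max(cols)-min(cols)+1)
                if len(comp) / box < ff:  # shape check: F < FF
                    return True
    return False

clean = [[255, 255, 0], [255, 255, 0], [0, 0, 0]]
smudged = [[255, 255, 0], [255, 255, 0], [0, 0, 100]]  # refracted point
assert detect_dirty(clean) is False
assert detect_dirty(smudged) is True
```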
According to the technical scheme of this embodiment, the specific image captured by the camera module is obtained; its brightness distribution and deformation features are then obtained; and finally, whether the camera module is dirty is determined according to the brightness distribution and the deformation features. Manual inspection is therefore not needed, so the detection result is independent of subjective judgment, which helps improve its accuracy. In addition, the detection result is determined far faster than by manual inspection, which improves the efficiency of detecting dirt on camera modules.
Scene four
Scenes one to three introduced schemes for determining whether the camera module is dirty by using a specific image.
When the camera module is large, the processor can divide the lens of the camera module into X regions (X being a positive integer, for example X = 4). The processor then controls the image sensor to acquire a specific image for each of the X regions, and determines from each region's specific image whether the corresponding lens region is dirty. Finally, the dirt results of the X regions are combined to determine whether the camera module as a whole is dirty. For the scheme by which the processor determines whether each region is dirty, refer to scene one, scene two, or scene three; details are not repeated here.
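The region-wise combination can be sketched as follows, where detect_region stands for any of the per-region checks from scenes one to three (a hypothetical callback, not an API from the patent):

```python
def detect_module(region_images, detect_region):
    """Run the per-region dirt check on each of the X region images and
    combine: the module is dirty if any region tests dirty.
    Returns (overall_dirty, per_region_results)."""
    results = [detect_region(img) for img in region_images]
    return any(results), results
```

This keeps the per-region detector unchanged while the aggregation handles modules of any size.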
It can be seen that, by dividing the lens of the camera module into a plurality of regions, this embodiment requires neither an increase in the resolution of the image sensor nor an adjustment of the distance between the camera module and the image sensor (that is, the size of the sealed box in the camera module detection system need not be increased), and the scheme is thus applicable to detection scenarios for camera modules of different sizes.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The detection device, system, and method provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person of ordinary skill in the art may make modifications to the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (31)

1. A method for detecting a camera module, the method comprising:
acquiring a specific image shot by a camera module;
acquiring the brightness distribution and/or the shape characteristic of the specific image;
and judging whether the camera module is dirty or not according to the brightness distribution and/or the shape characteristics.
2. The method according to claim 1, wherein the specific image includes M specific shapes, M being a natural number.
3. The method of claim 2, wherein the specific shape comprises at least one of a rectangle, a triangle, and a circle.
4. The method according to claim 1, wherein the brightness distribution is a parameter value of a specific parameter of each pixel in the specific image, and the specific parameter at least includes a brightness value or a gray value.
5. The method of claim 2, wherein the shape feature is an amount of deformation of the particular shape.
6. The method of claim 4, wherein determining whether the camera module is dirty or not according to the brightness distribution comprises:
acquiring a first threshold value and a second threshold value of the designated parameter;
and judging whether the camera module is dirty or not according to the brightness distribution, the first threshold and the second threshold.
7. The method according to claim 6, wherein the first threshold and the second threshold are learned in a big-data manner or are preset based on empirical values.
8. The method of claim 6, wherein determining whether the camera module is dirty or not according to the brightness distribution, the first threshold and the second threshold comprises:
acquiring the number A of pixel points of which the parameter values of the specified parameters in the specific image are between the first threshold and the second threshold based on the brightness distribution; the first threshold is greater than the second threshold;
and judging whether the camera module is dirty or not according to the quantity A.
9. The method of claim 8, wherein determining whether the camera module is dirty or not according to the quantity A comprises:
comparing the quantity A with a preset quantity threshold value AA;
if the A is larger than or equal to the AA, determining that the camera module is dirty; and if the A is smaller than the AA, determining that the camera module is not dirty.
10. The method of claim 8, wherein determining whether the camera module is dirty or not according to the quantity A comprises:
acquiring the number B of pixel points of which the parameter values of the designated parameters in the specific image are greater than the first threshold;
calculating the ratio C of the quantity A to the quantity B;
if the ratio C is larger than or equal to a preset first ratio threshold CC, determining that the camera module is dirty; and if the C is smaller than the CC, determining that the camera module is not dirty.
11. The method of claim 6, wherein determining whether the camera module is dirty or not according to the brightness distribution, the first threshold and the second threshold comprises:
acquiring a first segmentation image of the specific image according to the brightness distribution and the first threshold value, and acquiring a second segmentation image of the specific image according to the brightness distribution and the second threshold value;
and judging whether the camera module is dirty or not according to the first segmentation image and the second segmentation image.
12. The method of claim 11, wherein obtaining the first segmented image of the particular image based on the intensity distribution and the first threshold comprises:
aiming at each pixel point in the specific image, comparing the parameter value of the specified parameter of the pixel point with the first threshold value;
if the parameter value is larger than or equal to the first threshold value, updating the parameter value of the pixel point to a first set value; if the parameter value is smaller than the first threshold value, updating the parameter value of the pixel point to a second set value;
and forming a first segmentation image according to the updated parameter value of each pixel point.
13. The method of claim 11, wherein obtaining the second segmented image of the particular image based on the intensity distribution and the second threshold comprises:
aiming at each pixel point in the specific image, comparing the parameter value of the designated parameter of the pixel point with a preset second threshold value;
if the parameter value is smaller than or equal to the second threshold value, updating the parameter value of the pixel point to a second set value; if the parameter value is larger than the second threshold value, updating the parameter value of the pixel point to a first set value;
and forming a second segmentation image according to the updated parameter value of each pixel point.
14. The method of claim 11, wherein determining whether the camera module is dirty or not according to the first and second segmented images comprises:
acquiring the number D1 of pixel points of which the parameter values of the designated parameters in the first divided image are the first set values, and the number D2 of pixel points of which the parameter values of the designated parameters in the second divided image are the first set values;
obtaining a difference D between the D2 and the D1;
obtaining the ratio E of the D and the D1;
if the E is larger than or equal to a preset second ratio threshold EE, determining that the camera module is dirty; and if the E is smaller than the EE, determining that the camera module is not dirty.
15. The method of claim 2, wherein the shape characteristic is a degree of deformation of the particular shape.
16. The method of claim 15, wherein the degree of deformation is characterized by a fill rate of the particular shape; the filling rate is a ratio of the area of the connected domain of the specific shape to the area of the minimum circumscribed figure.
17. The method of claim 16, wherein obtaining shape features of the particular image comprises:
acquiring N connected domains in a first segmentation image; n is a natural number;
acquiring the minimum circumscribed figure of each of the N connected domains; the minimum circumscribed figure has the same shape as the specific shape;
and determining the shape characteristics of the specific image according to the connected domain and the corresponding minimum circumscribed graph.
18. The method of claim 17, wherein obtaining N connected components within the first segmented image comprises:
acquiring M1 connected domains in the first segmentation image; m1 is a positive integer and is greater than or equal to M;
acquiring the attribute of each connected domain in the M1 connected domains;
filtering out connected domains which do not meet preset conditions from the M1 connected domains according to the attribute of each connected domain to obtain N connected domains;
the preset condition is that the distance between the center of the connected domain and the center of the specific image is smaller than or equal to a preset distance threshold.
19. The method of claim 18, wherein the properties of the connected domain comprise: at least one of a center position, an area, a minimum circumscribed figure, an area of the minimum circumscribed figure, an aspect ratio, and a relative position of the connected domain boundary and the boundary of the first segmented image.
20. The method of claim 17, wherein determining the shape feature of the particular image from the connected component and the corresponding minimal circumscribed graphic comprises:
respectively calculating the area of each connected domain and the area of the corresponding minimum circumscribed graph;
and calculating the ratio F of the area of each connected domain to the area of the corresponding minimum circumscribed figure, wherein the ratio F is the filling rate of the specific image.
21. The method of claim 20, wherein determining whether the camera module is dirty or not according to the shape feature comprises:
if the ratio F corresponding to each of the N connected domains is greater than or equal to a preset deformation threshold FF, determining that the camera module is not dirty;
and if the ratio F corresponding to any one of the N connected domains is smaller than the deformation threshold FF, determining that the camera module is dirty.
22. An apparatus for inspecting a camera module, the apparatus comprising a processor and a memory, the memory storing a plurality of instructions, the processor reading the instructions from the memory for implementing the steps of the method of any one of claims 1 to 21.
23. A system for inspecting camera modules, comprising the apparatus for inspecting camera modules of claim 22, a light source module, and a sealed box; wherein:
the light source module is used for providing uniform light emission;
the sealed box body is arranged outside the light source module and used for providing a detection environment only emitting light from the light source module for the camera module;
before detection, the camera module is placed in the sealed box body, and a normal line of a mirror surface in the camera module is perpendicular to a light-emitting surface of the light source module;
the equipment is connected with the camera module and used for controlling the camera module to shoot specific images and detecting whether the camera module is dirty or not according to the specific images.
24. The system of claim 23, wherein the light source module comprises: surface light source and sign light-transmitting plate;
the surface light source is provided with a smooth light emitting surface; the mark light-transmitting plate is attached to the light-emitting surface;
the sign light-transmitting plate is made of black light-absorbing materials, and a plurality of holes in specific shapes are formed in the sign light-transmitting plate.
25. The system of claim 23, wherein the light source module comprises: a surface light source; the surface light source is provided with a light-emitting surface made of black light absorption materials, and a plurality of holes in specific shapes are formed in the light-emitting surface.
26. The system of claim 24 or 25, wherein the particular shape comprises: at least one of rectangular, triangular and circular.
27. The system of claim 24 or 25, wherein the plurality of shaped holes are regularly distributed.
28. The system of claim 23, further comprising a position adjustment module comprising a moving component and a stationary component; the movable assembly is used for being fixed on the light source module or the camera module; the distance between the light source module and the camera module is related to the relative position between the movable component and the static component.
29. The system of claim 28, wherein a distance between the light source module and the camera module is less than or equal to a first distance; the first distance is a distance corresponding to the light-emitting surface of the light source module when the image shot by the camera module is full of the light-emitting surface.
30. The system of claim 23, further comprising a display module connected to the device and configured to at least display a result of the contamination detection of the camera module.
31. A machine-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 21.
CN201880039292.6A 2018-07-27 2018-07-27 Method, equipment and system for detecting camera module and machine-readable storage medium Pending CN110800294A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097666 WO2020019352A1 (en) 2018-07-27 2018-07-27 Method, device and system for inspecting camera module, and machine readable storage medium

Publications (1)

Publication Number Publication Date
CN110800294A true CN110800294A (en) 2020-02-14

Family

ID=69181213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880039292.6A Pending CN110800294A (en) 2018-07-27 2018-07-27 Method, equipment and system for detecting camera module and machine-readable storage medium

Country Status (2)

Country Link
CN (1) CN110800294A (en)
WO (1) WO2020019352A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111175221A (en) * 2020-02-28 2020-05-19 西安石油大学 Detection device for corrosivity of metal product
CN111724383A (en) * 2020-06-30 2020-09-29 重庆盛泰光电有限公司 Camera module black spot detection system based on turntable
CN111787315A (en) * 2020-08-05 2020-10-16 昆山软龙格自动化技术有限公司 Embedded high-speed operation network card device based on FPGA
CN111882540A (en) * 2020-07-28 2020-11-03 歌尔科技有限公司 Method, device and equipment for detecting stains on camera protective cover
CN112235565A (en) * 2020-09-25 2021-01-15 横店集团东磁有限公司 Device and method for Flare abnormity detection by matching with planar light source
CN112255239A (en) * 2020-10-22 2021-01-22 青岛歌尔声学科技有限公司 Pollution position detection method, device, equipment and computer readable storage medium
CN114390225A (en) * 2022-01-20 2022-04-22 上海安翰医疗技术有限公司 Method and device for detecting dirt of capsule endoscope

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111678673A (en) * 2020-05-25 2020-09-18 歌尔光学科技有限公司 Lens detection method, lens detection device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103179428A (en) * 2011-12-23 2013-06-26 鸿富锦精密工业(深圳)有限公司 System and method for testing camera module stains
CN103501435A (en) * 2013-09-27 2014-01-08 上海半导体照明工程技术研究中心 Method and device for testing dynamic range of camera by utilizing closed LED (light emitting diode) light source lamp box
CN104463827A (en) * 2013-09-17 2015-03-25 联想(北京)有限公司 Image acquisition module automatic detection method and corresponding electronic device
CN205005200U (en) * 2015-10-22 2016-01-27 潍坊歌尔电子有限公司 Lens case detection device
CN105991996A (en) * 2015-02-15 2016-10-05 宁波舜宇光电信息有限公司 Detection system and detection method for camera module group



Also Published As

Publication number Publication date
WO2020019352A1 (en) 2020-01-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200214