WO2022126871A1 - Defect layer detection method and system based on light field camera and detection production line - Google Patents

Info

Publication number
WO2022126871A1
WO2022126871A1 · PCT/CN2021/079838 · CN2021079838W
Authority
WO
WIPO (PCT)
Prior art keywords
defect
depth
light field
reference plane
information
Prior art date
Application number
PCT/CN2021/079838
Other languages
English (en)
French (fr)
Inventor
Haotian LI
Zhiwen QIAN
Original Assignee
Vomma (Shanghai) Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vomma (Shanghai) Technology Co., Ltd. filed Critical Vomma (Shanghai) Technology Co., Ltd.
Priority to KR1020237017783A priority Critical patent/KR20230096057A/ko
Publication of WO2022126871A1 publication Critical patent/WO2022126871A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/22Measuring arrangements characterised by the use of optical techniques for measuring depth
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/94Investigating contamination, e.g. dust
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/557Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854Grading and classifying of flaws
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8806Specially adapted optical and illumination features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10152Varying illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30121CRT, LCD or plasma display

Definitions

  • The present invention relates to the technical field of three-dimensional photoelectric detection of screen defects, and in particular to a defect layer detection method and system based on a light field camera.
  • The light field camera has emerged as a novel approach to three-dimensional defect detection.
  • In a light field camera, a microlens array is additionally arranged between the sensor and the main lens of a conventional camera so that the propagation direction of rays is also recorded, forming a unique light field image encoded by the lens array.
  • The light field image is then rendered to obtain three-dimensional information.
  • Moreover, defect detection and three-dimensional measurement can be conducted through a transparent or semi-transparent medium.
  • Three-dimensional defect detection technology, which recognizes object defects and obtains the corresponding three-dimensional information, is a core technology in the fields of machine vision and measurement.
  • However, human visual inspection and two-dimensional camera detection are still frequently used for screen defect detection in the industrial field.
  • A two-dimensional industrial camera can only detect whether dust and defects are present on a screen; it cannot distinguish dust from defects, nor can it locate a defect within a multilayered structure.
  • Patent CN106303175A discloses a multi-view, single-light-field-camera-based virtual reality three-dimensional data acquisition method, including the steps of: S101, acquiring a microlens calibration image of the single light field camera; S102, positioning the center position of each microlens by using the calibration image; S103, acquiring a light field picture; S104, selecting, under each microlens in the light field image, the pixel at the same relative position; S105, taking the selected pixels as sampling points and arranging their pixel values into a matrix to form the image of one view; and S106, selecting pixels at different relative positions and repeating steps S103 to S105 until all pixels have been selected.
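  • For readers unfamiliar with this kind of view rendering, the sketch below illustrates only the core idea of steps S104 and S105 (selecting the pixel at the same relative position under every microlens to form one view). It assumes the raw image has already been decoded into a 4-D array indexed by microlens position and by pixel position under each microlens; the array layout and function names are illustrative and do not reproduce the calibration procedure of the cited patent.
```python
import numpy as np

def extract_view(light_field_4d: np.ndarray, s: int, t: int) -> np.ndarray:
    """Pick the pixel at relative position (s, t) under every microlens.

    light_field_4d has shape (n_lens_y, n_lens_x, n_pix_y, n_pix_x); the result
    is one sub-aperture view with one pixel contributed by each microlens.
    """
    return light_field_4d[:, :, s, t]

# Illustrative usage with a synthetic light field:
# 100 x 120 microlenses, 9 x 9 pixels under each microlens.
lf = np.random.rand(100, 120, 9, 9)
center_view = extract_view(lf, 4, 4)                                       # central view
all_views = [extract_view(lf, s, t) for s in range(9) for t in range(9)]   # every view
```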
  • Patent CN111351446A discloses a light field camera calibration method for three-dimensional topography measurement.
  • The method includes the steps of: calibrating calibration plates at different positions in space together with the corresponding light field original images to acquire a corresponding relationship between the light field disparity image and three-dimensional space information; shooting a plurality of defocused soft light pure color calibration plates with the light field camera to obtain a light field white image; calculating a vignetting-removing matrix according to the light field white image; performing iterative computation to obtain a sub-pixel level center coordinate matrix of the microlenses of the light field camera; shooting, with the light field camera, a plurality of round-point calibration plates at known positions in three-dimensional space and removing vignetting; and establishing a light field mathematical model between the three-dimensional coordinates and the disparity, and performing fitting calculation to obtain the center coordinate of each round point and the disparity value corresponding to the round-point calibration, according to the three-dimensional imaging rule of the light field and the three-dimensional space information of the round points.
  • Patent CN111340888A discloses a light field camera calibration method and system without a white image.
  • The method comprises the steps of: acquiring a light field original image of an electronic checkerboard shot by the light field camera and then calibrating the microlens array according to the light field original image to generate a calibration result of the microlens array and a center point lattice of the microlens array; and extracting linear features of the light field original image by a template matching method and taking the linear features as calibration data for calibrating the internal and external parameters of the projection model of the light field camera.
  • The method does not depend on a white image and can obtain the center point lattice, the array pose of the microlenses and the internal and external parameters of the camera projection model only by processing an original light field image of the checkerboard, so that the method has the characteristics of high light field camera calibration precision and a wide application range.
  • the present invention aims to provide a defect layer detection method and system based on a light field camera, and a detection production line.
  • a defect layer detection method based on a light field camera including:
  • a depth reference plane acquisition step acquiring depth reference plane information according to a light field depth image
  • a defect extraction step extracting a defect according to a light field multi-view image and acquiring defect depth information according to the light field depth image;
  • a relative distance acquisition step acquiring a distance from the defect to the depth reference plane according to the depth reference plane information and the defect depth information;
  • a defect layer information acquisition step obtaining a spatial position of the defect in the to-be-detected product according to the distance from the defect to the depth reference plane and structural information of the to-be-detected product.
  • the to-be-detected product is a multilayered structural product, and the structural information of the to-be-detected product includes a distance relationship among multilayered structures;
  • distribution of the defect in the multilayered structure is obtained according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product.
  • the defect depth information includes a maximum depth of the defect and a minimum depth of the defect, and the distance from the defect to the depth reference plane includes a maximum distance from the defect to the depth reference plane and a minimum distance from the defect to the depth reference plane;
  • The layers of the multilayered structure of the to-be-detected product in which the maximum distance from the defect to the depth reference plane and the minimum distance from the defect to the depth reference plane are located are obtained separately.
  • Interference from the pixel mesh in the multi-view image is first weakened by a Gaussian filter; binarization thresholding is then performed separately to extract the bright spots and the dark spots in the image; connected domains are analyzed and connected domains with relatively small areas are removed; the remaining points are regarded as defect points, and the information at the corresponding pixel positions in the light field depth image, i.e. the depth information of the defect, is acquired.
  • The defect layer detection system based on the light field camera according to the present invention includes:
  • a depth reference plane acquisition module for acquiring depth reference plane information according to a light field depth image
  • a defect extraction module for extracting the defect according to a light field multi-view image and acquiring defect depth information according to the light field depth image
  • a relative distance acquisition module for acquiring a distance from the defect to the depth reference plane according to the depth reference plane information and the defect depth information
  • a defect layer information acquisition module for obtaining a spatial position of the defect in a to-be-detected product according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product.
  • the to-be-detected product is a multilayered structural product, and the structural information of the to-be-detected product comprises a distance relationship among multilayered structures;
  • distribution of the defect in the multilayered structure is obtained according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product.
  • the defect depth information includes a maximum depth of the defect and a minimum depth of the defect, and the distance from the defect to the depth reference plane includes a maximum distance from the defect to the depth reference plane and a minimum distance from the defect to the depth reference plane;
  • The layers of the multilayered structure of the to-be-detected product in which the maximum distance from the defect to the depth reference plane and the minimum distance from the defect to the depth reference plane are located are obtained separately.
  • Interference from the pixel mesh in the multi-view image is first weakened by a Gaussian filter; binarization thresholding is then performed separately to extract the bright spots and the dark spots in the image; connected domains are analyzed and connected domains with relatively small areas are removed; the remaining points are regarded as defect points, and the information at the corresponding pixel positions in the light field depth image, i.e. the depth information of the defect, is acquired.
  • A computer readable storage medium storing a computer program is also provided, wherein the computer program, when executed by a processor, implements the steps of the defect layer detection method based on the light field camera.
  • a product defect detection production line includes the defect layer detection system based on the light field camera, or includes the computer readable storage medium storing the computer program.
  • the present invention has the following beneficial effects:
  • Three-dimensional detection of defects in a product such as a screen can be realized by using the light field camera, and the position of a defect within the product structure can be determined precisely.
  • Because the light field camera performs passive measurement, it can carry out three-dimensional imaging through a transparent or semi-transparent medium. Therefore, three-dimensional detection can be performed on defects both above and below the depth reference plane.
  • The three-dimensional detection method of the light field camera needs only a single camera and a single shot, without scanning by a moving mechanism, and is therefore efficient and convenient.
  • The light field camera can measure the depth of the defect so as to assign the defect to a layer.
  • Fig. 1 is a detection process flow diagram according to one embodiment of the present invention
  • Fig. 2 is a schematic diagram of shooting the screen of a detected mobile phone with the light field camera according to the embodiment of the present invention;
  • Fig. 3 is a center view image of the light field of the screen glass of an LCD mobile phone calculated in the embodiment of the present invention;
  • Fig. 4 is a depth image corresponding to Fig. 3;
  • Fig. 5 is a side view of a layered point cloud image corresponding to Fig. 3;
  • Fig. 6 is a center view image of the screen of an OLED watch calculated in the embodiment of the present invention;
  • Fig. 7 is a depth image corresponding to Fig. 6;
  • Fig. 8 is a side view of a layered point cloud image corresponding to Fig. 6;
  • The defect layer detection method based on the light field camera according to the present invention includes:
  • a light field camera calibration step for calibrating the light field camera
  • a defect area depth shooting step for shooting a defect area of a to-be-detected product by using the light field camera to obtain a multi-view image of the light field and obtain a light field depth image according to the multi-view image of the light field;
  • a depth reference plane acquisition step acquiring depth reference plane information according to a light field depth image
  • a defect extraction step extracting a defect according to a light field multi-view image and acquiring defect depth information according to the light field depth image;
  • a relative distance acquisition step acquiring a distance from the defect to the depth reference plane according to the depth reference plane information and the defect depth information;
  • a defect layer information acquisition step obtaining a spatial position of the defect in the to-be-detected product according to the distance from the defect to the depth reference plane and structural information of the to-be-detected product.
  • the light field camera calibration step includes:
  • Step A1: shooting a plurality of defocused soft light calibration plates with the light field camera fitted with a matched aperture, for light field white image calibration and microlens center calibration.
  • An optical lens with a corresponding focal length and magnification is selected as the lens matched with the microlens array, according to the area size, the measuring depth range and the defect dimension range of the screen of the to-be-detected object.
  • The aperture of this lens is adjusted so that it matches the aperture of the main lens of the light field camera.
  • Images of the defocused soft light pure color calibration plates are shot by the light field camera, such that the microlens sub-images in the image are exactly or approximately tangent to one another; here, the light field white image means a pure-white background image shot by the light field camera.
  • A plurality of pure color background plates with relatively uniform light intensity, placed at defocused positions of the light field camera (i.e. the defocused soft light pure color calibration plates), are shot.
  • The plurality of light field white images are equalized and normalized to obtain the vignetting-removing matrix W(u, v), where u represents the horizontal coordinate value of a pixel in the image coordinate system and v represents the longitudinal coordinate value of the pixel in the image coordinate system. All subsequently shot light field original images need to be divided element-wise by the vignetting-removing matrix so as to remove vignetting, thereby completing the light field white image calibration; here, a light field original image means a light field image that has not yet been processed by the light field multi-view image algorithm. After the light field white image calibration step, the light field white image is processed with a filter to remove noise, and non-maxima suppression is performed on the filtered light field white image; the local maxima of the processed image are then taken, and these maxima are exactly the integer-level centers of the microlenses of the light field camera. Taking the integer-level centers of the microlenses as initial iterative values, iterative computation is performed to obtain the sub-pixel level center coordinates of the microlenses.
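  • As a rough illustration of the white image processing described above (not the exact procedure of the invention), the sketch below averages and normalizes several white images into a vignetting-removing matrix, divides a raw light field image by it element-wise, and estimates integer-level microlens centers as local maxima of the smoothed white image; all parameter values and function names are assumptions made for the example.
```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def vignetting_matrix(white_images):
    """Average several light field white images and normalize to [0, 1]."""
    w = np.mean(np.stack(white_images), axis=0)
    return w / w.max()

def remove_vignetting(raw_light_field, w, eps=1e-6):
    """Element-wise division of a raw light field image by the matrix W(u, v)."""
    return raw_light_field / (w + eps)

def integer_lens_centers(white_image, sigma=2.0, window=9):
    """Integer-level microlens centers: smooth, then keep local maxima.

    The returned centers would serve as initial values for sub-pixel refinement.
    """
    smoothed = gaussian_filter(white_image, sigma)
    local_max = smoothed == maximum_filter(smoothed, size=window)
    ys, xs = np.nonzero(local_max & (smoothed > smoothed.mean()))
    return np.column_stack([xs, ys])
```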
  • Step A2 performing light field camera dimension calibration
  • The dimension calibration plate is first fixed in the focal plane area of the light field camera and is then moved from the focal plane by fixed spatial distances while shooting is performed; since the spatial positions of the points on the calibration plate are known, the spatial positions of these points over the whole moving process are obtained. The round calibration points form dispersed circles on the light field image; processing is performed to obtain the diameter of each dispersed circle, from which its disparity value and pixel coordinates are further calculated, and a relationship among the three-dimensional spatial coordinates, the pixel coordinates and the disparity value of the light field camera is obtained according to the light field camera dimension calibration model;
  • Step A3 making the screen display an image capable of highlighting the defects according to different defect types.
  • The system provided by the present invention does not need an extra light source, since the screen itself emits light.
  • the defect area depth shooting step includes:
  • Step A4 focusing the light field camera near the defect to shoot a light field camera original light field image containing the defect information
  • Step A5 shooting the defect area of the detected screen by using the light field camera and processing the image to obtain the light field multi-view image.
  • Light field multi-view rendering is performed to obtain the light field multi-view image containing the defect information. The light field multi-view image is not substantially different from a conventional two-dimensional camera image, and can be regarded as images of the same object shot by a plurality of two-dimensional cameras at different angles, so that defect extraction can be performed on the light field multi-view image;
  • Step A6: obtaining a depth image, constructing a three-dimensional model of the image according to the dimension calibration data, and acquiring the depth reference plane. The light field disparity image is calculated from the image shot by the light field camera, and the disparity image is converted into the light field depth image according to the light field camera dimension calibration result.
  • Depth reference plane information is acquired according to the light field depth image. The depth reference plane corresponds to the upper surface of a structure that has certain texture information and therefore provides stable depth information; for an LCD screen this structure is the color filter, and for an OLED screen it is the organic emitting layer. Because the structure provides texture information in the image, the depth of its upper surface can be calculated by the light field algorithm; and because the structure has a fixed position in the to-be-detected object, it can be used as a reference surface for calculating the defect height. A plane equation of the depth reference plane is calculated by polynomial fitting.
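  • The patent only states that the plane equation is obtained by polynomial fitting; a first-order (planar) least-squares fit is one straightforward realization, sketched below with hypothetical names and with the reference-structure pixels already converted to (x, y, z) points.
```python
import numpy as np

def fit_reference_plane(points: np.ndarray):
    """Fit z = p*x + q*y + r to (N, 3) reference-surface points by least squares
    and return implicit plane coefficients (a, b, c, d) with a*x + b*y + c*z + d = 0."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    design = np.column_stack([x, y, np.ones_like(x)])
    (p, q, r), *_ = np.linalg.lstsq(design, z, rcond=None)
    return p, q, -1.0, r   # p*x + q*y - z + r = 0
```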
  • Each pixel in the light field depth image is converted into a true dimension coordinate (x(u,v), y(u,v), z(u,v)), where x(u,v), y(u,v) and z(u,v) represent the coordinates on the X, Y and Z axes of the pixel whose horizontal and longitudinal coordinate values in the image coordinate system are (u, v);
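  • In the invention this conversion comes from the dimension calibration model of step A2; purely for illustration, the sketch below uses a simple pinhole-style back-projection with assumed intrinsics (focal length f in pixels, principal point (cx, cy)), which is not the calibration model of this disclosure.
```python
import numpy as np

def depth_to_points(depth: np.ndarray, f: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (H, W) into an (H*W, 3) array of (x, y, z) points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel rows (v) and columns (u)
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.column_stack([x.ravel(), y.ravel(), depth.ravel()])
```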
  • The defect is extracted according to the light field multi-view image, and the defect depth information is obtained according to the light field depth image. Specifically, interference from the pixel mesh in the multi-view image is first weakened by a Gaussian filter; binarization thresholding is then performed separately to extract the bright spots and the dark spots in the image; connected domains are analyzed and connected domains with relatively small areas are removed; the remaining points can be regarded as defect points, and the information at the corresponding pixel positions is then read from the light field depth image to obtain the defect depth information;
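  • A minimal sketch of this extraction chain, using OpenCV and NumPy; the threshold values, blur strength and minimum area are illustrative assumptions, and the function simply returns pixel coordinates together with the depth looked up at those positions.
```python
import cv2
import numpy as np

def extract_defect_points(center_view: np.ndarray, depth: np.ndarray,
                          blur_sigma: float = 2.0, bright_thresh: int = 200,
                          dark_thresh: int = 60, min_area: int = 20) -> np.ndarray:
    """Return an (N, 3) array of (u, v, depth) for candidate defect pixels.

    center_view is an 8-bit grayscale view; depth is the light field depth image."""
    # 1. Gaussian filtering weakens the pixel-mesh texture of the screen.
    blurred = cv2.GaussianBlur(center_view, (0, 0), blur_sigma)

    # 2. Threshold bright and dark spots separately, then merge the two masks.
    _, bright = cv2.threshold(blurred, bright_thresh, 255, cv2.THRESH_BINARY)
    _, dark = cv2.threshold(blurred, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.bitwise_or(bright, dark)

    # 3. Connected-domain analysis; discard domains with relatively small area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    keep = np.zeros(mask.shape, dtype=bool)
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep |= labels == i

    # 4. Remaining pixels are taken as defect points; read their depth values.
    vs, us = np.nonzero(keep)
    return np.column_stack([us, vs, depth[vs, us]])
```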
  • In the relative distance acquisition step, the distance from the defect to the depth reference plane is acquired according to the depth reference plane information and the defect depth information. Specifically, since the equation of the depth reference plane and the coordinates of the defect point are known, the distance from the defect point to the depth reference plane can be calculated by the point-to-plane distance formula.
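  • With the depth reference plane written as ax + by + cz + d = 0 and a defect point at (x0, y0, z0), the point-to-plane distance referred to above is the standard formula; keeping the sign distinguishes defects above and below the reference plane:

$$D = \frac{a x_{0} + b y_{0} + c z_{0} + d}{\sqrt{a^{2} + b^{2} + c^{2}}}$$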
  • In the defect layer information acquisition step, the spatial position of the defect in the to-be-detected product is obtained according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product. Further, the layer or layers in which the defect lies are obtained from this spatial position, and the layer information is then mapped into the spatial coordinate system to form a layered point cloud of the defect.
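  • One simple way to realize this mapping is sketched below, under the assumption that the structural information is available as a list of signed interface distances to the depth reference plane; the numerical values are hypothetical and not taken from any real screen.
```python
import numpy as np

def assign_layers(signed_dist: np.ndarray, boundaries: np.ndarray) -> np.ndarray:
    """Map signed defect-to-reference-plane distances onto layer indices.

    boundaries holds the signed distances of the layer interfaces to the depth
    reference plane in ascending order; index i denotes the slab between
    boundaries[i-1] and boundaries[i]."""
    return np.searchsorted(boundaries, signed_dist)

# Hypothetical interface distances (mm) and three defect points.
layer_boundaries = np.array([-0.90, -0.60, -0.35, -0.10, 0.05, 0.40])
distances = np.array([0.12, 0.02, -0.05])
print(assign_layers(distances, layer_boundaries))   # -> [5 4 4]
```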
  • the to-be-detected product is a multilayered structural product, and the structural information of the to-be-detected product comprises a distance relationship among multilayered structures.
  • the to-be-detected product includes any one of a screen, a pair of VR glasses, a display and a television.
  • The screen can be the screen of any one of an intelligent terminal, a vehicle-mounted device, a display and a television.
  • The screen can be, for example, an LCD screen, an OLED screen, a QLED screen, a MiniLED screen or a MicroLED screen.
  • In the defect layer information acquisition step, the distribution of the defect in the multilayered structure is obtained according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product.
  • Since it is known which layer of the multilayered structure corresponds to the depth reference plane, the distance from the defect to the depth reference plane is obtained; and since the distances from the other layers of the multilayered structure to the layer corresponding to the depth reference plane are also known, the distance relationships between the defect and the layers of the multilayered structure can be obtained.
  • For example, if the distance from the defect to a layer of the multilayered structure is zero or negative, the defect is located in that layer.
  • the defect depth information includes a maximum depth of the defect and a minimum depth of the defect
  • the distance from the defect to the depth reference plane includes a maximum distance from the defect to the depth reference plane and a minimum distance from the defect to the depth reference plane.
  • When the defect is distributed over at least two layers, the layers of the multilayered structure of the to-be-detected product in which the maximum distance and the minimum distance from the defect to the depth reference plane are located are obtained separately, so that it is known from which layer to which layer the defect extends.
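  • Continuing the earlier sketch (same hypothetical boundaries), the extent of a defect across layers can be reported from the layers containing its minimum and maximum signed distances:
```python
import numpy as np

def defect_layer_span(signed_dist: np.ndarray, boundaries: np.ndarray):
    """Return (lowest_layer, highest_layer) spanned by one defect's points."""
    lo = int(np.searchsorted(boundaries, signed_dist.min()))
    hi = int(np.searchsorted(boundaries, signed_dist.max()))
    return lo, hi

# A scratch whose points range from -0.05 mm to +0.12 mm relative to the
# reference plane extends from layer 4 to layer 5 under these boundaries.
boundaries = np.array([-0.90, -0.60, -0.35, -0.10, 0.05, 0.40])
print(defect_layer_span(np.array([-0.05, 0.03, 0.12]), boundaries))   # -> (4, 5)
```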
  • the present invention provides a defect layer detection system based on a light field camera which can be implemented by executing the step flow of the defect layer detection method based on the light field camera, i.e. the defect layer detection method based on the light field camera is a preferred embodiment of the defect layer detection system based on the light field camera.
  • The defect layer detection system based on the light field camera according to the present invention comprises:
  • a depth reference plane acquisition module for acquiring depth reference plane information according to the light field depth image
  • a defect extraction module for extracting the defect according to the light field multi-view image and acquiring defect depth information according to the light field depth image
  • a relative distance acquisition module for acquiring the distance from the defect to the depth reference plane according to the depth reference plane information and the defect depth information
  • a defect layer information acquisition module for obtaining the spatial position of the defect in the to-be-detected product according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product.
  • the to-be-detected product is a multilayered structural product, and the structural information of the to-be-detected product includes a distance relationship among multilayered structures;
  • distribution of the defect in the multilayered structure is obtained according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product.
  • the defect depth information comprises a maximum depth of the defect and a minimum depth of the defect
  • the distance from the defect to the depth reference plane comprises a maximum distance from the defect to the depth reference plane and a minimum distance from the defect to the depth reference plane
  • The layers of the multilayered structure of the to-be-detected product in which the maximum distance from the defect to the depth reference plane and the minimum distance from the defect to the depth reference plane are located are obtained separately.
  • Depth reference plane information is acquired according to the light field depth image. The depth reference plane corresponds to the upper surface of a structure that has certain texture information and therefore provides stable depth information; for an LCD screen this structure is the color filter, and for an OLED screen it is the organic emitting layer. Because the structure provides texture information in the image, the depth of its upper surface can be calculated by the light field algorithm; and because the structure has a fixed position in the to-be-detected object, it can be used as a reference surface for calculating the defect height. A plane equation of the depth reference plane is calculated by polynomial fitting.
  • Each pixel in the light field depth image is converted into a true dimension coordinate (x(u,v), y(u,v), z(u,v)), where x(u,v), y(u,v) and z(u,v) represent the coordinates on the X, Y and Z axes of the pixel whose horizontal and longitudinal coordinate values in the image coordinate system are (u, v);
  • The defect is extracted according to the light field multi-view image, and the defect depth information is obtained according to the light field depth image; specifically, interference from the pixel mesh in the multi-view image is first weakened by a Gaussian filter, binarization thresholding is then performed separately to extract the bright spots and the dark spots in the image, and connected domains are analyzed and connected domains with relatively small areas are removed.
  • The remaining points can be regarded as defect points, and the information at the corresponding pixel positions is then read from the light field depth image to obtain the defect depth information;
  • a relative distance acquisition module is used for acquiring a distance from the defect to the depth reference plane according to the depth reference plane information and the defect depth information. Specifically, as the equation of the depth reference plane and the coordinates of the defect point are known, the distance from the defect point to the depth reference plane can be calculated by a distance formula from the point to the plane.
  • The defect layer information acquisition module is used for obtaining the spatial position of the defect in the to-be-detected product according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product. Further, the layer or layers in which the defect lies are obtained from this spatial position, and the layer information is then mapped into the spatial coordinate system to form a layered point cloud of the defect.
  • the to-be-detected product is a multilayered structural product, and the structural information of the to-be-detected product comprises a distance relationship among multilayered structures.
  • the to-be-detected product includes any one of a screen, a pair of VR glasses, a display and a television.
  • the screen can be a screen of any one equipment among an intelligent terminal, a vehicle-mounted screen, a display and a television.
  • The screen can be, for example, an LCD screen, an OLED screen, a QLED screen, a MiniLED screen or a MicroLED screen.
  • For a to-be-detected product with a multilayered structure, such as a screen, the distribution of the defect in the multilayered structure is obtained in the defect layer information acquisition module according to the distance from the defect to the depth reference plane and the structural information of the to-be-detected product. Specifically, since it is known which layer of the multilayered structure corresponds to the depth reference plane, the distance from the defect to the depth reference plane is obtained; and since the distances from the other layers of the multilayered structure to the layer corresponding to the depth reference plane are known, the distance relationships between the defect and the layers of the multilayered structure can be obtained. For example, if the distance from the defect to a layer of the multilayered structure is zero or negative, the defect is located in that layer.
  • the defect depth information includes a maximum depth of the defect and a minimum depth of the defect
  • the distance from the defect to the depth reference plane includes a maximum distance from the defect to the depth reference plane and a minimum distance from the defect to the depth reference plane.
  • When the defect is distributed over at least two layers, the layers of the multilayered structure of the to-be-detected product in which the maximum distance and the minimum distance from the defect to the depth reference plane are located are obtained separately, so that it is known from which layer to which layer the defect extends.
  • A computer readable storage medium storing a computer program is also provided, wherein the steps of the defect layer detection method based on the light field camera are implemented when the computer program is executed by a processing unit.
  • The computer readable storage medium can be a read-only or read-write memory, such as a magnetic disk or an optical disc, in equipment such as a single-chip microcomputer, a DSP, a processing unit, a data center, a server, a PC, an intelligent terminal, a dedicated machine or a light field camera.
  • a product defect detection production line comprises the defect layer detection system based on the light field camera, or the computer readable storage medium that stores the computer program.
  • The product defect detection production line can be a mobile phone screen detection production line, an intelligent watch screen detection production line, a tablet personal computer screen detection production line, a vehicle-mounted screen detection production line, a display screen detection production line, a VR glasses screen detection production line or an optical lens group detection production line.
  • Application scenarios include a defect layer detection embodiment for a mobile phone LCD screen and a defect layer detection embodiment for an intelligent watch OLED screen.
  • Defect layer detection embodiment for the mobile phone LCD screen: judging the layer in which a defect is located in the screen helps the plant determine at which production step the problem arises, so that the production process can be improved effectively. Meanwhile, if the defect appears on the surface of the screen, it may be dust that can be removed by cleaning, so that the pass rate is improved. Therefore, three-dimensional detection of screen defects is of great benefit to existing production.
  • the specific process of the defect layer detection embodiment for the mobile phone LCD screen is as follows:
  • A two-dimensional camera is first used to shoot the lit LCD mobile phone screen as a whole, so as to determine the horizontal position of the defect in the image and its relative position on the mobile phone screen; this information is conveyed to the conveyor belt, and the conveyor belt moves the detected sample under the light field camera such that the field of view of the light field camera covers the position of the defect.
  • The light field camera, fitted with a lens of 3.6 times magnification, shoots the detected sample. The light field camera with the lens of proper aperture and focal length shoots the defocused soft light pure color calibration plates for light field white image calibration and microlens center calibration; the light field camera shoots the plurality of dimension calibration plates at different spatial positions for light field camera dimension calibration; the light field camera is focused near the defect to shoot the original light field image containing the defect information; and light field multi-view rendering and depth calculation are performed to obtain the corresponding two-dimensional center view image of the mobile phone screen defect (Fig. 3) and the corresponding depth image (Fig. 4).
  • The defect is extracted according to the defect shape and color information in the light field multi-view image; the plane equation of the depth reference plane is calculated by polynomial fitting and the distance from the defect to the depth reference plane is calculated. The mobile phone screen has seven layers, so the number of the layer in which the defect is located can be determined according to the distance from the defect to the depth reference plane and the known screen structure information; the layer number is mapped into the spatial coordinate system and the layered point cloud of the defect (Fig. 5) is finally formed, wherein the fifth layer is the layer where the depth reference plane is located, so that apart from the sixth layer, where the defect is located, the points of the point cloud are distributed in the fifth layer.
  • A two-dimensional camera is first used to shoot the lit OLED intelligent watch screen as a whole, so as to determine the horizontal position of the defect in the image and its relative position on the screen; this information is conveyed to the conveyor belt, and the conveyor belt moves the detected sample under the light field camera such that the field of view of the light field camera covers the position of the defect.
  • The light field camera, fitted with a lens of 2 times magnification, shoots the detected sample. The light field camera with the lens of proper aperture and focal length shoots the defocused soft light pure color calibration plates for light field white image calibration and microlens center calibration; the light field camera shoots the plurality of dimension calibration plates at different spatial positions for light field camera dimension calibration; the light field camera is focused near the defect to shoot the original light field image containing the defect information; and light field multi-view rendering and depth calculation are performed to obtain the corresponding two-dimensional center view image of the watch screen defect (Fig. 6) and the corresponding depth image (Fig. 7).
  • The defect is extracted according to the defect shape and color information in the light field multi-view image; the plane equation of the pixel layer is calculated by polynomial fitting and the distance from the defect to the depth reference plane is calculated. For this screen it is only necessary to distinguish the pixel layer from the surface layer, so the layer in which the defect is located can be determined according to the distance from the defect to the pixel layer and the known screen thickness; the layer number is mapped into the spatial coordinate system and the layered point cloud of the defect (Fig. 8) is finally formed, wherein the lower layer is the pixel layer and the upper layer is the glass surface. It can be seen that the dust is detected successfully and is assigned to the upper layer, i.e. the glass surface.
  • The system, device and modules thereof provided by the present invention can also realize the same functions purely in hardware, in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like, by logically programming the method steps. Therefore, the system, device and modules thereof provided by the present invention can be considered a hardware part, and the modules included therein for realizing the various programs can also be considered structures within the hardware part; the modules for implementing the various functions can likewise be considered both software programs implementing the method and structures within the hardware part.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)
PCT/CN2021/079838 2020-12-15 2021-03-09 Defect layer detection method and system based on light field camera and detection production line WO2022126871A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020237017783A KR20230096057A (ko) 2020-12-15 2021-03-09 라이트 필드 카메라를 기반으로 한 결함 레이어링 검출 방법과 시스템 및 검출 생산 라인

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011481669.9A CN114693583A (zh) 2020-12-15 2020-12-15 基于光场相机的缺陷分层检测方法和系统及检测产线
CN202011481669.9 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022126871A1 true WO2022126871A1 (en) 2022-06-23

Family

ID=82059600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079838 WO2022126871A1 (en) 2020-12-15 2021-03-09 Defect layer detection method and system based on light field camera and detection production line

Country Status (3)

Country Link
KR (1) KR20230096057A (zh)
CN (1) CN114693583A (zh)
WO (1) WO2022126871A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071353A (zh) * 2023-03-06 2023-05-05 成都盛锴科技有限公司 一种螺栓装配检测方法及系统
CN116912475A (zh) * 2023-09-11 2023-10-20 深圳精智达技术股份有限公司 一种显示屏异物检测方法、装置、电子设备和存储介质
CN116958146A (zh) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 3d点云的采集方法及装置、电子装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105300308A (zh) * 2015-11-26 2016-02-03 凌云光技术集团有限责任公司 一种基于浅景深成像的深度测量方法及系统
CN105842885A (zh) * 2016-03-21 2016-08-10 凌云光技术集团有限责任公司 一种液晶屏缺陷分层定位方法及装置
US20180103247A1 (en) * 2016-10-07 2018-04-12 Kla-Tencor Corporation Three-Dimensional Imaging For Semiconductor Wafer Inspection
CN109765245A (zh) * 2019-02-25 2019-05-17 武汉精立电子技术有限公司 大尺寸显示屏缺陷检测定位方法
CN110349132A (zh) * 2019-06-25 2019-10-18 武汉纺织大学 一种基于光场相机深度信息提取的织物瑕疵检测方法
CN111351446A (zh) * 2020-01-10 2020-06-30 奕目(上海)科技有限公司 一种用于三维形貌测量的光场相机校准方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105300308A (zh) * 2015-11-26 2016-02-03 凌云光技术集团有限责任公司 一种基于浅景深成像的深度测量方法及系统
CN105842885A (zh) * 2016-03-21 2016-08-10 凌云光技术集团有限责任公司 一种液晶屏缺陷分层定位方法及装置
US20180103247A1 (en) * 2016-10-07 2018-04-12 Kla-Tencor Corporation Three-Dimensional Imaging For Semiconductor Wafer Inspection
CN109765245A (zh) * 2019-02-25 2019-05-17 武汉精立电子技术有限公司 大尺寸显示屏缺陷检测定位方法
CN110349132A (zh) * 2019-06-25 2019-10-18 武汉纺织大学 一种基于光场相机深度信息提取的织物瑕疵检测方法
CN111351446A (zh) * 2020-01-10 2020-06-30 奕目(上海)科技有限公司 一种用于三维形貌测量的光场相机校准方法

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071353A (zh) * 2023-03-06 2023-05-05 成都盛锴科技有限公司 一种螺栓装配检测方法及系统
CN116071353B (zh) * 2023-03-06 2023-09-05 成都盛锴科技有限公司 一种螺栓装配检测方法及系统
CN116912475A (zh) * 2023-09-11 2023-10-20 深圳精智达技术股份有限公司 一种显示屏异物检测方法、装置、电子设备和存储介质
CN116912475B (zh) * 2023-09-11 2024-01-09 深圳精智达技术股份有限公司 一种显示屏异物检测方法、装置、电子设备和存储介质
CN116958146A (zh) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 3d点云的采集方法及装置、电子装置
CN116958146B (zh) * 2023-09-20 2024-01-12 深圳市信润富联数字科技有限公司 3d点云的采集方法及装置、电子装置

Also Published As

Publication number Publication date
KR20230096057A (ko) 2023-06-29
CN114693583A (zh) 2022-07-01

Similar Documents

Publication Publication Date Title
WO2022126871A1 (en) Defect layer detection method and system based on light field camera and detection production line
JP6855587B2 (ja) 視点から距離情報を取得するための装置及び方法
CN113205593B (zh) 一种基于点云自适应修复的高反光表面结构光场三维重建方法
TWI618640B (zh) 立體列印系統以及立體列印方法
WO2022126870A1 (en) Three-dimensional imaging method and method based on light field camera and three-dimensional imaging measuring production line
CN104101611A (zh) 一种类镜面物体表面光学成像装置及其成像方法
JP5627622B2 (ja) 固体撮像装置および携帯情報端末
CN109406527B (zh) 一种微型摄像模组镜头精细外观缺陷检测系统及方法
CN102630299A (zh) 缺陷检查用图像处理装置和缺陷检查用图像处理方法
CN112824881A (zh) 一种基于光场相机的透明或半透明介质缺陷检测系统及方法
CN115456945A (zh) 一种芯片引脚缺陷的检测方法、检测装置及设备
CN114280075B (zh) 一种管类零件表面缺陷在线视觉检测系统及检测方法
CN112816493A (zh) 一种芯片打线缺陷检测方法及装置
CN114252449B (zh) 一种基于线结构光的铝合金焊缝表面质量检测系统及方法
CN112748071A (zh) 透明或半透明介质缺陷检测系统和方法
CN105979248B (zh) 具有混合深度估计的图像处理系统及其操作方法
CN107490342A (zh) 一种基于单双目视觉的手机外观检测方法
CN109191516B (zh) 结构光模组的旋转纠正方法、装置及可读存储介质
CN110842930A (zh) 基于dlp和相机标定的用于机器人的视觉装置及测量方法
CN115839957A (zh) 显示模组夹层缺陷的检测方法、装置、设备及存储介质
Li et al. High dynamic range 3D measurements based on space–time speckle correlation and color camera
CN213121672U (zh) 一种基于双目视觉的图像检测装置
CN112782176A (zh) 一种产品外观检测方法及装置
CN113483655A (zh) 一种pcb检测系统及方法
CN113034455B (zh) 一种平面物件麻点检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21904812

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20237017783

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21904812

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.01.2024)