WO2022126870A1 - Three-dimensional imaging method and system based on light field camera and three-dimensional imaging measuring production line - Google Patents

Three-dimensional imaging method and system based on light field camera and three-dimensional imaging measuring production line

Info

Publication number
WO2022126870A1
WO2022126870A1 (PCT/CN2021/079837; CN2021079837W)
Authority
WO
WIPO (PCT)
Prior art keywords
light field
image
field camera
gold wire
camera
Prior art date
Application number
PCT/CN2021/079837
Other languages
French (fr)
Inventor
Haotian LI
Yuzhe XU
Original Assignee
Vomma (Shanghai) Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vomma (Shanghai) Technology Co., Ltd. filed Critical Vomma (Shanghai) Technology Co., Ltd.
Publication of WO2022126870A1 publication Critical patent/WO2022126870A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/557 - Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 - Systems specially adapted for particular applications
    • G01N21/88 - Investigating the presence of flaws or contamination
    • G01N21/8851 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10052 - Images from lightfield camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30141 - Printed circuit board [PCB]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N23/957 - Light-field or plenoptic cameras or camera modules

Definitions

  • the present invention relates to the technical field of three-dimensional photoelectric measurement for conventional objects, in particular to a three-dimensional imaging system based on a linear array light field camera scanning device and a chip detection system and detection production line based on the light field camera and a two-dimensional camera.
  • the light field camera is a novel 3D camera that has emerged in recent years.
  • a microlens array is additionally arranged between the sensor and the main lens of a conventional (2D) camera to record the propagation direction of rays, forming a unique light field image encoded by the lens array.
  • the light field image is rendered to further obtain three-dimensional information.
  • Patent CN106303175A discloses a multi-view and single-light field camera-based virtual reality three-dimensional data acquisition method, including the steps of S101, acquiring a microlens calibration image of the single light field camera; S102, positioning a center position of the microlens by using the calibration image; S103, acquiring a light field picture; S104, selecting a same pixel with same relative position under each microlens in the light field image; S105, solving a pixel value embedded into a matrix thereof by taking the selected pixel as a sampling point to further form an image of a view; and S106, selecting pixels in different positions and repeating steps S103 to S105 till all pixels are selected completely.
  • Patent CN111351446A discloses a light field camera calibration method for three-dimensional topography measurement.
  • the method includes the steps of: calibrating calibration plates in different positions in a space and corresponding light field original images to acquire a corresponding relationship between a light field disparity image and three-dimensional space information; shooting a plurality of defocused soft light pure color calibration plates by using the light field camera to obtain a light field white image; calculating a vignetting-removing matrix according to the light field camera white image; performing iterative computation to obtain a sub-pixel level center coordinate matrix of the microlens of the light field camera; shooting a plurality of round point calibration plates in a known three-dimensional space and removing vignetting by the light field camera; and establishing a light field mathematical model between a three-dimensional coordinate and disparity, and performing fitting calculation to obtain a center coordinate of the round point and a disparity value corresponding to round point calibration according to a three-dimensional imaging rule of the light field and three-dimensional space information of the round point.
  • Patent CN111340888A discloses a light field camera calibration method and system without a white image.
  • the method comprises the steps of: acquiring a light field original image of an electronic checkerboard shot by the light field camera and then calibrating a microlens array according to the light field original image to generate a calibration result of the microlens array and a center point lattice of the microlens array; and extracting a linear feature of the light field original image by adopting a template matching method and taking the linear feature as internal and external parameters for calibrating a projection model of the light field camera by the calibration data.
  • the method is not dependent on the white image and can be used to obtain the center point lattice, the array posture of the microlenses and the internal and external parameters of the projection model of the camera only by processing an original light field of the checkerboard, so that the method has the characteristics of high calibration precision of the light field camera and wide application range.
  • the mainstream means of current machine-vision industrial detection still depends on the 2D industrial camera, that is, features of the to-be-detected object are extracted from a gray-scale image for measurement in the X-Y plane.
  • the present invention aims to provide a three-dimensional imaging method and system based on a light field camera, and a three-dimensional imaging measuring production line.
  • the three-dimensional imaging method based on the light field camera including:
  • a shot image acquisition step: acquiring a plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images comprises a light field depth image and a light field multi-view image obtained by shooting a to-be-detected object by the linear array light field camera in a shooting position, and the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera;
  • an image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image;
  • a three-dimensional acquisition step: obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
  • a resolution of the linear array light field camera is n*m, a moving interval of the linear array light field camera between adjacent shooting positions is m pixels, and a dimension of the spliced image generated after N movements of the linear array light field camera is n*m*(N+1).
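The dimension rule above can be checked with a short sketch. The following Python snippet is a minimal illustration only (NumPy assumed; the strip size and move count are taken from the figure resolutions given later, 7912*10 per scan and 7912*5430 after splicing) and is not the patent's actual splicing code.

```python
import numpy as np

def splice_strips(strips):
    # Each strip is an (n, m) light field image captured at one shooting
    # position; the camera advances by m pixels between positions, so N moves
    # give N + 1 strips and a spliced image of dimension n by m * (N + 1).
    return np.concatenate(strips, axis=1)

n, m, N = 7912, 10, 542                      # values matching Figs. 3 and 4
strips = [np.zeros((n, m), dtype=np.uint16) for _ in range(N + 1)]
spliced = splice_strips(strips)
assert spliced.shape == (n, m * (N + 1))     # (7912, 5430)
```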
  • the three-dimensional information acquisition module includes:
  • a point cloud information acquisition module for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
  • the three-dimensional imaging system based on the light field camera including:
  • a shot image acquisition module for acquiring the plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images comprises the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by a linear array light field camera in a shooting position, and the shooting positions are distributed to form the linear array parallel to a width direction of an image sensor of the linear array light field camera;
  • an image splicing module for splicing the plurality of groups of to-be-spliced images to obtain the spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image;
  • a three-dimensional information acquisition module for obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
  • a resolution of the linear array light field camera is n*m, a moving interval of the linear array light field camera between adjacent shooting positions is m pixels, and a dimension of the spliced image generated after N movements of the linear array light field camera is n*m*(N+1).
  • the three-dimensional information acquisition module includes:
  • a center view image acquisition module for splicing to obtain a center view image
  • a center view light field depth image acquisition module for splicing to obtain a center view light field depth image
  • a point cloud information acquisition module for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
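As a rough illustration of how a center view image and its aligned depth image can yield point cloud and dimension information, here is a hedged NumPy sketch. The lateral scale factor `scale_xy` (object-space size of one pixel) is an assumed calibration output, not a quantity named in the patent.

```python
import numpy as np

def depth_to_point_cloud(center_view, depth_map, scale_xy):
    # Back-project every pixel of the depth map into object space.
    # scale_xy: assumed calibrated lateral size of one pixel (e.g. mm/pixel);
    # depth_map: calibrated depth values aligned with the center view image.
    rows, cols = depth_map.shape
    v, u = np.mgrid[0:rows, 0:cols]                  # pixel row/column grids
    x = u * scale_xy
    y = v * scale_xy
    z = depth_map.astype(float)
    gray = center_view.astype(float)
    return np.stack([x, y, z, gray], axis=-1).reshape(-1, 4)

# Usage sketch: points = depth_to_point_cloud(center_view, depth_map, scale_xy=0.002)
```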
  • the steps of the three-dimensional imaging method based on the light field camera as defined in claim 1 are implemented when the computer program is executed by a processor.
  • a light field camera scanning device including:
  • the linear array light field camera being mounted on the moving mechanism and the moving mechanism driving the light field camera to move to each shooting position to scan, wherein the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera; the light source irradiates rays to the to-be-detected object; and the linear array light field camera shoots a light field image of the to-be-detected object in each shooting position.
  • the light field camera scanning device includes a controller.
  • the controller comprises the computer readable storage medium storing the computer program as defined in claim 7.
  • a three-dimensional imaging measuring production line including the three-dimensional imaging system
  • a gold wire positioning step: acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
  • a gold wire feature extraction step: in the detected image, drawing a midperpendicular of the straight line segment, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points;
  • a gold wire defect analyzing step: obtaining a gold wire extension shape according to a plurality of gold wire feature points and analyzing a first type defect of the gold wire according to the gold wire extension shape.
  • the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
  • the gold wire feature point is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height.
  • the gold wire feature extraction step includes:
  • an initializing step: taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
  • a midperpendicular step: drawing a midperpendicular for the currently selected line segment, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, and triggering execution of a selecting step;
  • a selecting step: dividing the currently selected line segment by the current midperpendicular into two updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
  • a chip detection method based on a light field camera and a two-dimensional camera including:
  • a gold wire positioning module for acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
  • a gold wire feature extraction module for drawing a midperpendicular of the straight line segment in the detected image, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points;
  • a gold wire defect analyzing module for obtaining the gold wire extension shape according to the plurality of gold wire feature points and analyzing the first type defect of the gold wire according to the gold wire extension shape.
  • the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
  • the gold wire feature point is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height.
  • the gold wire feature extraction module includes:
  • an initializing module for taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
  • a midperpendicular module for drawing a midperpendicular for the currently selected line segment, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, and triggering execution of a selecting step;
  • a selecting module for dividing the currently selected line segment by the current midperpendicular into two updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
  • a chip detection system based on a light field camera and a two-dimensional camera including:
  • the computer readable storage medium storing the computer program, wherein the steps of the defect detection method based on the light field camera are implemented when the computer program is executed by a processing unit.
  • a product defect detection production line includes the system for detecting the gold wire of the chip or includes the chip detection system based on the light field camera and the two-dimensional camera or includes the computer readable storage medium storing the computer program.
  • the present invention has the following beneficial effects:
  • scanning by the light field camera is performed to obtain the high resolution light field multi-view image, such that a better three-dimensional imaging result is obtained, thereby improving the measuring precision.
  • High resolution imaging of the three-dimensional shape of the to-be-detected object is achieved.
  • the light field image scanned by the linear array light field camera has a higher resolution, and compared with the image obtained by a common light field camera, the obtained light field multi-view image also has a higher resolution, so that the drawback that the light field camera sacrifices resolution in the x and y directions is well addressed.
  • a more precise three-dimensional image can be calculated from the high resolution light field multi-view image.
  • Defects such as no wire, broken wire, upward foot, wire deviation and the like can be detected by using a solder ball of a chip shot and detected by the two-dimensional camera in combination with a gold wire of the chip shot and detected by the light field camera.
  • the falling-wire defect of the gold wire of the chip can be detected by analyzing the height of the wire.
  • Fig. 1 is a flow diagram of a three-dimensional imaging process according to one of embodiments of the present invention.
  • Fig. 2 is a schematic diagram of scanning and shooting a detected medium under light source irradiation by the linear array light field camera according to the embodiment of the present invention.
  • Fig. 3 is a part of an area of a light field image of the chip scanned at one time by the linear array light field camera in the embodiment of the present invention, the light field image resolution obtained by scanning at one time being 7912*10.
  • Fig. 4 is a part of an area of a light field image spliced after multiple times of scanning by the linear array light field camera, resolution of the light field image obtained by multiple times of scanning being 7912*5430.
  • Fig. 5 is a center view image of the light field of the chip.
  • Fig. 6 is a light field depth image corresponding to Fig. 5.
  • Fig. 7 is a three-dimensional point cloud image corresponding to Fig. 5.
  • Fig. 8 is a structural schematic diagram of the present invention.
  • Fig. 9 is a picture where the defect type of the gold wire is a falling wire.
  • Fig. 9a is a center view diagram under the light field camera
  • Fig. 9b is a depth diagram
  • Fig. 9c is a point cloud screenshot
  • Fig. 9d is a three-dimensional curve graph of the gold wire in a marked area.
  • Fig. 10 is a picture where the defect type of the gold wire is a broken wire.
  • Fig. 10a is a center view diagram under the light field camera
  • Fig. 10b is a depth diagram
  • Fig. 10c is a point cloud screenshot
  • Fig. 10d is a three-dimensional curve graph of the gold wire in a marked area.
  • Fig. 11 is a picture where the defect type of the solder ball is a defective ball.
  • Fig. 11a is a picture shot by the two-dimensional camera and Fig. 11b is a binary image after image processing.
  • Fig. 12 is a picture where the defect type of the solder ball is a small ball.
  • Fig. 12a is a picture shot by the two-dimensional camera and Fig. 12b is an image after image processing.
  • Fig. 13 is a step flow diagram of the method of the present invention.
  • Fig. 14 is a detected image.
  • Fig. 15 is a picture of each gold wire feature point in the detected image.
  • First light source 201 (red light)
  • Second light source 202 (green light)
  • the three-dimensional imaging method based on the light field camera including:
  • a light field camera calibration step: calibrating the linear array light field camera, wherein the image sensor of the linear array light field camera and the image sensor of an area array light field camera are different in type: the image sensor of the linear array light field camera has a width equal to the diameter of a microlens in the single row of microlenses and a length equal to the length of the single row of microlenses (the sum of the diameters of the microlenses in the row), whereas the sensor of the area array light field camera is the same as the image sensor of a common area array 2D camera;
  • a shooting step: acquiring each group of to-be-spliced images in each shooting position of an area array by the light field camera or acquiring each group of to-be-spliced images in each shooting position of a linear array by the linear array light field camera,
  • each group of to-be-spliced images includes the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by the linear array light field camera in the shooting position, and the shooting positions are distributed to form the linear array parallel to a width direction of the image sensor of the linear array light field camera;
  • an image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image;
  • a three-dimensional acquisition step: obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
  • the light field camera calibration step including:
  • step A1: adjusting a focal length and an aperture of the linear array light field camera, and scanning and shooting a plurality of defocused soft light pure color calibration plates to acquire a light field camera white image; and calculating a vignetting-removing matrix and a sub-pixel level center coordinate matrix of the microlenses of the light field camera according to the light field camera white image;
  • the step A1 specifically includes: adjusting the main lens and the aperture of the linear array light field camera so that the microlens images in the white image of the original light field of the light field camera are just or approximately tangential; and enabling the linear array light field camera to shoot images of the plurality of defocused soft light pure color calibration plates, which are pure color background plates with uniform light intensity placed at the defocused position of the light field camera, wherein the vignetting-removing matrix is a matrix obtained by equalizing and normalizing the white images W (u, v) of the plurality of original light fields, and the sub-pixel level center coordinate matrix of the microlenses of the light field camera is a sub-pixel level matrix of the microlens center coordinates obtained by iterative optimization as described below;
  • matching the aperture of the main lens with the aperture of the microlenses; specifically, scanning and shooting the images of the defocused soft light pure color calibration plates by the linear array light field camera, wherein the microlenses in the image are just or approximately in a tangential state, and wherein the light field white image (also called the light field camera white image) means a pure white background image shot by the light field camera, on which the shape of the microlens array is reflected particularly clearly; thus, the aperture can be adjusted based on the image to ensure that the images of the microlenses are just tangential; after adjustment, shooting the plurality of pure color background plates with relatively uniform light intensity in the defocused position of the linear array light field camera, i.e.
  • the defocused soft light pure color calibration plates; equalizing and normalizing the plurality of light field white images to obtain the vignetting-removing matrix, wherein all subsequently shot original light field images need to be divided element-wise by the vignetting-removing matrix to remove vignetting, the original light field images being light field images that have not been processed by the light field multi-view image algorithm; after finishing the light field white image calibration step, processing the light field white image with a filter to remove noise and performing non-maxima suppression on the filtered light field white image; further, taking the local maxima of the processed image, the maxima being exactly the integer level centers of the microlenses of the light field camera; and, by taking the integer level centers of the microlenses as initial iterative values, performing iterative optimization on the microlens arrangement grid to finally obtain the microlens arrangement angle and interval and thus the sub-pixel level microlens center coordinate matrix;
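A compact sketch of the white-image calibration described above, written in Python with NumPy, OpenCV and SciPy. It assumes that "equalizing and normalizing" can be read as averaging followed by normalization and that vignetting removal is an element-wise division; function names, kernel sizes and thresholds are illustrative, not taken from the patent.

```python
import numpy as np
import cv2
from scipy.ndimage import maximum_filter

def vignetting_matrix(white_images):
    # Average several light field white images W(u, v) and normalize to [0, 1].
    mean_white = np.mean(np.stack(white_images, axis=0), axis=0)
    return mean_white / mean_white.max()

def remove_vignetting(raw_light_field, vignette, eps=1e-6):
    # Element-wise division of an original light field image by the
    # vignetting-removing matrix (assumed reading of the calibration step).
    return raw_light_field.astype(float) / (vignette + eps)

def integer_microlens_centers(white_image, lens_diameter_px):
    # Denoise the white image, then keep only local maxima: these integer-level
    # maxima seed the later iterative sub-pixel grid optimization.
    smoothed = cv2.GaussianBlur(white_image.astype(np.float32), (5, 5), 0)
    local_max = maximum_filter(smoothed, size=lens_diameter_px)
    mask = (smoothed == local_max) & (smoothed > smoothed.mean())
    return np.argwhere(mask)            # (row, col) integer center candidates
```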
  • step A2: shooting a plurality of round point calibration plates with known spatial three-dimensional positions by employing the light field camera and establishing a light field mathematical model from three-dimensional coordinate to disparity to finish dimension calibration of the light field camera; specifically, the three-dimensional coordinate of each round point on the round point calibration plate being known, and shooting the calibration plate by employing the light field camera to obtain a degree of dispersion of the round points of the calibration plate and a corresponding disparity value so as to further fit and calibrate a relationship between the disparity value and the three-dimensional coordinate; more particularly, it is necessary to assemble a displacement table and a dimension calibration plate for the linear array light field camera dimension calibration step: first, fixing the dimension calibration plate in the focal plane area of the linear array light field camera, continuously moving the calibration plate from the focal plane by fixed spatial distances and performing scanning and shooting, the spatial positions of the points on the calibration plate being known, thereby obtaining the spatial positions of the points on the calibration plate during the whole moving process; and the calibration round points forming dispersed circles on the light field image, performing processing to obtain the diameter of each dispersed circle to further calculate its disparity value and pixel coordinate, and performing fitting to obtain the relationship between the three-dimensional coordinate of the light field camera in space, the pixel coordinate and the disparity value according to the linear array light field camera dimension calibration model;
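The dimension calibration step ends with fitting a model between disparity and three-dimensional position. The patent does not give the model's form, so the sketch below simply fits a polynomial with NumPy as a stand-in; the degree and variable names are assumptions.

```python
import numpy as np

def fit_dimension_calibration(dot_disparities, dot_depths, degree=3):
    # dot_disparities: disparity values measured for calibration-plate dots;
    # dot_depths: the known depths of those dots set by the displacement table.
    # Returns polynomial coefficients mapping disparity -> depth.
    return np.polyfit(np.asarray(dot_disparities, dtype=float),
                      np.asarray(dot_depths, dtype=float), degree)
```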
  • step A3: irradiating the to-be-detected object by the light source to acquire a shot image of the to-be-detected object under the light source; specifically, setting the angle of the light source so that the rays of the light source irradiate the to-be-detected object clearly, wherein it is ensured that the linear array light field camera can image the to-be-detected object under multi-spectrum rays;
  • the moving mechanism of the linear array camera moves a distance of 10 pixels each time, wherein there are overlapping parts between the spliced images;
  • the light field camera performs light field multi-view rendering after shooting the to-be-detected object to obtain the light field multi-view image and the light field disparity image, and converts the disparity image into the light field depth image via the previously calibrated conversion relationship between disparity and three-dimensional coordinate; more particularly, based on the original light field image, conventional light field rendering and depth estimation are performed: first, light field multi-view rendering is performed to obtain the light field multi-view image of the to-be-detected object; then further calculation is performed to obtain the light field disparity image, and the light field disparity image is converted into the light field depth image according to the light field camera dimension calibration result, wherein the light field depth image contains the depth information of the to-be-detected object as well, so that three-dimensional imaging can be performed on the to-be-detected object;
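Applying that calibration to a whole disparity image is then a per-pixel evaluation of the fitted model. The sketch below assumes the polynomial coefficients from the previous sketch; the commented lines name hypothetical rendering and disparity-estimation helpers that merely stand in for the multi-view rendering and disparity calculation described above.

```python
import numpy as np

def disparity_to_depth(disparity_image, calib_coeffs):
    # Evaluate the fitted disparity -> depth model for every pixel.
    return np.polyval(calib_coeffs, disparity_image.astype(float))

# Hypothetical per-scan flow (render_multiview and estimate_disparity are
# placeholders, not functions defined by the patent):
#   views = render_multiview(raw_light_field)
#   disparity = estimate_disparity(views)
#   depth = disparity_to_depth(disparity, calib_coeffs)
```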
  • the image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image; specifically, the distance that the moving mechanism drives the linear array light field camera to move each time is m pixels (if the resolution of the linear array light field camera is n*m), the spliced image generated after N movements is n*m*(N+1), and the quality of the spliced image depends on the precision of the transmission mechanism;
  • the to-be-detected object can be an object that is relatively large in area and relatively small in defect, and the method is particularly well suited to detecting such objects, for example: glass defect depth detection, screen defect depth detection, gold wire and solder ball detection for semiconductors, and the like.
  • the three-dimensional imaging system based on the light field camera can be implemented by executing the step flow of the three-dimensional imaging method based on the light field camera, i.e. the three-dimensional imaging method based on the light field camera can be understood as a preferred embodiment of the three-dimensional imaging system based on the light field camera.
  • the three-dimensional imaging system based on the light field camera including:
  • a shot image acquisition module for acquiring the plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images includes the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by a light field camera in a shooting position, and the shooting positions are distributed to form an area array; or each group of to-be-spliced images includes the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by a linear array light field camera in a shooting position, and the shooting positions are distributed to form a linear array parallel to the width direction of the image sensor of the linear array light field camera;
  • an image splicing module for splicing the plurality of groups of to-be-spliced images to obtain the spliced image, wherein the spliced image comprises the spliced light field depth image and the spliced light field multi-view image;
  • a three-dimensional acquisition module for obtaining the three-dimensional imaging information of the to-be-detected object according to the spliced image.
  • a resolution of the linear array light field camera is n*m, a moving interval of the linear array light field camera between adjacent shooting positions is m pixels, and a dimension of the spliced image generated after N movements of the linear array light field camera is n*m*(N+1).
  • the three-dimensional information acquisition module includes:
  • a center view image acquisition module for splicing to obtain a center view image
  • a center view light field depth image acquisition module for splicing to obtain a center view light field depth image
  • a point cloud information acquisition module for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
  • the present invention provides a light field camera scanning device, including a moving mechanism, a light source and a linear array light field camera, the linear array light field camera being mounted on the moving mechanism and the moving mechanism driving the light field camera to move to each shooting position to scan, wherein the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera; the light source irradiates rays to the to-be-detected object; and the linear array light field camera shoots a light field image of the to-be-detected object in each shooting position.
  • a computer readable storage medium storing a computer program
  • the steps of the three-dimensional imaging method based on the light field camera as defined in claim 1 are implemented when the computer program is executed by a processor.
  • the computer readable storage medium can be a read-only or read-write memory, such as a magnetic disk or an optical disc, in equipment such as a single-chip microcomputer, a DSP, a processing unit, a data center, a server, a PC, an intelligent terminal, a special machine or a light field camera.
  • a product defect detection production line includes the three-dimensional imaging system based on the light field camera, or the computer readable storage medium storing the computer program.
  • the three-dimensional measuring production line can be a glass defect depth detection production line, a screen defect depth detection production line, a detection production line for gold wire and solder balls of semiconductors and the like.
  • the linear array light field camera scanning device performs scanning by employing the light field camera equipped with a lens with a 4x magnification factor according to the size and height of the chip.
  • the light field camera is matched with a main lens whose aperture and focal length suit the microlenses, and scans and shoots the defocused soft light pure color calibration plates for calibration of the light field white image and the microlens centers; the light field camera scans and shoots the plurality of dimension calibration plates at different spatial positions for dimension calibration of the light field camera;
  • the linear array light field camera scanning device is matched with the light source to illuminate the chip so that it images well on the light field camera; multi-view rendering and depth calculation of the light field are performed on the light field image obtained each time to finally obtain the center view image (Fig. 5) of the corresponding chip and the corresponding light field depth image (Fig. 6); and finally, the point cloud information and dimension information (Fig. 7) of the chip are obtained.
  • the embodiment of the present invention provides a chip detection system based on the light field camera and the two-dimensional camera and aims to implement detection of various defects of the semiconductor chip, wherein the solder balls of the chip are detected by employing the two-dimensional camera and the gold wire of the chip is detected by employing the system for detecting the gold wire by means of the light field camera.
  • a chip detection system based on a light field camera and a two-dimensional camera, including the to-be-detected object, the light source, the light field camera and the two-dimensional camera.
  • the to-be-detected object is the chip
  • the specific detected object is the solder balls and gold wire on the chip
  • corresponding detection is solder ball and gold wire detection.
  • the light source is used for irradiating rays, such as annular light, to the to-be-detected object; the light field camera shoots and acquires the image of the to-be-detected object for reconstructing the three-dimensional shape of the gold wire of the chip and judging the defect of the gold wire; and the two-dimensional camera shoots the chip for detecting the defect of the solder balls of the chip.
  • rays such as annular light
  • the system includes the steps of: shooting the plurality of defocused soft light plates by using the light field camera with matched aperture for calibration of the light field white image and calibration of the microlens centers; performing light field camera dimension calibration and setting up the light source; shooting the semiconductor by using the light field camera and processing the image to obtain the multi-view and depth images so as to finally obtain three-dimensional information of the to-be-detected object for gold wire detection; and shooting the semiconductor chip by employing the two-dimensional camera to obtain high resolution two-dimensional information of the to-be-detected object for solder ball detection.
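How the two detection channels are combined into a single chip verdict is not spelled out in the text; the toy function below, with assumed defect-list inputs, only illustrates merging the solder ball results from the two-dimensional camera with the gold wire results from the light field camera.

```python
def chip_verdict(gold_wire_defects, solder_ball_defects):
    # gold_wire_defects: defect labels from the light field (3D) channel,
    # e.g. ["falling wire"]; solder_ball_defects: labels from the 2D channel,
    # e.g. ["small ball"]. Any defect in either channel fails the chip.
    defects = list(gold_wire_defects) + list(solder_ball_defects)
    return ("fail", defects) if defects else ("pass", [])

# Example: chip_verdict(["falling wire"], []) -> ("fail", ["falling wire"])
```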
  • a system for detecting the gold wire of the chip includes:
  • a gold wire positioning module for acquiring the detected image, with reference to Fig. 14, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment; in particular, the paired solder balls are two solder balls that are directly connected via a gold wire according to the design; in a preferred embodiment, the positions of the solder balls of the chip are obtained by a preset template, the solder balls are obtained by matching the detected image with the template, and further, preferably, the centers of the solder balls are obtained.
  • the connecting line between the centers of the paired solder balls is taken as the basis for feature extraction of the gold wire; preferably, in the gold wire positioning module, the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
  • a gold wire feature extraction module for drawing a midperpendicular of the straight line segment in the detected image, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, with reference to Fig. 8;
  • a gold wire defect analyzing module for obtaining the gold wire extension shape according to the plurality of gold wire feature points and analyzing the first type defect of the gold wire according to the gold wire extension shape;
  • the first type defect includes no wire, broken wire, upward foot and wire deviation;
  • in the gold wire defect analyzing module, the gold wire feature point is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height;
  • the second type defect includes the falling wire; more particularly, no wire means the wire is missing on the chip, broken wire means the gold wire is broken, upward foot, similar to broken wire, means breakage at the solder balls, and wire deviation means the distance between points on the gold wire and the connecting line of the two solder ball centers exceeds a certain threshold value.
  • the falling wire means the height of the wire is obviously smaller than that of a normal wire. Therefore, when the falling wire is calculated, the extracted feature points need to be converted into the 3D space by the dimension calibration algorithm to calculate the wire height and judge whether the wire has fallen.
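To make the two defect families concrete, here is a hedged NumPy sketch: a wire deviation check (maximum perpendicular distance of the feature points from the solder-ball center line) and a falling-wire check on calibrated wire heights. Thresholds and names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def max_wire_deviation(points_xy, ball_center_a, ball_center_b):
    # Largest perpendicular distance (pixels) from the gold wire feature points
    # to the straight line joining the two solder ball centers; comparing this
    # against a threshold flags the "wire deviation" defect.
    a = np.asarray(ball_center_a, dtype=float)
    b = np.asarray(ball_center_b, dtype=float)
    direction = (b - a) / np.linalg.norm(b - a)
    offsets = np.asarray(points_xy, dtype=float) - a
    perpendicular = offsets - np.outer(offsets @ direction, direction)
    return float(np.linalg.norm(perpendicular, axis=1).max())

def is_falling_wire(wire_heights_mm, min_loop_height_mm):
    # "Falling wire": the calibrated 3D wire height stays clearly below the
    # loop height expected of a normal wire.
    return float(np.max(wire_heights_mm)) < min_loop_height_mm
```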
  • the gold wire feature extraction module includes:
  • an initializing module for taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
  • a midperpendicular module for drawing a midperpendicular for the currently selected line segment, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, and triggering execution of a selecting step;
  • a selecting module for dividing the currently selected line segment by the current midperpendicular into two updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
  • the centers of the two solder balls of a solder ball pair are connected to form a straight line segment, the midperpendicular of the segment is then drawn, and a point meeting a certain gray value is searched for on the midperpendicular. If the distance from the point to the center point of the segment meets the requirement, the point is considered to be a point on the gold wire. Iteration is performed in this way to obtain many points on the gold wire, from which the gold wire features are extracted.
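The recursive bisection described above can be sketched as follows. This Python/NumPy snippet is one possible reading of the initializing / midperpendicular / selecting steps: the gray threshold, the search range along the midperpendicular and the stopping length are assumed parameters, and the segment is always split at its midpoint as stated.

```python
import numpy as np

def extract_wire_points(image, p0, p1, gray_thresh, max_offset, min_len=4.0):
    # p0, p1: (x, y) solder ball centers (or sub-segment endpoints).
    # Search along the midperpendicular of the segment for a pixel at least as
    # bright as gray_thresh and within max_offset pixels of the midpoint; then
    # split the segment at its midpoint and recurse on both halves.
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    if np.linalg.norm(p1 - p0) < min_len:
        return []
    mid = (p0 + p1) / 2.0
    d = p1 - p0
    normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # midperpendicular direction
    found = []
    for t in np.arange(-max_offset, max_offset + 1.0):
        x, y = np.round(mid + t * normal).astype(int)
        if 0 <= y < image.shape[0] and 0 <= x < image.shape[1] \
                and image[y, x] >= gray_thresh:
            found.append((x, y))                           # gold wire feature point
            break
    return (found
            + extract_wire_points(image, p0, mid, gray_thresh, max_offset, min_len)
            + extract_wire_points(image, mid, p1, gray_thresh, max_offset, min_len))
```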
  • the light field camera calibration step includes the steps:
  • step A1: selecting an optical lens with a proper focal length and magnification factor according to the area size and measuring depth range of the to-be-detected object; adjusting the aperture of the main lens to be matched with the aperture of the microlenses of the light field camera, i.e.
  • matching the aperture of the main lens with the aperture of the microlenses; specifically, scanning the images of the defocused soft light pure color calibration plates by the light field camera, wherein the microlenses in the image are just or approximately in a tangential state, and wherein the light field white image (also called the light field camera white image) means a pure white background image shot by the light field camera, on which the shape of the microlens array is reflected particularly clearly; thus, the aperture can be adjusted based on the image to ensure that the images of the microlenses are just tangential; after adjustment, shooting the plurality of pure color background plates with relatively uniform light intensity in the defocused position of the light field camera, i.e.
  • the defocused soft light pure color calibration plates; equalizing and normalizing the plurality of light field white images to obtain the vignetting-removing matrix, wherein all subsequently shot original light field images need to be divided element-wise by the vignetting-removing matrix to remove vignetting, the original light field images being light field images that have not been processed by the light field multi-view image algorithm; u represents the horizontal coordinate value of a pixel under the image coordinate system and v represents the longitudinal coordinate value of the pixel under the image coordinate system; after finishing the light field white image calibration step, processing the light field white image with a filter to remove noise and performing non-maxima suppression on the filtered light field white image; further, taking the local maxima of the processed image, the maxima being exactly the integer level centers of the microlenses of the light field camera; and, by taking the integer level centers of the microlenses as initial iterative values, performing iterative optimization on the microlens arrangement grid to finally obtain the microlens arrangement angle and interval and thus the sub-pixel level microlens center coordinate matrix;
  • step A2: it is necessary to assemble a displacement table and a dimension calibration plate for the light field camera dimension calibration step: first, fixing the dimension calibration plate in the focal plane area of the linear array light field camera, continuously moving the calibration plate from the focal plane by fixed spatial distances and performing shooting, the spatial positions of the points on the calibration plate being known, thereby obtaining the spatial positions of the points on the calibration plate during the whole moving process; and the calibration round points forming dispersed circles on the light field image, performing processing to obtain the diameter of each dispersed circle to further calculate its disparity value and pixel coordinate, and performing fitting to obtain the relationship between the three-dimensional coordinate of the light field camera in space, the pixel coordinate and the disparity value according to the linear array light field camera dimension calibration model;
  • step A3: as shown in Fig. 8, irradiating the to-be-detected object from different angles by the light source, and shooting the to-be-detected object by the camera;
  • step A4: based on the original light field image, performing conventional light field rendering and depth estimation: first, performing light field multi-view rendering to obtain the light field multi-view image of the to-be-detected object; then performing further calculation to obtain the light field disparity image, and converting the light field disparity image into the light field depth image according to the light field camera dimension calibration result, wherein the light field depth image contains the depth information of the to-be-detected object as well, so that three-dimensional imaging can be performed on the to-be-detected object;
  • step A5: obtaining the defect type of the gold wire via the gold wire defect detection algorithm after obtaining the three-dimensional image of the chip;
  • step A6: obtaining the two-dimensional picture by the two-dimensional camera and obtaining the defect type of the solder balls via the solder ball defect detection algorithm.
  • the former two embodiments are embodiments for detecting defects of the gold wire by the light field camera and the latter two embodiments are embodiments for detecting defects of the solder balls by the two-dimensional camera.
  • Fig. 9a is a center view diagram under the light field camera
  • Fig. 9b is a depth diagram
  • Fig. 9c is a point cloud screenshot
  • Fig. 9d is a three-dimensional curve graph of the gold wire in a marked area. It can be judged from the three-dimensional curve graph that the defect type of the gold wire is the falling wire.
  • Fig. 10a is a center view diagram under the light field camera
  • Fig. 10b is a depth diagram
  • Fig. 10c is a point cloud screenshot
  • Fig. 10d is a three-dimensional curve graph of the gold wire in a marked area. It can be judged from the three-dimensional curve graph that the defect type of the gold wire is the broken wire.
  • Fig. 11a is a picture shot by the two-dimensional camera and Fig. 11b is a binary image after image processing. It can be judged via the image that the defect type in a target area is a defective ball.
  • Fig. 12a is a picture shot by the two-dimensional camera and Fig. 12b is an image after image processing.
  • the radius of the ball can be calculated via the algorithm. If the radius exceeds a certain threshold value, the ball is judged as a big ball, and if the radius is smaller than a certain threshold value, the ball is judged as a small ball.
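A hedged OpenCV sketch of that radius test (OpenCV 4.x return convention assumed; the pixel-radius thresholds are illustrative, not values from the patent).

```python
import cv2
import numpy as np

def classify_solder_ball(binary_image, r_min_px, r_max_px):
    # binary_image: thresholded 2D camera image of one solder ball region.
    contours, _ = cv2.findContours(binary_image.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "defective ball (no ball found)"
    largest = max(contours, key=cv2.contourArea)
    _, radius = cv2.minEnclosingCircle(largest)   # radius in pixels
    if radius > r_max_px:
        return "big ball"
    if radius < r_min_px:
        return "small ball"
    return "normal ball"
```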
  • the computer readable storage medium storing the computer program, wherein the steps of the defect detection method based on the light field camera are implemented when the computer program is executed by a processing unit.
  • the computer readable storage medium can be a read-only or read-write memory, such as a magnetic disk or an optical disc, in equipment such as a single-chip microcomputer, a DSP, a processing unit, a data center, a server, a PC, an intelligent terminal, a special machine or a light field camera.
  • a product defect detection production line includes the system for detecting the gold wire of the chip or includes the chip detection system based on the light field camera and the two-dimensional camera or includes the computer readable storage medium storing the computer program.
  • a system for detecting the gold wire of the chip includes:
  • a gold wire positioning step: acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
  • a gold wire feature extraction step: in the detected image, drawing a midperpendicular of the straight line segment, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points;
  • a gold wire defect analyzing step: obtaining a gold wire extension shape according to a plurality of gold wire feature points and analyzing a first type defect of the gold wire according to the gold wire extension shape.
  • the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
  • the feature point of the gold wire is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height.
  • the gold wire feature extraction step includes:
  • an initializing step: taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
  • a midperpendicular step: drawing a midperpendicular for the currently selected line segment, considering points meeting the requirements among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, and triggering execution of a selecting step;
  • a selecting step: dividing the currently selected line segment by the current midperpendicular into two updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
  • a chip detection method based on a light field camera and a two-dimensional camera including:
  • the system, device and modules thereof provided by the present invention can be implemented purely in hardware, in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like, by logically programming the method steps. Therefore, the system, device and modules thereof provided by the present invention can be considered a hardware part, and the modules included therein for implementing various programs can also be considered structures within the hardware part; modules for implementing various functions can likewise be considered both software programs that implement the method and structures within the hardware part.

Abstract

A three-dimensional imaging method and system based on a light field camera, and a three-dimensional imaging measuring production line, including a moving mechanism (101), a light source (201, 202, 203), a light field camera or a linear array light field camera (100), and a chip detection system and detection production line based on the light field camera and a two-dimensional camera. Images are obtained separately by multiple scans of the light field camera or the linear array light field camera (100) and are then spliced to obtain a high resolution light field multi-view image, such that a better three-dimensional imaging result is obtained, thereby improving the measuring precision. Defects such as no wire, broken wire, upward foot, wire deviation and the like can be detected by using a solder ball of a chip shot and detected by the two-dimensional camera in combination with a gold wire of the chip shot and detected by the light field camera. The falling-wire defect of the gold wire of the chip can be detected by analyzing the height of the wire.

Description

THREE-DIMENSIONAL IMAGING METHOD AND SYSTEM BASED ON LIGHT FIELD CAMERA AND THREE-DIMENSIONAL IMAGING MEASURING PRODUCTION LINE
TECHNICAL FIELD
The present invention relates to the technical field of three-dimensional photoelectric measurement for conventional objects, in particular to a three-dimensional imaging system based on a linear array light field camera scanning device and a chip detection system and detection production line based on the light field camera and a two-dimensional camera.
BACKGROUND
The light field camera is a novel camera that has emerged in recent years. A microlens array is additionally arranged between the sensor and the main lens of a conventional camera to record the propagation direction of rays, forming a unique light field image encoded by the lens array. The light field image is rendered to further obtain three-dimensional information.
The technical problem of how to calibrate the light field camera has been solved in the prior art.
Patent CN106303175A discloses a multi-view and single-light field camera-based virtual reality three-dimensional data acquisition method, including the steps of: S101, acquiring a microlens calibration image of the single light field camera; S102, positioning a center position of the microlens by using the calibration image; S103, acquiring a light field picture; S104, selecting the pixel with the same relative position under each microlens in the light field image; S105, solving a pixel value embedded into a matrix thereof by taking the selected pixel as a sampling point to further form an image of a view; and S106, selecting pixels in different positions and repeating the steps S103 to S105 till all pixels are selected completely.
Patent CN111351446A discloses a light field camera calibration method for three-dimensional topography measurement. The method includes the steps of: calibrating calibration plates in different positions in a space and corresponding light field original images to acquire a corresponding relationship between a light field disparity image and three-dimensional space information; shooting a plurality of defocused soft light pure color calibration plates by using the light field camera to obtain a light field white image; calculating a vignetting-removing matrix according to the light field white image; performing iterative computation to obtain a sub-pixel level center coordinate matrix of the microlens of the light field camera; shooting a plurality of round point calibration plates in a known three-dimensional space and removing vignetting by the light field camera; and establishing a light field mathematical model between a three-dimensional coordinate and disparity, and performing fitting calculation to obtain a center coordinate of the round point and a disparity value corresponding to round point calibration according to a three-dimensional imaging rule of the light field and three-dimensional space information of the round point. The method can be used for converting the light field disparity image into the three-dimensional space information without lens distortion.
Patent CN111340888A discloses a light field camera calibration method and system without a white image. The method comprises the steps of: acquiring a light field original image of an electronic checkerboard shot by the light field camera and then calibrating a microlens array according to the light field original image to generate a calibration result of the microlens array and a center point lattice of the microlens array; and extracting a linear feature of the light field original image by adopting a template matching method and taking the linear feature as calibration data for calibrating internal and external parameters of a projection model of the light field camera. The method is not dependent on the white image and can be used for obtaining the center point lattice, the array gesture of the microlenses and the internal and external parameters of the projection model of the camera only by processing an original light field of the checkerboard, so that the method has the characteristics of high calibration precision of the light field camera and wide application range.
However, resolutions in the x and y directions are sacrificed during light field camera imaging, such that the resolution of the light field multi-view image obtained even by a light field camera with relatively high resolution is still not high.
In addition, the mainstream detection means of current machine vision in industry still depends on the 2D industrial camera, that is, features of the to-be-detected object are extracted from a gray-scale map for measurement in the X-Y plane.
When height measurement or Z-direction information is needed, for example under the circumstances of measuring height, depth, thickness, flatness, volume, wear and the like, 2D industrial vision is of little help. Furthermore, when the features of the to-be-detected object cannot be extracted accurately due to poor gray-level image contrast, extracting and measuring the features by height segmentation can often be considered; at this time, 3D industrial vision technology becomes an important detection means for solving machine vision problems in industry.
The light field camera is a novel 3D camera that has emerged in recent years. A microlens array is arranged between the sensor and the main lens of a conventional camera (2D camera) so as to additionally record the propagation direction of rays, forming a unique light field image encoded by the lens array. The light field image is rendered to further obtain three-dimensional information.
However, resolutions in the X and Y directions are sacrificed during light field camera imaging, such that the resolution of the light field multi-view image obtained even by a light field camera with relatively high resolution is still not high.
However, when gold wire detection is dealt with in the prior art, the analysis cannot be performed well enough to obtain the gold wire defects.
SUMMARY
In order to overcome defects in the prior art, the present invention aims to provide a three-dimensional imaging method and system based on a light field camera, and a three-dimensional imaging measuring production line.
The three-dimensional imaging method based on the light field camera, including:
a shot image acquisition step: acquiring a plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images comprises a light field depth image and a light field multi-view image obtained by shooting a to-be-detected object by the linear array light field camera in a shooting position, and the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera;
an image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image;
a three-dimensional acquisition step: obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
Preferably, a resolution of the linear array light field camera is n*m, and a moving interval of the linear array light field camera between adjacent shooting positions is m pixels; and a dimension of a spliced image generated after N times of movement of the linear array light field camera is n*m* (N+1) .
Preferably, the three-dimensional information acquisition step includes:
a center view image acquisition step for splicing to obtain a center view image;
a center view light field depth image acquisition step for splicing to obtain a center view light field depth image; and
a point cloud information acquisition step for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
The three-dimensional imaging system based on the light field camera, including:
a shot image acquisition module for acquiring the plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images comprises the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by a linear array light field camera in a shooting position, and the shooting positions are distributed to form the linear array parallel to a width direction of an image sensor of the linear array light field camera;
an image splicing module for splicing the plurality of groups of to-be-spliced images to obtain the spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image; and
a three-dimensional information acquisition module for obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
Preferably, a resolution of the linear array light field camera is n*m, and a moving interval of the linear array light field camera between adjacent shooting positions is m pixels; and a dimension of a spliced image generated after N times of movement of the linear array light field camera is n*m* (N+1) .
Preferably, the three-dimensional information acquisition module includes:
a center view image acquisition module for splicing to obtain a center view image;
a center view light field depth image acquisition module for splicing to obtain a center view light field depth image;
a point cloud information acquisition module for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
A computer readable storage medium storing a computer program, wherein the steps of the three-dimensional imaging method based on the light field camera as defined in claim 1 are implemented when the computer program is executed by a processor.
A light field camera scanning device, including:
a moving mechanism, a light source and a linear array light field camera, the linear array light field camera being mounted on the moving mechanism and the moving mechanism driving the light field camera to move to each shooting position to scan, wherein the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera; the light source irradiates rays to the to-be-detected object; and the linear array light field camera shoots a light field image of the to-be-detected object in each shooting position.
Preferably, the light field camera scanning device includes a controller.
The controller comprises the computer readable storage medium storing the computer program as defined in claim 7.
A three-dimensional imaging measuring production line, including the three-dimensional imaging system;
or including the computer readable storage medium storing the computer program;
or including the light field camera scanning device.
A method for detecting a gold wire of a chip, wherein the spliced image obtained by the three-dimensional imaging method based on the light field camera serves as a detected image, the method including:
a gold wire positioning step: acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
a gold wire feature extraction step: in the detected image, drawing a midperpendicular of the straight line segment, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points; and
a gold wire defect analyzing step: obtaining a gold wire extension shape according to a plurality of gold wire feature points and analyzing a first type defect of the gold wire according to the gold wire extension shape.
Preferably, in the gold wire positioning step, the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
and in the gold wire defect analyzing step, the gold wire feature point is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height.
Preferably, the gold wire feature extraction step includes:
an initializing step: taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
a midperpendicular step: drawing a midperpendicular for the currently selected line segment, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points; and triggering execution of a selecting step;
a selecting step: dividing the currently selected line segment by the current midperpendicular into two line segments that serve as the updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
and repeatedly triggering execution of the midperpendicular step and the selecting step till all the currently selected line segments are smaller than a length threshold value.
A chip detection method based on a light field camera and a two-dimensional camera, including:
detecting solder balls of a chip by using the two-dimensional camera; and
detecting a gold wire of the chip by using the light field camera by means of the method for detecting the gold wire of the chip.
A system for detecting a gold wire of a chip, wherein the spliced image obtained by the three-dimensional imaging method based on the light field camera serves as a detected image, the system including:
a gold wire positioning module for acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
a gold wire feature extraction module for drawing a midperpendicular of the straight line segment in the detected image, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points; and
a gold wire defect analyzing module for obtaining the gold wire extension shape according to the plurality of gold wire feature points and analyzing the first type defect of the gold wire according to the gold wire extension shape.
Preferably, in the gold wire positioning module, the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
and in the gold wire defect analyzing module, the gold wire feature point is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height.
Preferably, the gold wire feature extraction module includes:
an initializing module for taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
a midperpendicular module for drawing a midperpendicular for the currently selected line segment, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, and triggering execution of a selecting step;
and a selecting module for dividing the currently selected line segment by the current midperpendicular into two line segments that serve as the updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
and execution of the midperpendicular module and the selecting module is repeatedly triggered till all the currently selected line segments are smaller than a length threshold value.
A chip detection system based on a light field camera and a two-dimensional camera, including:
detecting the solder balls of the chip by using the two-dimensional camera; and
detecting a gold wire of the chip by using the light field camera by means of the system for detecting the gold wire of the chip.
The computer readable storage medium storing the computer program, wherein the steps of the defect detection method based on the light field camera are implemented when the computer program is executed by a processing unit.
A product defect detection production line according to the present invention includes the system for detecting the gold wire of the chip or includes the chip detection  system based on the light field camera and the two-dimensional camera or includes the computer readable storage medium storing the computer program.
Compared with the prior art, the present invention has the following beneficial effects:
1. According to the present invention, scanning by the light field camera is performed to obtain the high resolution light field multi-view image, such that a better three-dimensional imaging result is obtained, thereby improving the measuring precision. High resolution imaging on three-dimensional shape of the to-be-detected object is achieved.
2. Compared with a common light field camera, the image of the light field scanned by the linear array light field camera has a higher resolution, and compared with the image obtained by the common light field camera, the obtained light field multi-view image further has the higher resolution, such that the defect of sacrificing resolutions in x and y directions of the light field camera is solved well. A more precise three-dimensional image can be calculated via the light field multi-view image with high resolution.
3. By combining the two-dimensional camera with the light field camera, a successive detection mode on the solder balls and the gold wire of the chip is achieved, thereby obtaining an integral solution for chip detection.
4. Defects such as no wire, broken wire, upward foot, wire deviation and the like can be detected by using the solder balls of a chip shot and detected by the two-dimensional camera in combination with the gold wire of the chip shot and detected by the light field camera.
5. The falling defect of the gold wire of the chip can be detected by analyzing the height of the wire.
BRIEF DESCRIPTION OF THE DRAWINGS
By reading the detailed description of the following non-restrictive embodiments with reference to the drawings, other features, purposes and advantages of the present invention will become more obvious.
Fig. 1 is a flow diagram of a three-dimensional imaging process according to one of embodiments of the present invention.
Fig. 2 is a schematic diagram of scanning and shooting a detected medium under light source irradiation by the linear array light field camera according to the embodiment of the present invention.
Fig. 3 is a part of an area of a light field image of the chip scanned at one time by the linear array light field camera in the embodiment of the present invention, the light field image resolution obtained by scanning at one time being 7912*10.
Fig. 4 is a part of an area of a light field image spliced after multiple times of scanning by the linear array light field camera, resolution of the light field image obtained by multiple times of scanning being 7912*5430.
Fig. 5 is a center view image of the light field of the chip.
Fig. 6 is a light field depth image corresponding to the Fig. 5.
Fig. 7 is a three-dimensional point cloud image corresponding to the Fig. 5;
Fig. 8 is a structural schematic diagram of the present invention.
Fig. 9 is a picture where the defect type of the gold wire is a falling wire. Fig. 9a is a center view diagram under the light field camera, Fig. 9b is a depth diagram, Fig. 9c is a point cloud screenshot and Fig. 9d is a three-dimensional curve graph of the gold wire in a marked area.
Fig. 10 is a picture where the defect type of the gold wire is a broken wire. Fig. 10a is a center view diagram under the light field camera, Fig. 10b is a depth diagram, Fig. 10c is a point cloud screenshot and Fig. 10d is a three-dimensional curve graph of the gold wire in a marked area.
Fig. 11 is a picture where the defect type of the solder ball is a defected ball. Fig. 11a is a picture shot by the two-dimensional camera and Fig. 11b is a binary image after image processing.
Fig. 12 is a picture where the defect type of the solder ball is a small ball. Fig. 12a is a picture shot by the two-dimensional camera and Fig. 12b is an image after image processing.
Fig. 13 is a step flow diagram of the method of the present invention.
Fig. 14 is a detected image.
Fig. 15 is a picture of each gold wire feature point in the detected image.
In the drawings:
Linear array light field camera 100
Moving mechanism 101 of the linear array light field camera
First light source 201 (red light)
Second light source 202 (green light)
Third light source 203 (blue light)
To-be-detected object 300
Chip 1000
Annular light 2000
DETAILED DESCRIPTION
The present invention is described in detail with reference to the particular examples below. The following particular examples will be conducive to further understanding of the present invention by those skilled in the art, but are not intended to limit the present invention in any form. It should be noted that variations and improvements can still be made by those skilled in the technical field without departing from the concept of the present invention. All of these fall within the scope of protection of the present invention.
The three-dimensional imaging method based on the light field camera, including:
a light field camera calibration step: calibrating the light field camera in the linear array light field camera, wherein the image sensor of the linear array light field camera and the image sensor of an area array light field camera are different in type: the image sensor of the linear array light field camera is an image sensor whose width is the diameter of a microlens in a single row of microlenses and whose length is the length of the single row of microlenses (the sum of the diameters of the microlenses in the single row), whereas the sensor of the area array light field camera is the same as the image sensor of a common area array 2D camera;
a shooting step: acquiring each group of to-be-spliced images in each shooting position of an area array by the light field camera or acquiring each group of to-be-spliced images in each shooting position of a linear array by the linear array light field camera,
wherein each group of to-be-spliced images includes the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by the linear array light field camera in the shooting position, and the shooting positions are distributed to form the linear array parallel to a width direction of the image sensor of the linear array light field camera;
an image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image;
a three-dimensional acquisition step: obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
The light field camera calibration step, including:
step A1: adjusting a focal length and an aperture of the linear array light field camera, and scanning and shooting a plurality of defocused soft light pure color calibration plates to acquire a light field white image; and calculating a vignetting-removing matrix and a sub-pixel level center coordinate matrix of the microlens of the light field camera according to the light field white image; the step A1 specifically includes: adjusting the main lens and the aperture of the linear array light field camera so that the microlens images in the white image of the original light field of the light field camera are just or approximately tangential; enabling the linear array light field camera to shoot images of the plurality of defocused soft light pure color calibration plates, which are pure color background plates with uniform light intensity placed at a defocused position of the light field camera, wherein the vignetting-removing matrix is a matrix
obtained by averaging (equalizing) and normalizing the white images W (u, v) of the plurality of original light fields; the sub-pixel level center coordinate matrix of the microlens of the light field camera is the sub-pixel level microlens center coordinate obtained by iterative optimization on the white images of the light field camera
after local maximum processing on each microlens, wherein u represents a coordinate of the light field white image in the x direction and v represents a coordinate of the light field white image in the y direction; more particularly, selecting an optical lens with a proper focal length and magnification factor as the main lens according to the area size and measuring depth range of the to-be-detected object; adjusting the aperture of the main lens to be matched with the aperture of the microlens, i.e. matching the f-number of the main lens with that of the microlens; specifically, scanning and shooting the images of the defocused soft light pure color calibration plates by the linear array light field camera so that the microlens images in the picture are just or approximately in a tangential state, wherein the light field white image means a pure white background image shot by the light field camera on which the shape of the microlens array is reflected particularly obviously; thus, the aperture can be adjusted based on this image to ensure that the images of the microlenses are just tangential; after adjustment, shooting the plurality of pure color background plates with relatively uniform light intensity at the defocused position of the linear array light field camera, i.e. the defocused soft light pure color calibration plates; and averaging and normalizing the plurality of light field white images to obtain the vignetting-removing matrix
wherein all the subsequently shot original light field images need to be divided element by element by the vignetting-removing matrix so as to remove vignetting and thus calibrate the light field white image, the original light field images meaning the light field images that have not been processed by the light field multi-view image algorithm; after finishing the light field white image calibration step, the light field white image is processed with a filter to remove its noise, and non-maxima suppression is performed on the filtered light field white image; further, the local maximum values are taken from the processed image, and these maxima are just the integer-level centers of the microlenses of the light field camera; by taking the integer-level centers of the microlenses as initial iterative values, iterative optimization is performed on the microlens arrangement grid to finally obtain the microlens arrangement angle and interval and thus the sub-pixel level microlens centers.
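As a minimal illustration of this white image processing (not the claimed implementation; the NumPy/SciPy calls, the lens pitch parameter and the brightness threshold are assumptions introduced only for the sketch):

```python
import numpy as np
from scipy import ndimage

def vignetting_matrix(white_images):
    """Average several defocused white images and normalize to [0, 1]."""
    mean_white = np.mean(np.stack(white_images, axis=0), axis=0)
    return mean_white / mean_white.max()

def remove_vignetting(raw_light_field, vign):
    """Element-wise division of a raw light field image by the vignetting-removing matrix."""
    return raw_light_field / np.clip(vign, 1e-6, None)

def integer_microlens_centers(white, lens_pitch_px):
    """Denoise the white image, then keep pixels that are the local maximum
    inside a window of roughly one microlens pitch (non-maxima suppression)."""
    smooth = ndimage.gaussian_filter(white, sigma=2)
    is_peak = ndimage.maximum_filter(smooth, size=lens_pitch_px) == smooth
    ys, xs = np.nonzero(is_peak & (smooth > 0.2 * smooth.max()))
    return np.stack([xs, ys], axis=1)   # integer-level (u, v) centers
```

The integer-level centers returned here would then seed the iterative grid fit that yields the sub-pixel centers together with the arrangement angle and interval.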
step A2: shooting a plurality of round point calibration plates with known spatial three-dimensional positions by employing the light field camera and establishing a light field mathematical model from three-dimensional coordinate to disparity to finish dimension calibration of the light field camera; specifically, the three-dimensional coordinate of each round point on the round point calibration plate being known, the calibration plate is shot by the light field camera to obtain the degree of dispersion of the round points and the corresponding disparity value, so as to further fit and calibrate the relationship between the disparity value and the three-dimensional coordinate; more particularly, a displacement table and a dimension calibration plate need to be assembled for the linear array light field camera dimension calibration step: first, the dimension calibration plate is fixed in the focal plane area of the linear array light field camera, the calibration plate is continuously moved from the focal plane over a fixed spatial distance and is scanned and shot, and the spatial positions of the points on the calibration plate being known, the spatial positions of those points are obtained over the whole moving process; the round calibration points form a dispersed circle on the light field image, which is processed to obtain the diameter of the dispersed circle and further the disparity value and the pixel coordinate of the dispersed circle, and fitting is performed to obtain the relationship between the three-dimensional coordinate in space, the pixel coordinate of the light field camera and the disparity value according to the linear array light field camera dimension calibration model.
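The disparity-to-depth part of this dimension calibration can be sketched as a simple curve fit; the sample values below and the quadratic form are purely hypothetical, since the text only states that a model between three-dimensional coordinates and disparity is fitted:

```python
import numpy as np

# Hypothetical calibration samples: known plate depth z (mm) at each position of
# the displacement table versus the disparity measured from the dispersed circles.
z_known   = np.array([-0.40, -0.20, 0.00, 0.20, 0.40])
disparity = np.array([-1.90, -1.00, 0.00, 1.10, 2.00])

# Fit z = f(disparity); the quadratic degree is an arbitrary illustrative choice.
depth_of_disparity = np.poly1d(np.polyfit(disparity, z_known, deg=2))

print(depth_of_disparity(1.5))  # depth estimate for a newly measured disparity
```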
step A3: irradiating the to-be-detected object by the light source to acquire a shot image of the to-be-detected object under the light source; specifically, the angle of the light source is set such that rays of the light source irradiate the to-be-detected object clearly, and it is ensured that the linear array light field camera can image the to-be-detected object under multi-spectrum rays;
in the shooting step, taking the designed linear array light field camera size of 7912*10 as an example: in order to ensure that the light field pictures scanned at a single time can be spliced well in the scanning and shooting process, the moving mechanism of the linear array camera moves a distance of 10 pixels each time, and there is an overlapping part between the spliced images;
in the shot image acquisition step, in particular, the light field camera performs light field multi-view rendering after shooting the to-be-detected object to obtain the light field multi-view image and the light field disparity image, and the disparity image is converted into the light field depth image via the previously calibrated conversion relationship between disparity and three-dimensional coordinate; more particularly, based on the original light field image, conventional light field rendering and depth estimation are performed: first, light field multi-view rendering is performed to obtain the light field multi-view image of the to-be-detected object; then further calculation is performed to obtain the light field disparity image, and the light field disparity image is converted into the light field depth image according to the light field camera dimension calibration result, wherein the light field depth image contains the depth information of the to-be-detected object as well, so that three-dimensional imaging can be performed on the to-be-detected object.
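A rough sketch of that last conversion, reusing the hypothetical depth_of_disparity mapping fitted in the calibration sketch above (the invalid-pixel handling is an assumption):

```python
import numpy as np

def disparity_to_depth(disparity_img, depth_of_disparity, invalid=np.nan):
    """Apply the calibrated disparity-to-depth mapping pixel by pixel;
    pixels whose disparity could not be estimated stay invalid."""
    depth = depth_of_disparity(np.nan_to_num(disparity_img))
    return np.where(np.isfinite(disparity_img), depth, invalid)
```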
in the image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image; specifically, the distance that the moving mechanism drives the linear array light field camera to move each time is m pixels (if the resolution of the linear array light field camera is n*m), the spliced image generated after N movements is n*m* (N+1), and the quality of the spliced image depends on the precision of the transmission mechanism.
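A minimal splicing sketch for the linear array case (simple concatenation along the scan direction; compensation for the overlap and for the transmission mechanism's precision is omitted):

```python
import numpy as np

def splice_strips(strips):
    """Concatenate the N+1 strips (each n x m pixels) shot at successive
    positions of the moving mechanism into one n x m*(N+1) image."""
    assert all(s.shape == strips[0].shape for s in strips)
    return np.concatenate(strips, axis=1)

# Example matching the 7912*10 linear array camera of the embodiment:
# 543 strips of 10 pixels give the 7912*5430 spliced image of Fig. 4.
strips = [np.zeros((7912, 10), dtype=np.uint16) for _ in range(543)]
print(splice_strips(strips).shape)  # (7912, 5430)
```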
in the three-dimensional information acquisition step, in particular, three-dimensional mapping is performed on the light field depth image based on the dimension calibration result to acquire the three-dimensional point cloud information of the to-be-detected object; in particular, the to-be-detected object can be an object that is relatively large in area and relatively small in defect, and the method is better applied to detecting this type of objects: for example, glass defect depth detection, screen defect depth detection, gold wire and solder ball detection for semiconductors, and the like.
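The three-dimensional mapping itself can be sketched as below; the metric pixel sizes would come out of the dimension calibration and are placeholders here:

```python
import numpy as np

def depth_to_point_cloud(depth_img, pixel_size_x_mm, pixel_size_y_mm):
    """Turn every valid pixel (u, v) with depth z into an (x, y, z) point."""
    v, u = np.indices(depth_img.shape)
    pts = np.stack([u * pixel_size_x_mm, v * pixel_size_y_mm, depth_img], axis=-1)
    return pts.reshape(-1, 3)[np.isfinite(depth_img).ravel()]
```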
The three-dimensional imaging system based on the light field camera can be implemented by executing the step flow of the three-dimensional imaging method based on the light field camera, i.e. the three-dimensional imaging method based on the light field camera is a preferred embodiment of the three-dimensional imaging system based on the light field camera.
The three-dimensional imaging system based on the light field camera, including:
a shot image acquisition module for acquiring the plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images includes the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by a light field camera in a shooting position, and the shooting positions are distributed to form an area array; or each group of to-be-spliced images includes the light field depth image and the light field multi-view image obtained by shooting a to-be-detected object by a linear array light field camera in a shooting position, and the shooting positions are distributed to form a linear array parallel to the width direction of the image sensor of the linear array light field camera;
an image splicing module for splicing the plurality of groups of to-be-spliced images to obtain the spliced image, wherein the spliced image comprises the spliced light field depth image and the spliced light field multi-view image; and
a three-dimensional acquisition module for obtaining the three-dimensional imaging information of the to-be-detected object according to the spliced image.
Preferably, a resolution of the linear array light field camera is n*m, and a moving interval of the linear array light field camera between adjacent shooting positions is m pixels; and a dimension of a spliced image generated after N times of movement of the linear array light field camera is n*m* (N+1) .
Preferably, the three-dimensional information acquisition module includes:
a center view image acquisition module for splicing to obtain a center view image;
a center view light field depth image acquisition module for splicing to obtain a center view light field depth image;
a point cloud information acquisition module for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
The present invention provides a light field camera scanning device, including a moving mechanism, a light source and a linear array light field camera, the linear array light field camera being mounted on the moving mechanism and the moving mechanism driving the light field camera to move to each shooting position to scan, wherein the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera; the light source irradiates rays to  the to-be-detected object; and the linear array light field camera shoots a light field image of the to-be-detected objected in each shooting position.
A computer readable storage medium storing a computer program, wherein the steps of the three-dimensional imaging method based on the light field camera as defined in claim 1 are implemented when the computer program is executed by a processor. The computer readable storage medium can be a read-only or read-write memory such as a disc or an optical disc in equipment such as a singlechip microcomputer, a DSP, a processing unit, a data center, a server, a PC, an intelligent terminal, a special machine and a light field camera.
A product defect detection production line according to the present invention includes the three-dimensional imaging system based on the light field camera, or the computer readable storage medium storing the computer program. The three-dimensional measuring production line can be a glass defect depth detection production line, a screen defect depth detection production line, a detection production line for gold wire and solder balls of semiconductors and the like.
More detailed description will be made on the present invention in combination with specific application scenes, the application scenes including a three-dimensional imaging embodiment of the chip.
The specific process of the embodiment is as follows:
The linear array light field camera scanning device performs scanning by employing the light field camera matched with a lens with a four-times magnification factor according to the size and height of the chip. The light field camera is matched with microlenses of proper aperture and focal length to scan and shoot the defocused soft light pure color calibration plates for calibration of the light field white image and the microlens centers; the light field camera scans and shoots the plurality of dimension calibration plates at different spatial positions for dimension calibration of the light field camera; the linear array light field camera scanning device is matched with the light source to illuminate the chip so that it images well on the light field camera; multi-view rendering and depth calculation of the light field are performed on the light field image obtained each time to finally obtain the center view image (Fig. 5) of the corresponding chip and the corresponding light field depth image (Fig. 6); and finally, the point cloud information and dimension information (Fig. 7) of the chip are obtained.
The embodiment of the present invention provides a chip detection system based on the light field camera and the two-dimensional camera and aims to implement detection of various defects of the semiconductor chip, wherein the solder balls of the chip are detected by employing the two-dimensional camera and the gold wire of the chip is detected by employing the system for detecting the gold wire by means of the light field camera.
As shown in the Fig. 8, a chip detection system based on a light field camera and a two-dimensional camera, including the to-be-detected object, the light source, the light field camera and the two-dimensional camera. The to-be-detected object is the chip, the specific detected object is the solder balls and gold wire on the chip, and corresponding detection is solder ball and gold wire detection. The light source is used for irradiating rays, such as annular light, to the to-be-detected object; the light field camera shoots and acquires the image of the to-be-detected object for reconstructing the three-dimensional shape of the gold wire of the chip and judging the defect of the gold wire; and the two-dimensional camera shoots the chip for detecting the defect of the solder balls of the chip.
The system includes the steps of shooting the plurality of defocused soft light plates by using the light field camera with matched aperture for calibration of the light field white image and calibration of the center of the microlens; performing light field camera dimension calibration and erecting the light source; shooting the semiconductor by using the light field camera and processing the image to obtain the multi-view and depth image so as to finally obtain three-dimensional information of the to-be-detected object for gold wire detection; and shooting the semiconductor chip by employing the  two-dimensional camera to obtain high resolution two-dimensional information of the to-be-detected object for solder ball detection.
In particular, a system for detecting the gold wire of the chip includes:
a gold wire positioning module for acquiring the detected image, with reference to Fig. 14, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment; in particular, the paired solder balls are two solder balls directly connected via a gold wire according to the design; in a preferred embodiment, the positions of the solder balls of the chip are obtained by a preset template, the solder balls are obtained by matching the detected image with the template, and further, preferably, the centers of the solder balls are obtained, and the connecting line between the centers of the paired solder balls is taken as the basis for feature extraction of the gold wire; preferably, in the gold wire positioning module, the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
a gold wire feature extraction module for drawing a midperpendicular of the straight line segment in the detected image, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, with reference to Fig. 8; and
a gold wire defect analyzing module for obtaining the gold wire extension shape according to the plurality of gold wire feature points and analyzing the first type defect of the gold wire according to the gold wire extension shape, the first type defect including no wire, broken wire, upward foot and wire deviation; in the gold wire defect analyzing module, the gold wire feature points are converted into a three-dimensional space to obtain the wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height, the second type defect including the falling wire; more particularly, no wire means that the wire is missing on the chip, broken wire means that the gold wire is broken, upward foot, similar to broken wire, means breakage at the solder balls, and wire deviation means that the distance between points on the gold wire and the connecting line of the two solder ball centers exceeds a certain threshold value. All of the above defects can be detected from the center view image shot by the light field camera, that is, only the center view image of the light field is needed. The falling wire, however, means that the height of the wire is obviously lower than that of a normal wire; therefore, when the falling wire is calculated, the extracted feature points need to be converted into 3D space by the dimension calibration algorithm to calculate the wire height and judge whether the wire falls or not.
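By way of illustration only, the two kinds of checks might be coded roughly as follows; the thresholds, the point format and the simplified "no wire" test are assumptions, and the gap analysis needed for broken wire and upward foot is omitted:

```python
import numpy as np

DEVIATION_MAX_PX   = 8.0   # illustrative: max allowed offset from the ball-to-ball line
MIN_WIRE_HEIGHT_MM = 0.05  # illustrative: below this the wire counts as falling

def first_type_check(feature_pts_2d, ball_a, ball_b):
    """2-D checks on the center view image: no wire / wire deviation."""
    if len(feature_pts_2d) == 0:
        return "no wire"
    a = np.asarray(ball_a, float)
    ab = np.asarray(ball_b, float) - a
    rel = np.asarray(feature_pts_2d, float) - a
    # perpendicular distance of every feature point to the connecting line
    dist = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / np.linalg.norm(ab)
    return "wire deviation" if dist.max() > DEVIATION_MAX_PX else "ok"

def second_type_check(feature_pts_3d):
    """3-D check using the wire height obtained from the light field depth data."""
    heights = np.asarray(feature_pts_3d, float)[:, 2]
    return "falling wire" if heights.max() < MIN_WIRE_HEIGHT_MM else "ok"
```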
In the preferred embodiment, the gold wire feature extraction module includes:
an initializing module for taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
a midperpendicular module for drawing a midperpendicular for the currently selected line segment, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points, and triggering execution of a selecting step;
and a selecting module for dividing the currently selected line segment by the current midperpendicular into two line segments that serve as the updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
and execution of the midperpendicular module and the selecting module is repeatedly triggered till all the currently selected line segments are smaller than a length threshold value.
For example, taking one gold wire, the centers of the two solder balls of the solder ball pair are connected to form a straight line; then the midperpendicular of the straight line is found, and a point meeting a certain gray value is searched for on the midperpendicular. If the distance from the point to the center point of the straight line meets the requirement, the point is considered to be a point on the gold wire. Iteration is performed in this way to obtain many points on the gold wire, and the gold wire features are extracted therefrom.
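A sketch of this recursive midperpendicular search follows; the gray threshold, the search range (which stands in for the distance-to-midpoint requirement) and the stopping length are illustrative values, not values taken from the text:

```python
import numpy as np

def extract_wire_points(img, p0, p1, gray_thresh=200, max_offset=40, min_len=4.0):
    """Recursively bisect the segment p0-p1 (the two solder ball centers) and,
    on each midperpendicular, keep the first pixel bright enough to belong to
    the gold wire. Recursion stops once a segment is shorter than min_len."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    if np.linalg.norm(p1 - p0) < min_len:
        return []
    mid = (p0 + p1) / 2.0
    d = p1 - p0
    normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # midperpendicular direction
    found = []
    for t in range(-max_offset, max_offset + 1):          # bounded walk along it
        u, v = np.round(mid + t * normal).astype(int)
        if 0 <= v < img.shape[0] and 0 <= u < img.shape[1] and img[v, u] >= gray_thresh:
            found.append((int(u), int(v)))
            break
    return (found
            + extract_wire_points(img, p0, mid, gray_thresh, max_offset, min_len)
            + extract_wire_points(img, mid, p1, gray_thresh, max_offset, min_len))
```

In practice the returned points would be ordered along the wire and cleaned before the extension shape is fitted and the defect checks are applied.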
Furthermore, specifically, the light field camera calibration step includes the steps:
step A1: selecting an optical lens with a proper focal length and magnification factor according to the area size and measuring depth range of the to-be-detected object; adjusting the aperture of the main lens to be matched with the aperture of the microlens, i.e. matching the f-number of the main lens with that of the microlens; specifically, scanning the images of the defocused soft light pure color calibration plates by the light field camera so that the microlens images in the picture are just or approximately in a tangential state, wherein the light field white image means a pure white background image shot by the light field camera on which the shape of the microlens array is reflected particularly obviously; thus, the aperture can be adjusted based on this image to ensure that the images of the microlenses are just tangential; after adjustment, shooting the plurality of pure color background plates with relatively uniform light intensity at the defocused position of the light field camera, i.e. the defocused soft light pure color calibration plates; and averaging and normalizing the plurality of light field white images to obtain the vignetting-removing matrix
wherein all the subsequently shot original light field images need to be divided element by element by the vignetting-removing matrix so as to remove vignetting and thus calibrate the light field white image, the original light field images meaning the light field images that have not been processed by the light field multi-view image algorithm; u represents the horizontal coordinate value of a pixel under the image coordinate system and v represents the longitudinal coordinate value of the pixel under the image coordinate system; after finishing the light field white image calibration step, the light field white image is processed with a filter to remove its noise, and non-maxima suppression is performed on the filtered light field white image; further, the local maximum values are taken from the processed image, and these maxima are just the integer-level centers of the microlenses of the light field camera; and by taking the integer-level centers of the microlenses as initial iterative values, iterative optimization is performed on the microlens arrangement grid to finally obtain the microlens arrangement angle and interval and thus the sub-pixel level microlens centers;
step A2: it is necessary to assemble a displacement table and a dimension calibration plate for the light field camera dimension calibration step: first, fixing the dimension calibration plate in a focal plane area of the linear array light field camera, continuously moving the calibration plate from the focal plane to a fixed spatial distance and performing shooting, and the spatial positions of the points on the calibration plate being known, thereby obtaining the spatial positions of the points on the calibration plate in the whole moving process; and calibration points of the round points forming a dispersed circle on the light field image, performing processing to obtain the diameter of the dispersed circle to further calculate the disparity value of the dispersed circle and the pixel coordinate of the dispersed circle, and performing fitting to obtain a relationship between the three-dimensional coordinate and the pixel coordinate of the light field camera in the space and the disparity value according to the linear array light field camera dimension calibration model;
step A3: as shown in the Fig. 8, irradiating the to-be-detected object from different angles by the light source, and shooting the to-be-detected object by the camera;
step A4: based on the original light field image, performing conventional light field rendering and depth estimation: first, performing light field multi-view rendering to obtain the light field multi-view image of the to-be-detected object; then performing further calculation to obtain the light field disparity image, and converting the light field disparity image into the light field depth image according to a light field camera dimension calibration result, wherein the light field depth image contains depth information of the to-be-detected object as well, so that three-dimensional imaging can be performed on the to-be-detected object;
step A5: obtaining the defect type of the gold wire via the gold wire defect detection algorithm after obtaining the three-dimensional image of the chip;
and step A6: obtaining the two-dimensional picture by the two-dimensional camera and obtaining the defect type of the solder balls via the solder ball defect detection algorithm.
Four specific embodiments are attached to the present invention, the former two embodiments are embodiments for detecting defects of the gold wire by the light field camera and the latter two embodiments are embodiments for detecting defects of the solder balls by the two-dimensional camera.
Fig. 9a is a center view diagram under the light field camera, Fig. 9b is a depth diagram, Fig. 9c is a point cloud screenshot and Fig. 9d is a three-dimensional curve graph of the gold wire in a marked area. It can be judged from the three-dimensional curve graph that the defect type of the gold wire is the falling wire.
Fig. 10a is a center view diagram under the light field camera, Fig. 10b is a depth diagram, Fig. 10c is a point cloud screenshot and Fig. 10d is a three-dimensional curve graph of the gold wire in a marked area. It can be judged from the three-dimensional curve graph that the defect type of the gold wire is the broken wire.
Fig. 11a is a picture shot by the two-dimensional camera and Fig. 11b is a binary image after image processing. It can be judged via the image that the defect type in a target area is defected ball.
Fig. 12a is a picture shot by the two-dimensional camera and Fig. 12b is an image after image processing. The radius of the ball can be calculated via the algorithm. If the radius exceeds a certain threshold value, the ball is judged as a big ball, and if the radius is smaller than a certain threshold value, the ball is judged as a small ball.
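For these two-dimensional solder ball checks, a rough OpenCV-based sketch (the Otsu binarization, the contour-based radius measurement, both radius thresholds and the reduction of "defected ball" to "no blob found" are assumptions made only for the sketch):

```python
import cv2
import numpy as np

def classify_ball(gray_roi, r_small=18.0, r_big=30.0):
    """Binarize a solder ball region shot by the two-dimensional camera and
    judge the ball by the radius of its largest blob."""
    _, binary = cv2.threshold(gray_roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "defected ball"
    largest = max(contours, key=cv2.contourArea)
    (_, _), radius = cv2.minEnclosingCircle(largest)
    if radius > r_big:
        return "big ball"
    if radius < r_small:
        return "small ball"
    return "ok"

# Example: a synthetic 8-bit region with a bright disc of radius 12 -> "small ball"
roi = np.zeros((64, 64), dtype=np.uint8)
cv2.circle(roi, (32, 32), 12, 255, thickness=-1)
print(classify_ball(roi))
```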
The computer readable storage medium storing the computer program, wherein the steps of the defect detection method based on the light field camera are implemented when the computer program is executed by a processing unit. The computer readable storage medium can be a read-only or read-write memory such as a disc or an optical disc in equipment such as a singlechip microcomputer, a DSP, a processing unit, a data center, a server, a PC, an intelligent terminal, a special machine and a light field camera.
A product defect detection production line according to the present invention includes the system for detecting the gold wire of the chip or includes the chip detection system based on the light field camera and the two-dimensional camera or includes the computer readable storage medium storing the computer program.
A method for detecting the gold wire of the chip includes:
a gold wire positioning step: acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
a gold wire feature extraction step: in the detected image, drawing a midperpendicular of the straight line segment, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points; and
a gold wire defect analyzing step: obtaining a gold wire extension shape according to a plurality of gold wire feature points and analyzing a first type defect of the gold wire according to the gold wire extension shape.
Preferably, in the gold wire positioning step, the three-dimensional image is obtained by shooting the chip by the light field camera, wherein the center view image of the light field camera serves as the detected image;
and in the gold wire defect analyzing step, the feature point of the gold wire is converted into a three-dimensional space to obtain a wire height of the gold wire according to the three-dimensional image obtained by the light field camera, and a second type defect of the gold wire is analyzed according to the wire height.
Preferably, the gold wire feature extraction step includes:
an initializing step: taking the straight line segment as a currently selected line segment to trigger execution of a midperpendicular step;
a midperpendicular step: drawing a midperpendicular for the currently selected line segment, considering points meeting the demand among all the points on the midperpendicular as points on the gold wire and marking the points as gold wire feature points; and triggering execution of a selecting step;
a selecting step: dividing the currently selected line segment by the current midperpendicular into two line segments that serve as the updated currently selected line segments, and returning to trigger execution of the midperpendicular step;
and repeatedly triggering execution of the midperpendicular step and the selecting step till all the currently selected line segments are smaller than a length threshold value.
A chip detection method based on a light field camera and a two-dimensional camera, including:
detecting the solder balls of the chip by using the two-dimensional camera; and
detecting a gold wire of the chip by using the light field camera by means of the method for detecting the gold wire of the chip.
Those skilled in the art shall know that, in addition to being implemented purely in the form of computer readable program code, the system, device and modules thereof provided by the present invention can achieve the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like by logically programming the method steps. Therefore, the system, device and modules thereof provided by the present invention can be considered a hardware part, and the modules included therein for implementing various functions can also be considered structures within the hardware part; the modules for implementing various functions can likewise be considered both software programs that implement the method and structures within the hardware part.
The particular examples of the present invention are described above. It should be understood that the present invention is not limited to the specific embodiments, and those skilled in the art can make various variations or modifications within the scope of the claims without affecting the substantial contents of the present invention. In the absence of conflict, the embodiments of the application and the features in the embodiments can be combined with one another arbitrarily.

Claims (12)

  1. A three-dimensional imaging method based on a light field camera, comprising:
    a shot image acquisition step: acquiring a plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images comprises a light field depth image and a light field multi-view image obtained by shooting a to-be-detected object by a linear array light field camera in a shooting position, and the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera;
    an image splicing step: splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image; and
    a three-dimensional information acquisition step: obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
  2. A three-dimensional imaging system based on a light field camera, comprising:
    a shot image acquisition module for acquiring a plurality of groups of to-be-spliced images, wherein each group of to-be-spliced images comprises a light field depth image and a light field multi-view image obtained by shooting a to-be-detected object by a linear array light field camera in a shooting position, and the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera;
    an image splicing module for splicing the plurality of groups of to-be-spliced images to obtain a spliced image, wherein the spliced image comprises a spliced light field depth image and a spliced light field multi-view image; and
    a three-dimensional information acquisition module for obtaining three-dimensional imaging information of the to-be-detected object according to the spliced image.
  3. The three-dimensional imaging system based on the light field camera according to claim 2, characterized in that a resolution of the linear array light field camera is n*m, and a moving interval of the linear array light field camera between adjacent shooting positions is m pixels; and a dimension of the spliced image generated after N times of movement of the linear array light field camera is n*m*(N+1);
    the three-dimensional information acquisition module comprises:
    a center view image acquisition module for splicing to obtain a center view image;
    a center view light field depth image acquisition module for splicing to obtain a center view light field depth image; and
    a point cloud information acquisition module for obtaining point cloud information and dimension information of the to-be-detected object according to the center view image and the center view light field depth image.
  4. A computer readable storage medium storing a computer program, characterized in that the steps of the three-dimensional imaging method based on the light field camera as defined in claim 1 are implemented when the computer program is executed by a processor.
  5. A light field camera scanning device, comprising:
    a moving mechanism, a light source and a linear array light field camera, the linear array light field camera being mounted on the moving mechanism and the moving mechanism driving the linear array light field camera to move to each shooting position for scanning, wherein the shooting positions are distributed to form a linear array parallel to a width direction of an image sensor of the linear array light field camera; the light source irradiates rays to a to-be-detected object; and the linear array light field camera shoots a light field image of the to-be-detected object in each shooting position.
  6. The light field camera scanning device according to claim 5, further comprising a controller,
    wherein the controller comprises the computer readable storage medium storing the computer program as defined in claim 4.
  7. A three-dimensional imaging measuring production line, comprising the three-dimensional imaging system as defined in any one of claims 2-3;
    or comprising the computer readable storage medium storing the computer program as defined in claim 4;
    or comprising the light field camera scanning device as defined in claim 5 or 6.
  8. A method for detecting a gold wire of a chip, characterized in that the spliced image obtained by the three-dimensional imaging method based on the light field camera as defined in claim 1 serves as a detected image, the method comprising:
    a gold wire positioning step: acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
    a gold wire feature extraction step: in the detected image, drawing a midperpendicular of the straight line segment, taking the points that meet the requirement, among all the points on the midperpendicular, as points on the gold wire and marking them as gold wire feature points; and
    a gold wire defect analyzing step: obtaining a gold wire extension shape according to a plurality of gold wire feature points and analyzing a first type defect of the gold wire according to the gold wire extension shape.
  9. A chip detection method based on a light field camera and a two-dimensional camera, comprising:
    detecting solder balls of a chip by using the two-dimensional camera; and
    detecting a gold wire of the chip by using the light field camera by means of the method for detecting the gold wire of the chip as defined in claim 8.
  10. A system for detecting a gold wire of a chip, characterized in that the spliced image obtained by the three-dimensional imaging system based on the light field camera as defined in claim 2 serves as a detected image, the system comprising:
    a gold wire positioning module for acquiring the detected image, performing positioning in the detected image to obtain paired solder balls, and taking a connecting line between the paired solder balls as a basis for feature extraction of the gold wire, wherein the connecting line is a straight line segment;
    a gold wire feature extraction module for drawing a midperpendicular of the straight line segment in the detected image, taking the points that meet the requirement, among all the points on the midperpendicular, as points on the gold wire and marking them as gold wire feature points; and
    a gold wire defect analyzing module for obtaining a gold wire extension shape according to the plurality of gold wire feature points and analyzing a first type defect of the gold wire according to the gold wire extension shape.
  11. A chip detection system based on a light field camera and a two-dimensional camera, wherein:
    solder balls of a chip are detected by using the two-dimensional camera; and
    a gold wire of the chip is detected by using the light field camera by means of the system for detecting the gold wire of the chip as defined in claim 10.
  12. A product defect detection production line, comprising the gold wire detection system as defined in claim 10, or the chip detection system based on the light field camera and the two-dimensional camera as defined in claim 11.
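The dimension bookkeeping recited in claim 3 (sensor resolution n*m, a moving interval of m pixels between shooting positions, and a spliced image of n*m*(N+1) after N movements) can be illustrated with the minimal sketch below. It assumes the motion step exactly equals the frame width m so that successive frames abut without overlap; a real system would typically add registration or calibration.

```python
import numpy as np

def splice_line_scan_frames(frames):
    """Concatenate N+1 frames, each of shape (n, m), captured at shooting
    positions spaced m pixels apart along the sensor width.

    Assumes ideal motion (step == frame width, no overlap), so the result has
    shape (n, m * (N + 1)), matching the dimension stated in claim 3.
    """
    # Stack the frames side by side along the width axis.
    return np.hstack(frames)
```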
PCT/CN2021/079837 2020-12-15 2021-04-23 Three-dimensional imaging method and method based on light field camera and three-dimensional imaging measuring production line WO2022126870A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011475925.3A CN114636385B (en) 2020-12-15 2020-12-15 Three-dimensional imaging method and system based on light field camera and three-dimensional imaging measurement production line
CN202011475925.3 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022126870A1 (en)

Family

ID=81944865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079837 WO2022126870A1 (en) 2020-12-15 2021-04-23 Three-dimensional imaging method and method based on light field camera and three-dimensional imaging measuring production line

Country Status (2)

Country Link
CN (1) CN114636385B (en)
WO (1) WO2022126870A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117053690A (en) * 2023-10-10 2023-11-14 合肥联宝信息技术有限公司 Imaging method and device for to-be-positioned piece, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103884650A (en) * 2014-03-28 2014-06-25 北京大恒图像视觉有限公司 Multi-photosource linear array imaging system and method
CN106331439A (en) * 2015-07-10 2017-01-11 深圳超多维光电子有限公司 Micro lens array imaging device and imaging method
CN108175535A (en) * 2017-12-21 2018-06-19 北京理工大学 A kind of dentistry spatial digitizer based on microlens array
US20180210394A1 (en) * 2017-01-26 2018-07-26 The Charles Stark Draper Laboratory, Inc. Method and Apparatus for Light Field Generation
CN110648345A (en) * 2019-09-24 2020-01-03 中国烟草总公司郑州烟草研究院 Method and system for detecting flow of tobacco shred materials on conveying belt based on light field imaging
CN111538223A (en) * 2020-04-30 2020-08-14 北京大学 Holographic projection method based on light beam deflection
CN211825732U (en) * 2019-12-20 2020-10-30 武汉精立电子技术有限公司 Non-contact film pressing backlight panel detection device and backlight panel automatic detection line
CN111982921A (en) * 2020-05-21 2020-11-24 北京安视中电科技有限公司 Hole defect detection method and device, conveying platform and storage medium
CN112040140A (en) * 2020-09-02 2020-12-04 衢州光明电力投资集团有限公司赋腾科技分公司 Wide-view-field high-resolution hybrid imaging device based on light field

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105806249A (en) * 2016-04-15 2016-07-27 南京拓控信息科技股份有限公司 Method for achieving image collection and depth measurement simultaneously through a camera
CN106303175A (en) * 2016-08-17 2017-01-04 李思嘉 A kind of virtual reality three dimensional data collection method based on single light-field camera multiple perspective
CN107977998B (en) * 2017-11-30 2021-01-26 浙江大学 Light field correction splicing device and method based on multi-view sampling
CN111161404B (en) * 2019-12-23 2023-05-09 华中科技大学鄂州工业技术研究院 Annular scanning morphology three-dimensional reconstruction method, device and system
CN111340888B (en) * 2019-12-23 2020-10-23 首都师范大学 Light field camera calibration method and system without white image
CN111351446B (en) * 2020-01-10 2021-09-21 奕目(上海)科技有限公司 Light field camera calibration method for three-dimensional topography measurement

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908242A (en) * 2022-09-22 2023-04-04 深圳市明测科技有限公司 Chip gold wire whole line detection method and system for multi-channel image fusion
CN115908242B (en) * 2022-09-22 2023-10-03 深圳市明测科技有限公司 Chip gold thread whole line detection method and system for multichannel image fusion
CN116952845A (en) * 2023-09-19 2023-10-27 宁德时代新能源科技股份有限公司 Battery sealing nail welding detection system and method
CN117036175A (en) * 2023-10-08 2023-11-10 之江实验室 Linear array image splicing method, device, medium and equipment
CN117036175B (en) * 2023-10-08 2024-01-09 之江实验室 Linear array image splicing method, device, medium and equipment

Also Published As

Publication number Publication date
CN114636385A (en) 2022-06-17
CN114636385B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
WO2022126870A1 (en) Three-dimensional imaging method and method based on light field camera and three-dimensional imaging measuring production line
JP6855587B2 (en) Devices and methods for acquiring distance information from a viewpoint
CN107607040B (en) Three-dimensional scanning measurement device and method suitable for strong reflection surface
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
US10043290B2 (en) Image processing to enhance distance calculation accuracy
US20230362344A1 (en) System and Methods for Calibration of an Array Camera
TWI729995B (en) Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN113205593B (en) High-light-reflection surface structure light field three-dimensional reconstruction method based on point cloud self-adaptive restoration
US7139424B2 (en) Stereoscopic image characteristics examination system
JP6363863B2 (en) Information processing apparatus and information processing method
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN108074267B (en) Intersection point detection device and method, camera correction system and method, and recording medium
CN108562250B (en) Keyboard keycap flatness rapid measurement method and device based on structured light imaging
CN112116576B (en) Polarization structure light imaging and improved defect detection method
TWI480578B (en) Method for detecting optical center of wide-angle lens, and optical center detection apparatus
JP5633058B1 (en) 3D measuring apparatus and 3D measuring method
CN110542390A (en) 3D object scanning method using structured light
JP5412757B2 (en) Optical system distortion correction method and optical system distortion correction apparatus
WO2014011182A1 (en) Convergence/divergence based depth determination techniques and uses with defocusing imaging
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
KR20230096057A (en) Defect Layering Detection Method and System Based on Light Field Camera and Detection Production Line
Chen et al. Finding optimal focusing distance and edge blur distribution for weakly calibrated 3-D vision
JP5136108B2 (en) 3D shape measuring method and 3D shape measuring apparatus
CN115456945A (en) Chip pin defect detection method, detection device and equipment
CN108663370B (en) End face inspection apparatus and focused image data acquisition method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21904811

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.11.2023)