WO2019117802A1 - A system for obtaining 3d images of objects and a process thereof - Google Patents

A system for obtaining 3d images of objects and a process thereof

Info

Publication number
WO2019117802A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixels
intensity
light
blue light
Prior art date
Application number
PCT/SG2017/050616
Other languages
French (fr)
Inventor
Kok Weng Wong
Albert Archwamety
Jun Kang NG
Chee Chye LEE
Original Assignee
Mit Semiconductor Pte Ltd
Priority date
Filing date
Publication date
Application filed by Mit Semiconductor Pte Ltd filed Critical Mit Semiconductor Pte Ltd
Priority to PCT/SG2017/050616 priority Critical patent/WO2019117802A1/en
Publication of WO2019117802A1 publication Critical patent/WO2019117802A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9515Objects of complex shape, e.g. examined with use of a surface follower device
    • G01N2021/8883Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models

Definitions

  • the present invention relates to the inspection of small components for defects, and more specifically, to a system and method for rapidly obtaining three-dimensional (3D) images of surface features of objects to detect small flaws or irregularities.
  • Stereo vision is the extraction of three dimensional (“3D”) information from two or more digital images.
  • 3D information can be extracted by examination of the relative positions of objects in the two panels.
  • the relative depth information can be obtained in the form of a disparity map, which encodes the difference in horizontal coordinates of corresponding image points.
  • the values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location.
  • 3D technology is particularly important to many industrial applications. In high-precision manufacturing, it is often necessary to visually inspect objects to ensure that there are no flaws or irregularities. 3D vision can be essential as the inspection involves examining small but critical features on each component. For example, Automated Optical Inspection (AOI) systems are often used to analyze and evaluate electrical circuits, including flat panel displays, integrated circuits, chip carriers and printed circuit boards.
  • AOI Automated Optical Inspection
  • US 20140028833 A1 describes an inspection device that uses a cluster of lights to capture multiple images of a component. The images are combined to provide a single image which allows defects to be more easily identified.
  • this device has limitations. It will not detect shallow indentations or protrusions on the surface that do not cast a prominent shadow. Further, only defects parallel to the light will cast a prominent shadow and the location of a shadow is different from the location of the defect.
  • US 7295720 B2 uses a single lens and multiple flash units to render a “stylized” image of an object such as an electrical component.
  • methods that render a non-photorealistic stylized image based on detecting cast shadows to enhance the depth in the image require a silhouette edge and are not suitable for detection of surface details without high contrast edges.
  • the rendering of stylized images reduces texture and features and is generally ineffective on surfaces without edge discontinuities.
  • the system should be capable of detecting small flaws or irregularities
  • the invention includes a system for obtaining a three-dimensional image of surface features of a substantially flat object, comprising (a) a camera, (b) a blue light source, (c) a red light source and (d) a computer/processing unit.
  • the blue light source illuminates the object from a first location and the red light source illuminates the object from a second location.
  • the camera can capture a single image of the object to be subsequently separated into a red light image and a blue light image.
  • the computer/processing unit determines the intensity of reflected light of pixels.
  • the intensity of reflected red light and the intensity of reflected blue light can be analyzed based on Lambert’s Cosine Law to detect irregularities or surface flaws on the object.
  • the light sources can be a blue LED and a red LED.
  • the ratio of light from the blue light source and the red light source can be adjustable from 1:99 to 99:1.
  • the object to be imaged/photographed can be a wafer or an integrated circuit (IC) package.
  • the camera can be a CCD camera or a CMOS camera.
  • the invention also includes a process for detecting and/or visualizing a flaw or irregularity on the surface of an object using a camera comprising the steps of (a) illuminating an object with a blue light source from a first area, (b) illuminating an object with a red light source from a second area, (c) capturing a color image of the surface of the object, (d) separating the image into a red light image and a blue light image, (e) analyzing the intensity of light pixels on the red light image, (f) analyzing the intensity of light pixels on the blue light image, (g) determining a surface angle for pixels based on an analysis of the intensity of reflected red light and the intensity of reflected blue light and (h) generating a three-dimensional image of surface features of the object based on surface angles of pixels.
  • the analysis in step (g) can use the principles of Lambert’s Cosine Law.
  • the process can include the additional step of generating a three-dimensional map to visualize three-dimensional features.
  • a flaw or irregularity on the surface of the object can be an indentation and/or protrusion feature.
  • the red light image and the blue light image can be subjected to digital signal processing.
  • the object that is photographed/imaged can be a wafer or an integrated circuit (IC) package.
  • the invention also includes a method of obtaining a three-dimensional image of surface features of an object, comprising the steps of (a) providing a blue light source from a first area of incidence, (b) providing a red light source from a second area of incidence, (c) obtaining a color image of the surface of the object, (d) separating the color image into a red light image and a blue light image using a computer and/or RGB filter, (e) analyzing the intensity of rows of pixels of reflected light on the red light image, (f) analyzing the intensity of rows of pixels of reflected light on the blue light image, (g) identifying a difference in orientation among pixels wherein a positive value indicates the next pixel is higher and a negative value indicates the next pixel is lower, (h) determining the topology of the surface by fitting lines through pixels and (i) identifying irregularities and/or surface flaws on the object.
  • the principles of Lambert’s law can be used for the analysis in the step (g).
  • the method can include the additional step of generating a three-dimensional map.
  • Surface features can include indentation features on a surface of the object and/or protrusion features.
  • the red light image and the blue light image can be subjected to digital signal processing.
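To make the flow of steps (a) through (i) concrete, here is a minimal Python/NumPy sketch of the pipeline; the function name, the intensity normalization, and the simplified one-dimensional geometry are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def inspect_surface(color_image: np.ndarray) -> np.ndarray:
    """Hypothetical pipeline for steps (c)-(i): split a color image into
    red and blue channels, estimate per-pixel surface angles from the
    reflected intensities, and integrate them into a height map."""
    red_image = color_image[:, :, 0].astype(float)   # step (d): red channel (RGB order assumed)
    blue_image = color_image[:, :, 2].astype(float)  # step (d): blue channel

    # Steps (e)-(g): Lambert's cosine law says intensity ~ cos(angle to
    # the light), so a normalized intensity maps back to an angle.
    red_angle = np.arccos(np.clip(red_image / max(red_image.max(), 1e-9), 0.0, 1.0))
    blue_angle = np.arccos(np.clip(blue_image / max(blue_image.max(), 1e-9), 0.0, 1.0))

    # Step (g): with the two lights on opposite sides, half the difference
    # of the two angle estimates is a crude per-pixel surface tilt.
    orientation = (red_angle - blue_angle) / 2.0

    # Steps (h)-(i): integrate tilt along each row to recover topology;
    # impulses in the result mark indentations or protrusions.
    return np.cumsum(np.tan(orientation), axis=1)
```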
  • the invention includes a system and method for obtaining three dimensional (3D) images of the surfaces of objects to detect minor flaws and irregularities. Images can be captured from two white light sources or two colored light sources. 3D features are obtained by analyzing and comparing the images using an image processing algorithm.
  • the system is well suited for industrial uses that require a high volume of objects to be rapidly inspected for defects as small as a few microns in any plane.
  • a first aspect of the invention is a system for obtaining 3D images of an object comprising a camera, at least two light sources, a diffuser and a beam splitter.
  • the camera captures light from the light sources when each projects a gradient across the object to provide respective images.
  • the 3D features of the object are revealed by combining and processing the images.
  • a second aspect of the invention is a system for obtaining a 3D image of an object wherein a camera captures at least two images using at least two light sources from different positions.
  • a third aspect of the invention is a system for obtaining a 3D image of an object wherein a single image is captured using colored light sources. Multiple images can then be extracted using a color filter.
  • a fourth aspect of the invention is a process for obtaining 3D images of an object comprising the steps of, capturing at least one image of the object, wherein the object is illuminated by light from at least two light sources, each one projecting a different illumination gradient across the object to provide respective images and processing 3D features of at least two captured images using image processing software.
  • a fifth aspect of the invention is a system for obtaining 3D images of an object using multiple sources of colored light.
  • a camera captures an image that is subsequently separated into two images based on the wavelengths of light. Each pixel is analyzed and defects are located based on the principles of Lambert’s Law.
  • FIG. 1 depicts an arrangement of components of a system for obtaining 3D images of small irregularities or flaws on an object, according to one aspect of the invention.
  • FIG. 2 is a flow chart that describes a process for obtaining 3D images of an object from a camera, according to one aspect of the invention.
  • FIG. 3 depicts an arrangement of components of a system for obtaining 3D images of small irregularities or flaws on an object, according to one aspect of the invention.
  • FIG. 4A is a flow chart that describes a process for obtaining 3D images of an object from a camera, according to one aspect of the invention.
  • FIG. 4B is a flow chart that describes a process of analyzing pixels based on Lambert’s Cosine Law, according to one aspect of the invention.
  • FIG. 5 depicts the illumination of multiple points and the resultant variation in intensities, according to the principle of Lambert’s Cosine Law.
  • FIG. 6A depicts the illumination of two points on the surface of an object, according to one aspect of the invention.
  • FIG. 6B depicts the illumination of two points on the surface of an object, according to one aspect of the invention.
  • FIG. 7A depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
  • FIG. 7B depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
  • FIG. 7C depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
  • FIG. 7D depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
  • FIG. 8A shows a captured image separated based on a color filter, according to one aspect of the invention.
  • FIG. 8B shows a captured image separated based on a color filter, according to one aspect of the invention.
  • FIG. 8C shows an image reconstructed from the two images of FIGS. 8A and 8B, according to one aspect of the invention.
  • FIG. 8D is the image of FIG. 8C, after filtering the high frequency texture and retaining the low frequency, showing features representing the surface topography suitable for digital signal processing, according to one aspect of the invention.
  • FIG. 9A is a graph showing a scan of a line from left to right, the shape of an impulse indicating an indentation on a surface, according to one aspect of the invention.
  • FIG. 9B is a graph showing a scan of a line from left to right, the shape of an impulse indicating a protrusion on a surface, according to one aspect of the invention.
  • FIG. 10 depicts a three dimensional (3D) topography map generated from two images.
  • the invention is primarily described for use for imaging electrical and computer components, it is understood that the invention is not so limited and can be used in the screening/imaging of other components as well as in other various industries.
  • the invention is conducive to inspecting small objects that have one or more flat surfaces.
  • Other applications include, for example, but not limited to, using the invention in aerospace, automotive, computer, biotechnology and pharmaceutical industries.
  • the term “8-bit”, in computer architecture, refers to integers, memory addresses, or other data units that are 8 bits (1 octet) wide.
  • 128-bit integers, memory addresses, or other data units are those that are 128 bits (16 octets) wide.
  • the term “Bayer filter” or “Bayer filter mosaic” refers to a color filter array (CFA) for arranging RGB color filters on a square grid of photo sensors. Its particular arrangement of color filters is commonly used in single-chip digital image sensors in digital cameras, camcorders, and scanners to create a color image.
  • the filter pattern is 50% green, 25% red and 25% blue, hence it is also called BGGR, RGBG, GRBG or RGGB.
  • bar light refers to a light source with a series of LEDs arranged in lines or bars, used for direct lighting of a substrate.
  • LEDs are arranged at high density on a single flat circuit board so that an object can be illuminated from a desired angle.
  • beam splitter refers to a mirror or prism or a combination of the two that is used to divide a beam of radiation into two or more parts.
  • a beam splitter splits an incident beam of light into two output beams which diverge at a fixed angle.
  • the letters “r” and “t” can denote the reflectance and transmittance respectively along a particular path through the beam-splitter.
  • binocular disparity refers to the difference in coordinates of similar features within two stereo images.
  • a “CCD camera” or “three-CCD camera” is a camera whose imaging system uses three separate charge-coupled devices (CCDs), each one taking a separate measurement of the primary colors, red, green, or blue light. Light coming into the lens is split by a trichroic prism assembly, which directs the appropriate wavelength ranges of light to their respective CCDs.
  • CCDs charge-coupled devices
  • CMOS complementary metal-oxide semiconductor
  • PCB printed circuit board
  • the term “continuously variable beam splitter” refers to a beam splitter that allows the user to continuously vary the transmitted intensity of a linearly polarized beam of light.
  • An attenuator accomplishes this by using a zero-order half-wave plate in a rotation mount and a polarizing beam splitter cube. This combination allows it to achieve split ratios of 1:99 to 99:1 for P:S polarized light.
  • “conjugate points” refers to the object point and image point of a lens system. Because all of the light paths from the object to the image are reversible, it follows that if the object were placed where the image is, an image would be formed at the original object position.
  • the term “diffuser” refers to a device (as a reflector) for distributing the light of a lamp evenly.
  • image rectification refers to a transformation process used to project two or more images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane.
  • a Lambertian surface provides uniform diffusion of the incident radiation such that its radiance or luminance is the same in all directions from which it can be measured.
  • LED light emitting diode
  • LED refers to a semiconductor device that emits visible light when an electric current passes through it. In most LEDs, the light is monochromatic, occurring at a single wavelength.
  • optical attenuator refers to a device used to reduce the power level of an optical signal.
  • the basic types of optical attenuators are fixed, step-wise variable, and continuously variable.
  • “ramp gradient” refers to a gradient of colored light, which is typically linear across a region.
  • the term “white light fringes Michelson interferometer” refers to a common configuration for optical interferometry. Using a beam splitter, a light source is split into two arms. Each of those is reflected back toward the beam splitter which then combines their amplitudes interferometrically. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera. Depending on the interferometer's particular application, the two paths may be of different lengths or include optical materials or components under test.
  • the Twyman-Green interferometer is a variation of the Michelson interferometer used to test small optical components.
  • the basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator.
  • the present invention relates to a system and method for obtaining 3D images of surfaces, using a single camera.
  • the three dimensional (3D) images of objects can be obtained using one of two methods.
  • a first approach is to capture a pair of images.
  • a first image is taken when an object is illuminated with a first source of light.
  • a second image is taken when the object is illuminated with a second source of light. Because the first and second light sources illuminate the object from different directions, minor flaws and/or imperfections can be detected by comparing the first image and the second image.
  • 3D features obtained from each captured image can be reprocessed with software. Defects and/or irregularities can be identified and visualized based on the principles of Lambert’s Cosine Law and an image processing algorithm. Images suitable for obtaining surface topography can also be produced from the two images using digital signal processing.
  • a single image is captured from two colored light sources.
  • the image is split into red and blue channels based on the different wavelengths of light.
  • Each image is analyzed as described for the first approach. This approach is faster because a single image is taken with two steady light sources.
  • the object can be photographed while it is moving across the field of view.
  • the system is well suited for industrial uses that require a high volume of objects to be rapidly inspected for defects as small as a few microns.
  • FIG. 1 depicts an arrangement of the components (system 100) according to one aspect of the invention.
  • the system 100 includes a camera 150, a first light source 135 and a second light source 140 for illuminating an object 105.
  • Other components include a diffuser 120 and a beam splitter 115.
  • the two light sources (135 and 140) transmit light through the diffuser 120, and then to the object 105 via the beam splitter 115.
  • the beam splitter 115 functions to direct the illumination onto the object 105.
  • the camera 150 can be a monochrome or color camera. As shown, the light sources are directed to the object at the same (or similar) angle of incidence.
  • the camera 150 captures a first image when light from the first light source 135 projects a first illumination gradient across the object 105 from a first location.
  • the camera 150 captures a second image when light from the second light source 140 projects a second illumination gradient across the object 105 from a second location.
  • the first image and second image are combined for analysis of 3D surface features.
  • FIG. 2 is a flow chart 200 that describes the steps in a process according to one aspect of the invention.
  • Light from a first light source is projected upon an object 205.
  • the camera captures a first image of the object 210.
  • Light from a second light source is projected upon an object 215.
  • the camera captures a second image of the object 220.
  • the two images are processed using an algorithm to derive the surface topography of the object 225.
  • the images can be used to detect surface features such as indentations or imperfections.
  • the system is most conducive for use with planar or substantially planar objects.
  • FIG. 3 depicts another arrangement of the components 300 of the invention.
  • the object 105 to be inspected can be stationary or moving across the stage or field of view.
  • the camera 150 takes a single image of the object 105. Accordingly, objects can be photographed at a greater speed. Objects can be passed across the stage and rapidly photographed according to the user needs and capabilities of the system.
  • a camera records the images for inspection as part of a rapid, high volume screening process.
  • the two light sources illuminate the object at the same time.
  • the two light sources may comprise a red light 335 and a blue light 340 to illuminate the object. As shown, the red light has a diffuser 325 and the blue light has a diffuser 330.
  • a central diffuser 120, and a beam splitter 115 can also be included.
  • the ratio of the blue light source and the red light source can be adjustable (i.e. from 1:99 to 99:1).
  • the two colored lights 335 and 340 transmit light through the central diffuser 120, and then to the object 105 via the beam splitter 115.
  • the light illuminates the object 105.
  • the beam splitter 115 functions to direct the illumination onto the object 105.
  • a color camera is necessary because of the use of blue and red light.
  • Diffuse surfaces, such as a wafer or integrated circuit (IC) package, follow Lambert’s Cosine Law. This means that reflected light/energy from a small surface area in a particular direction is proportional to the cosine of the angle between that direction and the surface normal.
  • if a radiating surface has a radiance that is independent of the viewing angle, the surface is said to be perfectly diffuse, or Lambertian.
  • FIG. 4A is a flow chart 400 that describes the steps in a process according to one aspect of the invention. Light from two sources is projected upon a substantially flat object 410. The camera captures an image of the object 420. The image can then be split into a red image and a blue image 430. Thereafter, the two images can be processed using an algorithm to derive the surface topography of the object 440. A 3D map can be generated 450 to visualize and detect small surface features such as indentations or imperfections. The system is most conducive for use with planar or substantially planar objects.
  • FIG. 4B is a flow chart that describes step 440 in detail.
  • the principle of Lambert’s Cosine Law is used to detect flaws and/or imperfections on the surface of a substrate.
  • each pixel on an image is analyzed to determine the intensity of reflected light 510.
  • the intensity of red light and the intensity of blue light can be quantified separately.
  • the surface angle can be calculated based on the relationship defined by Lambert’s Cosine Law 520.
  • the surface angle can be determined for the red light and blue light separately. For each calculation, there can be two mathematically correct solutions. However, by comparing the two figures, the actual solution will be apparent 530. Only one solution will lie within the area of the substrate. This process is repeated for each pixel 540. Thereafter, the data can be compiled from the entire substrate 550.
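The two-solution step 530 can be made concrete with a small sketch. It assumes the red and blue lights arrive at known incidence angles from opposite sides, that intensities are normalized to [0, 1], and it reduces the geometry to a single tilt angle per pixel; these are assumptions of the illustration, not the patent's own notation.

```python
import numpy as np
from itertools import product

def surface_tilt(i_red, i_blue, alpha_red, alpha_blue):
    """Estimate the surface tilt t (radians) of one pixel from normalized
    red and blue intensities. Lambert's cosine law gives
    i_red = cos(alpha_red - t) and i_blue = cos(alpha_blue + t) for lights
    incident from opposite sides; each arccos yields two candidate tilts,
    and the pair that agrees best is averaged (cf. step 530)."""
    a_r = np.arccos(np.clip(i_red, 0.0, 1.0))
    a_b = np.arccos(np.clip(i_blue, 0.0, 1.0))
    red_candidates = (alpha_red - a_r, alpha_red + a_r)
    blue_candidates = (a_b - alpha_blue, -a_b - alpha_blue)
    # Compare all four pairings; the closest pair brackets the true tilt,
    # while the remaining solutions are outliers.
    best = min(product(red_candidates, blue_candidates),
               key=lambda pair: abs(pair[0] - pair[1]))
    return (best[0] + best[1]) / 2.0

# Example: both lights at 45 degrees, surface tilted 10 degrees gives
# i_red = cos(35°) and i_blue = cos(55°); the function returns ~10°.
print(np.degrees(surface_tilt(np.cos(np.radians(35)), np.cos(np.radians(55)),
                              np.radians(45), np.radians(45))))
```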
  • FIG. 5 depicts the principle of Lambert’s Cosine Law.
  • the irradiance or illuminance falling on a surface varies as the cosine of the incident angle.
  • the perceived measurement area orthogonal to the incident flux is reduced at oblique angles, causing light to spread out over a wider area than it would if perpendicular to the surface.
  • the arrows depict the illumination of light from different angles with the percentage of light that is reflected.
  • a diffuse Lambertian surface obeys the cosine law by distributing reflected energy in proportion to the cosine of the reflected angle. For light that is illuminated perpendicular to the surface (i.e. 0°), 100% of the light will be reflected. For light that is illuminated from an angle of 85°, just 9% will be reflected because much of the light is dispersed.
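The quoted percentages follow directly from the cosine; a one-line check:

```python
import math

# cos(0°) = 1.00 -> 100% reflected; cos(85°) ≈ 0.087 -> about 9%.
for angle_deg in (0, 30, 60, 85):
    print(f"{angle_deg:2d}° -> {math.cos(math.radians(angle_deg)):.0%} reflected")
```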
  • a light source 510 illuminates two points on the surface of an object.
  • a camera 150 takes an image of the object. The intensity of light reflected from the surface will vary according to the angle of incident light.
  • In FIG. 6A, light is emitted onto point A 560 and point B 570 from a single light source 510.
  • the light reflected from point A will appear brighter to the camera because of the small angle relative to the surface normal.
  • the light reflected from point B will not appear as bright to the camera because of the greater angle relative to the surface normal.
  • In FIG. 6B, the light reflected from point B will appear brighter to the camera because of the small angle relative to the surface normal.
  • the light reflected from point A will not appear as bright to the camera.
  • each light source is directed onto the object from a different source/location.
  • red light can be illuminated from a first light source 335 and blue light can be illuminated from a second light source 340.
  • When the light sources are combined in an image, the light can thereafter be separated based on a Bayer mask pattern.
  • the intensities of reflected light can then be quantified and analyzed.
  • From the blue image intensity and red image intensity, one can determine the surface orientation of each location (i.e. pixel) on the image.
  • After recording an image, the mixture of blue light (450 nm) and red light (650 nm) wavelengths is detected/photographed by the camera.
  • The wavelengths are extracted at each location, with the blue filter and red filter used at alternate locations (Bayer pattern) of the image pixels.
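As an illustration of that extraction, a minimal sketch that pulls separate red and blue images out of a raw Bayer frame; the RGGB layout and the nearest-neighbour fill are assumptions of the example, not requirements of the system.

```python
import numpy as np

def split_bayer_rggb(raw: np.ndarray):
    """Extract a red image and a blue image from a raw RGGB Bayer frame.
    Red photosites sit at even rows/even columns and blue photosites at
    odd rows/odd columns; each sample is replicated into its 2x2 block
    (nearest-neighbour fill) purely for illustration."""
    h, w = raw.shape
    red = np.kron(raw[0::2, 0::2].astype(float), np.ones((2, 2)))[:h, :w]
    blue = np.kron(raw[1::2, 1::2].astype(float), np.ones((2, 2)))[:h, :w]
    return red, blue
```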
  • This principle is further illustrated in FIGS. 7A, 7B, 7C and 7D.
  • the four figures depict the principle of the invention, wherein two light sources (one red, one blue) can be analyzed separately using a color filter. Two images can be obtained and analyzed as light is illuminated onto an object with a flat surface. The amount of reflected light will be the same in both examples (FIG. 7A and FIG. 7B) as the intensity of illuminated light and the incident angle (θ) remain the same. Referring to FIG. 7A and FIG. 7B, based on the surface orientation to a line, there are two possible solutions; either solution (602 or 603) can be the correct solution.
  • the amount of reflected light will be the same if the angle (α) remains constant as depicted. That is, for a Lambertian surface, the amount of reflected light will be the same if the intensity of light and the incident angle are equal.
  • solution (604 or 605) can be the correct solution.
  • solutions 602 and 604 will be closest to each other. Taking the average of the two angles gives the orientation of the surface.
  • Solutions 603 and 605 are both outliers and hence, neither is a correct solution.
  • FIG. 8A and FIG. 8B are images that have been separated by a color filter (although they are presented in black and white).
  • the left and right sides of the images will show stronger blue intensity or stronger red intensity due to the arrangement of the blue and red light respectively.
  • the detector can distinguish the blue and red intensities by their wavelength difference. The result is a single unmodified image containing the mixture of blue and red intensities.
  • the blue and red intensity information (following Lambert’s law) of the device is retained for processing.
  • Shading on the two captured images indicates the normal vector of the surface of the object.
  • an evenly lit image (FIG. 8C) can be reconstructed from the two images (FIG. 8A and FIG. 8B). The texture and shadings with lights shining from the right are evident in the photo.
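A minimal sketch of this reconstruction, together with the low-frequency filtering described for FIG. 8D; it assumes red and blue are the two separated images as float arrays and uses a Gaussian blur as a stand-in for the unspecified low-pass filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct_even_image(red: np.ndarray, blue: np.ndarray):
    """Average the oppositely lit red and blue images to cancel the
    directional shading (cf. FIG. 8C), then keep only the low spatial
    frequencies that carry the surface topography (cf. FIG. 8D)."""
    even = (red + blue) / 2.0
    topography = gaussian_filter(even, sigma=8.0)  # sigma is illustrative
    return even, topography
```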
  • It is possible to use edge detection on FIG. 8A and FIG. 8B to localize image processing to the device package area by masking out the background. This will reduce the time needed for the calculation. Further, an additional mask (to remove a ball or other component) can be added to speed up the calculation. Working within the masked package area, the average orientation of every column of pixels can be calculated.
  • the average orientation of the object can be determined by calculating the “average orientation of a column of pixels.” Further, the device warpage in the orientation image can be filtered by calculating the “difference in orientation” of each pixel’s surface orientation to its column average. This can be important as each object, though substantially planar, can be warped to a degree and/or can be photographed while it is not completely flat.
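A minimal sketch of that warpage filter, assuming orientation is a 2-D array of per-pixel surface orientations with NaN marking masked-out background pixels:

```python
import numpy as np

def filter_warpage(orientation: np.ndarray) -> np.ndarray:
    """Remove gross device warpage by subtracting each pixel's
    column-average orientation, leaving the local "difference in
    orientation" used for defect detection. NaNs (masked background)
    are ignored when computing the column averages."""
    column_mean = np.nanmean(orientation, axis=0, keepdims=True)
    return orientation - column_mean
```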
  • Examples of further image processing include scanning a line from left to right, where the shape of the impulse indicates an indentation 410 on the surface, as shown in FIG. 9A, or a protrusion 420 on the surface, as shown in FIG. 9B.
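One way to classify those impulses in code, assuming profile holds the per-pixel “difference in orientation” along a single scanned row and threshold is the significance cutoff (both names are illustrative):

```python
import numpy as np

def classify_impulse(profile: np.ndarray, threshold: float) -> str:
    """Classify a left-to-right line scan: a surface falling then rising
    reads as an indentation (FIG. 9A), rising then falling as a
    protrusion (FIG. 9B)."""
    sig = np.where(profile > threshold, 1,
                   np.where(profile < -threshold, -1, 0))
    swings = sig[sig != 0]
    if len(swings) >= 2 and swings[0] < 0 < swings[-1]:
        return "indentation"
    if len(swings) >= 2 and swings[0] > 0 > swings[-1]:
        return "protrusion"
    return "flat"
```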
  • the three dimensional (3D) topography map of the image is generated from the two (i.e. red and blue) images. This is depicted in FIG. 10. Defects in the object cause subtle changes in the light intensity. The intensity changes reflect the direction to the light source. The defects can be highlighted and overlaid on a grayscale image.
  • the set up and design can capture small defects such as indentations and protrusions on still or moving objects.
  • the device is sensitive enough to detect defects down to the micron level at high speed.
  • an integrated circuit (IC) package is inspected for flaws and/or irregularities. Individual pixels on a surface of the object are imaged and the respective intensity of reflected light (red and blue) is measured. The surface angle of each pixel is determined to detect the presence of deviations from the flat surface. A protrusion or indentation can be classified by size and/or depth. If the area and/or depth is outside a set of parameters, the package can be classified as defective.
  • the system is capable of operating as part of an assembly line or independently as a quality control or inspection device.
  • a red light source and a blue light source illuminate an object.
  • the light sources can be arranged on a single flat circuit board so that the object can be illuminated from desired angles.
  • the object moves across the stage or field of view and is photographed as it passes through a field of view.
  • the object can be photographed while stationary.
  • the camera records the images for inspection while the object is illuminated as part of a rapid, high volume screening process.
  • separate images can be regenerated by splitting the captured color image into different colors based on the different light colors (i.e. a color filter) or Bayer mask pattern. Specifically, the captured single image can be split into a blue image and a red image.
  • the regenerated blue and red images can be compiled and compared with one another to reveal discrepancies based on differences in intensities of reflected light that are quantified and analyzed. This can be done for a series of pixels, whereby pixels are analyzed in series of rows (i.e. a scan from left to right) on each image. An average orientation is determined for the pixels. Each pixel is then characterized individually. Deviations from Lambert’s law (as described below) will indicate if the surface, where a pixel is located, is flawed or irregular. In this regard, the two images (red and blue) can be compared to determine the location of the flaw or irregularity and further characterize it.
  • a pixel can be designated as positive or negative, depending on its orientation.
  • A “difference in orientation” that is positive indicates the next pixel (i.e. a scan from left to right) is higher.
  • a negative value indicates the next pixel is lower.
  • For a “difference in orientation” above a threshold, the next pixel height is increased by one.
  • For a “difference in orientation” below a threshold, the next pixel height is decreased by one.
  • the next step is converting the difference in height of every pixel into a grayscale.
  • the baseline will be a grayscale value of 128.
  • the higher surface will be greater than 128 while the lower surface will be less than 128.
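A minimal sketch of this height accumulation and grayscale conversion, assuming diff is the per-pixel “difference in orientation” array; the ±1 steps and the 128 baseline follow the description above:

```python
import numpy as np

def height_to_grayscale(diff: np.ndarray, threshold: float) -> np.ndarray:
    """Accumulate +1/-1 height steps wherever the orientation difference
    crosses the threshold, then offset by the 128 baseline so protrusions
    appear bright and indentations dark in an 8-bit image."""
    steps = np.where(diff > threshold, 1,
                     np.where(diff < -threshold, -1, 0))
    height = np.cumsum(steps, axis=1)  # integrate left to right
    return np.clip(128 + height, 0, 255).astype(np.uint8)
```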
  • Using an 8-bit image representation, one can use blob analysis to detect bright blobs and dark blobs.
  • the difference between the average gray value of a blob and the background will be the contrast.
  • its length, width, area, contrast and shape can be analyzed using an image processing algorithm.
  • edge detection e.g. on FIG. 8A and FIG. 8B
  • Additional masks e.g. to remove a ball or component
  • the image can be calibrated to calculate the length, width and area.
  • a bright blob will be categorized as a protrusion with a calculated length, width and area.
  • a dark blob will be categorized as an indentation with a calculated length, width and area.
  • the object can be categorized as “good” or “defective.”
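A minimal sketch of the blob analysis using SciPy's connected-component labelling, assuming gray is the 8-bit height image from the previous steps; the margin parameter and the bounding-box length/width measures are illustrative choices, not the patent's.

```python
import numpy as np
from scipy import ndimage

def find_blobs(gray: np.ndarray, margin: int = 10):
    """Label bright and dark blobs in the height image and report size,
    shape and contrast for each; polarity gives the classification
    (bright -> protrusion, dark -> indentation)."""
    background = float(gray.mean())
    results = []
    for mask, kind in ((gray > 128 + margin, "protrusion"),
                       (gray < 128 - margin, "indentation")):
        labels, count = ndimage.label(mask)
        for i, sl in enumerate(ndimage.find_objects(labels), start=1):
            region = labels[sl] == i          # pixels of this blob only
            results.append({
                "kind": kind,
                "length": sl[1].stop - sl[1].start,  # bounding-box width
                "width": sl[0].stop - sl[0].start,   # bounding-box height
                "area": int(region.sum()),
                "contrast": abs(float(gray[sl][region].mean()) - background),
            })
    return results
```

With a calibrated pixel size, the reported length, width and area convert directly to physical units, and thresholds on those values yield the “good” or “defective” decision described above.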
  • the detected location of an irregularity/defect will be the actual location.
  • Other devices rely on a cast shadow, in which case the shadow is not at the actual defect location.

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention includes a system and method for obtaining three dimensional images of the surfaces of objects to detect small flaws and irregularities. An image of an object is captured from two colored light sources. The image is split into red and blue channels. Defects such as protrusions or irregularities can thereafter be identified and visualized based on Lambert's Cosine Law. The system is well suited for industrial uses that require a high volume of objects to be rapidly inspected for defects as small as a few microns.

Description

A SYSTEM FOR OBTAINING 3D IMAGES OF
OBJECTS AND A PROCESS THEREOF
TECHNICAL FIELD
[0001] The present invention relates to the inspection of small components for defects, and more specifically, to a system and method for rapidly obtaining three-dimensional (3D) images of surface features of objects to detect small flaws or irregularities.
BACKGROUND
[0002] Stereo vision is the extraction of three dimensional (“3D”) information from two or more digital images. By comparing information about a scene from two vantage points, 3D information can be extracted by examination of the relative positions of objects in the two panels. The relative depth information can be obtained in the form of a disparity map, which encodes the difference in horizontal coordinates of
corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location.
[0003] 3D technology is particularly important to many industrial applications. In high-precision manufacturing, it is often necessary to visually inspect objects to ensure that there are no flaws or irregularities. 3D vision can be essential as the inspection involves examining small but critical features on each component. For example, Automated Optical Inspection (AOI) systems are often used to analyze and evaluate electrical circuits, including flat panel displays, integrated circuits, chip carriers and printed circuit boards.
[0004] In traditional stereo vision, two cameras, displaced horizontally from one another, are used to obtain two differing views on a scene, in a manner similar to human binocular vision. By comparing these two images, the relative depth information can be obtained in the form of a disparity map, which encodes the difference in horizontal coordinates of corresponding image points. However, traditional stereo vision is often unsuited for industrial use because it is slow and has limited sensitivity. Alternative methods have been developed in an effort toward improving 3D imaging technology.
[0005] For example, US 20140028833 A1 describes an inspection device that uses a cluster of lights to capture multiple images of a component. The images are combined to provide a single image which allows defects to be more easily identified. However, this device has limitations. It will not detect shallow indentations or protrusions on the surface that do not cast a prominent shadow. Further, only defects parallel to the light will cast a prominent shadow and the location of a shadow is different from the location of the defect.
[0006] US 7295720 B2 uses a single lens and multiple flash units to render a “stylized” image of an object such as an electrical component. However, methods that render a non-photorealistic stylized image based on detecting cast shadows to enhance the depth in the image require a silhouette edge and are not suitable for detection of surface details without high contrast edges. Further, the rendering of stylized images reduces texture and features and is generally ineffective on surfaces without edge discontinuities.
[0007] A need, therefore, exists for a system and method to overcome the shortcomings that come with conventional 3D imaging systems. Specifically, there is a need for an improved system and method for obtaining 3D images of the surfaces of objects. The system should be capable of detecting small flaws or irregularities
(especially in the vertical or “z” plane) in a large field of view. It should also be capable of operating at a high speed so that many objects can be inspected or screened in a short period of time.
SUMMARY OF THE INVENTION
[0008] The following summary is provided to facilitate an understanding of some of the innovative features unique to the disclosed embodiment and is not intended to be a full description. A full appreciation of the various aspects of the embodiments disclosed herein can be gained by taking into consideration the entire specification, claims, drawings, and abstract as a whole.
[0009] The invention includes a system for obtaining a three-dimensional image of surface features of a substantially flat object, comprising (a) a camera, (b) a blue light source, (c) a red light source and (d) a computer/processing unit. The blue light source illuminates the object from a first location and the red light source illuminates the object from a second location. The camera can capture a single image of the object to be subsequently separated into a red light image and a blue light image. The
computer/processing unit determines the intensity of reflected light of pixels. The intensity of reflected red light and the intensity of reflected blue light can be analyzed based on Lambert’s Cosine Law to detect irregularities or surface flaws on the object.
[0010] The light sources can be a blue LED and a red LED. The ratio of light from the blue light source and the red light source can be adjustable from 1:99 to 99:1. The object to be imaged/photographed can be a wafer or an integrated circuit (IC) package. Further, the camera can be a CCD camera or a CMOS camera.
[0011] The invention also includes a process for detecting and/or visualizing a flaw or irregularity on the surface of an object using a camera comprising the steps of (a) illuminating an object with a blue light source from a first area, (b) illuminating an object with a red light source from a second area, (c) capturing a color image of the surface of the object, (d) separating the image into a red light image and a blue light image, (e) analyzing the intensity of light pixels on the red light image, (f) analyzing the intensity of light pixels on the blue light image, (g) determining a surface angle for pixels based on an analysis of the intensity of reflected red light and the intensity of reflected blue light and (h) generating a three-dimensional image of surface features of the object based on surface angles of pixels. The analysis in step (g) can use the principles of Lambert’s Cosine Law. The process can include the additional step of generating a three-dimensional map to visualize three-dimensional features. A flaw or irregularity on the surface of the object can be an indentation and/or protrusion feature. Further, the red light image and the blue light image can be subjected to digital signal processing. The object that is photographed/imaged can be a wafer or an integrated circuit (IC) package.
[0012] The invention also includes a method of obtaining a three-dimensional image of surface features of an object, comprising the steps of (a) providing a blue light source from a first area of incidence, (b) providing a red light source from a second area of incidence, (c) obtaining a color image of the surface of the object, (d) separating the color image into a red light image and a blue light image using a computer and/or RGB filter, (e) analyzing the intensity of rows of pixels of reflected light on the red light image, (f) analyzing the intensity of rows of pixels of reflected light on the blue light image, (g) identifying a difference in orientation among pixels wherein a positive value indicates the next pixel is higher and a negative value indicates the next pixel is lower, (h) determining the topology of the surface by fitting lines through pixels and (i) identifying irregularities and/or surface flaws on the object. The principles of Lambert’s law can be used for the analysis in the step (g). The method can include the additional step of generating a three-dimensional map. Surface features can include indentation features on a surface of the object and/or protrusion features. The red light image and the blue light image can be subjected to digital signal processing.
INTRODUCTION
[0013] The invention includes a system and method for obtaining three dimensional (3D) images of the surfaces of objects to detect minor flaws and irregularities. Images can be captured from two white light sources or two colored light sources. 3D features are obtained by analyzing and comparing the images using an image processing algorithm. The system is well suited for industrial uses that require a high volume of objects to be rapidly inspected for defects as small as a few microns in any plane.
[0014] A first aspect of the invention is a system for obtaining 3D images of an object comprising a camera, at least two light sources, a diffuser and a beam splitter. The camera captures light from the light sources when each projects a gradient across the object to provide respective images. The 3D features of the object are revealed by combining and processing the images.
[0015] A second aspect of the invention is a system for obtaining a 3D image of an object wherein a camera captures at least two images using at least two light sources from different positions.
[0016] A third aspect of the invention is a system for obtaining a 3D image of an object wherein a single image is captured using colored light sources. Multiple images can then be extracted using a color filter.
[0017] A fourth aspect of the invention is a process for obtaining 3D images of an object comprising the steps of, capturing at least one image of the object, wherein the object is illuminated by light from at least two light sources, each one projecting a different illumination gradient across the object to provide respective images and processing 3D features of at least two captured images using image processing software.
[0018] A fifth aspect of the invention is a system for obtaining 3D images of an object using multiple sources of colored light. A camera captures an image that is subsequently separated into two images based on the wavelengths of light. Each pixel is analyzed and defects are located based on the principles of Lambert’s Law.
BRIEF DESCRIPTION OF THE FIGURES
[0019] The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the disclosure.
[0020] FIG. 1 depicts an arrangement of components of a system for obtaining 3D images of small irregularities or flaws on an object, according to one aspect of the invention.
[0021] FIG. 2 is a flow chart that describes a process for obtaining 3D images of an object from a camera, according to one aspect of the invention.
[0022] FIG. 3 depicts an arrangement of components of a system for obtaining 3D images of small irregularities or flaws on an object, according to one aspect of the invention.
[0023] FIG. 4A is a flow chart that describes a process for obtaining 3D images of an object from a camera, according to one aspect of the invention.
[0024] FIG. 4B is a flow chart that describes a process of analyzing pixels based on Lambert’s Cosine Law, according to one aspect of the invention.
[0025] FIG. 5 depicts the illumination of multiple points and the resultant variation in intensities, according to the principle of Lambert’s Cosine Law.
[0026] FIG. 6A depicts the illumination of two points on the surface of an object, according to one aspect of the invention.
[0027] FIG. 6B depicts the illumination of two points on the surface of an object, according to one aspect of the invention.
[0028] FIG. 7A depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
[0029] FIG. 7B depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
[0030] FIG. 7C depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
[0031] FIG. 7D depicts the illumination principle of Lambert’s Cosine Law that is utilized by the invention.
[0032] FIG. 8A shows a captured image separated based on a color filter, according to one aspect of the invention.
[0033] FIG. 8B shows a captured image separated based on a color filter, according to one aspect of the invention.
[0034] FIG. 8C shows an image reconstructed from the two images of FIGS. 8A and 8B, according to one aspect of the invention.
[0035] FIG. 8D is the image of FIG. 8C, after filtering the high frequency texture and retaining the low frequency, showing features representing the surface topography suitable for digital signal processing, according to one aspect of the invention.
[0036] FIG. 9A is a graph showing a scan of a line from left to right, the shape of an impulse indicating an indentation on a surface, according to one aspect of the invention.
[0037] FIG. 9B is a graph showing a scan of a line from left to right, the shape of an impulse indicating a protrusion on a surface, according to one aspect of the invention.
[0038] FIG. 10 depicts a three dimensional (3D) topography map generated from two images.
DETAILED DESCRIPTION OF THE INVENTION
Definitions
[0039] While the invention is primarily described for use for imaging electrical and computer components, it is understood that the invention is not so limited and can be used in the screening/imaging of other components as well as in other various industries. The invention is conducive to inspecting small objects that have one or more flat surfaces. Other applications include, for example, but not limited to, using the invention in aerospace, automotive, computer, biotechnology and pharmaceutical industries.
[0040] Reference in this specification to "one embodiment/aspect" or "an embodiment/aspect" means that a particular feature, structure, or characteristic described in connection with the embodiment/aspect is included in at least one embodiment/aspect of the disclosure. The use of the phrase "in one embodiment/aspect" or "in another embodiment/aspect" in various places in the specification is not necessarily referring to the same embodiment/aspect, nor are separate or alternative embodiments/aspects mutually exclusive of other embodiments/aspects. Moreover, various features are described which may be exhibited by some embodiments/aspects and not by others. Similarly, various requirements are described which may be requirements for some embodiments/aspects but not other embodiments/aspects. Embodiment and aspect can in certain instances be used interchangeably.
[0041] The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way.
[0042] Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein. Nor is any special significance to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification.
[0043] Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
[0044] The term “8-bit”, in computer architecture, refers to integers, memory addresses, or other data units that are 8 bits (1 octet) wide. Similarly, 128-bit integers, memory addresses, or other data units are those that are 128 bits (16 octets) wide.
[0045] The term “Bayer filter” or “Bayer filter mosaic” refers to a color filter array (CFA) for arranging RGB color filters on a square grid of photo sensors. Its particular arrangement of color filters is commonly used in single-chip digital image sensors in digital cameras, camcorders, and scanners to create a color image. The filter pattern is 50% green, 25% red and 25% blue, hence it is also called BGGR, RGBG, GRBG or RGGB.
[0046] The term “bar light” refers to a light source with a series of LEDs arranged in lines or bars, used for direct lighting of a substrate. In a preferred design, LEDs are arranged at high density on a single flat circuit board so that an object can be illuminated from a desired angle.
[0047] The term “beam splitter” refers to a mirror or prism or a combination of the two that is used to divide a beam of radiation into two or more parts. A beam splitter splits an incident beam of light into two output beams which diverge at a fixed angle. The letters “r” and “t” can denote the reflectance and transmittance respectively along a particular path through the beam-splitter.
[0048] The term “binocular disparity” refers to the difference in coordinates of similar features within two stereo images.
[0049] The term “CCD camera” or “three-CCD camera” is a camera whose imaging system uses three separate charge-coupled devices (CCDs), each one taking a separate measurement of the primary colors, red, green, or blue light. Light coming into the lens is split by a trichroic prism assembly, which directs the appropriate wavelength ranges of light to their respective CCDs.
[0050] The term “CMOS” or “complementary metal-oxide semiconductor” camera refers to a camera that uses an integrated circuit design on a printed circuit board (PCB) to create a digital image.
[0051] The term “continuously variable beam splitter” refers to a beam splitter that allows the user to continuously vary the transmitted intensity of a linearly polarized beam of light. An attenuator accomplishes this by using a zero-order half-wave plate in a rotation mount and a polarizing beam splitter cube. This combination allows it to achieve split ratios of 1:99 to 99:1 for P:S polarized light.
[0052] The term “conjugate points” refers to the object point and image point of a lens system. Because all of the light paths from the object to the image are reversible, it follows that if the object were placed where the image is, an image would be formed at the original object position.
[0053] The term “diffuser” refers to a device (as a reflector) for distributing the light of a lamp evenly.
[0054] The term “image rectification” refers to a transformation process used to project two or more images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane.
[0055] The term “Lambert’s Cosine Law” or “Lambert’s Law” refers to the principle that the radiant intensity or luminous intensity observed from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle θ between the direction of the incident light and the surface normal. A material that obeys Lambert's cosine law is said to be an isotropic diffuser; it has the same sterance (luminance, radiance) in all directions. A Lambertian surface provides uniform diffusion of the incident radiation such that its radiance or luminance is the same in all directions from which it can be measured. The Lambert cosine law formula can be represented as I = I_i · k_d · cos θ = I_i · k_d · (N · L), where L is the unit light vector, N is the unit normal vector, I is the intensity of the reflected light, I_i is the intensity of the incident light, and k_d is the diffuse reflectance.
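As a numeric illustration of the formula (the names mirror the definition above; the clamp to zero for light arriving from behind the surface is an assumption of the sketch):

```python
import numpy as np

def lambert_intensity(i_incident: float, k_diffuse: float,
                      normal: np.ndarray, light: np.ndarray) -> float:
    """I = I_i * k_d * cos(theta) = I_i * k_d * (N . L), with N and L
    unit vectors; negative dot products are clamped to zero."""
    return i_incident * k_diffuse * max(float(np.dot(normal, light)), 0.0)

# Example: light 30 degrees off the surface normal reflects about 87%.
n = np.array([0.0, 0.0, 1.0])
l = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(lambert_intensity(1.0, 1.0, n, l))  # ~0.866
```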
[0056] The term "light emitting diode" or "LED" refers to a semiconductor device that emits visible light when an electric current passes through it. In most LEDs, the light is monochromatic, occurring at a single wavelength.
[0057] The term "optical attenuator" refers to a device used to reduce the power level of an optical signal. The basic types of optical attenuators are fixed, step-wise variable, and continuously variable.
[0058] The term "ramp gradient" refers to a gradient of colored light, which is typically linear across a region.
[0059] The term "white light fringes Michelson interferometer" refers to a common configuration for optical interferometry. Using a beam splitter, a light source is split into two arms. Each of these is reflected back toward the beam splitter, which then combines their amplitudes interferometrically. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera. Depending on the interferometer's particular application, the two paths may be of different lengths or include optical materials or components under test.
[0060] The Twyman-Green interferometer is a variation of the Michelson interferometer used to test small optical components. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator.
[0061] Other technical terms used herein have their ordinary meaning in the art in which they are used, as exemplified by a variety of technical dictionaries. The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment; they are not intended to limit the scope thereof.

Description of Preferred Embodiments
[0062] The present invention relates to a system and method for obtaining 3D images of surfaces, using a single camera. Three-dimensional (3D) images of objects can be obtained using one of two methods.
[0063] A first approach is to capture a pair of images. A first image is taken when an object is illuminated with a first source of light. A second image is taken when the object is illuminated with a second source of light. Because the first and second light sources illuminate the object from different directions, minor flaws and/or imperfections can be detected by comparing the first image and the second image.
[0064] 3D features obtained from each captured image can be reprocessed with software. Defects and/or irregularities can be identified and visualized based on the principles of Lambert’s Cosine Law and an image processing algorithm. Images suitable for obtaining surface topography can also be produced from the two images using digital signal processing.
[0065] In a second approach, a single image is captured under two colored light sources. The image is split into red and blue channels based on the different wavelengths of light. Each channel image is analyzed as described for the first approach. This approach is faster because only a single image is taken, with two steady light sources.
Further, the object can be photographed while it is moving across the field of view. The system is well suited for industrial uses that require a high volume of objects to be rapidly inspected for defects as small as a few microns.
[0066] FIG. 1 depicts an arrangement of the components (system 100) according to one aspect of the invention. The system 100 includes a camera 150, a first light source 135 and a second light source 140 for illuminating an object 105. Other components include a diffuser 120 and a beam splitter 115.

[0067] The two light sources (135 and 140) transmit light through the diffuser 120, and then to the object 105 via the beam splitter 115. The beam splitter 115 functions to direct the illumination onto the object 105. The camera 150 can be a monochrome or color camera. As shown, the light sources are directed to the object at the same (or similar) angle of incidence.
[0068] In one embodiment of the invention, the camera 150 captures a first image when light from the first light source 135 projects a first illumination gradient across the object 105 from a first location. The camera 150 captures a second image when light from the second light source 140 projects a second illumination gradient across the object 105 from a second location. The first image and second image are combined for analysis of 3D surface features.
[0069] FIG. 2 is a flow chart 200 that describes the steps in a process according to one aspect of the invention. Light from a first light source is projected upon an object (step 205). The camera captures a first image of the object (step 210). Light from a second light source is projected upon the object (step 215). The camera captures a second image of the object (step 220). Thereafter, the two images are processed using an algorithm to derive the surface topography of the object (step 225). The images can be used to detect surface features such as indentations or imperfections. The system is most conducive for use with planar or substantially planar objects.
[0070] FIG. 3 depicts another arrangement of the components 300 of the invention. Here, the object 105 to be inspected can be stationary or moving across the stage or field of view. In this arrangement, the camera 150 takes a single image of the object 105. Accordingly, objects can be photographed at a greater speed. Objects can be passed across the stage and rapidly photographed according to the user's needs and the capabilities of the system. A camera records the images for inspection as part of a rapid, high volume screening process.

[0071] In this arrangement, the two light sources illuminate the object at the same time. In one embodiment, the two light sources may comprise a red light 335 and a blue light 340 to illuminate the object. As shown, the red light has a diffuser 325 and the blue light has a diffuser 330. A central diffuser 120 and a beam splitter 115 can also be included. In an alternative design, the ratio of the blue light source to the red light source can be adjustable (i.e. from 1:99 to 99:1).
[0072] The two colored lights 335 and 340 transmit light through the central diffuser 120, and then to the object 105 via the beam splitter 115. The light illuminates the object 105. The beam splitter 115 functions to direct the illumination onto the object 105. A color camera is necessary because of the use of blue and red light.
[0073] Diffuse surfaces, such as a wafer or integrated circuit (IC) package, follow Lambert's Cosine Law. This means that reflected light/energy from a small surface area in a particular direction is proportional to the cosine of the angle between that direction and the surface normal. Thus, when a radiating surface has a radiance that is independent of the viewing angle, the surface is said to be perfectly diffuse or a Lambertian surface.
[0074] With this design, slight deformations on the surface scatter the light differently from the surrounding regions. Further, the smaller incident angles of light allow the light sources to be smaller and further away from the object to be inspected. This allows more space for mechanical handling of the object and increases the flexibility and utility of the system.
[0075] Unlike other systems, the defects do not need to be aligned with the lights and shadow casting is not needed. The changes in intensity in the image correspond to the rate of change of the surface. Thereafter, the defects can be detected with currently available computers and related software. This allows for a system that is faster, more sensitive and more predictable than conventional designs.

[0076] FIG. 4A is a flow chart 400 that describes the steps in a process according to one aspect of the invention. Light from two sources is projected upon a substantially flat object (step 410). The camera captures an image of the object (step 420). The image can then be split into a red image and a blue image (step 430). Thereafter, the two images can be processed using an algorithm to derive the surface topography of the object (step 440). A 3D map can be generated (step 450) to visualize and detect small surface features such as indentations or imperfections. The system is most conducive for use with planar or substantially planar objects.
[0077] FIG. 4B is a flow chart that describes step 440 in detail. In this step, the principle of Lambert's Cosine Law is used to detect flaws and/or imperfections on the surface of a substrate. In a preferred method, each pixel on an image is analyzed to determine the intensity of reflected light (step 510). The intensity of red light and the intensity of blue light can be quantified separately. Based on the intensity, the surface angle can be calculated from the relationship defined by Lambert's Cosine Law (step 520).

[0078] The surface angle can be determined for the red light and the blue light separately. For each calculation, there can be two mathematically correct solutions. However, by comparing the two sets of values, the actual solution becomes apparent (step 530); only one solution will lie within the area of the substrate. This process is repeated for each pixel (step 540). Thereafter, the data can be compiled for the entire substrate (step 550). A sketch of the candidate-angle computation follows.
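As a minimal sketch of step 520, assuming normalized intensities and a known incident intensity and diffuse reflectance (the function and parameter names are ours): inverting Lambert's law gives the angle between the light and the surface normal, and tilting the normal to either side of the light direction yields the two mathematically correct solutions mentioned above.

```python
import numpy as np

def candidate_surface_angles(pixel_intensity, i_incident, k_d, light_angle_deg):
    """Invert I = I_i * k_d * cos(theta) to recover theta, then return the
    two candidate surface tilts: the normal may lie theta degrees to either
    side of the light direction (angles measured here from the camera axis)."""
    ratio = np.clip(pixel_intensity / (i_incident * k_d), 0.0, 1.0)
    theta = np.degrees(np.arccos(ratio))
    return light_angle_deg - theta, light_angle_deg + theta

# A pixel at half the maximum reflected intensity under a light 20 degrees
# off-axis admits two tilts: 20 - 60 = -40 degrees or 20 + 60 = +80 degrees.
print(candidate_surface_angles(0.5, 1.0, 1.0, 20.0))
```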
[0079] FIG. 5 depicts the principle of Lambert's Cosine Law. The irradiance or illuminance falling on a surface varies as the cosine of the incident angle. The perceived measurement area orthogonal to the incident flux is reduced at oblique angles, causing light to spread out over a wider area than it would if perpendicular to the surface. The arrows depict the illumination of light from different angles with the percentage of light that is reflected. A diffuse Lambertian surface obeys the cosine law by distributing reflected energy in proportion to the cosine of the reflected angle. For light illuminated perpendicular to the surface (i.e. 0°), 100% of the light will be reflected. For light illuminated from an angle of 85°, just 9% will be reflected because much of the light is dispersed.
[0080] This principle is used in the invention, as depicted in FIG. 6A and FIG. 6B.
A light source 510 illuminates two points on the surface of an object. A camera 150 takes an image of the object. The intensity of light reflected from the surface will vary according to the angle of incident light.
[0081] In FIG. 6A, light is emitted onto point A 560 and point B 570 from a single light source 510. The light reflected from point A will appear brighter to the camera because of the small angle relative to the surface normal. The light reflected from point B will not appear as bright to the camera because of the greater angle relative to the surface normal. Similarly, in FIG. 6B, the light reflected from point B will appear brighter to the camera because of the small angle relative to the surface normal. The light reflected from point A will not appear as bright to the camera. By measuring the reflected energy, one can calculate the angle relative to the surface normal. When each location of the surface is illuminated by two separate light sources with different angles, the surface orientation of each location on the image can be solved.
[0082] This principle is depicted in FIG. 3, wherein each light source is directed onto the object from a different source/location. For example, red light can be illuminated from a first light source 335 and blue light can be illuminated from a second light source 340. Although the light sources are combined in an image, the light can thereafter be separated based on a Bayer mask pattern. The intensities of reflected light can then be quantified and analyzed. By solving the blue image intensity and red image intensity together, one can determine the surface orientation of each location (i.e. pixel) on the image.
[0083] After recording an image, the mixture of blue light (450 nm) and red light (650 nm) wavelengths is detected/photographed by the camera. Next, there is a conversion based on the Bayer mask pattern at the detector device. The different wavelengths are extracted at each location, with the blue filter and red filter occupying alternate locations (the Bayer pattern) of the image pixels.
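This extraction can be sketched as follows, assuming an RGGB Bayer tiling (the actual tiling depends on the sensor); the half-resolution subsampling shown here stands in for a full demosaic.

```python
import numpy as np

def split_bayer_red_blue(raw):
    """Pull the red and blue subimages out of a raw Bayer-mosaic frame.

    Under an RGGB tiling, red photosites sit at even rows/even columns and
    blue photosites at odd rows/odd columns; each subimage therefore has
    half the resolution of the raw frame.
    """
    red = raw[0::2, 0::2].astype(np.float32)   # 650 nm channel
    blue = raw[1::2, 1::2].astype(np.float32)  # 450 nm channel
    return red, blue
```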
[0084] This principle is further illustrated in FIG. 7A, 7B, 7C and 7D. The four figures depict the principle of the invention, wherein two light sources can be analyzed separately using a color filter. Two images can be obtained and analyzed as light from two sources (one red, one blue) is illuminated onto an object with a flat surface. The amount of reflected light will be the same in both examples (FIG. 7A and FIG. 7B) as the intensity of illuminated light and the incident angle (θ) remain the same. Referring to FIG. 7A and FIG. 7B, based on the surface orientation to a line, there are two possible solutions; either solution (602 or 603) can be the correct solution.
[0085] Similarly, in FIG. 7C and FIG. 7D, the amount of reflected light will be the same if the angle (α) remains constant as depicted. That is, for a Lambertian surface, the amount of reflected light will be the same if the intensity of light and the incident angle are equal. There are two possible solutions; either solution (604 or 605) can be the correct solution.
[0086] This resolves the ambiguity in finding the surface orientation to the line: solutions 602 and 604 will be closest to each other, and taking the average of the two angles gives the orientation of the surface. Solutions 603 and 605 are both outliers and hence neither is a correct solution.
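Step 530 can then be sketched as picking the red/blue candidate pair that agrees most closely and averaging it, per the reasoning above (the function names are ours):

```python
from itertools import product

def resolve_orientation(red_candidates, blue_candidates):
    """Choose the pair of candidate tilts (one from the red image, one from
    the blue image) that agree most closely -- the counterparts of solutions
    602 and 604 -- and return their average; the remaining candidates
    (the counterparts of 603 and 605) are outliers and are discarded."""
    a, b = min(product(red_candidates, blue_candidates),
               key=lambda pair: abs(pair[0] - pair[1]))
    return 0.5 * (a + b)

# Red yields tilts (-12.0, +30.0) and blue yields (-11.4, +45.0);
# -12.0 and -11.4 nearly coincide, so the surface tilt is about -11.7 degrees.
print(resolve_orientation((-12.0, 30.0), (-11.4, 45.0)))
```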
[0087] FIG. 8A and FIG. 8B are images that have been separated by a color filter (although they are presented in black and white). The left and right sides of the images show stronger blue intensity or stronger red intensity due to the arrangement of the blue and red lights respectively. The detector can distinguish the blue and red intensities by their wavelength differences. The result is a single unmodified mixture of blue and red intensity in the image. The blue and red intensity information (following Lambert's law) of the device is retained for processing.

[0088] Shading on the two captured images indicates the normal vector of the surface of the object. Using image processing software, an evenly lit image (FIG. 8C) can be reconstructed from the two images (FIG. 8A and FIG. 8B), as sketched below. The texture and shadings with lights shining from the right are evident in the photo.
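The specification does not state the exact combination used to produce FIG. 8C; a plain average of the two opposed-illumination images, as in the sketch below, is one common way to cancel the left/right shading gradients.

```python
import numpy as np

def evenly_lit(img_left, img_right):
    """Average two opposed-illumination images so that the shading gradient
    toward one light cancels the shading gradient toward the other."""
    return 0.5 * (img_left.astype(np.float32) + img_right.astype(np.float32))
```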
[0089] It is possible to use edge detection on FIG. 8A and FIG. 8B to localize image processing to the device package area by masking out the background. This will reduce the time needed for the calculation. Further, an additional mask (to remove a ball or other component) can be added to speed up the calculation. Working within the masked package area, the average orientation of every column of pixels can be calculated.
[0090] The average orientation of the object can be determined by calculating the average orientation of each column of pixels. Further, device warpage in the orientation image can be filtered out by calculating the "difference in orientation" of each pixel's surface orientation relative to its column average, as in the sketch below. This can be important, as each object, though substantially planar, can be warped to a degree and/or can be photographed while it is not completely flat.
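A minimal sketch of this warpage filter, assuming an orientation image and a boolean package mask (the names are ours):

```python
import numpy as np

def orientation_deviation(orientation, package_mask):
    """Subtract each column's average orientation, computed only inside the
    masked package area, from every pixel of that column. The slow bowing of
    a warped device cancels out; local defects remain as residuals."""
    masked = np.where(package_mask, orientation, np.nan)
    column_avg = np.nanmean(masked, axis=0)       # one average per column of pixels
    return masked - column_avg[np.newaxis, :]     # the "difference in orientation"
```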
[0091] It is also possible to regenerate other images from these two captured images by using image processing software. For example, the image shown in FIG. 8D was obtained by filtering out the high-frequency texture of the image and retaining the low-frequency features representing the surface topography, suitable for digital signal processing.
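An image like FIG. 8D can be approximated with a simple low-pass filter; a Gaussian blur is one common choice (the sigma value here is illustrative, not taken from the specification).

```python
from scipy.ndimage import gaussian_filter

def topography_component(image, sigma=8.0):
    """Suppress high-frequency texture and keep the low-frequency features
    that represent the surface topography."""
    return gaussian_filter(image.astype(float), sigma=sigma)
```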
[0092] Examples of further image processing include scanning a line from left to right, where the shape of the impulse indicates an indentation 410 on the surface, as shown in FIG. 9A. Likewise, when scanning a line from left to right, the shape of the impulse can indicate a protrusion 420 on the surface, as shown in FIG. 9B.

[0093] After image processing, the three-dimensional (3D) topography map of the image is generated from the two (i.e. red and blue) images. This is depicted in FIG. 10. Defects in the object cause subtle changes in the light intensity, and the intensity changes reflect the direction to the light source. The defects can be highlighted and overlaid onto a grayscale image.
[0094] The setup and design can capture small defects such as indentations and protrusions on still or moving objects. The device is sensitive enough to detect defects down to the micron level at high speed.
Working Example
Gradient space analysis of surface defects using a Lambertian-derived bump map
[0095] In this example, an integrated circuit (IC) package is inspected for flaws and/or irregularities. Individual pixels on a surface of the object are imaged and the respective intensity of reflected light (red and blue) is measured. The surface angle of each pixel is determined to detect the presence of deviations from the flat surface. A protrusion or indentation can be classified by size and/or depth. If the area and/or depth is outside a set of parameters, the package can be classified as defective. The system is capable of operating as part of an assembly line or independently as a quality control or inspection device.
1. Light and Camera Set Up
[0096] In a preferred method of operation, a red light source and a blue light source illuminate an object. The light sources can be arranged on a single flat circuit board so that the object can be illuminated from desired angles.
[0097] The object moves across the stage and is photographed as it passes through the field of view. In the alternative, the object can be photographed while stationary. The camera records the images for inspection while the object is illuminated as part of a rapid, high volume screening process.
2. Processing Photograph into Separate Images
[0098] From a captured image, separate images can be regenerated by splitting the captured color image into different colors based on the different light colors (i.e. a color filter) or Bayer mask pattern. Specifically, the captured single image can be split into a blue image and a red image.
3. Analysis of Light Intensity to Detect Defects
[0099] The regenerated blue and red images can be compiled and compared with one another to reveal discrepancies based on differences in intensities of reflected light that are quantified and analyzed. This can be done for a series of pixels, whereby pixels are analyzed in a series of rows (i.e. a scan from left to right) on each image. An average orientation is determined for the pixels. Each pixel is then characterized individually. Deviations from Lambert's law (as described below) will indicate whether the surface where a pixel is located is flawed or irregular. In this regard, the two images (red and blue) can be compared to determine the location of the flaw or irregularity and further characterize it.
4. Generation of 3D Map
[00100] Given that a first pixel has a virtual z-height of 0, one can scan rows of pixels to characterize the height of other pixels. The principle of Lambert’s Cosine Law is used to characterize the orientation of each pixel based on the intensity of the reflected light of said pixel.
[00101] For reference, a pixel can be designated as positive or negative, depending on its orientation. A "difference in orientation" that is positive indicates that the next pixel (i.e. in a scan from left to right) is higher. Likewise, a negative value indicates that the next pixel is lower. For a "difference in orientation" above a threshold, the next pixel height is increased by one; for a "difference in orientation" below a threshold, the next pixel height is decreased by one.
[00102] Next, by fitting a line through the virtual z-heights using linear regression, the z-height difference of every pixel from this line gives an estimated topology of the surface. This is repeated for every scan line, yielding an image of the surface topology. Thereafter, the image can be smoothed using a smoothing filter. A sketch of this accumulation and detrending follows.
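A minimal sketch of these two steps, assuming a per-pixel "difference in orientation" array for one scan line (the threshold value and function names are ours):

```python
import numpy as np

def scanline_heights(diff_orientation, threshold=0.5):
    """Accumulate virtual z-heights along one scan line: the first pixel sits
    at z = 0; above +threshold the next pixel rises one unit, below
    -threshold it drops one unit."""
    steps = np.where(diff_orientation > threshold, 1,
                     np.where(diff_orientation < -threshold, -1, 0))
    return np.concatenate(([0], np.cumsum(steps)))

def detrended_topology(heights):
    """Fit a line through the virtual z-heights by linear regression and
    return each pixel's difference from that line: the estimated topology
    of the surface along the scan line."""
    x = np.arange(len(heights))
    slope, intercept = np.polyfit(x, heights, 1)
    return heights - (slope * x + intercept)
```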
[00103] The next step is converting the difference in height of every pixel into a grayscale value. For example, using an 8-bit representation, the baseline will be a gray level of 128. A higher surface will map to values greater than 128, while a lower surface will map to values less than 128. Thereafter, using the 8-bit image representation, one can use blob analysis to detect bright blobs and dark blobs. The difference between the average gray value of a blob and the background gives the contrast. To categorize a blob further, its length, width, area, contrast and shape can be analyzed using an image processing algorithm.
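These two steps can be sketched as follows, assuming a detrended height image as input; the thresholds are illustrative, and scipy's connected-component labeling stands in for whatever blob-analysis tool is actually used.

```python
import numpy as np
from scipy import ndimage

def heights_to_grayscale(height_img):
    """Map signed height differences to 8-bit gray values around the 128
    baseline: higher surfaces above 128, lower surfaces below."""
    return np.clip(128 + np.round(height_img), 0, 255).astype(np.uint8)

def measure_blobs(gray, bright_thresh=138, dark_thresh=118):
    """Label bright blobs (protrusions) and dark blobs (indentations) and
    report each blob's area and its contrast against the 128 background."""
    blobs = []
    for mask, kind in ((gray > bright_thresh, "protrusion"),
                       (gray < dark_thresh, "indentation")):
        labels, count = ndimage.label(mask)
        for i in range(1, count + 1):
            region = labels == i
            blobs.append({"type": kind,
                          "area": int(region.sum()),
                          "contrast": float(gray[region].mean()) - 128.0})
    return blobs
```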
[00104] It is possible to use edge detection (e.g. on FIG. 8A and FIG. 8B) to localize image processing to the device package area by masking out the background. This can increase the speed of the analysis and the calculation. Additional masks (e.g. to remove a ball or component) can be added to speed up the analysis/calculation further.
[00105] For example, the image can be calibrated to calculate the length, width and area. A bright blob will be categorized as a protrusion with a calculated length, width and area. A dark blob will be categorized as an indentation with a calculated length, width and area. Based on an analysis of one or more blobs, the object can be categorized as "good" or "defective."
Advantages of the Invention:
[00106] There are several advantages to the invention, including:
1. It can detect shallow indentations or protrusions on the surface that do not cast any prominent shadows.
2. It can detect irregularities/defects that are not parallel to the light and therefore would not cast a prominent shadow. It can capture small details in the z-plane.
3. The detected location of an irregularity/defect will be the actual location. Other devices rely on a cast shadow, in which case the shadow is not at the actual defect location.
4. It is capable of rapid imaging and analysis so that a large volume of substrates can be analyzed in a short period of time. This is conducive to rapid inspection as part of an assembly or inspection process.
5. It allows for a larger field of view than conventional systems.
6. Using other methods, the incident light must be highly oblique to cast a shadow. Because the light must be very close to the inspection surface in the Z direction, these methods have limited practical use.
[00107] It will be appreciated that variations of the above disclosed and other features and functions, or alternatives thereof, may be combined into other systems or applications. Also, various unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
[00108] Although embodiments of the current disclosure have been described comprehensively and in considerable detail to cover the possible aspects, those skilled in the art would recognize that other versions of the disclosure are also possible.

Claims

What is claimed is:
1. A system for detecting irregularities and/or surface flaws on a substantially flat object, comprising:
a) a camera,
b) a processing unit,
c) a blue light source, and
d) a red light source,
wherein the blue light source illuminates the object from a first location and the red light source illuminates the object from a second location;
wherein the camera captures a color image of the object that is separated into a red light image and a blue light image by the processing unit;
wherein the processing unit determines the intensity of reflected light of pixels in the red light image and the intensity of reflected light of pixels in the blue light image;
wherein irregularities and/or surface flaws are detected based on deviations from Lambert’s Cosine Law of the intensity of reflected light of pixels in the red light image and blue light image.
2. The system of claim 1, wherein the processing unit uses irregularities and/or surface flaws to construct a three-dimensional image of the substantially flat object.
3. The system of claim 1, wherein the blue light source and the red light source are, respectively, a blue LED and a red LED.
4. The system of claim 1, wherein the ratio of light from the blue light source and the red light source is adjustable from 1:99 to 99:1.
5. The system of claim 1, wherein the object is a wafer or an integrated circuit (IC) package.
6. The system of claim 1, wherein the camera is a CCD camera or a CMOS camera.
7. A method of obtaining a three-dimensional image of surface features of an object, comprising the steps of:
a) illuminating an object with a red light source from a first area;
b) illuminating the object with a blue light source from a second area;
c) capturing an image of the object with a color camera;
d) separating the image into a red light image and a blue light image using a computer and/or RGB filter;
e) analyzing the intensity of light of pixels on the red light image;
f) analyzing the intensity of light of pixels on the blue light image;
g) determining a surface angle for pixels based on an analysis of the intensity of reflected red light and the intensity of reflected blue light; and
h) generating a three-dimensional image of surface features of the object based on surface angles of pixels.
8. The method of claim 7, wherein Lambert’s Cosine Law is used in the step of determining a surface angle for pixels based on an analysis of the intensity of reflected red light and the intensity of reflected blue light.
9. The method of claim 7, wherein the surface features comprise at least one of indentation features on a surface of the object or protrusion features on a surface of the object.
10. The method of claim 7, including the additional step of subjecting the red light image and the blue light image to digital signal processing for obtaining surface protrusion and indentation features.
11. The method of claim 7, wherein the object is a wafer or an integrated circuit (IC) package.
12. A process for detecting irregularities and/or surface flaws on a surface of an object using a camera comprising the steps of:
a) providing a blue light source from a first area of incidence;
b) providing a red light source from a second area of incidence;
c) obtaining a color image of the surface of the object;
d) separating the color image into a red light image and a blue light image;
e) analyzing the intensity of rows of pixels of reflected light on the red light image;
f) analyzing the intensity of rows of pixels of reflected light on the blue light image;
g) identifying a difference in orientation among pixels wherein a positive value
indicates the next pixel is higher and a negative value indicates the next pixel is lower;
h) determining the topology of the surface by fitting lines through pixels; and
i) identifying irregularities and/or surface flaws on the object.
13. The process of claim 12, including the additional step of generating a three-dimensional map to visualize three-dimensional features.
14. The process of claim 12, wherein irregularities and/or surface flaws comprise at least one of indentation features or protrusion features.
15. The process of claim 12, wherein the red light image and the blue light image are subjected to digital signal processing.
16. The process of claim 12, wherein the object is a wafer or an integrated circuit (IC) package.
17. The process of claim 12, wherein Lambert’s law is used in the step of identifying a difference in orientation among pixels wherein a positive value indicates the next pixel is higher and a negative value indicates the next pixel is lower.