WO2013116299A1 - Method and apparatus for measuring the three dimensional structure of a surface - Google Patents

Method and apparatus for measuring the three dimensional structure of a surface

Info

Publication number
WO2013116299A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
coordinate system
sequence
sharpness
volume
Application number
PCT/US2013/023789
Other languages
French (fr)
Inventor
Evan J. Ribnick
Yi Qiao
Jack W. Lai
David L. Hofeldt
Original Assignee
3M Innovative Properties Company
Application filed by 3M Innovative Properties Company filed Critical 3M Innovative Properties Company
Priority to JP2014554952A priority Critical patent/JP2015513070A/en
Priority to US14/375,002 priority patent/US20150009301A1/en
Priority to KR1020147023980A priority patent/KR20140116551A/en
Priority to CN201380007293.XA priority patent/CN104254768A/en
Priority to EP13743682.0A priority patent/EP2810054A4/en
Priority to BR112014018573A priority patent/BR112014018573A8/en
Publication of WO2013116299A1 publication Critical patent/WO2013116299A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/221Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/30Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G01B11/303Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces using photoelectric detection means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/571Depth or shape recovery from multiple images from focus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts

Definitions

  • the present disclosure is directed to a non-transitory computer readable medium including software instructions to cause a computer processor to:receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor including a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; align a reference point on the surface in each image in the sequence to form a registered sequence of images; stack the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; compute a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; compute, based on the sharpness of focus values, a depth of maximum focus value z m for each pixel within the volume
  • the present disclosure is directed to a method including translating an imaging sensor relative to a surface, wherein the sensor includes a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; imaging the surface with the imaging sensor to acquire a sequence of images; estimating the three dimensional locations of points on the surface to provide a set of three dimensional points representing the surface; and processing the set of three dimensional points to generate a range map of the surface in a selected coordinate system.
  • FIG. 3 is a flowchart illustrating another method for determining the structure of a surface using the apparatus of FIG. 1.
  • FIG. 6 is a photograph of three images obtained by the optical inspection apparatus in Example 1.
  • FIGS. 9A-C are surface reconstructions formed using the apparatus of FIG. 1 as described in Example 3 at viewing angles Θ of 22.3°, 38.1°, and 46.5°, respectively.
  • FIG. 1 is a schematic illustration of a sensor system 10, which is used to image a surface 14 of a material 12.
  • the surface 14 is moving along the direction of the arrow A along the direction y s at a known speed toward the imaging sensor system 18, and includes a plurality of features 16 having a three-dimensional (3D) structure (extending along the direction z s ).
  • the surface 14 may be moving away from the imaging sensor system 18 at a known speed.
  • the translation direction of the surface 14 with respect to the imaging sensor system 18, or the number and/or position of the imaging sensors 18 with respect to the surface 14, may be varied as desired so that the imaging sensor system 18 may obtain a more complete view of areas of the surface 14, or of particular parts of the features 16.
  • the imaging sensor system 18 includes a lens system 20 and a sensor included in, for example, the CCD or CMOS camera 22. At least one optional light source 32 may be used to illuminate the surface 14.
  • the lens 20 has a focal plane 24 that is aligned at a non-zero angle Θ with respect to an x-y plane of the surface coordinate system of the surface 14.
  • the viewing angle Θ between the lens focal plane and the x-y plane of the surface coordinate system may be selected depending on the characteristics of the surface 14 and the features 16 to be analyzed by the system 10.
  • in some embodiments, Θ is an acute angle less than 90°, assuming an arrangement such as in FIG. 1 wherein the translating surface 14 is moving toward the imaging sensor system 18.
  • the viewing angle Θ is about 20° to about 60°, and an angle of about 40° has been found to be useful.
  • the viewing angle Θ may be periodically or constantly varied as the surface 14 is imaged to provide a more uniform and/or complete view of the features 16.
  • the sensor system 10 includes a processor 30, which may be internal, external or remote from the imaging sensor system 18.
  • the processor 30 analyzes a series of images of the moving surface 14, which are obtained by the imaging sensor system 18.
  • the amount that an image must be translated to register it with another image in the sequence depends on the translation of the surface 14 between images. If the translation speed of the surface 14 is known, the motion of the surface 14 sample from one image to the next as obtained by the imaging sensor system 18 is also known, and the processor 30 need only determine how much, and in which direction, the image should be translated per unit motion of the surface 14. This determination made by the processor 30 depends on, for example, the properties of the imaging sensor system 18, the focus of the lens 20, the viewing angle ⁇ of the focal plane 24 with respect to the x-y plane of the surface coordinate system, and the rotation (if any) of the camera 22.
  • a modified Laplacian sharpness metric may be applied to compute the quantity
  • Partial derivatives can be computed using finite differences. The intuition behind this metric is that it can be thought of as an edge detector - clearly regions of sharp focus will have more distinct edges than out-of-focus regions.
  • a median filter may be used to aggregate the results locally around each pixel in the sequence of images.
  • the processor 30 computes a sharpness of focus volume, similar to the volume formed in earlier steps by stacking the registered images along the zc direction. To form the sharpness of focus volume, the processor replaces each (x,y) pixel value in the registered image volume by the corresponding sharpness of focus measurement for that pixel. Each layer (corresponding to an x-y plane in the plane xc-yc) in this registered stack is now a "sharpness of focus" image, with the layers registered as before, so that image locations corresponding to the same physical location on the surface 14 are aligned.
  • the sharpness of focus values observed moving through different layers in the zc direction come to a maximum when the point imaged at that location comes into focus (i.e., when it intersects the focal plane 24 of the camera 22), and the sharpness value decreases moving away from that layer in either direction along the zc axis.
  • the processor 30 estimates the 3D location of each point on the surface 14 by approximating the theoretical location of the slice in the sharpness of focus volume with the sharpest focus through that point.
  • the processor approximates this theoretical location of sharpest focus by fitting a Gaussian curve to the measured sharpness of focus values at each location (x,y) through slice depths z c in the sharpness of focus volume.
  • the model for sharpness of focus values as a function of slice depth zc is given by a Gaussian centered at the theoretical depth of maximum focus zm.
  • an approximate algorithm can be used that executes more quickly without substantially sacrificing accuracy.
  • a quadratic function can be fit to the sharpness profile samples at each location (x,y), but only using the samples near the location with the maximum sharpness value. So, for each point on the surface, first the depth is found with the highest sharpness value, and a few samples are selected on either side of this depth. A quadratic function is fit to these few samples using the standard Least-Squares formulation, which can be solved in closed form.
  • the parabola in the quadratic function may open upwards - in this case, the result of the fit is discarded, and the depth of the maximum sharpness sample is simply used instead. Otherwise, the depth is taken as the location of the theoretical maximum of the quadratic function, which may in general lie between two of the discrete samples.
  • the processor 30 estimates the 3D location of each point on the surface of the sample. This point cloud is then converted into a surface model of the surface 14 using standard triangular meshing algorithms.
  • step 502 the processor 30 approximates the sharpness of focus for each pixel in the newly acquired image using an appropriate algorithm such as, for example, the modified Laplacian sharpness metric described in detail in the discussion of the batch process above.
  • step 504 the processor 30 then computes a y-coordinate in the surface coordinate system at which the focal plane intersects the y axis.
  • step 506 based on the apparent shift of the surface in the last image in the sequence, the processor finds transitional points on the surface 14 that have just exited the field of view of the lens 20, but which were in the field of view in the previous image in the sequence.
  • step 508 the processor then estimates the 3D location of all such transitional points. Each time a new image is received in the sequence, the processor repeats the estimation of the 3D location of the transitional points, then accumulates these 3D locations to form a point cloud representative of the surface 14.
  • step 502 may be performed in one thread, while steps 504-508 occur in another thread.
  • step 510 the point cloud is further processed as described in FIG. 4 to form a range map of the surface 14.
  • the surface analysis method and apparatus described herein are particularly well suited, but are not limited to, inspecting and characterizing the structured surfaces 14 of web-like rolls of sample materials 12 that include piece parts such as the feature 16 (FIG. 1).
  • the web rolls may contain a manufactured web material that may be any sheet-like material having a fixed dimension in one direction (cross-web direction generally normal to the direction A in FIG. 1) and either a predetermined or indeterminate length in the orthogonal direction (down- web direction generally parallel to direction A in FIG. 1). Examples include, but are not limited to, materials with textured, opaque surfaces such as metals, paper, woven materials, non-woven materials, glass, abrasives, flexible circuits or combinations thereof.
  • the apparatus of FIG. 1 may be utilized in one or more inspection systems to inspect and characterize web materials during manufacture.
  • unfinished web rolls may undergo processing on multiple process lines either within one web manufacturing plant, or within multiple manufacturing plants.
  • a web roll is used as a source roll from which the web is fed into the manufacturing process.
  • the web may be converted into sheets or piece parts, or may be collected again into a web roll and moved to a different product line or shipped to a different manufacturing plant, where it is then unrolled, processed, and again collected into a roll. This process is repeated until ultimately a finished sheet, piece part or web roll is produced.
  • EPROM (erasable programmable read only memory)
  • EEPROM (electrically erasable programmable read only memory)
  • flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer-readable storage media.
  • FIGS. 7A-7C show the surface reconstructed from the images of Example 1 from three different perspectives.
  • the reconstructed surface shown in FIGS. 7A-7C is realistic and accurate, and a number of quantities of interest could be computed from this surface, such as feature sharpness, size and orientation in the case of a web material such as an abrasive.
  • FIG. 7C shows that there are several gaps or holes in the reconstructed surface. These holes are a result of the manner in which the samples were imaged.
  • the parts of the surface on the backside of tall features on the sample (in this case, grains on the abrasive) were not visible to the camera, so no data were available for those regions.
  • This lack of data could potentially be alleviated through the use of two cameras viewing the sample simultaneously from different angles.
  • sample 1 showed a median range residual value of 12 μm
  • Sample 2 showed a median range residual value of 9 μm.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract

A method includes imaging a surface with at least one imaging sensor, wherein the surface and the imaging sensor are in relative translational motion. The imaging sensor includes a lens having a focal plane aligned at a non-zero angle with respect to an x-y plane of a surface coordinate system. A sequence of images of the surface is registered and stacked along a z direction of a camera coordinate system to form a volume. A sharpness of focus value is determined for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction of the camera coordinate system. Using the sharpness of focus values, a depth of maximum focus zm along the z direction in the camera coordinate system is determined for each (x,y) location in the volume, and based on the depths of maximum focus zm, a three dimensional location of each point on the surface may be determined.

Description

METHOD AND APPARATUS FOR MEASURING THE THREE DIMENSIONAL
STRUCTURE OF A SURFACE
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 61/593,197, filed January 31, 2012, the disclosure of which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] The present disclosure relates to a method and optical inspection apparatus for determining a three-dimensional structure of a surface. In another aspect, the present disclosure relates to material inspection systems, such as computerized systems for the inspection of moving webs of material.
BACKGROUND
[0003] Online measurement and inspection systems have been used to continuously monitor the quality of products as the products are manufactured on production lines. The inspection systems can provide real-time feedback to enable operators to quickly identify a defective product and evaluate the effects of changes in process variables. Imaging-based inspection systems have also been used to monitor the quality of a manufactured product as it proceeds through the manufacturing process.
[0004] The inspection systems capture digital images of a selected part of the product material using sensors such as, for example, CCD or CMOS cameras. Processors in the inspection systems apply algorithms to rapidly evaluate the captured digital images of the sample of material to determine if the sample, or a selected region thereof, is suitably defect- free for sale to a customer.
[0005] Online inspection systems can analyze two-dimensional (2D) image characteristics of a moving surface of a web material during the manufacturing process, and can detect, for example, relatively large-scale non-uniformities such as cosmetic point defects and streaks. Other techniques such as triangulation point sensors can achieve depth resolution of surface structure on the order of microns at production line speeds, but cover only a single point on the web surface (since they are point sensors), and as such provide a very limited amount of useful three-dimensional (3D) information on surface characteristics. Other techniques such as laser line triangulation systems can achieve full 3D coverage of the web surface at production line speeds, but have a low spatial resolution, and as such are useful only for monitoring large-scale surface deviations such as web curl and flutter.
[0006] 3D inspection technologies such as, for example, laser profilometry, interferometry, and 3D microscopy (based on Depth from Focus (DFF)) have been used for surface analysis. DFF surface analysis systems image an object with a camera and lens having a narrow depth of field. As the object is held stationary, the camera and lens are scanned depth-wise over various positions along the z-axis (i.e., parallel to the optical axis of the lens), capturing an image at each location. As the camera is scanned through multiple z-axis positions, points on the object's surface come into focus at different image slices depending on their height above the surface. Using this information, the 3D structure of the object surface can be estimated relatively accurately.
SUMMARY
[0007] In one aspect, the present disclosure is directed to a method including imaging a surface with at least one imaging sensor, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor includes a lens with a focal plane aligned at a nonzero viewing angle with respect to an x-y plane in a surface coordinate system; registering a sequence of images of the surface; stacking the registered images along a z direction in a camera coordinate system to form a volume; determining a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction in the camera coordinate system; determining, using the sharpness of focus values, a depth of maximum focus zm along the z direction in the camera coordinate system for each (x,y) location in the volume; and determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface.
[0008] In another aspect, the present disclosure is directed to a method including capturing with an imaging sensor a sequence of images of a surface, wherein the surface and the imaging sensor are in relative translational motion, and wherein the imaging sensor includes a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system; aligning a reference point on the surface in each image in the sequence to form a registered sequence of images; stacking the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computing a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computing, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructing a three-dimensional model of the surface based on the three dimensional point locations.
[0009] In yet another aspect, the present disclosure is directed to an apparatus, including an imaging sensor with a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: aligns in each image in the sequence a reference point on the surface to form a registered sequence of images; stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computes, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructs a three-dimensional model of the surface based on the three dimensional locations.
[0010] In yet another aspect, the present disclosure is directed to a method including positioning a stationary imaging sensor at a non-zero viewing angle with respect to a moving web of material, wherein the imaging sensor includes a telecentric lens to image a surface of the moving web and form a sequence of images thereof; processing the sequence of images to: register the images; stack the registered images along a z direction in a camera coordinate system to form a volume; determine a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction in the camera coordinate system;
determine a depth of maximum focus zm along the z direction in the camera coordinate system for each (x,y) location in the volume; and determine, based on the depths of maximum focus zm, a three dimensional location of each point on the surface of the moving web.
[0011] In yet another aspect, the present disclosure is directed to a method for inspecting a moving surface of a web material in real time and computing a three-dimensional model of the surface, the method including capturing with a stationary sensor a sequence of images of the surface, wherein the imaging sensor includes a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; aligning a reference point on the surface in each image in the sequence to form a registered sequence of images; stacking the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computing a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computing, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructing the three-dimensional model of the surface based on the three dimensional locations.
[0012] In yet another aspect, the present disclosure is directed to an online computerized inspection system for inspecting web material in real time, the system including a stationary imaging sensor including a camera and a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to a plane of a moving surface, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: aligns in each image in the sequence a reference point on the surface to form a registered sequence of images; stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computes, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructs a three-dimensional model of the surface based on the three dimensional locations.
[0013] In yet another aspect, the present disclosure is directed to a non-transitory computer readable medium including software instructions to cause a computer processor to:receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor including a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; align a reference point on the surface in each image in the sequence to form a registered sequence of images; stack the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; compute a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; compute, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determine, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and construct the three-dimensional model of the surface based on the three dimensional locations.
[0014] In a further aspect, the present disclosure is directed to a method including translating an imaging sensor relative to a surface, wherein the sensor includes a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; imaging the surface with the imaging sensor to acquire a sequence of images; estimating the three dimensional locations of points on the surface to provide a set of three dimensional points representing the surface; and processing the set of three dimensional points to generate a range map of the surface in a selected coordinate system.
[0015] In yet another aspect, the present disclosure is directed to a method, including: (a) imaging a surface with at least one imaging sensor to acquire a sequence of images, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor includes a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system; (b) determining a sharpness of focus value for every pixel in a last image in the sequence of images; (c) computing a y-coordinate in the surface coordinate system at which the focal plane intersects the y axis; (d) based on the apparent shift of the surface in the last image, determining transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (e) determining the three dimensional location in a camera coordinate system of all the transitional points on the surface; (f) repeating steps (a) to (e) for each new image acquired by the imaging sensor; and (g) accumulating the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
[0016] In yet another embodiment, the present disclosure is directed to an apparatus, including an imaging sensor with a lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: (a) determines a sharpness of focus value for every pixel in a last image in the sequence of images; (b) computes a y-coordinate in a surface coordinate system at which the focal plane intersects the y axis; (c) based on the apparent shift of the surface in the last image, determines transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (d) determines the three dimensional location in a camera coordinate system of all the transitional points on the surface; (e) repeats steps (a) to (d) for each new image acquired by the imaging sensor; and (f) accumulates the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
[0017] In yet another aspect, the present disclosure is directed to an online computerized inspection system for inspecting web material in real time, the system including a stationary imaging sensor including a camera and a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a moving surface, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: (a) determines a sharpness of focus value for every pixel in a last image in the sequence of images; (b) computes a y-coordinate in a surface coordinate system at which the focal plane intersects the y axis; (c) based on the apparent shift of the surface in the last image, determines transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (d) determines the three dimensional location in a camera coordinate system of all the transitional points on the surface; (e) repeats steps (a) to (d) for each new image acquired by the imaging sensor; and (f) accumulates the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
[0018] In yet another aspect, the present disclosure is directed to a non-transitory computer readable medium including software instructions to cause a computer processor to: (a) receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor including a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; (b) determine a sharpness of focus value for every pixel in a last image in the sequence of images; (c) compute a y-coordinate in a surface coordinate system at which the focal plane intersects the y-axis; (d) based on the apparent shift of the surface in the last image, determine transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (e) determine the three dimensional location in a camera coordinate system of all the transitional points on the surface; (f) repeat steps (a) to (e) for each new image acquired by the imaging sensor; and (g) accumulate the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
[0019] The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0020] FIG. 1 is a schematic diagram of an optical inspection apparatus.
[0021] FIG. 2 is a flowchart illustrating a method for determining the structure of a surface using the apparatus of FIG. 1.
[0022] FIG. 3 is a flowchart illustrating another method for determining the structure of a surface using the apparatus of FIG. 1.
[0023] FIG. 4 is a flowchart illustrating a method for processing the point cloud obtained from FIG. 3 to create a map of a surface.
[0024] FIG. 5 is a schematic block diagram of an exemplary embodiment of an inspection system in an exemplary web manufacturing plant.
[0025] FIG. 6 is a photograph of three images obtained by the optical inspection apparatus in Example 1.
[0026] FIGS. 7A-7C are three different views of the surface of the sample as determined by the optical inspection apparatus in Example 1.
[0027] FIGS. 8A-C are surface reconstructions formed using the apparatus of FIG. 1 as described in Example 3 at viewing angles Θ of 22.3°, 38.1°, and 46.5°, respectively.
[0028] FIGS. 9A-C are surface reconstructions formed using the apparatus of FIG. 1 as described in Example 3 at viewing angles Θ of 22.3°, 38.1°, and 46.5°, respectively.
DETAILED DESCRIPTION
[0029] Currently available surface inspection systems have been unable to provide useful online information about 3D surface structure of a surface due to constraints on their resolutions, speeds, or fields-of-view. The present disclosure is directed to an online inspection system including a stationary sensor, and unlike DFF systems does not require translation of the focal plane of the imaging lens of the sensor. Rather, the system described in the present disclosure utilizes the translational motion of the surface to automatically pass points on the surface through various focal planes to rapidly provide a 3D model of the surface, and as such is useful for online inspection applications in which a web of material is continuously monitored as it is processed on a production line.
[0030] FIG. 1 is a schematic illustration of a sensor system 10, which is used to image a surface 14 of a material 12. The surface 14 is translated relative to at least one imaging sensor system 18. The surface 14 is imaged with the imaging sensor system 18, which is stationary in FIG. 1, although in other embodiments the sensor system 18 may be in motion while the surface 14 remains stationary. To further clarify the discussion below, it is assumed that relative motion of the imaging sensor system 18 and the surface 14 also creates two coordinate systems in relative motion with respect to one another. For example, as shown in FIG. 1 the imaging sensor system 18 can be described with respect to a camera coordinate system in which the z direction, zc, is aligned with the optical axis of a lens 20 of a CCD or CMOS camera 22. Referring again to FIG. 1, the surface 14 can be described with respect to a surface coordinate system in which the axis zs is the height above the surface.
[0031] In the embodiment shown in FIG. 1, the surface 14 is moving along the direction of the arrow A along the direction ys at a known speed toward the imaging sensor system 18, and includes a plurality of features 16 having a three-dimensional (3D) structure (extending along the direction zs). However, in other embodiments the surface 14 may be moving away from the imaging sensor system 18 at a known speed. The translation direction of the surface 14 with respect to the imaging sensor system 18, or the number and/or position of the imaging sensors 18 with respect to the surface 14, may be varied as desired so that the imaging sensor system 18 may obtain a more complete view of areas of the surface 14, or of particular parts of the features 16. The imaging sensor system 18 includes a lens system 20 and a sensor included in, for example, the CCD or CMOS camera 22. At least one optional light source 32 may be used to illuminate the surface 14.
[0032] The lens 20 has a focal plane 24 that is aligned at a non-zero angle Θ with respect to an x- y plane of the surface coordinate system of the surface 14. The viewing angle Θ between the lens focal plane and the x-y plane of the surface coordinate system may be selected depending on the characteristics of the surface 14 and the features 16 to be analyzed by the system 10. In some embodiments Θ is an acute angle less than 90°, assuming an arrangement such as in FIG. 1 wherein the translating surface 14 is moving toward the imaging sensor system 18. In other embodiments in which the surface 14 is moving toward the imaging sensor system 18, the viewing angle Θ is about 20° to about 60°, and an angle of about 40° has been found to be useful. In some embodiments, the viewing angle Θ may be periodically or constantly varied as the surface 14 is imaged to provide a more uniform and/or complete view of the features 16.
[0033] The lens system 20 may include a wide variety of lenses depending on the intended application of the apparatus 10, but telecentric lenses have been found to be particularly useful. In this application the term telecentric lens means any lens or system of lenses that approximates an orthographic projection. A telecentric lens provides no change in magnification with distance from the lens. An object that is too close or too far from the telecentric lens may be out of focus, but the resulting blurry image will be the same size as the correctly-focused image.
[0034] The sensor system 10 includes a processor 30, which may be internal, external or remote from the imaging sensor system 18. The processor 30 analyzes a series of images of the moving surface 14, which are obtained by the imaging sensor system 18.
[0035] The processor 30 initially registers the series of images obtained by the imaging sensor system 18 in a sequence. This image registration is calculated to align points in the series of images that correspond to the same physical point on the surface 14. If the lens 20 utilized by the system 10 is telecentric, the magnification of the images collected by the imaging sensor system 18 does not change with distance from the lens. As a result, the images obtained by the imaging sensor system 18 can be registered by translating one image with respect to another, and no scaling or other geometric deformation is required. While non-telecentric lenses 20 may be used in the imaging sensor system 18, such lenses may make image registration more difficult and complex, and require more processing capacity in the processor 30.
[0036] The amount that an image must be translated to register it with another image in the sequence depends on the translation of the surface 14 between images. If the translation speed of the surface 14 is known, the motion of the surface 14 sample from one image to the next as obtained by the imaging sensor system 18 is also known, and the processor 30 need only determine how much, and in which direction, the image should be translated per unit motion of the surface 14. This determination made by the processor 30 depends on, for example, the properties of the imaging sensor system 18, the focus of the lens 20, the viewing angle Θ of the focal plane 24 with respect to the x-y plane of the surface coordinate system, and the rotation (if any) of the camera 22.
[0037] Assume two parameters Dx and Dy, which give the translation of an image in the x and y directions per unit motion of the physical surface 14. The quantities Dx and Dy are in units of pixels/mm. If two images It1(x,y) and It2(x,y) are taken at times t1 and t2, respectively, and the processor 30 is provided with the distance d that the sample surface 14 moved from t1 to t2, then these images should be registered by translating It2(x,y) according to the following formula:

It2'(x, y) = It2(x + d·Dx, y + d·Dy)
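For illustration only, a minimal Python sketch of this registration step is shown below. The function name, the use of scipy.ndimage.shift, and the NaN fill value are assumptions, and the sign of the shift depends on the motion-direction convention built into Dx and Dy during calibration.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def register_to_reference(img_t2, d_mm, Dx, Dy):
    # Sketch of the registration step: resample I_t2 at (x + d*Dx, y + d*Dy)
    # so that it aligns with I_t1. Dx, Dy are in pixels/mm and d_mm is the
    # physical surface displacement between the two exposures. Pixels shifted
    # in from outside the frame become NaN so they can later be treated as
    # "no image data".
    dy_pix, dx_pix = d_mm * Dy, d_mm * Dx
    return nd_shift(img_t2.astype(float), (-dy_pix, -dx_pix),
                    order=1, mode="constant", cval=np.nan)
```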
[0038] The scale factors Dx and Dy can also be estimated offline through a calibration procedure. In a sequence of images, the processor 30 automatically selects and tracks distinctive key points as they translate through the sequence of images obtained by the imaging sensor system 18. This information is then used by the processor to calculate the expected displacement (in pixels) of a feature point per unit translation of the physical sample of the surface 14. Tracking may be performed by the processor using a normalized template matching algorithm.
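A minimal sketch of how Dx and Dy could be estimated from such tracked key points, assuming the tracked pixel coordinates and the corresponding physical displacements have already been collected (all names are hypothetical):

```python
import numpy as np

def estimate_scale_factors(displacements_mm, track_x_pix, track_y_pix):
    # Offline calibration sketch: a distinctive key point is tracked through a
    # sequence of images while the physical surface displacement at each frame
    # is recorded. The slopes of the pixel coordinates versus displacement give
    # Dx and Dy in pixels/mm (least-squares straight-line fits).
    Dx = np.polyfit(displacements_mm, track_x_pix, 1)[0]
    Dy = np.polyfit(displacements_mm, track_y_pix, 1)[0]
    return Dx, Dy
```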
[0039] Once all images of the surface 14 have been aligned, the processor 30 then stacks the registered sequence of images together along the direction zc normal to the focal plane of the lens 20 to form a volume. Each layer in this volume is an image in the sequence, shifted in the x and y directions as computed in the registration. Since the relative position of the surface 14 is known at the time each image in the sequence was acquired, each layer in the volume represents a snapshot of the surface 14 along the focal plane 24 as it slices through the sample 14 at angle Θ (see FIG. 1), at the location of the particular displacement at that time.
[0040] Once the image sequence has been aligned, the processor 30 then computes the sharpness of focus at each (x,y) location in the volume, wherein the plane of the (x,y) locations is normal to the zc direction in the volume. Locations in the volume that contain no image data are ignored, since they can be thought of as having zero sharpness. The processor 30 determines the sharpness of focus using a sharpness metric. Several suitable sharpness metrics are described in Nayar and Nakagawa, Shape from Focus, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pages 824-831 (1994).
[0041] For example, a modified Laplacian sharpness metric may be applied to compute the quantity
ML(x, y) = |∂²I/∂x²| + |∂²I/∂y²|
at each pixel in all images in the sequence. Partial derivatives can be computed using finite differences. The intuition behind this metric is that it can be thought of as an edge detector - clearly regions of sharp focus will have more distinct edges than out-of-focus regions. After computing this metric, a median filter may be used to aggregate the results locally around each pixel in the sequence of images.
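A minimal sketch of this sharpness computation, using finite-difference second derivatives and median-filter aggregation; the kernel size is an assumed tuning parameter and the helper name is hypothetical:

```python
import numpy as np
from scipy.ndimage import median_filter

def modified_laplacian(image, median_size=5):
    # Modified Laplacian sharpness metric: sum of the absolute second partial
    # derivatives, computed with finite differences, then aggregated locally
    # around each pixel with a median filter.
    img = image.astype(float)
    d2x = np.abs(np.roll(img, -1, axis=1) - 2.0 * img + np.roll(img, 1, axis=1))
    d2y = np.abs(np.roll(img, -1, axis=0) - 2.0 * img + np.roll(img, 1, axis=0))
    sharpness = d2x + d2y
    return median_filter(sharpness, size=median_size)
```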
[0042] Once the processor 30 has computed the sharpness of focus value for all the images in the sequence, the processor 30 computes a sharpness of focus volume, similar to the volume formed in earlier steps by stacking the registered images along the zc direction. To form the sharpness of focus volume, the processor replaces each (x,y) pixel value in the registered image volume by the corresponding sharpness of focus measurement for that pixel. Each layer (corresponding to an x-y plane in the plane xc-yc) in this registered stack is now a "sharpness of focus" image, with the layers registered as before, so that image locations corresponding to the same physical location on the surface 14 are aligned. As such, if one location (x,y) in the volume is selected and the sharpness of focus values are observed moving through different layers in the zc direction, the sharpness of focus comes to a maximum value when the point imaged at that location comes into focus (i.e., when it intersects with the focal plane 24 of the camera 22), and the sharpness value will decrease moving away from that layer in either direction along the zc axis.
[0043] Each layer (corresponding to an x-y plane) in the sharpness of focus volume corresponds to one slice through the surface 14 at the location of the focal plane 24, so that as the sample 14 moves along the direction A, various slices through the surface 14 are collected at different locations along the surface thereof. As such, since each image in the sharpness of focus volume corresponds to a physical slice through the surface 14 at a different relative location, ideally the slice where a point (x,y) comes into sharpest focus determines the three dimensional (3D) position on the sample of the corresponding point. However, in practice the sharpness of focus volume contains a discrete set of slices, which may not be densely or uniformly spaced along the surface 14. So most likely the actual (theoretical) depth of maximum focus (the depth at which sharpness of focus is maximized) will occur between slices.
[0044] The processor 30 then estimates the 3D location of each point on the surface 14 by approximating the theoretical location of the slice in the sharpness of focus volume with the sharpest focus through that point.
[0045] In one embodiment, the processor approximates this theoretical location of sharpest focus by fitting a Gaussian curve to the measured sharpness of focus values at each location (x,y) through slice depths zc in the sharpness of focus volume. The model for sharpness of focus values as a function of slice depth zc is given by

S(zc) ∝ exp(−(zc − zm)² / (2σ²))
where zm is the theoretical depth of maximum focus for the location (x,y) in the volume and σ is the standard deviation of the Gaussian that results at least in part from the depth of field of the imaging lens (see lens 20 in FIG. 1). This curve fitting can be done by minimizing a simple least-squares cost function.
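As an illustration, a minimal sketch of such a fit for a single (x,y) location using a generic nonlinear least-squares routine; the amplitude parameter, the initial guesses, and the function name are assumptions made for the sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_peak_depth(z_slices, sharpness_profile):
    # Fit a Gaussian to the sharpness-versus-depth profile at one (x, y)
    # location and return the fitted center z_m as the depth of maximum focus.
    def model(z, a, z_m, sigma):
        return a * np.exp(-((z - z_m) ** 2) / (2.0 * sigma ** 2))

    z = np.asarray(z_slices, float)
    s = np.asarray(sharpness_profile, float)
    i_max = int(np.argmax(s))
    p0 = (s[i_max], z[i_max], np.ptp(z) / 10.0 + 1e-6)   # rough starting point
    (a, z_m, sigma), _ = curve_fit(model, z, s, p0=p0)
    return z_m
```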
[0046] In another embodiment, if the Gaussian algorithm is prohibitively computationally expensive or time consuming for use in a particular application, an approximate algorithm can be used that executes more quickly without substantially sacrificing accuracy. A quadratic function can be fit to the sharpness profile samples at each location (x,y), but only using the samples near the location with the maximum sharpness value. So, for each point on the surface, first the depth is found with the highest sharpness value, and a few samples are selected on either side of this depth. A quadratic function is fit to these few samples using the standard Least-Squares formulation, which can be solved in closed form. In rare cases, when there is noise in the data, the parabola in the quadratic function may open upwards - in this case, the result of the fit is discarded, and the depth of the maximum sharpness sample is simply used instead. Otherwise, the depth is taken as the location of the theoretical maximum of the quadratic function, which may in general lie between two of the discrete samples.
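A minimal sketch of this faster quadratic approximation for one sharpness profile; the window size and function name are assumptions:

```python
import numpy as np

def quadratic_peak_depth(z_slices, sharpness_profile, half_window=2):
    # Fit a parabola only to the samples around the sharpest slice, in closed
    # form via least squares. If the parabola opens upward (noisy data), fall
    # back to the depth of the discrete maximum; otherwise return the vertex.
    z = np.asarray(z_slices, float)
    s = np.asarray(sharpness_profile, float)
    i_max = int(np.argmax(s))
    lo, hi = max(0, i_max - half_window), min(len(z), i_max + half_window + 1)
    a, b, _c = np.polyfit(z[lo:hi], s[lo:hi], 2)   # s ~ a*z^2 + b*z + c
    if a >= 0:                                     # opens upward: discard fit
        return z[i_max]
    return -b / (2.0 * a)                          # vertex of the parabola
```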
[0047] Once the theoretical depth of maximum focus zm is approximated for each location (x,y) in the volume, the processor 30 estimates the 3D location of each point on the surface of the sample. This point cloud is then converted into a surface model of the surface 14 using standard triangular meshing algorithms.
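The disclosure leaves the meshing algorithm unspecified; as one possible illustration, a 2.5D Delaunay triangulation of the (x,y) coordinates can stand in for a standard triangular meshing step:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_point_cloud(points_xyz):
    # For a height-field-like point cloud, triangulating the (x, y) coordinates
    # yields a triangle mesh whose vertices are the full 3D points.
    points_xyz = np.asarray(points_xyz, float)
    tri = Delaunay(points_xyz[:, :2])        # triangulate in the x-y plane
    return points_xyz, tri.simplices         # vertices and triangle indices
```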
[0048] FIG. 2 is a flowchart illustrating a batch method 200 of operating the apparatus in FIG. 1 to characterize the surface in a sample region of a surface 14 of a material 12. In step 202, a translating surface is imaged with a sensor including a lens having a focal plane aligned at a non-zero angle with respect to a plane of the surface. In step 204, a processor registers a sequence of images of the surface, while in step 206 the registered images are stacked along a zc direction to form a volume. In step 208 the processor determines a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the zc direction. In step 210, the processor determines, using the sharpness of focus values, a depth of maximum focus zm along the zc direction for each (x,y) location in the volume. In step 212, the processor determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface. In optional step 214, the processor can form, based on the three-dimensional locations, a three-dimensional model of the surface.
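Tying the batch steps together, a rough end-to-end sketch is shown below; it reuses the hypothetical helpers from the earlier sketches, and the conversion from the depth of maximum focus to final surface coordinates is intentionally left out:

```python
import numpy as np

def batch_reconstruct(images, displacements_mm, Dx, Dy, z_slices):
    # Steps 204/206: register each image and stack along the z_c direction.
    layers = [register_to_reference(img, d, Dx, Dy)
              for img, d in zip(images, displacements_mm)]
    volume = np.stack(layers, axis=0)
    # Step 208: per-pixel sharpness; locations with no image data get zero.
    sharp = np.stack([modified_laplacian(np.nan_to_num(l)) for l in layers], axis=0)
    sharp[np.isnan(volume)] = 0.0
    # Step 210: depth of maximum focus for each (x, y) location.
    depth_map = np.empty(volume.shape[1:])
    for y in range(depth_map.shape[0]):
        for x in range(depth_map.shape[1]):
            depth_map[y, x] = quadratic_peak_depth(z_slices, sharp[:, y, x])
    return depth_map   # input to step 212 (3D location of each surface point)
```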
[0049] In the overall procedure described in FIG. 2, the processor 30 operates in batch mode, meaning that all images are processed together after they are acquired by the imaging sensor system 18. However, in other embodiments, the image data obtained by the imaging sensor system 18 may be processed incrementally as these data become available. As further explained in FIG. 3 below, the incremental processing approach utilizes an algorithm that proceeds in two phases. First, online, as the surface 14 translates and new images are acquired sequentially, the processor 30 estimates the 3D locations of points on the surface 14 as they are imaged. The result from this online processing is a set of 3D points (i.e., a point cloud) representing the surface 14 of the sample material 12. Then, offline (after all images have been acquired and the 3D locations estimated), this point cloud is post-processed (FIG. 4) to generate a smooth range map in an appropriate coordinate system.
[0050] Referring to the process 500 in FIG. 3, as the surface 14 translates with respect to the imaging sensor system 18, a sequence of images is acquired by the imaging sensor system 18. Each time a new image is acquired in the sequence, in step 502 the processor 30 approximates the sharpness of focus for each pixel in the newly acquired image using an appropriate algorithm such as, for example, the modified Laplacian sharpness metric described in detail in the discussion of the batch process above. In step 504, the processor 30 then computes a y-coordinate in the surface coordinate system at which the focal plane 24 intersects the y axis. In step 506, based on the apparent shift of the surface in the last image in the sequence, the processor finds transitional points on the surface 14 that have just exited the field of view of the lens 20, but which were in the field of view in the previous image in the sequence. In step 508, the processor then estimates the 3D location of all such transitional points. Each time a new image is received in the sequence, the processor repeats the estimation of the 3D location of the transitional points, then accumulates these 3D locations to form a point cloud representative of the surface 14.
[0051] Although the steps in FIG. 3 are described serially, to enhance efficiency the incremental processing approach can also be implemented as a multi-threaded system. For example, step 502 may be performed in one thread, while steps 504-508 occur in another thread. In step 510, the point cloud is further processed as described in FIG. 4 to form a range map of the surface 14.
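One possible arrangement of such a two-thread pipeline is sketched below; the queue size and the callable names are placeholders rather than part of the disclosure:

```python
import queue
import threading

def run_pipeline(image_source, process_sharpness, process_geometry):
    # One thread computes per-pixel sharpness for each newly acquired image
    # (step 502); a second thread consumes those results and performs the
    # geometric steps (504-508).
    sharp_q = queue.Queue(maxsize=8)

    def sharpness_worker():
        for image in image_source:          # e.g. a generator of camera frames
            sharp_q.put((image, process_sharpness(image)))
        sharp_q.put(None)                   # sentinel: no more images

    def geometry_worker():
        while (item := sharp_q.get()) is not None:
            image, sharpness = item
            process_geometry(image, sharpness)

    t1 = threading.Thread(target=sharpness_worker)
    t2 = threading.Thread(target=geometry_worker)
    t1.start(); t2.start()
    t1.join(); t2.join()
```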
[0052] Referring to the process 550 of FIG. 4, in step 552 the processor 30 forms a first range map by re-sampling the points in the point cloud on a rectangular grid, parallel to the image plane 24 of the camera 20. In step 554, the processor optionally detects and suppresses outliers in the first range map. In step 556, the processor performs an optional additional de-noising step to remove noise in the map of the reconstructed surface. In step 558, the reconstructed surface is rotated and represented on the surface coordinate system in which the X-Y plane xs-ys is aligned with the plane of motion of the surface 14, with the zs axis in the surface coordinate system normal to the surface 14. In step 560, the processor interpolates and re-samples on a grid in the surface coordinate system to form a second range map. In this second range map, for each (x,y) position on the surface, with the X axis (xs) being normal to the direction A (FIG. 1) and the Y axis (ys) being parallel to direction A, the Z-coordinate (zs) gives the surface height of a feature 16 on the surface 14.
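A minimal sketch of the range-map re-sampling described above, using generic grid interpolation; the median filter stands in for the optional outlier-suppression and de-noising steps, and the rotation into the surface coordinate system is assumed to have been applied to the point cloud beforehand:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import median_filter

def range_map_from_points(points_xyz, grid_x, grid_y, denoise_size=3):
    # Re-sample the accumulated point cloud onto a regular (x, y) grid so the
    # z value at each grid node gives the surface height (a range map).
    points_xyz = np.asarray(points_xyz, float)
    gx, gy = np.meshgrid(grid_x, grid_y)
    z = griddata(points_xyz[:, :2], points_xyz[:, 2], (gx, gy), method="linear")
    return median_filter(np.nan_to_num(z), size=denoise_size)
```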
[0053] For example, the surface analysis method and apparatus described herein are particularly well suited to, but are not limited to, inspecting and characterizing the structured surfaces 14 of web-like rolls of sample materials 12 that include piece parts such as the feature 16 (FIG. 1). In general, the web rolls may contain a manufactured web material that may be any sheet-like material having a fixed dimension in one direction (cross-web direction generally normal to the direction A in FIG. 1) and either a predetermined or indeterminate length in the orthogonal direction (down-web direction generally parallel to direction A in FIG. 1). Examples include, but are not limited to, materials with textured, opaque surfaces such as metals, paper, woven materials, non-woven materials, glass, abrasives, flexible circuits or combinations thereof. In some embodiments, the apparatus of FIG. 1 may be utilized in one or more inspection systems to inspect and characterize web materials during manufacture. To produce a finished web roll that is ready for conversion into individual sheets for incorporation into a product, unfinished web rolls may undergo processing on multiple process lines either within one web manufacturing plant, or within multiple manufacturing plants. For each process, a web roll is used as a source roll from which the web is fed into the manufacturing process. After each process, the web may be converted into sheets or piece parts, or may be collected again into a web roll and moved to a different product line or shipped to a different manufacturing plant, where it is then unrolled, processed, and again collected into a roll. This process is repeated until ultimately a finished sheet, piece part or web roll is produced. For many applications, the web materials for each of the sheets, pieces, or web rolls may have numerous coatings applied at one or more production lines of the one or more web manufacturing plants. The coating is generally applied to an exposed surface of either a base web material, in the case of a first manufacturing process, or a previously applied coating in the case of a subsequent manufacturing process. Examples of coatings include adhesives, hardcoats, low adhesion backside coatings, metalized coatings, neutral density coatings, electrically conductive or nonconductive coatings, or combinations thereof.
[0054] In the exemplary embodiment of an inspection system 300 shown in FIG. 5, a sample region of a web 312 is positioned between two support rolls 323, 325. The inspection system 300 includes a fiducial mark controller 301, which controls fiducial mark reader 302 to collect roll and position information from the sample region 312. In addition, the fiducial mark controller 301 may receive position signals from one or more high-precision encoders engaged with a selected sample region of the web 312 and/or support rollers 323, 325. Based on the position signals, the fiducial mark controller 301 determines position information for each detected fiducial mark. The fiducial mark controller 301 communicates the roll and position information to an analysis computer 329 for association with detected data regarding the dimensions of features on a surface 314 of the web 312.
[0055] The system 300 further includes one or more stationary sensor systems 318A-318N, which each include an optional light source 332 and a telecentric lens 320 having a focal plane aligned at an acute angle with respect to the surface 314 of the moving web 312. The sensor systems 318 are positioned in close proximity to the surface 314 of the continuously moving web 312 as the web is processed, and scan the surface 314 of the web 312 to obtain digital image data.
[0056] An image data acquisition computer 327 collects image data from each of the sensor systems 318 and transmits the image data to an analysis computer 329. The analysis computer 329 processes streams of image data from the image acquisition computers 327 and analyzes the digital images with one or more of the batch or incremental image processing algorithms described above. The analysis computer 329 may display the results on an appropriate user interface and/or may store the results in a database 331.
[0057] The inspection system 300 shown in FIG. 5 may be used within a web manufacturing plant to measure the 3D characteristics of the web surface 314 and identify potentially defective materials. Once the 3D structure of a surface is estimated, the inspection system 300 may provide many types of useful information such as, for example, locations, shapes, heights, fidelities, etc. of features on the web surface 314. The inspection system 300 may also provide output data that indicates the severity of defects in any of these surface characteristics in real time as the web is manufactured. For example, the computerized inspection systems may provide real-time feedback to users, such as process engineers, within web manufacturing plants regarding the presence of structural defects, anomalies, or out-of-spec materials (hereafter generally referred to as defects) in the web surface 314 and their severity, thereby allowing the users to quickly respond to an emerging defect in a particular batch of material or series of batches by adjusting process conditions to remedy a problem without significantly delaying production or producing large amounts of unusable material. The computerized inspection system 300 may apply algorithms to compute the severity level by ultimately assigning a rating label for the defect (e.g., "good" or "bad") or by producing a measurement of non-uniformity severity of a given sample on a continuous scale or a more accurately sampled scale.
[0058] The analysis computer 329 may store the defect rating or other information regarding the surface characteristics of the sample region of the web 312, including roll identifying information for the web 312 and possibly position information for each measured feature, within the database 331. For example, the analysis computer 329 may utilize position data produced by the fiducial mark controller 301 to determine the spatial position or image region of each measured area including defects within the coordinate system of the process line. That is, based on the position data from the fiducial mark controller 301, the analysis computer 329 determines the xs, ys, and possibly zs position or range for each area of non-uniformity within the coordinate system used by the current process line. For example, a coordinate system may be defined such that the x dimension (xs) represents a distance across the web 312, the y dimension (ys) represents a distance along the length of the web, and the z dimension (zs) represents a height of the web, which may be based on the number of coatings, materials or other layers previously applied to the web. Moreover, an origin for the x, y, z coordinate system may be defined at a physical location within the process line, and is typically associated with an initial feed placement of the web 312.
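For illustration only, one possible (purely hypothetical) record layout for a measured feature stored in the database 331, together with a helper that maps an image-space detection to process-line coordinates from an assumed encoder reading, is sketched below; the field and parameter names are not drawn from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class FeatureRecord:
    """Sketch of one database row tying a measured surface feature to the
    process-line coordinate system (field names are illustrative only)."""
    roll_id: str        # roll identifying information from the fiducial marks
    x_s_mm: float       # cross-web position (xs)
    y_s_mm: float       # down-web position along the web length (ys)
    z_s_um: float       # estimated surface height at (xs, ys)
    rating: str         # e.g. "good" / "bad", or a severity score

def to_line_coordinates(x_px, y_px, px_mm, encoder_mm, web_origin_mm=0.0):
    """Map an image-space detection to (xs, ys) using the encoder-based down-web
    position supplied by the fiducial mark controller (assumed inputs)."""
    x_s = x_px * px_mm
    y_s = web_origin_mm + encoder_mm + y_px * px_mm
    return x_s, y_s
```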
[0059] The database 331 may be implemented in any of a number of different forms including a data storage file or one or more database management systems (DBMS) executing on one or more database servers. The database management systems may be, for example, a relational (RDBMS), hierarchical (HDBMS), multidimensional (MDBMS), object oriented (ODBMS or OODBMS) or object relational (ORDBMS) database management system. As one example, the database 331 is implemented as a relational database available under the trade designation SQL Server from Microsoft Corporation, Redmond, WA.
[0060] Once the process has ended, the analysis computer 329 may transmit the data collected in the database 331 to a conversion control system 340 via a network 339. For example, the analysis computer 329 may communicate the roll information as well as the feature dimension and/or anomaly information and respective sub-images for each feature to the conversion control system 340 for subsequent, offline, detailed analysis. For example, the feature dimension information may be communicated by way of database synchronization between the database 331 and the conversion control system 340.
[0061] In some embodiments, the conversion control system 340, rather than the analysis computer 329, may determine those products for which each anomaly may cause a defect. Once data for the finished web roll has been collected in the database 331, the data may be communicated to converting sites and/or used to mark anomalies on the web roll, either directly on the surface of the web with a removable or washable mark, or on a cover sheet that may be applied to the web before or during marking of anomalies on the web.
[0062] The components of the analysis computer 329 may be implemented, at least in part, as software instructions executed by one or more processors of the analysis computer 329, including one or more hardware microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The software instructions may be stored within a non-transitory computer readable medium, such as random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory
(EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer-readable storage media.
[0063] Although shown for purposes of example as positioned within a manufacturing plant, the analysis computer 329 may be located external to the manufacturing plant, e.g., at a central location or at a converting site. For example, the analysis computer 329 may operate within the conversion control system 340. In another example, the described components execute on a single computing platform and may be integrated into the same software system.
[0064] The subject matter of the present disclosure will now be described with reference to the following non-limiting examples.
EXAMPLES
Example 1
[0065] An apparatus was constructed in accordance with the schematic in FIG. 1. A CCD camera including a telecentric lens was directed at a sample abrasive material on a moveable stage. The focal plane of the telecentric lens was oriented at a viewing angle (Θ in FIG. 1) of approximately 40° with respect to the x-y plane of the surface coordinate system of the sample material. The sample material was translated horizontally on the stage in increments of approximately 300 μm, and an image was captured by the camera at each increment. FIG. 6 shows three images of the surface of the sample material taken by the camera as the sample material was moved through a series of 300 μm increments.
[0066] A processor associated with an analysis computer analyzed the images of the sample surface acquired by the camera. The processor registered a sequence of the images, stacked the registered images along a zc direction to form a volume, and determined a sharpness of focus value for each (x,y) location in the volume using the modified Laplacian sharpness of focus metric described above. Using the sharpness of focus values, the processor computed a depth of maximum focus zm along the zc direction for each (x,y) location in the volume and determined, based on the depths of maximum focus zm, a three dimensional location of each point on the surface of the sample. The computer formed, based on the three-dimensional locations, a three-dimensional model of the surface of FIG. 6, which is shown in FIGS. 7A-7C from three different perspectives.
[0067] The reconstructed surface in the images shown in FIGS. 7A-7C is realistic and accurate, and a number of quantities of interest could be computed from this surface, such as feature sharpness, size and orientation in the case of a web material such as an abrasive. However, FIG. 7C shows that there are several gaps or holes in the reconstructed surface. These holes are a result of the manner in which the samples were imaged. As shown schematically in FIG. 1, the parts of the surface on the backside of tall features on the sample (in this case, grains on the abrasive) can never be viewed by the camera due to the relatively low angle of view. This lack of data could potentially be alleviated through the use of two cameras viewing the sample simultaneously from different angles.
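By way of illustration only, the per-pixel depth-of-maximum-focus computation used in this example can be sketched as follows. The sketch assumes the registered images have already been converted to a stack of sharpness maps (e.g., with a modified-Laplacian metric) and refines the discrete peak with a three-point quadratic fit, one of the refinement options described above; the function name, the sharpness_maps variable, and the layer spacing dz are illustrative assumptions.

```python
import numpy as np

def depth_of_max_focus(volume):
    """Batch sketch: `volume` is a (n_layers, H, W) stack of sharpness maps for a
    registered image sequence.  Returns, per (x, y) location, a sub-layer estimate
    of the depth of maximum focus z_m from a quadratic fit around the discrete peak."""
    n, h, w = volume.shape
    k = np.argmax(volume, axis=0)                 # discrete peak layer per pixel
    k = np.clip(k, 1, n - 2)                      # keep a neighbour on each side
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    s_m1 = volume[k - 1, rows, cols]
    s_0 = volume[k, rows, cols]
    s_p1 = volume[k + 1, rows, cols]
    # Vertex of the parabola through the three samples bracketing the peak.
    denom = s_m1 - 2.0 * s_0 + s_p1
    offset = np.where(np.abs(denom) > 1e-12, 0.5 * (s_m1 - s_p1) / denom, 0.0)
    return k + np.clip(offset, -0.5, 0.5)

# Hypothetical use, with sharpness maps computed from the registered images of FIG. 6
# and dz the calibrated spacing between layers along z_c:
# z_m = depth_of_max_focus(np.stack(sharpness_maps))
# heights = z_m * dz
```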
Example 2
[0068] Several samples of an abrasive material were scanned by the incremental process described in this disclosure. The samples were also scanned by an off-line laser profilometer using a confocal sensor. Two surface profiles of each sample were then reconstructed from the data sets obtained from the different methods, and the results were compared by registering the two reconstructions using a variant of the Iterated Closest-Point (ICP) matching algorithm described in Chen and Medioni, Object Modeling by Registration of Multiple Range Images, Proceedings of the IEEE International Conference on Robotics and Automation, 1991. The surface height estimates zs for each location (x, y) on the samples were then compared. Using a lens with a magnification of 2, Sample 1 showed a median range residual value of 12 μm, while Sample 2 showed a median range residual value of 9 μm. Even with an imprecise registration, the scans from the incremental processing technique described above matched relatively closely to a scan taken by the off-line laser profilometer.
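As a sketch of the comparison metric reported in this example (not of the ICP registration itself, which is assumed to have already aligned the two reconstructions into a common coordinate system), the median range residual might be computed as follows; the nearest-(x, y) matching with a k-d tree is an implementation assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def median_range_residual(recon_pts, reference_pts):
    """Compare surface-height estimates z_s of two registered reconstructions.
    Both inputs are (N, 3) arrays of (x, y, z) points in a common coordinate system;
    each point of the incremental reconstruction is matched to the nearest (x, y)
    location in the profilometer scan and the median absolute z residual is returned."""
    tree = cKDTree(reference_pts[:, :2])
    _, idx = tree.query(recon_pts[:, :2], k=1)
    residuals = np.abs(recon_pts[:, 2] - reference_pts[idx, 2])
    return float(np.median(residuals))
```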
Example 3
[0069] In this example, the effect of the camera incidence angle Θ (FIG. 1) on the reconstructed 3D surface was evaluated by reconstructing 8 different samples (of various types), each from three different viewing angles: Θ = 22.3°, 38.1°, and 46.5° (the surface of the samples was moving toward the camera as shown in FIG. 1). Examples of 3D reconstructions of two different surfaces from these viewing angles of 22.3°, 38.1°, and 46.5° are shown in FIGS. 8A-C and 9A-C, respectively. Based on these results, as well as reconstructions of the other samples (not shown in FIGS. 8-9), some qualitative observations can be made.
[0070] First, surfaces reconstructed with smaller viewing angles exhibit larger holes in the estimated surface. This is especially pronounced behind tall peaks, as shown in FIG. 9A. This is to be expected, since more of the surface behind these peaks is occluded from the camera when Θ is small. The result is that the overall surface reconstruction is less complete than from higher viewing angles.
[0071] Second, it can also be observed that, while larger viewing angles (such as in FIGS. 8C and 9C) yield more complete reconstructions, they also result in a higher level of noise in the surface estimate. This is more apparent on steep vertical edges on the surface. This is most likely because the sensitivity to noise is increased by having fewer pixels on target on steep vertical edges when the viewing angle is closer to top-down.
[0072] Based on these observations, as well as subjective visual inspection of all the results of this experiment, it appears that the middle viewing angle (38.1°) yields the most pleasing results of all the configurations evaluated in this Example. Sequences reconstructed in this manner seem to strike a balance between completeness and low noise levels.
[0073] Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.

Claims

CLAIMS:
1. A method, comprising:
imaging a surface with at least one imaging sensor, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor comprises a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system;
registering a sequence of images of the surface;
stacking the registered images along a z direction in a camera coordinate system to form a volume;
determining a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction in the camera coordinate system;
determining, using the sharpness of focus values, a depth of maximum focus zm along the z direction in the camera coordinate system for each (x,y) location in the volume; and
determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface.
2. The method of claim 1, wherein images are registered by aligning a reference point on the surface.
3. The method of claim 1, further comprising forming, based on the three-dimensional locations, a three-dimensional model of the surface.
4. The method of claim 1, wherein the lens comprises a telecentric lens.
5. The method of claim 1, wherein, when the surface is moving toward a stationary imaging sensor, the viewing angle is less than 90°.
6. The method of claim 1, wherein the sharpness of focus value is determined by applying a modified Laplacian sharpness metric at each (x,y) location.
7. The method of claim 1, wherein the depth of each point on the surface is determined by fitting along the z direction a Gaussian curve to estimate the depths of maximum focus zm.
8. The method of claim 1, wherein the depth of each point on the surface is determined by fitting a quadratic function to the sharpness of focus values at each location (x,y) in the volume.
9. The method of claim 3, comprising applying a triangular meshing algorithm to the three dimensional point locations to form the model of the surface.
10. The method of claim 1, wherein the imaging sensor comprises a CCD or a CMOS camera.
11. A method, comprising:
capturing with an imaging sensor a sequence of images of a surface, wherein the surface and the imaging sensor are in relative translational motion, and wherein the imaging sensor comprises a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system;
aligning a reference point on the surface in each image in the sequence to form a registered sequence of images;
stacking the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume;
computing a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system;
computing, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume;
determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and optionally
constructing a three-dimensional model of the surface based on the three dimensional point locations.
12. The method of claim 11, wherein the sharpness of focus value is determined by applying a modified Laplacian sharpness metric at each (x,y) location.
13. The method of claim 11, wherein the depth of each point on the surface is determined by fitting along the z direction a Gaussian curve to estimate the depths of maximum focus zm.
14. The method of claim 11, wherein the depth of each point on the surface is determined by fitting a quadratic function to the sharpness of focus values at each location (x,y) in the volume.
15. The method of claim 11, comprising applying a triangular meshing algorithm to the three dimensional point locations to form the model of the surface.
16. An apparatus, comprising:
an imaging sensor comprising a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof;
a processor that:
aligns in each image in the sequence a reference point on the surface to form a registered sequence of images;
stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume;
computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system;
computes, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume;
determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and
constructs a three-dimensional model of the surface based on the three dimensional locations.
17. The apparatus of claim 16, wherein the surface is a web of material.
18. The apparatus of claim 16, further comprising a light source to illuminate the surface.
19. The apparatus of claim 16, wherein the sensor comprises a CCD or a CMOS camera.
20. The apparatus of claim 19, wherein the processor is internal to the camera.
21. The apparatus of claim 19, wherein the processor is remote from the camera.
22. A method, comprising:
positioning a stationary imaging sensor at a non-zero viewing angle with respect to a moving web of material, wherein the imaging sensor comprises a telecentric lens to image a surface of the moving web and form a sequence of images thereof;
processing the sequence of images to:
register the images;
stack the registered images along a z direction in a camera coordinate system to form a volume;
determine a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction in the camera coordinate system;
determine a depth of maximum focus zm along the z direction in the camera coordinate system for each (x,y) location in the volume; and
determine, based on the depths of maximum focus zm, a three dimensional location of each point on the surface of the moving web.
23. The method of claim 22, wherein the imaging sensor comprises a CCD or a CMOS camera.
24. The method of claim 22, wherein the processor is external to the CCD camera.
25. The method of claim 22, further comprising forming, based on the three-dimensional locations, a three-dimensional model of the surface of the moving web.
26. The method of claim 22, wherein the sharpness of focus value is determined by applying a modified Laplacian sharpness metric at each (x,y) location.
27. The method of claim 22, wherein the depth of each point on the surface is determined by fitting along the z direction a Gaussian curve to estimate the depths of maximum focus zm.
28. The method of claim 22, wherein the depth of each point on the surface is determined by fitting a quadratic function to the sharpness of focus values at each location (x,y) in the volume.
29. The method of claim 22, comprising applying a triangular meshing algorithm to the three dimensional point locations to form the model of the surface.
30. A method for inspecting a moving surface of a web material in real time and computing a three-dimensional model of the surface, the method comprising:
capturing with a stationary sensor a sequence of images of the surface, wherein the imaging sensor comprises a camera and a telecentric lens having a focal plane aligned at a nonzero viewing angle with respect to an x-y plane of a surface coordinate system;
aligning a reference point on the surface in each image in the sequence to form a registered sequence of images;
stacking the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume;
computing a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system;
computing, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume;
determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and
constructing the three-dimensional model of the surface based on the three dimensional locations.
31. The method of claim 30, wherein the sharpness of focus value is determined by applying a modified Laplacian sharpness metric at each (x,y) location.
32. The method of claim 30, wherein the depth of each point on the surface is determined by fitting along the z direction a Gaussian curve to estimate the depths of maximum focus zm.
33. The method of claim 30, wherein the depth of each point on the surface is determined by fitting a quadratic function to the sharpness of focus values at each location (x,y) in the volume.
34. The method of claim 30, comprising applying a triangular meshing algorithm to the three dimensional point locations to form the model of the surface.
35. An online computerized inspection system for inspecting web material in real time, the system comprising:
a stationary imaging sensor comprising a camera and a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to a plane of a moving surface, and wherein the sensor images the surface to form a sequence of images thereof;
a processor that:
aligns in each image in the sequence a reference point on the surface to form a registered sequence of images;
stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume;
computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system;
computes, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume;
determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and
constructs a three-dimensional model of the surface based on the three dimensional locations.
36. A non-transitory computer readable medium comprising software instructions to cause a computer processor to:
receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor comprising a camera and a telecentric lens having a focal plane aligned at a nonzero viewing angle with respect to an x-y plane of a surface coordinate system;
align a reference point on the surface in each image in the sequence to form a registered sequence of images;
stack the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume;
compute a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; compute, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume;
determine, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and
construct the three-dimensional model of the surface based on the three dimensional locations.
37. A method, comprising:
translating an imaging sensor relative to a surface, wherein the sensor comprises a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system;
imaging the surface with the imaging sensor to acquire a sequence of images;
estimating the three dimensional locations of points on the surface to provide a set of three dimensional points representing the surface; and
processing the set of three dimensional points to generate a range-map of the surface in a selected coordinate system.
38. A method, comprising:
(a) imaging a surface with at least one imaging sensor to acquire a sequence of images, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor comprises a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system;
(b) determining a sharpness of focus value for every pixel in a last image in the sequence of images;
(c) computing a y-coordinate in the surface coordinate system at which the focal plane intersects the y axis;
(d) based on the apparent shift of the surface in the last image, determining transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image;
(e) determining the three dimensional location in a camera coordinate system of all the transitional points on the surface;
(f) repeating steps (a) to (e) for each new image acquired by the imaging sensor; and (g) accumulating the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
39. The method of claim 38, wherein the sharpness of focus value is determined by applying a modified Laplacian sharpness metric.
40. The method of claim 38, wherein the three dimensional location of each transitional point on the surface is determined by fitting along the z direction in the camera coordinate system a Gaussian curve to estimate the depths of maximum focus zm.
41. The method of claim 38, wherein the three dimensional location of each transitional point on the surface is determined by fitting a quadratic function to the sharpness of focus values for each pixel.
42. The method of claim 38, further comprising forming a first range map of the translating surface by re-sampling the points in the point cloud on a rectangular grid in the camera coordinate system.
43. The method of claim 42, further comprising removing noise from the first range map.
44. The method of claim 38, further comprising rotating the first range map to the surface coordinate system.
45. The method of claim 44, further comprising forming a second range map by re-sampling the first range map on a grid in the surface coordinate system.
46. The method of claim 38, wherein, when the surface is moving toward a stationary imaging sensor, the viewing angle is about 38°.
47. The method of claim 38, wherein the lens is a telecentric lens.
48. An apparatus, comprising:
an imaging sensor comprising a lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof;
a processor that:
(a) determines a sharpness of focus value for every pixel in a last image in the sequence of images;
(b) computes a y-coordinate in a surface coordinate system at which the focal plane intersects the y axis;
(c) based on the apparent shift of the surface in the last image, determines transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image;
(d) determines the three dimensional location in a camera coordinate system of all the transitional points on the surface;
(e) repeats steps (a) to (d) for each new image acquired by the imaging sensor; and
(f) accumulates the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
49. The apparatus of claim 48, wherein the surface is a web of material.
50. The apparatus of claim 48, wherein the lens is a telecentric lens.
51. A method for inspecting a moving surface of a web material in real time and computing a three-dimensional model of the surface, the method comprising:
(a) capturing with a stationary sensor a sequence of images of the surface, wherein the imaging sensor comprises a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system;
(b) determining a sharpness of focus value for every pixel in a last image in the sequence of images;
(c) computing a y-coordinate in a surface coordinate system at which the focal plane intersects the y-axis; (d) based on the apparent shift of the surface in the last image, determining transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image;
(e) determining the three dimensional location in a camera coordinate system of all the transitional points on the surface;
(f) repeating steps (a) to (e) for each new image acquired by the imaging sensor; and
(g) accumulating the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
52. The method of claim 51, wherein the sharpness of focus value is determined by applying a modified Laplacian sharpness metric.
53. The method of claim 51, wherein the three dimensional location of each transitional point on the surface is determined by fitting along the z direction in the camera coordinate system a Gaussian curve to estimate the depths of maximum focus zm.
54. The method of claim 51, wherein the three dimensional location of each transitional point on the surface is determined by fitting a quadratic function to the sharpness of focus values for each pixel.
55. The method of claim 51, further comprising forming a first range map of the translating surface by re-sampling the points in the point cloud on a rectangular grid in the camera coordinate system.
56. The method of claim 55, further comprising removing noise from the first range map.
57. The method of claim 51, further comprising rotating the first range map to a surface coordinate system.
58. The method of claim 57, further comprising forming a second range map by re-sampling the first range map on a grid in the surface coordinate system.
59. The method of claim 51, wherein, when the surface is moving toward a stationary imaging sensor, the viewing angle is about 38°.
60. An online computerized inspection system for inspecting web material in real time, the system comprising:
a stationary imaging sensor comprising a camera and a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a moving surface, and wherein the sensor images the surface to form a sequence of images thereof;
a processor that:
(a) determines a sharpness of focus value for every pixel in a last image in the sequence of images;
(b) computes a y-coordinate in a surface coordinate system at which the focal plane intersects the y axis;
(c) based on the apparent shift of the surface in the last image, determines transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image;
(d) determines the three dimensional location in a camera coordinate system of all the transitional points on the surface;
(e) repeats steps (a) to (d) for each new image acquired by the imaging sensor; and
(f) accumulates the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
61. A non-transitory computer readable medium comprising software instructions to cause a computer processor to:
(a) receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor comprising a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system;
(b) determine a sharpness of focus value for every pixel in a last image in the sequence of images;
(c) compute a y-coordinate in a surface coordinate system at which the focal plane intersects the y-axis; (d) based on the apparent shift of the surface in the last image, determine transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image;
(e) determine the three dimensional location in a camera coordinate system of all the transitional points on the surface;
(f) repeat steps (a) to (e) for each new image acquired by the imaging sensor; and
(g) accumulate the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
PCT/US2013/023789 2012-01-31 2013-01-30 Method and apparatus for measuring the three dimensional structure of a surface WO2013116299A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2014554952A JP2015513070A (en) 2012-01-31 2013-01-30 Method and apparatus for measuring the three-dimensional structure of a surface
US14/375,002 US20150009301A1 (en) 2012-01-31 2013-01-30 Method and apparatus for measuring the three dimensional structure of a surface
KR1020147023980A KR20140116551A (en) 2012-01-31 2013-01-30 Method and apparatus for measuring the three dimensional structure of a surface
CN201380007293.XA CN104254768A (en) 2012-01-31 2013-01-30 Method and apparatus for measuring the three dimensional structure of a surface
EP13743682.0A EP2810054A4 (en) 2012-01-31 2013-01-30 Method and apparatus for measuring the three dimensional structure of a surface
BR112014018573A BR112014018573A8 (en) 2012-01-31 2013-01-30 METHOD AND APPARATUS FOR MEASURING THE THREE-DIMENSIONAL STRUCTURE OF A SURFACE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261593197P 2012-01-31 2012-01-31
US61/593,197 2012-01-31

Publications (1)

Publication Number Publication Date
WO2013116299A1 true WO2013116299A1 (en) 2013-08-08

Family

ID=48905775

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/023789 WO2013116299A1 (en) 2012-01-31 2013-01-30 Method and apparatus for measuring the three dimensional structure of a surface

Country Status (7)

Country Link
US (1) US20150009301A1 (en)
EP (1) EP2810054A4 (en)
JP (1) JP2015513070A (en)
KR (1) KR20140116551A (en)
CN (1) CN104254768A (en)
BR (1) BR112014018573A8 (en)
WO (1) WO2013116299A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463964A (en) * 2014-12-12 2015-03-25 英华达(上海)科技有限公司 Method and equipment for acquiring three-dimensional model of object
US9291877B2 (en) 2012-11-15 2016-03-22 Og Technologies, Inc. Method and apparatus for uniformly focused ring light
CN109886961A (en) * 2019-03-27 2019-06-14 重庆交通大学 Medium-and-large-sized measurement of cargo measurement method based on depth image
WO2019211515A3 (en) * 2018-05-03 2020-01-16 Valmet Automation Oy Measurement of elastic modulus of moving web
WO2022074171A1 (en) 2020-10-07 2022-04-14 Ash Technologies Ltd., System and method for digital image processing
DE102021111706A1 (en) 2021-05-05 2022-11-10 Carl Zeiss Industrielle Messtechnik Gmbh Method, measuring device and computer program product
CN116045852A (en) * 2023-03-31 2023-05-02 板石智能科技(深圳)有限公司 Three-dimensional morphology model determining method and device and three-dimensional morphology measuring equipment

Families Citing this family (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8908995B2 (en) 2009-01-12 2014-12-09 Intermec Ip Corp. Semi-automatic dimensioning with imager on a portable device
US9779546B2 (en) 2012-05-04 2017-10-03 Intermec Ip Corp. Volume dimensioning systems and methods
US10007858B2 (en) 2012-05-15 2018-06-26 Honeywell International Inc. Terminals and methods for dimensioning objects
JP6518187B2 (en) * 2012-05-22 2019-05-22 ユニリーバー・ナームローゼ・ベンノートシヤープ Personal care composition
US10321127B2 (en) 2012-08-20 2019-06-11 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US9939259B2 (en) 2012-10-04 2018-04-10 Hand Held Products, Inc. Measuring object dimensions using mobile computer
US9841311B2 (en) 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9080856B2 (en) 2013-03-13 2015-07-14 Intermec Ip Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US10228452B2 (en) 2013-06-07 2019-03-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US9464885B2 (en) 2013-08-30 2016-10-11 Hand Held Products, Inc. System and method for package dimensioning
WO2015036432A1 (en) * 2013-09-11 2015-03-19 Novartis Ag Contact lens inspection system and method
US9823059B2 (en) 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
US9779276B2 (en) 2014-10-10 2017-10-03 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US10775165B2 (en) 2014-10-10 2020-09-15 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10810715B2 (en) 2014-10-10 2020-10-20 Hand Held Products, Inc System and method for picking validation
US9762793B2 (en) 2014-10-21 2017-09-12 Hand Held Products, Inc. System and method for dimensioning
US9557166B2 (en) * 2014-10-21 2017-01-31 Hand Held Products, Inc. Dimensioning system with multipath interference mitigation
US10060729B2 (en) 2014-10-21 2018-08-28 Hand Held Products, Inc. Handheld dimensioner with data-quality indication
US9752864B2 (en) 2014-10-21 2017-09-05 Hand Held Products, Inc. Handheld dimensioning system with feedback
US9897434B2 (en) 2014-10-21 2018-02-20 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
EP3209523A4 (en) 2014-10-24 2018-04-25 Magik Eye Inc. Distance sensor
EP3295118A4 (en) * 2015-05-10 2018-11-21 Magik Eye Inc. Distance sensor
US10488192B2 (en) 2015-05-10 2019-11-26 Magik Eye Inc. Distance sensor projecting parallel patterns
US9786101B2 (en) 2015-05-19 2017-10-10 Hand Held Products, Inc. Evaluating image values
US10066982B2 (en) 2015-06-16 2018-09-04 Hand Held Products, Inc. Calibrating a volume dimensioner
US20160377414A1 (en) 2015-06-23 2016-12-29 Hand Held Products, Inc. Optical pattern projector
US9857167B2 (en) 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US9835486B2 (en) 2015-07-07 2017-12-05 Hand Held Products, Inc. Mobile dimensioner apparatus for use in commerce
EP3396313B1 (en) 2015-07-15 2020-10-21 Hand Held Products, Inc. Mobile dimensioning method and device with dynamic accuracy compatible with nist standard
US20170017301A1 (en) 2015-07-16 2017-01-19 Hand Held Products, Inc. Adjusting dimensioning results using augmented reality
US10094650B2 (en) 2015-07-16 2018-10-09 Hand Held Products, Inc. Dimensioning and imaging items
US10249030B2 (en) 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US10225544B2 (en) 2015-11-19 2019-03-05 Hand Held Products, Inc. High resolution dot pattern
US10025314B2 (en) 2016-01-27 2018-07-17 Hand Held Products, Inc. Vehicle positioning and object avoidance
JP6525271B2 (en) * 2016-03-28 2019-06-05 国立研究開発法人農業・食品産業技術総合研究機構 Residual feed measuring device and program for measuring residual feed
KR101804051B1 (en) * 2016-05-17 2017-12-01 유광룡 Centering apparatus for the inspection object
US10339352B2 (en) 2016-06-03 2019-07-02 Hand Held Products, Inc. Wearable metrological apparatus
US9940721B2 (en) 2016-06-10 2018-04-10 Hand Held Products, Inc. Scene change detection in a dimensioner
US10163216B2 (en) 2016-06-15 2018-12-25 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10066986B2 (en) * 2016-08-31 2018-09-04 GM Global Technology Operations LLC Light emitting sensor having a plurality of secondary lenses of a moveable control structure for controlling the passage of light between a plurality of light emitters and a primary lens
US10265850B2 (en) * 2016-11-03 2019-04-23 General Electric Company Robotic sensing apparatus and methods of sensor planning
JP6493811B2 (en) * 2016-11-19 2019-04-03 スミックス株式会社 Pattern height inspection device and inspection method
WO2018106671A2 (en) 2016-12-07 2018-06-14 Magik Eye Inc. Distance sensor including adjustable focus imaging sensor
US10909708B2 (en) 2016-12-09 2021-02-02 Hand Held Products, Inc. Calibrating a dimensioner using ratios of measurable parameters of optic ally-perceptible geometric elements
US20200080838A1 (en) * 2017-01-20 2020-03-12 Intekplus Co.,Ltd. Apparatus and method for measuring three-dimensional shape
US11047672B2 (en) 2017-03-28 2021-06-29 Hand Held Products, Inc. System for optically dimensioning
EP3635619A4 (en) * 2017-05-07 2021-01-20 Manam Applications Ltd. System and method for construction 3d modeling and analysis
US10733748B2 (en) 2017-07-24 2020-08-04 Hand Held Products, Inc. Dual-pattern optical 3D dimensioning
KR101881702B1 (en) * 2017-08-18 2018-07-24 성균관대학교산학협력단 An apparatus to design add-on lens assembly and method thereof
KR20200054326A (en) 2017-10-08 2020-05-19 매직 아이 인코포레이티드 Distance measurement using hardness grid pattern
WO2019070806A1 (en) 2017-10-08 2019-04-11 Magik Eye Inc. Calibrating a sensor system including multiple movable sensors
US10679076B2 (en) 2017-10-22 2020-06-09 Magik Eye Inc. Adjusting the projection system of a distance sensor to optimize a beam layout
KR20200123849A (en) 2018-03-20 2020-10-30 매직 아이 인코포레이티드 Distance measurement using a projection pattern of variable densities
JP7354133B2 (en) 2018-03-20 2023-10-02 マジック アイ インコーポレイテッド Camera exposure adjustment for 3D depth sensing and 2D imaging
US10518480B2 (en) * 2018-04-02 2019-12-31 Nanotronics Imaging, Inc. Systems, methods, and media for artificial intelligence feedback control in additive manufacturing
US11084225B2 (en) 2018-04-02 2021-08-10 Nanotronics Imaging, Inc. Systems, methods, and media for artificial intelligence process control in additive manufacturing
US10584962B2 (en) 2018-05-01 2020-03-10 Hand Held Products, Inc System and method for validating physical-item security
EP3803266A4 (en) 2018-06-06 2022-03-09 Magik Eye Inc. Distance measurement using high density projection patterns
US10753734B2 (en) * 2018-06-08 2020-08-25 Dentsply Sirona Inc. Device, method and system for generating dynamic projection patterns in a confocal camera
WO2020033169A1 (en) 2018-08-07 2020-02-13 Magik Eye Inc. Baffles for three-dimensional sensors having spherical fields of view
US11483503B2 (en) 2019-01-20 2022-10-25 Magik Eye Inc. Three-dimensional sensor including bandpass filter having multiple passbands
DE102019102231A1 (en) * 2019-01-29 2020-08-13 Senswork Gmbh Device for detecting a three-dimensional structure
CN109870459B (en) * 2019-02-21 2021-07-06 武汉光谷卓越科技股份有限公司 Track slab crack detection method for ballastless track
US11474209B2 (en) 2019-03-25 2022-10-18 Magik Eye Inc. Distance measurement using high density projection patterns
CN110108230B (en) * 2019-05-06 2021-04-16 南京理工大学 Binary grating projection defocus degree evaluation method based on image difference and LM iteration
CN114073075B (en) 2019-05-12 2024-06-18 魔眼公司 Mapping three-dimensional depth map data onto two-dimensional images
KR20220054673A (en) 2019-09-10 2022-05-03 나노트로닉스 이미징, 인코포레이티드 Systems, methods and media for manufacturing processes
US11639846B2 (en) 2019-09-27 2023-05-02 Honeywell International Inc. Dual-pattern optical 3D dimensioning
CN110705097B (en) * 2019-09-29 2023-04-14 中国航发北京航空材料研究院 Method for removing weight of nondestructive testing data of aeroengine rotating part
CN110715616B (en) * 2019-10-14 2021-09-07 中国科学院光电技术研究所 Structured light micro-nano three-dimensional morphology measurement method based on focusing evaluation algorithm
EP4065929A4 (en) 2019-12-01 2023-12-06 Magik Eye Inc. Enhancing triangulation-based three-dimensional distance measurements with time of flight information
JP2023508501A (en) 2019-12-29 2023-03-02 マジック アイ インコーポレイテッド Association between 3D coordinates and 2D feature points
US11688088B2 (en) 2020-01-05 2023-06-27 Magik Eye Inc. Transferring the coordinate system of a three-dimensional camera to the incident point of a two-dimensional camera
KR102354359B1 (en) * 2020-02-11 2022-01-21 한국전자통신연구원 Method of removing outlier of point cloud and appraratus implementing the same
CN113188474B (en) * 2021-05-06 2022-09-23 山西大学 Image sequence acquisition system for imaging of high-light-reflection material complex object and three-dimensional shape reconstruction method thereof
WO2022237544A1 (en) * 2021-05-11 2022-11-17 梅卡曼德(北京)机器人科技有限公司 Trajectory generation method and apparatus, and electronic device and storage medium
KR102529593B1 (en) * 2022-10-25 2023-05-08 성형원 Device and method acquiring 3D information about an object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020054223A (en) * 2000-12-27 2002-07-06 오길록 An Apparatus and Method to Measuring Dimensions of 3D Object on a Moving Conveyor
US7177740B1 (en) * 2005-11-10 2007-02-13 Beijing University Of Aeronautics And Astronautics Method and apparatus for dynamic measuring three-dimensional parameters of tire with laser vision
US20090245616A1 (en) * 2008-03-26 2009-10-01 De La Ballina Freres Method and apparatus for visiometric in-line product inspection
US20110193953A1 (en) * 2010-02-05 2011-08-11 Applied Vision Company, Llc System and method for estimating the height of an object using tomosynthesis-like techniques
JP2011174879A (en) * 2010-02-25 2011-09-08 Canon Inc Apparatus and method of estimating position and orientation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6603103B1 (en) * 1998-07-08 2003-08-05 Ppt Vision, Inc. Circuit for machine-vision system
KR101199475B1 (en) * 2008-12-22 2012-11-09 한국전자통신연구원 Method and apparatus for reconstruction 3 dimension model
US20110304618A1 (en) * 2010-06-14 2011-12-15 Qualcomm Incorporated Calculating disparity for three-dimensional images
JP5663331B2 (en) * 2011-01-31 2015-02-04 オリンパス株式会社 Control apparatus, endoscope apparatus, diaphragm control method, and program
CN102314683B (en) * 2011-07-15 2013-01-16 清华大学 Computational imaging method and imaging system based on nonplanar image sensor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020054223A (en) * 2000-12-27 2002-07-06 오길록 An Apparatus and Method to Measuring Dimensions of 3D Object on a Moving Conveyor
US7177740B1 (en) * 2005-11-10 2007-02-13 Beijing University Of Aeronautics And Astronautics Method and apparatus for dynamic measuring three-dimensional parameters of tire with laser vision
US20090245616A1 (en) * 2008-03-26 2009-10-01 De La Ballina Freres Method and apparatus for visiometric in-line product inspection
US20110193953A1 (en) * 2010-02-05 2011-08-11 Applied Vision Company, Llc System and method for estimating the height of an object using tomosynthesis-like techniques
JP2011174879A (en) * 2010-02-25 2011-09-08 Canon Inc Apparatus and method of estimating position and orientation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2810054A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9291877B2 (en) 2012-11-15 2016-03-22 Og Technologies, Inc. Method and apparatus for uniformly focused ring light
US9594293B2 (en) 2012-11-15 2017-03-14 Og Technologies, Inc. Method and apparatus for uniformly focused ring light
CN104463964A (en) * 2014-12-12 2015-03-25 英华达(上海)科技有限公司 Method and equipment for acquiring three-dimensional model of object
TWI607862B (en) * 2014-12-12 2017-12-11 英華達股份有限公司 Method and apparatus of generating a 3-d model from a, object
CN112074717A (en) * 2018-05-03 2020-12-11 维美德自动化有限公司 Measurement of modulus of elasticity of moving web
WO2019211515A3 (en) * 2018-05-03 2020-01-16 Valmet Automation Oy Measurement of elastic modulus of moving web
US11828736B2 (en) 2018-05-03 2023-11-28 Valmet Automation Oy Measurement of elastic modulus of moving web
CN112074717B (en) * 2018-05-03 2024-01-19 维美德自动化有限公司 Measurement of elastic modulus of moving web
CN109886961A (en) * 2019-03-27 2019-06-14 重庆交通大学 Medium-and-large-sized measurement of cargo measurement method based on depth image
CN109886961B (en) * 2019-03-27 2023-04-11 重庆交通大学 Medium and large cargo volume measuring method based on depth image
WO2022074171A1 (en) 2020-10-07 2022-04-14 Ash Technologies Ltd., System and method for digital image processing
DE102021111706A1 (en) 2021-05-05 2022-11-10 Carl Zeiss Industrielle Messtechnik Gmbh Method, measuring device and computer program product
CN116045852A (en) * 2023-03-31 2023-05-02 板石智能科技(深圳)有限公司 Three-dimensional morphology model determining method and device and three-dimensional morphology measuring equipment

Also Published As

Publication number Publication date
BR112014018573A2 (en) 2017-06-20
EP2810054A4 (en) 2015-09-30
BR112014018573A8 (en) 2017-07-11
KR20140116551A (en) 2014-10-02
US20150009301A1 (en) 2015-01-08
JP2015513070A (en) 2015-04-30
EP2810054A1 (en) 2014-12-10
CN104254768A (en) 2014-12-31

Similar Documents

Publication Publication Date Title
US20150009301A1 (en) Method and apparatus for measuring the three dimensional structure of a surface
Orteu et al. Multiple-camera instrumentation of a single point incremental forming process pilot for shape and 3D displacement measurements: methodology and results
CN104655011B (en) A kind of noncontact optical measurement method of irregular convex surface object volume
US8582824B2 (en) Cell feature extraction and labeling thereof
Percoco et al. Experimental investigation on camera calibration for 3D photogrammetric scanning of micro-features for micrometric resolution
Traxler et al. Experimental comparison of optical inline 3D measurement and inspection systems
Liu et al. Real-time 3D surface measurement in additive manufacturing using deep learning
Shaheen et al. Characterisation of a multi-view fringe projection system based on the stereo matching of rectified phase maps
TW201445133A (en) Online detection method for three dimensional imperfection of panel
Audfray et al. A novel approach for 3D part inspection using laser-plane sensors
Cheng et al. An effective coaxiality measurement for twist drill based on line structured light sensor
Hodgson et al. Novel metrics and methodology for the characterisation of 3D imaging systems
US20140362371A1 (en) Sensor for measuring surface non-uniformity
Ding et al. Automatic 3D reconstruction of SEM images based on Nano-robotic manipulation and epipolar plane images
US20140240720A1 (en) Linewidth measurement system
Setti et al. Shape measurement system for single point incremental forming (SPIF) manufacts by using trinocular vision and random pattern
US20220011238A1 (en) Method and system for characterizing surface uniformity
Qi et al. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing
Helmli et al. Ultra high speed 3D measurement with the focus variation method
Percoco et al. 3D image based modelling for inspection of objects with micro-features, using inaccurate calibration patterns: an experimental contribution
Munaro et al. Fast 2.5 D model reconstruction of assembled parts with high occlusion for completeness inspection
Zolfaghari et al. On-line 3D geometric model reconstruction
Kubátová et al. Data Preparing for Reverse Engineering
To et al. On-line measurement of wrinkle using machine vision
Hu et al. Edge measurement using stereovision and phase-shifting methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13743682

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2013743682

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2014554952

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112014018573

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 20147023980

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 112014018573

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20140728