WO2000011873A1 - Method and apparatus for inspection of printed circuit boards - Google Patents

Method and apparatus for inspection of printed circuit boards

Info

Publication number
WO2000011873A1
WO2000011873A1 (PCT/IL1999/000450)
Authority
WO
WIPO (PCT)
Prior art keywords
sensors
image
subsystem
article
sensor
Application number
PCT/IL1999/000450
Other languages
French (fr)
Inventor
Peter Grobgeld
Dan Magal
Zeev Smilansky
Ronen Hahn
Uri Gold
Original Assignee
Orbotech Ltd.
Application filed by Orbotech Ltd. filed Critical Orbotech Ltd.
Priority to AU53840/99A (patent AU5384099A)
Priority to EP99939581A (patent EP1108329A1)
Publication of WO2000011873A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956 Inspecting patterns on the surface of objects

Definitions

  • the present invention relates to article inspection systems and methods generally, and more particularly to systems and methods for inspecting generally two dimensional articles, such as printed circuit boards
  • an image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication in two dimensions of distortions in the output of the plurality of sensors, the output indication being employed to generate a function which maps the locations viewed by the sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the output indication
  • the plurality of sensors include plural sensors having different spectral sensitivities
  • the distortion correction subsystem performs non-zero'th order interpolation of pixels in the outputs of the plurality of sensors
  • the plurality of sensors include sensors having differing spectral sensitivities and the function is dependent on differing accumulation times employed for the sensors having differing spectral sensitivities
  • the distortion correction subsystem compensates for variations in pixel shape in the plurality of sensors
  • the distortion correction subsystem is operative to an accuracy of better than 5% of pixel size of the multiplicity of sensor elements
  • an image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication of distortions in the output of the plurality of sensors, the output indication being employed to generate a correction function, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the correction function, the distortion correction subsystem being operative to an accuracy of better than 5% of pixel size of the multiplicity of sensor elements
  • an image acquisition system including a plurality of sensors, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors while being moved relative to the plurality of sensors in a direction of relative movement, for providing an output indication of distortions in the output of the plurality of sensors, the pre-scan calibration system being operative to correlate images of at least one target on the test pattern as seen by the plurality of sensors, thereby to determine the relative orientation of the plurality of sensors, and a distortion correction subsystem operative to correct the distortions by employing the output indication
  • the pre-scan calibration subsystem also is operative to provide an output indication of the orientation of the plurality of sensors relative to the scan direction
  • each of the plurality of sensors includes a multiplicity of sensor elements
  • the pre-scan calibration subsystem also is operative to determine the pixel size characteristic of each of the multiplicity of sensor elements of each of the plurality of sensors
  • the pre-scan calibration subsystem is operative to determine the pixel size characteristic of each of the multiplicity of sensor elements of each of the plurality of sensors by causing the plurality of sensors to view a grid formed of a multiplicity of parallel uniformly spaced lines, formed on the test pattern
  • an image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication in two dimensions of distortions in the output of the plurality of sensors, the output indication being employed to generate a function which maps the locations viewed by the sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the output indication
  • an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article,
  • the camera assembly includes a plurality of sensor assemblies, self calibration apparatus for determining a geometrical relationship between the sensor assemblies, and sensor output modification apparatus for modifying outputs of the plurality of sensor assemblies based on the geometrical relationship between the sensor assemblies, the sensor output modification apparatus including electronic interpolation apparatus operative to perform non-zero'th order interpolation of pixels in the outputs of the plurality of sensor assemblies
  • an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article,
  • the camera assembly includes a plurality of sensor assemblies, self calibration apparatus for determining a geometrical relationship between the sensor assemblies, and sensor output modification apparatus for modifying outputs of the plurality of sensor assemblies based on the geometrical relationship between the sensor assemblies, the sensor output modification apparatus being operative to modify the outputs of the plurality of sensor assemblies to sub-pixel accuracy
  • an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes at least one sensor assembly, and sensor output modification apparatus for modifying at least one output of the at least one sensor assembly based at least in part on an optical distortion associated with the at least one sensor assembly
  • the optical distortion includes magnification distortion
  • the optical distortion includes chromatic aberration
  • an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes at least one sensor assembly, sensor output modification apparatus for modifying at least one output of the at least one sensor assembly, the sensor output modification apparatus including a function generator which generates a function which maps locations on the sensor assembly to a collection of scan locations
  • an article inspection system including a camera assembly operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes a user interface which enables a user to select resolution of the image acquired by the camera assembly, an electro-optical sensor assembly, and an electronic resolution modifier operative downstream of the electro-optical sensor assembly
  • Fig 1 is a simplified block diagram illustration of an article inspection system constructed and operative in accordance with a preferred embodiment of the present invention
  • Fig 2 is a simplified illustration of parts of a preferred test pattern, certain portions of which are not drawn to scale, along with a simplified indication of the fields of view of individual line sensors viewing the test pattern,
  • Fig 3 is a simplified block diagram illustration of mapping function generator circuitry forming part of the system of Fig 1
  • Fig 4A is a simplified flow chart illustrating operation of pixel size and shape determination functionality of the mapping function generator circuitry of Fig 3,
  • Fig 4B is a simplified illustration of geometrical distortion addressed by the functionality of Fig 4A
  • Fig 4C is a simplified semi-pictorial, semi-graphical illustration of the functionality described in the flowchart of Fig 4A
  • Fig 5A is a simplified flow chart illustrating operation of test pattern angle determination functionality of the mapping function generator circuitry of Fig 3,
  • Fig 5B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 5A
  • Fig 5C is a simplified semi-pictorial, semi-graphical illustration of the functionality described in the flowchart of Fig 5A
  • Fig 6A is a simplified flow chart illustrating operation of Determination of Relative Orientation of Sensors by Correlating Images of Test Pattern Targets at Edges of the Field of View of Sensors functionality of the mapping function generator circuitry of Fig 3,
  • Fig 6B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 6A,
  • Fig 6C is a simplified semi-pictorial, semi-graphical illustration of the functionality described in the flowchart of Fig 6A,
  • Fig 7A is a simplified flow chart illustrating operation of Determination of X Overlap and Y Offset of Sensors by Correlation of Adjacent Images functionality of the mapping function generator circuitry of Fig 3,
  • Fig 7B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 7A
  • Fig 7C is a simplified illustration of the functionality described in the flowchart of Fig 7A.
  • Fig 8A is a simplified flow chart illustrating operation of Determination of X and Y Offsets for Multiple Colors functionality of the mapping function generator circuitry of Fig 3,
  • Fig 8B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 8A,
  • Fig 8C is a simplified illustration of the functionality described in the flowchart of Fig 8A, Figs 9A and 9B, taken together, are simplified flowchart illustrations of a preferred method of implementing mapping function generator circuitry forming part of the system of Fig 1,
  • Fig 10A is a simplified illustration of acquisition of a target image by multiple cameras under ideal conditions
  • Fig 10B is a simplified illustration of image buffers for storing the acquired image of Fig 10A
  • Fig 11A is a simplified illustration of multiple cameras acquiring an image of a target where the camera fields of view are mutually skewed and overlapped
  • Fig 11B is a simplified illustration of image buffers for storing the acquired image of Fig 11A
  • Fig 12 is a simplified illustration useful in understanding Y-resampling functionality of the image correction circuitry 110 of Fig 1,
  • Figs 13 and 14, taken together, are simplified illustrations useful in understanding overlap correction functionality of the image correction circuitry 110 of Fig 1,
  • Figs 15 and 16, taken together, are simplified illustrations useful in understanding X-resampling functionality of the image correction circuitry 110 of Fig 1,
  • Fig 17 is a simplified illustration useful in understanding aspects of accumulation shift correction functionality of the image correction circuitry 110 of Fig 1
  • Fig 1 is a simplified block diagram illustration of an inspection system constructed and operative in accordance with a preferred embodiment of the present invention
  • the inspection system of Fig 1 comprises a sensor array 100, typically comprising multiple multi-pixel line sensors oriented such that their fields of view are in partially overlapping, mutually skewed arrangement and, therefore, require correction
  • the multi-pixel line sensors are typically housed within a camera, such as a CCD camera, having electronic shutters
  • a conveyor 102 is arranged to transport an article to be inspected past the sensor array 100 in a transport direction indicated by an arrow 104
  • mapping function generator 106 is arranged to receive outputs from the sensor array 100 when a test pattern 108 is being inspected by the sensor array
  • the mapping function generator 106 provides a correction output to correction circuitry 110, which employs the mapping function generated by mapping function generator 106
  • Circuitry 110 receives outputs from the sensor array 100 when an article to be inspected, such as a printed circuit board 111, is being inspected by the sensor array 100
  • Initially, test pattern 108 is inspected by the sensor array 100, whereupon mapping function generator 106 generates the information required by correction circuitry 110 to correct the outputs of the sensor array
  • Multiple articles to be inspected may then be inspected and thereafter, intermittently during the inspection of such articles, test pattern 108 may again be inspected by the sensor array 100 to provide updated calibration
  • test pattern 108 is preferably inspected about once per month of continuing operation of the inspection system
  • Image correction circuitry 110 is operative to employ the correction input received from mapping function generator 106 to correct the outputs received from sensor array 100 and provides a corrected sensor array output to segmentation circuitry 112. Segmentation circuitry 112 provides a segmentation output indication dividing all areas on the image of the article represented by the corrected sensor array output into one of typically two categories. For example, in the case of inspection of printed circuit boards, every location on the image represented by the corrected sensor array output is identified by the segmentation output indication as being either laminate or copper
  • Image processing circuitry 114 is preferably a morphology-based system, but may alternatively be based on a bit map, a net list, or any other suitable input. Circuitry 114 provides an image processing output which identifies various features of the image represented by the corrected sensor array output and the locations of such features. In the case of printed circuit boards, the features are typically pads, conductor junctions, open ends, and other printed circuit board elements
  • the image processing output of circuitry 114 is supplied to feature registration circuitry 116, which maps the coordinate system of the image processing output of circuitry 114 onto a feature reference coordinate system, in accordance with information supplied by a reference input source 118
  • the output of registration circuitry 116 and an output of reference input source 118 are supplied to feature comparison circuitry 120, which compares the mapped image processing output of circuitry 114 with a reference stored in source 118 and provides a defect indication which is supplied to a defect indication output generator 122
  • Fig 2 is a simplified illustration of parts of a preferred test pattern 108, which is viewed by the sensor array 100
  • the uniformly spaced parallel inclined lines 132 are preferably angled at a small angle α having a tangent of about 0.05 with respect to the transport direction 104
  • the angular orientation determinator 134 is preferably nearly rectangular in shape; however, one of the edges of the angular orientation determinator is preferably angled at a small angle β having a tangent of about 0.0156 with respect to the transport direction 104
  • Fig 3 is a simplified block diagram illustration of the mapping function generator 106 of Fig 1. Outputs 200, representing images of targets on test pattern 108 (Figs 1 and 2) which is being inspected by the sensor array 100 (Fig 1), are supplied to the mapping function generator 106, which carries out the following functions: Pixel Size and Shape Function Determination for a Single Color 202,
  • the parameters thus determined are supplied to a geometric polynomial generator 212 which preferably provides a function which maps the locations viewed by individual elements of sensor array 100 (Fig 1) in at least two dimensions
  • Fig 4A is a simplified flow chart illustrating operation of the Pixel Size and Shape Function Determination circuitry 202 of Fig 3, and Fig 4B illustrates a distortion sought to be overcome by the functionality of circuitry 202
  • the apparent dimensions of identical features as seen by camera 124 may vary depending on the position of the feature in the field of view of the camera for a given field of view angle θ
  • an "on-axis" feature 140 which is directly in front of the camera and in the illustration spans the field of view of camera 124 is perceived to have a width of "d" pixels
  • an identical feature 142, when located at the edge of the field of view, does not span the field of view of camera 124 and is perceived to have a width of "d - δ" pixels
  • row 130 of parallel uniformly spaced inclined lines 132 of test pattern target 108 (Fig 2) is viewed by a plurality of cameras 124, of which only one is shown in Fig 4C
  • Each camera 124 acquires an image of a part of row 130, an enlarged part of which is shown at reference numeral 150
  • the image is preferably acquired in a single color, such as red. Alternatively, the image may be acquired in several colors, one of which may be used for the purpose of calculating the pixel size and shape
  • the angle α at which images of the lines 132 are inclined with respect to the transport direction 104 is measured
  • the images of the lines are indicated by reference numerals 152
  • the separation between each adjacent line 132 in row 130 is fixed and predetermined, and the location of each line 132 in row 130 along an axis 105 that is perpendicular to transport direction 104 is known with respect to an arbitrarily chosen one of the lines
  • each line 152 produces a local minimum in graph 160
  • the separation of adjacent lines 152 in the camera output may be determined by measuring the distance between adjacent points of inflection before each local minimum; reference numeral 154' indicates the point of inflection corresponding to line location 154
  • the X-axis of graph 160 represents the serial number of each individual diode in a linear array of diodes in camera 124, while the Y-axis in graph 160 represents the summation of the intensity L of the image scanned in a direction angled by angle α with respect to direction 105
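As an illustration of the measurement just described, the sketch below locates the local minima that the imaged lines 152 produce in a summed intensity profile such as graph 160, and converts their spacing into a per-diode pixel size. It is a minimal sketch only: the function names, the parabolic sub-pixel refinement, and the assumption that the line pitch is supplied in millimetres are ours, not the patent's.

```python
# Hypothetical sketch: estimating per-diode pixel size from the summed
# intensity profile of the inclined-line grid of row 130.
import numpy as np

def line_centers(profile: np.ndarray) -> np.ndarray:
    """Return sub-pixel positions of the local minima that each
    imaged line 152 produces in an intensity graph such as 160."""
    centers = []
    for i in range(1, len(profile) - 1):
        if profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]:
            # parabolic fit through three samples for sub-pixel accuracy
            a = profile[i - 1] - 2 * profile[i] + profile[i + 1]
            b = profile[i + 1] - profile[i - 1]
            centers.append(i - 0.5 * b / a if a != 0 else float(i))
    return np.array(centers)

def pixel_size_per_diode(profile: np.ndarray, line_pitch_mm: float):
    """The known metric pitch between lines 132, divided by the measured
    diode-index separation, gives the local pixel size along the array."""
    centers = line_centers(profile)
    sizes = line_pitch_mm / np.diff(centers)       # mm per diode, per interval
    midpoints = 0.5 * (centers[:-1] + centers[1:]) # where each sample applies
    return midpoints, sizes                        # sample points of S(d)
```

Sampling the ratio of known metric pitch to measured diode spacing at many points along the array yields samples of the pixel size and shape function S(d) referred to hereinbelow.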
  • Fig 5A is a simplified flow chart illustrating operation of the Test Pattern Angle Determination circuitry 204 of Fig 3, and Fig 5B illustrates the distortion sought to be overcome by the functionality of circuitry 204
  • the entire test pattern 108 is normally, in reality, not perfectly aligned with the transport direction 104, but rather is offset therefrom by an angle β
  • test pattern target 108, comprising angular orientation determinator 134, having edge 182 of known aspect angle β relative to additional objects in the test pattern, is viewed by camera 124
  • the camera 124 acquires an image of angular orientation determinator 134, an enlarged portion of which is shown at reference numeral 184 in Fig 5C
  • An aspect angle β* of an inclined edge 186 of the image of the determinator 134 is calculated by conventional techniques based on measurements carried out on each raster line of the image
  • the thus-determined deviation is employed for correction in circuitry 110 (Fig 1)
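A rough sketch of the angle measurement described above, under the assumption that edge 186 is located on each raster line by a simple threshold crossing on a normalized image; the patent refers only to "conventional techniques", so the thresholding and least-squares fit here are illustrative choices.

```python
# Hypothetical sketch: measuring the aspect angle of inclined edge 186 by
# locating the edge on each raster line and fitting a straight line.
import numpy as np

def aspect_angle(image: np.ndarray, threshold: float = 0.5) -> float:
    """image: rows are raster lines, intensities normalized to [0, 1].
    The slope of edge column vs. row number is tan(beta*)."""
    rows, cols = [], []
    for y, line in enumerate(image):
        idx = np.nonzero(line < threshold)[0]  # dark side of the edge
        if idx.size:
            x = float(idx[0])
            if idx[0] > 0:
                # linear interpolation across the intensity step
                i = idx[0]
                x = i - 1 + (line[i - 1] - threshold) / (line[i - 1] - line[i])
            rows.append(y)
            cols.append(x)
    slope = np.polyfit(rows, cols, 1)[0]       # d(column)/d(row)
    return np.arctan(slope)                    # beta*, in radians
```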
  • Fig 6A is a simplified flow chart illustrating operation of the Determination of Relative Orientation of Sensors by Correlating Images of Test Pattern Targets at Edges of the Field of View of Sensors circuitry 206 of Fig 3, and Fig 6B illustrates the distortion sought to be overcome by the functionality of circuitry 206. As seen in Fig 6B, rather than being aligned in a single row, the fields of view of the plurality of sensors may be mutually offset
  • Fig 7A is a simplified flow chart illustrating operation of the Determination of X Overlap and Y Offset of Sensors by Correlation of Adjacent Images circuitry 208 of Fig 3, and Fig 7B illustrates the distortion sought to be overcome by the functionality of circuitry 208
  • the fields of view 162 of various cameras 124 are seen, in an exaggerated view, to be mutually skewed and shifted in what is referred to herein as the Y direction, being the same as direction 104, and partially overlapping in what is referred to herein as the X direction, being perpendicular to direction 104
  • the functionality described here with reference to Figs 7A - 7C deals with the problem of offset of the fields of view of the cameras 124
  • An overlap in the fields of view of two adjacent cameras is shown at reference numeral 348
  • test pattern target 108, preferably comprising row 300 of LORs 136 as in Fig 6C, is viewed by multiple cameras 124. Each camera 124 acquires an image of part of row 300 which includes the same LORs
  • the X overlap and Y offset may be determined by using an image region 253 that is acquired by two adjacent cameras 124
  • Two enlarged images of one image region 253 of a LOR 136, as seen by two adjacent cameras 124, are shown in Fig 7C at reference numeral 354
  • the two enlarged images of LOR 136 are shown in mutually offset, overlapping relationship, such that the LORs seen in both images are in precise overlapping registration
  • the offsets, being expressed in pixel units ΔxOV and ΔyOV, may then be converted to metric units by employing the pixel size and shape function as described hereinabove with reference to Figs 4A - 4C
  • An overlap in the X direction, OVx, may then be calculated as OVx = w - ΔxOV, where w is the metric width of each image; the Y offset ΔyOV may likewise be converted to metric units by multiplying by the pixel size
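The correlation of the two camera views of region 253 can be illustrated with a standard FFT-based cross-correlation; the patent does not specify the correlation method, so this is an assumed implementation, and the sign conventions of the returned offsets are arbitrary.

```python
# Hypothetical sketch: finding the X overlap and Y offset of two adjacent
# cameras by correlating their images of the same LOR target region 253.
import numpy as np

def xy_offset(img1: np.ndarray, img2: np.ndarray):
    """Return (dx, dy) in pixel units at which img2 best matches img1,
    via the peak of an FFT-based cross-correlation.
    Both images are assumed to have the same shape."""
    f1 = np.fft.fft2(img1 - img1.mean())
    f2 = np.fft.fft2(img2 - img2.mean())
    corr = np.fft.ifft2(f1 * np.conj(f2)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap large circular shifts back to signed offsets
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dx, dy

# The metric overlap then follows the relation in the text:
#   OVx = w - dx_ov_metric, with w the metric width of each image.
```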
  • Fig 8A is a simplified flow chart illustrating operation of the Determination of X and Y Offsets for Multiple Colors circuitry 210 of Fig 3, and Fig 8B illustrates a distortion sought to be overcome by the functionality of circuitry 210. As seen in Fig 8B, a three-color CCD camera 380 is shown
  • Camera 380 typically includes a multi-pixel line sensor 382 comprising three line sensors such as 384, 386, and 388, with the line sensors being arranged in parallel and each comprising a plurality of single-color sensing diodes 390 arranged linearly
  • the diodes 390 of the multi-pixel line sensor 382 may be logically arranged into groupings of three diodes, one from each of line sensors 384, 386, and 388, with each diode sensing a different color. Three such groupings 392, 394, and 396 are shown, at the center and both edges of camera 380
  • Camera 380 is shown viewing elements 398 of a target 400 moving in direction 104
  • the image acquired by the multi-pixel line sensor 382 is typically stored in three buffers 402, 404, and 406, with each buffer corresponding to a particular color, such as red, green, and blue respectively
  • a combined view of buffers 402, 404, and 406 is shown at reference numeral 408
  • Combined buffer 408 shows the acquired images 399, 401, and 403 of elements 398
  • the images 399, 401, and 403 of combined buffer 408 demonstrate the pixel size and shape distortion in the X direction due to chromatic aberration, as shown at 410, as well as the Y direction displacement, as shown at 412, due to the physical separation of the R, G and B line sensors
  • test pattern target 108, preferably comprising row 300 of LORs 136 as seen in Figs 6C and 7C, is viewed by multiple cameras 124. Each camera 124 preferably acquires a multicolored image at either edge of the camera, with one edge of camera 124 acquiring an image of one LOR 136, and the other edge acquiring an image of an adjacent LOR 136
  • Each color of each multicolored image acquired at each edge of one of the cameras 124 is preferably treated separately
  • a single color, such as red, may be selected as a reference color to which the other two color components of the multicolored image are compared
  • the red and green components of image region 253 at one edge of the field of view of CAM 1 are shown enlarged at reference numeral 360
  • the two enlarged images of LOR 136 are shown in mutually offset, overlapping relationship, such that the LORs seen in both images are in precise overlapping registration. It is noted that due to this overlapping relationship between the images, a y offset, ΔyCOL, and an x offset, ΔxCOL, may be determined between the color components
  • Figs 9A and 9B, taken together, are simplified flow charts illustrating operation of the geometric polynomial generator 212 of Fig 3
  • a cubic function may be constructed to determine the positions of the diodes of cameras 124 in space using the outputs of 202 - 210 described hereinabove with reference to Figs 4A - 8C
  • As seen in Fig 9A, the X overlap, OVx, as determined in 208 (Fig 3), is used to find a0[n], first for one color, such as red, of each camera n in a row of cameras, such as cameras designated CAM 1, CAM 2, and CAM 3 in Fig 7C
  • the X- polynomials may then be derived for the other colors, such as blue and green, based on the X-polynomial for the first color as follows
  • ND is the number of diodes in the preceding camera n-1
  • Sn-1(ND) is the value of the pixel size and shape function output determined in 202 as evaluated for the last pixel of camera n-1
  • w is the width of the image in metric units containing the LOR
  • OVx is the measured overlap in the X direction as determined in 208
  • the Y-polynomial is determined initially for one color, such as red, of each camera n in a row of cameras, such as cameras designated CAM 1, CAM 2, and CAM 3
  • the Y-polynomials may then be derived for the other colors, such as blue and green, based on the Y-polynomial for the first color
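The text names the ingredients of the chaining step (ND, Sn-1(ND), w, and OVx) without reproducing the formula in full. The sketch below shows one plausible reading, in which the first diode of camera n is anchored just past the last diode of camera n-1, pulled back by the measured metric overlap; the exact expression is an assumption, as are the function names.

```python
# Hypothetical sketch of the chaining step of Fig 9A: the constant term
# a0[n] of camera n's X polynomial is anchored to the end of camera n-1's
# field of view, less the measured metric overlap OVx.

def px(coeffs, d: float) -> float:
    """Evaluate a cubic mapping polynomial P(d) = a0 + a1*d + a2*d^2 + a3*d^3."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * d + a2 * d**2 + a3 * d**3

def chain_a0(prev_coeffs, ND: int, S_prev_ND: float, OVx: float) -> float:
    """a0 for camera n, from camera n-1's polynomial evaluated at its last
    diode ND, its pixel size there S_{n-1}(ND), and the overlap OVx.
    The combination below is a plausible reading, not the patent's formula."""
    return px(prev_coeffs, ND) + S_prev_ND - OVx
```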
  • Fig 10A is a simplified illustration of multiple cameras 500 and 502 acquiring an image of a target 504 under ideal conditions
  • Fig 10B is a simplified illustration of image buffers for storing the acquired image of target 504
  • Cameras 500 and 502 are typically in fixed positions, each having a static field-of-view, and are arranged such that target 504 passes through the fields-of-view of cameras 500 and 502 in the direction of motion 104
  • Cameras 500 and 502 each acquire an image of target 504 one image line at a time by employing a multi-pixel line sensor as is described hereinabove
  • Each diode of the multi-pixel line sensor acquires a single-pixel image of a specific location on target 504, and the pixels acquired by each diode collectively form an image line
  • An image line portion 512 is shown comprising a plurality of pixels 514
  • the field-of-view of cameras 500 and 502 "moves" in the direction designated by arrows 508, and thus the image lines are acquired in the direction of 508 as well
  • camera 500 begins acquiring an image at time t0 to yield an image line 516 shown in dashed lines
  • Image line portion 512 is acquired at time index ts, shown intersecting a portion of a target element 518
  • Target 504 continues to move in the direction of arrow 506, and the acquired image lines approach t1 as is shown in dashed lines by an image line portion 520
  • As seen in Fig 10A, image line portion 512 associated with camera 500 is aligned in a single row with an image line portion 522 associated with camera 502 at time index t1, with image line portions 512 and 522 meeting at a boundary line 524
  • the image lines scanned by cameras 500 and 502 are typically stored in buffers, such as buffers 530 and 532 of Fig 10B. Since the scan lines of both buffers are aligned in a single row for each corresponding time index, buffers 530 and 532 may be combined to form a non-distorted composite image of target element 518, a portion of which is shown as 534
  • Figs 11A and 11B illustrate, in contrast to Figs 10A and 10B, the effect of cameras 500 and 502 acquiring an image of target 504 under less than ideal conditions, specifically when the fields of view of both cameras are mutually skewed and overlapping
  • Figs 11A and 11B are intentionally provided as an oversimplified illustration of some difficulties encountered, and are merely intended to review what was described in greater detail hereinabove and are not intended to supersede the descriptions of Figs 1 - 9B
  • the image lines acquired are shown not aligned in a single row for a given time index t1, such as is shown with reference to buffers 560 and 562 and image line portions 564 and 566 of Fig 11B
  • Combining buffers 560 and 562 would produce a distorted composite image 568 of target element 518, as is shown in Fig 11B
  • simply combining the buffer images would neither correct for image overlap, discussed hereinabove with reference to Figs 7A - 7C, nor for the viewing angle distortion, discussed hereinabove with reference to Figs 4A - 4C
  • a FIFO buffer, such as a FIFO buffer 600 of Fig 12, may be defined by defining a window having a height expressed in a fixed number of image buffer rows, such as 40, beginning with the first row of pixels acquired. This window is then typically advanced to a new position one row at a time as each new row of pixels is acquired. Alternatively, an image buffer may initially be filled with rows of pixels, at which point the FIFO buffer window may be defined as a subset of the image buffer rows and advanced along the image buffer in the manner just described. Before the X and Y polynomials determined with reference to Figs 9A and 9B can be used for correcting the image, they may be translated into another type of polynomial referred to herein as a "diode compensating polynomial". This polynomial maps the relationship between a diode and a pixel position of a corrected image constructed using a pixel size chosen by the user. The diode compensating polynomials Qx(d) and Qy(d) may be derived from the X and Y polynomials Px(d) and Py(d) as follows:
  • Qx(d) = Px(d)/p and Qy(d) = Py(d)/p
  • where p is the pixel size chosen by the user and is expressed in the same metric units as Px(d) and Py(d)
  • the pixel size p chosen by the user must be a multiple of the minimum measurable distance of travel in the scan direction 104, typically one pulse unit of a drum encoder
  • the diode compensating polynomial Q may additionally be adjusted for the shift introduced by different color component accumulation times as is described in greater detail hereinbelow with reference to Fig 17
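A minimal sketch of the derivation just described, assuming polynomial coefficients are held as numpy arrays and that the drum-encoder pulse size is known; dividing the coefficient vector by p is equivalent to dividing the polynomial's value, since Q(d) = P(d)/p.

```python
# Minimal sketch, under stated assumptions: converting the metric X and Y
# polynomials Px, Py into diode compensating polynomials Qx = Px/p and
# Qy = Py/p, after snapping the user-chosen pixel size p to a whole number
# of drum-encoder pulse units.
import numpy as np

def compensating_poly(P: np.ndarray, p_user: float, pulse: float):
    """P holds polynomial coefficients in metric units (numpy order,
    highest degree first); returns Q = P/p and the snapped pixel size."""
    p = max(pulse, round(p_user / pulse) * pulse)  # multiple of one pulse unit
    return P / p, p

# Usage (hypothetical coefficients): Q maps a diode index to a pixel
# position of the corrected image.
# Qx_coeffs, p = compensating_poly(Px_coeffs, p_user=0.015, pulse=0.005)
# pixel_position = np.polyval(Qx_coeffs, d)
```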
  • Each pixel or grid point in the FIFO buffer represents the sampling of a corresponding target acquired by a diode
  • a process of "resampling" is used whereby calculations are performed to determine a gray level g at an arbitrary position "between" grid points of the buffer grid
  • the four gray levels g may be selected from four contiguous grid points of the FIFO buffer. These four points are referred to herein as a "quartet"
  • Resampling may be performed in two stages corresponding to the X and Y directions described hereinabove
  • the stages are referred to herein as X-resampling and Y-resampling. Y-resampling is now described in greater detail with reference to Fig 12
  • Two FIFO buffers 600 and 602 are shown corresponding to two cameras
  • the set of all pixels 604 in the FIFO buffers scanned by a diode d may be referred to as diode d's "gray level column," such as a column 606
  • Virtual scan lines 608 and 610 are shown to indicate the correction angles needed for each buffer to compensate for the misalignment angles of each corresponding camera
  • a quartet 612 is shown as a group of four pixels within the gray level column 606 closest to the virtual scan line
  • the quartet index q(d) denotes the first grid point belonging to the quartet of pixels that lie within the diode's gray level column
  • the index q(d) for each quartet is preferably stored in a quartet look-up table in a position corresponding to the diode d
  • these four coefficients c1, c2, c3, and c4 are preferably encoded in such a manner that when decoded and summed, the summed value equals 1.0, although the accuracy of any single coefficient may be diminished
  • the encoded values are preferably stored in a coefficients look-up table in a position corresponding to d
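The patent fixes only that four coefficients are used and that they sum to 1.0; a Catmull-Rom cubic kernel is one common choice satisfying this, and is used in the sketch below purely for illustration.

```python
# Hypothetical sketch of the four-point convolution used in resampling,
# using a Catmull-Rom cubic kernel (an assumed choice, not the patent's).

def quartet_coefficients(t: float):
    """t in [0, 1) is the fractional position between the two middle grid
    points of the quartet; returns (c1, c2, c3, c4) with sum == 1."""
    c1 = 0.5 * (-t**3 + 2 * t**2 - t)
    c2 = 0.5 * (3 * t**3 - 5 * t**2 + 2)
    c3 = 0.5 * (-3 * t**3 + 4 * t**2 + t)
    c4 = 0.5 * (t**3 - t**2)
    return c1, c2, c3, c4

def resample(g1: float, g2: float, g3: float, g4: float, t: float) -> float:
    """Gray level g at an arbitrary position "between" grid points,
    as a weighted sum over the quartet."""
    c = quartet_coefficients(t)
    return c[0] * g1 + c[1] * g2 + c[2] * g3 + c[3] * g4
```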
  • a "blending region" 626 of a predefined number of pixels B is defined within the overlap region 624 between cameras 1 and 2 where corresponding pixels from both cameras within the blending region are blended to yield a single pixel value which is then used to form the combined image 630
  • the blending region preferably begins after allowing for a pixel margin 628 of a predefined number of pixels M, typically 20 pixels, in order to accommodate lower-quality output often encountered at the ends of a diode array
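Assuming the blending is a linear cross-fade (the patent does not name the blending function), the overlap handling can be sketched as follows; B and the 20-pixel margin M come from the text, everything else is illustrative.

```python
# Minimal sketch, assuming linear blending across the B-pixel blending
# region 626; rows are assumed already trimmed past the M-pixel margin 628.
import numpy as np

def blend_rows(row_cam1: np.ndarray, row_cam2: np.ndarray, B: int) -> np.ndarray:
    """row_cam1 ends, and row_cam2 begins, with the same B blending pixels;
    the weights ramp so each blended pixel's two weights sum to 1."""
    w = np.linspace(0.0, 1.0, B)  # 0 -> all camera 1, 1 -> all camera 2
    blended = (1 - w) * row_cam1[-B:] + w * row_cam2[:B]
    return np.concatenate([row_cam1[:-B], blended, row_cam2[B:]])
```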
  • the two diode resampling polynomials Q(1)(d) and Q(2)(d) of the two adjacent cameras may be used to determine the amount of overlap between the cameras
  • the following steps may be performed: 1. Defining Q(2)(1) to refer to the pixel position r of the first pixel of the X- and Y-resampled output of camera 2 (X-resampling is described in greater detail hereinbelow with reference to Fig 15)
  • the leading edge of the blending region may be determined by adding the predetermined number of pixels defined for the pixel margin described above
  • X-resampling is now described in greater detail with reference to Fig 15. Due to optical distortions, and in order to accommodate a user-defined pixel size, an output 640 of the Y-resampling must be resampled in the X direction, thereby creating an X-corrected image row 642 with pixels having the desired pixel size
  • Each pixel position r on the X-corrected image row corresponds to a position d(r) on the diode array. d(r) may be calculated using the diode resampling polynomial Qx(d); this involves finding the inverse function Qx⁻¹(r) of Qx(d). This inverse function allows the mapping of the pixel position r on the X-corrected image row to a corresponding position d(r) on the diode array. It is appreciated that this position might not correspond to an integer position of a specific diode, but rather may be expressed in fractional terms along the diode array. Once the diode position d(r) is found, the gray level at r may be computed by convolution over the quartet surrounding d(r)
  • Fig 16 illustrates the steps to be performed for each pixel in the X-corrected image row as follows: 1. Assigning an index rp to correspond to the first pixel 650 in the X-corrected image row 652
  • 2. Stepping index rp through each pixel position in the X-corrected image row 652 corresponding to an overlap region 654, up to the end of the field of view 656 of the current camera, CAM 1, and finding the diode position dp(1) such that rp(1) = Qx(dp(1)). Since Qx is a monotonically increasing function, dp advances as rp advances. When dp reaches the end 656 of the field of view of CAM 1, rp is returned to the pixel position corresponding to the start of the overlap region 658 of the next camera, CAM 2, assigning to rp the value of the diode compensating polynomial evaluated for the first diode of camera CAM 2, i.e. rp ← Q(2)(1). 3. Stepping index rp through each pixel position in the X-corrected image row 652 for CAM 2, finding the diode position dp(2) such that rp(2) = Qx(dp(2))
  • Steps 2 and 3 are performed for each subsequent pair of cameras
  • Finding the diode position d p may either be done through a one-time inversion of the function Qx(d p ) or through a numerical solution
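Since Qx is monotonically increasing, its inverse can be evaluated numerically; the bisection below is one such numerical solution (the one-time inversion mentioned above would precompute a table instead). The tolerance and bracketing interval are assumptions.

```python
# Hypothetical sketch: d(r) = Qx^-1(r) by bisection over the diode range.
# Assumes qx is monotonically increasing and qx(d_lo) <= r <= qx(d_hi).

def invert_qx(qx, r: float, d_lo: float, d_hi: float, tol: float = 1e-6) -> float:
    """Find the (generally fractional) diode position d with qx(d) = r."""
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if qx(mid) < r:
            d_lo = mid
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)
```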
  • the convolution coefficients c1 through c4 may be calculated based on the fractional part of dp, using the formulae described above for Y-resampling
  • Image correction employing Y-resampling, X-resampling, and overlap correction is preferably performed by circuitry 110 of Fig 1, as summarized hereinbelow
  • the Y-Resampled output is processed further to correct pixel shape and size
  • the gray level is processed one pixel row at a time
  • the X-quartet index is retrieved from the X-quartet look-up table and the four convolution coefficients c1 through c4 are retrieved from the X-coefficients look-up table
  • the four gray level values g1 through g4 of the corresponding X-quartet may then be retrieved from the Y-corrected gray level buffer
  • each color component of the multi-line sensor array begins to accumulate the charge corresponding to its respective color
  • the exposure of each color component of the multi-line sensor array is then varied by closing the electronic shutters of each color component at different times
  • the center of the acquired pixel for each color component may be different than the geometric center of the pixel
  • an "accumulation shift" ⁇ y aCL is introduced that may be corrected by subtracting the center of the acquired pixel from the geometric center of the pixel for each color component by the formula
  • This accumulation shift is preferably determined during acquisition of the test target, and is used to adjust the b0 component of the Y-polynomial Py
  • the diode compensating polynomial Q> described in Fig 9B may also be adjusted for the accumulation shift according to the different exposure times chosen for the various color components
  • a multi-line sensor array 700 is shown acquiring three pixels 702, 704, and 706, each pixel being acquired by a different line sensor 708, 710, and 712, with each line sensor comprising a plurality of single-color sensing diodes. Due to the different acquisition times of each line sensor, the relative areas of each of the three pixels acquired vary, as is shown by accumulation areas 714, 716, and 718
  • a geometric center may be defined for each of the three pixels at 720, 722, and 724
  • the center of each accumulation area may be defined at 726, 728, and 730
  • the distances 732, 734, and 736 between the centers of each accumulation area and its corresponding geometric center represent the accumulation shift for each color component and may be used to correct for the overlap in the Y direction 104 as described above. It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment
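As a worked illustration of the accumulation shift, assume charge accumulation for a color starts at the pixel's leading edge and the electronic shutter closes after some fraction of the line period; the shift is then the difference between the geometric pixel center and the center of the accumulated interval. The linear geometry and the numbers below are our assumptions, not values from the patent.

```python
# Minimal sketch, assuming shutter timing is known per color: the
# accumulation shift is the distance between the geometric center of the
# pixel and the center of the interval actually integrated by that color.

def accumulation_shift(pixel_len: float, exposure_frac: float) -> float:
    """pixel_len: pixel length along direction 104, in metric units;
    exposure_frac: fraction of the line period the shutter stays open.
    Accumulation is assumed to start at the pixel's leading edge, so the
    accumulated area is centered at exposure_frac * pixel_len / 2."""
    geometric_center = pixel_len / 2.0
    accumulation_center = exposure_frac * pixel_len / 2.0
    return geometric_center - accumulation_center  # dy_acc, applied to b0

# e.g. a color exposed for 80% of the line period on 0.015 mm pixels:
# accumulation_shift(0.015, 0.8) -> 0.0015 mm adjustment to Py's b0 term
```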

Abstract

An image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern (108), which is sensed by the plurality of sensors (100), for providing an output indication in two dimensions of distortions in the output of the plurality of sensors, the output indication being employed to generate a function which maps the locations viewed by the sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the output indication.

Description

METHOD AND APPARATUS FOR INSPECTION OF PRINTED CIRCUIT
BOARDS
FIELD OF THE INVENTION

The present invention relates to article inspection systems and methods generally, and more particularly to systems and methods for inspecting generally two dimensional articles, such as printed circuit boards
BACKGROUND OF THE INVENTION

There are known in the patent and professional literature various article inspection systems and methods. One well known problem with existing article inspection systems is that they encounter great difficulty strictly arranging multiple line image sensors such that they collectively acquire an image of a single line of an article being inspected. Other problems relate to the stretching of pixels due to the angle of the camera, the difficulty in combining overlapping images from multiple cameras, and the shifting of color components of an image due to factors such as the physical separation between diodes in a line sensor, chromatic aberrations, and varying intensities of different wavelengths of light
The following patent documents are believed to represent the current state of the art:
U.S. Patent Nos. RE33,956, 3,814,520, 3,956,698, 4,100,570, 4,152,723, 4,167,728, 4,185,298, 4,223,346, 4,269,515, 4,277,175, 4,277,802, 4,326,792, 4,347,001, 4,389,655, 4,421,410, 4,448,532, 4,449,818, 4,459,619, 4,465,939, 4,506,275, 4,532,650, 4,538,909, 4,556,317, 4,585,351, 4,589,140, 4,590,607, 4,594,599, 4,597,455, 4,618,938, 4,633,504, 4,635,289, 4,653,109, 4,675,745, 4,692,812, 4,701,859, 4,712,134, 4,751,377, 4,758,782, 4,758,888, 4,762,985, 4,771,468, 4,772,125, 4,783,826, 4,794,647, 4,799,175, 4,805,123, 4,811,410, 4,821,110, 4,870,505, 4,877,326, 4,878,736, 4,893,346, 4,894,790, 4,897,737, 4,897,795, 4,929,845, 4,930,889, 4,938,654, 4,958,307, 4,969,038, 4,969,198, 4,978,974, 4,979,029, 4,984,073, 4,989,082, 5,023,714, 5,023,917, 5,067,012, 5,067,162, 5,085,517, 5,091,974, 5,103,105, 5,103,257, 5,114,875, 5,119,190, 5,119,439, 5,125,040, 5,127,726, 5,128,753, 5,129,014, 5,131,755, 5,136,149, 5,144,132, 5,144,448, 5,150,422, 5,150,423, 5,161,202, 5,162,866, 5,162,867, 5,163,128, 5,170,062, 5,175,504, 5,181,068, 5,198,778, 5,204,918, 5,220,617, 5,245,421, 5,253,307, 5,258,706, 5,285,295, 5,303,064, 5,305,080, 5,331,397, 5,373,233, 5,379,350, 5,414,534, 5,444,478, 5,450,204, 5,483,359, 5,483,603, 5,495,535, 5,500,746, 5,539,444,
European patent documents EP 094,501 A2, EP 598,582 A2, EP 426,182 A2, EP 426,166 A2, EP 247,308 A2, EP 243,939 A2, EP 206,713 A2, EP 128,107 A1, EP 126,492 A2, EP 533,348 A2, EP 209,422 A2, EP 536,918 A2, EP 92306649.2, and
British patent documents GB 2,201,804 A and GB 2,124,362 A. The following patent documents are believed to be most relevant: U.S. Patent Nos. 4,459,619, 4,465,939, 4,675,745, 4,692,812, 4,821,110, 4,870,505, 5,144,132, 5,144,448, 5,438,359, and 5,500,746
SUMMARY OF THE INVENTION

The present invention seeks to provide an improved system and method for article inspection which are characterized by substantially enhanced accuracy
There is thus provided in accordance with a preferred embodiment of the present invention an image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication in two dimensions of distortions in the output of the plurality of sensors, the output indication being employed to generate a function which maps the locations viewed by the sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the output indication. Further in accordance with a preferred embodiment of the present invention the plurality of sensors include plural sensors having different spectral sensitivities
Still further in accordance with a preferred embodiment of the present invention the plurality of sensors include at least two sensors having generally the same spectral sensitivity
Additionally in accordance with a preferred embodiment of the present invention the plurality of sensors include at least two sensors which at least partially overlap in at least one dimension
Moreover in accordance with a preferred embodiment of the present invention the pre-scan calibration subsystem is operative to sub-pixel accuracy
Further in accordance with a preferred embodiment of the present invention the distortion correction subsystem performs non-zero'th order interpolation of pixels in the outputs of the plurality of sensors
Still further in accordance with a preferred embodiment of the present invention the distortion correction subsystem compensates for variations in pixel size in the plurality of sensors
Additionally in accordance with a preferred embodiment of the present invention the distortion correction subsystem compensates for variations in magnification in the plurality of sensors. Further in accordance with a preferred embodiment of the present invention the distortion correction subsystem compensates for chromatic aberrations in the plurality of sensors
Moreover in accordance with a preferred embodiment of the present invention the plurality of sensors include sensors having differing spectral sensitivities and the function is dependent on differing accumulation times employed for the sensors having differing spectral sensitivities. Further in accordance with a preferred embodiment of the present invention the distortion correction subsystem compensates for variations in pixel shape in the plurality of sensors
Still further in accordance with a preferred embodiment of the present invention the distortion correction subsystem is operative to an accuracy of better than 5% of pixel size of the multiplicity of sensor elements
There is additionally provided in accordance with a preferred embodiment of the present invention an image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication of distortions in the output of the plurality of sensors, the output indication being employed to generate a correction function, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the correction function, the distortion correction subsystem being operative to an accuracy of better than 5% of pixel size of the multiplicity of sensor elements
There is also provided in accordance with a preferred embodiment of the present invention an image acquisition system including a plurality of sensors, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors while being moved relative to the plurality of sensors in a direction of relative movement, for providing an output indication of distortions in the output of the plurality of sensors, the pre-scan calibration system being operative to correlate images of at least one target on the test pattern as seen by the plurality of sensors, thereby to determine the relative orientation of the plurality of sensors, and a distortion correction subsystem operative to correct the distortions by employing the output indication
Further in accordance with a preferred embodiment of the present invention the pre-scan calibration subsystem also is operative to provide an output indication of the orientation of the plurality of sensors relative to the scan direction. Still further in accordance with a preferred embodiment of the present invention each of the plurality of sensors includes a multiplicity of sensor elements, and the pre-scan calibration subsystem also is operative to determine the pixel size characteristic of each of the multiplicity of sensor elements of each of the plurality of sensors
Additionally in accordance with a preferred embodiment of the present invention the pre-scan calibration subsystem is operative to determine the pixel size characteristic of each of the multiplicity of sensor elements of each of the plurality of sensors by causing the plurality of sensors to view a grid formed of a multiplicity of parallel uniformly spaced lines, formed on the test pattern
There is additionally provided in accordance with a preferred embodiment of the present invention an image acquisition system including a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication in two dimensions of distortions in the output of the plurality of sensors, the output indication being employed to generate a function which maps the locations viewed by the sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by the plurality of sensors to correct the distortions by employing the output indication
Further in accordance with a preferred embodiment of the present invention the distortion correction subsystem is operative using a pixel size which is user selectable
There is additionally provided in accordance with a preferred embodiment of the present invention an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes a plurality of sensor assemblies, self calibration apparatus for determining a geometrical relationship between the sensor assemblies, and sensor output modification apparatus for modifying outputs of the plurality of sensor assemblies based on the geometrical relationship between the sensor assemblies, the sensor output modification apparatus including electronic interpolation apparatus operative to perform non-zero'th order interpolation of pixels in the outputs of the plurality of sensor assemblies
There is additionally provided in accordance with a preferred embodiment of the present invention an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes a plurality of sensor assemblies, self calibration apparatus for determining a geometrical relationship between the sensor assemblies, and sensor output modification apparatus for modifying outputs of the plurality of sensor assemblies based on the geometrical relationship between the sensor assemblies, the sensor output modification apparatus being operative to modify the outputs of the plurality of sensor assemblies to sub-pixel accuracy
There is additionally provided in accordance with a preferred embodiment of the present invention an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes at least one sensor assembly, and sensor output modification apparatus for modifying at least one output of the at least one sensor assembly based at least in part on an optical distortion associated with the at least one sensor assembly
Further in accordance with a preferred embodiment of the present invention the optical distortion includes pixel size distortion
Still further in accordance with a preferred embodiment of the present invention the optical distortion includes magnification distortion. Additionally in accordance with a preferred embodiment of the present invention the optical distortion includes chromatic aberration
Moreover in accordance with a preferred embodiment of the present invention the optical distortion includes overlap misadaptation. Further in accordance with a preferred embodiment of the present invention the optical distortion includes pixel shift due to sensor separation
Still further in accordance with a preferred embodiment of the present invention the optical distortion includes focus inconsistencies across color components. Additionally in accordance with a preferred embodiment of the present invention the optical distortion includes color accumulation shift
There is additionally provided in accordance with a preferred embodiment of the present invention an article inspection system including an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes at least one sensor assembly, sensor output modification apparatus for modifying at least one output of the at least one sensor assembly, the sensor output modification apparatus including a function generator which generates a function which maps locations on the sensor assembly to a collection of scan locations
There is additionally provided in accordance with a preferred embodiment of the present invention an article inspection system including a camera assembly operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from the image, and an output indication subsystem for providing an output indication of the presence of the at least one predetermined characteristic of the article, characterized in that the camera assembly includes a user interface which enables a user to select resolution of the image acquired by the camera assembly, an electro-optical sensor assembly, and an electronic resolution modifier operative downstream of the electro-optical sensor assembly
Further in accordance with a preferred embodiment of the present invention the camera assembly is operative in response to resolution selection at the user interface to determine the pixel size of the image
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
Fig 1 is a simplified block diagram illustration of an article inspection system constructed and operative in accordance with a preferred embodiment of the present invention, Fig 2 is a simplified illustration of parts of a preferred test pattern, certain portions of which are not drawn to scale, along with a simplified indication of the fields of view of individual line sensors viewing the test pattern,
Fig 3 is a simplified block diagram illustration of mapping function generator circuitry forming part of the system of Fig 1 , Fig 4A is a simplified flow chart illustrating operation of pixel size and shape determination functionality of the mapping function generator circuitry of
Fig 4B is a simplified illustration of geometrical distortion addressed by the functionality of Fig 4A, Fig 4C is a simplified semi-pictorial, semi-graphical illustration of the functionality described in the flowchart of Fig 4A,
Fig 5A is a simplified flow chart illustrating operation of test pattern angle determination functionality of the mapping function generator circuitry of Fig
3, Fig 5B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 5 A, Fig 5C is a simplified semi-pictorial, semi-graphical illustration of the functionality described in the flowchart of Fig 5 A,
Fig 6A is a simplified flow chart illustrating operation of Determination of Relative Orientation of Sensors by Correlating Images of Test Pattern Targets at Edges of the Field of View of Sensors functionality of the mapping function generator circuitry of Fig 3,
Fig 6B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 6 A,
Fig 6C is a simplified semi-pictoπal, semi-graphical illustration of the functionality described in the flowchart of Fig 6A,
Fig 7A is a simplified flow chart illustrating operation of Determination of X Overlap and Y Offset of Sensors by Correlation of Adjacent Images functionality of the mapping function generator circuitry of Fig 3,
Fig 7B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 7 A,
Fig 7C is a simplified illustration of the functionality described in the flowchart of Fig 7 A,
Fig 8A is a simplified flow chart illustrating operation of Determination of X and Y Offsets for Multiple Colors functionality of the mapping function generator circuitry of Fig 3,
Fig 8B is a simplified illustration of the geometrical distortion addressed by the functionality of Fig 8 A,
Fig 8C is a simplified illustration of the functionality described in the flowchart of Fig 8 A, Figs 9A and 9B, taken together, are simplified flowchart illustrations of a preferred method of implementing mapping function generator circuitry forming part of the system of Fig 1 ,
Fig 10A is a simplified illustration of acquisition of a target image by multiple cameras under ideal conditions, Fig 10B is a simplified illustration of image buffers for storing the acquired image of 10 A, Fig 1 1A is a simplified illustration of multiple cameras acquiring an image of a target where the camera fields of view are mutually skewed and overlapped,
Fig 1 IB is a simplified illustration of image buffers for storing the acquired image of 1 1 A,
Fig 12 is a simplified illustration useful in understanding Y- resamphng functionality of the image correction circuitry 110 of Fig 1 ,
Figs 13 and 14, taken together, are simplified illustrations useful in understanding overlap correction functionality of the image correction circuitry 1 10
Figs 15 and 16, taken together, are simplified illustration useful in understanding X-resamphng functionality of the image correction circuitry 1 10 of
Fig 17 is a simplified illustration useful in understanding aspects of accumulation shift correction functionality of the image correction circuitry 1 10 of
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT

Reference is now made to Fig 1, which is a simplified block diagram illustration of an inspection system constructed and operative in accordance with a preferred embodiment of the present invention. The inspection system of Fig 1 comprises a sensor array 100, typically comprising multiple multi-pixel line sensors oriented such that their fields of view are in partially overlapping, mutually skewed arrangement and, therefore, require correction. The multi-pixel line sensors are typically housed within a camera, such as a CCD camera, having electronic shutters.
A conveyor 102 is arranged to transport an article to be inspected past the sensor array 100 in a transport direction indicated by an arrow 104. Alternatively, sensor array 100 could be moved to provide scanning of the entire article to be inspected.
A mapping function generator 106 is arranged to receive outputs from the sensor array 100 when a test pattern 108 is being inspected by the sensor array. The mapping function generator 106 provides a correction output to correction circuitry 110, which employs the mapping function generated by mapping function generator 106. Circuitry 110 receives outputs from the sensor array 100 when an article to be inspected, such as a printed circuit board 111, is being inspected by the sensor array 100.
It is noted that normally, at the beginning of an inspection operation series, test pattern 108 is inspected by the sensor array 100, whereupon mapping function generator 106 generates the information required by circuitry 110 to correct the sensor array output. Multiple articles to be inspected may then be inspected and thereafter, intermittently during the inspection of such articles, test pattern 108 may again be inspected by the sensor array 100 to provide updated calibration. Typically, test pattern 108 is inspected about once per month of continuing operation of the inspection system.
Image correction circuitry 110 is operative to employ the correction input received from mapping function generator 106 to correct the outputs received from sensor array 100 and provides a corrected sensor array output to segmentation circuitry 112. Segmentation circuitry 112 provides a segmentation output indication assigning every area on the image of the article represented by the corrected sensor array output to one of typically two categories. For example, in the case of inspection of printed circuit boards, every location on the image represented by the corrected sensor array output is identified by the segmentation output indication as being either laminate or copper.
The segmentation output indication from segmentation circuitry 112 is supplied to image processing circuitry 114. Image processing circuitry 114 is preferably a morphology-based system, but may alternatively be based on a bit map, a net list, or any other suitable input. Circuitry 114 provides an image processing output which identifies various features of the image represented by the corrected sensor array output and the locations of such features. In the case of printed circuit boards, the features are typically pads, conductor junctions, open ends, and other printed circuit board elements. The image processing output of circuitry 114 is supplied to feature registration circuitry 116, which maps the coordinate system of the image processing output of circuitry 114 onto a feature reference coordinate system, in accordance with information supplied by a reference input source 118. The output of registration circuitry 116 and an output of reference input source 118 are supplied to feature comparison circuitry 120, which compares the mapped image processing output of circuitry 114 with a reference stored in source 118 and provides a defect indication which is supplied to a defect indication output generator 122.
Reference is now made to Fig 2, which is a simplified illustration of parts of a preferred test pattern 108, certain portions of which are not drawn to scale, along with a simplified indication of the fields of view of individual line sensors 124 in sensor array 100 (Fig 1) viewing test pattern 108. Test pattern 108 typically comprises a row 130 of parallel uniformly spaced inclined lines 132, an angular orientation determinator 134 having an inclined edge 182, and an array of LORs 136, preferably positioned at the edges of the field of view of each camera 124. The term "LOR" stands for Lots of Rectangles and is used to designate a multiplicity of differently sized rectangles as shown in enlarged form at reference numeral 138. LORs 136 are employed to facilitate the relative positioning of images of an object as seen by sensor array 100 (Fig 1).
The uniformly spaced parallel inclined lines 132 are preferably angled at a small angle ψ having a tangent of about 0.05 with respect to the transport direction 104. The angular orientation determinator 134 is preferably nearly rectangular in shape; however, one of the edges of the angular orientation determinator is preferably angled at a small angle β having a tangent of about 0.0156 with respect to the transport direction 104.
Reference is now made to Fig 3, which is a simplified block diagram illustration of the mapping function generator 106 of Fig 1. Outputs 200, representing images of targets on test pattern 108 (Figs 1 and 2) which is being inspected by the sensor array 100 (Fig 1), are supplied to the mapping function generator 106, which carries out the following functions:
Pixel Size and Shape Function Determination for a Single Color 202,
Determination of Test Pattern Angle 204,
Determination of Relative Orientation of Sensors by Correlating Images of Test Pattern Targets, preferably LORs 136 (Fig 2), at Edges of the Field of View of Sensors 206,
Determination of X Overlap and Y Offset of Sensors by Correlation of Adjacent Images 208, and
Determination of X and Y Offsets for Multiple Colors 210.
The parameters thus determined are supplied to a geometric polynomial generator 212, which preferably provides a function which maps the locations viewed by individual elements of sensor array 100 (Fig 1) in at least two dimensions. The output of geometric polynomial generator 212 is provided to image correction circuitry 110.
Reference is now made to Fig 4A, which is a simplified flow chart illustrating operation of the Pixel Size and Shape Function Determination circuitry 202 of Fig 3, and to Fig 4B, which illustrates a distortion sought to be overcome by the functionality of circuitry 202.
As seen in Fig 4B, the apparent dimensions of identical features as seen by camera 124 may vary depending on the position of the feature in the field of view of the camera for a given field of view angle φ. Thus, an "on-axis" feature 140, which is directly in front of the camera and in the illustration spans the field of view of camera 124, is perceived to have a width of "d" pixels, while an identical feature 142, when located at the edge of the field of view, does not span the field of view of camera 124 and is perceived to have a width of "d − Δ" pixels.
Considering now Fig 4A with further reference to Fig 4C, it is seen that row 130 of parallel uniformly spaced inclined lines 132 of test pattern target 108 (Fig 2) is viewed by a plurality of cameras 124, of which only one is shown in Fig 4C. Each camera 124 acquires an image of a part of row 130, an enlarged part of which is shown at reference numeral 150. For the purpose of calculating the pixel size and shape function, the image is preferably acquired in a single color, such as red. Alternatively, the image may be acquired in several colors, one of which may be used for the purpose of calculating the pixel size and shape. For each image, the angle ψ, at which images of the lines 132 are inclined with respect to the transport direction 104, is measured. The images of the lines are indicated by reference numerals 152. The separation between each pair of adjacent lines 132 in row 130 is fixed and predetermined, and the location of each line 132 in row 130 along an axis 105 that is perpendicular to transport direction 104 is known with respect to an arbitrarily chosen one of lines 132. The location of a typical line is designated by reference numeral 154.
A graph 160 represents the summation of the outputs of the cameras 124 as they scan row 130 in a direction angled by angle ψ with respect to direction 105, i.e. the image is projected onto the x-axis at angle ψ. It can be seen that each line 152 produces a local minimum in graph 160. The separation of adjacent lines 152 in the camera output may be determined by measuring the distance between adjacent points of inflection before each local minimum; 154' indicates the point of inflection corresponding to line location 154. The X-axis of graph 160 represents the number of each individual diode in a linear array of diodes in camera 124, while the Y-axis in graph 160 represents the summation of the intensity L of the image scanned in a direction angled by angle ψ with respect to direction 105. By utilizing the knowledge that lines 132 in the test pattern are uniformly spaced, the variations in the position of lines 152 can be mapped as a function of diode number to indicate the distortion present in the image output of the camera 124 and thus the correction which is required. This mapping is provided in a graph 170, which illustrates a least-squares fit to a cubic function representing the pixel size and shape function, where the Y-axis is labeled "POS", 154" represents the distance of the line position 154 from the arbitrarily chosen line 132, and the X-axis represents the diode number in camera 124. This function may be expressed as
Sn(d) = s1d + s2d² + s3d³
where n indicates the number of the camera, s1 to s3 are coefficients that are to be determined, and d is the diode number corresponding to an enumeration of the diodes of each camera n.
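By way of illustration only, the least-squares fit underlying graph 170 may be sketched as follows. The sketch is in Python, is not part of the disclosed apparatus, and all names in it (such as fit_size_and_shape_function) are hypothetical; it assumes that the positions of lines 152 have already been detected and expressed as diode numbers.

```python
import numpy as np

def fit_size_and_shape_function(line_diode_positions, line_spacing):
    """Fit Sn(d) = s1*d + s2*d**2 + s3*d**3 for one camera n.

    line_diode_positions : diode numbers at which the images 152 of the
        uniformly spaced lines 132 were detected in graph 160.
    line_spacing : known metric spacing between adjacent lines 132.
    Returns the coefficients (s1, s2, s3).
    """
    d = np.asarray(line_diode_positions, dtype=float)
    # Metric position of each line relative to the arbitrarily chosen
    # reference line: 0, spacing, 2*spacing, ...
    pos = line_spacing * np.arange(len(d))
    # Least-squares fit of a cubic with zero constant term.
    A = np.column_stack([d, d**2, d**3])
    (s1, s2, s3), *_ = np.linalg.lstsq(A, pos, rcond=None)
    return s1, s2, s3
```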
Reference is now made to Fig 5A, which is a simplified flow chart illustrating operation of the Test Pattern Angle Determination circuitry 204 of Fig 3, and to Fig 5B, which illustrates the distortion sought to be overcome by the functionality of circuitry 204. As seen in Fig 5B, the entire test pattern 108 is normally, in reality, not perfectly aligned with the transport direction 104, but rather is offset therefrom by an angle α.
Considering now Fig 5A with further reference to Fig 5C, it is seen that test pattern target 108, comprising angular orientation determinator 134 having edge 182 of known aspect angle β relative to additional objects in the test pattern, is viewed by camera 124. The camera 124 acquires an image of angular orientation determinator 134, an enlarged portion of which is shown at reference numeral 184 in Fig 5C.
An aspect angle β* of an inclined edge 186 of the image of the determinator 134 is calculated by conventional techniques based on measurements carried out on each raster line of the image. The deviation of this calculated aspect angle β* from the angle β represents the value of the angle α and is expressed as β* − β = α. The thus-determined deviation is employed for correction in circuitry 110 (Fig 1).
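A correspondingly minimal sketch of the angle determination follows, again with hypothetical names and under the assumption that the x position of edge 186 has been measured on each raster line:

```python
import numpy as np

def test_pattern_angle(edge_x_per_row, row_pitch, pixel_pitch, tan_beta=0.0156):
    """Estimate alpha = beta* - beta from the image of edge 186.

    edge_x_per_row : measured x position (pixels) of the edge on each
        raster line of image portion 184.
    row_pitch, pixel_pitch : metric extent of one raster line / one pixel.
    tan_beta : known tangent of the manufactured edge angle beta.
    """
    rows = np.arange(len(edge_x_per_row), dtype=float)
    # Slope of the imaged edge, converted to metric units -> tan(beta*).
    slope = np.polyfit(rows, edge_x_per_row, 1)[0]
    tan_beta_star = slope * pixel_pitch / row_pitch
    return np.arctan(tan_beta_star) - np.arctan(tan_beta)
```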
Reference is now made to Fig 6A, which is a simplified flow chart illustrating operation of the Determination of Relative Orientation of Sensors by Correlating Images of Test Pattern Targets at Edges of the Field of View of Sensors circuitry 206 of Fig 3, and to Fig 6B, which illustrates the distortion sought to be overcome by the functionality of circuitry 206.
As seen in Fig 6B, rather than being aligned in a single row or "collinearly", the fields of view 162 of various cameras 124 are seen, in an exaggerated view, to be mutually skewed and partially overlapping. The skews are shown as angles θ1 through θ3 between the axis of the field of view of each camera 124 and an axis 308 perpendicular to the direction of motion 104. The calculation of θ1 through θ3 is described in greater detail hereinbelow with reference to Figs 9A and 9B. The functionality described here with reference to Figs 6A - 6C deals with the distortion of relatively skewed fields of view of the cameras, while the functionality described hereinbelow with reference to Figs 7A - 7C and circuitry 208 (Fig 3) deals with the problem of partial X overlap and Y offset as determined by the direction of motion 104.
Considering now Fig 6A with further reference to Fig 6C, it is seen that a test pattern target, preferably comprising a row 300 of LORs 136 of known angular orientation relative to edge 182 of angle determinator 134, is viewed by a plurality of cameras 124. The LORs 136 of row 300 are preferably collinear, and row 300 is preferably parallel to the front edge of the target 108. Each camera 124 acquires an image of part of row 300. Preferably the LORs are seen at the edges of the respective fields of view of each camera 124.
Enlargements of two image regions 253, each shown in dashed lines and comprising a different LOR 136, as viewed by one of the cameras 124, such as CAM 1, are shown in Fig 6C at reference numerals 304 and 306. It is noted that due to the angle of the field of view of CAM 1 there results an offset relationship between the images 304 and 306, including a y offset, ΔyANG, as seen in Fig 6C. This offset is determined and used to calculate the angle θ* between the axis of the field of view of CAM 1 and the row of LORs 300. θ* is determined by the relationship θ* = arctan(ΔyANG / Lx), where Lx is the longitudinal separation between adjacent LORs. The angles θ* for each camera 124 may be calculated in this manner and stored for later reference.
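One way to obtain ΔyANG, not mandated by the patent, is a one-dimensional cross-correlation of the two image regions. A sketch, with hypothetical names:

```python
import numpy as np

def sensor_skew_angle(region_left, region_right, lor_separation_lx):
    """Estimate theta* = arctan(dy_ang / Lx) for one camera (Fig 6C).

    region_left, region_right : 2-D gray-level arrays of the two image
        regions 253, cropped to equal size.
    lor_separation_lx : longitudinal separation Lx between the LORs,
        expressed in the same units as image rows.
    """
    # Row-wise mean profiles, zero-centered, correlated to find the
    # integer y shift that best aligns the two regions.
    p_left = region_left.mean(axis=1) - region_left.mean()
    p_right = region_right.mean(axis=1) - region_right.mean()
    corr = np.correlate(p_left, p_right, mode="full")
    dy_ang = np.argmax(corr) - (len(p_right) - 1)
    return np.arctan2(dy_ang, lor_separation_lx)
```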
Reference is now made to Fig 7A, which is a simplified flow chart illustrating operation of the Determination of X Overlap and Y Offset of Sensors by Correlation of Adjacent Images circuitry 208 of Fig 3, and to Fig 7B, which illustrates the distortion sought to be overcome by the functionality of circuitry 208.
As seen in Fig 7B, and similar to that which is seen in Fig 6B, rather than being aligned in a single row or "collinearly", the fields of view 162 of various cameras 124 are seen, in an exaggerated view, to be mutually skewed and shifted in what is referred to herein as the Y direction, being the same as direction 104, and partially overlapping in what is referred to herein as the X direction, being perpendicular to direction 104. The functionality described here with reference to Figs 7A - 7C deals with the problem of offset of the fields of view of the cameras 124. An overlap in the fields of view of two adjacent cameras is shown at reference numeral 348.
Considering now Fig 7A with further reference to Fig 7C, it is seen that test pattern target 108, preferably comprising row 300 of LORs 136 as in Fig 6C, is viewed by multiple cameras 124. Each camera 124 acquires an image of part of row 300 which includes the same LORs.
The X overlap and Y offset may be determined by using an image region 253 that is acquired by two adjacent cameras 124. Two enlarged images of one image region 253 of a LOR 136, as seen by two adjacent cameras 124, are shown in Fig 7C at reference numeral 354. The two enlarged images of LOR 136 are shown in mutually offset, overlapping relationship, such that the LORs seen in both images are in precise overlapping registration. It is noted that due to this overlapping relationship between the images a y offset, ΔyOV, and an x offset, ΔxOV, are produced, the offsets being expressed in pixel units. ΔxOV may then be converted to metric units, yielding a metric x offset mΔxOV, by employing the pixel size and shape function as described hereinabove with reference to Figs 4A - 4C. An overlap in the x direction, OVx, may then be calculated as w − mΔxOV, where w is the metric width of each image. ΔyOV may be converted to metric units by multiplying ΔyOV by a predetermined pixel size in the Y direction 104.
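The conversion and overlap computation may be sketched as follows. The exact metric-conversion formula appears only as a drawing in the source, so the expression for mΔxOV below is an assumption consistent with the surrounding text, and all names are hypothetical.

```python
def x_overlap_metric(dx_ov_pixels, d_edge, size_fn, image_width_w):
    """Compute OVx = w - m_dx_ov for two adjacent cameras (Fig 7C).

    dx_ov_pixels : measured x offset in pixels between the two images.
    d_edge : diode position at which the shared LOR 136 is seen.
    size_fn : fitted size-and-shape function S(d) of the camera.
    image_width_w : metric width w of each image.
    """
    # Assumed reading of the elided formula: the metric x offset is the
    # change of S(d) over the measured pixel offset.
    m_dx_ov = size_fn(d_edge) - size_fn(d_edge - dx_ov_pixels)
    return image_width_w - m_dx_ov

def y_offset_metric(dy_ov_pixels, pixel_size_y):
    """Convert dy_ov to metric units with the predetermined Y pixel size."""
    return dy_ov_pixels * pixel_size_y
```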
Reference is now made to Fig 8A, which is a simplified flow chart illustrating operation of the Determination of X and Y Offsets for Multiple Colors circuitry 210 of Fig 3, and to Fig 8B, which illustrates a distortion sought to be overcome by the functionality of circuitry 210.
As seen in Fig 8B, a three-color CCD camera 380 is shown. Camera 380 typically includes a multi-pixel line sensor 382 comprising three line sensors 384, 386, and 388, with the line sensors being arranged in parallel and each comprising a plurality of single-color sensing diodes 390 arranged linearly. The diodes 390 of the multi-pixel line sensor 382 may be logically arranged into groupings of three diodes, one from each of line sensors 384, 386, and 388, with each diode sensing a different color. Three such groupings 392, 394, and 396 are shown, at the center and both edges of camera 380.
Camera 380 is shown viewing elements 398 of a target 400 moving in direction 104. The image acquired by the multi-pixel line sensor 382 is typically stored in three buffers 402, 404, and 406, with each buffer corresponding to a particular color, such as red, green, and blue respectively. A combined view of buffers 402, 404, and 406 is shown at reference numeral 408. Combined buffer 408 shows the acquired images 399, 401, and 403 of elements 398. The images 399, 401, and 403 of combined buffer 408 demonstrate the pixel size and shape distortion in the X direction due to chromatic aberration, as shown at 410, as well as the Y direction displacement, as shown at 412, due to the physical separation of the R, G and B line sensors.
Considering now Fig 8A with further reference to Fig 8C, it is seen that test pattern target 108, preferably comprising row 300 of LORs 136 as seen in Figs 6C and 7C, is viewed by multiple cameras 124. Each camera 124 preferably acquires a multicolored image at either edge of the camera, with one edge of camera 124 acquiring an image of one LOR 136, and the other edge acquiring an image of an adjacent LOR 136.
Each color of each multicolored image acquired at each edge of one of the cameras 124 is preferably treated separately. A single color, such as red, may be selected as a reference color to which the other two color components of the multicolored image are compared. In the example shown, the red and green components of image region 253 at one edge of the field of view of CAM 1 are shown enlarged at reference numeral 360. The two enlarged images of LOR 136 are shown in mutually offset, overlapping relationship, such that the LORs seen in both images are in precise overlapping registration. It is noted that due to this overlapping relationship between the images a y offset, ΔyCOL, resulting from the physical shift between line sensors of different colors, and an x offset, ΔxCOL, resulting from the shift due to chromatic aberration, are produced, the offsets being expressed in pixel units. ΔxCOL and ΔyCOL may then be converted to metric units in the same manner as is described hereinabove with reference to Fig 7C. The red and blue components of image region 253 at one edge of the field of view of CAM 1 may likewise be compared, as may the red and green components and the red and blue components of the multicolored image of the adjacent LOR acquired at the other edge of the field of view of CAM 1 (not shown).
Reference is now made to Figs 9A and 9B, which, taken together, are simplified flow charts illustrating operation of the geometric polynomial generator 212 of Fig 3. A cubic function may be constructed to determine the positions of the diodes of cameras 124 in space using the outputs of 202 - 210 described hereinabove with reference to Figs 4A - 8C. Two sets of polynomials are constructed for each camera 124, an X-polynomial for determining the position of a diode in the X direction and a Y-polynomial for determining the position in the Y direction. The X-polynomial may be expressed as
Px(d) = a0 + a1d + a2d² + a3d³
where a0 through a3 are the coefficients of the X-polynomial. The Y-polynomial may be expressed as
Py(d) = b0 + b1d
where b0 and b1 are the coefficients of the Y-polynomial.
It has been found through experimentation that expressing the Y-polynomial linearly provides a sufficient approximation of a diode's Y position. Methods for determining the X-polynomial are now described in greater detail.
In Fig 9A the X overlap, OVx, as determined in 208 (Fig 3), is used to find a0[n], first for one color, such as red, of each camera n in a row of cameras, such as cameras designated CAM 1, CAM 2, and CAM 3 in Fig 7C. The X-polynomials may then be derived for the other colors, such as blue and green, based on the X-polynomial for the first color, as follows:
1) Let a0[1] = 0 for CAM 1.
2) Proceeding along the row of cameras, for each subsequent camera n determine a0 as follows:
a0[n] = a0[n−1] + (Sn−1(ND) − w) + OVx[n, n−1]
where ND is the number of diodes in the preceding camera n−1, Sn−1(ND) is the value of the pixel size and shape function output determined in 202 as evaluated for the last pixel of camera n−1, w is the metric width of the image containing the LOR, and OVx is the measured overlap in the X direction as determined in 208.
3) The coefficients s1, s2, s3 as expressed in the size and shape function output determined by circuitry 202 (Fig 3) are assigned to a1, a2, and a3 respectively of the X-polynomial.
The red-green and red-blue X offsets, ΔxCOL, as determined by circuitry 210 (Fig 3), are combined with the result of the X-polynomial determined for the red color component, designated (a0, a1, a2, a3)red, to yield coefficient values for the X-polynomial for the green color as follows:
a0[green] = a0[red] + (dr * rg_xl − dl * rg_xr) / (dr − dl)
a1[green] = a1[red] + (rg_xr − rg_xl) / (dr − dl)
a2[green] = a2[red]
a3[green] = a3[red]
where
dr is the diode position of the LOR used for the overlapping at the right edge of the camera,
dl is the diode position of the LOR used at the left edge of the camera,
rg_xl is the ΔxCOL difference measured between the red and the green components at the left edge of the camera, and
rg_xr is the ΔxCOL difference measured at the right edge of the camera.
The "left" and "right" edges of the field of view of a camera refer respectively to the edge of the camera closest to camera n−1 and the edge closest to camera n+1, except for the first and last cameras 1 and n, where the edges are defined with respect only to cameras n+1 and n−1 respectively. The same procedure may be used to calculate a0[blue] through a3[blue].
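Steps 1) to 3) and the color derivation may be sketched as follows, with hypothetical names; the per-camera inputs are assumed to have been measured as described above:

```python
def chain_x_polynomials(cameras):
    """Derive (a0, a1, a2, a3) of the red X-polynomial per camera (Fig 9A).

    cameras : list of dicts, one per camera along the row, each holding
        's'  : (s1, s2, s3) size-and-shape coefficients,
        'nd' : number of diodes ND,
        'w'  : metric image width w,
        'ovx': measured overlap OVx with the preceding camera
               (unused for CAM 1).
    """
    polys = []
    a0 = 0.0                                    # step 1: a0[1] = 0
    for n, cam in enumerate(cameras):
        if n > 0:
            prev = cameras[n - 1]
            s1, s2, s3 = prev['s']
            nd = prev['nd']
            s_nd = s1*nd + s2*nd**2 + s3*nd**3  # S_{n-1}(ND)
            # step 2: a0[n] = a0[n-1] + (S_{n-1}(ND) - w) + OVx[n, n-1]
            a0 = a0 + (s_nd - prev['w']) + cam['ovx']
        s1, s2, s3 = cam['s']                   # step 3: a1..a3 = s1..s3
        polys.append((a0, s1, s2, s3))
    return polys

def derive_green_x_poly(red, dl, dr, rg_xl, rg_xr):
    """Shift the red X-polynomial by the measured red-green offsets."""
    a0, a1, a2, a3 = red
    a0g = a0 + (dr * rg_xl - dl * rg_xr) / (dr - dl)
    a1g = a1 + (rg_xr - rg_xl) / (dr - dl)
    return (a0g, a1g, a2, a3)
```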
Methods for determining the Y-polynomial are now described in greater detail with particular reference to Fig 9B. Referring again to Fig 6B, an angle θ* may be determined for each camera 124 between the axis 162 of the field of view of each camera 124 and the axis 308 which is perpendicular to the direction of motion 104. θ1 through θ3 may then be determined by the relationship θ = θ* − α, where α is the correction angle determined by circuitry 204 (Fig 3). In Fig 9B the Y-polynomial is determined initially for one color, such as red, of each camera n in a row of cameras, such as cameras designated CAM 1, CAM 2, and CAM 3. The Y-polynomials may then be derived for the other colors, such as blue and green, based on the Y-polynomial for the first color.
The coefficient b1 of the Y-polynomial may be derived as b1 = tan(θ). The coefficient b0 of the Y-polynomial may be calculated in two stages. In the first stage, b0 of the first camera CAM 1 is set: b0[1] = 0. b0 for each subsequent camera may be calculated as follows:
b0[n] = b0[n−1] + b1[n−1]*(Sn−1(ND) − OVx) + ΔyOV[n, n−1]
In the second stage the minimum value of the three Y-polynomials referring to the three cameras is determined. Since the approximation is linear, it is sufficient to look for the minimum value at the edges of the fields of view of the respective cameras, as follows:
min(Py1(0), Py1(ND), Py2(0), Py2(ND), Py3(0), Py3(ND))
where each of the subscripts of Py identifies a specific one of several cameras. This minimum is subtracted from b0 for each camera, ensuring that Py >= 0 for all diodes.
The red-green and red-blue y offsets, ΔyCOL, as determined by circuitry 210 (Fig 3), are combined with the result of the Y-polynomial determined for the red color component to yield b0 and b1 values for the green and the blue Y-polynomials. This is done as follows:
b0[green] = b0[red] + 0.5 * (rg_ly + rg_ry)
b1[green] = b1[red]
where rg_ly is the ΔyCOL shift measured between the red and green components at the left edge of the camera, and rg_ry is the ΔyCOL shift measured at the right edge of the field of view of the camera. b0[green] and b0[blue] are also preferably modified to accommodate the different accumulation times for each color component, as is described in greater detail hereinbelow with reference to Fig 17.
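A sketch of the corresponding Y-polynomial construction follows; names are hypothetical, and list entries at index 0 correspond to CAM 1:

```python
import numpy as np

def build_y_polynomials(thetas, s_nd, ovx, dy_ov, nd):
    """Construct Py(d) = b0 + b1*d for each camera (Fig 9B).

    thetas : corrected skew angles theta = theta* - alpha per camera.
    s_nd   : S_n(ND) of each camera n, evaluated for its last diode.
    ovx    : metric X overlaps OVx[n, n-1] (entry 0 unused).
    dy_ov  : Y offsets between adjacent cameras (entry 0 unused).
    nd     : number of diodes per camera.
    """
    b1 = [float(np.tan(t)) for t in thetas]
    b0 = [0.0]                                  # stage 1: b0[1] = 0
    for n in range(1, len(thetas)):
        b0.append(b0[n-1] + b1[n-1] * (s_nd[n-1] - ovx[n]) + dy_ov[n])
    # Stage 2: a linear Py attains its extrema at the edges of the
    # field of view, so the global minimum is found there and
    # subtracted, ensuring Py >= 0 for all diodes.
    m = min(b0[n] + b1[n] * d for n in range(len(b0)) for d in (0, nd[n]))
    return [(b0[n] - m, b1[n]) for n in range(len(b0))]
```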
Reference is now made to Fig 10A, which is a simplified illustration of multiple cameras 500 and 502 acquiring an image of a target 504 under ideal conditions, and to Fig 10B, which is a simplified illustration of image buffers for storing the acquired image of target 504. Cameras 500 and 502 are typically in fixed positions, each having a static field of view, and are arranged such that target 504 passes through the fields of view of cameras 500 and 502 in the direction of motion 104. Cameras 500 and 502 each acquire an image of target 504 one image line at a time by employing a multi-pixel line sensor as is described hereinabove.
Each diode of the multi-pixel line sensor acquires a single-pixel image of a specific location on target 504, and the pixels acquired by each diode collectively form an image line. An image line portion 512 is shown comprising a plurality of pixels 514. As target 504 moves in the direction of motion 104, the field of view of cameras 500 and 502 "moves" relative to the target in the direction designated by arrows 508, and thus the image lines are acquired in the direction of arrows 508 as well. Referring to a time index 510 ranging from t0 to t1, camera 500 begins acquiring an image at t0 to yield an image line 516, shown in dashed lines. Image line portion 512 is acquired at time index ts, shown intersecting a portion of a target element 518. Target 504 continues to move in the direction of arrow 506, and the acquired image lines approach t1, as is shown in dashed lines by an image line portion 520.
The conditions under which cameras 500 and 502 acquire the image of target 504 are ideal in that the fields of view of both cameras are aligned in a single row and are non-overlapping. As shown in Fig 10A, image line portion 512 associated with camera 500 is aligned in a single row with an image line portion 522 associated with camera 502 at time index ts, with image line portions 512 and 522 meeting at a boundary line 524. The image lines scanned by cameras 500 and 502 are typically stored in buffers, such as buffers 530 and 532 of Fig 10B. Since the scan lines of both buffers are aligned in a single row for each corresponding time index, buffers 530 and 532 may be combined to form a non-distorted composite image of target element 518, a portion of which is shown as 534.
Additional reference is now made to Figs 11A and 11B, which illustrate, in contrast to Figs 10A and 10B, the effect cameras 500 and 502 have when acquiring an image of target 504 under less than ideal conditions, specifically when the fields of view of both cameras are mutually skewed and overlapping. Figs 11A and 11B are intentionally provided as an oversimplified illustration of some difficulties encountered, and are merely intended to review what was described in greater detail hereinabove; they are not intended to supersede the descriptions of Figs 1 - 9B.
The image lines acquired are shown not aligned in a single row for a given time index, such as is shown with reference to buffers 560 and 562 and image line portions 564 and 566 of Fig 11B. Combining buffers 560 and 562 would produce a distorted composite image 568 of target element 518, as is shown in Fig 11B. In addition, simply combining the buffer images would neither correct for image overlap, discussed hereinabove with reference to Figs 7A - 7C, nor for the viewing angle distortion, discussed hereinabove with reference to Figs 4A - 4C.
Techniques for deriving a corrected composite image from buffers 560 and 562 are now described with additional reference to Figs 12 - 16.
A FIFO buffer, such as a FIFO buffer 600 of Fig 12, may be defined by defining a window having a height expressed in a fixed number of image buffer rows, such as 40, beginning with the first row of pixels acquired. This window is then typically advanced to a new position one row at a time as each new row of pixels is acquired. Alternatively, an image buffer may initially be filled with rows of pixels, at which point the FIFO buffer window may be defined as a subset of the image buffer rows and advanced along the image buffer in the manner just described.
Before the X and Y polynomials determined with reference to Figs 9A and 9B can be used for correcting the image, they may be translated into another type of polynomial, referred to herein as a "diode compensating polynomial". This polynomial maps the relationship between a diode and a pixel position of a corrected image constructed using a pixel size chosen by the user. The diode compensating polynomials Qx(d) and Qy(d) may be derived from the X and Y-polynomials through the transformation
Qx(d) = Px(d)/p
Qy(d) = Py(d)/p
where p is the pixel size chosen by the user and is expressed in the same metric units as Px(d). The pixel size p chosen by the user must be a multiple of the minimum measurable distance of travel in the scan direction 104, typically one pulse unit of a drum encoder. The diode compensating polynomial Q may additionally be adjusted for the shift introduced by different color component accumulation times, as is described in greater detail hereinbelow with reference to Fig 17.
Each pixel or grid point in the FIFO buffer represents the sampling of a corresponding target location acquired by a diode. A process of "resampling" is used whereby calculations are performed to determine a gray level g at an arbitrary position "between" grid points of the buffer grid. A four point convolution may be used to interpolate a value for g as follows:
g = c1·g1 + c2·g2 + c3·g3 + c4·g4
where c1 through c4 are the convolution coefficients, and g1 through g4 are four gray levels at four adjacent grid points.
A method for determining the four interpolation coefficients c1 through c4, collectively designated ci, and the four gray levels g1 through g4, collectively designated gi, is now described.
The four gray levels gi may be selected from four contiguous grid points of the FIFO buffer. These four points are referred to herein as a "quartet". The method of determining which four grid points are selected from the buffer grid is described in detail below. Resampling may be performed in two stages corresponding to the X and Y directions described hereinabove. The stages are referred to herein as X-resampling and Y-resampling.
Y-resampling is now described in greater detail with reference to Fig 12. Two FIFO buffers 600 and 602 are shown, corresponding to two cameras. The set of all pixels 604 in the FIFO buffers scanned by a diode d may be referred to as diode d's "gray level column", such as a column 606. Virtual scan lines 608 and 610 are shown to indicate the correction angles needed for each buffer to compensate for the misalignment angles of each corresponding camera. A quartet 612 is shown as a group of four pixels within the gray level column 606 closest to Qy(d).
The following steps are performed:
1. For each diode d the value of the resampling polynomial Qy(d) is calculated.
2. The quartet index q(d) denotes the first grid point belonging to the quartet of pixels that lie within the diode's gray level column. The other three grid points in the quartet are the previous three grid points in the diode's gray level column, that is, the three grid points most recently acquired by diode d just prior to the grid point at index q(d). q(d) is determined as follows:
q(d) = floor(Qy(d) − 1/2) − 1
The index q(d) for each quartet is preferably stored in a quartet look-up table in a position corresponding to the diode d.
3. The four convolution coefficients c1 through c4 are calculated based on the distance of the polynomial Qy from the nearest quartet index. This distance is called ξ(d) and is expressed as
ξ(d) = Qy(d) − (q(d) + 1)
Preferably −1/2 <= ξ < 1/2.
4. For a given ξ the four convolution coefficients c1, c2, c3, and c4 may be calculated as follows:
c1 = −0.5ξ² + 0.5ξ³
c2 = 0.5ξ + 2ξ² − 1.5ξ³
c3 = 1 − 2.5ξ² + 1.5ξ³
c4 = −0.5ξ + ξ² − 0.5ξ³
where ξ is dependent on d as explained above.
The use of convolution coefficients c1 through c4 is described in greater detail in "Image Reconstruction by Parametric Cubic Convolution", Stephen K Park and Robert A Schowengerdt, Computer Vision, Graphics, and Image Processing 23, 258-272 (1983), the disclosure of which is incorporated herein by reference.
For each diode d these four coefficients c1, c2, c3, and c4 are preferably encoded in such a manner that when decoded and summed the summed value equals 1.0, although the accuracy of any single coefficient may be diminished. The encoded values are preferably stored in a coefficients look-up table in a position corresponding to d.
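A sketch of the table construction for one camera follows. The four weight polynomials are the Park-Schowengerdt cubic-convolution weights as reconstructed above; their assignment to c1 through c4, like all names here, is illustrative rather than taken verbatim from the source.

```python
import math

def y_resampling_tables(qy_values):
    """Build the Y-quartet and Y-coefficients look-up tables.

    qy_values : Qy(d) evaluated for every diode d of the camera.
    Returns parallel lists (quartet indices, coefficient tuples) indexed by d.
    """
    quartets, coeffs = [], []
    for qy in qy_values:
        q = math.floor(qy - 0.5) - 1       # quartet index q(d)
        xi = qy - (q + 1)                  # -1/2 <= xi < 1/2
        c1 = -0.5*xi**2 + 0.5*xi**3
        c2 = 0.5*xi + 2*xi**2 - 1.5*xi**3
        c3 = 1 - 2.5*xi**2 + 1.5*xi**3
        c4 = -0.5*xi + xi**2 - 0.5*xi**3
        quartets.append(q)
        coeffs.append((c1, c2, c3, c4))    # c1+c2+c3+c4 == 1 for any xi
    return quartets, coeffs
```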
Overlap correction functionality is now described in further detail with reference to Fig 13, which shows outputs 620 and 622 from two adjacent cameras 1 and 2 after Y-resampling and prior to being combined. An image overlap region 624 between the two cameras must be corrected when combining the outputs thereof to provide a single image. In a preferred embodiment the image outputs are not combined by simply using the output of camera 1 until an arbitrarily chosen pixel position in the overlap region and then switching to the output of camera 2 starting from a corresponding pixel position. Rather, a "blending region" 626 of a predefined number of pixels B, typically 100 pixels, is defined within the overlap region 624 between cameras 1 and 2, where corresponding pixels from both cameras within the blending region are blended to yield a single pixel value which is then used to form the combined image 630. The blending region preferably begins after allowing for a pixel margin 628 of a predefined number of pixels M, typically 20 pixels, in order to accommodate the lower-quality output often encountered at the ends of a diode array.
It is appreciated that the two diode resampling polynomials Qx(1)(d) and Qx(2)(d) of the two adjacent cameras may be used to determine the amount of overlap between the cameras. To correct for overlap, the following steps may be performed:
1. Defining Qx(2)(1) to refer to the pixel position r of the first pixel of the X and Y-resampled output of camera 2 (X-resampling is described in greater detail hereinbelow with reference to Fig 15). The leading edge of the blending region may be determined by adding the predetermined number of pixels defined for the pixel margin described above.
2. As shown in Fig 14, determining a weight w(i) for each position i in the blending region, where w(i) = i/B. The use of this weight aids in combining the outputs of cameras 1 and 2 as a linear mixture of the outputs of both of the adjacent cameras, where the contribution of camera 1 is expressed as 1 − w(i) and that of camera 2 as w(i). For example, where the blending region comprises 100 pixels, the first pixel in the blending region contains 99% of the information from the pixel of camera 1 and 1% from the corresponding pixel of camera 2, the proportions for the last pixel in the blending region being the inverse. This solution enables a smooth transition between the cameras.
3. Expressing the gray-level output g of two corresponding pixels of cameras 1 and 2 within the blending region as
g(i) = (1 − w(i)) * g1(i) + w(i) * g2(i)
where B is the number of pixels in the blending region, i is the index of the position in the region, and g1(i) and g2(i) are the gray levels of the corresponding pixels in the region of cameras 1 and 2 respectively.
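The blending of step 3 may be sketched as follows, with hypothetical names; row1 is assumed to have been trimmed so that its last B pixels are the blending region, the margin M having already been discarded:

```python
def blend_rows(row1, row2, blend_b=100):
    """Combine two X/Y-resampled pixel rows across a blending region.

    row1 : list of gray levels from camera 1, ending with the B pixels
        of the blending region.
    row2 : list of gray levels from camera 2, beginning so that its
        first B pixels correspond to the blending region.
    """
    start = len(row1) - blend_b
    blended = []
    for i in range(blend_b):
        w = i / blend_b                     # w(i) = i / B
        # g(i) = (1 - w(i)) * g1(i) + w(i) * g2(i)
        blended.append((1 - w) * row1[start + i] + w * row2[i])
    # Combined row: camera 1 up to the blend, the mixture, then camera 2.
    return row1[:start] + blended + row2[blend_b:]
```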
X-resampling is now described in greater detail with reference to Fig 15. Due to optical distortions, and in order to accommodate a user-defined pixel size, an output 640 of the Y-resampling must be resampled in the X direction, thereby creating an X-corrected image row 642 with pixels having the desired pixel size. Each pixel position r on the X-corrected image row corresponds to a position d(r) on the diode array. d(r) may be calculated using the diode resampling polynomial Qx(d). This involves finding the inverse function Qx⁻¹(r) of Qx(d). This inverse function allows the mapping of the pixel position r on the X-corrected image row to a corresponding position d(r) on the diode array. It is appreciated that this position might not correspond to an integer position of a specific diode, but rather may be expressed in fractional terms along the diode array. Once the diode position dp has been found, an X-quartet of pixels corresponding to four diodes is determined in a fashion similar to the method described above for Y-resampling. The gray levels of these four pixels are subsequently convolved with four convolution coefficients to interpolate a gray level value at position r on the X-corrected image row.
An index q(r) is maintained to denote the position of the current quartet being used.
Additional reference is now made to Fig 16, which illustrates the steps to be performed for each pixel in the X-corrected image row, as follows:
1. Assigning an index rp to correspond to the first pixel 650 in an X-corrected image row 652.
2. Stepping index rp through each pixel position in the X-corrected image row 652 corresponding to an overlap region 654 to the end of the field of view 656 of the current camera, CAM 1, and finding the diode position dp(1) such that rp(1) = Qx(dp(1)). Since Qx is a monotonically increasing function, dp advances as rp advances. When dp reaches the end 656 of the field of view of CAM 1, rp is returned to the pixel position corresponding to the start of the overlap region 658 of the next camera, CAM 2, assigning to rp the value of the diode compensating polynomial evaluated for the first diode of camera CAM 2, i.e. rp ← Qx(2)(1).
3. Stepping index rp through each pixel position in the X-corrected image row 652 for CAM 2, finding the diode position dp(2) such that rp(2) = Qx(dp(2)), and continuing as in step 2.
Steps 2 and 3 are performed for each subsequent pair of cameras. Finding the diode position dp may either be done through a one-time inversion of the function Qx(dp) or through a numerical solution. Once the diode position dp has been found, the index of the first pixel in the X-quartet, as well as ξ, may be expressed as follows:
q(r) = floor(Qx⁻¹(r) − 1/2) − 1
ξ(r) = Qx⁻¹(r) − (q(r) + 1)
as was similarly done in the case of Y-resampling. q(r) thus defines the X-quartet, and the convolution coefficients c1 through c4 may be calculated based on ξ using the formulae described above for Y-resampling.
4. Storing q(r) in an X-quartet look-up table in a position corresponding to the pixel position r. Alternatively, calculating an offset relative to the position of q(r−1) by subtracting the value of the previous quartet position q(r−1) from the current quartet position q(r) and storing the offset.
5. Encoding the four convolution coefficients c1 through c4 as was described above for Y-resampling and storing them in an X-coefficients look-up table in a position corresponding to pixel position rp.
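Steps 1 to 5 reduce, for each camera, to filling two look-up tables. A sketch with hypothetical names, assuming the inverse polynomial is supplied as a callable:

```python
import math

def cubic_coeffs(xi):
    """Cubic-convolution weights, as in the Y-resampling stage."""
    return (-0.5*xi**2 + 0.5*xi**3,
            0.5*xi + 2*xi**2 - 1.5*xi**3,
            1 - 2.5*xi**2 + 1.5*xi**3,
            -0.5*xi + xi**2 - 0.5*xi**3)

def x_resampling_tables(qx_inverse, n_pixels):
    """Build the X-quartet and X-coefficients look-up tables.

    qx_inverse : callable giving d(r) = Qx^-1(r), the possibly
        fractional diode position for corrected pixel position r.
    n_pixels : number of pixels in the X-corrected image row.
    """
    quartets, coeffs = [], []
    for r in range(n_pixels):
        d = qx_inverse(r)
        q = math.floor(d - 0.5) - 1        # q(r), as for Y-resampling
        xi = d - (q + 1)
        quartets.append(q)
        coeffs.append(cubic_coeffs(xi))
    return quartets, coeffs
```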
Image correction employing Y-resampling, X-resampling, and overlap correction is preferably performed by circuitry 110 of Fig 1, and is now summarized hereinbelow.
During Y-resampling, an image may be corrected for camera field-of-view misalignment as follows. For a given position of the FIFO window the quartet index is retrieved for each diode from the Y-quartet look-up table. The four gray level values g1 through g4 of the corresponding quartet may then be extracted from the FIFO buffer, and the four convolution coefficients c1 through c4 may be retrieved from the Y-coefficients look-up table. The final interpolated gray level for each diode may then be calculated, as was previously described, by
g = c1 * g1 + c2 * g2 + c3 * g3 + c4 * g4
and stored in a Y-corrected gray level buffer.
During X-resampling, the Y-resampled output is processed further to correct pixel shape and size. The gray level is processed one pixel row at a time. For each pixel in the X-corrected image row, the X-quartet index is retrieved from the X-quartet look-up table and the four convolution coefficients c'1 through c'4 are retrieved from the X-coefficients look-up table. The four gray level values g'1 through g'4 of the corresponding X-quartet may then be retrieved from the Y-corrected gray level buffer. The final interpolated gray level for each pixel position rp may then be calculated, as was previously described, by
g' = c'1 * g'1 + c'2 * g'2 + c'3 * g'3 + c'4 * g'4
During overlap correction the X-resampled image row outputs from the various cameras are combined to form a single image row, as described above with reference to Fig 16.
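Both resampling stages apply the same four-point convolution once the tables exist. A sketch, with hypothetical names; the pairing of gray levels with coefficients follows the quartet ordering assumed above:

```python
def interpolate_quartet(grays, q, coeffs):
    """g = c1*g1 + c2*g2 + c3*g3 + c4*g4 for the quartet ending at grid
    point q (the three preceding grid points complete the quartet)."""
    g1, g2, g3, g4 = grays[q - 3], grays[q - 2], grays[q - 1], grays[q]
    c1, c2, c3, c4 = coeffs
    return c1*g1 + c2*g2 + c3*g3 + c4*g4
```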
It is well known in color image acquisition systems that both the nature of the light sources illuminating a target and the spectral reflective properties of the target may result in uneven color intensity of the image acquired. Thus, for example, a diode in the multi-line sensor array which detects red may receive a different amount of light for a white area of the target than a diode which detects blue receives for the same white area. Uneven color intensity may be corrected by varying the accumulation times of each color component of the multi-line sensor array in an inverse relationship to the intensity of light received by the color component.
In the present invention, when acquisition of an image line of a target begins, the electronic shutters of the camera are all opened, and each color component of the multi-line sensor array begins to accumulate the charge corresponding to its respective color. The exposure of each color component of the multi-line sensor array is then varied by closing the electronic shutters of each color component at different times. However, the center of the acquired pixel for each color component may then be different from the geometric center of the pixel. Thus, when measuring the overlap in the Y direction 104, ΔyCOL, as described in Fig 8C, an "accumulation shift" Δyacc is introduced that may be corrected by subtracting the center of the acquired pixel from the geometric center of the pixel for each color component by the formula
Δyacc[green] = (AT[green] − AT[red]) / (2*IT)
b0*[green] = b0[green] + Δyacc[green]
where AT represents the accumulation time, IT represents the integration time, i.e. the time between the start of two subsequent image rows, and b0*[green] is the modified 0th coefficient of Qy. The blue coefficient is modified similarly.
This accumulation shift is preferably determined during acquisition of the test target, and is used to adjust the b0 component of the Y-polynomial Py. The diode compensating polynomial Qy described hereinabove may also be adjusted for the accumulation shift according to the different exposure times chosen for the various color components.
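The adjustment itself is a one-line computation per color; a sketch with hypothetical names:

```python
def accumulation_shift(at_color, at_red, integration_time):
    """dy_acc = (AT[color] - AT[red]) / (2 * IT), per the formula above."""
    return (at_color - at_red) / (2.0 * integration_time)

def adjust_b0(b0_color, at_color, at_red, integration_time):
    """b0* = b0 + dy_acc, the accumulation-corrected 0th coefficient."""
    return b0_color + accumulation_shift(at_color, at_red, integration_time)
```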
Fig 17 illustrates the problem of accumulation shift in greater detail. As was described hereinabove with reference to Fig 8C, a multi-line sensor array 700 is shown acquiring three pixels 702, 704, and 706, each pixel being acquired by a different line sensor 708, 710, and 712, with each line sensor comprising a plurality of single-color sensing diodes. Due to the different accumulation times of each line sensor, the relative areas of each of the three pixels acquired vary, as is shown by accumulation areas 714, 716, and 718. A geometric center may be defined for each of the three pixels at 720, 722, and 724. The center of each accumulation area may be defined at 726, 728, and 730. The distances 732, 734, and 736 between the center of each accumulation area and its corresponding geometric center represent the accumulation shift for each color component and may be used to correct for the overlap in the Y direction 104 as described above.
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination.
It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the features described hereinabove as well as modifications and variations thereof which would occur to a person of skill in the art upon reading the foregoing description and which are not in the prior art.

CLAIMS

What is claimed is:
1 An image acquisition system comprising a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication in two dimensions of distortions in the output of said plurality of sensors, said output indication being employed to generate a function which maps the locations viewed by said sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by said plurality of sensors to correct said distortions by employing said output indication
2 An image acquisition system according to claim 1 and wherein said plurality of sensors include plural sensors having different spectral sensitivities
3 An image acquisition system according to claim 2 and wherein said plurality of sensors include at least two sensors having generally the same spectral sensitivity
4 An image acquisition system according to claim 1 and wherein said plurality of sensors include at least two sensors which at least partially overlap in at least one dimension
5 An image acquisition system according to claim 1 and wherein said pre-scan calibration subsystem is operative to sub-pixel accuracy
6 An image acquisition system according to claim 1 and wherein said distortion correction subsystem performs non-zero'th order interpolation of pixels in the outputs of said plurality of sensors
7 An image acquisition system according to claim 1 and wherein said distortion correction subsystem compensates for variations in pixel size in said plurality of sensors
8 An image acquisition system according to claim 1 and wherein said distortion correction subsystem compensates for variations in magnification in said plurality of sensors
9 An image acquisition system according to claim 1 and wherein said distortion correction subsystem compensates for chromatic aberrations in said plurality of sensors
10 An image acquisition system according to claim 1 wherein said plurality of sensors include sensors having differing spectral sensitivities and wherein said function is dependent on differing accumulation times employed for said sensors having differing spectral sensitivities
11 An image acquisition system according to claim 1 and wherein said distortion correction subsystem compensates for variations in pixel shape in said plurality of sensors
12 An image acquisition system according to claim 1 and wherein said distortion correction subsystem is operative to an accuracy of better than 5% of pixel size of said multiplicity of sensor elements
13 An image acquisition system comprising a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication of distortions in the output of said plurality of sensors, said output indication being employed to generate a correction function, and a distortion correction subsystem operative during scanning of an article by said plurality of sensors to correct said distortions by employing said correction function, said distortion correction subsystem being operative to an accuracy of better than 5% of pixel size of said multiplicity of sensor elements
14 An image acquisition system comprising a plurality of sensors, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors while being moved relative to the plurality of sensors in a direction of relative movement, for providing an output indication of distortions in the output of said plurality of sensors, said pre-scan calibration subsystem being operative to correlate images of at least one target on said test pattern as seen by said plurality of sensors, thereby to determine the relative orientation of said plurality of sensors, and a distortion correction subsystem operative to correct said distortions by employing said output indication
15 An image acquisition system according to claim 14 and wherein said pre-scan calibration subsystem also is operative to provide an output indication of the orientation of said plurality of sensors relative to said scan direction
16 An image acquisition system according to claim 14 and wherein each of said plurality of sensors includes a multiplicity of sensor elements, and said pre-scan calibration subsystem also is operative to determine the pixel size characteristic of each of said multiplicity of sensor elements of each of said plurality of sensors
17 An image acquisition system according to claim 16 and wherein said pre-scan calibration subsystem is operative to determine the pixel size characteristic of each of said multiplicity of sensor elements of each of said plurality of sensors by causing said plurality of sensors to view a grid formed of a multiplicity of parallel uniformly spaced lines, formed on said test pattern
18 An image acquisition system comprising a plurality of sensors each including a multiplicity of sensor elements, a pre-scan calibration subsystem, employing a predetermined test pattern, which is sensed by the plurality of sensors, for providing an output indication in two dimensions of distortions in the output of said plurality of sensors, said output indication being employed to generate a function which maps the locations viewed by said sensor elements in at least two dimensions, and a distortion correction subsystem operative during scanning of an article by said plurality of sensors to correct said distortions by employing said output indication
19 An image acquisition system according to claim 18 and wherein said distortion correction subsystem is operative using a pixel size which is user selectable
20 An article inspection system comprising an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from said image, and an output indication subsystem for providing an output indication of the presence of said at least one predetermined characteristic of the article, characterized in that said camera assembly includes a plurality of sensor assemblies, self calibration apparatus for determining a geometrical relationship between said sensor assemblies, and sensor output modification apparatus for modifying outputs of said plurality of sensor assemblies based on said geometrical relationship between said sensor assemblies, said sensor output modification apparatus comprising electronic interpolation apparatus operative to perform non-zero'th order interpolation of pixels in the outputs of said plurality of sensor assemblies
21 An article inspection system comprising an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from said image, and an output indication subsystem for providing an output indication of the presence of said at least one predetermined characteristic of the article, characterized in that said camera assembly includes a plurality of sensor assemblies, self calibration apparatus for determining a geometrical relationship between said sensor assemblies, and sensor output modification apparatus for modifying outputs of said plurality of sensor assemblies based on said geometrical relationship between said sensor assemblies, said sensor output modification apparatus being operative to modify the outputs of said plurality of sensor assemblies to sub-pixel accuracy
22 An article inspection system comprising an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from said image, and an output indication subsystem for providing an output indication of the presence of said at least one predetermined characteristic of the article, characterized in that said camera assembly includes at least one sensor assembly, and sensor output modification apparatus for modifying at least one output of said at least one sensor assembly based at least in part on an optical distortion associated with said at least one sensor assembly
23 An article inspection system according to claim 22 wherein said optical distortion comprises pixel size distortion
24 An article inspection system according to claim 22 wherein said optical distortion comprises magnification distortion
25 An article inspection system according to claim 22 wherein said optical distortion comprises chromatic aberration
26 An article inspection system according to claim 22 wherein said optical distortion comprises overlap misadaptation
27 An article inspection system according to claim 22 wherein said optical distortion comprises pixel shift due to sensor separation
28 An article inspection system according to claim 22 wherein said optical distortion comprises focus inconsistencies across color components
29 An article inspection system according to claim 22 wherein said optical distortion comprises color accumulation shift
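As one hedged illustration of correcting the colour-dependent distortions enumerated in claims 25 and 29, and not the patented method itself: if each colour channel of a line-scan sensor is displaced along the scan axis by a known, possibly fractional, number of pixels, for instance because of differing accumulation times, each plane can be resampled by its own offset. correct_channel_shifts and the shifts parameter are assumed names.

```python
import numpy as np

def correct_channel_shifts(rgb, shifts):
    """Resample each colour plane along the scan axis by its own
    fractional offset (linear interpolation), to undo a known
    per-channel displacement such as a colour accumulation shift."""
    n_rows = rgb.shape[0]
    rows = np.arange(n_rows, dtype=float)
    out = np.empty_like(rgb, dtype=float)
    for c, s in enumerate(shifts):
        src = np.clip(rows + s, 0.0, n_rows - 1.0)  # source row per output row
        lo = np.floor(src).astype(int)
        hi = np.minimum(lo + 1, n_rows - 1)
        frac = (src - lo)[:, None]
        out[:, :, c] = (1.0 - frac) * rgb[lo, :, c] + frac * rgb[hi, :, c]
    return out
```

For example, shifts of (0.0, 0.5, 1.0) would realign green and blue planes displaced by half a pixel and one pixel relative to red.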
30 An article inspection system comprising an image acquisition subsystem operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from said image, and an output indication subsystem for providing an output indication of the presence of said at least one predetermined characteristic of the article, characterized in that said camera assembly includes at least one sensor assembly, and sensor output modification apparatus for modifying at least one output of said at least one sensor assembly, said sensor output modification apparatus comprising a function generator which generates a function which maps locations on said sensor assembly to a collection of scan locations
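A sketch, again illustrative only, of how such a function might be applied during scanning: invert a monotonic element-to-location map and sample the raw sensor line at the fractional element index corresponding to each desired scan location. resample_to_scan_grid and its parameters are assumptions, not the patent's terminology.

```python
import numpy as np

def resample_to_scan_grid(line, element_locations_um, scan_grid_um):
    """Map one raw sensor line onto a uniform collection of scan
    locations: invert the (assumed monotonic) element-to-location map
    with np.interp, then sample the line linearly at the fractional
    element index of each scan location."""
    line = np.asarray(line, dtype=float)
    frac_idx = np.interp(scan_grid_um, element_locations_um,
                         np.arange(len(element_locations_um), dtype=float))
    lo = np.floor(frac_idx).astype(int)
    hi = np.minimum(lo + 1, len(line) - 1)
    t = frac_idx - lo
    return (1.0 - t) * line[lo] + t * line[hi]
```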
31 An article inspection system comprising a camera assembly operative to acquire an image of an article to be inspected, an image analysis subsystem for identifying at least one predetermined characteristic of the article from said image, and an output indication subsystem for providing an output indication of the presence of said at least one predetermined characteristic of the article, characterized in that said camera assembly includes a user interface which enables a user to select resolution of the image acquired by the camera assembly, an electro-optical sensor assembly, and an electronic resolution modifier operative downstream of said electro-optical sensor assembly
32 An article inspection system according to claim 31 and wherein said camera assembly is operative in response to resolution selection at said user interface to determine the pixel size of said image
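Finally, an illustrative sketch of an electronic resolution modifier operating downstream of the sensor, in the sense of claims 31 and 32: the acquired image is resampled so that one output pixel spans the user-selected size. Nearest-neighbour sampling is used here for brevity; a real modifier would more plausibly use the non-zero'th order interpolation of claim 20. set_output_resolution and its parameters are assumed names, and a 2-D single-channel image is assumed.

```python
import numpy as np

def set_output_resolution(image, native_um_per_px, target_um_per_px):
    """Resample a 2-D image so that one output pixel spans the
    user-selected size (nearest-neighbour, for brevity)."""
    scale = native_um_per_px / target_um_per_px
    h = max(1, int(round(image.shape[0] * scale)))
    w = max(1, int(round(image.shape[1] * scale)))
    ys = np.minimum((np.arange(h) / scale).astype(int), image.shape[0] - 1)
    xs = np.minimum((np.arange(w) / scale).astype(int), image.shape[1] - 1)
    return image[np.ix_(ys, xs)]
```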
PCT/IL1999/000450 1998-08-25 1999-08-19 Method and apparatus for inspection of printed circuit boards WO2000011873A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU53840/99A AU5384099A (en) 1998-08-25 1999-08-19 Method and apparatus for inspection of printed circuit boards
EP99939581A EP1108329A1 (en) 1998-08-25 1999-08-19 Method and apparatus for inspection of printed circuit boards

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL12592998A IL125929A (en) 1998-08-25 1998-08-25 Method and apparatus for inspection of printed circuit boards
IL125929 1998-08-25

Publications (1)

Publication Number Publication Date
WO2000011873A1 WO2000011873A1 (en) 2000-03-02

Family

ID=11071888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL1999/000450 WO2000011873A1 (en) 1998-08-25 1999-08-19 Method and apparatus for inspection of printed circuit boards

Country Status (5)

Country Link
EP (1) EP1108329A1 (en)
CN (1) CN1314049A (en)
AU (1) AU5384099A (en)
IL (2) IL125929A (en)
WO (1) WO2000011873A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10208289C1 (en) * 2002-02-26 2003-02-27 Koenig & Bauer Ag Electronic image sensor with read out of signals from individual sensors so that overlapping image regions are only read once
CN1306244C (en) * 2005-06-16 2007-03-21 姚晓栋 On-the-spot printing circuit board test based on digital image
CN102914543A (en) * 2011-08-03 2013-02-06 浙江中茂科技有限公司 Article detection device of three-dimensional stereo image
CN107860773B (en) * 2017-11-06 2021-08-03 凌云光技术股份有限公司 Automatic optical detection system for PCB and correction method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5099522A (en) * 1989-05-29 1992-03-24 Rohm Co., Ltd. Method and apparatus for performing head-tail discrimination of electronic chip components
US5298989A (en) * 1990-03-12 1994-03-29 Fujitsu Limited Method of and apparatus for multi-image inspection of bonding wire
US5686994A (en) * 1993-06-25 1997-11-11 Matsushita Electric Industrial Co., Ltd. Appearance inspection apparatus and appearance inspection method of electronic components

Also Published As

Publication number Publication date
IL147723A0 (en) 2002-08-14
IL125929A (en) 2002-03-10
AU5384099A (en) 2000-03-14
IL125929A0 (en) 1999-04-11
CN1314049A (en) 2001-09-19
EP1108329A1 (en) 2001-06-20

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 99809996.1

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1999939581

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1999939581

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 1999939581

Country of ref document: EP