EP2006804A1 - Method for optical inspection of a matt surface and apparatus for applying this method - Google Patents

Method for optical inspection of a matt surface and apparatus for applying this method

Info

Publication number
EP2006804A1
Authority
EP
European Patent Office
Prior art keywords
images
sub
interest
blobs
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07075502A
Other languages
English (en)
French (fr)
Inventor
Oliver Stier
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Priority to EP07075502A priority Critical patent/EP2006804A1/de
Priority to US12/665,210 priority patent/US8189044B2/en
Priority to PCT/EP2008/057550 priority patent/WO2009000689A1/en
Publication of EP2006804A1 publication Critical patent/EP2006804A1/de
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0004: Industrial image inspection
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G01N21/8806: Specially adapted optical and illumination features
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G01N21/95: Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined

Definitions

  • the invention concerns a method for the optical inspection of a matt surface of an object, the surface having a random texture, for the purpose of finding texture anomalies.
  • N. Bonnot et al. disclose in "Machine vision system for surface inspection on brushed industrial parts", Proceedings of SPIE-IS&T Electronic Imaging, SPIE Vol. 5303, 2004, pages 64 - 72 , a method for the inspection of reflecting surfaces to find surface defects like cracks.
  • the inspection method includes an illumination procedure which uses two light sources illuminating the surface to be inspected from opposite directions at the same time.
  • the problem to be solved by the invention is to find a method for the inspection of surfaces which is applicable to matt, randomly textured surfaces, and which is suitable for the detection of surface anomalies.
  • texture anomalies like cracks are found by performing at least the following main steps.
  • In a first main step, more than one (n > 1) digital image of the surface is created by an image sensor, whereby the surface is illuminated from a different direction for each image to be created.
  • In a second main step, at least one or more (r > 0) regions of interest of the surface are defined, whereby all regions of interest are shown entirely in all of the n images.
  • A matrix of n×r sub-images is created which consists of the regions of interest in each of the n images.
  • In a third main step, texture anomalies are detected in the sub-images by digital image processing, and an abnormality chart showing the putative anomalies is generated for each sub-image.
  • In a fourth main step, for each region of interest a joint abnormality chart is generated by fusion of all abnormality charts of that region of interest.
  • In a fifth main step, texture anomalies are detected in each of the joint abnormality charts.
  • the advantage of the proposed method is that, for the first time, the inspection of matt surfaces having a random texture can be performed with a high reliability.
  • This reliability can be achieved by the inventive combination of the said main steps whereby the principal idea is the creation of a plurality of sub-images of one and the same region of interest, the sub-images differing by the illumination angle. These sub-images are interpreted separately by digital image processing to find putative texture anomalies.
  • abnormality charts of the texture anomalies can be generated which can be fused to a joint abnormality chart afterwards.
  • the number of putative texture anomalies is reduced advantageously so that the detection rate for true texture anomalies like cracks can be enhanced to a percentage close to 100 % at a very small false alarm rate.
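How these main steps fit together can be outlined in code. The following Python sketch is only an illustration under assumed data types (grayscale numpy arrays for the images, boolean masks for the regions of interest); the function names, the masking shortcut and the detector/fusion callables are placeholders, not part of the patent.

```python
import numpy as np

def joint_charts(images, roi_masks, detect_anomalies, fuse_charts):
    """images: list of n 2-D arrays taken under different illumination directions;
    roi_masks: list of r boolean arrays, one per region of interest.
    detect_anomalies(sub_image, mask) -> boolean abnormality chart (third main step).
    fuse_charts(list of charts) -> joint boolean chart (fourth main step)."""
    charts_per_region = []
    for mask in roi_masks:
        # Second main step: the sub-images of this region, one per illumination.
        sub_images = [np.where(mask, img, 0.0) for img in images]
        # Third main step: one abnormality chart per sub-image.
        charts = [detect_anomalies(sub, mask) for sub in sub_images]
        # Fourth main step: fuse all charts of this region into a joint chart.
        charts_per_region.append(fuse_charts(charts))
    # Fifth main step (not shown): detect texture anomalies in each joint chart.
    return charts_per_region
```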
  • Brightness gradients in the n images, or in the n×r sub-images, resulting from the unidirectional illumination are eliminated by an individual photometric normalization preceding the third main step.
  • This intermediate step enhances the results of the digital image processing in the third main step if there are brightness gradients across the images (sub-images) due to a narrow emission area of the light sources.
  • A photometric normalization ensures that a given texture anomaly will yield essentially the same brightness modulation regardless of its location in the sub-image or image.
  • The photometric normalization can be performed before or after the creation of sub-images from the images.
  • Surfaces appearing with curved boundaries in the n images or in the n×r sub-images are transformed to elementary shapes with, e.g., straight or circular boundaries by an individual geometric normalization before the third main step.
  • the third main step comprises the following steps.
  • In a first step, a binary sub-image is created whereby all pixels of the sub-image with a brightness (B) and/or hue (H) and/or saturation (S) below, or above, a certain threshold are grouped into blobs.
  • In a second step, the blobs are analysed according to their size and/or their position on the surface and/or their spatial relation to each other.
  • the thresholds have to be determined depending on what texture anomalies are to be found. To find the optimal thresholds for a given application empirical values have to be determined from a number of test measurements. Thus thresholds can be found yielding blobs bearing the information on all putative texture anomalies.
  • thresholds for one or more of the attributes brightness (B), hue (H) and saturation (S) have to be found.
  • Such a thresholding is based on the commonly known HSB color model.
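A minimal sketch of such an HSB thresholding followed by blob grouping, assuming an RGB sub-image given as a numpy array; the threshold values below are placeholders that would have to be tuned empirically, as described above.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy import ndimage

def hsb_blobs(rgb, b_max=0.25, s_max=None, h_range=None):
    """rgb: HxWx3 float array in [0, 1]. Returns labelled blobs and their count.
    b_max, s_max, h_range are illustrative, application-dependent thresholds."""
    hsv = rgb_to_hsv(rgb)                          # hue, saturation, brightness in [0, 1]
    mask = hsv[..., 2] < b_max                     # dark pixels: brightness below threshold
    if s_max is not None:
        mask &= hsv[..., 1] < s_max
    if h_range is not None:
        mask &= (hsv[..., 0] >= h_range[0]) & (hsv[..., 0] <= h_range[1])
    # 8-connectivity: pixels sharing an edge or a corner belong to the same blob.
    labels, n_blobs = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
    return labels, n_blobs
```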
  • A further refinement of the third main step is that its second step comprises the following steps: a selection of relevant blobs identified by a minimal size, a pairwise calculation of a distance between all blobs, and a partitioning cluster analysis based on that distance function to find clusters of relevant blobs suspected to form an anomaly.
  • This procedure advantageously reduces the number of actually irrelevant blobs having been marked erroneously by the thresholding.
  • The idea behind this refinement of the invention is that eventually only such blobs remain relevant which have at least a minimal size and are separated from other blobs by at most a maximal distance.
  • These blobs can be combined to clusters which represent an anomaly more likely than isolated blobs.
  • cracks can be found easily this way whereby linear, unbranched, open chains of relevant blobs are considered suspect clusters.
  • The reliability of the method can be further improved if all q possible sums of two out of the n images are calculated by pixel fusion to generate q additional images before the second main step, or if all q×r possible sums of two out of the n sub-images belonging to the same region of interest are calculated to generate q×r additional sub-images for each region of interest before the third main step.
  • These q additional images or q×r additional sub-images are treated identically to the n images, or the n×r sub-images, in all subsequent steps.
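A short sketch of this pixel fusion, assuming equally sized numpy images; the helper name is illustrative.

```python
from itertools import combinations

def pairwise_sums(images):
    """images: list of n equally sized arrays -> the q = n*(n-1)/2 summed images."""
    return [images[i] + images[j] for i, j in combinations(range(len(images)), 2)]
```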
  • the invention further relates to an apparatus for the optical inspection of the surface of an object which comprises a support frame which bears a camera and at least two light sources whereby the light sources and the camera are oriented towards the place intended for said surface.
  • This place could be a carrier for the object which also is integrated in the support frame.
  • the camera and the light sources might as well be oriented towards the base of the support frame whereby the support frame base can be positioned on the surface to be inspected.
  • The problem to be solved by this part of the invention is to provide an apparatus for the optical inspection of surfaces which allows the method for optical inspection to be applied quickly and reliably.
  • the inventive apparatus comprises a plurality of light sources which can be driven separately by a controller.
  • the above described method can be run on the apparatus whereby the plurality of images under different illumination angles can be taken without changing the position of the support frame or the position of the light sources attached to the support frame.
  • the method can be run after one single adjustment step of the apparatus, saving time and gaining accuracy.
  • The support frame bears exactly four light sources forming the corners of a rectangle that is similar to, and oriented in the same way as, the boundary of the rectangular objects under inspection.
  • wall tiles of combustion chambers have an approximately rectangular shape.
  • the method and the apparatus having four light sources are particularly appropriate for finding cracks in the tile surface.
  • gas turbine or compressor blades can be inspected by the inventive method.
  • Gas turbine blades comprise different coatings which appear matt and with a random surface texture.
  • the method for the inspection of gas turbine blades could be applied to a quality control after production to find e.g. cracks.
  • the method could also be applied to the inspection of worn turbine blades after a certain operating time whereby the progress in abrasion or corrosion of the coatings on the turbine blades can be evaluated. Also cracks in the surface of the coatings can be detected. This applies analogously to compressor blades.
  • a wall tile 11 is shown in figure 1 .
  • a crack 12 shall be found by the inventive method of inspection.
  • A number of images are taken of the surface 16 under different illumination angles (i.e. from each corner of the wall tile 11). These images will, at first, be analyzed individually in the following steps of the inventive method. For better understanding, these steps carry numbered headlines (V1, V1.1 ... V4); for certain steps V, alternative steps W are presented, and their correspondence is indicated by the same numbers.
  • any other object with a matt and random textured surface can be inspected.
  • the tile boundary can be localized by known methods for edge detection, morphological segmentation, etc., and be reconstructed to an idealized, continuous shape by known methods of geometrical data processing, like interpolation. These methods will make ample use of any available a priori information on the object shape and its position within the given environment, taking into account CAD models and similar means.
  • the tile 11 has a bevel 13 whose interior edge defines the relevant object boundary 14.
  • a separate mask is generated analogous to V1.2.1 including the relevant pixels and hiding all pixels which do not belong to that region of interest.
  • the various single sub-images (this method also applies to the images entering V1.1, though not explicitly described in the following) of one region of interest 15a, 15b differ strongly in their spatial illumination profiles, and each exhibit large brightness gradients, as can be seen schematically by the hatched surface 16 of the tile 11 in Fig. 2 .
  • the different illumination profiles prevent, on the one hand, reasonable results from superposing such images as in V1.6 and, on the other hand, render thresholding procedures as in V1.7.1 fruitless. Therefore an individual photometric normalization is prerequisite here for the further processing.
  • the first step of the normalization is the compensation of the camera gradation, i.e., the translation of the generic gray scale image into an intensity image.
  • the second step is the arithmetic decomposition of the intensity image into two layers, the pixel-wise superposition of which returns the intensity image.
  • the first layer is the spatial brightness profile the body surface would yield in absence of its texture, as a result only of the angular intensity distribution of the light source, the overall curvature of the body surface and its diffuse and, if applicable, specular reflectivity.
  • the second layer is the intensity image the body surface would yield after hypothetical planarization, and under perfect unidirectional, homogeneous illumination, as a result only of its texture relief. This layer is selected for further processing, the first layer being dropped.
  • the third and last step of the normalization is a transformation of the (second layer) pixel values such that the main part of the intensity distribution (without outliers) will approximately obey a standard normal distribution.
  • the normalization procedure is visualized in Fig. 3 and 4 .
  • To extract the first layer (the global intensity profile) from the intensity image shown in Fig. 3, the latter is sampled by a certain number of chips 20 in the manner of an exposure meter.
  • the tiny sub-images within the chips 20 are analyzed by procedure W1.7.2 to W1.7.4 to reject unrepresentative chips.
  • the average intensities of the accepted chips are used as data points to extract the continuous global intensity profile from the image.
  • the global intensity profile was calculated by assuming a five-parameter ellipsoid shape of the tile surface 16, a point light source with a three-parameter angular intensity profile and the Blinn-Phong shading model including diffuse and specular reflection. Including the positions of light source and camera, this shading model has 17 parameters for which maximum likelihood estimates were calculated from 58 data points using a derandomized (1,10)-evolution strategy with correlated mutations and individual step size.
  • The second layer is the fit residual, i.e. it was calculated by subtracting the global profile (first layer) from the original intensity image.
  • the pixel intensity values in the fit residual are transformed to approximately unit variance (the median of the residual of a correct fit is already zero).
  • the resulting histogram (pixel brightness B on x-axis and frequency on y-axis) is shown in Fig. 4 .
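A hedged sketch of this photometric normalization: instead of the 17-parameter Blinn-Phong shading model fitted by an evolution strategy, a simple polynomial surface fit stands in for the global intensity profile, and a robust median/MAD scaling stands in for the transformation to an approximately standard normal distribution. The function name and the polynomial degree are assumptions, not the patent's model.

```python
import numpy as np

def photometric_normalization(intensity, mask, degree=2):
    """intensity: 2-D intensity image (after compensation of the camera gradation);
    mask: boolean array of valid pixels. Returns the normalized texture layer."""
    h, w = intensity.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx[mask].astype(float)
    y = yy[mask].astype(float)
    z = intensity[mask].astype(float)
    # Design matrix of a 2-D polynomial, standing in for the shading model.
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([x ** i * y ** j for i, j in terms], axis=1)
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    # First layer: the fitted global intensity profile (dropped afterwards).
    profile = sum(c * xx ** i * yy ** j for c, (i, j) in zip(coeff, terms))
    # Second layer: the fit residual, i.e. the texture relief.
    residual = np.where(mask, intensity - profile, 0.0)
    # Third step: shift to zero median and scale to roughly unit variance,
    # using a robust spread estimate so that outliers (potential defects) survive.
    vals = residual[mask]
    med = np.median(vals)
    scale = 1.4826 * np.median(np.abs(vals - med))
    if scale == 0.0:
        scale = 1.0
    return np.where(mask, (residual - med) / scale, 0.0)
```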
  • the shading model can be adapted appropriately.
  • the proposed photometric normalization procedure is by no means restricted to images of wall tiles from turbine combustion chambers.
  • the typically irregularly shaped regions of interest are smoothly transformed to elementary shapes, e.g., rectangles. It is advantageous to maintain the original pixel structure to an as large as possible extent, to avoid distortion artifacts in V1.7.2 to V1.7.6.
  • An appropriate transformation of the two regions of interest (RoI) 15a, 15b shown in Fig. 1 to rectangles can be performed by pixel column shifts.
  • the degree of surface decomposition is an additional global characteristic of the surface texture and not only of interest per se, but can also be considered when selecting threshold values and pass bounds for the various processing steps in V1.7.
  • An advantageous choice to quantify the surface decomposition is the average feature size measured in either (i) a defect free region, (ii) a typical region, or (iii) throughout the entire visible region of the body surface. Whatever region chosen, the average feature size is advantageously calculated from the image autocorrelation function, shown schematically as contour plots in Fig. 5 and 6 .
  • the butterfly shape of the autocorrelation function 21 visualizes the preference of feature orientation due to the unidirectional illumination.
  • the reciprocal of the weighted average of the spatial frequency modulus is the average feature size.
  • The typical feature sizes obtained in this application example are 1 mm for new, defect-free tiles ( Fig. 5 ) and 4 mm for a structurally decomposed tile ( Fig. 6 ).
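A sketch of this feature-size estimate: the power-weighted mean spatial-frequency modulus is computed via the FFT (autocorrelation and power spectrum form a Fourier pair, so no explicit autocorrelation image is needed), and its reciprocal is returned. The function name and the pixel-pitch parameter are illustrative assumptions.

```python
import numpy as np

def average_feature_size(patch, pixel_pitch=1.0):
    """patch: 2-D array of a defect-free or typical region of the normalized image.
    pixel_pitch: physical size of one pixel (e.g. in mm)."""
    patch = patch - patch.mean()
    power = np.abs(np.fft.fft2(patch)) ** 2        # power spectrum of the patch
    fy = np.fft.fftfreq(patch.shape[0], d=pixel_pitch)
    fx = np.fft.fftfreq(patch.shape[1], d=pixel_pitch)
    f_mod = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    power[0, 0] = 0.0                              # drop the DC term
    mean_f = (power * f_mod).sum() / power.sum()   # power-weighted mean frequency modulus
    return 1.0 / mean_f                            # average feature size, same units as pixel_pitch
```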
  • Pixel-wise arithmetic combinations, e.g. addition, of the normalized sub-images of one and the same region of interest can enhance the appearance of surface defects. Due to the opposite positions of the light source during the two respective exposures, shadow and light on many random features cancel to a significant extent, while the crack 12 is unaffected, due to its accidental orientation with respect to the light source shift. Hence, the contrast of the crack 12 becomes larger than in the parent sub-images.
  • the appreciable extent of cancellation is the result of the preceding photometric normalization due to V1.3. Without V1.3, a systematic cancellation of irrelevance could not have been expected.
  • the inventive method is based on the two findings that (i) under appropriate illumination a crack in a bright textured surface is visible by the appearance of dark pixels forming characteristic spatial patterns, and that (ii) among these dark pixels a significant number are exceptionally dark, i.e., are outliers from the approximate standard normal distribution of brightness values obtained as a result of V1.3 and shown in Fig. 4 .
  • method V1.7 commences with the isolation of the darkest pixels prevailing in the sub-image, and is accomplished by a comprehensive analysis of their spatial connectivity.
  • a fraction f is chosen, either by prescription or auto-thresholding, and the f -quantile of the brightness distribution in the given sub-image calculated.
  • the quantile is chosen as a threshold, and all pixels with brightness values below the threshold are selected. These are called foreground pixels, and all pixels not selected are called background pixels.
  • Fig. 7 and 8 show the effect of this process.
  • the submitted sub-image is shown in Fig. 7 . Apart from the normalization according to V1.3 and V1.4, this is an original photograph. This sub-image is decomposed by the 1%-quantile, yielding the binary image shown in Fig. 8 .
  • a rectangular foreground pixel is called a boundary pixel if, and only if, it shares an edge with at least one background pixel. All foreground pixels not being boundary pixels are called interior pixels.
  • Fig. 9 shows the partition of an enlarged part VIII of Fig. 8 into interior (hatched), boundary (black), and background (white) pixels. Two rectangular pixels are said to be adjacent if, and only if, they share a common edge or corner.
  • a blob is, by definition, a set of foreground pixels which contains all pixels adjacent to a pixel p if, and only if, it contains the pixel p .
  • the blobs 22 are separated by background pixels. The construction of the blobs 22 from a given binary image is accomplished by known iterative-recursive algorithms. In what follows, only the blobs 22 will be considered, while the background pixels are discarded.
  • The blob diameter d is defined as the maximal Euclidean distance existing between any two pixels of the blob.
  • Fig. 10 shows all blobs from Fig. 8 having a diameter of at least 15 pixel units. The largest blob has a diameter of 70 pixel units.
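A sketch of V1.7.1 to V1.7.3, assuming a normalized sub-image and a validity mask as numpy arrays: the f-quantile threshold, 8-connected blob labelling, and the diameter filter. The defaults f = 0.01 and 15 pixel units mirror the example values quoted above; the function name is illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import pdist

def dark_blobs(sub_image, mask, f=0.01, min_diameter=15):
    """Isolate the darkest fraction f of pixels, group them into blobs and keep
    only blobs whose diameter reaches min_diameter pixel units."""
    threshold = np.quantile(sub_image[mask], f)    # the f-quantile of the brightness values
    foreground = mask & (sub_image < threshold)
    # 8-connectivity: pixels sharing an edge or a corner belong to the same blob.
    labels, n = ndimage.label(foreground, structure=np.ones((3, 3), dtype=int))
    blobs = []
    for k in range(1, n + 1):
        pts = np.argwhere(labels == k).astype(float)
        # Blob diameter: maximal Euclidean distance between any two blob pixels.
        diameter = pdist(pts).max() if len(pts) > 1 else 0.0
        if diameter >= min_diameter:
            blobs.append({"pixels": pts, "diameter": diameter})
    return blobs
```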
  • the most important ingredient to the single sub-image evaluation procedure V1 is the adequate choice of a distance function between blobs 22.
  • the subsequent partition, or grouping, respectively, of the blobs in V.1.7.5 will be based on a calculated adjacency criterion which is derived from the blob distance matrix defined here.
  • one assumption about the topological structure of a crack is made which has some effect on the present choice of the distance function and is specific to surface cracks in ceramic wall tiles for turbine combustion chambers, but can easily be modified for a detection of cracks in other brittle body surfaces.
  • the assumption made here is that the connection topology of a crack is necessarily a linear, open chain of blobs containing no branching.
  • Such a chain either consists of one blob 22 only, or contains exactly two blobs 22 having one nearest neighbor only while all other blobs 22 have exactly two nearest neighbors.
  • This definition determines the target adjacency structure used in V1.7.5 and motivates the construction of the distance function as proposed below.
  • the present definition of the distance function intentionally avoids assumptions about geometrical properties of the blobs 22, and proves useful in this general form. For other applications than searching for surface cracks in turbine combustion chamber wall tiles geometrical requirements can readily be incorporated to the distance function, as appropriate.
  • The single-linkage distance s_ij between two blobs i and j is known as the shortest Euclidean distance existing between a pixel in blob i and a pixel in blob j.
  • s_ij = 0 if, and only if, one pixel of blob i is adjacent or identical to a pixel in blob j; in that case the blobs i and j are identical.
  • s_ij is calculated from the boundary pixels only.
  • The application-specific distance δ_ij between two blobs i and j according to the invention involves a direction (or orientation) parameter φ_ij, 0 ≤ φ_ij ≤ 1, and a spacing parameter σ_ij, 0 < σ_ij ≤ 1, such that by definition δ_ij = s_ij for φ_ij = σ_ij = 1.
  • Advantageously, the direction parameter φ_ij equals 1 if, and only if, the line segments visualizing the blob diameters d_i and d_j of the two blobs lie on the same line, i.e., if one blob is a straight continuation of the other (realizing the equality case of the triangle inequality), and φ_ij decreases significantly if the two blobs, e.g., form a sharp angle, a T-shape junction, or are aligned parallel.
  • The latter expression reduces the influence of φ_ij as compared to the first and has been used throughout the examples presented here.
  • d_ij is the blob diameter of the pixel union of the blobs i and j, i.e., the largest Euclidean distance existing between any two pixels from the blobs i and j.
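The two fully specified geometric ingredients of this distance can be sketched as follows for blobs given as arrays of pixel coordinates; the direction and spacing parameters of the application-specific distance δ_ij are not reproduced here, since their concrete formulas are application dependent.

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist

def single_linkage(blob_i, blob_j):
    """s_ij: shortest Euclidean distance between a pixel of blob i and one of blob j.
    For speed, only the boundary pixels of each blob would need to be passed in."""
    return cdist(blob_i, blob_j).min()

def union_diameter(blob_i, blob_j):
    """d_ij: largest Euclidean distance between any two pixels of the two blobs."""
    return pdist(np.vstack([blob_i, blob_j])).max()
```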
  • The topological characteristics of a surface crack 12 in a turbine combustion chamber wall tile 11 assumed above are translated into an adjacency matrix according to the invention as follows. Define the set m_i containing two elements by sorting the k−1 values δ_ij, j ≠ i, in ascending order and taking the first two elements of that ordered set. Then the adjacency matrix A is the symmetric rank-k matrix with the matrix elements a_ij = 1 if (i ≠ j) ∧ (δ_ij ∈ m_i) ∧ (δ_ij ∈ m_j), and a_ij = 0 otherwise.
  • two blobs i and j are considered adjacent if, and only if, they are mutual nearest or second-nearest neighbors in terms of the distance function defined in V1.7.4.
  • a blob 22 will be adjacent to, at most, two other blobs 22, multiple adjacencies will occur accidentally, and many blobs 22 will be isolated, i.e., not be adjacent to any other blob.
  • the present definition of A directly reflects the above assumption of a crack manifesting itself as a linear, unbranched, open blob chain 23. For other applications than searching for surface cracks 12 in turbine combustion chamber wall tiles 11 different adjacency criteria can readily be defined, as appropriate.
  • A corresponds to an undirected, unweighted graph which, in the general case, is disconnected.
  • the connected components of that graph are disjunct sets of adjacent blobs 22, called blob clusters 24, and represent a unique partition of the adjacency graph.
  • Fig. 11 shows an embedding of the adjacency graph resulting from the blob selection shown in Fig. 10 , using the blob centroids as vertex coordinates.
  • Adjacent blobs (connected vertices) are indicated by connecting lines. This partition yields seven clusters, five of which contain isolated blobs, and two of which contain more than one blob.
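A sketch of the construction of the adjacency matrix and the blob clusters from a matrix of pairwise blob distances δ_ij; the tie handling and the helper name are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def adjacency_and_clusters(delta):
    """delta: symmetric k x k matrix of pairwise blob distances (diagonal ignored).
    Returns the boolean adjacency matrix A and a cluster label per blob."""
    k = delta.shape[0]
    if k < 2:
        return np.zeros((k, k), dtype=bool), np.zeros(k, dtype=int)
    d = delta.astype(float).copy()
    np.fill_diagonal(d, np.inf)                    # a blob is not its own neighbour
    second = np.partition(d, 1, axis=1)[:, 1]      # second-smallest distance per row
    in_m = d <= second[:, None]                    # delta_ij belongs to m_i (ties kept)
    A = in_m & in_m.T                              # mutual nearest or second-nearest neighbours
    np.fill_diagonal(A, False)
    # The connected components of the adjacency graph are the blob clusters.
    _, labels = connected_components(A.astype(int), directed=False)
    return A, labels
```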
  • the first criterion for a suspect cluster is that its blobs form a linear, unbranched, open chain.
  • Advantageous parameters apt to the first-level discrimination of false detections are obtained from generalizing the above-introduced parameters d_ij, s_ij, and δ_ij to blob clusters of arbitrary size c, with c being the number of blobs forming the cluster.
  • The weight assigned to an edge is chosen as the single-linkage distance s_ij between the blobs corresponding to the two vertices i and j terminating the edge.
  • d_1c is the blob diameter of the pixel union over the entire blob cluster.
  • s_1c is the weight of the minimal spanning tree of the weighted adjacency graph of the cluster.
  • Clusters 24 passing the above topological chain criterion are considered suspect if their parameters d_1c, s_1c, and δ_1c lie in certain specified intervals (which can be open).
  • A typical choice is the specification of lower bounds for d_1c and δ_1c and an upper bound for s_1c.
  • Application of these criteria discriminates, e.g., the left cluster shown in Fig. 11 as harmless (δ_1c is too small), while classifying the right cluster shown in Fig. 11 as a potential crack 12.
  • Fig. 12 shows a highlighting image of the crack 12 which can be obtained by image processing from the adjacency graphs, shown in Fig. 11 as example.
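A sketch of the chain criterion and of the spanning-tree weight s_1c, assuming the boolean adjacency sub-matrix and the single-linkage distances of one cluster are given; the concrete accept/reject bounds are application dependent and therefore not encoded here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def is_open_chain(A_sub):
    """A_sub: boolean adjacency sub-matrix of one blob cluster.
    True for a linear, unbranched, open chain (or a single isolated blob)."""
    deg = A_sub.sum(axis=0)
    if len(deg) == 1:
        return True                                # a single blob counts as a chain
    return np.sum(deg == 1) == 2 and np.all((deg == 1) | (deg == 2))

def spanning_tree_weight(A_sub, s_sub):
    """s_sub: single-linkage distances between the cluster's blobs (same shape as A_sub).
    Returns s_1c, the weight of the minimal spanning tree of the cluster's graph."""
    weighted = np.where(A_sub, s_sub, 0.0)         # edges exist only between adjacent blobs
    return minimum_spanning_tree(weighted).sum()
```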
  • a detection failure of cracks would be both likely and typical for single image evaluations, which is the reason for having taken several photographs of the same region of interest under varying illumination, and for having created even more sub-images of that region in V.1.6 by pixel fusion.
  • The n+q abnormality charts of one and the same region of interest are merged into one joint abnormality chart of that region by setting all pixels marked in any of the n+q single abnormality charts as foreground pixels of a new binary image, and setting all remaining pixels of the region as background pixels.
  • the joint abnormality charts of each of the r regions of interest are resubmitted to steps V1.7.2 to V1.7.7 for finally detecting and marking suspect features in the respective surface texture.
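The fusion itself reduces to a pixel-wise logical OR of the single abnormality charts, sketched below for boolean numpy charts; the function name is illustrative.

```python
import numpy as np

def joint_abnormality_chart(charts):
    """charts: the n+q boolean abnormality charts of one region of interest."""
    return np.logical_or.reduce(charts)
```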
  • the accept/reject bounds for the blob parameters (or, more generally, the classification function) in V1.7.6 can be different from the single image evaluation pass, taking into account the higher significance of pixels in the joint abnormality chart, as compared to the single sub-images. It is advantageous, however, to leave the distance function (in V1.7.4) and the definition of the adjacency matrix (in V1.7.5) unaltered.
  • the adjacency graph resulting from the joint abnormality chart is shown in Fig. 13 .
  • the final pass of V1.7.7 yields the final detection map of that region of interest, displaying all putative cracks (or other wanted surface defects, if applicable) found by the algorithm V.
  • Fig. 14 shows such a final detection map of the region around the crack 12 by bright highlighting in the corresponding sub-image. Unambiguous detection of all cracks, without mistakes, was achieved, as well as in a suite of other relevant test examples.
  • V4 Discrimination of false detection items based on a priori information
  • In contrast to V1.7.4, no assumption is made concerning the geometrical or topological structure of a defect; instead, anything deviating significantly from the vast majority of prevailing structural patterns is reported as an anomaly. Like V1.7, this approach works best on appropriately normalized sub-images. Unlike the above-outlined implementation of V1.7, it is not tailored to finding cracks in wall tiles of turbine combustion chambers, but is kept rather general in order to efficiently complement V1.7. Any tailoring to specific defect signatures is possible, however, by appropriate concretization of the observable parameters in W1.7.2.
  • The submitted sub-image is completely partitioned into much smaller sub-images called chips or tiles, forming a parquet. These chips all have the same shape and size, which is larger than the average feature size determined in V1.5. It is advantageous if adjacent chips overlap to some extent, which must not vary across the parquet.
  • the chips can be squares overlapping to 50% in either direction.
  • a set of characteristic parameters for the description of the chips is chosen such that chips containing structural anomalies will yield abnormal values for at least some of these parameters. Tailoring to specific anomalies is possible by according choice or design of these parameters. The success of the method W depends most on the appropriate choice of the characteristic parameters. A very general, though suitable, choice is the pair of (i) the average brightness and (ii) the peak value of the autocorrelation function inside the chip.
  • the chosen parameters are calculated for all chips, yielding a rectangular data matrix with as many data lines as the parquet contains chips, and with one column per parameter.
  • Each data line (thus, each chip) is associated with a data point in the space of characteristic parameters.
  • the parameter space is a plane spanned by the average brightness on the abscissa and the autocorrelation peak value on the ordinate. In this plane all chips are represented by points, as shown in Fig. 15 .
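A sketch of W1.7.1 to W1.7.3, assuming a normalized grayscale sub-image: square chips with 50 % overlap, each described by its average brightness and the peak of its autocorrelation function. The helper name and the return format are assumptions.

```python
import numpy as np

def chip_features(image, chip_size):
    """image: normalized 2-D array; chip_size: side length in pixels, chosen larger
    than the average feature size from V1.5. Returns chip positions and features."""
    step = chip_size // 2                          # 50 % overlap in both directions
    positions, feats = [], []
    for r in range(0, image.shape[0] - chip_size + 1, step):
        for c in range(0, image.shape[1] - chip_size + 1, step):
            chip = image[r:r + chip_size, c:c + chip_size]
            centred = chip - chip.mean()
            # Autocorrelation via the Wiener-Khinchin theorem; its peak (zero lag)
            # equals the total power of the chip.
            acf = np.fft.ifft2(np.abs(np.fft.fft2(centred)) ** 2).real
            positions.append((r, c))
            feats.append((chip.mean(), acf.max()))
    return np.array(positions), np.array(feats)
```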
  • the set of data points is submitted to an outlier detection procedure.
  • the procedure performs a partitioning cluster analysis with a priori unknown class number under the constraint that one class contains significantly more data points than the union of all other classes.
  • What distance function is used depends completely on the definition of the characteristic parameters spanning the data space and has, like in V1.7.4, the by far most important influence on the quality of the outlier classification.
  • Both parameters have the same dimension and order of magnitude (due to the normalization V1.3) so that, e.g., the (squared) Euclidean distance is suited.
  • The resulting partition of the data set is shown in Fig. 15 by the line 25 encircling all data points considered normal.
  • the data points outside the line do not belong to that class and are considered supplementary, i.e. outliers. If the partition yields more than two classes, all but the largest class are unified to a set of supplementary data points. If the partition yields one class only, the supplementary data point set is empty.
  • the set of abnormal chips yields the abnormality chart corresponding to the submitted image.
  • the abnormality chart corresponding to Fig. 15 is schematically shown in Fig. 16 where the seven data points outside the line 25 in Fig. 15 have been translated back to their corresponding chips.
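A sketch of the outlier detection and the resulting abnormality chart: a plain two-class k-means partition with (squared) Euclidean distance stands in for the more general partitioning cluster analysis with an a priori unknown class number; the larger class is taken as normal and the chips of all other data points are marked. The inputs could be the positions and features returned by the chip sketch above; all names are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def chip_abnormality_chart(features, positions, chip_size, image_shape):
    """features: m x p data matrix (one line per chip); positions: m x 2 array of
    chip upper-left corners. Returns a boolean abnormality chart of the sub-image."""
    _, labels = kmeans2(features.astype(float), 2, minit='points')
    counts = np.bincount(labels, minlength=2)
    chart = np.zeros(image_shape, dtype=bool)
    if counts.min() == 0:                          # only one class found: no outliers
        return chart
    normal_class = np.argmax(counts)               # the significantly larger class is "normal"
    for (r, c), lab in zip(positions, labels):
        if lab != normal_class:                    # supplementary data point -> abnormal chip
            chart[r:r + chip_size, c:c + chip_size] = True
    return chart
```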
  • Figure 17 shows an example of an apparatus for the inspection of the wall tile 11 to find cracks 12 in a schematic perspective view.
  • the apparatus comprises a support frame realized as a tripod.
  • the support frame S can be held by an operator on handles H and positioned above the wall tile 11.
  • On the support frame S a digital camera K and four light sources B are fixed in a position such that they are oriented towards the wall tile 11 to be inspected.
  • the method is controlled by a computer V which has a signal connection to the digital camera K and to a controller C.
  • The controller C is able to drive the light sources B separately, so that the wall tile 11 can be illuminated from four different directions and each of these illumination conditions can be documented by the digital camera K.
  • the controller C and the computer V can be driven by an energy source E.
  • The controlling unit comprising the controller C, the energy source E and the computer V can be integrated in a backpack P. Furthermore, the operator can obtain information by wearing display spectacles.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
EP07075502A 2007-06-22 2007-06-22 Method for optical inspection of a matt surface and apparatus for applying this method Withdrawn EP2006804A1 (de)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP07075502A EP2006804A1 (de) 2007-06-22 2007-06-22 Method for optical inspection of a matt surface and apparatus for applying this method
US12/665,210 US8189044B2 (en) 2007-06-22 2008-06-16 Method for optical inspection of a matt surface and apparatus for applying this method
PCT/EP2008/057550 WO2009000689A1 (en) 2007-06-22 2008-06-16 Method for optical inspection of a matt surface and apparatus for applying this method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP07075502A EP2006804A1 (de) 2007-06-22 2007-06-22 Method for optical inspection of a matt surface and apparatus for applying this method

Publications (1)

Publication Number Publication Date
EP2006804A1 true EP2006804A1 (de) 2008-12-24

Family

ID=39387905

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07075502A Withdrawn EP2006804A1 (de) 2007-06-22 2007-06-22 Method for optical inspection of a matt surface and apparatus for applying this method

Country Status (3)

Country Link
US (1) US8189044B2 (de)
EP (1) EP2006804A1 (de)
WO (1) WO2009000689A1 (de)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104792795A (zh) * 2015-04-30 2015-07-22 无锡英斯特微电子有限公司 光电鼠标芯片的图像测试方法
TWI497425B (zh) * 2010-11-18 2015-08-21 Alibaba Group Holding Ltd Method, apparatus and reptile server for digital image recognition
DE102015212837A1 (de) 2015-07-09 2017-01-12 Siemens Aktiengesellschaft Verfahren zur Überwachung eines Prozesses zur pulverbettbasierten additiven Herstellung eines Bauteils und Anlage, die für ein solches Verfahren geeignet ist
EP3495077B1 (de) 2016-08-02 2022-05-04 Xi'an Bright Laser Technologies Co., Ltd. Pulverteilungqualitätstestverfahren und vorrichtung zur generativen fertigung

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG158787A1 (en) * 2008-07-28 2010-02-26 Chan Sok Leng Apparatus for detecting micro-cracks in wafers and method therefor
SG158782A1 (en) * 2008-07-28 2010-02-26 Chan Sok Leng Method and system for detecting micro-cracks in wafers
US8611636B1 (en) * 2009-01-05 2013-12-17 Cognex Corporation High speed method of aligning components having a plurality of non-uniformly spaced features
US9025019B2 (en) * 2010-10-18 2015-05-05 Rockwell Automation Technologies, Inc. Time of flight (TOF) sensors as replacement for standard photoelectric sensors
US8750596B2 (en) * 2011-08-19 2014-06-10 Cognex Corporation System and method for identifying defects in a material
GB201122082D0 (en) * 2011-12-22 2012-02-01 Rolls Royce Deutschland Method and apparatus for inspection of components
BR112014019364A8 (pt) * 2012-02-10 2017-07-11 Koninklijke Philips Nv Sistema de geração de imagem clínica, método de geração de imagem clínica, e meio não transitório executável por computador com software que controla um ou mais processadores
US20140043492A1 (en) * 2012-08-07 2014-02-13 Siemens Corporation Multi-Light Source Imaging For Hand Held Devices
US9016560B2 (en) 2013-04-15 2015-04-28 General Electric Company Component identification system
US20150071541A1 (en) * 2013-08-14 2015-03-12 Rice University Automated method for measuring, classifying, and matching the dynamics and information passing of single objects within one or more images
DE102014212511A1 (de) * 2014-06-27 2015-12-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtungen und Verfahren zur Prüfung einer Oberflächengüte
WO2016172827A1 (zh) * 2015-04-27 2016-11-03 武汉武大卓越科技有限责任公司 一种逐步求精的路面裂缝检测方法
EP3327198B1 (de) * 2015-07-21 2021-04-21 Kabushiki Kaisha Toshiba Rissanalysevorrichtung, rissanalyseverfahren und rissanalyseprogramm
JP6654849B2 (ja) * 2015-10-08 2020-02-26 西日本高速道路エンジニアリング四国株式会社 コンクリートの表面ひび割れの検出方法
US10121232B1 (en) * 2015-12-23 2018-11-06 Evernote Corporation Visual quality of photographs with handwritten content
SG10201913148YA (en) * 2016-03-30 2020-02-27 Agency Science Tech & Res System and method for imaging a surface defect on an object
CN106340007A (zh) * 2016-06-13 2017-01-18 吉林大学 一种基于图像处理的车身漆膜缺陷检测识别方法
JP6883458B2 (ja) * 2017-03-31 2021-06-09 大和ハウス工業株式会社 不定形シール材の診断方法
JP6937642B2 (ja) * 2017-09-22 2021-09-22 株式会社大林組 表面評価方法及び表面評価装置
JP6894338B2 (ja) * 2017-09-29 2021-06-30 清水建設株式会社 ひび割れ検出装置、ひび割れ検出方法、および、コンピュータプログラム
CN109064439B (zh) * 2018-06-15 2020-11-24 杭州舜浩科技有限公司 基于分区的单侧入光式导光板暗影缺陷提取方法
WO2020193056A1 (en) * 2019-03-22 2020-10-01 Basf Coatings Gmbh Method and system for defect detection in image data of a target coating
EP3845890B1 (de) * 2019-12-31 2024-04-03 ANSALDO ENERGIA S.p.A. Werkzeug zur visuellen inspektion des innenraums von brennkammern von gasturbinen
JP2022138669A (ja) * 2021-03-10 2022-09-26 富士フイルムビジネスイノベーション株式会社 表面検査装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4123916A1 (de) * 1990-07-19 1992-01-23 Reinhard Malz Verfahren zum beleuchtungsdynamischen erkennen und klassifizieren von oberflaechenmerkmalen und -defekten eines objektes und vorrichtung hierzu
JPH09311020A (ja) * 1996-05-23 1997-12-02 Nec Corp 突起部検査装置
WO2000010115A1 (en) * 1998-08-14 2000-02-24 Acuity Imaging, Llc System and method for image subtraction for ball and bumped grid array inspection
EP1832867A1 (de) * 2006-03-10 2007-09-12 Omron Corporation Fehlerüberprüfungsvorrichtung und Fehlerüberprüfungsverfahren

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3072998B2 (ja) 1990-04-18 2000-08-07 株式会社日立製作所 はんだ付け状態検査方法及びその装置
US7630006B2 (en) * 1997-10-09 2009-12-08 Fotonation Ireland Limited Detecting red eye filter and apparatus using meta-data
US7612805B2 (en) * 2006-07-11 2009-11-03 Neal Solomon Digital imaging system and methods for selective image filtration
US7738088B2 (en) * 2007-10-23 2010-06-15 Gii Acquisition, Llc Optical method and system for generating calibration data for use in calibrating a part inspection system
US7929128B2 (en) * 2008-09-26 2011-04-19 Siemens Aktiengesellschaft Apparatus for the optical inspection of the thermal protection tiles of a space shuttle


Also Published As

Publication number Publication date
US20100177191A1 (en) 2010-07-15
US8189044B2 (en) 2012-05-29
WO2009000689A1 (en) 2008-12-31

Similar Documents

Publication Publication Date Title
US8189044B2 (en) Method for optical inspection of a matt surface and apparatus for applying this method
US10726543B2 (en) Fluorescent penetrant inspection system and method
CN105787923B (zh) 用于平面表面分割的视觉系统和分析方法
TWI539150B (zh) 用於偵測檢查圖像中之缺陷的系統、方法及電腦程式產品
KR100954703B1 (ko) 결함을 검출하는 방법 및 시스템
US7177458B1 (en) Reduction of false alarms in PCB inspection
WO2003104781A1 (en) Method for pattern inspection
CA3061262C (en) Fluorescent penetrant inspection system and method
US9046497B2 (en) Intelligent airfoil component grain defect inspection
CN103972124B (zh) 图案化晶圆缺点检测系统及其方法
CN107004266A (zh) 检测轮胎表面上缺陷的方法
Schluns et al. Global and local highlight analysis in color images
Kim et al. A vision-based system for monitoring block assembly in shipbuilding
Lin et al. Surface defect detection of machined parts based on machining texture direction
JP4018092B2 (ja) 検査対象物の所定の表面状態の原因を自動的に求める装置
JP2015163999A (ja) 繰返模様画像の解析システム及び方法、繰返構造図の解析システム及び方法
Adameck et al. Three color selective stereo gradient method for fast topography recognition of metallic surfaces
Lauziere et al. Autonomous physics-based color learning under daylight
Bonnot et al. Machine vision system for surface inspection on brushed industrial parts
US20240048848A1 (en) Optimized Path Planning for Defect Inspection based on Effective Region Coverage
Lin et al. Automated inspection of contour faults for convex mirrors using wavelet descriptors and EWMA control scheme
CN116364569A (zh) 利用计算高效分割方法的缺陷检测
Simler et al. Bimodal model-based 3D vision and defect detection for free-form surface inspection
CN115668292A (zh) 基于有效区域覆盖的缺陷检测的优化路径规划
JP2006047099A (ja) 検査対象物の表面状態を判定する装置およびプログラム

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

AKX Designation fees paid
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090625

REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566