WO2001040780A1 - Imaging system for detecting surface defects and evaluating optical density - Google Patents

Imaging system for detecting surface defects and evaluating optical density

Info

Publication number
WO2001040780A1
WO2001040780A1 PCT/IT2000/000504
Authority
WO
WIPO (PCT)
Prior art keywords
product
image
defective areas
images
tone
Prior art date
Application number
PCT/IT2000/000504
Other languages
English (en)
French (fr)
Inventor
Roberto Falessi
Valerio Moroli
Original Assignee
Centro Sviluppo Materiali S.P.A.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centro Sviluppo Materiali S.P.A. filed Critical Centro Sviluppo Materiali S.P.A.
Priority to AU22197/01A priority Critical patent/AU2219701A/en
Publication of WO2001040780A1 publication Critical patent/WO2001040780A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination

Definitions

  • the present invention refers to a real-time image capturing and processing system and method. These images generally come from image transducers, such as linear videocameras with digital output, and are related to continuous-type plane industrial products, such as rolled metallic sections or fabrics, or to discrete products with a plane surface, such as metallic, ceramic or plastic pieces.
  • the system and method according to the present invention enable evaluating the quality of the inspected surface, by classifying it both in terms of the detected defects and in terms of the measured tone variations, where tone indicates the gradation of grey intensity.
  • the main disadvantage of these methods is that they detect only a particular defect typology on a particular product, such as for example dents or polishing defects on body parts. Furthermore, the gravity evaluation of the detected defects is usually performed by a specialized operator, with consequent errors due to natural human sensorial limitations.
  • the present invention overcomes these problems of the prior art, since it provides an image capturing and processing method of an industrial product, for surface defect detection and tone evaluation of said industrial product, characterized in that it comprises the following steps: capturing digitalized images of said product; performing a processing of said digitalized images to detect one or more defective areas of said product; performing a first classification of said product based upon said detected defective areas; assigning a tone to said product; and performing a second classification of said product based upon the assigned tone.
  • an image capturing and processing system for surface defect detection and tone evaluation of industrial products comprising: a linear lighting device; a first frame apt to rotate around a frame rotation axis, whereon said linear lighting device is placed; a first image transducer; a second frame apt to rotate around said frame rotation axis, whereon said first image transducer is mounted; a second image transducer; a third frame apt to rotate around said frame rotation axis, whereon said second image transducer is mounted; means for capturing and digitalizing images detected by said first and second image transducers; a processing unit, apt to process the images captured and digitalized by said means for capturing and digitalizing images; and control means, said first image transducer being positioned so as to be sensitive to the intensity of light diffused by said industrial products, and said second image transducer being positioned so as to be sensitive to the intensity of light reflected by said industrial products.
  • a first advantage of the present invention is that it enables recognizing defects belonging to a high number of typologies on any product type.
  • a second advantage is that it enables at the same time both a classification based upon the detected defects and a classification based upon a tone measure. It is then possible to decide automatically whether a product satisfies predetermined quality criteria, which combine evaluations of the defect entity with evaluations of the tone.
  • an additional advantage is that, by means of the present invention, an additional classification of the examined products is performed, expressed by a code apt to be provided as input to an automatic machine for sorting the products, if connected to the system.
  • figure 1 schematically shows a perspective view of the system according to the present invention;
  • figure 2 shows the connections of the image transducers of Fig. 1 with a computer; and
  • figure 3 shows a block diagram of the image capturing and processing method according to the present invention.
  • a base 1 comprising two supports 50, vertically placed.
  • a first frame 2 is rotatably mounted, which may rotate around the X-axis.
  • a support 9 is mounted, whereon a first image transducer 3 (hereinafter designated as videocamera 3) is installed, rotatable both around the A-axis and around the B-axis of the figure.
  • a second frame 5 is further mounted, also rotatable around the X-axis.
  • a support 10 is mounted, whereon a second image transducer 4 (hereinafter designated as videocamera 4) is installed, rotatable both around the C-axis and around the D-axis of the figure.
  • a third frame 6 is mounted, also rotatable around the X-axis.
  • a linear lighting device 7 is fastened.
  • both videocameras 3 and 4 are focused on an inspection line 8, contained in the XY-plane.
  • the XY-plane also represents the inspection plane whereon the surface of the industrial product to be examined flows. To make the exposition easier, this surface will be considered plane.
  • the frames 2 and 5 are respectively rotated by angles α and β with respect to the Z-axis, that is, with respect to the normal to the inspection plane.
  • the frame 6, instead, is rotated by an angle -β with respect to the Z-axis.
  • the videocamera 3 is sensitive to the intensity of the light diffused by the surface to be inspected, thereby generating diffusion images, whereas the videocamera 4 generates reflection images, since it is specularly placed with respect to the lighting source 7.
  • This videocamera arrangement is particularly advantageous since it allows detecting different defect typologies.
  • a defect consisting, for example, in a surface deformation will cause a change in the direction of the light reflected by the surface itself. This defect will be highlighted by the image captured by the videocamera 4.
  • a defect consisting, for example, in a stain or in a tone variation will cause a different light absorption by the surface; therefore, a different light diffusion at the defective area will be obtained.
  • This defect, instead, will be highlighted by the image captured by the videocamera 3.
  • the value of the angles α and β varies according to the type of surface to be analyzed and to the type of defects to be detected.
  • figure 2 illustrates the connection of the videocameras 3 and 4 with a pair of capture cards 11 and 12, installed on a computer 13 apt to perform the subsequent procedures which implement the processing method according to the present invention.
  • the capture start and end are driven by a signal coming from a sensor 14 apt to detect the presence/absence of the material to be inspected.
  • the image capturing and processing method advantageously provides some service procedures, apt both to set the parameters necessary for the capture and to calculate equalization coefficients and lower and upper threshold values, later utilized in the defect and tone evaluation of the industrial product analyzed each time.
  • a first procedure P1 is of interactive type and enables setting, as operating parameters, both a configuration file for the capture cards and the exposure time for the videocameras.
  • the setting of the configuration file is performed according to parameters set by the manufacturer.
  • the procedure P1 allows setting the following four operating parameters:
  • the detection of the grey references 1 and 2 is performed by positioning the corresponding cursors, so as to detect the initial image points of each corresponding interval.
  • a third common cursor will then be positioned so as to define the width of each of these intervals.
  • the product field, representing the whole area taken into consideration, is instead localized by two cursors apt to detect the start image point and the end image point of this field, respectively.
  • the opening of the videocamera lens diaphragm, which may be adjusted, for example, by visualizing the videocamera signal on the computer monitor.
  • the light intensity of the lighting source, adjusted for example by controlling the power supply voltage of this source.
  • the videocamera focusing.
  • the adjustment of this parameter is performed by visualizing a bar on the monitor, whose length is proportional to the variance of the grey tones of the image points of a line. The greater the length of the visualized bar, the better the focusing.
  • a subsequent service procedure P2, of interactive type too, is apt to calculate static equalization coefficients. Such coefficients are necessary for a subsequent correction, by means of an equalization procedure later described, of each image point within a line inside the product field, in order to make the capture dynamics uniform. In this way, possible systematic errors due to the non-uniformity of the intensity of light emitted by the lighting source, lens aberrations, and so on, are corrected.
  • the procedure P2 is of interactive type since it indicates to the operator the action sequence to be performed, by showing on the monitor a series of messages which the operator, each time, has to confirm once the action is performed.
  • Such procedure, in order to obtain a significant calibration, may advantageously be performed a predetermined number of times on products of the same type. For each of these products, the procedure P2 provides the following six steps:
  • x_ij represents the image point in position (i,j); n is the number of image rows; and m is the number of image columns.
  • N_j is the local equalization coefficient of the j-th column, computed over the image rows 1 ≤ i ≤ n.
  • Lr is the width of the Grey Reference 1 and 2 fields; r1, r2 are the initial points of the Grey References 1 and 2.
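As a concrete illustration of the static equalization coefficients of procedure P2, a minimal sketch follows; since the patent does not give the exact formula, it assumes that each column coefficient N_j maps the column average of a calibration image onto the global image average:

```python
# Sketch of the static equalization step of procedure P2 (assumed form:
# one coefficient per column, mapping the column average of a calibration
# image onto the global average, so that multiplying by it flattens the
# lighting profile across the line).
def equalization_coefficients(image):
    """image: list of n rows, each a list of m grey values."""
    n, m = len(image), len(image[0])
    global_avg = sum(sum(row) for row in image) / (n * m)
    coeffs = []
    for j in range(m):
        col_avg = sum(image[i][j] for i in range(n)) / n
        # Guard against dead columns; coefficient 1.0 leaves them unchanged.
        coeffs.append(global_avg / col_avg if col_avg else 1.0)
    return coeffs
```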
  • the capturing and processing method object of the present invention provides, as it will be better illustrated hereinafter, the generation of a predetermined number of decreasing resolution images of the product to be examined, starting from a first product image captured at the maximum resolution.
  • a service procedure P3 will be described hereinafter, aimed at detecting the Sinf(nxn) and Ssup(nxn) threshold values for each nxn resolution used. These values define, for each resolution, a range of gradient values apt to detect the normality interval of the image points after a derivative filtering procedure, which will be better described by referring to the procedure P11, later described.
  • the procedure P3 provides, for each image generated at nxn resolution, a first initialization step at a minimum value and at a maximum value of the two thresholds Sinf(nxn) and Ssup(nxn).
  • the subsequent steps of the procedure P3 are the following: 1. To perform the derivative filtering of the corresponding image according to the modes which will be described in detail by referring to the procedure P11.
  • the current threshold values designated with Ss, Sc and Sil are updated.
  • Such threshold values start from a null initial value and are updated to the maximum value of the linear convolutions on the Frame or of the linear convolutions on the profiles of the Inner Area, according to what is provided by the defect-classifying procedure P17, to be described hereinafter.
  • the procedure P3, too, may be repeated a prefixed number of times on products representative of a particular product typology, so as to make the reached threshold values statistically significant and reliable.
  • the procedure P3 continues with an optimization of the Inner Area and of the Frame. This optimization provides the following steps:
  • the optimization step may be repeated a prefixed number of times on products representative of a particular product typology, updating each time the stored value whenever the current value exceeds the value obtained in the preceding iterations.
  • a procedure P4 for calculating the tone thresholds will be described hereinafter, in order to determine division thresholds among tone classes.
  • each tone class is characterized by a vector x of statistical parameters comprising the average value of the grey tone, the variance of the grey tone, the median of the grey tone and the average values of the gradients of each image at resolution nxn. Finally, the covariance matrices of said parameters related to each tone class are also stored in a feature file.
  • Method of the nearest k points. In this second method, the identification of a class is calculated from the distribution of the distances between class samples, by choosing as significant value an appropriate percentile, for example the median value.
  • a subsequent procedure P5 then calculates the defect gravity thresholds.
  • This procedure visualizes a choice menu which allows the operator to select a prearrangement mask for each defect class. These classes will be defined in the defect-classifying procedure P17.
  • the user inputs the limit values, corresponding to each gravity class, which define the dimensional and frequency ranges of the particular defect. Furthermore, for each gravity class, the user may input a code which can advantageously be utilized to drive product selecting and sorting machines, if connected to the described system.
  • the capturing and processing method of the images related to the product whose surface is to be inspected will be described hereinafter, by referring to the procedures shown on the left side of the figure. In particular, a first procedure P6 is provided, whose task is to capture, digitalize and store an image of the product to be examined coming from each of the provided image transducers. Each of these operations is performed by known means.
  • the presence of two image transducers is provided, in particular two linear videocameras with digital output.
  • the first videocamera is positioned so as to be sensitive to the intensity of light diffused by the surface of the product under examination, whereas the second videocamera is positioned so as to be sensitive to the intensity of light reflected by the surface of the product under examination, being placed specularly to a linear lighting device.
  • a procedure P7 is performed, described hereinafter, apt to normalize the value associated to each image point with respect to a parameter dependent on the grey references, in order to eliminate drift errors of the capturing and lighting system calibration.
  • x_ij is the current value of the image point with coordinate (i,j);
  • y_ij is the normalized value of the point with coordinate (i,j);
  • R1 and R2 are the calibration values of the Grey References 1 and 2; and
  • X1 and X2 are the current values of the Grey References 1 and 2.
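The grey-reference normalization of procedure P7 can be sketched as follows; the linear form below is an assumption inferred from the variable definitions above (it maps the measured reference values X1, X2 onto their calibration values R1, R2):

```python
# Sketch of the grey-reference normalization of procedure P7. Assumed
# form: the unique linear mapping that sends the currently measured
# Grey Reference values X1, X2 onto their calibration values R1, R2,
# applied to every image point x to compensate drift.
def normalize_point(x, X1, X2, R1, R2):
    return R1 + (x - X1) * (R2 - R1) / (X2 - X1)
```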
  • an equalization procedure P8 is performed, whose task is to eliminate systematic errors of the capturing and lighting system.
  • the procedure P8 performs a linear transformation of the grey levels measured for each image point, according to the following equation: z_ij = N_j · y_ij, where:
  • n is the number of rows of the image;
  • z_ij is the equalized value of the image point with coordinate (i,j);
  • N_j is the coefficient of local equalization of the j-th column, already calculated by the procedure P2; and
  • y_ij is the normalized value of the image point with coordinate (i,j).
  • a procedure P9 is performed to search for the edges of the product under examination inside the product field already detected by the service procedure P1. For example, when processing images related to discrete products, this procedure searches for the edges of an object wholly contained in the captured image.
  • edges are searched by scanning the rows and columns of the captured image, so as to define for each row and each column a start product coordinate and an end product coordinate. Such coordinates are then stored into vectors exactly representing the punctual edges of the product.
  • Av(k) is the average edge at the k position; Loc(k) is the punctual edge at the k position; and ε is a predefined constant.
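The row-wise edge scan of procedure P9 can be sketched as below; the background threshold used to decide where the product starts is an assumption, since the patent does not specify the detection criterion:

```python
# Sketch of the edge search of procedure P9: for each row, record the
# first and last column whose grey value rises above an assumed background
# threshold, yielding the start/end product coordinates (punctual edges).
def row_edges(image, background=0):
    edges = []
    for row in image:
        cols = [j for j, v in enumerate(row) if v > background]
        edges.append((cols[0], cols[-1]) if cols else (None, None))
    return edges
```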
  • a subsequent procedure P10, described herebelow, generates, as previously mentioned, a predetermined number of images of the product to be examined. Such images are obtained starting from a first product image captured (by means of the procedure P6) at the maximum resolution, through subsequent resolution reductions.
  • an image at lower resolution may be derived from this first image by performing an average operation on nxm groups of image points.
  • each image at lower resolution may be generated from the one at immediately higher resolution by averaging groups of 2x2 points at a time, obtaining an image whose resolution is, for each coordinate, half the resolution of the image from which it derives.
  • the calculation algorithm is fixed: each point of the reduced image is the average of the corresponding 2x2 group of points of the higher-resolution image.
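The resolution pyramid of procedure P10 can be sketched directly from the 2x2 averaging rule described above:

```python
# Sketch of the resolution pyramid of procedure P10: each level averages
# 2x2 groups of the previous level, halving the resolution on each axis.
def halve(image):
    return [
        [(image[i][j] + image[i][j + 1] + image[i + 1][j] + image[i + 1][j + 1]) / 4
         for j in range(0, len(image[0]) - 1, 2)]
        for i in range(0, len(image) - 1, 2)
    ]

def pyramid(image, levels):
    """Return the maximum-resolution image plus `levels` reduced images."""
    out = [image]
    for _ in range(levels):
        image = halve(image)
        out.append(image)
    return out
```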
  • the result of the filtering procedure just described is provided to the subsequent procedure P12, whose task is to detect the defective areas of the product under examination.
  • This procedure, for each image at each resolution, compares the filtering result of the preceding procedure P11 to the corresponding thresholds previously calculated by the procedure P3.
  • two thresholds Sinf(nxn) and Ssup(nxn) have been calculated, which define an interval within which the filtering result is considered conforming to a good-quality product. If, for an image point, the filtering result falls outside this interval, the image point is considered as belonging to a defective area and, as such, it is marked in the corresponding memory position wherein the image is stored.
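The interval test of procedure P12 can be sketched as follows, marking as defective every point whose derivative-filter response falls outside [Sinf, Ssup]:

```python
# Sketch of procedure P12: a filtered point is marked defective when its
# derivative-filter response falls outside the normality interval
# [s_inf, s_sup] computed by procedure P3 for that resolution.
def mark_defective(filtered, s_inf, s_sup):
    return [[not (s_inf <= v <= s_sup) for v in row] for row in filtered]
```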
  • a procedure P13 of "thresholding" the defective areas is then performed wherein, for each product image, at each defective point, a local threshold is calculated, as defined by the following equation: S_ij = g_ij + k · d_ij, where:
  • S_ij is the local threshold of the defective image point with coordinate (i,j); g_ij is the grey level of the defective image point with coordinate (i,j); d_ij is the derivative filtering value of the defective image point with coordinate (i,j); and k is a predefined constant.
  • an area of (2n x 2n) points is determined, centered on the defective area detected by the point with coordinate (i,j) of the image at resolution (nxn). Inside this area, each point of the image at the maximum resolution is compared to the local threshold S_ij calculated as described above.
  • the procedure P14 scans the whole image point by point, deciding each time the operation to be performed based upon the following conditions: 1. To create a new object if:
  • the joining operation of the adjacent defective points is performed by taking into account the sign of the corresponding gradient: the adjacent points are grouped only if they have the same sign of the gradient.
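The grouping step of procedure P14 can be sketched as a flood fill that joins adjacent defective points only when their gradients share the same sign; 4-connectivity is an assumption, as the patent does not define adjacency:

```python
from collections import deque

# Sketch of the grouping step of procedure P14: adjacent defective points
# are joined into one object only when their gradients have the same sign
# (4-connectivity assumed). Returns a list of objects, each a list of
# (row, col) coordinates.
def group_defects(defective, gradient):
    n, m = len(defective), len(defective[0])
    label = [[None] * m for _ in range(n)]
    objects = []
    for i in range(n):
        for j in range(m):
            if defective[i][j] and label[i][j] is None:
                sign = gradient[i][j] > 0
                obj, queue = [], deque([(i, j)])
                label[i][j] = len(objects)
                while queue:
                    a, b = queue.popleft()
                    obj.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if (0 <= x < n and 0 <= y < m and defective[x][y]
                                and label[x][y] is None
                                and (gradient[x][y] > 0) == sign):
                            label[x][y] = len(objects)
                            queue.append((x, y))
                objects.append(obj)
    return objects
```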
  • the normalized position is given by the four coordinates which detect the extreme points of the diagonal of the rectangle surrounding the object, divided by the maximum sizes of the product along the two reference axes .
  • the area is given by the number of image points constituting the object.
  • the perimeter is calculated as sum of the boundary image points of the object.
  • the involved resolutions are represented by a binary variable whose bits, starting from the least significant one, indicate that the object has been detected by the derivative operator at the corresponding resolution nxn.
  • the distance from the defect threshold represents the difference between the maximum value of the defect grey intensity and the value of the localized local threshold.
  • the capturing angle is represented by a binary variable, whose first four bits indicate the contrast type and the view angle of the object, that is: white in reflection, white in diffusion, black in reflection, black in diffusion, and the combinations thereof.
  • an aggregation procedure of the nearby objects is performed, designated as P16.
  • Such procedure joins objects near one another into macro-objects composed of several disjointed objects deriving from both the lists generated by the preceding procedure P14.
  • Such lists are fused into a single list grouping objects adjacent to one another.
  • the procedure also performs a grouping of macro-objects deriving from images captured by videocameras placed at different angles with respect to the product, but referring to the same defect in the same product area.
  • the procedure P16 comprises the following steps:
  • the normalized position is represented by the coordinates of the extremes of the rectangle surrounding the macro-object.
  • the area and the perimeter are calculated as the sums of the image points and of the boundary image points of the component objects, respectively.
  • the involved resolutions are given as logic OR of the involved resolutions of the starting objects.
  • the distance from the defect threshold is given by the maximum of the respective starting variables.
  • the capturing angle is given as logic OR of the capturing angles of the starting objects.
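The descriptor fusion of procedure P16 can be sketched as below; the dictionary field names are illustrative, but the rules (bounding-box union, summed area and perimeter, OR-ed resolution and angle masks, maximum threshold distance) follow the steps listed above:

```python
# Sketch of the descriptor fusion of procedure P16, merging component
# objects into one macro-object: bounding boxes are united, areas and
# perimeters summed, resolution masks and capture angles OR-ed, and the
# distance from the defect threshold taken as the maximum.
# Field names are illustrative, not from the patent.
def fuse(objects):
    return {
        "bbox": (min(o["bbox"][0] for o in objects),
                 min(o["bbox"][1] for o in objects),
                 max(o["bbox"][2] for o in objects),
                 max(o["bbox"][3] for o in objects)),
        "area": sum(o["area"] for o in objects),
        "perimeter": sum(o["perimeter"] for o in objects),
        "resolutions": _or_all(o["resolutions"] for o in objects),
        "distance": max(o["distance"] for o in objects),
        "angle": _or_all(o["angle"] for o in objects),
    }

def _or_all(values):
    out = 0
    for v in values:
        out |= v
    return out
```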
  • the method at issue provides a classification of the detected defects.
  • This classification is performed by a procedure designated as P17 and is diversified depending on whether the defects are localized in the Inner Area or in the Frame.
  • the areas are classified according to three general classes of defects, more precisely:
  • cavity defect: the cavity can be positive or negative according to the direction of the local deformation of the product surface with respect to the inspection plane;
  • spot defect: the defect does not produce a deformation of the product surface; and
  • long-print defect: the area involved by the defect extends along a longitudinal or cross strip of the surface for the whole extension of the product.
  • the assignment of a class to each area considered defective is performed based upon the occurrence of one of the following conditions: if an area results defective both during processing of the image in reflection and during processing of the image in diffusion, this area will be assigned the cavity defect class; if an area results defective during processing of the image in diffusion only, it will be assigned the spot defect class; and the area detected by the following calculation will be assigned the long-print defect class.
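The class-assignment conditions above reduce to a small decision rule, sketched here (the long-print class, which requires the separate profile calculation, is left as the fall-through case):

```python
# Sketch of the class assignment of procedure P17: an area defective in
# both the reflection and the diffusion image is a cavity; one defective
# in diffusion only is a spot; anything else is left to the separate
# long-print profile calculation.
def defect_class(in_reflection, in_diffusion):
    if in_reflection and in_diffusion:
        return "cavity"
    if in_diffusion:
        return "spot"
    return None  # candidate for the long-print calculation
```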
  • Such calculation provides the following steps:
  • m is the size of the convolution cell; p is the average profile point of the row or column; Pa is the profile point after convolution; and n, m are values presettable by the operator.
  • the classification of the defective areas is performed against two general categories of defects, detected by means of specific calculation processes, in particular the "chipping" defect: in this case the process provides the following steps: a) Accumulating, for each row and each column of the image at the maximum resolution, the numerical values of the image points lying in the interval starting from the average edge up to the side of the Inner Area and belonging to the row n or to the column m, dividing the obtained result by the numerosity of each interval and storing the result in specific vectors:
  • Scolumn(j) = (Σ_i x_ij) / Ni, with i ranging between the average edge and the side of the Inner Area, Ni being the numerosity of the interval. b) Making a convolution of said vector of segment average values with a monodimensional 3xm or mx3 nucleus, depending on whether it is a horizontal or a vertical side of the Frame, by means of the formula of point 2) of the preceding case. c) Comparing the so-obtained convolution value to the threshold Ss defined in the procedure P3. Should this convolution value be greater than this threshold, the corresponding point will be marked as defective. d) Grouping the adjacent defective points in order to detect the area characterized by the chipping defect.
  • a specific configuration mask allows the operator to decide which method is to be adopted for the calculation, choosing between the "average k method" and the "nearest k points method".
  • These methods will be described hereinafter, more particularly:
  • i) the average k method provides the calculation of the distance of an unknown sample from the average values representative of each tone class, by means of the covariance matrices stored by the procedure P4.
  • the metric used to calculate the distance is the Mahalanobis one, defined as: d(x) = √((x − x̄)ᵀ V⁻¹ (x − x̄)), where x̄ is the average vector of the tone class and V⁻¹ is the inverted covariance matrix.
  • The value of minimum distance is further compared to a pre-established threshold value and, should it be exceeded, the unknown sample is not assigned any tone class.
  • This lack of assignment may be defined as a "not-conforming-tone" defect and be used later as an additional classification code. Should this lack of assignment repeat for a prefixed number k of the last n products under examination, a "possible tone change" is signalled to the operator.
  • the values k and n may be advantageously preset by means of an interactive mask.
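The "average k" method can be sketched as follows; the Mahalanobis distance is the standard form, and the class data structure is illustrative:

```python
import math

# Sketch of the "average k" method of procedure P18: the unknown sample is
# assigned the tone class whose average vector is nearest in Mahalanobis
# distance; if even the nearest class exceeds the preset threshold, no
# class is assigned ("not-conforming-tone"). The dict layout of `classes`
# is illustrative, not from the patent.
def mahalanobis(x, mean, inv_cov):
    d = [a - b for a, b in zip(x, mean)]
    # d^T * V^-1 * d, expanded as a double sum over the components.
    return math.sqrt(sum(d[i] * inv_cov[i][j] * d[j]
                         for i in range(len(d)) for j in range(len(d))))

def classify_average_k(x, classes, threshold):
    """classes: {name: (mean_vector, inverse_covariance_matrix)}"""
    best = min(classes, key=lambda c: mahalanobis(x, *classes[c]))
    return best if mahalanobis(x, *classes[best]) <= threshold else None
```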
  • the method of the nearest k points consists, instead, in calculating the distances of the unknown sample from each reference sample used during the procedure P4, and then in arranging the obtained distances in increasing order.
  • the covariance matrix V⁻¹ will in this case be a unitary matrix.
  • the method provides: selecting the samples corresponding to the first k distances among those previously calculated, and calculating the frequency of appearance of the tone classes among the selected samples.
  • the class having the maximum frequency will be assigned to the unknown sample.
  • the distance between the unknown sample and the most distant point chosen among the reference samples of the assigned class is compared to the threshold calculated by the procedure P4. Should this threshold be exceeded, the unknown sample is not assigned any tone class.
  • the lack of class assignment may be used as in the preceding case.
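The "nearest k points" method is essentially k-nearest-neighbour classification with Euclidean distance (unitary covariance) and a distance cut-off, sketched here:

```python
from collections import Counter
import math

# Sketch of the "nearest k points" method of procedure P18: plain kNN with
# Euclidean distance (identity covariance), majority vote over the k
# nearest reference samples, and the distance cut-off from procedure P4
# applied to the farthest chosen sample of the winning class.
def classify_knn(x, samples, k, threshold):
    """samples: list of (vector, class_name) reference pairs."""
    ranked = sorted(samples, key=lambda s: math.dist(x, s[0]))[:k]
    winner, _ = Counter(c for _, c in ranked).most_common(1)[0]
    farthest = max(math.dist(x, v) for v, c in ranked if c == winner)
    return winner if farthest <= threshold else None
```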
  • the gravity of the detected defects is then evaluated by taking into consideration the following parameters: a) defect classification; b) defect size (area, length or width); and c) number of defects belonging to the same class detected on the product under examination.
  • membership in one of the intervals defined by the procedure P5 is then calculated, and consequently the product is assigned the corresponding gravity class defined by the procedure P5.
  • a sorting code is automatically associated to the product under examination, according to the configuration performed by the procedure P5, apt to be provided to a possible automatic selection machine.
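The gravity evaluation and sorting-code assignment of procedures P19/P5 can be sketched as an interval lookup; the interval bounds and codes below are illustrative, as in practice they are configured by the operator:

```python
# Sketch of the gravity evaluation of procedures P19/P5: the defect size
# is looked up in the operator-defined intervals, and the matching gravity
# class yields the sorting code configured for it. Interval bounds and
# codes are illustrative.
def gravity_code(size, intervals):
    """intervals: list of (lower, upper, code), checked in order."""
    for lower, upper, code in intervals:
        if lower <= size < upper:
            return code
    return None
```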
  • a procedure P20 called “Output of the results" will be hereinafter described.
  • This procedure is performed at the end of the inspection of each product and makes available, on a communication port of the computer, the following data: the list of the defects detected on the product, each defect being described by the following parameters: a) parameters calculated by the procedure P15, that is: normalized position, area, perimeter, involved resolutions, distance from the defect threshold, capturing angle; b) the defect classification code calculated by the procedure P17; and c) the tone classification code calculated by the procedure P18; the gravity class calculated by the procedure P19; and the code for a selection machine calculated by the procedure P19.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
PCT/IT2000/000504 1999-12-06 2000-12-06 Imaging system for detecting surface defects and evaluating optical density WO2001040780A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU22197/01A AU2219701A (en) 1999-12-06 2000-12-06 Imaging system for detecting surface defects and evaluating optical density

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ITRM99A000741 1999-12-06
IT1999RM000741A IT1308026B1 (it) 1999-12-06 1999-12-06 Sistema e metodo di acquisizione ed elaborazione di immagini per ricerca di difetti superficiali e per valutazione di tono di

Publications (1)

Publication Number Publication Date
WO2001040780A1 true WO2001040780A1 (en) 2001-06-07

Family

ID=11407083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IT2000/000504 WO2001040780A1 (en) 1999-12-06 2000-12-06 Imaging system for detecting surface defects and evaluating optical density

Country Status (3)

Country Link
AU (1) AU2219701A (it)
IT (1) IT1308026B1 (it)
WO (1) WO2001040780A1 (it)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2345717A1 (fr) * 1976-03-24 1977-10-21 Hoesch Werke Ag Procede et dispositif pour l'observation optique de la surface d'un materiau lamine deplace rapidement
US5301129A (en) * 1990-06-13 1994-04-05 Aluminum Company Of America Video web inspection system employing filtering and thresholding to determine surface anomalies
WO1998001746A1 (en) * 1996-07-04 1998-01-15 Surface Inspection Limited Visual inspection apparatus
US5857119A (en) * 1996-08-30 1999-01-05 Borden; John Object photographer

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007052300A1 (de) 2007-10-31 2009-05-07 Peter Weigelt Stativ zum Photografieren von Objekten
EP2169388A1 (en) * 2008-09-25 2010-03-31 CEI-Companhia de Equipamentos Industriais, Lda. Process and seat for digitalising elements that are characteristic of stone plates
DE102010050448A1 (de) * 2010-11-03 2012-05-03 Ortery Technologies, Inc. Vorrichtung zum Einstellen des Aufnahmewinkels für Kameraarme
CN116958150A (zh) * 2023-09-21 2023-10-27 深圳市中农易讯信息技术有限公司 农产品的缺陷检测和缺陷等级划分方法
CN116958150B (zh) * 2023-09-21 2024-04-02 深圳市中农易讯信息技术有限公司 农产品的缺陷检测和缺陷等级划分方法

Also Published As

Publication number Publication date
ITRM990741A1 (it) 2001-06-06
IT1308026B1 (it) 2001-11-29
AU2219701A (en) 2001-06-12
ITRM990741A0 (it) 1999-12-06

Similar Documents

Publication Publication Date Title
US5619429A (en) Apparatus and method for inspection of a patterned object by comparison thereof to a reference
US5586058A (en) Apparatus and method for inspection of a patterned object by comparison thereof to a reference
US6753965B2 (en) Defect detection system for quality assurance using automated visual inspection
CN100580435C (zh) 监控制程变异的系统与方法
JP2921660B2 (ja) 物品形状計測方法および装置
CN111815555A (zh) 对抗神经网络结合局部二值的金属增材制造图像检测方法及装置
US5351308A (en) Method and apparatus for measuring crimp frequency of a web
CN112334761A (zh) 缺陷判别方法、缺陷判别装置、缺陷判别程序及记录介质
CN113176270B (zh) 一种调光方法、装置及设备
CN116342597B (zh) 一种汽车配件表面电镀加工缺陷检测方法和系统
CN115035092A (zh) 基于图像的瓶体检测方法、装置、设备及存储介质
EP0563897A1 (en) Defect inspection system
CN116008289A (zh) 一种非织造产品表面缺陷检测方法及系统
US7679746B1 (en) System and method for measurement of pressure drop through perforated panels
WO2001040780A1 (en) Imaging system for detecting surface defects and evaluating optical density
JP2001209798A (ja) 外観検査方法及び検査装置
Coulthard Image processing for automatic surface defect detection
US10388011B2 (en) Real-time, full web image processing method and system for web manufacturing supervision
EP3459045B1 (en) Real-time, full web image processing method and system for web manufacturing supervision
CN116805314B (zh) 一种建筑工程质量评估方法
JP2001291105A (ja) パターン認識方法および装置
JP3844792B2 (ja) 物品の顕微鏡検査装置及び方法
JPH09179985A (ja) 画像欠陥検査方法及び画像欠陥分類方法
JPH07167611A (ja) 硬貨中心位置決定方式
JP3144012B2 (ja) ワーク表面検査装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP