EP3582183B1 - Deflectometric techniques - Google Patents
Deflectometric techniques
- Publication number
- EP3582183B1 (application EP18177112.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- pixel position
- peak
- patterns
- maximizing
- image pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis › G06T7/50—Depth or shape recovery › G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06T7/00—Image analysis › G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration › G06T7/33—Image registration using feature-based methods
- G06T7/00—Image analysis › G06T7/50—Depth or shape recovery › G06T7/55—Depth or shape recovery from multiple images
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10016—Video; Image sequence
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10141—Special mode during image acquisition › G06T2207/10152—Varying illumination
Definitions
- Examples here refer, inter alia, to deflectometric techniques.
- Examples here refer to apparatus, systems and methods for deriving properties of an object, e.g., by using optical techniques.
- High-resolution digital cameras enable sophisticated measurement techniques that facilitate numerous applications in the fields of computer vision and optical metrology.
- Deflectometry [10] (Fig. 1) may be sensitive to shape variations of specular and transparent objects in the range of nanometers, which is comparable to the performance of interferometers [3].
- Camera-based methods rely on a precise understanding of the view ray geometry.
- Camera-to-world point correspondences: the geometry of camera imaging can be mathematically described as a projective mapping from 3D world points of an observed scene to 2D image points ( Fig. 2 ). By learning the properties of this mapping, one can characterize the scene, the camera, and the medium between them. This principle is employed to perform such fundamental tasks as camera calibration, camera pose estimation, object size measurements, etc.
- Each screen pixel transmits a sequence of values encoding its position on the screen.
- Once the sequence for each camera pixel is decoded, one usually obtains a dense set of correspondences {(p(σ), σ) : σ ∈ Ω}, where the region Ω of the decoded camera pixels in the sensor space may be as large as the entire frame.
- Cosine phase-shifted pattern sequences (CPSPS) use patterns modulated by a cosine function along the x- or y-direction ( Figs. 4 , 1 ).
- A pattern sequence usually contains several phase shifts for each period of the modulation function and uses cosines of several periods to facilitate unambiguous decoding.
- The method may deliver an estimate of the decoding uncertainty Δp [4] and even an estimate of the local modulation transfer function (blurring kernel) in each pixel [7]. Since the decoding is performed in each pixel independently, it is robust with respect to arbitrary distortions and rather strong blurring (depending on the parameters of the coding patterns).
- A CPSPS encodes a single point on the coding screen. Upon decoding, this information is sufficient to recover the shape of the reflective surface.
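A minimal numerical sketch of such a cosine phase-shifted sequence and its per-pixel decoding (the parameters K = 4 phase shifts, screen width W = 512, and f = 8 cosine periods are illustrative assumptions; the standard K-step arctangent phase retrieval stands in for the decoding, and the recovered position is ambiguous modulo one period):

```python
import numpy as np

K = 4      # phase shifts per period
W = 512    # screen width in pixels
f = 8      # cosine periods across the screen

x = np.arange(W)
# K phase-shifted cosine patterns along the x-direction
patterns = [0.5 + 0.5 * np.cos(2 * np.pi * f * x / W + 2 * np.pi * k / K)
            for k in range(K)]

# A camera pixel that sees screen pixel x0 records this intensity sequence:
x0 = 137
g = np.array([p[x0] for p in patterns])

# Standard K-step phase retrieval (position recovered modulo one period)
num = sum(g[k] * np.sin(2 * np.pi * k / K) for k in range(K))
den = sum(g[k] * np.cos(2 * np.pi * k / K) for k in range(K))
phi = np.arctan2(-num, den) % (2 * np.pi)
x_decoded = phi * W / (2 * np.pi * f)   # equals x0 modulo W/f = 64 pixels
```

In practice, patterns of several different periods are combined, as noted above, to resolve the period ambiguity.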
- FIG. 6 shows a checkerboard pattern 60 reflected in a plano-convex lens.
- The back-side and the front-side reflections clearly overlap, preventing the identification of the cell corners.
- One practical approach may be to suppress the secondary reflections appearing in a setup as in Fig. 5b, e.g., by immersing one side of the object into a liquid with a matching refractive index or by painting the secondary surface with an absorbing substance.
- For a non-contact (and non-destructive) measurement, one may switch to a different spectral range: e.g., in UV light, the secondary reflection in a lens would disappear. This is, however, inconvenient: UV cameras are expensive, and no common technology to produce arbitrary UV patterns exists.
- The present techniques may exploit stochastic band-limited patterns.
- Such patterns have already been studied in optical metrology.
- Wiegmann et al [11] and later Schaffer et al [8] introduced band-limited stochastic patterns and evaluated their advantages compared to pre-existing coding methods. Their main goal was to replace slow digital projectors with alternative projecting devices (stochastic patterns can be generated very fast using certain analog techniques such as diffusing of laser speckle distributions).
- A structured-light method for depth reconstruction using unstructured, essentially arbitrary projection patterns is presented by Anner Kushnir and Nahum Kiryati in "Shape from Unstructured Light", Tel-Aviv University, Tel Aviv 69978, Israel.
- The apparatus is configured to determine, for at least one image pixel position, a relationship between:
- The apparatus may be configured, for the at least one image pixel position, to: use differential properties of the similarity function with respect to peaks obtained at different peak positions in order to estimate the uncertainty in the determination of the maximizing reference pixel position.
- The apparatus may be configured to: generate at least one reference pattern as an image with reference values obtained by a stochastic method and/or by stochastically generating the at least one reference pattern, and/or wherein at least one reference pattern is generated as an image with a spatial spectrum featuring a maximum frequency cut-off associated to a minimum spatial correlation length.
- The apparatus may be configured to: for at least one time instant, adaptively prepare at least one pattern to be displayed at a time instant on the basis of the codes acquired at preceding time instants.
- The apparatus may be configured to: adaptively choose, on the basis of the acquired codes, a minimum spatial correlation length parameter of the reference patterns so as to define a band-limited spatial spectrum for the subsequent patterns.
- The apparatus may be configured to: define the minimum spatial correlation length parameter of the reference patterns so that the computational effort spent on data processing does not exceed a selected or predetermined limit or threshold.
- The apparatus may be configured to: for at least one time instant, prepare a reference pattern or a sequence of reference patterns under at least one of the following conditions:
- The apparatus may be configured to obtain a minimum spatial correlation length parameter, and further configured to, after having found a peak in the similarity function and the associated maximizing reference pixel position, search for a local peak among the reference pattern pixel positions whose distance from the maximizing reference pixel position is greater than the minimum spatial correlation length parameter.
- The apparatus may be configured to: define reference patterns which are stochastically independent from each other for different time instants.
- The apparatus may be configured to: define, for different pixel positions in the same pattern out of a predetermined interval, reference codes which are uncorrelated with each other.
- The apparatus may be configured to: modify the relative position between the object and the sensing device and/or reference device on the basis of acquired codes associated to the plurality of image pixel positions, so as to minimize the number of pixel positions for which the peaks in the respective similarity functions overlap and/or to minimize occurrences in which the individual peak identification fails.
- The apparatus may be configured to: define and/or display at least one of the reference patterns using a random physical process involving at least one of the following effects: laser speckles, flame, fumes, clouds, surface waves, or turbulence.
- The apparatus may be configured to collect values associated to:
- A deflectometric method which may derive properties of an object, for example, on the basis of:
- A non-transitory storage device storing instructions which, when executed by a processor, cause the processor to perform one of the methods above or below.
- An optical system comprising, for example, a reference device (e.g., a screen or a "cave"), an object, and a camera obtains and/or processes an image of screen pixels on the camera sensor.
- Light emitted by the screen may reflect from the outer surfaces, propagate through the volume, and/or reflect from the inner surfaces of the studied object.
- Each sensor pixel σ may receive light from one or several screen pixels denoted as p (with coordinates (x,y) or (x,y,z), for example), as shown in Fig. 5b.
- Fig. 5a shows a system 50' configured to derive the properties of a specular object (mirror) 53'.
- a camera 54 (or another imaging sensor) acquires images displayed by a reference device 56.
- Light emitted at the point (reference pixel position) p1 is transported along the path (ray) 57a and reflects from the external surface 53a' of the object 53' according to the reflection law. After that, it follows the path 57b and hits the sensor of the camera 54 at a pixel with coordinates σ1.
- The reference pixel position p1 in the reference pattern thus maps onto the image pixel position σ1.
- Fig. 5a shows the path of light originating from a single reference pixel p1. In general, different reference pixels would be mapped onto different image pixels.
- Fig. 5b shows a system 50 to derive properties of a transparent object 53.
- Light emitted at different reference pixel positions p1, p2, and p3 reaches the same image pixel position σ1 in the camera 54 by virtue of multiple internal reflections (see paths 57", 57"' between the surfaces 53a and 53b) facilitated by the transparency of the material.
- This renders the determination of the geometry of the object 53 complicated. For example, with the prior-art techniques, by simply examining the acquired optical radiation intensity at image pixel position σ1, it would not be possible to distinguish between the components of the intensity originating at each of p1, p2, and p3.
- The system 50 comprises a controller 52 (which may be an apparatus or part of an apparatus) which makes it possible to derive the properties of the transparent object 53.
- The reference device 56 may display a sequence of reference patterns at different and subsequent time instants.
- For example, in Fig. 5b, light is emitted at the reference pixel positions p1, p2, and p3 according to, e.g., the reference pattern 70 (Fig. 7a).
- A respective intensity value s(p1), s(p2), and s(p3) may be defined by each reference pattern, such as, e.g., 70.
- The intensity value may be, for example, between a 0 value (e.g., no light) and a maximum value (e.g., maximum intensity).
- Each reference pattern defines different intensity values for all reference pixel positions.
- Each reference pattern may be represented as a matrix (e.g., stored in a memory), each entry of the matrix being associated to a particular reference pixel position (e.g., with two-dimensional coordinates x,y ), with the value being the intensity value.
- The entries of the matrix may be communicated to the reference device 56, for example, by the controller 52 (e.g., via digital communication, such as a cable or wireless communication).
- An example of a reference pattern is the pattern 70 shown in Fig. 7a.
- The reference device 56 may be understood as modulating the intensity of pixels on the reference screen (e.g., on the basis of the values in the entries of the matrix, under the control of the controller 52).
- a "screen” which may be, for example, a digital display.
- A sequence of images may be acquired by the camera 54.
- The camera 54 may acquire the light emitted by the reference device 56 according to the reference patterns (such as, e.g., 70) and transported to the camera 54 (or another sensing device) via an optical path (e.g., 57', 57", 57"' and 57b).
- The optical path may involve inner and/or outer reflections from the surfaces 53a and 53b of the object 53 (in other examples, a propagation through the volume of the object may be provided).
- Each image acquired by the camera 54 may also be represented as a matrix in two dimensions (with coordinates u, v , for example), each entry corresponding to an intensity value.
- The entries of the matrix may be communicated to the controller 52, for example.
- The controller 52 may control the activities of the system 50, e.g., of the reference device 56 and/or of the camera 54.
- The controller 52 may generate reference patterns (such as, e.g., 70) and/or may synchronously control their acquisition by the camera 54.
- The sequence of reference patterns 70 may be stored in a memory and provided to the reference device 56 when needed.
- The controller 52 may obtain the sequence of reference patterns and/or the sequence of images acquired by the camera 54 and process them, at least partially, during an offline session.
- The camera 54 may be placed so as to be in a static relationship with the object 53 when acquiring the images.
- The reference device 56 may be placed so as to be in a static relationship with the object 53 during the emission of the light (display of the patterns).
- The controller 52 may be aware of the relative positions between the camera 54, the reference device 56, and/or the object 53. Therefore, the controller 52 may reconstruct the real shape of the object 53 or, at least, may derive some structural properties of the object 53.
- Systems which move the camera 54 and/or the object 53 and/or the reference device 56 may be provided, so as to perform different measurement sessions with different relative positions between the camera 54, the reference device 56, and/or the object 53.
- Calibration methods may be possible in which, after a first measurement with an initial geometrical relationship between the camera, the object, and the reference device, a second, different geometrical relationship between these elements may be chosen to ameliorate the measurements.
- A hardware setup may be used to maintain the elements of the system in a static relationship and to control the relative motion between them in case of necessity (e.g., calibration). Motors and actuators may be controlled by the controller 52 to move the camera 54, the reference device 56, and/or the object 53 into different positions.
- The controller 52 is aware of the patterns that have been displayed by the reference device 56, and may determine the properties (e.g., geometrical properties, quality-related properties, etc.) of the object 53 by analyzing the sequence of acquired images obtained from the different reference patterns. By decoding the codes modulated by the reference device 56 and acquired by the camera 54, the controller 52 may determine correspondences between the camera pixels (σ) and the emitting reference screen pixels (p). Accordingly, it is possible to establish, on the basis of the positional relationships between the object 53, the camera 54, and the reference device 56, properties associated to the geometry of the object 53.
- The controller 52 may process the sequence of images on the basis of a similarity function.
- The similarity function may be obtained from a plurality of reference codes (s(p)).
- Each reference code may carry information on the evolution of the optical radiation intensity and may be expressed as a vector ( s 1 , s 2 , ...,s k , ..., s K ), for example (K may be a value such as 200, or more than 10, or between 150 and 250, for example). Therefore, the light emitted at each reference pixel position p varies its intensity to form a reference code described by the vector ( s 1 , s 2 , ...,s k , ..., s K ) .
- The values of each reference code are known a priori. They may have been pre-computed or generated by the controller 52 during the session.
- The similarity function also takes into consideration the acquired codes g(σ) associated to the image pixels σ.
- The acquired intensity at the k-th time instant will be an entry gk(σ) of the code g(σ).
- The acquired code g(σ) carries information on the evolution, in the sequence of images acquired by the camera 54, of the optical radiation intensity (g1, g2, ..., gk, ..., gK) acquired for the image pixel position σ.
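As a sketch, the acquired codes g(σ) can be read directly out of the stack of K camera frames; the array names and sizes below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Stack of K acquired camera frames: images[k, v, u] is the intensity of
# camera pixel sigma = (u, v) in the k-th frame (sizes are illustrative).
K, H, W = 200, 48, 64
rng = np.random.default_rng(1)
images = rng.random((K, H, W))          # stand-in for real acquisitions

def acquired_code(images, u, v):
    """Code g(sigma): the intensity evolution of pixel (u, v) over K frames."""
    return images[:, v, u]

g = acquired_code(images, u=10, v=20)   # one K-long code vector
```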
- The similarity function may be, for example, a correlation function such as a normalized correlation function, a covariance-based function, etc.
- The similarity function may give information on a statistical relationship between the light intensities as emitted by the reference device 56 and the light intensities as acquired by the camera 54.
- A peak in the similarity function may be understood as indicating the pixel which, among all pixels of the reference device 56, contributes the most light to the illumination of the camera pixel σ.
- The peak in the similarity function may be associated with the pixel position p1 in the reference device 56: as can be noted, the path 57' taken by the light is more direct than the paths 57" and 57"', and the contribution of the pixel p1 to the acquired intensity at σ1 is expected to be greater than the contributions of p2 and p3.
- The contributions provided by p2 and p3 to the light intensity at σ1 are likely to be lower than the contribution of p1, by virtue of the light losses due to the multiple reflections between the surfaces 53a and 53b.
- A position relating to the dominant contributing pixel p1 may be obtained.
- The dominant pixel p1 may be associated, for example, to the acquired pixel position σ1.
- Positions associated to at least one other, secondary dominant pixel p2 can be retrieved, e.g., by finding a locally maximizing reference pixel position (e.g., a secondary local peak) in the similarity function.
- The coordinates in Fig. 9a correspond to the camera pixel positions σ.
- Each acquired pixel position (mapped in the coordinates u, v) may be associated to an intensity value stored in a matrix after the acquisition. Let us consider the camera pixel position σ1, referred to as 91.
- FIG. 9b shows a plot 92 of a correlation function corresponding to the pixel position σ1 (the higher the similarity, the less intense the color for each pixel position), depending on the reference pixel positions p (mapped in the coordinates x, y).
- An absolute maximum 93 and a local maximum 94 have been identified by searching the similarity function and finding the two locally maximizing positions.
- The absolute maximum 93 in the correlation function of Fig. 9b corresponds to the directly reflected signal (the respective screen pixel position).
- The second, local maximum 94 in the correlation function of Fig. 9b corresponds to a different contributing optical path, and the respective maximizing screen position cannot be obtained with the techniques according to the prior art. Techniques of relative peak finding are per se known.
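A hedged sketch of such a two-peak search on a similarity map (the synthetic Gaussian peaks and the minimum separation `delta` are illustrative; the separation constraint mirrors the minimum spatial correlation length discussed below):

```python
import numpy as np

def two_dominant_peaks(corr, delta):
    """Global maximum of a 2D similarity map, then the best secondary
    peak at a distance greater than delta from it."""
    p1 = np.unravel_index(np.argmax(corr), corr.shape)
    yy, xx = np.indices(corr.shape)
    far = (yy - p1[0]) ** 2 + (xx - p1[1]) ** 2 > delta ** 2
    p2 = np.unravel_index(np.argmax(np.where(far, corr, -np.inf)), corr.shape)
    return p1, p2

# Synthetic map with a dominant and a secondary Gaussian peak
y, x = np.indices((100, 100))
corr = (np.exp(-((y - 30) ** 2 + (x - 40) ** 2) / 20)
        + 0.6 * np.exp(-((y - 70) ** 2 + (x - 60) ** 2) / 20))
p1, p2 = two_dominant_peaks(corr, delta=10)   # p1 = (30, 40), p2 = (70, 60)
```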
- Fig. 8 shows a peak 93' and a local peak 94' in a normalized correlation function 80.
- Fig. 8 shows a similarity function profile for a single camera pixel in a synthetic experiment: direct decoding of screen pixels when the observed signal is generated via a linear mixing model of Eq. 2.
- The plot 92 of Fig. 9b has the same meaning as that of Fig. 8, but the data are obtained in a real experiment with a glass lens.
- The multi-valued correspondences are clearly obtainable as described above, both in the simulation and in the real experiment.
- The reference patterns 70 can be stochastic images (e.g., the pixel values of the patterns are generated by a stochastic process). In examples, there is no correlation between a pattern and the preceding, subsequent, or any other pattern of the sequence.
- The patterns may be defined as stochastic random patterns. It has been noted that this further increases the accuracy of the measurements.
- Each pattern 70 is spatially band-limited.
- The pattern may be a stochastic random image, where each pixel value is sampled independently and then filtered (e.g., using a Gaussian filter) in order to implement an upper frequency cut-off limit and/or a lower frequency cut-off limit.
- The pattern exhibits some connected regions, e.g., prevalently white regions 70a and/or prevalently black regions 70b, which appear, to the human eye, to be separated from each other by gracefully grey-scale-graded intermediate regions. This is the effect of the filtering applied to the random patterns.
- Each pattern 70 is therefore generated so as to have no relationship at all with any of the previous and/or following patterns in the sequence.
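One plausible way to generate such band-limited, mutually independent stochastic patterns is to low-pass filter white noise with a Gaussian transfer function via the FFT; the shape, `sigma`, and seed below are illustrative assumptions:

```python
import numpy as np

def band_limited_pattern(shape, sigma, rng):
    """White noise, low-pass filtered with a Gaussian transfer function
    (via the FFT) and rescaled to [0, 1]; sigma sets the correlation length."""
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    # Transfer function of a spatial Gaussian kernel of std sigma
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    p = np.fft.ifft2(np.fft.fft2(noise) * H).real
    return (p - p.min()) / (p.max() - p.min())

rng = np.random.default_rng(42)
# A fresh, independent noise draw for each time instant k
patterns = [band_limited_pattern((256, 256), sigma=4.0, rng=rng) for _ in range(3)]
```

Because every pattern starts from an independent noise draw, the sequences of values at any fixed pixel are uncorrelated across patterns, as required above.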
- Fig. 7b shows an autocorrelation function 72 computed for the pattern 70.
- The x-y axes denote the relative displacement between the image copies, so that the central point in the x-y domain corresponds to zero displacement.
- The z axis is the autocorrelation value.
- The function 72 falls off to zero as the displacement grows.
- The band limitation of the pattern is mirrored by the "width" 74 of the peak 73: the lower the upper filtering cut-off frequency, the "wider" the peak 73, and vice versa.
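The relation between the filtering cut-off and the peak width 74 can be illustrated numerically (autocorrelation via the Wiener-Khinchin theorem of a synthetically smoothed noise image; the half-maximum sample count is only a rough proxy for the FWHM, and all sizes are illustrative):

```python
import numpy as np

def smooth_noise(shape, sigma, rng):
    """Band-limited stochastic pattern: white noise, Gaussian low-pass (FFT)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(rng.standard_normal(shape)) * H).real

def autocorr(img):
    """Normalized autocorrelation; the array center is zero displacement."""
    z = img - img.mean()
    F = np.fft.fft2(z)
    ac = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)
    return ac / ac.max()

def peak_width(ac):
    """Samples above half maximum on the central row: a proxy for the FWHM."""
    return int(np.sum(ac[ac.shape[0] // 2] >= 0.5))

rng = np.random.default_rng(7)
w2 = peak_width(autocorr(smooth_noise((256, 256), 2.0, rng)))
w4 = peak_width(autocorr(smooth_noise((256, 256), 4.0, rng)))
# Larger sigma (lower cut-off frequency) -> wider autocorrelation peak
```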
- A technique based on sequences of band-limited stochastic coding patterns is proposed.
- The following similarity metric may be employed. Consider two sequences s and g of length K; their normalized correlation may be written as C(s, g) = Σk (sk − s̄)(gk − ḡ) / (‖s − s̄‖ · ‖g − ḡ‖), where s̄ and ḡ denote the respective mean values.
- The above normalized correlation may be interpreted as the cosine of the angle between two vectors in the K-dimensional space after their mean values have been subtracted.
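A minimal implementation of this mean-subtracted ("cosine of the angle") normalized correlation might read as follows; the example codes are illustrative:

```python
import numpy as np

def normalized_correlation(s, g):
    """C(s, g): cosine of the angle between the mean-subtracted code vectors."""
    s = np.asarray(s, float) - np.mean(s)
    g = np.asarray(g, float) - np.mean(g)
    return float(np.dot(s, g) / (np.linalg.norm(s) * np.linalg.norm(g)))

rng = np.random.default_rng(0)
s = rng.random(200)                           # reference code, K = 200
g_same = 3.0 * s + 1.0                        # same signal, different gain/offset
g_other = rng.random(200)                     # unrelated code
c_same = normalized_correlation(s, g_same)    # 1.0: invariant to gain and offset
c_other = normalized_correlation(s, g_other)  # near 0 for uncorrelated codes
```

The invariance to gain and offset is what makes the metric robust against the unknown attenuation along each optical path.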
- An important property of stochastic patterns is that the sequences s(p1) and s(p2) are uncorrelated in the limit of a large number of patterns for sufficiently well-separated points p1 and p2, i.e., C(s(p1), s(p2)) → 0 for |p1 − p2| > δ.
- δ may be the characteristic width (e.g., the full width at half maximum, FWHM, also referred to as the "parameter δ") of the peak in the autocorrelation function of the pattern (cf. Fig. 7b).
- The positions of the peaks then permit obtaining the decoded points p1, ..., pn, while the values of the correlation at the peaks and its derivatives near the maxima can further be used to establish the mixing coefficients B1, ..., Bn and, if needed, the decoding uncertainties Δpi (Fig. 8).
- Figs. 10a-10d show the results obtained from an experiment using a circular plano-spherical lens.
- One of the acquired camera images is shown in Fig. 9a .
- Figs. 10a-10d show the decoding results (multi-valued point correspondences) computed for all camera pixels (as opposed to studying a single pixel as in Fig. 9b).
- The four panels show the decoded screen x- and y-coordinates for the two dominant contributions (Eq. 2 with n = 2) to the light arriving at each camera pixel.
- Figs. 10a, 10b, 10c, and 10d represent the color-coded values of (p1)y, (p1)x, (p2)y, and (p2)x, respectively, as functions of the sensor pixel position σ.
- The decoding was done according to the technique described above, by finding the local maxima of the normalized correlation function. More intense colors indicate higher coordinate values.
- The decoded point coordinates are relatively smooth and noise-free.
- The spurious decoding results outside of the lens may be easily filtered out, e.g., by setting a threshold applied to the minimum contrast of the sequence.
- Figs. 10a and 10b refer to p1 = argmax_p' C(s(p'), g(σ)) (the absolute peak), while Figs. 10c and 10d refer to p2 (the secondary local peak).
- The artefacts 101 (black regions) relate to regions for which the similarity function cannot resolve the two peaks. Such situations are in general due to a non-optimal object position, for which the two contributing screen pixels p1 and p2 are too close to each other.
- The patterns such as in Fig. 7a have some pre-defined highest (and possibly lowest) frequency (hence "band-limited").
- The autocorrelation function (a delta function for band-unlimited random patterns) may have a shape similar to that in Fig. 7b: it may fall off to zero far away from zero displacement (assuming an infinitely large image size), and the peak at zero displacement must have some finite width δ (which is directly related to the cut-off frequency).
- Different patterns are uncorrelated with each other.
- The generation of stochastic patterns can be done in many different ways. For example, one may apply a low-pass filter to random images and then perform a non-linear, valumetric ("compander") transform to improve the pattern histograms (i.e., apply some non-linear function to each pixel value).
- The parameter δ may play an important role. If the image of the screen is blurred with a kernel size smaller than δ, the decoding will still succeed. On the other hand, if two overlapping signals are shifted by a distance less than δ (i.e., |p1 − p2| < δ), they can no longer be separated.
- Fig. 11 shows an apparatus 110 (which is here depicted as a group of blocks, but may implement or be implemented by the controller 52) for deriving properties of an object (e.g., 53).
- The apparatus 110 may perform the operations 120 (Fig. 12) on the basis of:
- The sequences 170 and 190 may be obtained, for example, at step 120a of the method 120.
- A particular image pixel σ1 is chosen, and the method may be repeated several times, by iterations 124, so as to choose other image pixel positions σ2, σ3, ...
- In examples, all the image pixel positions σi may be iteratively chosen. In other examples, only a selection of the pixel positions of the acquired images is chosen.
- The apparatus is configured, for at least one image pixel position in each of the images (90) of the sequence (190) of images (90), to:
- the correlating unit 111 may operate on the fly, while the peak retrieval unit 112 and/or the association unit 113 may operate offline, for example.
- the similarity function (e.g., correlation) may be a function of p and π. Peaks may be found with respect to the screen position p for a fixed sensor pixel π.
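A toy numeric sketch of such a similarity function (assuming a zero-mean correlation over the pattern sequence; the positions q1, q2, the coefficients, and the noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
K, P = 200, 500                     # patterns in the sequence, screen positions p
s = rng.standard_normal((K, P))     # reference codes s_k(p), one row per pattern
q1, q2 = 120, 340                   # assumed "true" maximizing screen positions
# acquired code at one sensor pixel pi: two overlapping reflections plus noise
g = 0.2 + 0.7 * s[:, q1] + 0.4 * s[:, q2] + 0.05 * rng.standard_normal(K)

# similarity function Z(p): zero-mean correlation over the sequence,
# as a function of screen position p for the fixed sensor pixel pi
Z = (g - g.mean()) @ (s - s.mean(axis=0)) / K

peaks = sorted(np.argsort(Z)[-2:])  # the two highest peaks recover q1, q2

# the found peaks also yield the linear relationship g = A + B1*s(q1) + B2*s(q2)
X = np.column_stack([np.ones(K), s[:, peaks[0]], s[:, peaks[1]]])
A, B1, B2 = np.linalg.lstsq(X, g, rcond=None)[0]
```

The two highest peaks of Z identify the maximizing reference pixel positions, and a least-squares fit over the sequence recovers the coefficients and the constant term of the intensity relationship.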
- the uncertainty of the first peak may be estimated as Σ1 ≈ D · P² · [ ∂²Z(s(p), g(π)) / ∂p_i ∂p_j |_(p=q1) ]⁻¹, where ∂²Z(s(p), g(π)) / ∂p_i ∂p_j |_(p=q1) is the matrix of the second partial derivatives of Z(s(p), g(π)) evaluated at the respective peak position, and D ≈ 1 is a numeric coefficient that depends on the details of the noise distribution functions. Uncertainties of the remaining peaks may be found in the same fashion.
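A one-dimensional finite-difference sketch of this curvature-based uncertainty estimate (the function name, the choice D = 1, and the scalar parameterization are assumptions; the patent states the general matrix form):

```python
import numpy as np

def peak_uncertainty_1d(Z, i, D=1.0):
    """Estimate the positional uncertainty of the peak of a sampled
    similarity function Z at index i from its curvature, mirroring the
    second-derivative matrix expression in one dimension. D is the
    noise-dependent numeric coefficient (taken as 1 here); the result
    is expressed in grid samples.
    """
    # finite-difference second derivative at the peak (negative at a maximum)
    d2Z = Z[i - 1] - 2.0 * Z[i] + Z[i + 1]
    return np.sqrt(D / abs(d2Z))
```

Sharper peaks (larger curvature) thus yield smaller positional uncertainty, as expected.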
- the method may start at 131.
- K pattern cycles 133 are repeated, by stochastically generating each k-th pattern at 134.
- a random physical process may be used, which may involve at least one of the following effects: laser speckles, flame, fumes, clouds, surface waves, and/or turbulence.
- each k-th pattern is filtered (e.g., low-pass filtered or band-pass filtered), to make the k-th patterns band-limited.
- it is checked (k ≥ K?) whether it is necessary to create another pattern; if it is not necessary (YES), the method 130 may end, so that all the patterns are stored in a memory to be used by the reference device, for example. If it is necessary to create another pattern (e.g., the (k+1)-th pattern), the variable k is updated (k++) at 137 and the cycle 133 is reiterated.
- the filter at 135 may be based, for example, on a cutoff frequency (e.g., a maximum frequency) which may affect the shape of each pattern.
- the cutoff frequency is bound to the parameter ℓ, which may be understood as the minimum spatial correlation length.
- the parameter ℓ may be extremely important for determining the absolute peak ( p 1 ) and the second peak ( p 2 ): if the distance between the absolute peak ( p 1 ) and the second peak ( p 2 ) is less than ℓ, the two peaks are not obtained as separate peaks of the similarity function.
- the artefacts 101 are created because the distance between the absolute peak ( p 1 ) and the second peak ( p 2 ) is less than ℓ.
- the filter may be, for example, a low-pass Gaussian filter.
- an increased cutoff frequency may simply reduce the minimum distance at which the absolute peak ( p 1 ) and the second peak ( p 2 ) can still be retrieved as separate peaks.
- the higher the cutoff frequency, the higher the computational effort.
- a minimum spatial correlation length (the parameter ℓ) of the reference patterns may be selected so as to define a band-limited spatial spectrum for the subsequent patterns. This may be obtained, for example, with a calibration process.
- Fig. 14 shows a method 140.
- a first, generic cutoff frequency may be chosen.
- a pattern or a sequence of patterns may be stochastically generated.
- the generated pattern(s) may be low-pass filtered according to the cutoff frequency.
- light is emitted by the reference device 56 according to the filtered pattern(s).
- one or more images are acquired by the camera 54.
- method 120 may be performed (e.g., by processing the correlation function) and, at 146, the resulting correlation function is analysed. If too many or too big artefacts 101 are recognized (e.g., as in Figs. ...), at step 147 the cutoff frequency is increased, so as to reduce the minimum distance between the maximum peak and the second maximum peak. Otherwise, if the obtained correlation function is satisfactory and the artefacts 101 are not too prominent, step 147 may be bypassed at 148 and a new pattern may be generated at 142. The first iterations of this process may be performed simply for the purpose of retrieving the most suitable cutoff frequency, and their results may be discarded when performing the measurements (calibration). Between steps 120 and 146, at step 145b a check (not shown) may be provided, so as to end the method when an image of satisfactory quality is obtained (at 145c); otherwise, the method is repeated.
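The calibration loop of method 140 might be sketched as follows (the function names `measure` and `artefact_level` are hypothetical placeholders for the acquisition and artefact-analysis steps; they are not from the patent):

```python
def calibrate_cutoff(measure, artefact_level, f0, f_max, step=1.2, tol=0.05):
    """Sketch of the calibration loop: raise the cutoff frequency
    until the artefacts in the correlation function are acceptable.

    measure(f): runs the generate/filter/display/acquire/decode steps
                (142-145) with cutoff f and returns the correlation function.
    artefact_level(corr): scores the artefacts found at step 146.
    """
    f = f0
    while f < f_max:
        corr = measure(f)                   # steps 142-145
        if artefact_level(corr) <= tol:     # step 146: artefacts acceptable?
            return f                        # yes: bypass 147 (branch 148)
        f *= step                           # step 147: increase cutoff frequency
    return f
```

The loop terminates either when the artefact level drops below the tolerance or when the cutoff reaches the computational-effort limit f_max.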
- Fig. 15 shows another method 140' which may also involve a calibration, where, instead of modifying the cutoff frequency, at 147' the relative position between the object 53, the camera 54 and/or the reference device 56 is modified (e.g., by operating actuators). When the level of artefacts is acceptable, the modification of the position may be bypassed at 148' and the measurements may start. Between steps 120 and 146', at step 145b' a check (not shown) may be provided, so as to end the method when an image of satisfactory quality is obtained (at 145c'); otherwise, the method is iterated.
- the described coding method is very flexible and allows a large freedom in the implementation:
- Elements of examples of the proposed solution may comprise at least one of:
- An application is the inspection and measurement of transparent objects with several reflecting surfaces. This can be realized in a deflectometric setup with a flat screen or using a projector of patterns, which illuminates a 3D scene.
- the powerful method of deflectometry is mostly applied to inspecting specular surfaces.
- Transparent objects can be measured only in a situation where the overlapping back-side reflections are suppressed or can be ignored.
- the data processing algorithm central to the invention allows one to separate, identify, and exploit these secondary reflections. With that, it becomes possible to inspect the geometry of such transparent objects as car windshields, precision lenses, smartphone cover glass plates, etc.
- Fig. 16 shows a system 160 which may comprise (or be a particular example of) the system 50 of Fig. 5b or the apparatus 110 of Fig. 11 .
- the system 160 may comprise at least one of the camera (or other imaging sensor) 54, the reference device 56, and the controller 52.
- the system 160 may implement at least one of the methods shown in Figs. 11-15 , for example.
- the system 160 may comprise a testing block 162 for testing the quality of a plurality of objects 161.
- a testing operation may comprise, for example, the acquisition of a sequence of images (e.g., images 90) by the camera 54 after the object has been subjected to a sequence of reference patterns 70 displayed by the reference device 56.
- calibration operations (e.g., as shown in Figs. 14 and 15 ) may be performed.
- a plurality of image pixel positions π1 , π2 ,..., πM may be processed, so as to obtain, for each of the image pixel positions π1 , π2 ,..., πM , correspondences (e.g., two peaks associated to the maximizing reference pixel positions q1, q2).
- At least for one object 161 (but preferably on a plurality of series-manufactured similar products), at least one value may be collected (e.g., stored in a memory).
- the collected at least one value (metrics) may be, for example, associated to at least one of the following data:
- it is possible to compare the collected values to threshold statistical values (data) 163 and/or to threshold expected values (data) 164, so as to derive quality information 165.
- a comparison with statistical data 163 is here described.
- at least one of the collected values may, for example, be compared with statistically-obtained values 163 associated to analogous values of the previously tested objects. If, for one object, at least one of the collected values deviates from the statistically-obtained value by more than a threshold, the test has a negative result and the object may be discarded (or, in any case, quality information 165 is determined by associating the object with the negative result).
- the object imaged by image 90 in Fig. 9a is subjected to a test regarding the reference image pixel position 91.
- the testing block 162 may compare the pixel positions 93 and 94 (or their distance) with the average of the pixel positions of the previously tested objects (corresponding to the same image pixel position π1 ). If at least one of the pixel positions 93 and 94 deviates from the relative average by a distance greater than a determined threshold, then the test is assumed to have a negative result. If each of the pixel positions 93 and 94 is within a determined threshold distance from the relative average, then the test is assumed to have a positive result.
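The comparison above might be sketched as follows (the helper name is hypothetical; positions are taken as 2-D pixel coordinates):

```python
import numpy as np

def positions_within_average(positions, mean_positions, threshold):
    """Compare the maximizing pixel positions (e.g., 93 and 94) of one
    object with the averages obtained from previously tested objects;
    the test is negative if any position deviates from its average by
    more than the threshold distance.
    """
    dev = np.linalg.norm(np.asarray(positions, float)
                         - np.asarray(mean_positions, float), axis=-1)
    return bool(np.all(dev <= threshold))
```

The same helper applies to the expected-value comparison below by passing design-phase positions instead of the running averages.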
- a comparison with expected data 164 is here described.
- at least one of the collected values may, for example, be compared with expected values 164 (e.g., values which are determined during a design phase). If, for one object, at least one of the collected values deviates from the expected value by a difference greater than a determined threshold, the test has a negative result and the object may be discarded (or, in any case, quality information 165 is determined by associating the object with the negative result).
- the testing block 162 may compare the pixel positions 93 and 94 (or their distance) with the expected pixel positions (corresponding to the same image pixel position π1 ). If at least one of the pixel positions 93 and 94 deviates from the expected position by a distance greater than a determined threshold, then the test is assumed to have a negative result. If each of the pixel positions 93 and 94 is within a determined threshold distance from the expected position, then the test is assumed to have a positive result.
- each criterion may be associated to a comparison of a particular value of the object with a related threshold (an expected or statistically-obtained value).
- each criterion may provide a score associated to the deviation of the value from an expected or statistically-obtained value.
- a final rating may be obtained by summing or averaging the scores of the object. The final rating may be compared to a final threshold which provides the final information on the positive or negative result.
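The rating step above might be sketched as (averaging variant; the function name is illustrative):

```python
def passes_final_rating(scores, final_threshold):
    """Combine per-criterion scores into a final pass/fail result.

    Each score measures the deviation of one collected value from its
    expected or statistically-obtained counterpart, so a smaller
    average means a better match with the reference data.
    """
    rating = sum(scores) / len(scores)   # averaging the scores
    return rating <= final_threshold     # positive result if under threshold
```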
- the at least one value and/or the metrics and/or data associated to the image pixel positions, reference codes, relationships, incremental values, etc. may be displayed on a display. In other cases, they may be used to trigger an alarm, e.g., when the metrics are out of an expected interval or deviate too much from the statistical values.
- examples may be implemented as a computer program product with program instructions, the program instructions being operative for performing one of the methods when the computer program product runs on a computer.
- the program instructions may for example be stored on a machine readable medium.
- an example of a method is, therefore, a computer program having program instructions for performing one of the methods described herein, when the computer program runs on a computer.
- a further example of the methods is, therefore, a data carrier medium (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
- the data carrier medium, the digital storage medium or the recorded medium are tangible and/or non-transitory, rather than signals which are intangible and transitory.
- a further example of the method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
- the data stream or the sequence of signals may for example be transferred via a data communication connection, for example via the Internet.
- a further example comprises a processing means, for example a computer, or a programmable logic device performing one of the methods described herein.
- a further example comprises a computer having installed thereon the computer program for performing one of the methods described herein.
- a further example comprises an apparatus or a system transferring (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver.
- the receiver may, for example, be a computer, a mobile device, a memory device or the like.
- the apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
- a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein.
- a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein.
- the methods may be performed by any appropriate hardware apparatus.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Length Measuring Devices By Optical Means (AREA)
Claims (16)
- An apparatus (50, 52, 110, 160) for deriving properties of an object (53) using deflectometry on the basis of:
  - a sequence (170) of reference patterns (70) displayed by a reference device (56), wherein the sequence of reference patterns (70) is displayed by the reference device (56) at subsequent time instants, so that light emitted by the reference device (56) impinges on the object (53);
  - a sequence (190) of images (90), wherein the images (90) are obtained at time instants from the light emitted by the reference device (56) according to the reference patterns (70) and are transported, via optical paths (57a, 57b, 57', 57", 57‴), to an acquisition device (54), the images including reflections from the internal and external surfaces (53a, 53b) of the object (53),
  wherein the apparatus is configured, for at least one image pixel position in each of the images (90) of the sequence (190) of images (90), to:
  - perform a processing of images so as to obtain a similarity function (121) between:
    ∘ a plurality of reference codes, each reference code being associated with a reference pixel position in the reference patterns, each reference code carrying information on the evolution of the optical radiation intensity of the reference device position (p), modulated by the value of the reference pattern pixel at the reference pixel position during the time instants; and
    ∘ an acquired code associated with the image pixel position, the acquired code carrying information on the evolution, in the sequence of images, of the optical radiation intensity acquired for the image pixel position;
  - find (122):
    ∘ at least one peak in the similarity function, the peak being a local peak or a global peak among the values of the similarity function; and
    ∘ for at least one found peak, a maximizing reference pixel position (93, 94) associated with the at least one peak; and
  - for each found peak (123):
    ∘ associate the maximizing reference pixel position with the image pixel position,
  wherein the apparatus is further configured to determine, for at least one image pixel position, a relationship between:
  - the optical radiation intensity at the image pixel position (91); and
  - the optical radiation intensity or intensities at the at least one maximizing reference pixel position or positions (93, 94),
  wherein the relationship is based on:
  - at least one coefficient associated with:
    ∘ the reference code or codes associated with the maximizing reference pixel position (93, 94); and
    ∘ the acquired code associated with the image pixel position; and
  - at least one constant term based on the acquired code and the at least one coefficient, and
  wherein the apparatus is configured, for the at least one image pixel position, to:
  use obtained parameters of n>1 found peaks in the similarity function with the highest peak similarity values, obtained at maximizing reference pixel positions q1,..., qn, to obtain the relationship g(π) = A + B1·s(q1) + ... + Bn·s(qn) between the reference codes s(q1), ..., s(qn) and the acquired code g(π) at the at least one image pixel position π, where g(π) is the intensity at the image pixel position π, s(q1), ..., s(qn) are the intensities at the maximizing reference pixel positions q1,..., qn, B1, ..., Bn are coefficients, and A is a constant term.
- The apparatus according to claim 1, configured, for the at least one image pixel position, to:
  use different properties of the similarity function with respect to peaks obtained at different peak positions, in order to estimate the uncertainty in the determination of a maximizing reference pixel position.
- The apparatus according to any of the preceding claims, configured to:
  generate at least one reference pattern (70) as an image with reference values obtained by a stochastic method, and/or stochastically generate the at least one reference pattern (70), and/or
  wherein at least one reference pattern (70) is generated as an image with a spatial spectrum having a maximum cutoff frequency associated with a minimum spatial correlation length.
- The apparatus according to any of the preceding claims, configured to:
  for at least one time instant, adaptively create at least one pattern (70), to be displayed at a time instant, on the basis of the codes acquired at previous time instants.
- The apparatus according to any of claims 4-5, configured to:
  adaptively select, on the basis of the obtained codes, a minimum spatial correlation length parameter of the reference patterns so as to define a band-limited spatial spectrum for the subsequent patterns.
- The apparatus according to any of claims 4-6, configured to:
  define the minimum spatial correlation length parameter of the reference patterns such that the computational effort expended on data processing does not exceed a selected or predetermined limit or threshold.
- The apparatus according to any of claims 4-7, configured to:
  for at least one time instant, create a reference pattern (70) or a sequence of reference patterns under at least one of the following conditions:
  - the autocorrelation function of the reference codes falls to zero away from zero displacement;
  - the autocorrelation function of the reference codes has a maximum peak at zero displacement;
  - the peak of the autocorrelation function of the reference codes has a finite width at zero displacement.
- The apparatus according to any of the preceding claims, configured to obtain a minimum spatial correlation length parameter; and
  further configured, after a peak in the similarity function and the associated maximizing reference pixel position have been found, to search for a local peak among the reference pattern pixel positions whose distance from the maximizing reference pixel position is greater than the minimum spatial correlation length parameter.
- The apparatus according to any of the preceding claims, configured to:
  define reference patterns which are stochastically independent of each other for different time instants.
- The apparatus according to any of the preceding claims, configured to:
  define, for different pixel positions in the same pattern out of a predetermined interval, reference codes which are mutually uncorrelated.
- The apparatus according to any of the preceding claims, configured to:
  modify the relative position between the object and the acquisition device and/or reference device on the basis of acquired codes associated with the plurality of image pixel positions, so as to minimize the number of pixel positions for which the peaks in the respective similarity functions overlap, and/or to minimize occurrences in which the individual peak identification fails.
- The apparatus according to any of the preceding claims, configured to:
  define and/or display at least one of the reference patterns using a physical random process involving at least one of the following effects: laser speckles, flame, fumes, clouds, surface waves, or turbulence.
- The apparatus according to any of the preceding claims, further configured to collect values and/or metrics associated with:
  - the at least one image pixel position and the associated maximizing reference pixel position(s); and/or
  - one or more reference codes and/or one or more obtained codes; and/or
  - one or more relationships between the optical radiation intensity or intensities of the reference pixel(s) and the optical radiation intensities of the acquired pixel(s); and/or
  - one or more similarity values; and/or
  - incremental values associated with any of the data or information or numerical values above,
  so as to compare the collected values with statistical threshold values and/or expected threshold values.
- A deflectometric method for deriving properties of an object (53) on the basis of:
  - a sequence (170) of reference patterns (70) displayed by a reference device (56), wherein the sequence of reference patterns (70) is displayed by the reference device (56) at subsequent time instants, so that light emitted by the reference device (56) impinges on the object (53);
  - a sequence (190) of images (90), wherein the images (90) are obtained at time instants from the light emitted by the reference device (56) according to the reference patterns (70) and are transported, via optical paths (57a, 57b, 57', 57", 57‴), to an acquisition device (54), the images including reflections from the internal and external surfaces (53a, 53b) of the object (53),
  wherein the method comprises, for at least one image pixel position in the sequence (190) of images (90), the following steps:
  - performing a processing of images so as to obtain a similarity function (121) between:
    ∘ a plurality of reference codes, each reference code being associated with a reference pixel position in the reference patterns, each reference code carrying information on the evolution of the optical radiation intensity of the reference device position (p), modulated by the value of the reference pattern pixel at the reference pixel position during the time instants; and
    ∘ an acquired code associated with the image pixel position, the acquired code carrying information on the evolution, in the sequence of images, of the optical radiation intensity acquired for the image pixel position;
  - finding (122):
    ∘ at least one peak in the similarity function; and
    ∘ for at least one found peak, a maximizing reference pixel position (93, 94); and
  - for each found peak (123):
    ∘ associating the maximizing reference pixel position with the image pixel position,
  wherein the method further comprises determining, for at least one image pixel position, a relationship between:
  - the optical radiation intensity at the image pixel position (91); and
  - the optical radiation intensity or intensities at the at least one maximizing reference pixel position or positions (93, 94),
  wherein the relationship is based on:
  - at least one coefficient associated with:
    ∘ the reference code or codes associated with the maximizing reference pixel position (93, 94); and
    ∘ the acquired code associated with the image pixel position; and
  - at least one constant term based on the acquired code and the at least one coefficient, and
  wherein the method further comprises, for the at least one image pixel position, the following step:
  using obtained parameters of n>1 found peaks in the similarity function with the highest peak similarity values, obtained at maximizing reference pixel positions q1,..., qn, to obtain the relationship g(π) = A + B1·s(q1) + ... + Bn·s(qn) between the reference codes s(q1), ..., s(qn) and the acquired code g(π) at the at least one image pixel position π, where g(π) is the intensity at the image pixel position π, s(q1), ..., s(qn) are the intensities at the maximizing reference pixel positions q1,..., qn, B1, ..., Bn are coefficients, and A is a constant term.
- A non-transitory storage device storing instructions which, when executed by a processor, cause the processor to perform the method according to claim 15.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18177112.2A EP3582183B1 (de) | 2018-06-11 | 2018-06-11 | Deflektometrische techniken |
PCT/EP2019/065028 WO2019238583A1 (en) | 2018-06-11 | 2019-06-07 | Deflectometric techniques |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18177112.2A EP3582183B1 (de) | 2018-06-11 | 2018-06-11 | Deflektometrische techniken |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3582183A1 EP3582183A1 (de) | 2019-12-18 |
EP3582183B1 true EP3582183B1 (de) | 2020-12-30 |
Family
ID=62630949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18177112.2A Active EP3582183B1 (de) | 2018-06-11 | 2018-06-11 | Deflektometrische techniken |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3582183B1 (de) |
WO (1) | WO2019238583A1 (de) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021001366A1 (de) | 2021-03-11 | 2022-09-15 | Friedrich-Schiller-Universität Jena Körperschaft des öffentlichen Rechts | Verfahren zur 3D-Messung von Oberflächen |
DE102022113090B4 (de) | 2022-05-24 | 2024-03-21 | Rodenstock Gmbh | Verfahren zur optischen Vermessung eines zumindest teilweise transparenten Probekörpers |
-
2018
- 2018-06-11 EP EP18177112.2A patent/EP3582183B1/de active Active
-
2019
- 2019-06-07 WO PCT/EP2019/065028 patent/WO2019238583A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
EP3582183A1 (de) | 2019-12-18 |
WO2019238583A1 (en) | 2019-12-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11029144B2 (en) | Super-rapid three-dimensional topography measurement method and system based on improved fourier transform contour technique | |
US10340280B2 (en) | Method and system for object reconstruction | |
US10234561B2 (en) | Specular reflection removal in time-of-flight camera apparatus | |
US10152800B2 (en) | Stereoscopic vision three dimensional measurement method and system for calculating laser speckle as texture | |
US10302424B2 (en) | Motion contrast depth scanning | |
CA2666256C (en) | Deconvolution-based structured light system with geometrically plausible regularization | |
CN109631798B (zh) | 一种基于π相移方法的三维面形垂直测量方法 | |
EP3582183B1 (de) | Deflektometrische techniken | |
Kim et al. | Acquiring axially-symmetric transparent objects using single-view transmission imaging | |
Lyu et al. | Structured light-based underwater 3-D reconstruction techniques: A comparative study | |
Zhu et al. | Invalid point removal method based on error energy function in fringe projection profilometry | |
WO2005100910A1 (ja) | 3次元形状計測方法及びその装置 | |
RU2573767C1 (ru) | Устройство трехмерного сканирования сцены с неламбертовыми эффектами освещения | |
Qiao et al. | Snapshot interferometric 3D imaging by compressive sensing and deep learning | |
Liu et al. | Investigation of phase pattern modulation for digital fringe projection profilometry | |
CN112325799A (zh) | 一种基于近红外光投影的高精度三维人脸测量方法 | |
Zhang et al. | BimodalPS: Causes and corrections for bimodal multi-path in phase-shifting structured light scanners | |
CN109781153A (zh) | 物理参数估计方法、装置和电子设备 | |
CN113592995B (zh) | 一种基于并行单像素成像的多次反射光分离方法 | |
Kammel et al. | Topography reconstruction of specular surfaces | |
Iwaguchi et al. | Efficient light transport acquisition by coded illumination and robust photometric stereo by dual photography using deep neural network | |
Birch et al. | 3d imaging with structured illumination for advanced security applications | |
Liu et al. | High-speed 3D surface measurement of rear lamp housing by automatic digital fringe projection system | |
Uhlig | Light Field Imaging for Deflectometry | |
김정희 | Phase error correction for a robust surface reflectivity-invariant 3-D scanning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
17P | Request for examination filed |
Effective date: 20200617 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
INTG | Intention to grant announced |
Effective date: 20200721 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602018011238 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1350678 Country of ref document: AT Kind code of ref document: T Effective date: 20210115 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: FI Ref legal event code: FGE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210331 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210330 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1350678 Country of ref document: AT Kind code of ref document: T Effective date: 20201230 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210330 |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20201230 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210430 |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210430 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602018011238 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
26N | No opposition filed |
Effective date: 20211001 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20210630 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210630 |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210611 |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210630 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210430 |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210630 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210630 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20220611 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220611 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230524 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20201230 |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20180611 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20230620 Year of fee payment: 6 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: LU Payment date: 20230619 Year of fee payment: 6 |
Ref country code: FI Payment date: 20230621 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20201230 |