WO2012007561A2 - Measurement system of a light source in space - Google Patents

Measurement system of a light source in space

Info

Publication number
WO2012007561A2
Authority
WO
WIPO (PCT)
Prior art keywords
light source
component
shadow
imaging device
measurement system
Prior art date
Application number
PCT/EP2011/062104
Other languages
French (fr)
Other versions
WO2012007561A3 (en)
Inventor
Peter Masa
Edoardo Franzi
David Hasler
Eric Grenet
Original Assignee
CSEM Centre Suisse d'Electronique et de Microtechnique SA - Recherche et Développement
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CSEM Centre Suisse d'Electronique et de Microtechnique SA - Recherche et Développement filed Critical CSEM Centre Suisse d'Electronique et de Microtechnique SA - Recherche et Développement
Priority to EP11749753.7A priority Critical patent/EP2593755B1/en
Priority to US13/810,455 priority patent/US9103661B2/en
Priority to KR1020137000887A priority patent/KR101906780B1/en
Publication of WO2012007561A2 publication Critical patent/WO2012007561A2/en
Publication of WO2012007561A3 publication Critical patent/WO2012007561A3/en

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/14 Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication
    • G01C3/08 Use of electric radiation detectors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication
    • G01C3/08 Use of electric radiation detectors
    • G01C3/085 Use of electric radiation detectors with electronic parallax measurement
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D5/00 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable
    • G01D5/26 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light
    • G01D5/32 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light with attenuation or whole or partial obturation of beams of light
    • G01D5/34 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light with attenuation or whole or partial obturation of beams of light the beams of light being detected by photocells
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D5/00 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable
    • G01D5/26 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light
    • G01D5/32 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light with attenuation or whole or partial obturation of beams of light
    • G01D5/34 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable characterised by optical transfer means, i.e. using infrared, visible, or ultraviolet light with attenuation or whole or partial obturation of beams of light the beams of light being detected by photocells
    • G01D5/36 Forming the light into pulses
    • G01D5/38 Forming the light into pulses by diffraction gratings
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/781 Details
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782 Systems for determining direction or deviation from predetermined direction
    • G01S3/783 Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782 Systems for determining direction or deviation from predetermined direction
    • G01S3/783 Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems
    • G01S3/7835 Systems for determining direction or deviation from predetermined direction using amplitude comparison of signals derived from static detectors or detector systems using coding masks

Definitions

  • the present invention relates to the field of absolute positioning devices, in particular to the field of three or more degrees of freedom measurement systems. Examples of such devices are pointing devices for computers or measuring devices for tooling.
  • the present invention relates to the field of absolute positioning devices where the measured position ranges from a few nanometers to a few meters. It relates to positioning devices that measure the position of light sources in space.
  • Positioning devices are well known in the art, and are used across several technical domains. In the metrology domain, positioning devices are mostly found as rotary encoders, as in WO2006107363A1, or linear encoders, as in US5,563,408. These encoders output one-dimensional information about the position, and operate with an excellent resolution — of the order of 1/10 of a micron or of 1/10'000 of a degree. To reach a positioning with several degrees of freedom, these encoders can be part of a chain, for example in a robotic arm, with the disadvantage that the more encoders are used, the more the positioning resolution degrades. State-of-the-art robotic arm positioning systems today have a resolution of, at best, one micron. These encoders have in common the fact that the sensing element measures the position of a grating with respect to the sensing element. It implies that either the sensing element or the grating is attached to the object the position of which has to be measured.
  • More elaborate encoders, as disclosed in EP2169357A1, can precisely measure the two dimensional position of a camera with respect to a grating. These encoders are mostly targeted at X-Y positioning tables in the tooling industry, and can achieve sub-micron resolution.
  • DE20121855U1 discloses a system to measure the position in space of an object carrying 3 light sources, by measuring the projection of a T-shaped device on a 2D sensitive area.
  • the method suffers from two major drawbacks: it does not explain how the system can work in a natural environment with several other light sources, and it has a limited precision. Indeed, even if it were possible to build a perfect device with infinite mechanical precision, the resulting measurement precision on the sensitive surface would be at best of the order of the wavelength, i.e. half a micron.
  • An object of the present invention is to alleviate the limitation of the prior art by disclosing a device that measures the position of one or several light sources in space, with a resolution that exceeds the wavelength by at least one order of magnitude while being robust to external illumination sources.
  • the present invention is conceived for mass production, and can lead to a very economic system compared to the state of the art.
  • the disclosed invention is a measurement system that comprises at least one imaging device composed of a plurality of sensitive pixels disposed in at least one dimension; and at least one punctual light source; and at least one component— a grating or a microlens array— arranged to cast a shadow on the imaging device; the position of the component being fixed with respect to the imaging device. It also contains some computation means.
  • the principle of measurement, for one light source, is the following.
  • the component casts a shadow on the imaging device.
  • the imaging device records the image of the shadow.
  • the image of the shadow is used to compute the position of the shadow with respect to the component.
  • the position of the shadow is used to compute the elevation of the light source.
  • the position of the shadow is used to compute the elevation of the light source along the first and along the second dimension of the sensor.
  • the three dimensional position of the light source can be obtained using well known triangulation rules.
  • the component that casts a shadow is composed of repetitive patterns.
  • This repetitive property spreads the information of the light position over a large area on the sensor, and allows the system to break the fundamental precision limit associated with any device that measures a position based on a single measurement resulting from light propagation.
  • the component can be advantageously realized as a grating on a planar surface and must include a distinctive element.
  • the grating must contain parts that are transparent to the light and parts that are opaque to the light.
  • the component can also be realized as an array of microlenses realized on a planar surface. The planar property brings the advantage of a simple elevation computation and a simple fixation over the imaging device.
  • the grating can be printed using a standard lithography process, and the microlens array can be produced by hot embossing.
  • the shadow of the component, recorded by the imaging device must exhibit the repetitive patterns and the distinctive elements.
  • the position of the shadow is computed using the position of the distinctive element, and is refined using the positions of the repetitive patterns. This refinement in the position is very important and gives an excellent precision to the device. Without the precision given by this refinement in the position, the device would be of very little practical use.
  • FIG. 3 shows the use of three one-dimensional sensors to compute the position of a light source
  • FIG. 6 shows an embodiment of the two-dimensional grating printed on the surface above the sensor with an interlaced absolute code
  • FIG. 7 shows an embodiment of the two-dimensional component realized on the surface above the sensor with one missing pattern as distinctive element
  • FIG. 8 shows an embodiment of the two-dimensional grating printed on the surface above the sensor with a cross as distinctive element
  • FIG. 10 shows an embodiment using two sensors to compute the position of a light source
  • FIG. 11 shows the principle of the computation of a retroreflector position using a virtual light source position
  • FIG. 12 shows the computation of the position of two frequency selective retroreflectors
  • this component will be a one-dimensional grating. Then we will present how this system can be extended to a two-dimensional sensor, to more than one light source, and finally how to handle light sources from the ambient illumination.
  • a light source 101 produces light rays 102, which can be considered as being locally parallel rays 103 in the sensor proximity.
  • a grating 104 is used to let only part of the light reach the sensor 105.
  • a sensor records the shadow pattern 106, which is an approximate replica of the grating 104.
  • the grating contains repetitive elements 108 and a distinctive element 107, which in this example is just a lack of one of the repetitive elements.
  • Computation means are used to compute the displacement ΔX of the shadow with respect to the grating. Using the knowledge of the measurement system's dimensions, it is straightforward to compute the elevation. The elevation is shown by the angle 109 in figure 1.
  • M is the pixel pitch
  • s(x) is the shadow pattern 106 recorded by the camera
  • x is the pixel coordinate
  • ΔX represents the position of the shadow with respect to the imager or vice-versa, depending on the choice of the coordinate system; accordingly, the sign of dX can change.
  • the shadow can be encoded as a large or as a small value depending on the imager — accordingly, the value of dX can shift by ΔP/2.
  • the person skilled in the art will have no difficulty setting these parameters by trial and error. The closer the light source is, the larger the ΔP value is.
  • ΔP can be measured by correlating the shadow image with itself, and finding the distance to the first correlation peak.
  • the sums of equation (2) are performed on complete sine and cosine periods.
  • the x range can be set from 0 to a multiple of ΔP/M minus one.
  • the pixel pitch of the imager may preferably divide the distance from one repetitive pattern to the next, i.e. ΔP/M may preferably be an integer.
  • the vertical distance Z of the light source from the sensor, measured perpendicularly from the sensor surface, it is possible to compute two (or more) elevation values, from two (or more) distinct locations of the imager, and combining those to obtain the distance Z.
  • the distance ΔX is computed in two locations, resulting in ΔX1 and ΔX2.
  • the resulting position P of the light source is computed as
  • the distance Z can also be computed by computing the magnification of the shadow pattern with respect to the pattern realized on the component; for a grating it means computing a value ΔP on the shadow and a value ΔP2 on the grating, and comparing the two values.
  • the grating can be made with a chromium-plated glass. The light is blocked at the locations where chromium is deposited, and can go through the glass elsewhere.
  • the preferred embodiment is the one using opaque regions and holes for implementing transparent regions.
  • a grating made of nickel and holes may be used.
  • Today, nickel plates can be manufactured at low cost, with thicknesses around 30 microns, and with a hole accuracy of one micron over a couple of centimetres. It is preferred to implement transparent regions by holes instead of by glass, because the light goes straight through the holes, while it is slightly deviated by a glass layer, according to Snell's law.
  • M imaging devices and M components, where M is greater than or equal to two.
  • Each component is attached between the light source and its respective imaging device, the relative position between each imaging-component couple being fixed and defined.
  • the imaging devices are non-coplanar.
  • equation (3) is applied for every imaging device, and defines a line in space (because only two dimensions are fixed by equation 3). The point closest to the two lines computed for the two imaging devices is the position of the light source 101.
  • Figure 3 shows an example setup, where the three dimensional position of a light source 101 is computed from 3 linear devices.
  • the elevation is computed for each sensor.
  • the elevation value defines a plane in space for every sensor, which is depicted by the light ray 102 and the intersection of said plane with the sensor plane 202.
  • the position of the light source 101 is the intersection of these 3 planes. These 3 planes intersect in a single point if the sensors are not coplanar.
  • the position of the light source 101 is chosen to be the one closest to every plane derived from the elevation computed for every linear device.
  • by closest we mean the one whose sum of distances to every said plane is minimum.
  • the invention can be carried out advantageously using two- dimensional imaging devices.
  • the system can compute the elevation of the light source along the lines and the elevation of the light source along the columns from the repetitive patterns and from the distinctive element present in the image delivered by the two-dimensional imaging device.
  • the computation of the elevation should use most of the pixels that record said image of the shadow in the area used for the estimation of the elevation values. By most we mean at least 80%, preferably 90% and most preferably 100% of the pixels.
  • The implementation according to Equations (1) and (2) follows this principle: it uses every pixel value in the refinement of the position estimation. For a given physical setup, the precision limit will be given by the shot noise, which decreases with the number of photons recorded by the imaging device. It is thus important to use as many pixel values as possible in the computation to obtain an excellent precision. Note that using 100% of the pixels in an implementation that computes the elevation along the lines and the elevation along the columns may mean using 50% of the pixels for the computation of the elevation along the lines and the other 50% of the pixels for the computation of the elevation along the columns.
  • This splitting of the pixels reduces the computation complexity and does not reduce the final precision as long as every considered pixel is used in the overall computation.
  • the splitting of the pixels should be balanced; in other words, when splitting 100% of the pixels, 50% (± 5%) must be used along the columns and the other 50% (± 5%) along the rows (the sum of both percentages must sum up to 100%).
  • Figure 4 shows the image of a grating taken by a two-dimensional sensor.
  • the distinctive element is the set of diagonal lines 401
  • the repetitive pattern is a square 402.
  • the grid of repetitive pattern is aligned to the grid of pixels of the sensor.
  • the elevation of the light source along the lines of the sensor is obtained by computing the sum of the pixel values over the lines of the image, and by using the resulting signal 106 as in the one-dimensional case.
  • the elevation of the light source along the columns of the sensor is obtained in a similar manner by summing the pixel values over the columns of the image.
  • Figure 5 shows an example of using a single sensor for measuring the three dimensional position of the light source.
  • the image is separated into two zones 501 and 502 by the computation means.
  • each zone defines a line in space where the light source is located. This line crosses the center point of each zone (501 or 502). Ideally, the light source location is the intersection of these two lines. Practically, because of measurement noise, these lines do not intersect.
  • the position of the light source is estimated as the location in space that is the closest to both lines. In other words, the sum of the distances from said location to every line is minimal.
  • the position of the distinctive element is computed from the signal resulting from the sum over the lines and columns of the images, for example with the patterns of figures 7 and 8.
  • only the phase distance dX is computed from said signals; the estimate of the absolute position (ΔX) is then computed directly on the picture, as in the examples of figures 4 and 6.
  • the sum over the lines and over the columns may exhibit a repetitive pattern.
  • the repetitive pattern may be repeated at regular space intervals, and always have the same shape and size, as in the examples of Figures 6 to 8.
  • Figure 6 shows an example of a grating that uses a two-dimensional code as distinctive element, as described in EP2169357A1, which is interlaced with the repetitive patterns.
  • Diagonal lines represent the elements of the code: a diagonal at 45 degrees represents a 1 and a diagonal at -45 degrees represents a 0.
  • the code is characterized in that any squared subset of the code, which contains at least three by three elements of the code, is unique.
  • any sub-image that contains at least three times three (3x3) elements of the code can be used for the computation of the position of the shadow.
  • the advantage of using such a code is that the distinctive element is always present, no matter what part of the grating is used. This confers some flexibility to the system, even if such an interlaced code causes a slight degradation in the precision of the position compared to solutions that use the grating of figure 7 or 8.
  • the code must be read directly from the image, and cannot be read from the sum over the lines or columns.
  • the element of figure 7 can be implemented using a microlens array.
  • the component pattern is a microlens and the distinctive element is a missing microlens region.
  • Each black dot represents the position of a micro-lens.
  • the microlenses are more expensive to produce than a conventional grating, but generate a shadow pattern with more light, and thus allow for a faster measurement system.
  • the diffraction phenomenon, also known as the Talbot effect, has a substantially smaller influence on the shadow pattern. This last advantage allows for more flexibility in the choice of the distance between the element and the imaging device.
  • if the microlens array cannot have a missing microlens in the middle of the array, it is possible to use a regular and complete rectangular microlens array, of a size that does not completely cover the imaging device; the distinctive element is thus embodied by the border of the microlens array.
  • the embodiment is also shown in figure 16, with microlenses 1601, which generate light on the imaging device 1604 at positions 1603, and shadow at positions 1602.
  • the system measures the three dimensional position of two punctual light sources emitting light at distinct wavelengths, by using two filters.
  • One of said filters is opaque at the wavelengths of one light source, and transparent at the wavelength of the other light source, and vice versa for the other of said filters.
  • the light sources are monochromatic and each filter is transparent only at the wavelength of its associated light source.
  • a filter is never 100% opaque or 100% transparent; each filter is chosen so as to maximize its transparency for one light source while maximizing its opacity for the other light source.
  • the filters that implement this tradeoff are said to be matched to the wavelengths of the light sources.
  • the filters are arranged so as to cover distinct locations of the component, each filter covering a surface at least nine times as big as the surface of a single pattern of the component.
  • by filter we refer to the optical property of the material that embodies the surface used for filtering the light. According to this definition, we can place the same filter on several distinct locations of the sensor.
  • Figure 9 shows a system with two filters 901 and 904.
  • the filter 901 covers two areas 902 and 903 of the sensor, while filter 904 covers two other areas 905 and 906 of the sensor. Every area under the filter is treated as a separate image by the computation means.
  • the computation of the elevation of the light source along the first dimension and along the second dimension is performed separately for each of the filter areas 902, 903, 905 and 906, by taking the corresponding image and performing the computation as described before.
  • the elevation values of areas 902 and 903 are used to compute the position of the first light source, while the elevation values of areas 905 and 906 are used to compute the position of the second light source, as sketched below.
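A minimal sketch of this wavelength-multiplexed processing (the area coordinates, helper names and the `elevations_of`/`triangulate` callables are illustrative assumptions, not from the patent):

```python
import numpy as np

# Areas 902/903 lie under filter 901 (passing source 1) and areas
# 905/906 under filter 904 (passing source 2); each area is treated
# as a separate image, and each per-filter pair is triangulated.
AREAS = {
    1: [(0, 0, 64, 64), (128, 0, 64, 64)],    # filter 901: areas 902, 903
    2: [(64, 0, 64, 64), (192, 0, 64, 64)],   # filter 904: areas 905, 906
}

def positions_of_two_sources(frame, elevations_of, triangulate):
    """Compute one 3-D position per light source from its filter areas."""
    positions = {}
    for source, areas in AREAS.items():
        elevations = [elevations_of(frame[y:y + h, x:x + w])
                      for (x, y, w, h) in areas]
        positions[source] = triangulate(elevations)
    return positions
```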
  • each device 1001 or 1002 is composed of an imaging device and a component, which is attached between the light source and its imaging device. As described before, the relative position between the imaging device and its component is fixed and defined. Devices 1001 and 1002 share the computation means that are designed to compute the three-dimensional position of the light source.
  • the computation means compute the elevation of the light source along the lines and the elevation of the light source along the columns from the repetitive patterns and from the distinctive element present in the image delivered by the two-dimensional sensor for every device 1001 and 1002. These elevation values define two lines in space. The point that is the closest to these two lines is the three-dimensional position of the light source.
  • the measurement system of figure 10 can be implemented using an arbitrary number (>1) of imaging-component couples: the position of the light source is then estimated as the point in space whose sum of distances to every line resulting from an imaging-component couple is minimal.
  • the system measures the position of two or more light sources by temporal modulation.
  • the light sources are switched on and off according to a predefined temporal sequence. For example, for two light sources, the time can be divided into three periods p1, p2 and p3. The first light source is switched on during period p1 and switched off during periods p2 and p3; the second light source is switched on during period p2 and switched off during periods p1 and p3.
  • the computation means can detect when all the lights are switched off, and thus synchronize themselves with the light sources.
  • these computation means perform a position estimation during period p1, which corresponds to the position of the first light source, and a position estimation during period p2, which corresponds to the position of the second light source.
  • the image taken during period p3 is not influenced by the light sources the position of which has to be measured.
  • the image recorded during period p3 can be subtracted from the images taken during periods p1 and p2, resulting in a new image, which is used as a replacement for the image of the shadow in the computation of the position.
  • This last computation can mitigate the influence of spurious light sources in the scene on the estimation of the position of the light source of interest.
  • This principle can be extended to an arbitrary number of light sources; the temporal multiplexing of signals, as exemplified here, is well known in the field of telecommunications. In particular, it can also be extended to a single light source, which is switched on and off, to mitigate the effect of spurious light sources in the environment. A schematic sketch of the two-source schedule follows.
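The sketch below shows one possible capture schedule (the `capture`, `set_lights` and `locate` callables are placeholders for the imaging, light-control and position-estimation stages, which the patent does not specify at this level of detail):

```python
# One multiplexing cycle: p1 -> only source 1 on, p2 -> only source 2 on,
# p3 -> all sources off, giving a pure background frame.
def measure_two_sources(capture, set_lights, locate):
    set_lights(on=[1])
    img_p1 = capture()            # period p1: source 1 only
    set_lights(on=[2])
    img_p2 = capture()            # period p2: source 2 only
    set_lights(on=[])
    img_p3 = capture()            # period p3: ambient background only
    # Subtracting the background frame suppresses spurious ambient lights.
    return locate(img_p1 - img_p3), locate(img_p2 - img_p3)
```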
  • the light source is modulated using a modulation circuit.
  • the light source can be advantageously modulated to deliver a luminance L which follows a sinusoidal law of the form L(t) = P + Q·sin(2π·f·t), where
  • t is the time
  • P and Q are constants
  • f is the modulation frequency of the light source.
  • P must be greater than or equal to Q, preferably slightly greater than Q.
  • three images can be taken at times t1, t2 and t3, resulting in images I1, I2 and I3, which are combined into a new image In.
  • the image In is guaranteed to be non-zero, independently of the choice of t1.
  • the measuring device only needs to know the oscillation frequency f, but does not need to be synchronized with the light source modulation.
  • the new image In is independent of any non-oscillating light source in the environment.
  • the new image In can be made independent of a background light source oscillating at 100Hz or at 120Hz.
  • (t2 − t1) must be a multiple of 1/100 second
  • (t3 − t1) must also be a multiple of 1/100 second.
  • the oscillation frequency f is set to a multiple of 3 times the background frequency.
  • (t2 − t1) must be a multiple of 1/120 second
  • (t3 − t1) must also be a multiple of 1/120 second.
  • 100Hz and 120Hz are particularly important frequencies, because incandescent light sources oscillate at twice the frequency of the power lines, which is set to 50Hz or 60Hz in most countries. A sketch of one possible combination of I1, I2 and I3 follows.
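The exact combination of I1, I2 and I3 into In appears only as an image in the original, so the sketch below uses one standard combination, with the frames taken a third of a modulation period apart: it cancels any illumination that is identical across the three frames (a constant background, or a 100 Hz background when the frame spacing is a multiple of 1/100 s), which is the property claimed for In. The frequency value is an illustrative choice:

```python
import numpy as np

F_MOD = 100.0 / 3.0        # modulation frequency f in Hz (illustrative);
                           # frames 1/(3f) = 1/100 s apart are then 120
                           # degrees apart in phase, and (t2-t1), (t3-t1)
                           # are multiples of 1/100 s, rejecting 100 Hz light.

def frame_times(t1, f=F_MOD):
    """Three capture times spaced a third of a modulation period apart."""
    return t1, t1 + 1.0 / (3.0 * f), t1 + 2.0 / (3.0 * f)

def combined_image(i1, i2, i3):
    """For L(t) = P + Q*sin(2*pi*f*t) sampled at phases 120 degrees apart,
    the constant P cancels in the pairwise differences, and the result is
    proportional to Q regardless of the (unknown) starting time t1."""
    d12, d23, d31 = i1 - i2, i2 - i3, i3 - i1
    return np.sqrt(d12**2 + d23**2 + d31**2)
```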
  • the light source 101 is connected to the computing means and to the imaging device.
  • the light source can be placed next to the imaging device on the same circuit, or even in the middle of the imaging device.
  • This configuration requires only one power supply, and allows for a very convenient synchronisation between the image capture and the light emission. For example, it is easy to switch on the light, take an image, switch off the light, take another image, and combine both images to mitigate the influence of spurious lights in the environment.
  • a retroreflector 1103 is used to reflect the light back to the light source and to the sensor.
  • a retroreflector is an optical element that reflects any light ray back in a direction parallel to the incident direction, independently of the orientation of the retroreflector.
  • a retroreflector element may be made of three mirrors positioned at 90 degrees to one another, or may be a sphere with a particular index of refraction. If the ray travels in the air, the index of refraction of the sphere must be equal to 2.
  • the light source 101 must be placed close to the imaging device 1104 in order to allow the light to retro-reflect onto the imaging device. Applying the same computation method as described above yields the position of a virtual light source 1102.
  • the retroreflector position is the middle point between the computed virtual light source position 1102 and the physical light source position 101; it is thus straightforward to compute the retroreflector position from the virtual light source position, as sketched below.
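Since the retroreflector sits at the midpoint between the physical and the virtual source, the final step is a one-liner (positions as 3-vectors; a trivial sketch):

```python
import numpy as np

def retroreflector_position(source_pos, virtual_source_pos):
    """The retroreflector is the middle point between the physical light
    source and the virtual light source computed by the system."""
    return 0.5 * (np.asarray(source_pos) + np.asarray(virtual_source_pos))
```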
  • the system measures the three dimensional position of two retroreflectors 1203 and 1213 reflecting light at distinct wavelengths, by using two filters, as shown in figure 9.
  • One of said filters is opaque at the wavelengths of one retroreflector, and transparent at the wavelength of the other retroreflector, and vice versa for the other of said filters.
  • the filters must be matched to the retroreflector wavelengths.
  • the first filter must be transparent at the reflection wavelength of the first retroreflector 1203 and opaque at the reflection wavelength of the second retroreflector 1213.
  • the filters are arranged so as to cover distinct locations of the component, each filter covering a surface at least nine times as big as the surface of a single pattern of the component.
  • by filter we refer to the optical property of the material that embodies the surface used for filtering the light. According to this definition, we can place the same filter on several distinct locations of the sensor.
  • This embodiment can either use a single light source that emits at several wavelengths, or two light sources whose wavelength are matched to the retroreflectors and to the filters. The method can be extended to more than two retroreflectors.
  • the system measures the three dimensional position of one retroreflector 1103, by using two filters, as shown in figure 14, and two light sources 1301 and 1302 connected to the computing means and to the imaging device, as shown in figure 13.
  • One of said filters is opaque at the wavelength of one light source 1301, and transparent at the wavelength of the other light source 1302, and vice versa for the other of said filters.
  • the filters are arranged so as to cover distinct locations of the component, each filter covering a surface at least nine times as big as the surface of a single pattern of the component.
  • by filter we refer to the optical property of the material that embodies the surface used for filtering the light.
  • the elevation values of the virtual light source 1312 are computed using the image under the filter 901, which defines a line 1322 in space where said virtual light source is located. Since the retroreflector is located halfway between the light source and its virtual counterpart, the retroreflector is located on line 1332, which is parallel to line 1322 and halfway between the location of the elevation measurements 1300 and the location of the light source 1302.
  • similarly, the retroreflector is located on line 1331, which is parallel to line 1321 that defines the position of the virtual light source 1311.
  • the retroreflector three dimensional position is obtained by intersecting lines 1331 and 1332. If they do not intersect, the closest point to the two lines is chosen, as sketched below.
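A sketch of this last step (the line parameters are assumed to have already been derived from the elevation values; the midpoint of the common perpendicular is the standard "closest point to two lines"):

```python
import numpy as np

def halfway_line(measurement_loc, source_loc, virtual_line_dir):
    """Line on which the retroreflector lies: parallel to the virtual-source
    line, anchored halfway between the measurement location and the
    physical light source (lines 1331/1332 in the text)."""
    anchor = 0.5 * (np.asarray(measurement_loc) + np.asarray(source_loc))
    return anchor, np.asarray(virtual_line_dir)

def closest_point_two_lines(p1, u1, p2, u2):
    """Midpoint of the common perpendicular of lines x = p_i + t*u_i;
    this is the point closest to both lines (their intersection, if any)."""
    w = p2 - p1
    a, b, c = u1 @ u1, u1 @ u2, u2 @ u2
    d, e = u1 @ w, u2 @ w
    den = a * c - b * b                  # zero only for parallel lines
    t1 = (c * d - b * e) / den
    t2 = (b * d - a * e) / den
    return 0.5 * ((p1 + t1 * u1) + (p2 + t2 * u2))
```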
  • the distance between light sources 1301 and 1302 influences the precision of the height measurement of the retroreflector. On the one hand, these light sources must be close to the imaging device in order to receive some light from the retroreflector; on the other hand, they would conveniently be placed far away from each other to get the best height measurement precision.
  • a first measurement is performed with the light sources 1301 and 1302 close to the imaging device, followed by a measurement with these light sources placed further away. If the retroreflector is close, the light sources must be close to the imaging device, otherwise not enough light will be reflected onto the imaging device. If the retroreflector is far, the light sources can be placed further from the imaging device and still reflect some light onto the imaging device.
  • instead of displacing the light sources, the light sources are duplicated on the sensor, resulting in several identical copies of each light source positioned at increasing distances from the imaging device.
  • Figure 15 shows the light source 1301 duplicated as sources 1511 and 1521, and the light source 1302 duplicated as sources 1512 and 1522.
  • a displacement of the light sources is then equivalent to turning off light sources 1301 and 1302, and turning on light sources 1511 and 1512.
  • the light sources must be addressable individually by the computing means in order to turn them on and off in the way just described.
  • a well known example is the computation of the six degrees of freedom of an object using four light sources placed on a planar surface of that object: the six degrees of freedom of that object can be computed from the elevation values of the light sources — or equivalently from the (x,y) locations of their shadow — as described in R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", second edition, Cambridge University Press, 2003, section 8.1.1. A sketch follows.
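The planar four-point pose problem referenced here can be solved with a standard planar PnP solver; the sketch below uses OpenCV's IPPE solver (an illustrative implementation, not the patent's own method; the intrinsic matrix K mapping shadow coordinates to viewing rays and all numeric values are assumptions):

```python
import numpy as np
import cv2

# Known positions of the four light sources on the object plane (metres).
object_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], np.float32)
# Measured shadow (x, y) locations expressed as image points (illustrative).
image_pts = np.array([[312.4, 240.1], [402.7, 238.9],
                      [405.2, 330.6], [310.8, 332.2]], np.float32)
# Assumed intrinsics relating shadow coordinates to viewing directions.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# IPPE is a pose solver dedicated to four (or more) coplanar points.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_IPPE)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix + tvec = six degrees of freedom
```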
  • a component that casts a shadow on an imager is used to estimate the elevation of a light source in space.
  • the three-dimensional position of the light source can be computed. If the component contains repetitive patterns, the shadow position can be computed with a precision that reaches a small fraction of the wavelength of the light. If the pattern is aligned with the lines and columns of the imaging device, the computation can be performed from the sum over the lines and the sum over the columns of the pixel values, thus saving a substantial amount of computation and memory consumption.
  • the perturbation of other lights in the environment can be reduced by using a proper modulation of the light, or by using colour filters, or by using both.
  • the estimation of the position of several lights in the scene can be computed by using a temporal multiplexed code, or by using distinct wavelengths and matched filters on top of the imaging device.
  • two imaging devices with two elements can be used, and must be placed with a substantial distance between them.
  • the light source can be replaced by a retroreflector and by placing a second light source close to the imaging device.
  • the retroreflector needs no power supply, in contrast with the light source it replaces.
  • the synchronisation of the second light source with the imaging device is greatly simplified thanks to a direct connection between the two elements.
  • the setup with the retroreflector can also be implemented using two light sources, with two matched filters. The distance between the light sources determines the precision of the estimation of the third dimension. Finally, the distance between said two light sources can be increased to increase the third dimension precision.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Transform (AREA)
  • Computer Networks & Wireless Communication (AREA)

Abstract

The present invention discloses a system that measures the position of a light source in space using an imager and a transparent surface with a pattern on top. The pattern consists of a repetitive pattern and a distinctive element. The system achieves sub-micron precision. It also handles the measurement of several light sources simultaneously, and the measurement of the position of a retroreflector instead of the light source.

Description

Measurement system of a light source in space
Technical field
[0001] The present invention relates to the field of absolute positioning devices, in particular to the field of three or more degrees of freedom measurement systems. Examples of such devices are pointing devices for computers or measuring devices for tooling. In particular, the present invention relates to the field of absolute positioning devices where the measured position ranges from a few nanometers to a few meters. It relates to positioning devices that measure the position of light sources in space.
Background
[0002] Positioning devices are well known in the art, and are used across several technical domains. In the metrology domain, positioning devices are mostly found as rotary encoders, as in WO2006107363A1, or linear encoders, as in US5,563,408. These encoders output one-dimensional information about the position, and operate with an excellent resolution — of the order of 1/10 of a micron or of 1/10'000 of a degree. To reach a positioning with several degrees of freedom, these encoders can be part of a chain, for example in a robotic arm, with the disadvantage that the more encoders are used, the more the positioning resolution degrades. State-of-the-art robotic arm positioning systems today have a resolution of, at best, one micron. These encoders have in common the fact that the sensing element measures the position of a grating with respect to the sensing element. It implies that either the sensing element or the grating is attached to the object the position of which has to be measured.
[0003] More elaborate encoders, as disclosed in EP2169357A1, can precisely measure the two dimensional position of a camera with respect to a grating. These encoders are mostly targeted at X-Y positioning tables in the tooling industry, and can achieve sub-micron resolution.
[0004] In a different technical field, DE20121855U1 discloses a system to measure the position in space of an object carrying 3 light sources, by measuring the projection of a T-shaped device on a 2D sensitive area. The method suffers from two major drawbacks: it does not explain how the system can work in a natural environment with several other light sources, and it has a limited precision. Indeed, even if it were possible to build a perfect device with infinite mechanical precision, the resulting measurement precision on the sensitive surface would be at best of the order of the wavelength, i.e. half a micron.
[0005] An object of the present invention is to alleviate the limitation of the prior art by disclosing a device that measures the position of one or several light sources in space, with a resolution that exceeds the wavelength by at least one order of magnitude while being robust to external illumination sources. In addition, the present invention is conceived for mass production, and can lead to a very economic system compared to the state of the art.
Summary of the invention
[0006] The disclosed invention is a measurement system that comprises at least one imaging device composed of a plurality of sensitive pixels disposed in at least one dimension; and at least one punctual light source; and at least one component— a grating or a microlens array— arranged to cast a shadow on the imaging device; the position of the component being fixed with respect to the imaging device. It also contains some computation means. The principle of measurement, for one light source, is the following.
- Thanks to the light source, the component casts a shadow on the imaging device.
- The imaging device records the image of the shadow.
- The image of the shadow is used to compute the position of the shadow with respect to the component.
- The position of the shadow is used to compute the elevation of the light source. For two-dimensional sensors, the position of the shadow is used to compute the elevation of the light source along the first and along the second dimension of the sensor.
[0007] By repeating this measurement in several distinct locations of the imaging device, and by combining the resulting elevations values, the three dimensional position of the light source can be obtained using well known triangulation rules.
[0008] To obtain the desired precision, it may be required that the component that casts a shadow is composed of repetitive patterns. This repetitive property spreads the information of the light position over a large area on the sensor, and allows the system to break the fundamental precision limit associated with any device that measures a position based on a single measurement resulting from light propagation. In addition, the component can be advantageously realized as a grating on a planar surface and must include a distinctive element. The grating must contain parts that are transparent to the light and parts that are opaque to the light. The component can also be realized as an array of microlenses realized on a planar surface. The planar property brings the advantage of a simple elevation computation and a simple fixation over the imaging device. The grating can be printed using a standard lithography process, and the microlens array can be produced by hot embossing. The shadow of the component, recorded by the imaging device, must exhibit the repetitive patterns and the distinctive elements. The position of the shadow is computed using the position of the distinctive element, and is refined using the positions of the repetitive patterns. This refinement in the position is very important and gives an excellent precision to the device. Without the precision given by this refinement in the position, the device would be of very little practical use.
Brief description of the drawings
[0009] The invention will be better understood by reading the following description, provided in reference to the annexed drawings where:
- Figure 1 shows the principle of the elevation measurement;
- Figure 2 shows an example of computation of the distance of the light source to the sensor plane;
- Figure 3 shows the use of three one-dimensional sensors to compute the position of a light source;
- Figure 4 shows the computation of the position of the shadow in two dimensions;
- Figure 5 shows the split of the sensor into two zones for implementation of the triangulation;
- Figure 6 shows an embodiment of the two-dimensional grating printed on the surface above the sensor with an interlaced absolute code;
- Figure 7 shows an embodiment of the two-dimensional component realized on the surface above the sensor with one missing pattern as distinctive element;
- Figure 8 shows an embodiment of the two-dimensional grating printed on the surface above the sensor with a cross as distinctive element;
- Figure 9 shows the use of filters to measure the three- dimensional position of two light sources simultaneously;
- Figure 10 shows an embodiment using two sensors to compute the position of a light source;
- Figure 11 shows the principle of the computation of a retroreflector position using a virtual light source position;
- Figure 12 shows the computation of the position of two frequency selective retroreflectors;
- Figure 13 shows the computation of the three-dimensional position of a retroreflector using two light sources with different wavelengths;
- Figure 14 shows the use of filters to compute the position of the retroreflector of figure 13;
- Figure 15 shows how to adapt the position of the light sources to increase the retroreflector position estimation precision.
Detailed description of the invention
[0010] In the following description, we will first present the measurement system based on a single point light source, a one-dimensional imager and a component arranged to cast a shadow on the imager. In a first example, this component will be a one-dimensional grating. Then we will present how this system can be extended to a two-dimensional sensor, to more than one light source, and finally how to handle light sources from the ambient illumination.
[0011] A light source 101 produces light rays 102, which can be considered as being locally parallel rays 103 in the sensor proximity. A grating 104 is used to let only part of the light reach the sensor 105. A sensor records the shadow pattern 106, which is an approximate replica of the grating 104. The grating contains repetitive elements 108 and a distinctive element 107, which in this example is just a lack of one of the repetitive elements.
[0012] Computation means are used to compute the displacement ΔX of the shadow with respect to the grating. Using the knowledge of the measurement system's dimensions, it is straightforward to compute the elevation. The elevation is shown by the angle 109 in figure 1.
[0013] The computation of ΔX is performed as the sum of an approximate position computed from the distinctive element and a phase position computed from the repetitive patterns. By using well known methods, for example correlation, one can compute an estimate ΔX̂ of the position ΔX. Then, ΔX can be expressed as a multiple of the distance ΔP from one repetitive pattern to the next (on the image of the shadow) plus a phase distance dX:

ΔX = n·ΔP + dX (1)

where n is chosen to minimize the absolute value of the difference (ΔX − ΔX̂). The phase distance dX is computed, like in WO2010112082, using this formulation:

dX = (ΔP / 2π) · atan2( Σ_x s(x)·sin(2π·M·x/ΔP), Σ_x s(x)·cos(2π·M·x/ΔP) ) (2)

where M is the pixel pitch, s(x) is the shadow pattern 106 recorded by the camera, x is the pixel coordinate, and atan2(A, B) is the arctan(A/B) function defined over [−π, π]. Depending on the choice of the coordinate system, i.e. on whether ΔX represents the position of the shadow with respect to the imager or vice-versa, the sign of dX can change. Also, depending on the encoding of the shadow — the shadow can be encoded as a large or as a small value depending on the imager — the value of dX can shift by ΔP/2. The person skilled in the art will have no difficulty setting these parameters by trial and error. The closer the light source is, the larger the ΔP value is. In practice, ΔP can be measured by correlating the shadow image with itself, and finding the distance to the first correlation peak.
[0014] To obtain an excellent precision, it is important, but not mandatory, that the sums of equation (2) are performed on complete sine and cosine periods. For example, the x range can be set from 0 to a multiple of ΔP/M minus one. It also implies that the pixel pitch of the imager may preferably divide the distance from one repetitive pattern to the next, i.e. ΔP/M may preferably be an integer.
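As an illustration of equations (1) and (2), the following sketch (a minimal NumPy example, not part of the patent; it works in pixel units, so the pitch M is absorbed, and it assumes a 1-D shadow signal s plus a template of the distinctive element) estimates ΔP from the autocorrelation, dX from the phase formulation, and the final displacement per equation (1):

```python
import numpy as np

def estimate_period(s):
    """Period ΔP in pixels: distance to the first peak of the shadow
    autocorrelation after the zero-lag peak."""
    s0 = s - s.mean()
    ac = np.correlate(s0, s0, mode='full')[len(s0) - 1:]
    k = 1
    while k < len(ac) - 1 and not (ac[k] > ac[k - 1] and ac[k] >= ac[k + 1]):
        k += 1
    return k

def phase_distance(s, period):
    """Equation (2): phase of the fundamental, summed over complete
    periods only, scaled back to pixels."""
    n = (len(s) // period) * period
    w = 2.0 * np.pi * np.arange(n) / period
    a = np.sum(s[:n] * np.sin(w))
    b = np.sum(s[:n] * np.cos(w))
    return period / (2.0 * np.pi) * np.arctan2(a, b)

def displacement(s, template, period):
    """Equation (1): coarse estimate from the distinctive element,
    refined by the phase distance dX."""
    s0, t0 = s - s.mean(), template - template.mean()
    coarse = np.argmax(np.correlate(s0, t0, mode='valid'))
    dX = phase_distance(s, period)
    n = round((coarse - dX) / period)     # the n minimizing |ΔX − ΔX̂|
    return n * period + dX
```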
[0015] To obtain the vertical distance Z of the light source from the sensor, measured perpendicularly from the sensor surface, it is possible to compute two (or more) elevation values, from two (or more) distinct locations of the imager, and to combine those to obtain the distance Z. For example, in Figure 2, the distance ΔX is computed in two locations, resulting in ΔX1 and ΔX2. The resulting position P of the light source is computed as

[equation (3), reproduced as an image in the original]
[0016] The distance Z can also be computed by computing the magnification of the shadow pattern with respect to the pattern realized on the component; for a grating it means computing a value ΔP on the shadow and a value ΔP2 on the grating, and comparing the two values:

Z = f · ΔP2 / (ΔP − ΔP2)

where f is the distance between the component and the imaging device.
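Both distance computations are easy to express in code. The sketch below assumes the simple shadow geometry of Figure 2 (component a known distance f above the sensor); note that equation (3) itself is reproduced only as an image in the original, so the triangulation here is a plausible reading of the figure, not the published formula:

```python
import numpy as np

F = 0.5e-3   # component-to-sensor distance f in metres (assumed value)

def z_from_magnification(dP_shadow, dP_grating):
    """Source height above the component from the pattern magnification:
    Z = f * ΔP2 / (ΔP − ΔP2)."""
    return F * dP_grating / (dP_shadow - dP_grating)

def source_position_from_two_displacements(dX1, dX2, baseline):
    """Triangulation from shadow displacements measured at two imager
    locations separated by `baseline`: each displacement encodes
    tan(elevation) = dX / f, and the two rays meet at the source."""
    z = F * baseline / (dX1 - dX2)     # height above the component
    x = (dX1 / F) * z                  # lateral offset from location 1
    return x, z
```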
[0017] The grating can be made with a chromium-plated glass. The light is blocked at the locations where chromium is deposited, and can go through the glass elsewhere. The preferred embodiment is the one using opaque regions and holes for implementing transparent regions. For example, a grating made of nickel and holes may be used. Today, nickel plates can be manufactured at low cost, with thicknesses around 30 microns, and with a hole accuracy of one micron over a couple of centimetres. It is preferred to implement transparent regions by holes instead of by glass, because the light goes straight through the holes, while it is slightly deviated by a glass layer, according to Snell's law.
[0018] To compute the three-dimensional position of the light source 101 using one-dimensional imaging devices, we need M imaging devices and M components, where M is greater than or equal to two. Each component is attached between the light source and its respective imaging device, the relative position between each imaging-component couple being fixed and defined. The imaging devices are non-coplanar.
[0019] When M is equal to 2, equation (3) is applied for every imaging device, and defines a line in space (because only two dimensions are fixed by equation 3). The point closest to the two lines computed for the two imaging devices is the position of the light source 101.
[0020] Figure 3 shows an example setup, where the three dimensional position of a light source 101 is computed from 3 linear devices. There are three linear sensors 201, disposed in a non-coplanar fashion, and preferably perpendicular to one another. The elevation is computed for each sensor. The elevation value defines a plane in space for every sensor, which is depicted by the light ray 102 and the intersection of said plane with the sensor plane 202. The position of the light source 101 is the intersection of these 3 planes. These 3 planes intersect in a single point if the sensors are not coplanar.
[0021] When there are more than three linear devices, the position of the light source 101 is chosen to be the one closest to every plane derived from the elevation computed for every linear device. By closest we mean the point whose sum of distances to every said plane is minimum.
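A common way to implement this closest-point criterion is a least-squares fit, which minimizes the sum of squared distances (a standard surrogate for the sum of distances). A minimal sketch, assuming each elevation measurement has already been converted into a plane with unit normal n_i and offset c_i such that n_i·p = c_i:

```python
import numpy as np

def closest_point_to_planes(normals, offsets):
    """Point minimizing the sum of squared distances to the planes
    n_i · p = c_i (normals of unit length): solves
    (sum n_i n_i^T) p = sum c_i n_i."""
    N = np.asarray(normals, dtype=float)   # shape (k, 3)
    c = np.asarray(offsets, dtype=float)   # shape (k,)
    return np.linalg.solve(N.T @ N, N.T @ c)
```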
[0022] The invention can be carried out advantageously using two-dimensional imaging devices. With a two-dimensional imaging device, and by computing the position of the shadow along the lines and along the columns, the system can compute the elevation of the light source along the lines and the elevation of the light source along the columns from the repetitive patterns and from the distinctive element present in the image delivered by the two-dimensional imaging device. To get the best possible precision, the computation of the elevation should use most of the pixels that record said image of the shadow in the area used for the estimation of the elevation values. By most we mean at least 80%, preferably 90% and most preferably 100% of the pixels. In other words, in the example that uses 100% of the pixels, if the value of one single pixel varies, the elevation along the lines, or the elevation along the columns (or both) will also vary. The implementation according to Equations (1) and (2) follows this principle: it uses every pixel value in the refinement of the position estimation. For a given physical setup, the precision limit will be given by the shot noise, which decreases with the number of photons recorded by the imaging device. It is thus important to use as many pixel values as possible in the computation to obtain an excellent precision. Note that using 100% of the pixels in an implementation that computes the elevation along the lines and the elevation along the columns may mean using 50% of the pixels for the computation of the elevation along the lines and the other 50% of the pixels for the computation of the elevation along the columns. This splitting of the pixels reduces the computation complexity and does not reduce the final precision as long as every considered pixel is used in the overall computation. The splitting of the pixels should be balanced; in other words, when splitting 100% of the pixels, 50% (± 5%) must be used along the columns and the other 50% (± 5%) along the rows (the sum of both percentages must sum up to 100%). When splitting 80% of the pixels, 40% (± 5%) must be used along the columns and the remaining 40% (± 5%) along the rows (the sum of both percentages must sum up to 80%).
[0023] Figure 4 shows the image of a grating taken by a two-dimensional sensor. The distinctive element is the set of diagonal lines 401; the repetitive pattern is a square 402. The grid of repetitive patterns is aligned to the grid of pixels of the sensor. The elevation of the light source along the lines of the sensor is obtained by computing the sum of the pixel values over the lines of the image, and by using the resulting signal 106 as in the one-dimensional case. The elevation of the light source along the columns of the sensor is obtained in a similar manner by summing the pixel values over the columns of the image.
[0024] Figure 5 shows an example of using a single sensor for measuring the three-dimensional position of the light source. The image is separated into two zones 501 and 502 by the computation means. By computing the position of the shadow with respect to the grating in each zone, the elevation values along both dimensions are computed. These elevation values are combined to compute the three-dimensional location of the light source: each zone defines a line in space where the light source is located. This line crosses the center point of its zone (501 or 502). Ideally, the light source location is the intersection of these two lines. In practice, because of measurement noise, these lines do not intersect. The position of the light source is estimated as the location in space that is closest to both lines, in other words, the location whose sum of distances to the two lines is minimal.
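The principle of paragraph [0023] can be sketched as follows. This is an illustrative Python fragment, not the patent's Equation (1)/(2) implementation; the pattern pitch of 8 pixels and the image size are assumptions. The sub-pixel displacement of the periodic shadow is read from the phase of the Fourier component of the line and column sums at the pattern frequency:

import numpy as np

def shadow_displacement(profile, period_px):
    # Phase of the Fourier component at the pattern frequency; the result
    # is the shift of the repetitive shadow in pixels, modulo one period.
    n = np.arange(profile.size)
    coeff = np.sum(profile * np.exp(-2j * np.pi * n / period_px))
    return (-np.angle(coeff) / (2.0 * np.pi)) * period_px

image = np.random.rand(480, 640)       # stand-in for the recorded shadow image
line_sum = image.sum(axis=1)           # one value per line of the sensor
column_sum = image.sum(axis=0)         # one value per column of the sensor
d_lines = shadow_displacement(line_sum, period_px=8.0)
d_columns = shadow_displacement(column_sum, period_px=8.0)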
[0025] In some embodiments, the position of the distinctive element is computed from the signals resulting from the sums over the lines and columns of the images, for example with the patterns of figures 7 and 8. In other embodiments, only the phase distance dX is computed from said signals, the estimate of the absolute position (ΔX) being computed directly on the picture, as in the examples of figures 4 and 6.
To function properly, the sums over the lines and over the columns should exhibit a repetitive pattern. Preferably, the repetitive pattern is repeated at regular spatial intervals and always has the same shape and size, as in the examples of Figures 6 to 8. Figure 6 shows an example of a grating that uses a two-dimensional code as distinctive element, as described in EP2169357A1, which is interlaced with the repetitive patterns. Diagonal lines represent the elements of the code: a diagonal at 45 degrees represents a 1 and a diagonal at -45 degrees represents a 0. The code is characterized in that any squared subset of the code, which contains at least three by three elements of the code, is unique. In other words, any sub-image that contains at least three times three (3x3) elements of the code can be used for the computation of the position of the shadow. The advantage of using such a code is that the distinctive element is always present, no matter what part of the grating is used. This confers some flexibility to the system, even if such an interlaced code slightly degrades the precision of the position compared to solutions that use the grating of figure 7 or 8. In addition, the code must be read directly from the image, and cannot be read from the sums over the lines or columns.
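The uniqueness property can be checked directly; the following sketch (illustrative, operating on a hypothetical 0/1 code array) verifies that every 3x3 window of the code occurs only once:

import numpy as np

def all_3x3_windows_unique(code):
    # 'code' is a 2-D array of 0/1 code elements. Returns True when every
    # 3x3 sub-block is unique, so any such sub-image identifies its position.
    seen = set()
    h, w = code.shape
    for i in range(h - 2):
        for j in range(w - 2):
            window = code[i:i + 3, j:j + 3].tobytes()
            if window in seen:
                return False
            seen.add(window)
    return True

rng = np.random.default_rng(1)
print(all_3x3_windows_unique(rng.integers(0, 2, size=(16, 16))))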
[0026] In another embodiment of the invention, the component of figure 7 can be implemented using a microlens array. In other words, the component pattern is a microlens and the distinctive element is a missing-microlens region. Each black dot represents the position of a microlens. Microlenses are more expensive to produce than a conventional grating, but generate a shadow pattern that carries more light, and thus allow for a faster measurement system. In addition, diffraction phenomena, also known as the Talbot effect, have a substantially smaller influence on the shadow pattern. This last advantage allows for more flexibility in the choice of the distance between the component and the imaging device. If for some technological reason the microlens array cannot have a missing microlens in the middle of the array, it is possible to use a regular and complete rectangular microlens array of a size that does not completely cover the imaging device; the distinctive element is then embodied by the border of the microlens array. This embodiment is also shown in figure 16, with microlenses 1601 that generate light on the imaging device 1604 in positions 1603, and shadow in positions 1602.
[0027] In another embodiment of the invention, the system measures the three-dimensional position of two punctual light sources emitting light at distinct wavelengths, by using two filters. One of said filters is opaque at the wavelength of one light source and transparent at the wavelength of the other light source, and vice versa for the other of said filters. Preferably, the light sources are monochromatic and each filter is transparent only at the wavelength of its associated light source. In practice, a filter is never 100% opaque or 100% transparent; each filter is chosen so as to maximize its transparency for one light source while maximizing its opacity for the other. The filters that implement this tradeoff are said to be matched to the wavelengths of the light sources. The filters are arranged so as to cover distinct locations of the component, each filter covering a surface at least nine times as big as the surface of a single pattern of the component. By "filter" we refer to the optical property of the material that embodies the surface used for filtering the light. According to this definition, the same filter can be placed on several distinct locations of the sensor.
[0028] Figure 9 shows a system with two filters 901 and 904. The filter 901 covers two areas 902 and 903 of the sensor, while filter 904 covers two other areas 905 and 906 of the sensor. Every area under a filter is treated as a separate image by the computation means. The computation of the elevation of the light source along the first dimension and along the second dimension is performed separately for each of the filter areas 902, 903, 905 and 906, by taking the corresponding image and performing the computation as described before. The elevation values of areas 902 and 903 are used to compute the position of the first light source, while the elevation values of areas 905 and 906 are used to compute the position of the second light source.
[0029] To increase the precision of the measurement in the third dimension, that is, in the dimension perpendicular to the measuring device, the distance between the measurement zones 501 and 502 must be increased. This is done in an equivalent way in another embodiment of the invention, shown in figure 10, by using two (or more) distinct measurement devices 1001 and 1002 instead of only one. Each device 1001 or 1002 is composed of an imaging device and a component, which is attached between the light source and its imaging device. As described before, the relative position between the imaging device and its component is fixed and defined. Devices 1001 and 1002 share the computation means that are designed to compute the three-dimensional position of the light source. By computing the position of the shadow along the lines and along the columns, the computation means compute the elevation of the light source along the lines and the elevation of the light source along the columns from the repetitive patterns and from the distinctive element present in the image delivered by the two-dimensional sensor of every device 1001 and 1002. These elevation values define two lines in space. The point that is closest to these two lines is the three-dimensional position of the light source. The measurement system of figure 10 can be implemented using an arbitrary number (>1) of imaging-component couples, the position of the light source being estimated as the point in space whose sum of distances to all the lines resulting from the imaging-component couples is minimal.
[0030] In another embodiment of the invention, the system measures the position of two or more light sources by temporal modulation. The light sources are switched on and off according to a predefined temporal sequence. For example, for two light sources, the time can be divided into three periods p1, p2 and p3. The first light source is switched on during period p1 and switched off during periods p2 and p3; the second light source is switched on during period p2 and switched off during periods p1 and p3. At the sensor side, the computation means can detect when all the lights are switched off, and thus synchronize themselves with the light sources. Then, the computation means perform a position estimation during period p1, which corresponds to the position of the first light source, and a position estimation during period p2, which corresponds to the position of the second light source. The image taken during period p3 is not influenced by the light sources whose position has to be measured. Hence, the image recorded during period p3 can be subtracted from the images taken during periods p1 and p2, resulting in new images which are used as replacements of the images of the shadow for the computation of the positions. This last computation mitigates the influence of spurious light sources in the scene on the estimation of the position of the light sources of interest.
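A minimal sketch of this subtraction (frame names are hypothetical); the frame captured during p3 contains only the spurious light, so subtracting it isolates each source of interest:

import numpy as np

rng = np.random.default_rng(0)
ambient = rng.random((480, 640))   # spurious, constant illumination
src1 = rng.random((480, 640))      # contribution of the first source
src2 = rng.random((480, 640))      # contribution of the second source

img_p1 = ambient + src1            # frame taken during period p1
img_p2 = ambient + src2            # frame taken during period p2
img_p3 = ambient                   # frame taken during period p3 (all sources off)

shadow_1 = img_p1 - img_p3         # ambient cancels; first source remains
shadow_2 = img_p2 - img_p3         # ambient cancels; second source remains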
[0031] This principle can be extended to an arbitrary number of light sources; the temporal multiplexing of signals, as exemplified here, is well known in the field of telecommunications. In particular, it can also be applied to a single light source, which is switched on and off to mitigate the effect of spurious light sources in the environment.
[0032] In another embodiment of the invention, the light source is modulated using a modulation circuit. For example, the light source can advantageously be modulated to deliver a luminance L which follows the sinusoidal law
L = P + Q · sin(2π · f · t)
[0033] where t is the time, P and Q are constants, and f is the modulation frequency of the light source. P must be greater than or equal to Q, preferably slightly greater than Q. On the receiver side, that is, on the imaging device side, three images can be taken at times t1, t2 and t3, resulting in images I1, I2 and I3, where
t2 = t1 + (1/3 + m) / f and t3 = t1 + (2/3 + n) / f
[0034] and where m and n are arbitrary integer constants, but preferably equal to 0. By taking the average of the images, Is = (I1 + I2 + I3) / 3, we get an image in which the modulation averages out. This new image Is can be subtracted from images I1, I2 and I3. The new image considered for the computation of the three-dimensional position of the light source is
In = |I1 - Is| + |I2 - Is| + |I3 - Is|
[0035] Image In is guaranteed to be non-zero, independently of the choice of t1. In other words, the measuring device only needs to know the modulation frequency f, but does not need to be synchronized with the light source modulation. In addition, the new image In is independent of any non-oscillating light source in the environment. By choosing f, m and n appropriately, the new image In can also be made independent of a background light source oscillating at 100Hz or at 120Hz. For example, to be independent of a light source that oscillates at 100Hz in the background, (t2 - t1) must be a multiple of 1/100 second, and (t3 - t1) must also be a multiple of 1/100 second. Preferably, the oscillation frequency f is set to a multiple of 3 times the background frequency.
To be independent of a light source that oscillates at 120Hz in the background, (t2 - t1) must be a multiple of 1/120 second, and (t3 - t1) must also be a multiple of 1/120 second. 100Hz and 120Hz are particularly important frequencies, because incandescent light sources oscillate at twice the frequency of the power lines, which is set to 50Hz or 60Hz in most countries.
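The behaviour of In can be checked numerically. The sketch below assumes exposure times t2 = t1 + 1/(3f) and t3 = t1 + 2/(3f) (i.e. m = n = 0) and arbitrary constants; it illustrates that In stays strictly positive for any unsynchronised t1 and is unaffected by a constant background:

import numpy as np

P, Q, f = 2.0, 1.5, 250.0                  # arbitrary constants, with P > Q

def frame(t, background):
    # Scalar stand-in for one pixel of a recorded frame.
    return background + P + Q * np.sin(2 * np.pi * f * t)

for t1 in (0.0, 0.00123, 0.5):             # arbitrary, unsynchronised start times
    for background in (0.0, 0.7):          # with and without constant spurious light
        I1, I2, I3 = (frame(t1 + k / (3 * f), background) for k in range(3))
        Is = (I1 + I2 + I3) / 3            # the sinusoidal modulation averages out
        In = abs(I1 - Is) + abs(I2 - Is) + abs(I3 - Is)
        print(t1, background, round(In, 6))
# In is strictly positive in every case, and its value does not depend
# on the constant background term.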
[0036] In another embodiment of the invention, the light source 101 is connected to the computing means and to the imaging device. By connected, we mean that there is at least one electrical connection between the computing means, the imaging device and the light source. For example, the light source can be placed next to the imaging device on the same circuit, or even in the middle of the imaging device. This configuration requires only one power supply, and allows for a very convenient synchronisation between the image capture and the light emission. For example, it is easy to switch on the light, take an image, switch off the light, take another image, and combine both images to mitigate the influence of spurious lights in the environment. In this embodiment, a retroreflector 1103 is used to reflect the light back to the light source and to the sensor. A retroreflector is an optical element that reflects any light ray back in a direction parallel to the incident direction, independently of the orientation of the retroreflector. A retroreflector element may be made of three mirrors positioned at an angle of 90 degrees to each other, or may be a sphere with a particular index of refraction. If the ray travels in air, the index of refraction of the sphere must be equal to 2. The light source 101 must be placed close to the imaging device 1104 in order to allow the light to be retro-reflected onto the imaging device. Applying the same computation method as described above in this description yields the position of a virtual light source 1102. The retroreflector position being the middle point between the computed virtual light source position 1102 and the physical light source position 101, it is thus straightforward to compute the retroreflector position from the virtual light source position.
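Since the retroreflector lies at the middle point, its position follows from the virtual light source position in one step; the coordinates below are illustrative:

import numpy as np

source = np.array([0.0, 0.0, 0.0])         # physical light source 101
virtual = np.array([0.1, -0.04, 2.5])      # virtual light source 1102
retroreflector = (source + virtual) / 2.0  # middle point, as described above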
[0037] In another embodiment of the invention, the system measures the three-dimensional position of two retroreflectors 1203 and 1213 reflecting light at distinct wavelengths, by using two filters, as shown in figure 9. One of said filters is opaque at the wavelength of one retroreflector and transparent at the wavelength of the other retroreflector, and vice versa for the other of said filters. The filters must be matched to the retroreflector wavelengths. In other words, the first filter must be transparent at the reflection wavelength of the first retroreflector 1203 and opaque at the reflection wavelength of the second retroreflector 1213. The filters are arranged so as to cover distinct locations of the component, each filter covering a surface at least nine times as big as the surface of a single pattern of the component. By "filter" we refer to the optical property of the material that embodies the surface used for filtering the light. According to this definition, the same filter can be placed on several distinct locations of the sensor. This embodiment can use either a single light source that emits at several wavelengths, or two light sources whose wavelengths are matched to the retroreflectors and to the filters. The method can be extended to more than two retroreflectors.
[0038] In another embodiment of the invention, the system measures the three-dimensional position of one retroreflector 1103 by using two filters, as shown in figure 14, and two light sources 1301 and 1302 connected to the computing means and to the imaging device, as shown in figure 13. One of said filters is opaque at the wavelength of one light source 1301 and transparent at the wavelength of the other light source 1302, and vice versa for the other of said filters. The filters are arranged so as to cover distinct locations of the component, each filter covering a surface at least nine times as big as the surface of a single pattern of the component. By "filter" we refer to the optical property of the material that embodies the surface used for filtering the light. The elevation values of the virtual light source 1312 are computed using the image under the filter 901, which defines a line 1322 in space where said virtual light source is located. Since the retroreflector is located halfway between the light source and its virtual counterpart, the retroreflector is located on line 1332, which is parallel to line 1322 and halfway between the location of the elevation measurements 1300 and the location of the light source 1302. By a similar reasoning, the retroreflector is located on line 1331, which is parallel to line 1321 that defines the position of the virtual light source 1311. Thus, the three-dimensional position of the retroreflector is obtained by intersecting lines 1331 and 1332. If they do not intersect, the point closest to the two lines is chosen. The distance between light sources 1301 and 1302 influences the precision of the measured height of the retroreflector. On the one hand, these light sources must be close to the imaging device in order to receive some light from the retroreflector; on the other hand, they should be placed far away from each other to get the best height measurement precision. To get the optimal precision, a first measurement is performed with the light sources 1301 and 1302 close to the imaging device, followed by a measurement with these light sources placed further away. If the retroreflector is close, the light sources must be close to the imaging device, otherwise not enough light is reflected onto the imaging device. If the retroreflector is far, the light sources can be placed further from the imaging device and still reflect some light onto the imaging device. In practice, instead of displacing the light sources, the light sources are duplicated on the sensor, resulting in several identical copies of each light source positioned at increasing distances from the imaging device. Figure 15 shows the light source 1301 duplicated as sources 1511 and 1521, and the light source 1302 duplicated as sources 1512 and 1522. A displacement of the light sources is then equivalent to turning off light sources 1301 and 1302 and turning on light sources 1511 and 1512. The light sources must be addressable individually by the computing means in order to turn them on and off in the way just described.
[0039] By computing the three-dimensional position of several light sources, or several retroreflectors, in space, it is straightforward to compute the position of an object with several degrees of freedom if the light sources or the retroreflectors are part of that object. For example, if three light sources are placed on a single object, then the six degrees of freedom (the position and the orientation in space) of that object can easily be computed. This procedure can be extended to an arbitrary number of degrees of freedom, provided an adequate number of light sources is available. A well-known example is the computation of the six degrees of freedom of an object using four light sources placed on a planar surface of that object: the six degrees of freedom of that object can be computed from the elevation values of the light sources, or equivalently from the (x,y) locations of their shadows, as described in R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", second edition, Cambridge University Press, 2003, section 8.1.1.
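One off-the-shelf route to this pose computation (not the derivation of the cited reference) is a perspective-n-point solver; the sketch below uses OpenCV's solvePnP and assumes the four shadow measurements have already been converted to normalized pinhole image coordinates:

import numpy as np
import cv2

# Four markers on a planar face of the object, in the object frame (metres).
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]],
                      dtype=np.float32)
# Measured directions to the sources as normalized image coordinates
# (assumed conversion from the elevation values).
image_pts = np.array([[0.01, 0.02], [0.06, 0.02], [0.06, 0.07], [0.01, 0.07]],
                     dtype=np.float32)
K = np.eye(3, dtype=np.float32)  # identity intrinsics: points already normalized
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
# rvec and tvec encode the six degrees of freedom of the object.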
[0040] In conclusion, a component that casts a shadow on an imager is used to estimate the elevation of a light source in space. When multiple shadows are available, the three-dimensional position of the light source can be computed. If the component contains repetitive patterns, the shadow position can be computed with a precision that reaches a small fraction of the wavelength of the light. If the pattern is aligned with the lines and columns of the imaging device, the computation can be performed from the sum over the lines and the sum over the columns of the pixel values, thus saving a substantial amount of computation and memory. The perturbation caused by other lights in the environment can be reduced by using a proper modulation of the light, by using colour filters, or by using both. The positions of several lights in the scene can be estimated by using a temporally multiplexed code, or by using distinct wavelengths and matched filters on top of the imaging device. To get better precision in the estimation of the third dimension, i.e. the distance from the light source to the sensor, two imaging devices with two components can be used, placed at a substantial distance from each other. To have a system with only one active component, the light source can be replaced by a retroreflector, with a second light source placed close to the imaging device. In this setup the retroreflector needs no power supply, in contrast with the light source it replaces. In addition, the synchronisation of the second light source with the imaging device is greatly simplified thanks to a direct connection between the two elements. The setup with the retroreflector can also be implemented using two light sources with two matched filters. The distance between the light sources determines the precision of the estimation of the third dimension; this distance can thus be increased to improve the third-dimension precision.
This description has been provided only by way of non-limiting example. Those skilled in the art may adapt the invention while keeping within the scope of the invention as defined in the claims.

Claims

1. A measurement system comprising:
- at least one imaging device composed of a plurality of sensitive pixels disposed in at least one dimension; and
- at least one punctual light source; and
- at least one component arranged to cast a shadow on the imaging device, the position of the component being fixed with respect to the imaging device;
- computation means;
characterized in that the component is composed of repetitive patterns realized on a planar surface including a distinctive element, in that the component is designed to cast a shadow made of repetitive patterns and made of a distinctive element, in that the imaging device is designed to record an image of said shadow, and in that the computation means are designed to compute the elevation of the light source from the repetitive patterns and from the distinctive element present in said image.
2. The measurement system according to claim 1, characterized in that the component patterns are made of microlenses and in that the distinctive element is a set of at least one missing microlens region.
3. The measurement system according to claim 1, characterized in that the component is a grating composed of opaque repetitive patterns realized on a planar surface including a distinctive element.
4. The measurement system according to one of claims 1-3, comprising M imaging devices and M components, characterized in that each component is attached between the light source and its respective imaging device, and in that the relative position between each imaging-component couple is fixed and defined, and in that M is greater than or equal to two, and in that the imaging devices are not all coplanar, and in that the computation means are designed to compute the three dimensional position of the light source.
5. The measurement system according to one of claims 1-3, characterized in that the imaging device is composed of a plurality of sensitive pixels disposed in two dimensions; and in that the computation means are designed to compute the elevation along the first dimension and the elevation along the second dimension of the light source from the repetitive patterns and from the distinctive element present in said image, and in that most of said pixels that record said image of the shadow affect the elevation value along the first dimension or the elevation value along the second dimension.
6. The measurement system according to claim 5, wherein the distinctive element is a two dimensional code interlaced with the repetitive patterns, characterized in that any squared subset of the code which contains at least three by three elements of the code is unique.
7. The measurement system according to claim 5, characterized in that the component is aligned to the pixel matrix in such a manner that the sum over the lines and the sum over the columns of said image defines the position of the shadow with respect to the component.
8. The measurement system according to one of claims 5-7, comprising at least two punctual light sources emitting light at distinct wavelengths and at least two filters, characterized in that the filters are arranged so as to cover distinct locations of the component, and in that each filter covers a surface which is at least as big as nine times the surface of a single pattern of the component, and in that the filters are matched to the wavelengths of the light sources, and in that the imaging device is designed to deliver one image per area covered by the filters.
9. The measurement system according to one of claims 5-7, comprising a retroreflector, characterized in that the light source is connected to the computing means and to the imaging device, and in that the computation means are designed to compute the elevation of the retroreflector.
10. The measurement system according to claim 9, comprising at least two retroreflectors reflecting light at distinct wavelengths and at least two filters, characterized in that the filters are arranged so as to cover distinct locations of the component, and in that each filter covers a surface which is at least as big as nine times the surface of a single pattern of the component, and in that the filters are matched to the reflecting wavelengths of the retroreflectors, and in that the imaging device is designed to deliver one image per area covered by the filters.
11. The measurement system according to claim 9, comprising at least two punctual light sources emitting light at distinct wavelengths and comprising at least two filters, characterized in that the filters are arranged so as to cover distinct locations of the component, and in that each filter covers a surface which is at least as big as nine times the surface of a single pattern of the component, and in that the filters are matched to the wavelengths of the light sources, and in that the imaging device is designed to deliver one image per area covered by the filters, and in that every light source is connected to the computing means and to the imaging device.
12. The measurement system according to claim 11, comprising several identical copies of said light sources, characterized in that every light source is addressable individually and in that said copies of the light sources are positioned at increasing distances from the imaging device.
13. The measurement system according to one of claims 5-8, characterized in that the computation means are arranged so that the elevations along both dimensions are defined in at least two distinct locations of the same imaging device, and so that the combination of all said elevations defines the three-dimensional position of the punctual light source.
14. The measurement system according to one of claims 5-8, comprising N imaging devices and N components, characterized in that each component is attached between the light source and its respective imaging device, and in that the relative position between each imaging-component couple is fixed and defined, and in that N is greater than or equal to two, and in that the computation means are designed to compute the three dimensional position of the light source.
15. The measurement system according to one of claims 5 to 14, characterized in that it comprises control means arranged to switch the light source(s) on-and-off according to a predefined timed sequence.
16. The measurement system according to one of claims 5 to 15, comprising a modulation circuit, characterized in that the modulation circuit is designed to modulate the power of the light in a repetitive manner.
17. Method for the measurement of the position of a light source, for implementing a system as defined in claims 5 to 16, characterized in that it comprises the following steps:
- the imaging device records the image of the shadow,
- the image of the shadow is used to compute the position of the shadow with respect to the component, the repetitive property of the patterns present in the shadow being used to enhance the precision of said position,
- the position of the shadow is used to compute the elevation of the light source or of the retroreflector along the first dimension and along the second dimension of the imaging device.
18. The measurement method according to claim 17, characterized in that the position of the shadow of the component is obtained as the sum of an approximate position computed from the distinctive element and a phase position computed from the repetitive patterns.
19. The measurement method according to claim 17 or 18, characterized in that the position of the shadow of the component is computed from the sum over the lines and from the sum over the columns of the image of the shadow.
20. The measurement method according to claim 17, 18 or 19, characterized in that the position of the shadow with respect to the component is computed in at least two distinct locations of the imaging device, and in that the combination of said positions is used to compute the three dimensional position of the light source or of the retroreflector.
21. The measurement method according to one of claims 17-20 for implementing a measurement system as defined in claim 16, characterized in that
- the modulation circuit modulates the intensity of the light source,
- the image of the shadow is recorded at least twice,
- said recordings are combined to deliver a new image of the shadow,
- the combination of said recordings annihilates the effect on said new image of any other light source present in the measurement system environment and whose modulation differs from that of the light source modulated by said modulation circuit,
- the position of the shadow with respect to the component is computed from said new image of the shadow.
22. The measurement method according to one of claims 17-21 for implementing a measurement system as defined in claim 11, characterized in that the position of the shadow with respect to the component is computed at least once per area covered by the filters, and in that the combination of said positions is used to compute the three dimensional position of the retroreflector.
23. The measurement method according to claim 22 for implementing a measurement system as defined in claim 12, characterized in that the three dimensional position of the retroreflector is first computed using a first light source (1301) and a second light source (1302) that are close to the sensor, followed by a computation using a copy of the first light source (1301) and a copy of the second light source (1302) that are positioned further away from the sensor.
Patent Citations (5)

US5563408A (Nikon Corporation, published 1996-10-08): Absolute encoder having absolute pattern graduations and incremental pattern graduations with phase control
DE20121855U1 (Forschungszentrum Karlsruhe GmbH, published 2003-06-26): Optical position-measuring system for simultaneous recording of all three degrees of freedom in three-dimensional area, with measuring sensor, identification tag, lighting device and control unit
WO2006107363A1 (Samuel Hollander, published 2006-10-12): Imaging optical encoder
EP2169357A1 (CSEM Centre Suisse d'Electronique et de Microtechnique SA Recherche et Développement, published 2010-03-31): A two-dimension position encoder
WO2010112082A1 (CSEM Centre Suisse d'Electronique et de Microtechnique SA Recherche et Développement, published 2010-10-07): A one-dimension position encoder

Non-Patent Citations (1)

R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", second edition, Cambridge University Press, 2003
