WO2000066972A1 - Method and device for scanning objects - Google Patents


Info

Publication number
WO2000066972A1
Authority
WO
WIPO (PCT)
Prior art keywords
array
elements
receiver
imaging
approximately
Prior art date
Application number
PCT/DE2000/000991
Other languages
German (de)
English (en)
Inventor
Klaus KÖRNER
Hans Tiziani
Original Assignee
Universität Stuttgart
Priority date
Filing date
Publication date
Application filed by Universität Stuttgart filed Critical Universität Stuttgart
Priority to AU43913/00A priority Critical patent/AU4391300A/en
Priority to DE10081176T priority patent/DE10081176D2/de
Priority to EP00925063A priority patent/EP1188035A1/fr
Publication of WO2000066972A1 publication Critical patent/WO2000066972A1/fr

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques

Definitions

  • the present invention relates to a method and a device according to the preambles of the independent claims.
  • the present invention is concerned with the three-dimensional scanning of objects.
  • the 3D shape of surfaces is often measured using strip triangulation methods.
  • the object or the scene is illuminated from a large angle of incidence, for example between 30° and 60°. However, this leads to disturbing shadowing of the object.
  • Discontinuous surfaces often pose a problem in optical 3D measurement. For example, larger steps in the surface of the object can lead to a violation of the sampling theorem. This is remedied by the Gray code method, in which a sequence of binary images is projected onto the object.
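As a sketch of the binary-image sequence mentioned above: the binary-reflected Gray code changes only one bit between adjacent stripe indices, so a decoding error at a stripe edge shifts the result by at most one stripe. A minimal illustration (the function names are ours, not from the patent):

```python
def gray_code(n):
    """Binary-reflected Gray code of an integer: successive values
    differ in exactly one bit, which is why projecting the bit planes
    as a sequence of binary stripe images is robust against decoding
    errors at stripe edges."""
    return n ^ (n >> 1)


def stripe_images(n_bits):
    """Bit planes for all 2**n_bits projector columns: image b is the
    list of 0/1 values of Gray-code bit b across the columns."""
    cols = [gray_code(c) for c in range(2 ** n_bits)]
    return [[(g >> b) & 1 for g in cols] for b in range(n_bits)]
```

Decoding the bit planes observed at a camera pixel back through the inverse Gray code then yields the absolute stripe index at that pixel.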
  • An example of such a system is the COMET-500 from Steinbichler Optotechnik GmbH.
  • in all of the methods mentioned, the aperture is usually stopped down, both for illuminating and for imaging the object surface.
  • the company GF in D-14513 Teltow offers digital light projection based on illuminated micromirrors (digital micromirror devices). Grating images can be generated and read out at a repetition frequency of approximately 10 Hz. However, this frequency is not yet sufficient for high-speed imaging.
  • Patent specification WO 92/14118 describes a 3D measuring arrangement, referred to as "confocal", which evaluates the contrast of a stripe pattern projected onto the object surface. Both the illumination lens and the imaging lens, or a common lens, are focused on the same plane in the object space. However, no possibility is given of achieving high accuracy for object distances in the decimeter and meter range.
  • the aim of the present invention is to provide something new for commercial use.
  • the invention thus proposes a device for three-dimensional object detection with at least two imaging systems which have imaging optics facing the object, at least one being designed as an observation system for object observation and at least one having an elementary means which is movable in front of the imaging optics and whose elementary image moves on a pixel line through the object space, it being provided that the elementary means is movable also with a lateral component relative to the optical axis of the imaging optics, whereby the pixel line is inclined to the optical axis of the imaging optics, and that the observation system is arranged for observation along the pixel line.
  • a first essential aspect of the invention is thus to be seen in the fact that two separate optical systems are provided with elementary means which are movable, the observation being carried out with one of the two systems along the pixel line.
  • the elementary means can move relative to one another.
  • the imaging systems represent systems.
  • Detector arrays in particular CCD arrays, can be used which are moved synchronously, for example with piezo actuators.
  • the signals obtained from these detectors can preferably be evaluated by detecting when a specific object area produces a specific signal behavior.
  • for example, in the case of brightly illuminated or even self-luminous objects, this will be the case when a particularly large signal from the observed object point is detected in the first and second arrays.
  • a scanner can be provided in which one of the imaging systems comprises an illumination system.
  • This lighting system will preferably illuminate the object with a series of separate lighting elements which implement the movable elementary means.
  • the observation system will comprise a multiplicity of observation elements, wherein an object area can be assigned to each observation element.
  • An advantage of the arrangement according to the invention consists in particular in that the evaluation of the signals can take place by the detection of the surface being accepted when a specific signal, such as a signal maximum, is recorded on a specific pixel of the array.
  • the pupil of the observation system is preferably arranged at least essentially in the focal plane of the imaging optics.
  • the invention thus solves the task of testing the 3D shape of technical and natural surfaces of objects in space and scenes over an area, preferably with dimensions in the range above one millimeter.
  • the invention thus enables the rapid acquisition and testing of the 3D shape of bodies in scenes with a large depth. Complete scenes can be recorded in real time.
  • the light output required for the illumination of object surfaces in a scene is in some cases greatly reduced. This is due to the fact that an object is recorded when a particularly large amount of light reaches the detector. A further improvement is the significant increase in the evaluation speed for 3D acquisition. There is the technical possibility of making the 3D point cloud of the object or scene available at video rate.
  • At least one electromagnetic radiation source is arranged and this is formed by means of at least one structured array as at least one structured, luminous array with at least two surface elements. At least one surface element lights up.
  • the structured, luminous array can also be understood as a transmitter array and the luminous surface elements can be understood as transmitter elements of this transmitter array.
  • the structured luminous array can represent an array of controllable micro light sources, for example micro light emitting diodes.
  • an unstructured radiation source can also be used, with a structured array, which can be a transmission or a reflection grating, placed upstream of it.
  • the luminous areas of the structured luminous array represent luminous flat elements in a luminance distribution in the structured luminous array.
  • at least one illuminating beam path is arranged with at least one illumination lens, which is assigned to at least one structured luminous array. In this way an image is realized and the object surfaces can be illuminated in a structured way.
  • at least one imaging beam path for imaging elements of the object surface and at least one receiver array with at least two elements and at least one imaging lens assigned to the receiver array are arranged. Elements of the receiver array detect radiation from elements of the illuminated object surface in the recording process. Furthermore, elements of the receiver array are always imaged by the imaging lens with a geometrical-optical focus volume into the object space, which corresponds to the scene space.
  • At least one luminous flat element of the structured luminous array can experience a shift.
  • an image of at least one luminous flat element in the object space is formed with a geometrical-optical sharp volume.
  • the receiver array can be a target with a coating sensitized to X-ray, UV, VIS or IR radiation, which is read out by scanning.
  • the receiver array can be designed as a CCD matrix camera. This enables optimal image acquisition for standard tasks with a good signal-to-noise ratio.
  • the receiver array can be designed as a CMOS matrix camera. The random access to pixels enables tracking of moving elements of the object surface in space.
  • the detection of radiation from the elements of the object surface by the elements of the receiver array takes place in a time range Δt_B in which the displacement of at least one luminous flat element of the structured luminous array is also carried out, at least one signal value being obtained in each case. Within the time range Δt_B, at least one luminous flat element of the structured luminous array undergoes an at least approximately predetermined displacement (including a predetermined optical displacement as a result of a geometrical-optical path-length change), and so at least one luminous flat element emits at different points in time from at least two different locations.
  • the focus volume of at least one image of at least one luminous flat element of the at least one structured luminous array is formed in the object space, and the focus volume of at least one image of at least one element of the receiver array is likewise formed in the object space, and at least one element of the at least one object surface coincides with these at least approximately once due to the implementation of the predetermined displacement of at least one luminous surface element of the structured luminous array with at least one displacement component parallel to the optical axis of the illumination objective.
  • the coincidence of the focus volume of an image of a luminous flat element of the structured luminous array, of the focus volume of an image of an element of the receiver array, and of at least one element of the at least one object surface occurs at least once and at least approximately in the object space.
  • when the coincidence occurs, at least the element of the receiver array involved in this coincidence experiences at least once a time-varying irradiation compared with the case of non-coincidence, and so this element of the receiver array detects a changed signal at least once.
  • a luminous flat element can be firmly bound to a structure of a body, for example to a maximum of transparency on a displaceable transmission grating in connection with a radiation source.
  • the positions of the luminous flat elements of the structured luminous array and the positions of the images of the luminous flat elements in the object space are determined according to the Newtonian imaging equation from the position of the illumination lens in the 3D recording arrangement and the focal length f_B of the illumination lens, and are realized as far as the associated approximations are acceptable.
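The Newtonian imaging equation referred to above relates object and image positions measured from the two focal points, z · z′ = −f². A minimal sketch of this relation (the sign convention and function name are our illustrative choices, not the patent's):

```python
def newton_image_position(z, f):
    """Newton's imaging equation z * z' = -f**2.

    z: object position measured from the front focal point
       (negative when the object lies in front of it)
    f: focal length of the lens, e.g. f_B of the illumination lens
    returns z': image position measured from the back focal point
    """
    return -f * f / z
```

For example, an object 100 mm in front of the front focal point of an f = 50 mm lens (z = −100) is imaged 25 mm behind the back focal point, consistent with the thin-lens equation (object 150 mm from the lens, image 75 mm behind it).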
  • the shift is preferably carried out at a constant speed.
  • the structured, luminous array can be an electronically controllable, structured, luminous array, for example an LCD with an upstream radiation source, which is shifted in a straight line by a movement system.
  • These distances can be understood in the object space as the traces of successively imaged light points, for example as the shifted extrema of the luminance in an illuminated line grating at maximum transparency, or as the trace of an image of an illuminated slit.
  • the traces of the light spots can be observed on an object surface if the image of a light spot and the observed point of the object surface at least approximately coincide. Due to the triangulation effect, a lateral migration of the image of the light point can be observed when the image of a luminous flat element and the image of the element of the receiver array coinciding with it are shifted.
  • the offset from the starting position increases with increasing deviation of the illuminated area of the object surface from the current point of coincidence of the two images, the element of the receiver array detecting an increasingly blurred image of the luminous flat element.
  • This method makes it possible, through a predetermined displacement process of the structured luminous array in a displacement direction with a component in the z_A direction, to make a clear statement about the presence of an element of the object surface at a predetermined location in the object space.
  • the amount in the z_A direction is selected such that the focus surface gradually passes through the object space from a close range to a far range through a predetermined, controlled displacement of the luminous surface elements of the structured luminous array.
  • This method is carried out with the entirety of the luminous surface elements of the structured luminous array and the entirety of the elements of the receiver array for the entirety of the elements of the object surfaces in the detection volume of the 3D recording arrangement.
  • an interpolation can be carried out to improve the accuracy of the determination of the location of a detected element of the object surface.
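One common way to realize the interpolation mentioned above is a parabolic fit through the sampled signal maximum and its two neighbours, giving a sub-step estimate of the coincidence position. This is an illustrative sketch, not the patent's prescribed method:

```python
def parabolic_peak(samples, i):
    """Sub-sample peak location from a parabola fitted through
    samples[i-1], samples[i], samples[i+1].

    samples: signal values of one receiver element per displacement step
    i: index of the discrete maximum (requires 1 <= i <= len(samples)-2)
    returns: fractional step index of the interpolated peak
    """
    a, b, c = samples[i - 1], samples[i], samples[i + 1]
    denom = a - 2 * b + c  # curvature; zero for a flat triple
    if denom == 0:
        return float(i)
    return i + 0.5 * (a - c) / denom
```

Converting the fractional step index to depth via the known displacement calibration then yields the refined location of the detected surface element.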
  • the aperture diaphragm of the imaging lens can preferably be made small; for example, the relative aperture can be 1:22, so that the focus volume of the images of the elements of the receiver array has a great depth.
  • the illumination lens can have a comparatively large relative aperture.
  • the relative aperture can be 1:2.
  • its focus volume can then have a small depth.
  • in the case shown here, the focus volume of an image of a luminous flat element moves in each case within the focus volume of an image of an element of the receiver array. An element of the object surface can thus be permanently in the focus volume of an image of a receiver element. However, only when the focus volume of the image of a luminous flat element coincides with an element of the object surface is this element of the object surface illuminated in a structured manner. Thus, by means of an element of the receiver array that is read out multiple times during the predetermined displacement of a luminous flat element, a signal curve with a relative maximum at the time of coincidence can be detected.
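The evaluation described above reduces, per receiver element, to locating the relative maximum of its signal curve over the displacement steps and mapping that step to the depth of the travelling focus plane. A minimal sketch (the step-to-depth table is an assumed calibration, not part of the patent text):

```python
def depth_from_signal(signal_curve, depths):
    """Map the signal maximum of one receiver element to a depth.

    signal_curve: intensity read-outs of one receiver element, one per
        displacement step of the luminous array
    depths: calibrated depth (z) of the travelling focus plane at each
        step (assumed known from the arrangement's geometry)
    """
    k = max(range(len(signal_curve)), key=lambda i: signal_curve[i])
    return depths[k]
```

Applying this to every element of the receiver array in parallel yields one depth value per pixel, i.e. the 3D point cloud of the scene.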
  • the predetermined displacement of the luminous surface elements can be controlled electronically. In addition, an electronically controlled change in the optical path length in the space in front of the receiver array can also be carried out.
  • a method is proposed for 3D recording of object surface areas in a scene in which the luminous surface elements are shifted relative to the illumination lens, each along a separate displacement path, preferably in the time intervals Δt_i of the detection of light.
  • the luminous surface elements preferably have an at least approximately predetermined constant luminance at least at a point in time t_i within a time interval Δt_i.
  • the luminous surface elements are each positioned on a B segment BS_Aj, the B segments BS_Aj representing the target locations for the luminous surface elements at a point in time t_i within the time interval Δt_i.
  • the images of these B segments BS_Aj are preferably formed in the object space by imaging with at least one illumination lens, forming a segment bundle SB_1 with a convergence point K_1.
  • the convergence point K_1 lies at a distance d_K from the optical axis of the illumination lens that amounts at least to the distance d of the pupil center PZ_0B of the illumination lens from the pupil center of the most distant imaging lens. Accordingly, the sensitivity to depth is low.
  • the maximum distance d_K max is preferably five times this distance d.
  • the value d_K = d is preferably realized.
  • at least in a time range Δt_B during the displacement operation of the luminous surface elements, exactly one image of a receiver element and exactly one image of a luminous surface element are positioned in the object space at least approximately together on the image of a B segment BS_Aj, at least at a single time t_i within each time interval Δt_i of the detection. Thus, at least at this point in time t_i, a pair is formed in the object space from the image of a receiver element and the image of a luminous surface element; such pairs are generated in the object space and pushed through the object space. Focus volumes of images of the luminous surface elements coincide with surface elements of the object surface at least once in the displacement process.
  • at this point in time t_i, a current point of coincidence is formed in the centroid of the current intersection of the focus volumes of the two images.
  • the elements of the receiver array preferably detect a signal curve with at least one relative extremum of the signal magnitude in the time interval Δt_i of the coincidence.
  • the focal length f_B of the illumination lens and the focal length f_A of the imaging lens are determined and implemented accordingly.
  • both the luminous surface elements of the structured luminous array and the elements of the receiver array are imaged at least approximately into the same plane in a part of the object space.
  • the electronically controlled displacement of the luminous surface elements to another location can take place using micromechanical means. An electronically controlled shift is also possible.
  • the average object point brightness and the color information can also be obtained from the signal curve by using a color camera.
  • a method for 3D recording is proposed in which, preferably in the time intervals Δt_i of the detection of light, a luminous surface element is positioned on each B segment BS_Aj.
  • the B segments BS_Aj are thereby directed at the pupil center PZ_0A of the imaging lens in the array space, so that the convergence point K_1 is at least approximately positioned in the pupil center of the imaging lens.
  • the convergence point K_1 is also positioned in the pupil plane of the illumination objective, and so during the displacement process an image of a receiver element and an image of a luminous flat element are positioned in the object space at least approximately together on the image of a B segment BS_Aj.
  • a pair with a fixed assignment can be formed from the image of a receiver element and the image of a luminous flat element in the object space, and during the displacement process of the luminous flat elements one image of a receiver element and one image of a luminous flat element are brought to coincidence in the object space at least approximately once.
  • the receiver array can be fixed and set such that the "continuous" focus area or focus plane of the illumination lens coincides at least once with the focus plane of the imaging lens. It is advantageous if the "continuous" focus plane of the illumination lens always remains within the comparatively large depth of field of the imaging lens.
  • This approach can be implemented with electronic gratings with a very high number of pixels. Electronic gratings can be continuously stretched or compressed in the displacement process in order to meet the condition of convergence of the segment bundle.
  • a method is proposed for 3D recording of object surface areas in a scene in which a luminous surface element, with an at least approximately constant relative luminance, is preferably positioned on a B segment BS_Aj at least at one point in time t_i within each time interval Δt_i of the detection of light.
  • the convergence point K_1 is positioned at least approximately in the focal plane of the illumination objective in the object space and additionally in the pupil center PZ_0A of the pupil of an imaging objective in the object space.
  • an image of a receiver element and an image of a luminous surface element are positioned in the object space at least approximately together on the image of a B segment BS_Aj at least at a time t_i within each time interval Δt_i of the detection; thus, at least at this point in time t_i, a pair with a fixed assignment is formed in the object space from the image of a receiver element and the image of a luminous surface element, and pairs with a fixed assignment are generated in the object space.
  • the B segments BS_Aj are positioned parallel to a straight line g_AP, the straight line g_AP intersecting the focal point F_AB of the illumination lens in the array space and having a slope whose magnitude equals the quotient of the distance of the pupil center PZ_0A of the pupil of the imaging lens in the object space from the axis of the illumination lens, divided by the focal length f_B of the illumination lens, this slope of the straight line g_AP being related to the axis of the illumination lens.
  • Two central perspective lenses with mutually inclined axes can be used.
  • the array can preferably be fixed and set in such a way that the "continuous" focus plane of the illumination objective coincides at least once with the focal plane of the imaging lens.
  • a straight-line relative displacement of the receiver array relative to the imaging lens is carried out parallel to the optical axis of the imaging lens. During the shift, signal values are read out successively from each receiver element, so that a signal curve is formed by means of each receiver element. When the displacement paths of the elements of the receiver array are imaged with the imaging lens, their images in the object space at least approximately form a segment bundle SB_2 with a convergence point K_2 in the focal point F_0A of the imaging lens.
  • the displacement of the receiver array is carried out in such a way that, during the displacement process, an image of a receiver element and an image of a luminous flat element in the object space are brought to coincidence at least approximately together on the image of a B segment BS at least at a time t_i within each time interval Δt_i, and are shifted, and pairs of images are thus generated in the object space. Since each element of the receiver array enables the acquisition of a signal curve, parallel processing is possible. Furthermore, the imaging lens can also be moved relative to the receiver array.
  • a method is proposed for 3D recording of object surfaces in a scene in which preferably the convergence point K_1 of the segment bundle SB_1, together with the convergence point K_2 of the segment bundle SB_2, is brought in the object space at least approximately to coincidence both with the focal point F_0A and with the pupil center PZ_0A of the pupil of the imaging lens, the illumination lens and the imaging lens being at least approximately telecentric on the array side.
  • the luminous surface elements are shifted on displacement paths at least approximately parallel to a straight line g_A.
  • the straight line g_A passes through the focal point F_AB of the illumination lens in the array space.
  • its slope is realized with the magnitude of the quotient of the focal length of the illumination lens and the distance d of the focal point F_0A of the imaging lens from the axis of the illumination lens in the object space, this slope of the straight line g_A being related to an axis perpendicular to the axis of the illumination lens; because of the telecentricity of the imaging lens in the array space, the line g_A coincides with the line g_AP in this case.
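Reading the slope condition above literally, the displacement line g_A has a slope of magnitude f_B / d relative to an axis perpendicular to the illumination-lens axis, where f_B is the illumination-lens focal length and d the distance of the imaging-lens focal point from the illumination-lens axis. A small numeric sketch under that reading (function names are ours, purely illustrative):

```python
import math


def displacement_line_slope(f_B, d):
    """Slope magnitude of the straight line g_A along which the
    luminous surface elements are shifted: |slope| = f_B / d,
    relative to an axis perpendicular to the illumination-lens axis."""
    return f_B / d


def displacement_angle_deg(f_B, d):
    """Corresponding tilt angle of g_A against that perpendicular axis."""
    return math.degrees(math.atan(f_B / d))
```

For f_B = d the line is tilted at 45 degrees, i.e. equal lateral and axial displacement components per step.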
  • This method enables 3D recording over a very large depth measurement range; an arrangement with parallel and at least approximately identical lenses can advantageously be selected.
  • the structured lighting is preferably carried out with an illuminated line grid.
  • a linear shift is carried out parallel to the optical axis of the imaging lens.
  • a separate displacement path is generated for each element of the receiver array.
  • these displacement paths are imaged with the imaging lens; from the images of these displacement paths a second segment bundle arises, with a convergence point K_2 of the imaged paths in the object space at the focal point F_0A of the imaging lens.
  • the convergence point K_1 and the focal point F_0A of the imaging lens in the object space are brought at least approximately to coincidence.
  • the convergence point K_1 of the paths in the object space is formed in that the locations of a certain relative luminance of the illuminated line grating, for example the maxima of the transmission of the line grating, are displaced on displacement paths at least approximately parallel to a straight line g_A.
  • periodic signals with a modulation maximum can be detected in the elements of the receiver array, from which the information about the absolute phase of an object point can be obtained in connection with the arrangement. If the illuminated line grating is moved at a constant speed, periodic signals with a constant frequency are obtained in the elements of the receiver array. This simplifies the signal evaluation and can therefore lead to a considerable reduction in computing time. The focus areas are brought to coincidence by the synchronous positioning of the structured luminous array and the receiver array.
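When the grating moves at constant speed, each receiver element sees a sinusoid of known, constant frequency, and its phase can be estimated with a simple lock-in style sum. This sketch assumes an ideal cosine signal and is only meant to illustrate why a constant frequency simplifies the evaluation; it is not the patent's evaluation algorithm:

```python
import math


def absolute_phase(signal, f, dt):
    """Lock-in phase estimate of a constant-frequency signal.

    signal: intensity samples of one receiver element, assumed of the
        form cos(2*pi*f*t + phi)
    f: known modulation frequency (set by the constant grating speed)
    dt: sampling interval
    returns: phi in radians
    """
    s = sum(v * math.sin(2 * math.pi * f * k * dt) for k, v in enumerate(signal))
    c = sum(v * math.cos(2 * math.pi * f * k * dt) for k, v in enumerate(signal))
    return math.atan2(-s, c)
```

Because f is fixed, the sine/cosine reference tables can be precomputed once for all pixels, which is where the reduction in computing time comes from.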
  • the position of the at least one luminous flat element can also be stationary and in this case at least components of the illumination lens move.
  • a method is proposed for 3D recording of object surfaces in a scene in which, preferably in the time intervals Δt_i of the detection of light within a time range Δt_B, one luminous surface element at a time is arranged at least approximately at its own location O_ABj in the structured luminous array relative to the illumination lens, is made to light up by control, and is imaged by the illumination lens, this luminous surface element always being imaged to a predetermined location O_0Bj in the object space at least at a point in time t_i within the time interval Δt_i.
  • this image location O_0Bj of a luminous flat element in the object space is changed by a control, by actuating and lighting up a different, predetermined flat element, so that the image of a luminous flat element moves on a controllable path curve, built up from segment increments Δl_0, the images of the segments Δl_A of the luminous surface elements.
  • an image of a detected and read-out element of the receiver array is coincident in the object space with the image of a luminous flat element at least at a time t_i within the time interval Δt_i, and thus a pair of images with changing members is generated, which takes on different positions in the object space.
  • the object space is gradually penetrated in depth by such pairs.
  • focus volumes of the images of the luminous flat elements each coincide with a flat element of the object surface at least once in the time range Δt_B, and the detected and read-out elements of the receiver array then exhibit, in the time interval Δt_i of the coincidence, a signal curve with at least one relative extremum of the signal magnitude, the time range Δt_B being made larger than the time interval Δt_i, so that at least one time interval Δt_i fits temporally into the time range Δt_B.
  • the structured luminous array and the receiver array can represent rigid, preferably three-dimensional structures; for example, the luminous array can have light-emitting diodes or vertically emitting laser diodes in a 3D arrangement. Little by little, individual surface elements are electronically controlled and made to light up. By controlling predetermined luminous surface elements and reading out elements of a receiver array whose images in the object space represent a pair of images, an extremum in the signal value of an element of the receiver array is obtained if and only if the image pair at least approximately coincides with an element of the object surface.
  • a luminous surface element represents a small volume element due to its fixed position in the composite of the structured, luminous array and the parameters of its representation in the object space.
  • the mechanical structure of the luminous array can represent a 3D model of the object surface to be examined, and the luminous surface elements are simultaneously imaged onto the object surface.
  • This also applies analogously to the structure of the receiver array.
  • This can also have an object-adapted 3D structure.
  • Both the luminous array and the receiver array can have several surfaces with luminous elements or receiving elements in the depth, so that the detection of three-dimensional object surfaces of great depth is possible without mechanical displacement. In this way, unknown object surfaces can be detected in a defined measuring volume.
  • a method is proposed for 3D recording of object surfaces in a scene in which, preferably with object surfaces illuminated by a radiation source and with a first and at least a second imaging beam path for imaging the object surfaces, a line of symmetry is formed between the two axes of the two imaging lenses.
  • at least one receiver array is assigned to each imaging beam path, and the two receiver arrays each have elements that detect light from the elements of the illuminated object surfaces in the object space during the acquisition process in the time range Δt_B; the two receiver arrays are each moved to another location during the recording process.
  • the light from the elements of the object surfaces is detected for a period of time Δt_i by the elements of the receiver arrays, and the elements of the receiver arrays are then read out, signal values being obtained in each case.
  • the two receiver arrays are shifted simultaneously on displacement paths AS_A1 and AS_A2.
  • the images of the displacement paths AS_A1 and AS_A2, the paths AS_01 and AS_02, are positioned in the object space at least approximately on the line of symmetry between the two axes of the lenses.
  • a convergence point K_21 is formed from the segment bundle SB_21 of the images of the displacement paths AS_A1j of the individual elements of the first receiver array, the paths AS_01j, and a convergence point K_22 is formed from the segment bundle SB_22 of the images of the displacement paths AS_A2j of the individual elements of the second receiver array, the paths AS_02j. The convergence point K_21 and the convergence point K_22 are brought to coincidence on the line of symmetry and form a convergence point K_0 on the line of symmetry. The two receiver arrays are shifted such that their images at least partially coincide in the object space, so that the images of the elements of the first receiver array and the images of the elements of the second receiver array are brought in the object space at least approximately into coincidence in pairs, the pair-forming elements representing corresponding elements in the two receiver arrays.
  • a current point of coincidence is preferably formed from two images of elements, and this point is pushed through the object space. This is preferably done with all elements of the receiver arrays.
  • Signal profiles Si of the first receiver array are preferably formed by reading out the elements during the displacement of the first receiver array.
  • the displacement of the first receiver array is carried out parallel to a straight line g AlP and so the elements of the first receiver array are shifted at least approximately parallel to a straight line g AiP on displacement paths AS Aij .
  • signal curves S 2 of the second receiver array are formed by reading out the elements during the displacement of the second receiver array, and the displacement of the second receiver array is carried out parallel to a straight line g A2P , and the elements of the second receiver array are thus Arrays shifted at least approximately parallel to a straight line g A2P on displacement paths AS A2j , the displacement of the second receiver array taking place at least approximately simultaneously with that of the first receiver array.
  • the straight line g A1P intersects the line of symmetry at a point P A1 in the main plane of the first imaging lens in the array space, and the straight line g A2P intersects the line of symmetry at a point P A2 in the main plane of the second imaging lens; in addition, the straight line g A1P contains the focal point F A1 of the first imaging lens and the straight line g A2P contains the focal point F A2 of the second imaging lens in the array space. Due to the natural structuring of the illuminated or even self-luminous object surface, the signal profiles S 1j and S 2j recorded in each element of the receiver arrays are more or less modulated.
  • the z 0 position of the respective associated element of the object surface is determined by evaluating this modulation, which occurs particularly in the sharply imaged elements of the object surface.
  • the two signal profiles S 1j and S 2j of two corresponding elements 1j and 2j of the receiver arrays are stored in the memory of a computer during the displacement of the two receiver arrays.
  • the elements of two receiver arrays represent corresponding elements, the images of which coincide in the object space in a focus volume at least at one point in time.
  • an element of the first and an element of the second receiver array in a common focus volume form, at least at one point in time, a pair of corresponding elements.
  • each of the two signal profiles S 1j and S 2j is divided, by means of a window function with at least one window, with a minimum window length of two signal values and a maximum window length corresponding at least approximately to the length of the signal profiles S 1j and S 2j, into overlapping signal pieces S 1 part j and S 2 part j formed from the windows.
  • window lengths of, for example, 8 or 16 signal values are advantageous.
  • this window function is shifted synchronously, by at least one signal value corresponding to one increment of the displacement of the receiver arrays, over each of the two signal profiles S 1j and S 2j, and from each current window in position k, with 1 ≤ k ≤ m, a signal piece S 1 part position k j and a signal piece S 2 part position k j are formed.
  • the signal pieces S 1 part position k j and S 2 part position k j formed one after the other cover each of the two signal profiles S 1j and S 2j in a partial region, the shifting of the window function starting at the same end of both signal profiles in both cases.
  • from each pair of signal pieces S 1 part position k j and S 2 part position k j the cross-correlation function is calculated, but one of the two signal pieces is first inverted, that is to say all its values are mirrored; thus, from an original signal piece S 1 part position 1 j and an inverted signal piece S 2 part position 1 INV j, the maximum of the cross-correlation function MCC 12 j position 1 is calculated and stored.
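The core operation of this step can be sketched in a few lines (a minimal illustration, not part of the patent; the function name and test values are hypothetical): one signal piece is mirrored before the cross-correlation, and the maximum of the normalized cross-correlation function is stored.

```python
import numpy as np

def mcc_for_window(s1_piece, s2_piece):
    """Maximum of the cross-correlation between an original signal
    piece and the mirrored (inverted) second piece, as described for
    the pieces S 1 part and S 2 part."""
    s2_inv = s2_piece[::-1]                # mirror all values of the second piece
    # remove the mean so the correlation responds to the modulation only
    a = s1_piece - s1_piece.mean()
    b = s2_inv - s2_inv.mean()
    cc = np.correlate(a, b, mode="full")   # cross-correlation function over all lags
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:                       # unmodulated piece, e.g. below an intensity threshold
        return 0.0
    return cc.max() / denom                # normalized maximum MCC

s1 = np.array([0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6])
s2 = s1[::-1]                              # opposite running direction, as in the corresponding pair
print(mcc_for_window(s1, s2))              # close to 1 for matching pieces
```

For matching pieces running in opposite directions the value approaches 1; unmodulated pieces can be excluded beforehand by the intensity threshold mentioned below.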
  • the inversion is necessary in order to obtain correlatable signals, since during the displacement the imaging beams of the elements of a corresponding pair move in opposite directions, that is, for example, towards one another, along a track in the object space in an at least approximately identical section of the scene.
  • This track lies parallel to the main section of the 3D recording arrangement.
  • a maximum value curve is formed from the m calculated maxima MCC; in this maximum value curve, the resultant maximum M m j is determined, and the location of the maximum M m j in the maximum value curve is assigned to the two original signal profiles and thus to the displacement path of the two receiver arrays.
  • the maximum value curve calculated in this way can have the course of a Gaussian function.
  • an intensity threshold can be used, whereby signal pieces with a very low average intensity are excluded from further processing.
  • the location of the respective maximum M m j is defined as the location of the image, in the array space, of the respective element of the object surface associated with the two corresponding elements 1j and 2j.
  • from the location of this maximum M m j in the array space, the z 0 coordinate of the respective element of the object surface in the z 0 direction, and also its x 0 and y 0 position, are determined, since the geometry of the 3D recording arrangement is known. In this way, the positions of all elements of an object surface from which signal profiles are recorded can be calculated, the geometry of the 3D recording arrangement being known and the displacements, including the step size of the displacement, of the two receiver arrays being predetermined.
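As a numerical illustration of this assignment (all values are hypothetical; the patent does not fix units, step sizes, or a particular mapping routine): the index of the resultant maximum in the maximum value curve, together with the known displacement increment and Newton's imaging equation mentioned further below, yields a z 0 coordinate.

```python
import numpy as np

# maximum value curve over the m window positions (hypothetical values)
mcc_curve = np.array([0.05, 0.12, 0.35, 0.81, 0.97, 0.78, 0.30, 0.10])

k_max = int(np.argmax(mcc_curve))   # location of the resultant maximum M
step = 2.0e-6                       # assumed displacement increment of the arrays in metres
z_shift = k_max * step              # position along the displacement path in the array space

# map the array-space position into the object space with Newton's
# imaging equation z_A * z_0 = -f**2 (distances measured from the focal
# points); the start position and focal length are assumed values
f = 50e-3                           # assumed focal length of the imaging lens
z_A = -1.0e-3 + z_shift             # image-side distance from the focal point
z_0 = -f**2 / z_A                   # object-side distance from the focal point
print(k_max, z_0)
```

The x 0 and y 0 positions then follow from the lateral element coordinates and the same known geometry.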
  • the axis of a first imaging lens for imaging the object surfaces can be aligned parallel to the axis of a second imaging lens for imaging the object surfaces. It is possible that the main plane of the first imaging lens in the array space and the main plane of the second imaging lens coincide at least approximately in a common plane and the receiver arrays are at least approximately jointly in one plane.
  • the point P A1 lies on the line of symmetry and the point P A2 lies on the line of symmetry, and so the two points P A1 and P A2 are brought to coincidence at least approximately in one point P A.
  • the 3D point cloud can also be obtained from free-space scenes in the background of the scene.
  • a method for 3D recording of object surfaces in a scene in which preferably illuminated object surfaces are imaged with a first and at least one second imaging beam path.
  • the two receiver arrays are shifted simultaneously and parallel to the respective optical axes of the parallel, at least approximately identical imaging beam paths, whose main planes coincide, with the object surfaces in the scene being illuminated.
  • the signal profile S 1z is formed by reading out laterally adjacent elements of the first receiver array during the displacement of the first receiver array, so that exactly those elements of the receiver array are used for signal formation which lie on lines aligned parallel to a straight line g A1P that intersects the point P A in the common main plane of the imaging lenses.
  • the signal profile formed corresponds at least approximately to the signal profile S 1 which would arise during a real shift parallel to a straight line g A1P.
  • the signal profile S 2z is formed by reading out laterally adjacent elements of the second receiver array during the displacement of the second receiver array, so that exactly those elements of the receiver array are used for signal formation which lie on lines aligned parallel to a straight line g A2P that intersects the point P A in the common main plane of the imaging objectives.
  • the signal profile S 2z thus formed corresponds at least approximately to the signal profile S 2 which would arise during a real shift parallel to a straight line g A2P.
  • a current point of coincidence of elements of the two receiver arrays is formed in succession at different predetermined locations in the object space in the time range Δt B.
  • the two signal profiles S 1j, S 2j of two elements of the receiver arrays which correspond at least at one point in time are evaluated, by means of the correlation method already described above with two windowed signal profiles and piecewise inversion of signal pieces, for determining the z 0 position of an element of the object surface; the z 0 coordinate of the respective element of the object surface is calculated, and thus also its x 0 and y 0 position, and so the entire 3D point cloud of object surfaces in a scene is calculated, the geometry of the 3D recording arrangement being known and the displacements of the receiver arrays being predetermined.
  • the imaging lens can also be moved relative to the receiver array.
  • a method for 3D recording of object surfaces in a scene with at least one electromagnetic radiation source is proposed, which is designed as a structured, luminous array with regions of different luminance.
  • at least one radiation source is preferably designed as at least one structured array as a structured, luminous array with luminous surface elements.
  • a structured array in the manner of a line grating with an upstream radiation source can preferably also be used.
  • an electronically controllable line grid can be implemented in the array space.
  • the radiation source and the structured array together form the structured, luminous array.
  • the locations of certain relative luminance of the structured luminous array and also the local extremes of the luminance of this structured luminous array can be made electronically displaceable.
  • the radiation source can be designed for radiation in the visible and in the invisible spectral range, for example in the spectral range from 750 nm to 900 nm.
  • at least one illumination beam path is arranged with at least one illumination objective.
  • the structured, luminous array is assigned to the lighting objective.
  • an image of the structured, luminous array can also be assigned to the lighting objective for imaging.
  • the lighting objective has an effective aperture diaphragm with an extension D B and a diaphragm center BZ B.
  • the structured, luminous array and the lighting lens are used for structured lighting of the object surfaces in the scene.
  • at least one imaging beam path with at least one imaging stage is arranged for imaging the elements of the object surfaces in the scene.
  • At least one receiver array is assigned to this imaging lens.
  • the imaging lens has an effective aperture diaphragm with an aperture center BZ A for imaging the elements of the object surfaces.
  • This imaging lens is assigned at least one receiver array with elements that detect light from the elements of the structured illuminated object surfaces in the object space during the recording process.
  • the distance d of the pupil center PZ 0B of the illumination lens, as an image of the aperture center BZ B in the object space, from the pupil center PZ 0A of the imaging lens, as an image of the aperture center BZ A in the object space, is at least one eighth of the extent D B of the aperture diaphragm of the illumination lens.
  • an image of a luminous surface element in the object space is formed from a luminous surface element in a luminance distribution with a preferred, at least approximately predetermined constant relative luminance by imaging with the illumination lens.
  • a movement system with preferably at least one movable component is arranged, which is assigned to the structured, luminous array.
  • the displacement paths of the luminous surface elements in the array space are preferably formed by the mechanical movement of the structured, luminous array.
  • it is also possible for the luminous surface elements to be moved electronically at the same time, for example in the lateral direction, and for the movement system to move the structured, luminous array parallel to the optical axis of the illumination objective by means of at least one movable component. After imaging these displacement paths through the illumination lens into the object space, their images at least approximately form a line bundle SB 1 with a convergence point K 1.
  • the displacement paths of the luminous surface elements can be arranged at least approximately in parallel, and thus the convergence point K 1 can be positioned at least approximately in the focal plane of the illumination lens in the object space and in the pupil center of the imaging lens in the object space.
  • the luminous array can be designed as an electronically controllable line grid with controllability of the location of the lines and the line width.
  • the lines can be arranged perpendicular to the main section, and the displacement paths of the luminous flat elements, and thus also of the luminous surface elements with local extrema of the luminance, can be formed in the array space as a result of the mechanical movement of the structured, luminous array and the electronic control of the structured, luminous array. From these displacement paths in the array space, at least approximately one line bundle with a convergence point K 1 can be formed in the main section and in each sectional plane parallel to the main section.
  • the convergence point K 1 of the line bundle can be arranged in the pupil center PZ of the imaging objective in the array space.
  • the displacement distances of the luminous surface elements can be arranged at least approximately parallel to a defined straight line g AP .
  • the luminous surface elements in a luminance distribution preferably have an at least approximately constant predetermined luminance.
  • the straight line g AP intersects the focal point F AB of the illumination lens in the array space and has a slope whose magnitude is given by the quotient of the distance of the pupil center PZ 0A of the pupil of the imaging lens in the object space from the axis of the illumination lens, and the focal length f B of the illumination lens, this slope of the straight line g AP being referred to the axis of the illumination lens.
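Expressed as a formula (a restatement of the quotient just described, not an addition to the claims):

```latex
% slope of the straight line g_AP, referred to the axis of the
% illumination lens; a(PZ_{0A}) denotes the distance of the pupil
% centre PZ_0A of the imaging lens in the object space from the axis
% of the illumination lens, f_B the focal length of the illumination lens
\left| m_{g_{AP}} \right| = \frac{a(PZ_{0A})}{f_B}
```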
  • a component of the movement system can be assigned to the receiver array, so that during the mechanical movement of the receiver array on a displacement path its elements are assigned displacement paths AS Aj on parallel straight lines; from the images AS 0j of these lines AS Aj, when imaged by the imaging lens, at least approximately a line bundle SB 2 with a convergence point K 2 is preferably formed in the object space.
  • the convergence point K 1 and the convergence point K 2 can be brought at least approximately to coincidence with the focal point F 0A and the pupil center PZ 0A of the pupil of the imaging lens in the object space.
  • the imaging lens can be implemented telecentrically on the side of the space of the arrays.
  • a component of the movement system can be assigned to the receiver array, so that during the mechanical movement of the receiver array on a displacement path its elements are assigned displacement paths AS Aj on parallel straight lines, at least approximately at least one line bundle SB 2 with a convergence point K 2 being formed in the object space from the images of these lines when imaged by the imaging lens.
  • the convergence point K 1 and the convergence point K 2 can be brought at least approximately to coincidence with the focal point F 0A and the pupil center PZ 0A of the pupil of the imaging lens in the object space, and the illumination lens and the imaging lens can each be designed telecentrically on the array side.
  • the axes of the illumination lens and the imaging lens can be arranged parallel to one another and the focal planes of the same can be brought to coincide in the object space.
  • the components of the movement system can be arranged such that, in the array space, with the focal point F AB of the illumination lens as a reference point for the luminous array, a total direction of movement at least approximately parallel to a straight line g A is realized, so that the elements of the structured, luminous array move on straight lines parallel to the straight line g A; this straight line g A intersects the focal point F AB of the illumination lens in the array space and has a slope whose magnitude is given by the quotient of the focal length f B of the illumination lens and the distance d of the focal point F 0A of the imaging lens in the object space from the axis of the illumination lens, this slope of the straight line g A being referred to a straight line perpendicular to the axis of the illumination lens.
  • the structured array can be formed at least on a partial area of a disk, which is preferably associated with a rotary precision bearing with a shaft with a rotary motor, so that a rotating disk is formed.
  • the rotating disk can be formed with transparent plate sectors of different geometrical-optical thickness.
  • the receiver array can represent a color camera.
  • the use of a special receiver array with RGB channels and a fourth channel, the NIR channel, for example with a wavelength interval of 750 nm to 900 nm, is possible for obtaining the information for the 3D point cloud.
  • the radiation source is designed as a structured, luminous array with luminous flat elements by means of at least one structured array.
  • at least one illumination beam path with at least one illumination lens, which has an effective aperture diaphragm with an extent D B and a diaphragm center BZ B, is arranged for structured illumination of the object surfaces in the object space.
  • the lighting lens is assigned to the structured, luminous array, including an image thereof.
  • the at least one illumination beam path is assigned an imaging beam path with at least one imaging stage for the at least one object surface, with at least one imaging lens, assigned to the receiver array or an image of the same, for imaging the elements of the object surface, which has an effective aperture diaphragm with an aperture center BZ A.
  • a receiver array is used to detect electromagnetic radiation from the elements of the illuminated object surfaces in the object space during the recording process.
  • the distance d from the pupil center PZ 0B of the illumination lens as an image of the aperture center BZ B in the object space, from the pupil center PZ 0A of the imaging lens, as an image of the aperture center BZ A in the object space, is at least one eighth of the extent D B of the aperture diaphragm of the illumination lens.
  • the luminous flat elements have an at least approximately predetermined luminance in a luminance distribution, so that at least one image of a luminous flat element is formed in the object space by imaging with the illumination objective.
  • the sharpness volume of at least one image of a luminous surface element of a structured, luminous array is permanently fitted in the object space, by the predetermined assignment of the luminous surface elements to the illumination lens, the assignment of the elements of the receiver array to the imaging lens, and the assignment of the illumination lens to the imaging lens in the 3D recording arrangement using Newton's imaging equation, into the focus volume which is given by the totality of the images of the elements of the receiver array in the object space.
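The Newton imaging equation referred to here relates object-side and image-side distances measured from the focal points; for a lens of focal length f it reads:

```latex
% Newton's imaging equation (distances z and z' measured from the
% object-side and image-side focal points respectively)
z \, z' = -f^{2}
% fixing the axial positions of the luminous surface elements and of
% the receiver elements via this relation fits the two sharpness
% volumes permanently into one another
```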
  • the sharpness volume which is given by the totality of the images of the elements of the receiver array has, in the direction of beam propagation, at least as great a depth as the sharpness volume of an individual image of a luminous surface element.
  • an image of a luminous surface element of a structured array is permanently assigned to at least one image of an element of the receiver array.
  • a structured, luminous array with a plurality of permanently arranged luminous surface elements can be designed in a spatial structure from the data record from a known target object surface.
  • the lighting lens creates images of the same at different locations.
  • at least one element of a receiver array is arranged in the optically conjugated locations in the array space of the imaging objective.
  • a threshold can be used; in this case, a parallel detection is carried out by the elements of the receiver array. This can be done at high speed.
  • the structured array can be designed as a transparent microlens array and the focal length and the axial position of the microlenses can be designed such that their foci are arranged in a 3D area, which at least approximately represents an optically conjugated surface to the target surface.
  • the foci of the microlenses represent at least approximately some optically conjugated locations on the target surface of a test specimen. The deviation from a target position can be determined by determining the focus position in the image.
  • At least one relief with a spatial structure with at least one period in the form of at least one ramp with at least one inclined ramp surface in the compensation surface can be formed.
  • luminous surface elements are preferably arranged as a binary code pattern. These flat elements are formed by window areas which are illuminated by the radiation source.
  • the ramp surfaces are preferably inclined in such a way that the compensating line AG Aj of the oblique ramp surface in the main section, when imaged by the illumination lens into the object space, yields as its image a straight line AG 0j which at least approximately aims at the pupil center PZ of the imaging lens.
  • the different compensating lines AG 0j of several different ramps, after their imaging by the illumination lens, form from their images a bundle of straight lines with one convergence point K 1.
  • the illumination lens preferably has a high aperture.
  • the convergence point K 1 is brought to coincidence, at least approximately, in the pupil center PZ of the imaging objective. This means that when taking pictures of the object surface, a ramp can be traced clearly at all depths without problems with lateral misalignment.
  • the imaging lens can have a comparatively short focal length, shorter than that of the illumination lens, and is stopped down so far that there is a large depth of field. The depth of focus range of the imaging lens thus determines the depth range of the 3D recording arrangement.
  • in the main section, the images of the ramps form a bundle with its point of origin in the pupil center PZ.
  • the ramp images intersect the object surface to be detected. At the intersection of a ramp image with the object surface, a sharp image of the mask is created on the ramp surface.
  • the main planes of the two imaging lenses can be brought to coincidence and each of them can be assigned a receiver array with detecting elements, so that a first and a second receiver array are arranged, to which at least one movement system is assigned.
  • the resulting movement of the first receiver array can take place on a segment AS A1 on the first upper branch of a letter Y; the segment AS A1 can lie parallel to a straight line g A1P which on the one hand intersects the focal point of the first imaging lens in the array space and on the other hand intersects the intersection point P A of the line of symmetry between the two optical axes of the two imaging lenses through the coinciding main planes, so that the detecting elements of the first receiver array move on the lines AS A1j, part of the line of symmetry forming the lower part of the letter Y.
  • the resulting movement of the second receiver array can take place on a segment AS A2 on the second upper branch of the letter Y, and the segment AS A2 can lie parallel to a straight line g A2P, which on the one hand intersects the focal point of the second imaging lens in the array space and on the other hand intersects the intersection point P A of the line of symmetry between the two optical axes of the two imaging lenses through the coinciding main planes.
  • the detecting elements of the second receiver array can thus move on the lines AS A2j.
  • the scene can be an open space scene.
  • the resulting movement of the first receiver array can take place on a path parallel to the optical axis of the first imaging lens, and precisely those elements of the first receiver array can be read out, and a signal profile formed from them, which are located on paths AS A1j which lie parallel to a straight line g A1P, which on the one hand intersects the focal point of the first imaging lens in the array space and on the other hand intersects the intersection point P A of the line of symmetry between the two optical axes of the two imaging lenses through the coinciding main planes.
  • the elements of the first receiver array used for signal formation correspond to those which are located on lines AS A1j, part of the line of symmetry forming the lower part of a letter Y; the resulting direction of movement of the second receiver array can take place over a distance parallel to the optical axis of the second imaging lens, and precisely those elements of the second receiver array are read out, and a signal profile formed from them, which are located on lines AS A2j which lie parallel to a straight line g A2P, which on the one hand intersects the focal point of the second imaging lens in the array space and on the other hand intersects the intersection point P A of the line of symmetry between the two optical axes of the two imaging lenses through the coinciding main planes.
  • the elements of the second receiver array used for signal formation correspond to those which are located on lines AS A2j .
  • the main planes of the two imaging lenses can be brought at least approximately to coincidence, and each of them can be assigned a receiver array with detecting elements, so that a first and a second receiver array with elements are arranged, and the first and the second receiver array each have at least one receiver surface, each of which is perpendicular to the main section.
  • the receiver surface of the first receiver array preferably contains a segment AS A1 on the first upper branch of a letter Y that lies parallel to a straight line g A1P, which on the one hand intersects the focal point of the first imaging lens in the array space and on the other hand intersects the penetration point P A of the symmetry line SL between the two optical axes of the two imaging lenses through the coinciding main planes, so that the detecting elements of the first receiver array are arranged in the main section on the line AS A1.
  • Part of the line of symmetry SL preferably forms the lower part of the letter Y.
  • at least one receiver surface of the second receiver array preferably lies on a path AS A2 on the second upper branch of the letter Y parallel to a straight line g A2P, which on the one hand intersects the focal point of the second imaging lens in the array space and on the other hand intersects the intersection point P A of the symmetry line SL between the two optical axes of the two imaging lenses through the coinciding main planes, so that the detecting elements of the second receiver array are arranged in the main section on the path AS A2.
  • This arrangement enables the detection of illuminated elements of the object surface in the object space on a plane perpendicular to the main section.
  • the structure of the receiver matrices is the same and is arranged in a position symmetrical to the symmetry line SL and at the same height.
  • the procedure is preferably as follows:
  • the signals from the two receiver surfaces are preferably read out line by line, so that the receiver surface of the first receiver array supplies the signal profiles S 1 and the receiver surface of the second receiver array supplies the signal profiles S 2.
  • These signal profiles are evaluated line by line, the lines containing the corresponding elements at the same distance from the main cut.
  • the evaluation is carried out using the correlation method described above with two windowed signal profiles.
  • signal pieces are generated by a window function.
  • a signal piece is inverted by mirroring the signal values.
  • the cross-correlation is carried out in each case between an original signal piece and an inverted signal piece, the signal pieces each representing symmetrically arranged line sections in the 3D arrangement, and a correlation coefficient is obtained and stored in each case.
  • the window of the window function, which can have a length of 64 pixels, for example, is shifted, for example, in increments of one, which here corresponds to one pixel in the respectively evaluated line. For overview measurements, the window can also be shifted by more than one pixel.
  • the length of the window is chosen depending on the relative opening. The window length can also be designed variably.
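The line-by-line window shifting described above can be sketched as follows (an illustrative sketch only; `correlation_curve`, the window length, and the test data are hypothetical): the window is shifted in increments of one pixel over both line signals, the second piece is inverted, and one normalized correlation coefficient per position is stored.

```python
import numpy as np

def correlation_curve(line1, line2, window=64, step=1):
    """Shift a window over both line signals; for each position,
    correlate the first signal piece with the mirrored (inverted)
    second piece and store the normalized correlation coefficient."""
    coeffs = []
    for k in range(0, len(line1) - window + 1, step):
        a = line1[k:k + window] - line1[k:k + window].mean()
        b = line2[k:k + window][::-1]        # inverted signal piece
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        # pieces without modulation get a zero coefficient
        coeffs.append(0.0 if denom == 0.0 else float((a * b).sum() / denom))
    return np.array(coeffs)

rng = np.random.default_rng(0)
line1 = rng.random(256)
line2 = line1[::-1].copy()   # ideal case: second line is the mirror image
curve = correlation_curve(line1, line2)
print(len(curve), int(np.argmax(curve)))   # → 193 96
```

In this ideal mirrored case the curve reaches its maximum exactly where the two line sections correspond; the location of that maximum is then assigned to the depth as described above.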
  • the z 0 position of the elements of the object surface is thus determined in a plane perpendicular to the main section in the symmetry line SL.
  • a lighting objective can be assigned a first imaging objective with a receiver array and a second imaging objective with a receiver array, the pupil center PZ 0A of the first imaging objective being arranged at a distance d from the pupil center PZ 0B of the lighting lens.
  • a spatially structured receiver array is assigned to each of the two imaging lenses, so that a first and a second receiver array are arranged in the array space.
  • the first and the second spatially structured receiver array each have at least two receiver surfaces on spatially separate areas, and the receiver surfaces of the first and of the second receiver array are each arranged such that at least approximately pairs of optically conjugated images of at least parts of the receiver surfaces of the first receiver array and of parts of the receiver surfaces of the second receiver array are formed in the object space.
  • the evaluation is carried out using the correlation method with two windowed signal profiles, as already shown above.
  • FIGS. 1 to 10 show different representations of the invention; FIGS. 11 to 15 serve for understanding the invention and its basis.
  • FIG. 11 shows the illumination of an object with an object surface 200 through a lens 201 by means of a light source realized by a pinhole 202.
  • the opening of the pinhole 202 is arranged on the dashed optical axis 203 of the lens 201.
  • FIGS. 11a, 11b and 11c differ in the distance of the pinhole from the lens, as indicated by the distances A, B and C.
  • in FIG. 11a, the distance between the pinhole 202 and the main plane of the lens 201 is selected so that the image point of the pinhole 202 lies inside the object. Accordingly, the light cone projected into the object space from the pinhole 202 is comparatively extended on the object surface, as indicated by the area A.
  • in FIG. 11b, the distance between the pinhole and the lens is chosen so that the pinhole is imaged exactly on the object surface.
  • the illuminated area on the object surface is very small.
  • in FIG. 11c, the distance between the pinhole and the lens is so large that the light cone has its smallest extent in front of the object surface and diverges again towards the object surface.
  • the image points of FIGS. 11a, 11b and 11c lie on a straight line, as indicated in FIG. 11d. It should also be noted that the different dimensions of the light cones in the situations of FIGS. 11a to 11c lead to differently high light intensities on the surface, and accordingly the illuminated spot appears more or less bright. This is symbolized in FIGS. 11a to 11c by the arrows, whose length represents the intensity of the light scattered from the surface in any direction. The highest intensity of the scattered light occurs in FIG. 11b.
  • FIG. 12 once again illustrates the relationship between pinhole spacing, light spot expansion and intensity of the scattered light.
  • the light from the surface of the object 200 is diffusely scattered on typical surfaces, so that it can be observed from different viewing directions. This is shown in Fig. 13.
  • the spatial assignment is now typically carried out using a detector array by evaluating the signals thus obtained.
  • another effect known from optics is used in the invention, which is illustrated with reference to FIG. 14.
  • FIG. 14a shows the illumination of the object surface (which is located in the image plane in the case shown) with a pinhole whose opening lies exactly on the optical axis 203.
  • the pinhole image point projected onto the object also lies on the optical axis. If, as shown in FIG. 14b, the pinhole is displaced laterally from the optical axis by a distance A, the image point moves laterally on the surface by a distance AA. This is well known from geometric optics.
  • the invention now proposes to observe an object point by means of an observation system which lies exactly on the inclined pixel straight line, which is described by the movement of the pixels through the object space.
  • Luminous points located at different distances from the optical axis, such as openings of a shading diaphragm provided at different distances from the optical axis, result in pixel straight lines of different inclination in the object space for one and the same movement of the shading grating.
  • FIG. 15a: it can be shown that all these pixel straight lines converge in a single point, namely the confocal point, which lies in the focal plane of the imaging optics.
  • the invention makes use of this knowledge by placing the pupil of the observation system in this point; observation of the object surface areas is only possible along the straight line of pixels, as illustrated by FIG. 15b, which simplifies the evaluation in the desired manner.
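The convergence of all pixel straight lines in one focal point can be checked with elementary thin-lens imaging; the sketch below (assumed focal length and offsets) verifies that the image locus of a laterally offset point moved axially is the straight line h′ = −(h/f)·(s′ − f), which passes through (f, 0) for every lateral offset h:

```python
def image_point(s, h, f):
    # thin-lens image of a point at axial distance s (> f) in front of
    # the lens and lateral height h; returns (image distance, image height)
    s_prime = s * f / (s - f)        # lens equation 1/s + 1/s' = 1/f
    h_prime = -h * s_prime / s       # transverse magnification m = -s'/s
    return s_prime, h_prime

f = 50.0                             # illustrative focal length in mm
residuals = []
for h in (1.0, 2.0, -3.0):           # different lateral offsets on the array side
    for s in (60.0, 75.0, 100.0):    # axial positions of the moved grating
        s_prime, h_prime = image_point(s, h, f)
        # every image point satisfies h' = -(h/f) * (s' - f): a straight
        # line through the focal point (f, 0) with offset-dependent slope
        residuals.append(h_prime + (h / f) * (s_prime - f))
max_residual = max(abs(r) for r in residuals)
```
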
  • FIG. 15c shows the signal behavior in an arrangement in which a single opening is used for illumination; compare FIG. 11a.
  • the observed light intensity I is plotted over the grating displacement path X.
  • FIGS. 1 to 10 show the arrangement and the method. A distinction is made between the array space and the object space. The following notation is used: sizes and points of the array space are indicated with the letter A in the first place of the index, those of the object space with the letter O. The second place of the index identifies the associated lens, namely the letter B for the illumination lens 1 and the letter A for the imaging lens 2. In the array space there are a line grating 3 with a grating constant p and an upstream radiation source with visible light, i.e. a light source 4.
  • This light source 4 can be computer-controlled, so that the average illuminance can be adapted to the distance of the respective focus plane according to the photometric law.
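Such a computer-controlled adaptation could, for instance, scale the source output with the square of the focus-plane distance; a sketch under the inverse-square assumption (the reference distance and drive values are illustrative, not from the disclosure):

```python
def source_drive_level(z_focus_mm, z_ref_mm=300.0, drive_at_ref=0.5):
    # illuminance falls off with the square of the distance, so the
    # drive level grows with (z/z_ref)^2 to keep the illuminance at the
    # current focus plane constant; clipped to the maximum drive of 1.0
    level = drive_at_ref * (z_focus_mm / z_ref_mm) ** 2
    return min(level, 1.0)
```
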
  • The line grating 3 is assigned to the illumination lens 1, which has a telecentric beam path in the array space; the grating is arranged perpendicular to the axis and extrafocally.
  • the illumination objective 1 maps the line grating 3 into the object space, as a result of which structured illumination of the object surface 5 occurs at least at one point in time.
  • The two main planes of the illumination lens 1, H_AB and H_OB, are combined in FIG. 1. In lenses of this class the two main planes actually lie far apart.
  • a receiver matrix 6 is assigned to the imaging objective 2, which also has a telecentric beam path in the array space, perpendicular to the axis and extrafocal.
  • the imaging objective 2 images the object surface 5 in the array space.
  • A single imaging beam ABS_O is shown.
  • H_AA and H_OA are also merged in FIG. 1.
  • The illumination lens 1 and the imaging lens 2 are arranged with their optical axes parallel to one another, with the axis spacing d.
  • The illumination lens 1 and the imaging lens 2 have the focal points F_AB and F_AA on the array side and the focal points F_OB and F_OA in the object space.
  • The focal points F_OB and F_OA coincide with the exit pupils PZ_OB and PZ_OA in the object space.
  • Two illumination beams BLS_O1 and BLS_O2 and an imaging beam ABS_O are shown.
  • the first linear guide of the movement system is rigidly connected to the receiver matrix 6 and carries a second, smaller linear guide, not shown here, which in turn carries the line grating 3.
  • the first linear guide is connected to a high-precision length measuring system which has a highly stable zero point.
  • The axis of movement of the first linear guide is parallel to the objective axes, and the measuring axis of the length measuring system also lies parallel to the two lens axes.
  • the direction of movement of the second linear guide is perpendicular to the objective axes.
  • the line grating 3 on the second linear guide is assigned a counter-grating which is firmly connected to the first linear guide and has an illumination and receiver optics in the manner of an incremental length measuring system.
  • The evaluation electronics have an electronic interface to the computer, so that the calculated displacement of the line grating 3 is available in the computer in real time as phase information.
  • a first reference structure is applied to the line grating 3 in the part outside the image field used, which is optically scanned by a second reference structure, which is also applied to the counter grating. Both guides of the movement system start from the zero position.
  • the direction of movement of the first linear guide is aligned parallel to the optical axis of the imaging lens.
  • The movement takes place towards the focal points.
  • a position control system is assigned to the smaller, second linear guide, which carries the line grating 3, in order to be able to realize a movement of the line grating with a speed that is as constant as possible and thus also with a constant phase speed.
  • The target values for the position of the first linear guide are calculated from the current absolute actual phase φ_grating of the line grating 3, which is derived from a zero point.
  • The straight line g_A is defined such that it intersects the focal point F_AB of the illumination lens 1 and also the principal point H_AA of the imaging lens 2.
  • The luminous flat elements move in the array space on the B-paths BS_Aj.
  • These B-paths are imaged into the object space; FIG. 1 shows the B-paths BS_A1 and BS_A2, whose images are BS_O1 and BS_O2.
  • The images BS_O1 and BS_O2 form a bundle of lines SB_1 with the convergence point K_1, which coincides with the focal point F_OA of the imaging lens 2. Furthermore, the elements of the receiver array are moved on A-paths AS_Aj.
  • The paths AS_A1 and AS_A2 are shown. Their images form the bundle SB_2 with the paths AS_O1 and AS_O2 and the convergence point K_2 in the object space, which coincides with the convergence point K_1 in the focal point F_OA of the imaging lens 2; the common point of the convergence points K_1 and K_2 is generally the coincidence point K_0.
  • The planes of the object space perpendicular to the axis are "traversed" one after the other by the focus area: in the presence of an object surface, a stripe pattern projected by the illumination objective 1 can be observed in sharp focus and is imaged onto the receiver matrix 6 by the imaging objective 2.
  • FIG. 2 shows, for example, the signal profiles S_O and S_R in a pixel of the receiver matrix 6 in relation to the signal profile S_G, which can be detected on the line grating 3 with the aid of a counter-grating when the grating 3 is moved.
  • The signal curve S_O in the pixel of an object point and the signal curve S_R in the pixel of a reference point are shown.
  • The reference plate is closer to the focal point F_OB than the object surface.
  • The relative phase φ_R,Ref is calculated, and at the sampling point A_PO, in the area of maximum modulation of the signal in the pixel of an object point, the relative phase φ_R,Obj is calculated. Using equation (3) the absolute phase difference Δφ_grating is calculated, with equation (4) the absolute object phase φ_Obj, from which the z_OB coordinate of each object point, namely z_Obj, is determined with equation (5).
  • the highly stable zero point N serves as the starting point.
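The evaluation chain described above — relative phase from the sampled signal, then a depth coordinate from the absolute phase — can be illustrated with the classic four-step algorithm; the linear depth mapping below merely stands in for equations (3) to (5), whose exact form is not reproduced in this excerpt:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    # relative phase (mod 2*pi) from four samples at 90-degree steps of
    # the moving line grating: I_k = a + b*cos(phi + k*pi/2)
    return math.atan2(i4 - i2, i1 - i3)

def depth_from_phase(phi_obj, phi_ref, z_ref_mm, depth_per_fringe_mm):
    # hypothetical linear mapping of the absolute phase difference to a
    # z coordinate relative to the reference plane (illustrative only)
    return z_ref_mm + (phi_obj - phi_ref) * depth_per_fringe_mm / (2 * math.pi)

# synthesise four samples for a known phase and recover it
a, b, phi = 2.0, 1.0, 0.7
samples = [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
phi_recovered = four_step_phase(*samples)
```
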
  • FIG. 3 shows a 3D recording arrangement with an illumination lens 1 and an imaging lens 2, both lenses in the array space having a beam path with central perspective on both sides and thus a small structural volume.
  • the axes of the two lenses are inclined to each other. The latter enables a particularly high depth sensitivity to be achieved in a comparatively small measurement volume, for example for the acquisition of teeth in orthodontics. Accordingly, the entire arrangement is miniaturized.
  • The light coming from the light source 4 illuminates a line grating 3. This is moved along a line parallel to the straight line g_AP by means of a computer-controlled slide of a linear guide (not shown here) and is projected onto the object surface 5 by the illumination objective 1 with the pupil center PZ_OB.
  • The straight line g_AP intersects the focal point F_AB of the illumination lens 1 in the array space and the main plane H_AB of the illumination lens 1 in the array space at the point H_ABG.
  • The image of the straight line g_AP, the straight line g_OP, lies parallel to the axis of the illumination lens 1 and intersects the main plane in the object space in the extension of the point H_ABG.
  • The illumination lens 1 has the focal length f_B.
  • The object surface 5 is imaged on a receiver matrix 6 by means of an imaging objective 2. The receiver matrix 6 is connected to a computer with a frame grabber, and the recorded images are evaluated by means of the computer.
  • the imaging objective 2 is arranged in relation to the illumination objective 1 in such a way that the pupil center PZ 0A is located on the straight line g 0P .
  • the pupil opening of the illumination objective 1 is made as large as possible, for example the aperture ratio is 1: 2.
  • the depth of field in the object space is thus very limited.
  • the pupil opening of the imaging objective 2 is made as small as possible, for example the aperture ratio is 1: 22.
  • the depth of field in the object space is thus comparatively large.
  • The line grating 3 is moved and images are taken with the receiver matrix 6, for example 32 images.
  • The movement of the grating 3 with the grating constant p is measured with high precision, to about 1% of the grating constant p. Images are recorded, the phase change between two images generally being less than 2π, for example 3π/2.
  • the position of the receiver matrix 6, whereby a CMOS camera can be used due to the large dynamic range and the possibility of evaluating individual pixels, is selected so that the entire object area of interest can be grasped sharply.
  • The receiver matrix 6 can also be rotated according to the Scheimpflug condition so that it contains the two points A_A1 and A_A2.
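For the Scheimpflug rotation just mentioned, the required sensor tilt for a thin lens follows from tan(θ′) = M·tan(θ), with M the transverse magnification; sign conventions and the exact pivot depend on the concrete layout, so this is a sketch only:

```python
import math

def sensor_tilt_deg(object_plane_tilt_deg, magnification):
    # Scheimpflug condition for a thin lens: object plane, lens plane and
    # image plane intersect in a common line; for planes through conjugate
    # axial points this reduces to tan(theta') = M * tan(theta)
    theta = math.radians(object_plane_tilt_deg)
    return math.degrees(math.atan(magnification * math.tan(theta)))
```
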
  • the 3D point cloud of the 3D measurement object is determined from the 32 images recorded.
  • The absolute object phase φ_Obj is determined for each object point. From this the z_Obj coordinate is calculated.
  • The x_Obj coordinates are determined via the imaging scale as a function of the z_Obj coordinate, and the y_Obj coordinates are calculated from the known pixel pitch of the receiver matrix 6.
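The per-pixel conversion to lateral coordinates can be sketched as follows; the pixel pitch, principal point and the depth-dependent magnification model m ≈ f/z are illustrative assumptions, not values from the disclosure:

```python
def lateral_coordinates(col, row, z_mm,
                        pixel_pitch_mm=0.010,
                        center_col=640.0, center_row=512.0,
                        focal_length_mm=50.0):
    # depth-dependent transverse magnification of a simple lens, m ~ f/z;
    # dividing the sensor-side offset by m yields object-side millimetres
    m = focal_length_mm / z_mm
    x = (col - center_col) * pixel_pitch_mm / m
    y = (row - center_row) * pixel_pitch_mm / m
    return x, y
```
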
  • FIG. 4 shows a 3D recording arrangement with an illumination objective 1 with a comparatively large pupil opening, for example with an aperture ratio of 1: 2.8 and a central perspective beam path optimized for the oblique image.
  • the illuminated field is asymmetrical to the axis of the illumination lens 1.
  • Lenses in the middle focal length range are preferably used here, for example with focal lengths around 25 mm.
  • the imaging lens 2, for example with an aperture ratio of 1: 2.8, has a telecentric beam path in the array space.
  • the axes of the two lenses are arranged parallel to each other.
  • the main planes of the illumination lens 1 and the imaging lens 2 coincide at least approximately in the array space.
  • the light emanating from the light source 4 illuminates a reflection grating, for example an electronically controllable digital micromirror device 61.
  • The electronically controllable digital micromirror device 61 and the receiver matrix 6 are rigidly connected to one another at least approximately in the same plane in the array space. Both components are located here on a computer-controlled slide of a linear guide. The direction of displacement of the slide of this linear guide is aligned parallel to the axes of the two objectives, so that both components move synchronously and parallel to the optical axis of the illumination objective 1. Due to the electronic control of the digital micromirror device 61, not only can a luminous flat element 3A be displaced laterally, but the displacement of a whole light curtain can also be realized according to a program specification. Thus, for a single luminous flat element 3A on the digital micromirror device 61, as well as for the other luminous elements, a movement on a path BS_A3 parallel to the straight line g_AP can be realized in precise coordination with the displacement by the computer-controlled slide.
  • The straight line g_AP intersects the focal point F_AB of the illumination lens 1 in the array space and the main plane of the imaging lens 2 in the array space at the principal point H_AA.
  • The image of the straight line g_AP in the object space, the straight line g_OP, lies in the axis of the imaging objective 2.
  • The luminous flat element 3A is shifted as a luminous flat element of constant relative luminance from point B_A1 to point B_A2.
  • The luminous flat element 3A can represent a maximum in an at least approximately cos² light distribution, that is to say have a phase value of 0 when a signal is detected.
  • the luminance distribution on the digital micromirror device 61 is projected onto the object surface 5 by the illumination lens 1.
  • The straight line g_AP and lines parallel to it are imaged by the illumination lens 1, the images of the displacement paths BS_Aj of the luminous flat elements, that is to say here the luminous micromirrors of the digital micromirror device 61, forming a bundle of lines in the object space.
  • A luminous flat element 3A moves on the displacement path BS_A3.
  • The image of this displacement path BS_A3, the path BS_O3, is aimed at the convergence point K_1 in the object space.
  • the object surface 5 is imaged on a receiver matrix 6 by means of an imaging lens 2.
  • The computer-controlled slide of a linear guide also carries the receiver matrix 6, so that the pixels of the receiver matrix 6 each experience a shift on a path AS_A3 parallel to the optical axis of the imaging objective 2.
  • an object point 0 in the object space is tracked by an image of a luminous surface element 3A in the depth of the object space during the displacement.
  • FIG. 5 shows a 3D recording arrangement for objects that are illuminated with structured light. The aim here is to determine the point cloud of the latter particularly quickly and precisely by determining the absolute phase of elements of the object surface.
  • a light source 4 illuminates a concentrically designed grating 81, as a result of which a structured, luminous array is formed, which is located in a sector of a pane 83 on a flat surface which belongs to a transparent plane-parallel plate 84.
  • On the disk 83 there are several transparent segments with transparent plane parallel plates 84 in the entire area.
  • the concentrically formed, illuminated grating 81 is imaged with the illumination objective 1.
  • the disk 83 is driven by a motor 85.
  • the speed is, for example, 24 rpm.
  • The information about the axial position of the disk 83 can be obtained in a flat area on the upper side of the disk 83 in the immediate vicinity of the concentrically formed grating 81.
  • The information about the radial position of the disk 83 obtained in the optical sensor head 87 is fed as a control voltage to a piezo actuator 92 in a control circuit (not shown here) with an electronically controllable voltage source for the piezo actuator 92.
  • The piezo actuator 92 is connected to the receiver matrix 6, so that here it is not the radial position as such but rather the relative position between the concentrically formed grating 81 and the receiver matrix 6 that is kept constant. In this way, a defined position between a luminous flat element and an element of the receiver matrix 6 is maintained.
  • The axial position of the associated structured illuminated area is kept constant by a second piezo actuator 93 in a control loop, which holds the disk 83 in the desired axial position as it rotates. With precision bearings, the compensation of the axial runout can usually be dispensed with.
  • a plane-parallel transparent compensation plate 94 for axially compensating the position of the receiver matrix 6 is arranged in the imaging beam path after the imaging objective 2. In this way it is achieved that at each angular position of the disk 83, that is to say in each segment assigned to the light source and the receiver matrix 6, the optical paths are the same, ie the object distance for the structured illuminated array and the image width for the receiver matrix 6 are the same.
  • A sharply imaged stripe pattern can be observed in one plane of the object space on the object surface, and this can also be imaged sharply on the receiver matrix 6 via the imaging objective 2 and a transparent plane-parallel plate 95 of a certain optical thickness, since the sharp plane of the imaging beam path lies in the same plane owing to the adaptation of the transparent plane-parallel plate 95.
  • transparent segments with different geometrical-optical thicknesses enter both the illumination and the imaging beam path.
  • the geometrical-optical thicknesses are always coordinated so that the sharp planes of the two beam paths coincide in the object space.
  • The position of the focus planes in the object space changes from segment to segment, so that the entire object space is gradually "focused through" in depth.
  • the position of the concentric grating changes in the radial direction from segment to segment.
  • the phase is changed step by step every time a segment is changed, for example in steps of 90 °.
  • The evaluation of a synchronization pulse in the edge area ensures that an image of the receiver matrix is only detected when the plane-parallel plate of defined thickness with the concentrically formed grating 81 is located over its full area in front of the receiver area.
  • a periodic signal curve can thus be obtained in each element of the receiver matrix 6, each signal value of the recorded signal curve being generated in another sector by means of another plane parallel plate.
  • The phase difference from signal value to signal value is known with high precision due to the optically read reference grating.
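Because the step-to-step phase differences are known from the optically read reference grating but need not be uniform, the phase in each receiver element can be recovered by a linear least-squares fit; a sketch of that evaluation with synthetic values:

```python
import math

def phase_from_known_steps(intensities, deltas):
    # least-squares phase retrieval for arbitrary but *known* phase steps.
    # Model: I_k = a + b*cos(phi + delta_k) = c0 + c1*cos(delta_k) + c2*sin(delta_k)
    basis = [(1.0, math.cos(d), math.sin(d)) for d in deltas]
    # normal equations A c = r for the three unknowns c0, c1, c2
    A = [[sum(bk[i] * bk[j] for bk in basis) for j in range(3)] for i in range(3)]
    r = [sum(bk[i] * y for bk, y in zip(basis, intensities)) for i in range(3)]
    # solve the 3x3 system by Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for k in range(col + 1, 3):
            fac = A[k][col] / A[col][col]
            A[k] = [akj - fac * acj for akj, acj in zip(A[k], A[col])]
            r[k] -= fac * r[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (r[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    # c1 = b*cos(phi) and c2 = -b*sin(phi), hence:
    return math.atan2(-c[2], c[1])

# four samples with non-uniform but known steps, true phase 1.1 rad
deltas = [0.0, 1.3, 2.9, 4.4]
measured = [3.0 + 0.8 * math.cos(1.1 + d) for d in deltas]
phi_est = phase_from_known_steps(measured, deltas)
```
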
  • FIG. 6 shows a 3D recording arrangement which is particularly suitable for mobile use, for example for the computer-aided 3D orientation of robots in the vicinity for gripping tasks. Furthermore, 3D hand-held devices for detecting objects in the close range up to 1 m can be based on this arrangement.
  • This 3D recording arrangement consists of an illumination lens 1 and an imaging lens 2, both lenses having a central-perspective beam path in the array space, and a transparent profile grating 53, which can be viewed as a structured 3D array and is illuminated by a light source 4. The latter can be a flash light source with a flash frequency at video rate.
  • The transparent profile grating 53 has a plurality of ramps 54 and 56, as shown in FIG. 7.
  • The ramp surface 57 has several steps. There is a different binary code pattern on each step. The designations A_1, A_2, A_3 and A_4 are assigned to the individual patterns.
  • The compensating line through the ramp area, the straight line AG_A1, points to the pupil center PZ of the imaging lens 2.
  • The ramp area 55 on the transparent profile grating 53 on the ramp 54, whose main section can be represented by a compensating line AG_A3, is imaged into the object space by the illumination lens 1.
  • the illumination objective 1 should have a large relative aperture, for example 1: 1.2.
  • For illustration, flat areas of the object surface are shown simultaneously in possible positions in the images A_1O, A_2O, A_3O and A_4O.
  • A luminous flat element 3A is also imaged and produces the image B_FEL.
  • The sharp volume SV_FEL of the image B_FEL is shown. It is not very deep due to the large relative aperture.
  • The imaging lens, on the other hand, is heavily stopped down. In this way, the elements of the receiver array are imaged with a much larger depth range.
  • The image of the entire ramp area thus lies in the sharp volume SV_EE of the associated element of the receiver matrix 6.
  • ramp surfaces 54 can be arranged on the transparent profile grid 53, each with a different mean object width, so that elements of an object surface 5 can be detected at different depths in the object space and the depth detection area is particularly large.
  • FIG. 9 shows an application for the extraction of digital point clouds, which are used for 3D television. This can be the case, for example, at sporting events where there is good lighting.
  • the arrangement according to FIG. 9 can also be used for 3D recordings in the amateur video field and in photography.
  • a sequence of images is recorded with the two receiver matrices 6 and 14 in order to generate a single 3D image.
  • the displacements of the receiver matrix 6 and the receiver matrix 14 are carried out in such a way that the focus area changes from image to image until the object space is completely covered in depth.
  • the lighting is done using natural light or artificial light, which usually has no spatial structure.
  • a first imaging lens 33 and a second imaging lens 2 are assigned to each of the two receiver matrices 6 and 14.
  • The first imaging lens 33 and the second imaging lens 2 are telecentric in the array space, as a result of which the focal point F_O1 and the pupil center PZ_O1 of the imaging lens 33 coincide.
  • The focal point F_O2 and the pupil center PZ_O2 also coincide due to the telecentricity.
  • The images of the straight lines parallel to the straight line g_A1P, and thus also the displacement paths of the elements of the receiver matrix 6 on these straight lines, are imaged by the imaging lens 33 into the object space, where they form a bundle of straight lines with the convergence point K_21; this bundle also contains the straight line g_O1P.
  • The images of the straight lines parallel to the straight line g_A2P, and thus also the displacement paths of the elements of the receiver matrix 14 on these straight lines, are imaged by the imaging objective 2, as a result of which a bundle of straight lines with the convergence point K_22 arises in the object space.
  • This bundle of straight lines contains the straight line g_O2P, which coincides with the straight line g_O1P.
  • The displacements of the receiver matrix 6 and the receiver matrix 14 are carried out in such a way that the bundle with the convergence point K_21 and the bundle with the convergence point K_22 coincide at the coincidence point K_0.
  • The two receiver matrices 6 and 14 are each connected via an interface to at least one powerful computer, which in turn is integrated in a high-performance communication network.
  • the receiver matrices 6 and 14 are connected to a precise linear guide 20, which in turn is connected to a computer-controlled, highly dynamic linear motor 21.
  • The two receiver arrays 6 and 14 are moved in the z_A direction towards the two focal points F_A1 and F_A2, whereby the position of the common plane of sharp focus in the object space changes.
  • Objects 5, 18 and 19 are adequately illuminated by a light source 15.
  • The object 5 is detected at the point P_31, which is located on the straight line g_O312P, the location of the coinciding straight lines g_O31P and g_O32P, by the common plane of focus.
  • The movement of the two receiver matrices 6 and 14 takes place at a stroke frequency of, for example, 24 Hz.
  • The evaluation of the images taken by the two receiver matrices 6 and 14 takes place in such a way that signals are formed from elements which are located on straight lines parallel to the straight lines g_A1P and g_A2P, for example on the straight lines g_Aj1P, g_Ak1P, g_Aj2P or g_Ak2P, and which at the same time lie at least approximately at the same location, that is to say represent corresponding elements.
  • The z_O coordinates are calculated from the detected signals S_1j and S_2j of a pair of corresponding elements using correlation methods, and from the z_O coordinates the entire point cloud of the object surface or the scene is derived with knowledge of the geometry of the arrangement and the imaging scales.
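One possible form of such a correlation evaluation for a single pair of corresponding elements can be sketched as follows; the window size, scan positions and the synthetic "shared texture burst" signal model are illustrative assumptions:

```python
def correlation_profile(s1, s2, window=5):
    # windowed zero-mean normalised cross-correlation of the signals of
    # a pair of corresponding receiver elements over the focus scan
    half = window // 2
    profile = []
    for i in range(half, len(s1) - half):
        a = s1[i - half:i + half + 1]
        b = s2[i - half:i + half + 1]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
        profile.append(num / den if den else 0.0)
    return profile

def z_from_scan(s1, s2, z_positions, window=5):
    # the z coordinate of the object point is the scan position at which
    # both elements see the surface texture in best agreement
    profile = correlation_profile(s1, s2, window)
    best = max(range(len(profile)), key=profile.__getitem__)
    return z_positions[best + window // 2]

# synthetic scan: anti-correlated background plus a shared texture burst
# around scan index 10 (i.e. around z = 120 mm)
s1 = [0.1 * i for i in range(21)]
s2 = [-0.1 * i for i in range(21)]
for i, v in zip(range(8, 13), (1.0, 3.0, 5.0, 3.0, 1.0)):
    s1[i] += v
    s2[i] += v
z_positions = [100.0 + 2.0 * i for i in range(21)]
z_estimate = z_from_scan(s1, s2, z_positions)
```
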
  • point clouds are provided in the video cycle.
  • the information about the color of the respective object point results, for example, from the receiver element which is closest to the correlation maximum.
  • the surface models of the objects and the scene are calculated from the calculated point clouds. This requires a very high computing speed, which can, however, be achieved on the basis of special processors. It is possible to assign a separate processor to each element of the receiver matrices 6 and 14. This is very advantageous for the execution of the cross-correlation already described. Data reduction techniques can be used to transmit the calculated data for 3D playback.
  • FIG. 10 shows a 3D recording arrangement that can be used, for example, as a multi-level obstacle sensor in daylight.
  • Two imaging beam paths with two imaging lenses 33 and 2 are arranged, the imaging lenses 33 and 2 each being designed to be telecentric in the array space. This is not absolutely necessary, but it has constructive advantages.
  • Each imaging objective 33 and 2 is assigned a spatially structured receiver array.
  • The receiver surfaces 107 and 108 as well as 109 and 110 are arranged perpendicular to the main section.
  • These receiver surfaces 107 and 108 as well as 109 and 110 are parallel to a straight line g_A1P or parallel to a straight line g_A2P.
  • These straight lines g_A1P and g_A2P each intersect the focal point F_AA1 or F_AA2 of the associated lens and the point P_A in the common main plane of the two imaging lenses 33 and 2.
  • The images of the receiver surfaces 107 and 108 as well as 109 and 110 are mapped into the object space according to the Scheimpflug condition, the coincidence of the two images of the receiver surfaces 107 and 109 as well as 108 and 110 of the spatially structured receiver arrays 106 and 114 being given.
  • elements are imaged on the four receiver surfaces 107 and 109 as well as 108 and 110.
  • The points O_1 and O_2 on the object surface are imaged.
  • The correlation method already described in detail can be used for the determination of the z_O position of the points O_1 and O_2, using the gray value or color distributions from the surroundings when color cameras are used as receiver matrices. This results in four signal curves S_1 and S_2 as well as S_3 and S_4, each in the line of the associated receiver matrix lying in the main section; the line-by-line readout is started, for example, at the A_A or B_A position.
  • The associated, fixed elements of two receiver surfaces 107 and 109 or 108 and 110 each form a corresponding pair.

Abstract

Device for the three-dimensional scanning of objects using at least two imaging systems with imaging optics facing the object. At least one of the imaging systems is designed as an observation system for observing the object, and at least one of the imaging systems has an elementary means that is movable in front of the imaging optics and whose elementary image moves along a line of image points in the object space. According to the invention, the elementary means is designed as a movable elementary means with a component lateral to the optical axis of the imaging optics, and the observation system is arranged for observation along the line of image points.
PCT/DE2000/000991 1999-04-29 2000-04-01 Procede et dispositif de balayage d'objets WO2000066972A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU43913/00A AU4391300A (en) 1999-04-29 2000-04-01 Method and device for scanning objects
DE10081176T DE10081176D2 (de) 1999-04-29 2000-04-01 Verfahren und Vorrichtung zur Objektabtastung
EP00925063A EP1188035A1 (fr) 1999-04-29 2000-04-01 Procede et dispositif de balayage d'objets

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE1999119584 DE19919584A1 (de) 1999-04-29 1999-04-29 Verfahren und Anordnung zur 3D-Aufnahme
DE19919584.6 1999-04-29

Publications (1)

Publication Number Publication Date
WO2000066972A1 true WO2000066972A1 (fr) 2000-11-09

Family

ID=7906332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE2000/000991 WO2000066972A1 (fr) 1999-04-29 2000-04-01 Procede et dispositif de balayage d'objets

Country Status (4)

Country Link
EP (1) EP1188035A1 (fr)
AU (1) AU4391300A (fr)
DE (2) DE19919584A1 (fr)
WO (1) WO2000066972A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018206233A1 (fr) 2017-05-08 2018-11-15 Universität Stuttgart Procédé et système de triangulation par bandes robuste servant au balayage en profondeur/à la focalisation, avec plusieurs ondelettes
DE102017004429B4 (de) 2017-05-08 2019-05-09 Universität Stuttgart Verfahren und Anordnung zur robusten, tiefenscannenden/fokussierenden Streifen-Triangulation
US10728519B2 (en) 2004-06-17 2020-07-28 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
CN112219104A (zh) * 2018-05-28 2021-01-12 维也纳自然资源与生命科学大学 用于确定介质中的三维颗粒分布的方法
US10952827B2 (en) 2014-08-15 2021-03-23 Align Technology, Inc. Calibration of an intraoral scanner
CN114693936A (zh) * 2022-04-18 2022-07-01 华中科技大学 一种基于白光干涉测量的微沟槽特征分割方法及系统
CN118447460A (zh) * 2024-07-08 2024-08-06 湖州丰源农业装备制造有限公司 一种基于人工智能的自动驾驶收割机监控管理系统及方法

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003273159A1 (en) * 2002-05-23 2003-12-12 Cobra Electronic Gmbh Miniaturised camera
US7830528B2 (en) * 2005-12-14 2010-11-09 Koh Young Technology, Inc. 3D image measuring apparatus and method thereof
KR100612932B1 (ko) * 2005-12-14 2006-08-14 주식회사 고영테크놀러지 3차원 형상 측정장치 및 방법
DE102008016766B4 (de) 2008-04-02 2016-07-21 Sick Ag Sicherheitskamera und Verfahren zur Detektion von Objekten
JP5441840B2 (ja) * 2009-07-03 2014-03-12 コー・ヤング・テクノロジー・インコーポレーテッド 3次元形状測定装置
DE102010060448B4 (de) * 2010-11-09 2021-07-01 Eberhard Lange Projektionsvorrichtung zum Projizieren eines zu projizierenden Objekts
DE102014119126B3 (de) * 2014-12-19 2015-08-06 Sick Ag Streifenprojektor zum Beleuchten einer Szenerie mit einem veränderlichen Streifenmuster
US10352995B1 (en) 2018-02-28 2019-07-16 Nxp Usa, Inc. System and method of multiplexing laser triggers and optically selecting multiplexed laser pulses for laser assisted device alteration testing of semiconductor device
US10782343B2 (en) 2018-04-17 2020-09-22 Nxp Usa, Inc. Digital tests with radiation induced upsets
CN115540759B (zh) * 2022-11-16 2023-05-09 江西滕创洪科技有限公司 一种基于图像识别技术修饰金属的检测方法及检测系统

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4689480A (en) * 1985-04-25 1987-08-25 Robotic Vision Systems, Inc. Arrangement for improved scanned 3-D measurement
US5003166A (en) * 1989-11-07 1991-03-26 Massachusetts Institute Of Technology Multidimensional range mapping with pattern projection and cross correlation

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10764557B2 (en) 2004-06-17 2020-09-01 Align Technology, Inc. Method and apparatus for imaging a three-dimensional structure
US10944953B2 (en) 2004-06-17 2021-03-09 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10812773B2 (en) 2004-06-17 2020-10-20 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10728519B2 (en) 2004-06-17 2020-07-28 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10750151B2 (en) 2004-06-17 2020-08-18 Align Technology, Inc. Method and apparatus for colour imaging a three-dimensional structure
US10750152B2 (en) 2004-06-17 2020-08-18 Align Technology, Inc. Method and apparatus for structure imaging a three-dimensional structure
US10924720B2 (en) 2004-06-17 2021-02-16 Align Technology, Inc. Systems and methods for determining surface topology and associated color of an intraoral structure
US10952827B2 (en) 2014-08-15 2021-03-23 Align Technology, Inc. Calibration of an intraoral scanner
WO2018206233A1 (fr) 2017-05-08 2018-11-15 Universität Stuttgart Method and system for robust, depth-scanning/focusing fringe triangulation with multiple wavelets
DE102017004428B4 (de) 2017-05-08 2018-11-29 Universität Stuttgart Method and arrangement for robust, depth-scanning, focusing fringe triangulation with multiple wavelets
DE102017004429B4 (de) 2017-05-08 2019-05-09 Universität Stuttgart Method and arrangement for robust, depth-scanning/focusing fringe triangulation
CN112219104A (zh) * 2018-05-28 2021-01-12 维也纳自然资源与生命科学大学 Method for determining a three-dimensional particle distribution in a medium
CN112219104B (zh) * 2018-05-28 2024-03-26 维也纳自然资源与生命科学大学 Method for determining a three-dimensional particle distribution in a medium
CN114693936A (zh) * 2022-04-18 2022-07-01 华中科技大学 Micro-groove feature segmentation method and system based on white-light interferometry
CN118447460A (zh) * 2024-07-08 2024-08-06 湖州丰源农业装备制造有限公司 Artificial-intelligence-based monitoring and management system and method for an autonomous combine harvester

Also Published As

Publication number Publication date
EP1188035A1 (fr) 2002-03-20
DE19919584A1 (de) 2000-11-02
AU4391300A (en) 2000-11-17
DE10081176D2 (de) 2002-03-28

Similar Documents

Publication Publication Date Title
EP1728115B1 (fr) High-speed measuring device and method based on the principle of confocal microscopy
WO2000066972A1 (fr) Method and device for scanning objects
EP1984770B1 (fr) Method and device for a fast, robust chromatic confocal 3D measuring technique
EP2309948B1 (fr) Dental 3D camera for triangulation-based detection of surface structures of an object to be measured
AT506110B1 (de) Device and method for acquiring body measurement data and contour data
DE3642051A1 (de) Method for three-dimensional information processing and device for obtaining three-dimensional information about an object
DE102013212409A1 (de) Method for image acquisition of a preferably structured surface of an object, and device for image acquisition
DE102013209770B4 (de) Method for determining adjustable parameters of a plurality of coordinate measuring machines, and method and device for generating at least one virtual image of a measurement object
DE102008002725B4 (de) Method and device for 3D reconstruction
DE102017116758B4 (de) Method and device for scanning surfaces with a stereo camera
DE102019201272B4 (de) Device, measuring system and method for detecting an at least partially specular surface using two reflection patterns
WO2007134567A1 (fr) Method for generating image information
EP0671679A1 (fr) Method and device for contactless measurement of three-dimensional objects based on optical triangulation
DE10321888A1 (de) Measuring method and sensor, in particular for the optical scanning of moving objects
DE19846145A1 (de) Method and arrangement for 3D imaging
DE102014016087B4 (de) Three-dimensional optical detection of object surfaces
DE102012001307A1 (de) Method and device for 3D measurement of objects, in particular under obstructive lighting conditions
EP3899423B1 (fr) Device, measuring system and method for detecting an at least partially reflective surface using two reflection patterns
WO2021052992A1 (fr) Capture device for generating a high-resolution image of an object moving through a capture area, and method
DE102011000088A1 (de) Method for determining a travel path when measuring structures of an object
DE10056073A1 (de) Optical method and sensor for obtaining a 3D point cloud
DE19504126A1 (de) Device and method for the contactless measurement of three-dimensional objects based on optical triangulation
WO2014114663A1 (fr) Optical device and method for determining the spatial coordinates of surfaces of macroscopic objects by triangulating two line-scan cameras
DE102006005874A1 (de) Device and method for contactless measurement
DE102017220720B4 (de) Method and device for the contactless measurement of three-dimensional surface contours

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2000925063

Country of ref document: EP

WD Withdrawal of designations after international publication

Free format text: DE

WWP Wipo information: published in national office

Ref document number: 2000925063

Country of ref document: EP

REF Corresponds to

Ref document number: 10081176

Country of ref document: DE

Date of ref document: 20020328

WWE Wipo information: entry into national phase

Ref document number: 10081176

Country of ref document: DE

WWE Wipo information: entry into national phase

Ref document number: 10030772

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: JP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 2000925063

Country of ref document: EP