WO2001033164A1 - Measurement of objects - Google Patents

Measurement of objects

Info

Publication number
WO2001033164A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
fringe pattern
measuring
height
image field
Prior art date
Application number
PCT/GB2000/004226
Other languages
French (fr)
Inventor
David Robert Burton
Michael Joseph Lalor
Christopher John Moore
Original Assignee
David Robert Burton
Michael Joseph Lalor
Christopher John Moore
Priority date
Filing date
Publication date
Application filed by David Robert Burton, Michael Joseph Lalor, Christopher John Moore
Priority to AU11594/01A priority Critical patent/AU1159401A/en
Priority to GB0210548A priority patent/GB2372318A/en
Publication of WO2001033164A1 publication Critical patent/WO2001033164A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/2441 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using interferometry

Abstract

There is disclosed a method for measuring an object comprising: 1. projecting an interference fringe pattern onto the object using sources of e.m. radiation; 2. measuring the phase shift and image intensity of the fringe pattern across an image field; 3. measuring the intensity of background illumination across the image field; 4. modifying the measurements of the fringe pattern by subtracting an amount corresponding to the measured background illumination across the image field; and 5. generating a 3-D height map of the object using the modified measurements.

Description

DESCRIPTION
MEASUREMENT OF OBJECTS
The present invention relates to the measurement of objects. In particular, but by no means exclusively, the invention is suited to the measurement of the shape and position of a patient who is to undergo radiotherapy treatment.
When planning for radiotherapy treatment, it is important to locate the shape and position of the tumour to be treated as accurately as possible, both to maximise the
treatment on the tumour site and to minimise damage to healthy surrounding tissue. It is
possible, with the use of computer tomography (CT) scans to determine the shape and
location of the tumour very accurately within the body. However, the physical positioning
of the patient with respect to the radiotherapy source can be very crude. Generally
speaking, during the pre-treatment CT scans reference marks are made on the patient's body
surface and during the set up immediately before treatment, the treatment table upon which the patient lies is adjusted so that the position of the reference marks corresponds as closely
as possible to the position of the marks during the CT scans. Clearly, such a manual patient
positioning system is not particularly accurate. Moreover, it is difficult for such a manual
system to take into account the physiological changes which may have taken place in the patient in the time between the preparation of the CT scans and the treatment.
In view of the inherent inaccuracy of existing patient positioning techniques, it is
difficult to maximise the benefit of the knowledge gained during the CT scans in the
radiotherapy treatment.
It is thus an object of the present invention to provide a method and apparatus for determining and measuring the position of an object with increased accuracy and reliability.
There are several optical techniques which exist for the determination of 3-D surface profile or shape. In particular, it is known to project a fringe pattern onto an object
and then to detect the fringe pattern using a camera and apply Fourier fringe analysis (FFA) to the detected image. However, practical applications of FFA are extremely rare since they
tend to suffer from a lack of robustness and fail to deliver a reliable measurement under
hostile conditions. Other techniques such as temporal phase stepping also suffer from
robustness problems.
However, FFA potentially has a number of advantages: it requires only a single
image, so that with high-speed shuttered cameras even dynamic events can be measured;
a high degree of noise reduction can be achieved during the processing of the signals; and
the optical set-up is relatively simple as compared with temporal phase-stepping systems.
However, existing FFA systems suffer from a number of problems, including:
maintaining the performance characteristics of the optical system, e.g. light transmission, focus, alignment etc.;
problems with objects which contain discontinuities, boundaries and/or holes;
objects which are large and therefore require non-collimated fringe projection systems, since the optical properties of the divergent fringe pattern do not remain constant across the field of view;
noise in the fringe pattern caused by speckle, variations in lighting/illumination, variations in surface colour and/or texture etc.;
the on-line design of frequency plane filters so as to provide flexible noise rejection whilst maintaining good signal fidelity; and
ensuring that under as many circumstances as possible a measurement is delivered from the system. Where this is not possible the device should "know" that it has failed to produce correct data and there should be no possibility of the instrument producing invalid data.
Moreover, existing FFA arrangements require relatively complicated calibration. Existing techniques have concentrated upon the technical problems of the algorithm used
in the Fourier analysis, to obtain an unwrapped phase distribution which bears a close
resemblance to the surface being measured. However, the issue of determining the physical parameters of the optical system, which must be known in order accurately to calculate 3-D
height from the phase, is usually omitted. Yet without this important step the device is of little practical value. It is relatively straightforward to develop a mathematical model
between fringe phase and height, even for quite complex geometries. However, these
mathematical models cannot solve issues such as: distortions in the projected fringes caused by slight misalignment in the optics;
optical aberrations in the viewing system; non-collimation of the fringes; and
the difficulty of measuring the physical optical parameters of the instrument.
Errors in any one of these factors can seriously undermine the final accuracy of the
system.
In accordance with a first aspect of the present invention, a method for measuring an object comprises:
projecting an interference fringe pattern onto the object;
measuring the phase shift and image intensity of the fringe pattern across the image field;
measuring the intensity of the background illumination across the image field; modifying the measurements of the fringe pattern by subtracting an amount
corresponding to the measured background illumination across the image field; and
generating a 3-D height map of the object using the modified measurements.
By subtracting the readings relating to the background illumination of the object
from the measured illumination of the fringe pattern, it is possible to compensate for surface marks such as tattoos, scars, moles and the like and additionally to carry out the
technique under ambient light conditions.
Preferably, the background illumination across the image field is measured by destroying the interference fringe pattern but retaining the same illumination source. This may be achieved by vibrating one of the two sources of e.m. radiation which would
otherwise produce the interference fringe pattern.
Preferably, the 3-D height map is generated by use of a Fourier fringe analysis. The
background image may be used via a thresholding operation to produce a binary mask to
aid in dealing with objects which have boundaries and/or holes in the image.
Preferably, the method also involves a calibration phase in order to convert an
absolute, unwrapped phase distribution to actual 3-D height, relative to a reference phase.
Preferably the calibration phase comprises (1) placing a planar object (e.g. a black disc) of known dimensions on the reference (measurement) plane, capturing, binary
thresholding and measuring the image of the object in pixel units to calculate the X and Y
spatial magnification factors, to yield the practical value of the aspect ratio of the system; (2) measuring and storing the 3-D shape of the reference plane, to compensate for non-
uniform carrier frequency across the image field; (3) placing an object (e.g. a calibration
wedge) having a plurality of identified points at known locations on the object and detecting those points to allow a calculation of a direct phase-to-height multiplier value to be made.
By way of example, a specific embodiment of the present invention will now be
described, with reference to the accompanying drawings, in which:-
Fig. 1 is a flow diagram illustrating the general technique of Fourier fringe analysis
measurement;
Fig. 2 is an illustration of the images obtained from the Fourier fringe analysis of
Fig. 1;
Fig. 3 is a diagrammatic representation of an embodiment of measurement system
in accordance with the present invention;
Fig. 4 is a schematic cross-section through an interferometer which forms part of
the system shown in Fig. 3;
Fig. 5 is a flow diagram illustrating the calibration technique of the measurement
system of Fig. 3;
Fig. 6 is a perspective view of a calibration wedge used in the calibration procedure
of Fig. 5;
Fig. 7 is a flow diagram illustrating the measuring process of the present invention; Figs 8(a) to 8(d) are illustrations of the images obtained during the measuring process of Fig. 7;
Figs. 9(a) to 9(d) are plots of intensity against pixel position for one line of each of the images shown in Figs. 8(a) to 8(d) showing the various pre-processing stages; and
Figs. 10(a) to 10(d) illustrate the processing of the signals obtained during measurement, using Fourier fringe analysis.
Referring firstly to Fig. 1, the basic Fourier fringe analysis (FFA) measurement technique is illustrated. The source image, which consists of a fringe pattern projected on
to the surface of the object, is captured at step S10 ("step" will hereafter be abbreviated to
"S") by a CCD camera and placed into the system memory using a conventional
"framegrabber". A typical source fringe pattern of the dorsal surface of a RANDO phantom is shown in Fig. 2a. A background reference image is shown in Fig. 2b. In the basic
method, this source image is Fourier transformed at S12 using a 2-D FFT, giving a
frequency space image at S14 as shown in Fig. 2c. One of the information peaks is isolated
at S16 by the filtering procedure as shown in Fig. 2d. An inverse Fourier transform is then
applied at S18, producing two images, a real image and an imaginary image at S20. A
wrapped phase map may be calculated at S22 and S24 from the real and imaginary values as shown in Fig. 2e. This phase map is directly proportional to 3-D surface height.
However, it is a discontinuous surface with 2π steps or "wraps". The discontinuities
must be joined up or "unwrapped" in order to produce a continuous surface, as shown in
Fig. 2f. The final step is to scale the surface correctly by converting phase to 3-D height, usually accomplished by calculating a mathematical relationship between the two.
Referring now to Fig. 3, an object 30 (in this case a patient) to be measured for treatment by a radiotherapy treatment machine 32 is placed on a supporting table 34 and is
illuminated by a dual-fibre interferometer 36 which projects a fringe pattern onto the
portion of the patient to be measured. The fringe pattern is detected by means of a CCD camera 38. The camera 38 is provided with a zoom lens 39 and incorporates a laser line
interference filter (not shown) in front of the camera lens, which is transparent only to
wavelengths close to the wavelength of the laser light used.
The interferometer is illustrated in more detail in Fig. 4, and comprises an elongate
housing 40 into which light at 532 nm is supplied from an Nd:YAG laser 42 via a polariser 44, a launch lens 46 and a polarisation maintaining optical fibre 48. The optical fibre 48
is fed into a 50:50 polarisation maintaining coupler 50 which splits the output into two pure
fused silica optical fibres 48a, 48b. The two outgoing fibres are cut to exactly the same length (within 0.2mm), stripped at their ends and placed next to each other, as shown in Fig.
4. By using pure fused silica for the optical fibres it is possible to construct a fringe projection system which contains no projection lens and which is radiation resistant.
A piezoelectric actuator 52 (model 07 PAS 001 from Melles Griot) is located adjacent to one of the split fibres and by applying a voltage to the piezoelectric actuator via
an input wire 54, the fibre can be vibrated, thereby phase shifting or removing the fringe pattern completely, in order to obtain a background intensity without fringes. The
separation of the fibre ends, and thereby the fringe spacing, can be adjusted manually with a fringe-spacing adjusting screw 56 and by means of a fringe-curvature adjusting screw 58
the relative position of the fibre ends along the fibre direction can be adjusted, thereby adjusting the curvature of the fringes.
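As a point of reference only (this is the familiar two-source, Young-type relation rather than anything stated in the patent), the fringe spacing produced by two closely spaced coherent point sources such as the fibre ends scales as

$$ p \approx \frac{\lambda L}{d}, \qquad L \gg d $$

where p is the approximate fringe period on a surface a distance L from the fibre ends, d is the fibre-end separation and λ the laser wavelength; all three symbols are introduced here purely for illustration. This is why reducing the fibre-end separation with the adjusting screw coarsens the projected fringes and increasing it makes them finer.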
The interferometer produces a fringe pattern which is projected towards the object
to be detected. However, before any detection takes place the system is calibrated.
In known FFA systems, the mathematical relationship between phase in the fringe pattern and 3-D height is deduced by considering the geometry of the optical system. The equation relating 3-D height to phase normally contains three principal parameters, which
must be physically measured from the optical system. These are the fringe spacings, the
divergence angle of the projection beam and the angle between the viewing and illumination optics. However, it is not easy to measure these parameters with great
accuracy and it has been found that even small errors in these parameters produce large
errors in the final calculated height value.
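For orientation, a commonly quoted small-angle approximation for collimated fringe projection (a textbook relation, not the patent's own model) shows how such parameters enter the height calculation; here p denotes the projected fringe pitch on the reference plane and θ the angle between the projection and viewing axes, both introduced only for illustration:

$$ h(x,y) \approx \frac{p\,\Delta\phi(x,y)}{2\pi\tan\theta} $$

A small relative error in p or in tan θ therefore translates directly into a comparable relative error in the recovered height, which is why physically measuring these quantities to high accuracy is so demanding.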
In the present invention, the situation is further complicated by the fact that the
twin-fibre interferometer produces a non-collimated, diverging fringe field. Moreover, it
is usual to ensure that there is a uniform carrier fringe frequency across the entire field of
view of the image but with the optical system of the present invention this is not the case. It is possible to model this kind of optical geometry but the mathematical model becomes
extremely complex.
Calibration of the system of the present invention is a three-step procedure as
shown schematically in Fig. 5. Firstly, at S80 a reference plane (the table) is placed at the
approximate measurement height and at S82 a black disc of exactly known diameter is placed upon its surface. At S84 an image of this disc is captured, binary thresholded and
measured in pixel units in order to calculate at S86 the X and Y spatial magnification factors. This information also yields the practical value of the aspect ratio of the image capture system.
Secondly, the disc is removed and at S88 the 3-D shape of the flat plane is measured and stored. This compensates for the non-uniform carrier frequency across the image field.
By subtracting this reconstructed surface from all subsequently measured objects, the effects of misalignment of the projection system and distortions of the imaging system are
very much reduced. In effect, all measurements become relative to this reference plane.
Thirdly, at S90 a wedge-shaped object 60 (Fig. 6) is placed on the reference plane and measured. The surface of the wedge is imprinted with a grid of crosses whose
individual heights from the plane on which the wedge is standing are very accurately
known. For the wedge shown in Fig. 6 the crosses have a lateral spacing of 60mm and the rows are separated by a vertical height of exactly 30mm. During calibration, at S92 the
positions of the crosses in the image array are extracted from the background reference
image. The true heights of this array of points are accurately known and the absolute phase
values of the corresponding pixels in the image are measured. Therefore, it is possible to
least-squares fit a line of height versus phase and so at S94 calculate a direct phase to
height multiplier value.
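A minimal sketch of how the first and third calibration steps might be coded is given below, assuming the captured images are held as NumPy arrays; the function names, the threshold convention (the disc being darker than the table) and the use of a linear fit with an offset term are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def disc_scale_factors(disc_image, disc_diameter_mm, threshold):
    """Step one: estimate the X and Y spatial magnification factors (mm per
    pixel) and the aspect ratio from a dark disc of exactly known diameter."""
    mask = disc_image < threshold            # assumed: disc is darker than the table
    width_px = np.any(mask, axis=0).sum()    # horizontal extent of the disc in pixels
    height_px = np.any(mask, axis=1).sum()   # vertical extent of the disc in pixels
    scale_x = disc_diameter_mm / width_px
    scale_y = disc_diameter_mm / height_px
    return scale_x, scale_y, scale_y / scale_x   # aspect ratio of the capture system

def phase_to_height_multiplier(cross_phases, cross_heights_mm):
    """Step three: least-squares fit of the known cross heights against the
    measured absolute phase values at the crosses, giving a direct
    phase-to-height multiplier (the offset absorbs the reference-plane level)."""
    A = np.vstack([cross_phases, np.ones_like(cross_phases)]).T
    multiplier, offset = np.linalg.lstsq(A, cross_heights_mm, rcond=None)[0]
    return multiplier, offset
```

Step two, the stored 3-D shape of the reference plane, is simply the output of the measurement pipeline applied to the empty table, kept for later subtraction.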
Once the system has been calibrated, it can then be used to measure objects. The
measuring process is illustrated schematically in Fig. 7. Firstly, at S100 the fringe pattern
is projected onto the patient and at S102 is detected by the CCD camera 38. A fringe-contoured image of a RANDO phantom, used in radiotherapy calibration/testing, is shown
in Fig. 8a. At S104, the piezoelectric actuator 52 is then operated, which causes one of the optical fibres 48b to vibrate. The vibration of the optical fibre destroys the fringe pattern
but results in a background reference image, as illustrated in Fig. 8b, which is detected by
the camera 38 at S106.
At S108, the processing system then removes the reference data obtained at S106 (Fig. 8b) from the unprocessed fringe image obtained at S102 (Fig. 8a), resulting in the
fringe data as shown in Fig. 8c. By removing the background reference image data, the
system removes the effect of the Gaussian intensity spread across the image and reduces the effect of surface colouration and marks on the skin. Moreover, the background image is
used via a thresholding operation to produce a binary mask to aid in dealing with the
boundaries and/or holes in the image.
At S110, the DC intensity level is removed from the fringe pattern by subtracting the average intensity across the pre-processed image from the entire fringe pattern. A
Hamming window is then applied to the fringe data obtained at S108 (Fig. 8c), producing
the data shown in Fig. 8d.
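A compact sketch of this pre-processing chain (steps S108 and S110) is given below; the function name and the use of a separable 2-D Hamming window are illustrative assumptions, since the patent does not prescribe a particular implementation.

```python
import numpy as np

def preprocess_fringe_image(fringe_img, background_img):
    """Subtract the fringe-free background (Fig. 8a minus Fig. 8b, giving
    Fig. 8c), remove the residual DC intensity level, then apply a 2-D
    Hamming window to limit spectral leakage (giving Fig. 8d)."""
    fringes = fringe_img.astype(float) - background_img.astype(float)
    fringes -= fringes.mean()                        # remove the DC intensity level
    window = np.outer(np.hamming(fringes.shape[0]),
                      np.hamming(fringes.shape[1]))  # separable 2-D Hamming window
    return fringes * window
```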
The pre-processing of the signal as shown in Figs. 8a to 8d is illustrated in more
detail in Figs. 9a to 9d, which correspond to Figs. 8a to 8d respectively, and show in each
case a plot of intensity (ordinate) against pixel position (abscissa).
The data is then subjected to FFA treatment. Fig. 10a shows the frequency space
after the application of a 2-D FFT to the pre-processed example of Figs. 8a to 8d. It is apparent that the large DC peak usually encountered has been removed by the pre-processing stages. This improves the signal-to-noise ratio during the filtering process and simplifies location of the information peak, reducing the problem to a simple maxima
search. It can also be seen that the overall noise level of the spectrum is low and that there is little evidence of leakage.
Following filtering, frequency shifting in Fourier space is applied and Fig. 10b shows the filtered information peak after frequency shifting. This either completely
removes, or dramatically reduces, the number of wraps in the subsequent wrapped phase distribution. The wrapped phase distribution for the RANDO image is shown in Fig. 10c,
whilst the final, unwrapped image is shown in Fig. 10d.
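The Fourier processing chain just described can be summarised in a short sketch; the box-shaped frequency-plane filter, the half-plane maximum search and the use of scikit-image's unwrap_phase routine are stand-ins for the patent's own filter design and unwrapping algorithm, and are listed here only as assumptions.

```python
import numpy as np
from skimage.restoration import unwrap_phase   # assumed 2-D phase-unwrapping routine

def fourier_fringe_analysis(preprocessed, half_width=20):
    """FFT -> isolate one information peak -> shift it to the origin ->
    inverse FFT -> wrapped phase -> unwrapped (continuous) phase."""
    spectrum = np.fft.fftshift(np.fft.fft2(preprocessed))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2

    # Search only one half-plane so the conjugate information peak is ignored.
    magnitude = np.abs(spectrum)
    magnitude[:, :cx + 1] = 0
    py, px = np.unravel_index(np.argmax(magnitude), magnitude.shape)

    # Simple box filter around the located information peak.
    ys = slice(py - half_width, py + half_width)
    xs = slice(px - half_width, px + half_width)
    filtered = np.zeros_like(spectrum)
    filtered[ys, xs] = spectrum[ys, xs]

    # Frequency shift: move the carrier peak to the origin, which removes the
    # carrier tilt and thereby most of the wraps in the recovered phase.
    shifted = np.roll(filtered, (cy - py, cx - px), axis=(0, 1))

    analytic = np.fft.ifft2(np.fft.ifftshift(shifted))
    wrapped = np.angle(analytic)                   # wrapped phase in (-pi, pi]
    return unwrap_phase(wrapped)
```

The subsequent conversion to height then requires only the stored reference-plane phase map and the calibrated phase-to-height multiplier.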
For non-full field objects, the background reference image is used to produce a
binary mask to exclude the data outside the borders of the object. This is an obvious aid to the phase-unwrapping algorithm. The mask is eroded very slowly around the borders of
the object, to avoid steep edges where the fringes may coalesce and become noisy.
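A sketch of how such a mask might be produced is shown below; the bright-object threshold convention and the erosion depth are illustrative assumptions rather than values from the patent.

```python
from scipy import ndimage

def object_mask(background_img, threshold, erode_px=3):
    """Threshold the fringe-free background image to separate the object from
    its surroundings, then erode the mask a few pixels so that noisy, steep
    border regions (where the fringes may coalesce) are excluded before
    phase unwrapping."""
    mask = background_img > threshold    # assumed: object brighter than surroundings
    return ndimage.binary_erosion(mask, iterations=erode_px)
```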
The final step in the measurement algorithm involves the subtraction of a stored
measurement of a flat plane (the table surface), as described previously as part of the system
calibration. This compensates for any fringe curvature in the interferometer and also any
aberrations in the optical system.
By storing the information in the computer, it is possible to compare the shape and
image position data with the corresponding data from the previously-conducted CT scans.
It is thereby possible to match the two sets of data as closely as possible and indeed a mathematical "best match" can be carried out, which will enable a radiographer to position
the patient to within sub-millimetre accuracy with respect to the image obtained from the
CT scans. The invention is not restricted to the details of the foregoing embodiment. In
particular, although the invention has been described with reference to detection of a patient, the technique can be applied to any object to be measured.
However, the invention is particularly useful, as mentioned previously, in the alignment of a patient on a treatment couch in curative radiotherapy of localised cancer.
It is possible to construct a body surface height map from a B-spline skinned model of a planned CT scan outline set and compare it with a height-map obtained from the present
invention. By repeatedly comparing the optical sensor height map with the planned CT height map, it is possible to perform dynamic surface matching at pre-treatment set up. As
the patient orientation on the treatment couch (with respect to the radiation source) is
changed, the optical sensor system would generate new surface height maps at near real-time rates. As such, the changes in patient orientation will be reflected in the sensor height
maps at rates which would not incur any noticeable system lags.
Because the sensor is fixed, its height map is just a "patch", albeit one changing due to the set-up of the patient. However, the reference CT height map is a full "wrap-around" data-
set that can be height mapped from any perspective, in advance. So if it is known which side of the patient will be in view it is possible to create a detailed height map for this.
Similarly, if it is known that the top of the patient is to be matched a height map will be
created for this area. This significantly improves flexibility and speed.
The preparation of the reference CT height map against which the sensor height map
is compared must be done on the basis of an exhaustive, high-resolution transformation of the 3-D B-spline fitted surface, or another flexible interpolation in 3-D. Without this detailed data it is hard to find corresponding points with the sensor height map, since the underlying
patient orientation is changing within the sensor height map and different surface arrays are
being "seen". It also allows processing to be sped up to achieve near real-time matching. It should also be noted that the sensor does produce height maps of the patient in
different orientations, with occlusions etc. The consequence of the changing patient orientation
with respect to the sensor, and of occlusions, is that at any given instant during patient set-up the sensor height map represents only a small part of the larger CT height map, and even
then there are "holes" in the available sensor height map due to occlusions etc. The matching of the sensor height map to the reference CT height map must be performed using
labels that tell the algorithm that data points are indeed missing or a boundary has been reached i.e. missing points are labelled "not a data point". The same approach is used to
tag known flaws/errors in the pre-screened height maps, or to limit matching to a smaller
more significant portion of the data-sets e.g. close to a given anatomical feature.
The dynamic surface matching software is a graphical user interface which, when
installed in a treatment room, will provide a user with a visual indication of correlation
between the planned CT and optical sensor height maps in the region of overlap. By
rendering both 3-D height maps as surface volumes and using simple mouse actions, the
user can alter his/her viewpoint in order to visually assess the degree of surface match.
Alternatively, continuous fast simulated annealing techniques may be adapted in
order to find the best fit between the planned CT and optical sensor height maps. Because of the inherent flexibility of a patient's internal anatomy, it is assumed that height map
alignments will not result in a precise or exact surface fit. As such, the best surface match will result from the minimisation of an objective function.
The 3-D planned CT scan and the optical sensor height maps provide Cartesian
coordinate triplets which are reformatted into image pairs for comparison. In simple terms, the planned CT height map represents what the optical sensor should see and the affine-transformed sensor height map is what it actually sees. By continuously weighting the rate
of collapse of the probability distribution and the acceptance probability a global minimum
can be computed for the system and a "best match" can be calculated. The system can then be used to provide details as to correct positioning of the treatment table upon which a
patient is supported. Further details of such techniques can be found, for example, in the
article by P. A. Graham, C.J. Moore, R.I. MacKay and P.J. Sharak entitled "Dynamic
Surface Matching for Patient Positioning in Radiotherapy" IEEE Computer Society, Proc 17th International Conference on Information Visualisation pages 16 to 24, London,
England, July 1998.
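As a rough illustration of the kind of objective function and annealing search involved, the sketch below scores a rigid shift of the sensor height map against the reference CT height map, with missing points marked as NaN playing the role of the "not a data point" labels; the three-parameter translation, the mean-squared-difference cost and the use of SciPy's dual_annealing optimiser are simplifying assumptions and not the patent's affine-transform formulation.

```python
import numpy as np
from scipy.optimize import dual_annealing   # stand-in for continuous fast simulated annealing

def match_cost(params, sensor_map, ct_map):
    """Cost of overlaying the sensor height map (a 2-D array of heights, NaN
    where there is no data point) on the CT height map after a rigid shift."""
    dx, dy, dz = params
    shifted = np.roll(sensor_map, (int(round(dy)), int(round(dx))), axis=(0, 1)) + dz
    diff = shifted - ct_map
    valid = ~np.isnan(diff)                  # ignore "not a data point" pixels
    if not valid.any():
        return np.inf
    return float(np.mean(diff[valid] ** 2))  # mean squared height mismatch

def best_match(sensor_map, ct_map, search_range=50.0):
    """Search the shift parameters for a global minimum of the objective."""
    bounds = [(-search_range, search_range)] * 3
    result = dual_annealing(match_cost, bounds, args=(sensor_map, ct_map))
    return result.x, result.fun
```

In practice the minimised objective value itself gives the "degree of surface match" that the graphical interface would display to the user.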

Claims

1. A method for measuring an object comprising:
projecting an interference fringe pattern onto the object using sources of e.m.
radiation;
measuring the phase shift and image intensity of the fringe pattern across an image
field;
measuring the intensity of background illumination across the image field;
modifying the measurements of the fringe pattern by subtracting an amount
corresponding to the measured background illumination across the image field; and
generating a 3-D height map of the object using the modified measurements.
2. A method as claimed in claim 1, wherein the background illumination
across the image field is measured by destroying the interference fringe pattern but
retaining the same sources of e.m. radiation.
3. A method as claimed in claim 2, wherein the interference fringe pattern
is destroyed by vibrating one of the sources of e.m. radiation.
4. A method as claimed in any of the preceding claims, wherein the 3-D
height map is generated by use of a Fourier fringe analysis.
5. A method as claimed in any of the preceding claims, wherein a binary
mask is produced by performing a thresholding operation on the measured background
illumination.
6. A method as claimed in any of the preceding claims, further comprising
a calibration phase.
7. A method as claimed in claim 6, wherein the
calibration phase comprises the steps of:
placing a planar object of known dimensions on a reference plane;
capturing an image of the object and performing a binary thresholding operation
on the image in pixel units;
calculating X and Y spatial magnification factors, and calculating a value of an
aspect ratio of the system therefrom;
measuring and storing the 3-D shape of the reference plane; and
placing an object having a plurality of identified points at known locations on the
object and detecting those points to allow a calculation of a direct phase to height
multiplier to be made.
PCT/GB2000/004226 1999-11-04 2000-11-03 Measurement of objects WO2001033164A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU11594/01A AU1159401A (en) 1999-11-04 2000-11-03 Measurement of objects
GB0210548A GB2372318A (en) 1999-11-04 2000-11-03 Measurement of objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB9926014.3A GB9926014D0 (en) 1999-11-04 1999-11-04 Measurement of objects
GB9926014.3 1999-11-04

Publications (1)

Publication Number Publication Date
WO2001033164A1 true WO2001033164A1 (en) 2001-05-10

Family

ID=10863873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2000/004226 WO2001033164A1 (en) 1999-11-04 2000-11-03 Measurement of objects

Country Status (3)

Country Link
AU (1) AU1159401A (en)
GB (2) GB9926014D0 (en)
WO (1) WO2001033164A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113959360B (en) * 2021-11-25 2023-11-24 成都信息工程大学 Method, device and medium for measuring three-dimensional surface shape based on phase shift and focal shift

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4120115A1 (en) * 1991-06-19 1992-12-24 Volkswagen Ag Contactless determn. of coordinates of points on surface - combining coded light application with phase-shift method using stripes of width corresp. to pattern period
US5381236A (en) * 1991-02-12 1995-01-10 Oxford Sensor Technology Limited Optical sensor for imaging an object
US5612786A (en) * 1994-05-26 1997-03-18 Lockheed Missiles & Space Company, Inc. Contour measurement system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5381236A (en) * 1991-02-12 1995-01-10 Oxford Sensor Technology Limited Optical sensor for imaging an object
DE4120115A1 (en) * 1991-06-19 1992-12-24 Volkswagen Ag Contactless determn. of coordinates of points on surface - combining coded light application with phase-shift method using stripes of width corresp. to pattern period
US5612786A (en) * 1994-05-26 1997-03-18 Lockheed Missiles & Space Company, Inc. Contour measurement system

Also Published As

Publication number Publication date
GB0210548D0 (en) 2002-06-19
AU1159401A (en) 2001-05-14
GB9926014D0 (en) 2000-01-12
GB2372318A (en) 2002-08-21

Similar Documents

Publication Publication Date Title
US10911672B2 (en) Highly efficient three-dimensional image acquisition method based on multi-mode composite encoding and epipolar constraint
Lilley et al. Robust fringe analysis system for human body shape measurement
Bert et al. A phantom evaluation of a stereo‐vision surface imaging system for radiotherapy patient setup
Sansoni et al. A novel, adaptive system for 3-D optical profilometry using a liquid crystal light projector
EP1266187B1 (en) System for simultaneous projections of multiple phase-shifted patterns for the three-dimensional inspection of an object
EP1192414B1 (en) Method and system for measuring the relief of an object
EP0888522B1 (en) Method and apparatus for measuring shape of objects
US9208561B2 (en) Registration method and registration device for a position detection system
Zhang et al. An optical measurement of vortex shape at a free surface
EP1451523A1 (en) System and method for wavefront measurement
JP2002529689A (en) Phase determination of radiation wave field
WO2009113068A1 (en) Intraoral imaging system and method based on conoscopic holography
WO2021207722A1 (en) System and method for 3d image scanning
Price et al. Real-time optical measurement of the dynamic body surface for use in guided radiotherapy
CN106778663B (en) Fingerprint identification system
Perednia et al. Automatic registration of multiple skin lesions by use of point pattern matching
WO2001033164A1 (en) Measurement of objects
Moore et al. 3D dynamic body surface sensing and CT-body matching: a tool for patient set-up and monitoring in radiotherapy
Carelli et al. Holographic contouring method: application to automatic measurements of surface deffects in artwork
JP7152470B2 (en) Method and Apparatus for Measuring Accuracy of Models Generated by Patient Monitoring Systems
KR102129069B1 (en) Method and apparatus of automatic optical inspection using scanning holography
Shao et al. Image contrast enhancement and denoising in micro-gap weld seam detection by periodic wide-field illumination
Naumov et al. Estimating the quality of stereoscopic endoscopic systems
RU2180745C2 (en) Method of radiation computation tomography
Soifer et al. Measuring geometric parameters using image processing and diffractive optics methods

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref country code: GB

Ref document number: 200210548

Kind code of ref document: A

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase