DK2428162T3 - Method of recording data for three-dimensional imaging of intra-oral cavities - Google Patents


Info

Publication number: DK2428162T3
Application number: DK11179796.5T
Authority: DK (Denmark)
Prior art keywords: data, handheld, view, measurement, measurement field
Other languages: Danish (da)
Inventors: Robert Dillon, Andrew Vesper, Timothy Fillion
Original Assignee: Dimensional Photonics Int Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Dimensional Photonics Int Inc. Application granted. Publication of DK2428162T3.

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Description

FIELD OF THE INVENTION
[0001] The invention relates generally to three-dimensional (3D) imaging of the intra-oral cavity. More particularly, the invention relates to a method of acquiring 3D image data of an object scene using a plurality of 3D measurement scans and generating a complete 3D image from the scans.
BACKGROUND
[0002] In a typical dental or medical 3D camera or scanner imaging system, a series of two-dimensional (2D) intensity images of one or more object surfaces in an object scene is acquired where the illumination for each image may vary. In some systems, structured light patterns are projected onto the surface and detected in each 2D intensity image. For example, the projected light pattern can be generated by projecting a pair of coherent optical beams onto the object surface and the resulting fringe pattern varied between successive 2D images. Alternatively, the projected light pattern may be a series of projected parallel lines generated using an intensity mask and the projected pattern shifted in position between successive 2D images. In still other types of 3D imaging systems, techniques such as confocal imaging are employed.
[0003] In a dynamic 3D imaging system, a series of 3D data sets is acquired while the camera or scanner is in motion relative to the object scene. For example, the imaging system can be a wand or other handheld device that a user manually positions relative to the object scene. In some applications, multiple object surfaces are measured by moving the device relative to the objects so that surfaces obscured from view of the device in one position are observable by the device in another position. For example, in dental applications the presence of teeth or other dental features in a static view can obscure the view of other teeth. A processing unit registers the overlapped regions of all acquired 3D data to obtain a full 3D data set representation of all surfaces observed during the measurement procedure.
[0004] US-A-2002/064752 discloses systems and methods for optically imaging a dental structure within an oral cavity by: directing air at a tooth-gum interface of the dental structure through at least one air nozzle movably coupled to an intra-oral track; coating the dental structure with a substance to enhance the image quality; and capturing one or more images of the dental structure through at least one image aperture, the image aperture movably coupled to the intra-oral track.
[0005] WO-A-2007/030340 discloses a system for scanning 3D images which applies sensor fusion of a passive triangulation sensor in combination with an active triangulation sensor to obtain high resolution 3D surface models from objects undergoing arbitrary motion during the data acquisition time.
[0006] The present invention is as claimed in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The above and further advantages of this invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like numerals indicate like structural elements and features in the various figures. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIG. 1 is a block diagram showing an example of a measurement system that can be used to obtain a 3D image of an object scene.
FIG. 2 illustrates a maneuverable wand that is part of a 3D measurement system used to obtain 3D measurement data for an intra-oral cavity.
FIG. 3 illustrates how an upper dental arch is measured using a handheld 3D measurement device such as the maneuverable wand of FIG. 2.
FIG. 4 is a flowchart representation of an embodiment of a method of obtaining 3D surface data for a dental arch according to the invention.
FIG. 5 shows a measurement field of view at five different positions along an upper dental arch during 3D data acquisition for an occlusal scan according to the method of FIG. 4.
FIG. 6A and FIG. 6B are an occlusal view and a buccal view, respectively, of a wand and the corresponding location of the measurement field of view during a scan segment of a buccal surface.
DETAILED DESCRIPTION
[0008] The present teaching will now be described in more detail with reference to exemplary embodiments thereof as shown in the accompanying drawings. While the present teaching is described in conjunction with various embodiments and examples, it is not intended that the present teaching be limited to such embodiments. On the contrary, the present teaching encompasses various alternatives, modifications and equivalents, as will be appreciated by those of skill in the art. Those of ordinary skill in the art having access to the teaching herein will recognize additional implementations, modifications and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein.
[0009] The methods of the present invention may include any of the described embodiments or combinations of the described embodiments in an operable manner. In brief overview, embodiments of the methods of the invention enable an accurate 3D measurement of one or more object surfaces. In various embodiments described below, the methods relate to the acquisition of 3D data during a 3D measurement procedure. The methods are described with respect to measurements of an oral cavity, such as a measurement made by a clinician in a dental application in which measured surfaces may include the enamel surface of teeth, the dentin substructure of teeth, gum tissue and various dental structures (e.g., posts, inserts and fillings). It will be appreciated that the methods can also be applied in medical applications and other applications in which 3D measurement data are acquired with 3D scanning devices under direct manipulation by an operator or otherwise maneuvered by a control system.
[0010] In the embodiments described below, 3D measurement systems use structured illumination patterns generated by interferometric fringe projection or other techniques. Imaging components acquire 2D images used to determine positional information of points on the surface of objects based on the structured illumination of the objects.
[0011] U.S. Patent No. 5,870,191 describes a technique referred to as Accordion Fringe Interferometry (AFI) that can be used for high precision 3D measurements based on interferometric fringe projection. AFI-based 3D measurement systems typically employ two closely-spaced coherent optical sources to project the interferometric fringe pattern onto the surface of the object. Images of the fringe pattern are acquired for at least three spatial phases of the fringe pattern.
[0012] FIG. 1 illustrates an AFI-based 3D measurement system 10 used to obtain 3D images of one or more objects 22. Two coherent optical beams 14A and 14B generated by a fringe projector 18 are used to illuminate the surface of the object 22 with a pattern of interference fringes 26. An image of the fringe pattern at the object 22 is formed by an imaging system or lens 30 onto an imager that includes an array of photodetectors 34. For example, the detector array 34 can be a two-dimensional charge coupled device (CCD) imaging array. An output signal generated by the detector array 34 is provided to a processor 38. The output signal includes information on the intensity of the light received at each photodetector in the array 34. An optional polarizer 42 is oriented to coincide with the main polarization component of the scattered light. A control module 46 controls parameters of the two coherent optical beams 14 emitted from the fringe projector 18. The control module 46 includes a phase shift controller 50 to adjust the phase difference of the two beams 14 and a spatial frequency controller 54 to adjust the pitch, or separation, of the interference fringes 26 at the object 22.
[0013] The spatial frequency of the fringe pattern is determined by the separation of two virtual sources of coherent optical radiation in the fringe projector 18, the distance from the virtual sources to the object 22, and the wavelength of the radiation. The virtual sources are points from which optical radiation appears to originate although the actual sources of the optical radiation may be located elsewhere. The processor 38 and control module 46 communicate to coordinate the processing of signals from the photodetector array 34 with respect to changes in phase difference and spatial frequency, and the processor 38 determines 3D information for the object surface according to the fringe pattern images.
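Paragraph [0013] relates the fringe spatial frequency to the virtual-source separation, the standoff distance and the wavelength. In the small-angle two-source interference approximation this is the familiar relation Λ ≈ λR/a for the fringe period. The sketch below illustrates that relation only; the function name and all numbers are hypothetical and are not taken from the patent.

```python
import math

def fringe_pitch(wavelength_m: float, source_separation_m: float,
                 distance_m: float) -> float:
    """Approximate fringe period at the object for two closely spaced
    coherent virtual sources (small-angle two-source interference)."""
    return wavelength_m * distance_m / source_separation_m

# Hypothetical numbers: a 650 nm source, virtual sources 100 um apart,
# object 100 mm from the sources.
pitch = fringe_pitch(650e-9, 100e-6, 0.1)
print(f"{pitch * 1e3:.2f} mm")  # → 0.65 mm
```

Widening the source separation raises the fringe spatial frequency at the object, which is the quantity the spatial frequency controller 54 adjusts.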
[0014] The processor 38 calculates the distance from the imaging system 30 and detector array 34 to the object surface for each pixel based on the intensity values for the pixel in the series of 2D images generated after successive phase shifts of the fringe patterns. Thus the processor creates a set of 3D coordinates that can be displayed as a point cloud or a surface map that represents the object surface. The processor 38 communicates with a memory module 58 for storage of 3D data generated during a measurement procedure. A user interface 62 includes an input device and a display to enable an operator such as a clinician to provide operator commands and to observe the acquired 3D information in a near real-time manner. For example, the operator can observe a display of the growth of a graphical representation of the point cloud or surface map as different regions of the surface of the object 22 are measured and additional 3D measurement data are acquired.
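Paragraph [0014] describes computing per-pixel distance from intensity values across a series of phase-shifted fringe images. As a hedged illustration of the first stage of such a computation, a standard three-step phase-shifting algorithm (one common approach, not necessarily the exact computation used by this system) recovers the wrapped fringe phase at a pixel from three intensities recorded at phase shifts of 0, 2π/3 and 4π/3:

```python
import math

def phase_from_three_steps(i1: float, i2: float, i3: float) -> float:
    """Wrapped fringe phase at one pixel from three intensities recorded
    at phase shifts of 0, 2*pi/3 and 4*pi/3; result lies in (-pi, pi].
    For I_k = A + B*cos(phi + delta_k):
      I3 - I2 = sqrt(3) * B * sin(phi)
      2*I1 - I2 - I3 = 3 * B * cos(phi)
    so atan2 of those two quantities recovers phi."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Synthetic single-pixel check with a known phase of 1.2 rad
# (offset and modulation values are arbitrary).
phi, offset, mod = 1.2, 100.0, 50.0
i1, i2, i3 = (offset + mod * math.cos(phi + d)
              for d in (0.0, 2 * math.pi / 3, 4 * math.pi / 3))
print(round(phase_from_three_steps(i1, i2, i3), 6))  # → 1.2
```

The wrapped phase, combined with the known fringe geometry and the spatial-frequency changes described above, is what allows the processor to resolve a distance for each pixel.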
[0015] FIG. 2 illustrates a handheld 3D measurement device in the form of a maneuverable wand 66 that can be used to obtain 3D measurement data for an intra-oral cavity. The wand 66 includes a body section 70 that is coupled through a flexible cable 74 to a processor and other system components (not shown). The wand 66 generates a structured light pattern 78 that is projected from near the projection end 82 to illuminate a surface to be measured. For example, the structured light pattern 78 can be an interferometric fringe pattern based on the principles of an AFI measurement system as described above for FIG. 1. The wand 66 can be used to obtain 3D data for a portion of a dental arch. The wand 66 is maneuvered within the intra-oral cavity by a clinician so that 3D data are obtained for all surfaces that can be illuminated by the structured light pattern 78.
[0016] FIG. 3 illustrates an intra-oral application in which an upper dental arch is measured using a handheld 3D measurement device such as the wand 66 of FIG. 2. Reference is also made to FIG. 4 which presents a flowchart representation of an embodiment of a method 100 of obtaining 3D surface data for a dental arch. The measurement results in a complete 3D data set that accurately represents the full arch of a patient, i.e., from the back molar on one side of the arch to the back molar on the opposite side of the arch. During data processing, stitching errors and motion-induced errors can degrade the measurement results. For a 3D measurement device that includes a 2D imager with a small measurement field of view (FOV) (e.g., 13 mm x 10 mm) relative to the full arch, hundreds of 3D data sets may be stitched together to obtain the complete 3D data set for the arch.
[0017] According to the method 100, numerous overlapping 3D data sets are stitched together in a common coordinate reference. The 3D data are obtained in a preferred manner or sequence so that the final 3D data set resulting from all the 3D data more accurately represents the dental arch. In particular, a backbone 3D data set is first generated and additional sequences of 3D data are subsequently joined to the backbone 3D data set. Individual scan segments are used to acquire subsets of 3D data for the final point cloud.
Limited motion of the wand during the 3D data acquisition for each scan segment results in reduced measurement error and increased measurement accuracy.
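The stitching in paragraph [0017] places every scan segment in one common coordinate reference. A minimal sketch of the last step of any such registration (the function name and the example transform are illustrative assumptions, not the patented algorithm): once registration has estimated a rigid transform (R, t) for a segment, every point in that segment is mapped into the backbone frame as p' = Rp + t.

```python
import math

def apply_rigid_transform(points, rotation, translation):
    """Map scan-local 3D points into the common (backbone) frame:
    p' = R @ p + t, with `rotation` a 3x3 row-major matrix and
    `translation` a 3-vector."""
    transformed = []
    for p in points:
        transformed.append(tuple(
            sum(rotation[r][c] * p[c] for c in range(3)) + translation[r]
            for r in range(3)))
    return transformed

# Hypothetical registration result for one scan segment: a 90-degree
# rotation about z plus a small translation.
theta = math.pi / 2
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
t = [1.0, 0.0, 0.5]
mapped = apply_rigid_transform([(1.0, 0.0, 0.0)], R, t)
print([tuple(round(v, 6) for v in p) for p in mapped])  # → [(1.0, 1.0, 0.5)]
```

Estimating (R, t) from the overlapping region is typically done with an iterative closest point (ICP) style procedure; only the application of the resulting transform is shown here.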
[0018] A clinician performs the 3D measurement method 100 by positioning (step 105) the wand so that the structured light pattern illuminates a portion of an occlusal surface of the dental arch at a starting location, for example, at one end of the dental arch. 3D data are acquired for the illuminated portion of the occlusal surface. In this example, data acquisition starts by acquiring data from within a measurement field of view 86A at the patient's left back molar 90 of the upper dental arch as shown in FIG. 3. The wand is then moved (step 110) from the patient's left back molar 90 along the arch to the right back molar while maintaining a substantially occlusal view. FIG. 5 shows the measurement field of view 86A to 86E for five positions (A to E) along the arch. The full occlusal scan does not require significant rotation of the wand during this portion of the 3D measurement procedure. Rotation- and focus-induced errors are therefore reduced in comparison to scans obtained by manipulating the wand for other views of the arch. In some embodiments, the motion of the wand is substantially in a direction parallel to the stripes or fringes of the structured light pattern. The primary motion of the wand during the occlusal scan is restricted substantially to a single plane. Advantageously, the occlusal view includes features having substantially higher spatial frequency content than other views. That is, the occlusal view more readily shows rapidly varying structure (e.g., gaps between teeth) that improves the accuracy of stitching of 3D data relative to other scan views (i.e., buccal and lingual). Thus the 3D data corresponding to the occlusal scan define a backbone to which 3D data sets obtained during other directional scan views can be attached as necessary to obtain a complete set of 3D data for the arch.
[0019] To continue the measurement procedure, the clinician positions (step 115) the wand so that the structured light pattern illuminates a portion of the occlusal surface, for example, near or at one end of the arch, and 3D data are acquired that overlap a portion of the backbone 3D data set. Preferably, the 3D measurement system provides an affirmative visual or audible indication to the clinician when the new 3D data for the real-time position of the structured light pattern "locks on" to the display of a point cloud for the backbone 3D data set. The newly-acquired 3D data are then registered or joined (step 120) to the backbone 3D data and serve as the start of a buccal scan segment for the arch. The wand is then rotated about its primary axis and moved (step 125) so that a portion of the buccal surface of the arch is illuminated with the structured light pattern and 3D data are acquired. The wand is then maneuvered (step 130) by the clinician so that the structured light pattern moves along a segment of the buccal surface. For example, the wand may be moved so that the structured light pattern is scanned in time from the patient's back left molars to just beyond the midpoint of the buccal surface. FIG. 6A and FIG. 6B show an occlusal view and a buccal view, respectively, of the position of the projection end 82 of the wand and the corresponding location of the measurement field of view (and structured light pattern) part way through this scan segment of the buccal surface. Optionally, the clinician can rotate (step 135) the wand so that the structured light pattern illuminates the midpoint of the occlusal surface. In this manner, the structured light pattern is used to acquire 3D data in an occlusal view that overlay data in the backbone 3D data set to more accurately "register" to the existing 3D data in the common coordinate reference system.
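The "locks on" indication in paragraph [0019] implies some test that newly acquired data overlap the backbone; the patent does not specify how that test works. One plausible sketch, with all names and thresholds as assumptions, is to measure the fraction of new points that fall within a tolerance of the backbone cloud and signal the clinician when it is high enough:

```python
def overlap_fraction(new_points, backbone_points, tol=0.2):
    """Fraction of newly acquired 3D points lying within `tol` of some
    backbone point. Brute force is fine for a sketch; a k-d tree would
    be used at realistic point-cloud sizes."""
    def near(p):
        return any(sum((a - b) ** 2 for a, b in zip(p, q)) <= tol * tol
                   for q in backbone_points)
    return sum(1 for p in new_points if near(p)) / len(new_points)

# Backbone sampled along a line; the new scan overlaps its second half
# with a small lateral offset, so the overlap fraction is high.
backbone = [(x * 0.1, 0.0, 0.0) for x in range(100)]
new_scan = [(5.0 + x * 0.1, 0.05, 0.0) for x in range(30)]
print(overlap_fraction(new_scan, backbone) >= 0.5)  # → True
```

When the fraction exceeds a chosen threshold, the system could emit the visual or audible cue and begin joining the new segment to the backbone.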
[0020] A complementary buccal scan segment can now be performed. The clinician positions (step 140) the wand so that the structured light pattern illuminates a portion of the occlusal surface at the opposite end of the arch and 3D data are acquired that overlap a portion of the backbone 3D data set. The 3D data at this location are joined (step 145) to the 3D backbone data set and serve as the start of a complementary buccal scan segment for the arch. The wand is then rotated about its primary axis (step 150) so that a portion of the remainder of the buccal surface of the arch is illuminated with the structured light pattern and 3D data are acquired. Subsequently, the wand is maneuvered (step 155) by the clinician so that the structured light pattern moves along the remainder of the buccal surface segment. For example, the wand may be moved so that the structured light pattern is moved in time from the patient's back right molars to just beyond the midpoint of the buccal surface. Optionally, the clinician can rotate (step 160) the wand so that the structured light pattern illuminates the midpoint region of the occlusal surface and 3D data are acquired in an occlusal view that overlay the data in the backbone 3D data set. Thus the 3D data for this buccal segment are accurately registered in the common reference system of the backbone 3D data set.
[0021] To complete the 3D measurement of the arch, the clinician obtains 3D data for the lingual surface of the dental arch in a manner similar to that used for obtaining 3D data for the buccal surface. More specifically, steps 125 to steps 160 are performed by replacing all references to the buccal surface with references to the lingual surface. In total, five scan segments are performed to obtain a full set of 3D data for the final 3D data set for the dental arch.
[0022] In effect, the steps of joining 3D data to the backbone 3D data set allow sequences of individual 3D images to be attached by referring to a subset of the chronologically ordered 3D images in the backbone 3D data set. This joining technique "primes the stitcher" so that the subsequent scan is properly registered to the backbone 3D data set and accurately shares the same global coordinate system.
[0023] In an alternative embodiment, the order of scan segments can differ. For example, acquisition of the 3D data for the two lingual segments can precede acquisition of the 3D data for the buccal segments.
[0024] In other embodiments, the clinician can use a greater number of buccal and lingual scan segments, and the extent of each segment can be smaller. In such embodiments, the measurement system displays various portions of the backbone 3D data set to allow joining to the backbone 3D data set at other locations.
[0025] In the embodiments described above for the method 100, the structured light pattern and measurement field of view used for 3D data acquisition are moved along various surfaces of a dental arch and repositioned by manipulating the position and rotation of a wand. The method can be adapted for other types of 3D measurement systems. For example, the method can be performed with any wand or maneuverable 3D measurement device whose measurement field of view can be translated, rotated and positioned in a manner similar to the structured light pattern, such that 3D data generated early in the procedure are used to create a backbone 3D data set and subsequent 3D data are joined to the backbone 3D data set to obtain a high-accuracy 3D representation of an object scene. Furthermore, the method preferably obtains 3D measurement data first for a directional view that requires substantially only two-dimensional translation of the measurement field of view and in which high spatial frequency content is observable, to create the backbone 3D data set; subsequent directional views are then used to generate additional 3D data that are attached to the backbone 3D data set.
[0026] While the invention has been shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention which is as recited in the accompanying claims.
REFERENCES CITED IN THE DESCRIPTION
This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.
Patent documents cited in the description
• US2002064752A [0004]
• WO2007030340A [0005]
• US5870191A [0011]

Claims (14)

1. A method of obtaining three-dimensional (3D) surface data for an object scene, the method comprising: positioning a handheld 3D measurement device (66) so that a measurement field of view includes a first portion of a directional view of an object scene at a starting point in the object scene, and acquiring 3D data for the measurement field of view; translating the handheld 3D measurement device (66) while maintaining the directional view of the object scene so that the measurement field of view is translated over a remaining portion of the directional view of the object scene, the handheld 3D measurement device (66) acquiring 3D data for the translated measurement field of view registered with the 3D data for the starting point in the object scene, the 3D data for the positioned and translated measurement field of view defining a backbone 3D data set; positioning the handheld 3D measurement device (66) so that the measurement field of view includes another portion of the directional view of the object scene, the handheld 3D measurement device (66) acquiring 3D data for the measurement field of view that overlap a portion of the backbone 3D data set; joining the 3D data that overlap the portion of the backbone 3D data set to the backbone 3D data set; and moving the handheld 3D measurement device (66) so that the measurement field of view includes a portion of an orthogonal directional view of the object scene, the handheld 3D measurement device (66) acquiring 3D data for the measurement field of view registered with the backbone 3D data set.

2. The method of claim 1, further comprising moving the handheld 3D measurement device (66) so that the measurement field of view scans a segment of the orthogonal directional view of the object scene, the handheld 3D measurement device (66) acquiring 3D data for the measurement field of view registered with the backbone 3D data set.

3. The method of claim 1, further comprising displaying the acquired 3D data for observation.

4. The method of claim 3, wherein the acquired 3D data are displayed as a three-dimensional cloud of points.

5. The method of claim 3, wherein the acquired 3D data are displayed as one or more object surfaces.

6. The method of claim 1, wherein the object scene comprises at least a portion of an intra-oral cavity.

7. The method of claim 1, wherein the handheld 3D measurement device (66) projects a structured light pattern (78) into the object scene within the measurement field of view.

8. The method of claim 7, wherein the structured light pattern (78) comprises a striped pattern.

9. The method of claim 7, wherein the structured light pattern (78) comprises an interferometric intensity pattern.

10. The method of claim 7, wherein the directional view of the object scene comprises an occlusal surface of a dental arch, and wherein the orthogonal directional view comprises a buccal surface or a lingual surface of the dental arch.

11. The method of claim 1, wherein the handheld 3D measurement device (66) substantially does not rotate during translation of the handheld 3D measurement device (66).

12. The method of claim 10, further comprising moving the handheld 3D measurement device (66) so that the projection of the structured light pattern (78) is moved over a segment of the buccal or lingual surface and 3D data are acquired for the segment.

13. The method of claim 10, further comprising rotating the handheld 3D measurement device (66) so that the projection of the structured light pattern (78) is incident on a portion of the occlusal surface and 3D data are acquired that overlap the backbone 3D data set.

14. A method of obtaining three-dimensional (3D) surface data for an object scene according to claim 1, wherein the object scene is an intra-oral cavity, the method comprising: positioning the handheld 3D measurement device (66) in an intra-oral cavity so that a measurement field of view of the handheld 3D measurement device (66) includes a portion of an occlusal surface of a dental arch at a starting point, and acquiring 3D data for the measurement field of view; translating the handheld 3D measurement device (66) so that the measurement field of view includes a remaining portion of the occlusal surface of the dental arch, the 3D data acquired for the moving measurement field of view being registered with the 3D data for the portion of the occlusal surface, the 3D data for the portion and for the remaining portion of the occlusal surface defining a backbone 3D data set for the dental arch; positioning the handheld 3D measurement device (66) so that the measurement field of view includes a portion of the occlusal surface and 3D data are acquired that overlap a portion of the backbone 3D data set; joining the 3D data that overlap a portion of the backbone 3D data set to the backbone 3D data set; and moving the handheld 3D measurement device (66) so that the measurement field of view includes a portion of the intra-oral cavity separate from the occlusal surface of the dental arch, and 3D data are acquired for that portion of the intra-oral cavity registered with the backbone 3D data set.
DK11179796.5T 2010-09-10 2011-09-02 Method of recording data for three-dimensional imaging of intra-oral cavities DK2428162T3 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US38173110P 2010-09-10 2010-09-10
US21759511A 2011-08-25 2011-08-25

Publications (1)

Publication Number: DK2428162T3
Publication Date: 2017-10-02

Family

ID=59958325

Family Applications (1)

Application Number Title Priority Date Filing Date
DK11179796.5T DK2428162T3 (en) 2010-09-10 2011-09-02 Method of recording data for three-dimensional imaging of intra-oral cavities

Country Status (1)

Country Link
DK (1) DK2428162T3 (en)
