US20120062716A1 - Object classification for measured three-dimensional object scenes - Google Patents
Object classification for measured three-dimensional object scenes
- Publication number: US20120062716A1
- Application number: US13/217,652
- Authority: US (United States)
- Prior art keywords: value, coordinates, point, determining, scene
- Legal status: Granted
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
- A61C9/0053—Optical means or methods, e.g. scanning the teeth by a laser or light beam
- A61C9/006—Optical means or methods, e.g. scanning the teeth by a laser or light beam projecting one or more stripes or patterns on the teeth
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0605—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements for spatially modulated illumination
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Description
- This application claims the benefit of the earlier filing date of U.S. Provisional Patent Application Ser. No. 61/381,731, filed Sep. 10, 2010 and titled “Method of Data Processing and Display for a Three-Dimensional Intra-Oral Scanner,” the entirety of which is incorporated herein by reference.
- The invention relates generally to three-dimensional (3D) imaging of an object scene. More particularly, the invention relates to a method of differentiating between different classes of objects in a 3D measurement of the object scene.
- In a typical dental or medical 3D camera or scanner imaging system, a series of two-dimensional (2D) intensity images of one or more object surfaces in an object scene is acquired where the illumination for each image may vary. In some systems, structured light patterns are projected onto the surface and detected in each 2D intensity image. For example, the projected light pattern can be generated by projecting a pair of coherent optical beams onto the object surface and the resulting fringe pattern varied between successive 2D images. Alternatively, the projected light pattern may be a series of projected parallel lines generated using an intensity mask and the projected pattern shifted in position between successive 2D images. In still other types of 3D imaging systems, techniques such as confocal imaging are employed.
- In a dynamic 3D imaging system, a series of 3D data sets is acquired while the camera or scanner is in motion relative to the object scene. For example, the imaging system can be a wand or other handheld device that a user manually positions relative to the object scene. In some applications, multiple object surfaces are measured by moving the device relative to the objects so that surfaces obscured from view of the device in one position are observable by the device in another position. For example, in dental applications the presence of teeth or other dental features in a static view can obscure the view of other teeth. A processing unit registers the overlapped region of all acquired 3D data to obtain a full 3D data set representation of all surfaces observed during the measurement procedure.
- The results of a 3D measurement can be difficult to interpret. For example, measurements of an intra-oral cavity typically include 3D data for different classes of objects such as teeth, artificial dental structures and gingiva. The 3D data can be presented to a user in different graphical formats such as a display of the points in the form of a 3D surface map representation or in the form of a 3D point cloud. Differentiating between different structures represented in the display can be problematic and may require extensive effort to properly interpret features in the display. In some instances, the clinician may be unable to distinguish adjacent portions of separate objects. For example, it can be difficult for a dental professional to accurately recognize the boundary between gingiva and enamel or dentin.
- In one aspect, the invention features a method of object classification for images of an intra-oral cavity. The method includes illuminating at least a portion of an intra-oral cavity with an optical beam and acquiring an image of the illuminated portion. Coordinates are determined for a plurality of points in the image and a translucence value is determined for each of the points. An object class is determined for each of the points based on the translucence value of the point.
- In another aspect, the invention features a method of object classification of 3D data for an object scene. The method includes illuminating an object scene with a structured light pattern and acquiring images of the illuminated object scene. Coordinates are determined for a plurality of points in the object scene based on the acquired images. A translucence value is determined for each point and an object class is determined for each point based on the translucence value of the point.
- In still another aspect, the invention features a method of object classification of 3D data for an object scene. The method includes illuminating an object scene with a sequence of structured light patterns each having a different spatial phase. An image of the object scene is acquired for each of the structured light patterns. Coordinates are determined for a plurality of points in the object scene based on the acquired images. A background intensity value is determined for each point based on the acquired images and an object class is determined for each point based on the background intensity value for the point.
- In still another aspect, the invention features an apparatus for object classification of an object scene. The apparatus includes an illumination source to illuminate an object scene, an imager and a processor in communication with the imager. The imager is configured to acquire an image of the illuminated object scene and to provide an output signal comprising 2D image data for the object scene. The processor is configured to determine a translucence value for a plurality of coordinates represented in the image in response to the 2D image data. The processor determines an object class for each of the coordinates in response to the translucence value for the coordinate.
- In still another aspect, the invention features an apparatus for object classification of an object scene. The apparatus includes a projector, an imager and a processor in communication with the imager. The projector is configured to illuminate an object scene with a sequence of structured light patterns. The imager is configured to acquire an image of the illuminated object scene for each structured light pattern in the sequence. The imager provides an output signal comprising 2D image data for the object scene for each of the images. The processor is configured to determine 3D coordinates for the object scene and a translucence value for each of the 3D coordinates in response to the 2D image data for the sequence. The processor determines an object class for each of the 3D coordinates in response to the translucence value for the 3D coordinate.
- The above and further advantages of this invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like numerals indicate like structural elements and features in the various figures. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
- FIG. 1 is a block diagram showing an example of a measurement system that can be used to obtain a 3D image of an object scene.
- FIG. 2 illustrates a maneuverable wand that is part of a 3D measurement system used to obtain 3D measurement data for an intra-oral cavity.
- FIG. 3 illustrates how a 3D measurement of the upper dental arch is performed using the wand of FIG. 2.
- FIG. 4 is a flowchart representation of a method of classification of 3D data for an object scene according to an embodiment of the invention.
- FIG. 5 is a flowchart representation of a method of object classification of 3D data for an object scene according to another embodiment of the invention.
- The present teaching will now be described in more detail with reference to exemplary embodiments thereof as shown in the accompanying drawings. While the present teaching is described in conjunction with various embodiments and examples, it is not intended that the present teaching be limited to such embodiments. On the contrary, the present teaching encompasses various alternatives, modifications and equivalents, as will be appreciated by those of skill in the art. Those of ordinary skill in the art having access to the teaching herein will recognize additional implementations, modifications and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein.
- The methods of the present invention may include any of the described embodiments or combinations of the described embodiments in an operable manner. In brief overview, the methods of the invention enable rapid automated object classification of measured 3D object scenes. As used herein, object classification refers to the determination of a type or class of object from a plurality of possible object classes for a measured object. The method can be performed during the 3D measurement procedure while data are being acquired. Alternatively, the method can be performed after completion of a measurement procedure with previously acquired data. In various embodiments, an object scene is illuminated with an optical beam and an image is acquired. Coordinates are determined for points in the image and a translucence value is determined for each of the points. An object class is determined for each point based on the translucence value for the point. Optionally, grayscale or color image data for each point is used to supplement the object class determination.
- In specific embodiments described below, the methods relate to object classification for 3D data during or after a 3D measurement of an oral cavity, such as a measurement made by a clinician in a dental application. Measured surfaces may include the enamel surface of teeth, the dentin substructure of teeth, gingiva, various dental structures (e.g., posts, inserts and fillings) and soft tissue (e.g., the tongue or lip). Classification of the 3D measurement data for the intra-oral measurement allows a distinction among the 3D data that correspond to these different object classes. The ability to distinguish among different types of objects allows the 3D measurement data to be displayed in a manner that shows the object class in the measured object scene. Moreover, 3D measurement data from objects not of interest can be managed accordingly. For example, motion of the tongue or lip through the measurement field of view in an intra-oral measurement application can cause data to be acquired from the interfering object. The unwanted data can be discarded or otherwise prevented from corrupting the measurement of the intended object scene, i.e., the teeth and gingiva. It will be appreciated that the methods can also be applied in medical applications and other applications in which 3D measurement data are acquired for object scenes having a plurality of object classes.
- In some of the embodiments described below, 3D measurement systems use structured illumination patterns generated by interferometric fringe projection or other techniques. Imaging components acquire 2D images to determine coordinate information of points on the surface of objects based on the structured illumination of the objects.
- U.S. Pat. No. 5,870,191, incorporated herein by reference, describes a technique referred to as Accordion Fringe Interferometry (AFI) that can be used for high precision 3D measurements based on interferometric fringe projection. AFI-based 3D measurement systems typically employ two closely-spaced coherent optical sources to project the interferometric fringe pattern onto the surface of the object. Images of the fringe pattern are acquired for at least three spatial phases of the fringe pattern.
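For reference, the per-pixel relations implied by a three-phase acquisition can be written in the standard three-step phase-shifting form. This is a conventional textbook formulation, not language from the patent, which only requires that at least three spatial phases be acquired:

```latex
% Standard three-step phase-shifting relations (conventional formulation).
% I_k: intensity recorded at one pixel under the k-th pattern,
% A: DC (background) level, B: fringe modulation, phi: fringe phase.
\[
  I_k = A + B\cos\!\Big(\phi + \frac{2\pi k}{3}\Big), \qquad k = 0, 1, 2,
\]
\[
  A = \frac{I_0 + I_1 + I_2}{3}, \qquad
  \phi = \operatorname{atan2}\!\big(\sqrt{3}\,(I_2 - I_1),\; 2I_0 - I_1 - I_2\big),
\]
\[
  B = \frac{1}{3}\sqrt{(2I_0 - I_1 - I_2)^2 + 3\,(I_2 - I_1)^2}.
\]
```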
- FIG. 1 illustrates an AFI-based 3D measurement system 10 used to obtain 3D images of one or more objects 22. Two coherent optical beams 14A and 14B generated by a fringe projector 18 are used to illuminate the surface of the object 22 with a pattern of interference fringes 26. An image of the fringe pattern at the object 22 is formed by an imaging system or lens 30 onto an imager that includes an array of photodetectors 34. For example, the detector array 34 can be a two-dimensional charge coupled device (CCD) imaging array. An output signal generated by the detector array 34 is provided to a processor 38. The output signal includes information on the intensity of the light received at each photodetector in the array 34. An optional polarizer 42 is oriented to coincide with the main polarization component of the scattered light. A control module 46 controls parameters of the two coherent optical beams 14 emitted from the fringe projector 18. The control module 46 includes a phase shift controller 50 to adjust the phase difference of the two beams 14 and a spatial frequency controller 54 to adjust the pitch, or separation, of the interference fringes 26 at the object 22.
- The spatial frequency of the fringe pattern is determined by the separation of two virtual sources of coherent optical radiation in the fringe projector 18, the distance from the virtual sources to the object 22, and the wavelength of the radiation. The virtual sources are points from which optical radiation appears to originate although the actual sources of the optical radiation may be located elsewhere. The processor 38 and control module 46 communicate to coordinate the processing of signals from the photodetector array 34 with respect to changes in phase difference and spatial frequency, and the processor 38 determines 3D information for the object surface according to the fringe pattern images.
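The dependence just described is that of two-source (Young's) interference. In the usual small-angle approximation, virtual sources separated by a distance a, radiating at wavelength λ and observed at a standoff distance R, produce a fringe period and spatial frequency of approximately

```latex
\[
  \Lambda \approx \frac{\lambda R}{a}, \qquad
  f = \frac{1}{\Lambda} \approx \frac{a}{\lambda R},
\]
```

so the spatial frequency controller 54 can raise the fringe frequency by increasing the effective source separation. The small-angle form is a standard approximation, not a formula quoted from the patent.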
- The processor 38 calculates the distance from the imaging system 30 and detector array 34 to the object surface for each pixel based on the intensity values for the pixel in the series of 2D images generated after successive phase shifts of the fringe patterns. Thus the processor 38 creates a set of 3D coordinates that can be displayed as a point cloud or a surface map that represents the object surface. The processor 38 communicates with a memory module 58 for storage of 3D data generated during a measurement procedure. A user interface 62 includes an input device and a display to enable an operator such as a clinician to provide operator commands and to observe the acquired 3D information in a near real-time manner. For example, the operator can observe a display of the growth of a graphical representation of the point cloud or surface map as different regions of the surface of the object 22 are measured and additional 3D measurement data are acquired.
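A minimal NumPy sketch of the per-pixel computation described above, assuming the three-step model given earlier (the function and its inputs are illustrative; the patent does not disclose implementation code). Converting the wrapped phase to distance additionally requires phase unwrapping and the projector-camera triangulation geometry, which are outside this sketch:

```python
import numpy as np

def phase_from_three_frames(i0, i1, i2):
    """Per-pixel fringe analysis from three frames phase-shifted by 120 degrees.

    Assumes the model I_k = A + B*cos(phi + 2*pi*k/3). The inputs are 2D
    float arrays (one per fringe pattern); this is a sketch, not the
    patent's own reconstruction code.
    """
    # Wrapped fringe phase at each pixel, in the range [-pi, pi].
    phi = np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)
    # DC level A and fringe modulation B of the fitted sinusoid.
    a = (i0 + i1 + i2) / 3.0
    b = np.sqrt((2.0 * i0 - i1 - i2) ** 2 + 3.0 * (i2 - i1) ** 2) / 3.0
    return phi, a, b
```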
- FIG. 2 illustrates a handheld 3D measurement device in the form of a maneuverable wand 66 that can be used to obtain 3D measurement data for an intra-oral cavity. The wand 66 includes a body section 70 that is coupled through a flexible cable 74 to a processor and other system components (not shown). The wand 66 generates a structured light pattern 78 that is projected from near the projection end 82 to illuminate the object scene to be measured. For example, the structured light pattern 78 can be an interferometric fringe pattern based on the principles of an AFI measurement system as described above for FIG. 1. The wand 66 can be used to obtain 3D data for a portion of a dental arch. The wand 66 is maneuvered within the intra-oral cavity by a clinician so that 3D data are obtained for all surfaces that can be illuminated by the structured light pattern 78.
- FIG. 3 shows an example of how a 3D measurement of the upper dental arch is performed using the wand 66 of FIG. 2. In this example, the wand 66 is a maneuverable component of an AFI-type 3D measurement system. Fringes are projected from the wand 66 onto the teeth 86 and adjacent gum tissue 90 in a measurement field of view 94 during a portion of a buccal scan of the dental arch. 3D data obtained from the measurement scan are displayed to the clinician preferably as a 3D point cloud or as a surface map (e.g., a wireframe representation) that shows the measured surfaces of the teeth 86 and gingiva 90.
- Referring also to FIG. 1, the imaging array 34 receives an image of the fringe pattern projected onto the teeth 86 and adjacent gingiva 90 within the measurement field of view 94. Due to the translucent nature of the enamel, some of the light in the projected fringe pattern penetrates the surface of the teeth 86 and is scattered in a subsurface region. The scattered light typically results in degradation of the images of the fringe pattern. The degree of translucency determines the amount of light in the fringe pattern that penetrates the surface and is scattered below. If the scattered light contribution from the subsurface region is significant relative to the scattered light contribution from the fringe illumination at the surface, the apparent location (i.e., apparent phase) of the fringe pattern in the images can be different than the actual location of the fringe patterns on the surface of the teeth 86. Preferably, the fringe projector 18 uses an illumination wavelength that increases internal scatter near the surface. For example, the fringe illumination can include a near ultraviolet wavelength or shorter visible wavelength (e.g., from approximately 350 nm to 500 nm), which results in greater scatter near the surface and less penetration below the surface than longer wavelengths. In addition, the fringe pattern is preferably configured to have a high spatial frequency such that the light scattered from the shallow subsurface region results in a nearly uniform background light contribution to the images of the fringe patterns. During processing of the 2D images to determine the 3D data for the teeth 86, the background contribution from the subsurface region is ignored. Moreover, the magnitude of residual error induced by any spatially-varying intensity contribution from the subsurface region is less significant because the contribution is limited to a shallow region below the surface of each tooth 86. In an exemplary embodiment, the wavelength of the projected fringe pattern is 405 nm and the spatial frequency, or pitch, of the fringe pattern at the tooth surface is at least 1 fringe/mm.
- FIG. 4 is a flowchart representation of an embodiment of a method 100 of object classification for 3D data for an object scene. The method 100 includes illuminating (step 110) the object scene with a structured light pattern. By way of examples, the structured light pattern can be a striped intensity pattern generated by interference of coherent optical beams or by shadow mask projection. Images of the illuminated object scene are acquired (step 120). In some applications the object scene corresponds to a measurement field of view of a 3D imaging device and a larger object scene is measured by maneuvering the device so that the measurement field of view includes other regions of the larger object scene. The coordinates of the 3D points on the surfaces of the object scene are determined (step 130) from the acquired images.
- A translucence value for each measured 3D point is determined (step 140). The object scene can include objects that are distinguishable from each other. For example, two objects may be comprised of different materials that exhibit different translucence. Thus the translucence value can be used to determine (step 150) the type of object, or object class, for the point. Object classification can be based on comparing the translucence value to one or more threshold values associated with different types of objects. For example, the object classification can be based on determining which range in a plurality of ranges of translucence values includes the translucence value for the point. In this example, each range of translucence values corresponds to a unique object classification. Optionally, a reflectance value corresponding to the magnitude of the light scattered from the surface of a corresponding object is used in combination with the translucence value to determine the object class. In this case, the reflectance value is compared to reflectance threshold values or ranges of threshold values that are associated, in combination with the translucence values, with various object classes.
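A minimal sketch of the range-based comparison in steps 140 and 150, with hypothetical class names and translucence ranges; the patent specifies the comparison scheme but no numeric thresholds:

```python
# Hypothetical translucence ranges per object class; the patent describes
# range-based classification but gives no numeric values.
CLASS_RANGES = {
    "gingiva": (0.00, 0.15),
    "dental_structure": (0.15, 0.40),
    "tooth_enamel": (0.40, 1.00),
}

def classify_point(translucence, reflectance=None, reflectance_min=0.05):
    """Assign an object class from a normalized translucence value.

    Optionally gate on a reflectance value, mirroring the optional
    two-quantity comparison described above (the threshold is illustrative).
    """
    if reflectance is not None and reflectance < reflectance_min:
        return "unclassified"  # too little surface signal to decide
    for name, (lo, hi) in CLASS_RANGES.items():
        if lo <= translucence < hi:
            return name
    return "unclassified"
```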
- A graphical display of the object scene is generated (step 160). The display includes an indication of the object class for each of the points. For example, the display can be a 3D surface map representation where a wireframe representation, surface element depiction or the like is displayed with different colors to indicate the object classes. Other graphical parameters can be used to indicate the object classification for each point. In another example, the display can be a 3D point cloud where each point has a color associated with its object classification. In some embodiments, the graphical display can include boundary lines or similar features to segment or differentiate different regions of the object scene into graphical objects so that different objects can easily be recognized.
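As one way to realize such a display, each 3D point can carry an RGB color keyed to its object class; a brief sketch with assumed class names and colors (any rendering backend could consume the resulting arrays):

```python
import numpy as np

# Illustrative class-to-color mapping for rendering the classified cloud.
CLASS_COLORS = {
    "tooth_enamel": (0.95, 0.95, 0.90),
    "gingiva": (0.85, 0.40, 0.45),
    "dental_structure": (0.60, 0.60, 0.65),
    "unclassified": (0.30, 0.30, 0.30),
}

def colorize_point_cloud(points_xyz, classes):
    """Return an (N, 3) RGB array aligned with the (N, 3) point array."""
    colors = np.array([CLASS_COLORS[c] for c in classes], dtype=float)
    assert colors.shape == (len(points_xyz), 3)
    return colors
```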
- Optionally, a color image of the illuminated region of the object scene is acquired. The color data acquired for each point can be used in combination with the translucence value for the point to make the determination of the object class for the point. The color image can be acquired under passive lighting or a supplemental light source such as a white light source or broadband light source can be used to improve the ability to perform differentiation by color. In an alternative embodiment, sequential operation of spectral light sources such as red, green and blue light emitting diodes (LEDs) can be used to generate RGB images. In this manner, a monochromatic imager can be used to generate the color data to supplement the object classification.
- In an alternative option, a grayscale image of the illuminated region of the object scene is acquired. The object grayscale value for each point is used in combination with the translucence value for the point to determine the object class for the point. The grayscale image may be acquired with passive lighting or a supplemental light source can be utilized.
- FIG. 5 is a flowchart representation of an embodiment of a method 200 of object classification of 3D data for an object scene. The method 200 includes illuminating (step 210) an object scene with a sequence of structured light patterns of different spatial phase. Preferably, the structured light patterns are interferometric intensity patterns having a sinusoidal intensity variation in one dimension. The sinusoidal intensity pattern is generated, for example, by the interference of two coherent beams as described above with respect to FIG. 1. Preferably, the sequence includes a set of three sinusoidal intensity patterns each having a spatial phase that is offset from the other two sinusoidal intensity patterns by 120°.
- An image of the illuminated object scene is acquired (step 220) for each of the light patterns in the sequence. 3D data are determined (step 230) for points in the object scene based on the images of the sequence of structured light patterns.
- A background intensity value is calculated (step 240) for each point from the sequence of images. In general, the background intensity value for a point in the object scene is primarily due to the translucence of the object associated with the point if other sources of illumination of the object scene are maintained at low levels and if the image acquisition time is sufficiently small. Thus the background intensity value can be used as a measure of the translucency (i.e., a translucence value) for the point. In an embodiment based on the projection of the three sinusoidal intensity patterns, the background intensity value for a point is determined by first mathematically fitting a sinusoidal intensity variation to the three intensity values for the location of the point in the 2D image of the illuminated object scene. For example, the mathematical fitting can be a least squares fit of a sinusoidal function. The background intensity is present in all the images of the sequence and degrades the contrast. The value of the background intensity is determined as the minimum value of the fitted sinusoidal function.
- As the background intensity value is closely related to the translucence value, the background intensity level can be used to determine (step 250) the type of object, or object class, for the point, for example, by comparing the background intensity value to one or more threshold values or background intensity value ranges associated with different types of objects. In a further embodiment, object classification is a two-step comparison in which object classification also includes comparing the maximum of the fitted sinusoidal function to one or more threshold intensity values.
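A sketch of steps 240 and 250 under the three-pattern model given earlier: for exactly three samples the least-squares sinusoidal fit reduces to the closed form below, the fit minimum A - B is the background value described above, and the fit maximum A + B supports the two-step comparison. All thresholds and class names are hypothetical:

```python
import numpy as np

def background_and_peak(i0, i1, i2):
    """Closed-form fit of A + B*cos(phi + delta_k) to three samples taken
    at 120-degree phase offsets; returns the fit minimum (background, A - B)
    and maximum (A + B).

    Equivalent to the least-squares sinusoidal fit described above for
    exactly three samples; a sketch, not the patent's own code.
    """
    a = (i0 + i1 + i2) / 3.0
    b = np.sqrt((2.0 * i0 - i1 - i2) ** 2 + 3.0 * (i2 - i1) ** 2) / 3.0
    return a - b, a + b

def classify_by_background(background, peak, bg_threshold=0.2, peak_min=0.3):
    """Two-step comparison with illustrative thresholds: a strong background
    suggests a translucent material such as enamel, a weak one opaque tissue."""
    if peak < peak_min:
        return "unclassified"  # fringe signal too weak to trust the fit
    return "tooth" if background > bg_threshold else "gingiva"
```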
- A graphical display of the object scene is generated (step 260) and includes an indication of the object class for each of the points. The display can be any type of surface map representation or a 3D point cloud, as described above with respect to the method of FIG. 4, in which color or other graphical features are used to indicate different object classes and structures. Optionally, color or grayscale images of the object scene are acquired and used in combination with the background intensity values to make the determinations of the object classes.
- Although the embodiments described above relate primarily to object classification in which the object scene is illuminated using a structured light pattern, it will be recognized that object classification by determination of translucence can be performed under more general illumination conditions. For instance, the object scene can be illuminated with an optical beam in any manner that allows the translucence value for points or regions on the object to be determined. Optical coherence tomography (OCT) systems and confocal microscopy systems are examples of measurement systems that can be adapted for translucence measurement and object classification. The characteristics of the optical beam, such as wavelength or spectral width, can be chosen to best assist in discriminating between different object classes. Moreover, grayscale or color image data for the object scene can be utilized in various embodiments to improve the object classification capability.
- While the invention has been shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as recited in the accompanying claims.
Claims (38)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/217,652 US9436868B2 (en) | 2010-09-10 | 2011-08-25 | Object classification for measured three-dimensional object scenes |
DK11179804.7T DK2428913T3 (en) | 2010-09-10 | 2011-09-02 | Object classification for measured three-dimensional object scenes |
EP11179804.7A EP2428913B1 (en) | 2010-09-10 | 2011-09-02 | Object classification for measured three-dimensional object scenes |
JP2011195659A JP5518018B2 (en) | 2010-09-10 | 2011-09-08 | Data acquisition method for 3D imaging |
CN201110273996.XA CN102402799B (en) | 2010-09-10 | 2011-09-09 | Object classification for measured three-dimensional object scenes |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US38173110P | 2010-09-10 | 2010-09-10 | |
US13/217,652 US9436868B2 (en) | 2010-09-10 | 2011-08-25 | Object classification for measured three-dimensional object scenes |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120062716A1 true US20120062716A1 (en) | 2012-03-15 |
US9436868B2 US9436868B2 (en) | 2016-09-06 |
Family
ID=44785311
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/217,652 Active 2033-09-28 US9436868B2 (en) | 2010-09-10 | 2011-08-25 | Object classification for measured three-dimensional object scenes |
Country Status (5)
Country | Link |
---|---|
US (1) | US9436868B2 (en) |
EP (1) | EP2428913B1 (en) |
JP (1) | JP5518018B2 (en) |
CN (1) | CN102402799B (en) |
DK (1) | DK2428913T3 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102013200898A1 (en) * | 2013-01-21 | 2014-07-24 | Siemens Aktiengesellschaft | Endoscope, especially for minimally invasive surgery |
CN106999020A * | 2014-12-17 | 2017-08-01 | 卡尔斯特里姆保健公司 | 3D fluorescence imaging in the oral cavity |
JP2016166765A (en) * | 2015-03-09 | 2016-09-15 | 株式会社モリタ製作所 | Three-dimensional measuring device and three-dimensional measurement method |
CN105160346B * | 2015-07-06 | 2019-02-01 | 上海大学 | Method for recognizing curdy and greasy tongue coating based on texture and distribution features |
CN105488524B * | 2015-11-26 | 2018-12-21 | 中山大学 | Lip-reading recognition method and system based on a wearable device |
CN105894581B (en) * | 2016-01-22 | 2020-05-19 | 上海肇观电子科技有限公司 | Method and device for presenting multimedia information |
CN106707485A * | 2016-12-21 | 2017-05-24 | 中国科学院苏州生物医学工程技术研究所 | Miniature structured-light microscopic illumination system |
CN110084858A * | 2019-04-30 | 2019-08-02 | 中国石油大学(华东) | Gum line coordinate measuring method and device for a hot-pressed tooth model |
US11250580B2 (en) | 2019-09-24 | 2022-02-15 | Dentsply Sirona Inc. | Method, system and computer readable storage media for registering intraoral measurements |
US11826016B2 (en) * | 2020-01-31 | 2023-11-28 | Medit Corp. | External light interference removal method |
CN111536921A (en) * | 2020-03-19 | 2020-08-14 | 宁波吉利汽车研究开发有限公司 | Online measurement state monitoring method and device and storage medium |
CN116327405B (en) * | 2023-04-13 | 2024-01-30 | 广州市小萤成像技术有限公司 | Hand-held oral cavity scanner |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4575805A (en) | 1980-12-24 | 1986-03-11 | Moermann Werner H | Method and apparatus for the fabrication of custom-shaped implants |
US5440393A (en) | 1990-03-13 | 1995-08-08 | Com Dent Gmbh | Process and device for measuring the dimensions of a space, in particular a buccal cavity |
DE4229466C2 (en) | 1992-09-03 | 2001-04-26 | Kaltenbach & Voigt | Tooth measurement without calibration body |
JP3324013B2 (en) | 1993-10-01 | 2002-09-17 | 大日本印刷株式会社 | Transmittance measurement method and apparatus |
JP2879003B2 (en) | 1995-11-16 | 1999-04-05 | 株式会社生体光情報研究所 | Image measurement device |
US5870191A (en) | 1996-02-12 | 1999-02-09 | Massachusetts Institute Of Technology | Apparatus and methods for surface contour measurement |
JPH1047936A (en) | 1996-08-07 | 1998-02-20 | Egawa:Kk | Shape measuring method |
US6301004B1 (en) * | 2000-05-31 | 2001-10-09 | Lj Laboratories, L.L.C. | Apparatus and method for measuring optical characteristics of an object |
US6594539B1 (en) | 1999-03-29 | 2003-07-15 | Genex Technologies, Inc. | Three-dimensional dental imaging method and apparatus having a reflective member |
JP2003515417A (en) * | 1999-12-08 | 2003-05-07 | エックス−ライト、インコーポレイテッド | Optical metrology device and related methods |
US6386867B1 (en) * | 2000-11-30 | 2002-05-14 | Duane Milford Durbin | Method and system for imaging and modeling dental structures |
WO2004100068A2 (en) | 2003-05-05 | 2004-11-18 | D3D, L.P. | Optical coherence tomography imaging |
JP4469977B2 (en) | 2004-07-09 | 2010-06-02 | 日本電信電話株式会社 | Teeth optical interference tomography device |
US7339586B2 (en) * | 2004-04-23 | 2008-03-04 | Siemens Medical Solutions Usa, Inc. | Method and system for mesh-to-image registration using raycasting |
NZ565509A (en) * | 2005-07-18 | 2011-08-26 | Andreas Mandelis | Method and apparatus using infrared photothermal radiometry (PTR) and modulated laser luminescence (LUM) for diagnostics of defects in teeth |
US7596253B2 (en) * | 2005-10-31 | 2009-09-29 | Carestream Health, Inc. | Method and apparatus for detection of caries |
US7668355B2 (en) * | 2006-08-31 | 2010-02-23 | Carestream Health, Inc. | Method for detection of caries |
US7702139B2 (en) * | 2006-10-13 | 2010-04-20 | Carestream Health, Inc. | Apparatus for caries detection |
JP2009090091A (en) * | 2007-09-18 | 2009-04-30 | Olympus Corp | Dental observation apparatus |
US20090133260A1 (en) | 2007-11-26 | 2009-05-28 | Ios Technologies, Inc | 3D dental shade matching and apparatus |
US7929151B2 (en) | 2008-01-11 | 2011-04-19 | Carestream Health, Inc. | Intra-oral camera for diagnostic and cosmetic imaging |
JP2009204991A (en) * | 2008-02-28 | 2009-09-10 | Funai Electric Co Ltd | Compound-eye imaging apparatus |
JP5276006B2 (en) | 2008-05-13 | 2013-08-28 | パナソニック株式会社 | Intraoral measurement device and intraoral measurement system |
JP2010148860A (en) * | 2008-11-28 | 2010-07-08 | Panasonic Corp | Intraoral measuring instrument |
CN102334006A | 2009-02-25 | 2012-01-25 | Intensity and color display for a three-dimensional measurement system |
US8570530B2 (en) * | 2009-06-03 | 2013-10-29 | Carestream Health, Inc. | Apparatus for dental surface shape and shade imaging |
2011
- 2011-08-25 US US13/217,652 patent/US9436868B2/en active Active
- 2011-09-02 DK DK11179804.7T patent/DK2428913T3/en active
- 2011-09-02 EP EP11179804.7A patent/EP2428913B1/en active Active
- 2011-09-08 JP JP2011195659A patent/JP5518018B2/en active Active
- 2011-09-09 CN CN201110273996.XA patent/CN102402799B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5651675A (en) * | 1994-03-23 | 1997-07-29 | Gary Singer | Healing cap system |
US20020130873A1 (en) * | 1997-01-29 | 2002-09-19 | Sharp Kabushiki Kaisha | Method of processing animation by interpolation between key frames with small data quantity |
US6674880B1 (en) * | 1999-11-24 | 2004-01-06 | Confirma, Inc. | Convolution filtering of similarity data for visual display of enhanced image |
US20060079981A1 (en) * | 1999-11-30 | 2006-04-13 | Rudger Rubbert | Interactive orthodontic care system based on intra-oral scanning of teeth |
US6507675B1 (en) * | 2001-03-23 | 2003-01-14 | Shih-Jong J. Lee | Structure-guided automatic learning for image feature enhancement |
US20050234946A1 (en) * | 2004-04-20 | 2005-10-20 | Samsung Electronics Co., Ltd. | Apparatus and method for reconstructing three-dimensional graphics data |
US20070076074A1 (en) * | 2005-10-05 | 2007-04-05 | Eastman Kodak Company | Method and apparatus for print medium determination |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130116512A1 (en) * | 2011-04-26 | 2013-05-09 | Mir Imran | Mouthpiece for measurement of biometric data of a diver and underwater communication |
US9795296B2 (en) * | 2011-04-26 | 2017-10-24 | Incube Labs, Llc | Mouthpiece for measurement of biometric data of a diver and underwater communication |
US20140288463A1 (en) * | 2011-10-21 | 2014-09-25 | Koninklijke Philips N.V. | Method and apparatus for determining anatomic properties of a patient |
US9538955B2 (en) * | 2011-10-21 | 2017-01-10 | Koninklijke Philips N.V. | Method and apparatus for determining anatomic properties of a patient |
CN104718446A (en) * | 2012-08-23 | 2015-06-17 | 西门子能量股份有限公司 | System and method for visual inspection and 3D white light scanning of off-line industrial gas turbines and other power generation machinery |
DE102012021185A1 (en) * | 2012-10-30 | 2014-04-30 | Smart Optics Sensortechnik Gmbh | Method for 3D optical measurement of teeth with reduced point-spread function |
US9204952B2 (en) | 2012-10-30 | 2015-12-08 | Smart Optics Sensortechnik Gmbh | Method for optical 3D measurement of teeth with reduced point spread function |
US9962244B2 (en) | 2013-02-13 | 2018-05-08 | 3Shape A/S | Focus scanning apparatus recording color |
US10736718B2 (en) | 2013-02-13 | 2020-08-11 | 3Shape A/S | Focus scanning apparatus recording color |
US10383711B2 (en) | 2013-02-13 | 2019-08-20 | 3Shape A/S | Focus scanning apparatus recording color |
US20140253686A1 (en) * | 2013-03-08 | 2014-09-11 | Victor C. Wong | Color 3-d image capture with monochrome image sensor |
EP2786722A1 (en) * | 2013-03-08 | 2014-10-08 | Carestream Health, Inc. | Color 3-D image capture with monochrome image sensor |
US9675428B2 (en) | 2013-07-12 | 2017-06-13 | Carestream Health, Inc. | Video-based auto-capture for dental surface imaging apparatus |
WO2015006518A1 (en) * | 2013-07-12 | 2015-01-15 | Carestream Health, Inc. | Video-based auto-capture for dental surface imaging apparatus |
US10281264B2 (en) * | 2014-12-01 | 2019-05-07 | Canon Kabushiki Kaisha | Three-dimensional measurement apparatus and control method for the same |
US20190110690A1 (en) * | 2016-05-10 | 2019-04-18 | Masaki KAMBARA | Dental health assessment assisting apparatus and dental health assessment assisting system |
US10842383B2 (en) * | 2016-05-10 | 2020-11-24 | Masaki KAMBARA | Dental health assessment assisting apparatus and dental health assessment assisting system |
CN106683087A * | 2016-12-26 | 2017-05-17 | 华南理工大学 | Method for distinguishing coated-tongue constitution based on a deep neural network |
WO2018222823A1 (en) * | 2017-06-01 | 2018-12-06 | Gooee Limited | Monitoring system |
US11295113B2 (en) * | 2019-04-25 | 2022-04-05 | ID Lynx Ltd. | 3D biometric identification system for identifying animals |
Also Published As
Publication number | Publication date |
---|---|
EP2428913A3 (en) | 2014-05-14 |
CN102402799A (en) | 2012-04-04 |
JP2012059268A (en) | 2012-03-22 |
EP2428913A2 (en) | 2012-03-14 |
DK2428913T3 (en) | 2018-09-17 |
JP5518018B2 (en) | 2014-06-11 |
US9436868B2 (en) | 2016-09-06 |
EP2428913B1 (en) | 2018-07-25 |
CN102402799B (en) | 2015-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9436868B2 (en) | Object classification for measured three-dimensional object scenes | |
AU2018275860B2 (en) | Method for intraoral scanning | |
CN104990516B (en) | Intensity and color display for a three-dimensional metrology system | |
JP6045676B2 (en) | Intraoral imaging device | |
US20230181020A1 (en) | Properties measurement device | |
JP6430934B2 (en) | Intraoral 3D scanner to measure fluorescence | |
US9955872B2 (en) | Method of data acquisition for three-dimensional imaging | |
US11529056B2 (en) | Crosstalk reduction for intra-oral scanning using patterned light | |
JP6454489B2 (en) | Observation system | |
KR20150119191A (en) | Focus scanning apparatus recording color | |
US20240035891A1 (en) | Color measurement with structured light | |
DK2428162T3 (en) | Method of recording data for three-dimensional imaging of intra-oral cavities |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | AS | Assignment | Owner name: DIMENSIONAL PHOTONICS INTERNATIONAL, INC., MASSACHUSETTS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DILLON, ROBERT F.;ZHAO, BING;REEL/FRAME:027117/0415. Effective date: 20111019 |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |
 | AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, NORTH CAROLINA. Free format text: SECURITY INTEREST;ASSIGNOR:DENTAL IMAGING TECHNOLOGIES CORPORATION;REEL/FRAME:052611/0340. Effective date: 20200506 |
 | AS | Assignment | Owner name: DENTAL IMAGING TECHNOLOGIES CORPORATION, CALIFORNIA. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:055886/0194. Effective date: 20210408 |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |