WO2013021022A2 - Base de données et procédé servant à générer une représentation d'objet dentaire virtuel à partir d'une prise de vue - Google Patents

Base de données et procédé servant à générer une représentation d'objet dentaire virtuel à partir d'une prise de vue Download PDF

Info

Publication number
WO2013021022A2
WO2013021022A2 PCT/EP2012/065557 EP2012065557W WO2013021022A2 WO 2013021022 A2 WO2013021022 A2 WO 2013021022A2 EP 2012065557 W EP2012065557 W EP 2012065557W WO 2013021022 A2 WO2013021022 A2 WO 2013021022A2
Authority
WO
WIPO (PCT)
Prior art keywords
image data
modality
virtual
correlated
data
Prior art date
Application number
PCT/EP2012/065557
Other languages
German (de)
English (en)
Other versions
WO2013021022A3 (fr)
Inventor
Johannes Ulrici
Original Assignee
Sirona Dental Systems Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sirona Dental Systems Gmbh filed Critical Sirona Dental Systems Gmbh
Publication of WO2013021022A2 publication Critical patent/WO2013021022A2/fr
Publication of WO2013021022A3 publication Critical patent/WO2013021022A3/fr

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/51 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for dentistry
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions
    • A61C9/0046 Data acquisition means or methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions
    • A61C9/0046 Data acquisition means or methods
    • A61C9/0053 Optical means or methods, e.g. scanning the teeth by a laser or light beam
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions
    • A61C9/0046 Data acquisition means or methods
    • A61C9/0053 Optical means or methods, e.g. scanning the teeth by a laser or light beam
    • A61C9/006 Optical means or methods, e.g. scanning the teeth by a laser or light beam projecting one or more stripes or patterns on the teeth
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions
    • A61C9/0046 Data acquisition means or methods
    • A61C9/0086 Acoustic means or methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5247 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical

Definitions

  • The invention relates to a method for generating a virtual dental object representation starting from a recording with a first modality.
  • DE 199 52 962 B4 discloses a method for creating a drilling aid for a dental implant, wherein first an X-ray image of the jaw and teeth of a patient is acquired, then a three-dimensional optical measurement of the visible surface of the jaw and the teeth is performed, and then a correlation of the measurement data from the X-ray image and the three-dimensional optical measurement takes place on the basis of specific markers. This can be done automatically or interactively.
  • DE 199 24 291 C1 likewise discloses a method for correlating two or more image data sets which represent the same object in partial regions.
  • The 3D data sets can be accurately correlated by known automatic methods, i.e. the transformation parameters between the reference systems of the two individual recordings can be determined.
  • The object appears in the images to be correlated in approximately the same position in the image, with only the viewing angle and the distance to the object being varied.
  • The use of a statistical model in the image processing of three-dimensional data sets is already known in the field of medical imaging techniques, for example from the article "A 3D Statistical Shape Model of the Pelvic Bone for Segmentation", Proceedings of SPIE, Volume 5370, Medical Imaging 2004, pp. 1341-1351.
  • The object of this invention is to provide a method for creating a virtual dental object representation with superimposed image data of different modalities in a simple and time-saving manner. Presentation of the invention
  • The invention relates to a method for generating a virtual dental object representation starting from a recording of a first modality of at least a part of a dental object. By applying a transformation method to the recording, a virtual model is then generated that is superimposed on the dental object or a part thereof, or is adjacent to the dental object or a part thereof, and that contains additional information about the dental object which differs from the recording.
  • the virtual model complements the image and together with it forms the virtual dental object representation.
  • The recording of the first modality can be an MRI recording, a two-dimensional or three-dimensional X-ray recording, a two-dimensional or three-dimensional optical recording, or a three-dimensional ultrasound recording.
  • The aim of the present method is the creation of a virtual object representation comprising a superimposition of the recording with image data of at least one other modality or with virtual image data which in their type and configuration do not correspond to any modality, but which comprise anatomical structures for orientation in the recording.
  • The anatomical structures of the virtual image data can, for example, be color-coded in order to improve the orientation in the recording.
  • the anatomical structures in the virtual image data also do not have to correspond exactly to the dimensions of realistic anatomical structures, but can also be modeled.
  • The starting point of the method is a two-dimensional or a three-dimensional recording with a first modality, which can be an MRI recording, an X-ray recording, an optical recording or an ultrasound recording.
  • the three-dimensional radiograph can be recorded by the DVT method or by the CT method.
  • An advantage of the present method is that only the individual recording with the first modality must be generated; the virtual model is generated to match the recording and corresponds in its structure and form to image data of a second modality and/or image data of a third modality and/or to virtual image data which do not correspond to any modality, although they comprise anatomical structures for orientation in the recording.
  • Such virtual image data may include, for example, anatomical structures identified in different colors to facilitate orientation in the image.
  • The displayed virtual model does not have to be diagnostically perfect, but serves the orientation within the 3D volume containing the virtual representation of the dental object.
  • The virtual model is thus merely a simulated approximation.
  • The created object representation can be used in particular for patient communication, in order to discuss the different possible treatment methods visually with the patient.
  • The generated 3D volume can also be used as a navigation aid for diagnosis in the recording.
  • The transformation method can be performed using a statistical model of the dental object, wherein the selection of an instance of the statistical model corresponds to a transformation of the image data of the first modality into virtual image data that contain the anatomical structures.
  • The statistical model can span a space of different appearances or instances of a dental object, such as a jawbone or a tooth, in a recording of a second modality.
  • The statistical model can, for example, describe a jawbone in a three-dimensional X-ray recording.
  • The statistical model may include several items of biometric information or factors, such as certain characteristic lengths, radii of curvature and surfaces of the jawbone.
  • the statistical model can also describe entire surfaces of the dental object. Each of the factors may be associated with variability.
  • A particular partial region of the statistical model may be iteratively fitted to a particular partial region of the recording of the first modality by modifying the factors.
  • The partial region of the recording can be the corresponding counterpart to the partial region of the statistical model.
  • The partial region of the recording may be a specific bone structure of the jawbone in the X-ray image, and the counterpart in the MRI image data of the statistical model may be the soft tissue that surrounds this bone structure. Fitting the partial region of the statistical model creates the virtual model that complements the recording.
  • the virtual model can be larger than the entire image.
  • An X-ray recording of the jawbone can, for example, be supplemented with a virtual whole-head model that simulates MRI image data.
  • The X-ray recording can then be presented superimposed with the virtual model of the entire head as a virtual dental object representation.
  • the statistical model can be formed over a certain number of data records.
  • The statistical model can be formed over several data sets with surfaces of different jawbones. The differences in these data sets then determine the variance of the surface of the jawbone.
  • The factors, such as certain characteristic lengths and radii of curvature, can be used as basis vectors for the principal component analysis.
  • Via the principal component analysis, a new basis may be defined in the space of the statistical model that the data sets form.
  • The basis vectors can be sorted according to the variance; that is, the first basis vector describes a first parameter with the highest variance, the second basis vector describes a second parameter with the second-highest variance, and so on.
  • The advantage of this principal component analysis is that the shape of a particular instance of the statistical model, that is, an element of said space, can be described very accurately with a minimum number of basis vectors.
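  • As a non-binding illustration of such a principal component analysis (not part of the patent), the following Python sketch builds a statistical model from flattened surface vectors using NumPy and returns the basis vectors already sorted by decreasing variance; the function names and array shapes are assumptions made only for this example.

```python
import numpy as np

def build_statistical_model(shape_vectors):
    """shape_vectors: (n_datasets, 3*m) array, one flattened surface per training data set."""
    X = np.asarray(shape_vectors, dtype=float)
    mean_shape = X.mean(axis=0)
    # Principal component analysis via SVD of the centered data matrix.
    U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    variances = (s ** 2) / (X.shape[0] - 1)   # sorted, largest variance first
    return mean_shape, Vt, variances          # rows of Vt are the basis vectors

def model_instance(mean_shape, basis, coefficients):
    """An instance of the model: the mean shape plus a weighted sum of a few basis vectors."""
    k = len(coefficients)
    return mean_shape + np.asarray(coefficients) @ basis[:k]
```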
  • The selection of an instance of the statistical model can describe a transformation of image data of the first modality into image data of at least one other modality or into combined image data of the first and at least one other modality.
  • The virtual model is then generated, which is similar in its shape and configuration to the image data of the at least one other modality or to the combined image data of the first and the at least one further modality.
  • In the present method for generating a virtual object representation starting from a recording of the first modality, the virtual model is generated using a statistical model that describes combinable image data of at least two different modalities of different dental objects; the virtual model complements the recording and thereby the virtual object representation is formed.
  • the generation of the statistical model may additionally include the following steps.
  • the correlated regions in the image data of different modalities are manually determined or extracted from the image data by means of a known segmentation method.
  • Subsequently, the extracted correlated regions are joined together so that the image data of the different modalities are superimposed to form an overall image.
  • The correlated regions may be, for example, characteristic anatomical structures in the image data of an entire head of a patient.
  • The surfaces of the extracted correlated regions in the superimposed image data of different modalities can be discretized by a grid. In the case of a grid with m grid points, the anatomical structure can thus be described as a vector of 3*m dimensions.
  • The grid can, for example, consist of triangles or of arbitrary polygons.
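  • A minimal sketch of this discretization step, assuming NumPy and purely illustrative helper names: each correlated surface with m grid points becomes a 3*m vector, and the superimposed correlated regions of the different modalities are concatenated into one combined vector per training data set.

```python
import numpy as np

def surface_to_vector(grid_points):
    """grid_points: (m, 3) array of mesh vertices; returns a 3*m-dimensional vector."""
    return np.asarray(grid_points, dtype=float).reshape(-1)

def combine_modalities(xray_surface, mri_surface):
    """Concatenate the discretized correlated regions of the superimposed
    modalities into one combined vector for a single training data set."""
    return np.concatenate([surface_to_vector(xray_surface),
                           surface_to_vector(mri_surface)])
```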
  • a principal component analysis is performed for the combined image data of the respective data sets.
  • The principal component analysis leads to the definition of a new coordinate system into which the respective data sets of combined image data of different modalities can be converted via a basis transformation.
  • The general representation of an arbitrary combined image data set in this new coordinate system is called the statistical model.
  • The coordinate system defined by the principal component analysis is characterized by the fact that a large part of the information about the type and structure of the correlated regions is described by a very small number of coordinates, and only little information about the shape of the extracted anatomical structure is contained in the remaining coordinates.
  • The maximum number of dimensions of the coordinate system of the statistical model is therefore 3*m*n.
  • The principal component analysis is mathematically a basis transformation in which the basis vectors are selected so that the statistical dispersion of a set of objects can be described with few coordinates.
  • Anatomical structures are extracted, and the extracted anatomical structures in the image data of the different modalities are decomposed into several basis vectors using a principal component analysis; the statistical model of the anatomical structures is thereby generated. When applying the statistical model, characteristic anatomical structures are extracted from the recording and iteratively fitted to the structures of the statistical model in the image data, so that the virtual model is thereby generated.
  • Characteristic structures are extracted from the recording of the first modality.
  • the extracted structures from the recording are iteratively adapted to the statistical model so that the virtual model of the second and / or third modality is automatically generated as well.
  • the extracted anatomical structures may be discretized over a grid prior to application of the statistical model for simplicity.
  • The recording thus provides the input parameters for the statistical model.
  • The output values of the statistical model then form the virtual model.
  • the database is no longer required.
  • The virtual model can then be calculated from the recording of the first modality alone, using the statistical model.
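  • The following Python sketch illustrates how such a calculation could look; it fits the coordinates extracted from the recording of the first modality to the first few basis vectors in a single least-squares step (a simplification of the iterative fitting described above) and reconstructs the complete vector, whose previously unobserved part plays the role of the virtual model. All names and the fitting strategy are assumptions for illustration, not the patented procedure itself.

```python
import numpy as np

def fit_to_partial_region(mean_shape, basis, observed_idx, observed_values, n_modes=10):
    """Fit the first n_modes coefficients so that the model instance matches the
    coordinates extracted from the recording (observed_idx indexes the 3*m vector)."""
    A = basis[:n_modes][:, observed_idx].T          # (n_observed, n_modes)
    b = observed_values - mean_shape[observed_idx]  # residual w.r.t. the mean shape
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Reconstruct the complete instance; the part not covered by the recording
    # is the virtual model that complements it.
    return mean_shape + coeffs @ basis[:n_modes]
```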
  • the recording may be the MRI scan.
  • In the MRI recording, a first correlated region is determined using a segmentation method; the statistical model is applied to this first correlated region, and the MRI recording is thereby transferred into the virtual model, which corresponds in its shape and structure to the MRI image data superimposed with X-ray image data and/or ultrasound image data and/or optical image data.
  • A first statistical model for transferring the MRI recording into the X-ray image data superimposed with the MRI recording, and/or a second statistical model for transferring the MRI recording into the ultrasound image data superimposed with the MRI recording, and/or a third statistical model for transferring the MRI recording into the optical image data superimposed with the MRI recording can be applied to the MRI recording; a virtual model is thereby generated which consists of a virtual X-ray recording, a virtual ultrasound recording and/or a virtual optical recording and complements the MRI recording accordingly.
  • the recording can be the X-ray image.
  • In the X-ray recording, a first correlated region is determined using a segmentation method; the statistical model is applied to this first correlated region, and the X-ray recording is thereby transferred into the virtual model, which corresponds in its form and structure to the X-ray image data superimposed with MRI image data and/or ultrasound image data and/or optical image data.
  • The recording can be the ultrasound recording.
  • In the ultrasound recording, a first correlated region is determined using a segmentation method; the statistical model is applied to this first correlated region, and the ultrasound recording is thereby transferred into the virtual model, which corresponds in its shape and structure to the ultrasound image data superimposed with MRI image data, optical image data and/or X-ray image data.
  • A first statistical model for transferring the ultrasound recording into the MRI image data superimposed with the ultrasound recording, and/or a second statistical model for transferring the ultrasound recording into the X-ray image data superimposed with the ultrasound recording, and/or a third statistical model for transferring the ultrasound recording into the optical image data superimposed with the ultrasound recording can be applied to the ultrasound recording; a virtual model is thereby generated which consists of a virtual MRI recording, a virtual optical recording and/or a virtual X-ray recording and complements the ultrasound recording accordingly.
  • The transformation method can also be based on the application of a database with dental image data, the database comprising a plurality of data sets of different dental objects, each data set containing first image data and at least second image data, the first image data being generated with the first modality and the second image data with a further modality.
  • a first modality may be the X-ray acquisition procedure
  • a second modality may be the MRI acquisition procedure
  • a third modality may be the ultrasound imaging procedure
  • a fourth modality may be an optical imaging procedure.
  • the individual image data namely x-ray image data, MRI image data, optical image data and ultrasound image data, are stored for each recorded object in a dataset of the database.
  • The individual objects can be entire patient heads, noses, temporomandibular joints, lower jaws, upper jaws, teeth or only individual jawbone structures.
  • A three-dimensional X-ray recording, such as a CT scan of a lower jaw, may be performed; a first correlated region is determined as a certain jawbone structure by user-controlled marking, and with reference to the database a virtual model is generated whose virtual image data correspond to an MRI recording and/or a three-dimensional ultrasound recording and/or a three-dimensional optical recording.
  • This generated three-dimensional virtual model is then superimposed with the three-dimensional X-ray recording, thereby forming the three-dimensional dental object representation, for example of the entire jaw or the entire head.
  • This allows, for example, the soft tissue surrounding the jawbone structure, which is better visible in the MRI recording than in the three-dimensional X-ray recording, to be displayed as a virtual model in the three-dimensional X-ray recording.
  • the image data of different modalities can be three-dimensional x-ray image data and / or MRI image data and / or ultrasound image data and / or three-dimensional optical image data.
  • The image data of the various modalities in the database can be provided with correlation data on correlated regions, the correlation data including the dimensions and arrangement of these correlated regions in the image data. A first correlated region is determined in the recording of the first modality, the database is searched for a data set with a nearly matching second correlated region in the image data of the first modality, and a matching data set is selected; based on this selected data set, the virtual model is generated, which corresponds in its form and structure at least to the image data of the second modality and/or the image data of the third modality of the selected data set.
  • A first correlated region is determined in the recording, which can be a characteristic bone structure.
  • This first correlated region can be set manually by the user using input means, or automatically with computer assistance.
  • a segmentation of the image data in the respective modality can take place.
  • For this, conventional computer algorithms such as thresholding, smoothing, noise reduction and pattern recognition can be used.
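  • As an illustration of such conventional algorithms (not taken from the patent), the following Python sketch uses SciPy's ndimage module for smoothing, thresholding and noise reduction to extract a candidate bone structure from a CT/DVT volume; the threshold value is an arbitrary placeholder.

```python
import numpy as np
from scipy import ndimage

def extract_bone_structure(ct_volume, threshold=1200):
    """Rough bone segmentation: smoothing, thresholding, noise reduction,
    and selection of the largest connected component as the correlated region."""
    smoothed = ndimage.gaussian_filter(ct_volume.astype(float), sigma=1.0)
    mask = smoothed > threshold                         # thresholding
    mask = ndimage.binary_opening(mask, iterations=2)   # remove small noise islands
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)             # largest structure
```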
  • A second correlated region corresponding to the first correlated region from the recording of the first modality is then searched for in the image data of the database that were generated with the first modality.
  • The search can be performed automatically with computer assistance by pattern matching, or manually by the user using input means.
  • The result of the search is a data set in the database with image data of a specific object, generated with the first and/or second and/or third modality, which has a second correlated region that coincides with the first determined correlated region of the recording, is nearly coincident with it, or is at least similar to it.
  • the database may also be searched for a data set having second modality image data having a second correlated region, wherein the second correlated region is complementary or nearly complementary to the first predetermined correlated region.
  • A virtual model of the tissue surrounding the object in the recording is created based on the retrieved image data with correlated regions that match as closely as possible, the dental recording being complemented by this virtual model.
  • The virtual model corresponds in its shape and structure to image data generated with the second and/or third modality.
  • a complete dental 3D volume with a virtual representation of the dental object is generated which corresponds in its structure to superimposed image data of the first and / or the second and / or the third modality.
  • The virtual model can be formed from image data of the second and/or third modality of the selected data set.
  • The image data of the second and/or third modality can also be post-processed using conventional operators, whereby, for example, anti-aliasing is performed, or the image data are scaled in their extent, mirrored by a mirroring operator, or distorted in a particular direction.
  • the image data of the second and / or third modality of the selected data set can also be taken over identically in order to form the virtual model.
  • The correlated regions in the image data can be matching or complementary tissue structures, such as certain bone structures.
  • Correlation data may be stored in the database for the correlated regions, the correlation data including the exact dimensions and location of the correlated regions in the image data.
  • The correlations between the correlated regions can be positional relationships that allow the correlated regions to be superimposed, so that the three-dimensional image data of the various modalities of an object of a particular data set are brought into an exactly known geometrical positional relationship to the recorded object and the image data can be displayed superimposed on one another. These correlations can be stored together with the correlated regions in the data sets of the database.
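  • A minimal sketch of applying such a stored geometric positional relationship, assuming it is represented as a 4x4 homogeneous transform (an assumption for illustration; the patent does not prescribe a particular representation):

```python
import numpy as np

def superimpose(points, transform):
    """Apply the stored positional relationship (a 4x4 homogeneous transform
    from the data set) to the grid points of a correlated region so that the
    image data of both modalities can be displayed in superimposition."""
    pts = np.asarray(points, dtype=float)               # (m, 3) grid points
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])
    return (homogeneous @ np.asarray(transform, dtype=float).T)[:, :3]
```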
  • the dental image data are generated with various recording methods, termed modalities in the medical field, namely using an X-ray recording method, such as a CT recording method or a DVT recording method, an MRT recording method and / or an ultrasound recording method and / or a three-dimensional optical recording method, such as a triangulation method.
  • A characteristic first correlated region may be, for example, a jawbone structure at the outer edge of the jawbone in the three-dimensional X-ray recording.
  • the corresponding second correlated region in the MRI image is then the surrounding complementary soft tissue of this jawbone structure.
  • the recording may be the MRI scan.
  • a first correlated area is defined in the MRI image.
  • the database is searched for a record having a nearly matching second correlated region in the MRI image data, with a matching data set being selected.
  • In a third step, the virtual model is generated based on this selected data set; it corresponds in its shape and structure to the superimposed X-ray image data, optical image data and/or ultrasound image data of the selected data set.
  • the image data of the selected data set can form the virtual model identically without post-processing.
  • Structures of the virtual model that correspond to an X-ray, an optical image and an ultrasound image can be displayed individually or jointly virtually.
  • the database is searched for a data set consisting of MRI image data, ultrasound image data, optical image data and X-ray image data having a matching or nearly matching correlated region of the MRI image data with the MRI image.
  • A virtual model is generated based on this data set, which corresponds in its structure and form to an X-ray recording and/or an ultrasound recording.
  • The virtual model can also include only structures of one modality.
  • the recording can be the three-dimensional radiograph.
  • a first correlated region is determined in the X-ray image.
  • the database is searched for a dataset having a nearly matching second correlated region in the x-ray image data, wherein a matching dataset is selected.
  • The virtual model, for example of the tissue surrounding the object in the recording, is generated from this selected data set; it corresponds in its shape and structure to the superimposed MRI image data, optical image data and/or ultrasound image data of the selected data set.
  • the image is a three-dimensional radiograph.
  • In the first step of the method, the database is searched for a data set with correlated regions that match between the X-ray image data of the data sets and the X-ray recording.
  • In the second step, based on the MRI image data, the optical image data and/or the ultrasound image data of the selected data set, a virtual model is generated which corresponds in its shape and structure to an MRI recording, an optical recording and/or an ultrasound recording. These structures can be displayed virtually in alternation.
  • The recording may be an ultrasound recording.
  • The virtual model of the surrounding tissue, which is generated by means of the database with correlated MRI recordings, X-ray recordings, optical recordings and ultrasound recordings, corresponds in its shape and structure to an MRI recording, an optical recording and/or an X-ray recording.
  • the image is the ultrasound image.
  • A first correlated region is determined in the ultrasound recording.
  • the database is searched for a record having a nearly matching second correlated region in the ultrasound image data, with a matching data set being selected.
  • The virtual model of the tissue surrounding the object in the recording is generated from this selected data set; it corresponds in its shape and structure to the superimposed MRI image data, optical image data and/or X-ray image data of the selected data set.
  • The correlated regions may be matching or complementary tissue structures contained in the image data.
  • The correlated regions may be catalogued and divided into different structural groups.
  • Structural groups may be, for example, groups of certain bone structures, such as groups of nasal bones, temporomandibular joint bones, jaw angle bones of the lower jaw, chin protrusion bones or disc bones.
  • The different structural groups can be stored in individual data records in the database, and the correlated regions in the image data can be assigned by reference to specific structural groups.
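  • Purely as an illustration of such a catalogue of structural groups (the names and data layout are hypothetical, not taken from the patent), the assignment of correlated regions to structural groups could be sketched as follows:

```python
from dataclasses import dataclass, field

@dataclass
class StructuralGroup:
    """A structural group with references to the correlated regions
    stored for it in the data records of the database."""
    name: str
    region_ids: list = field(default_factory=list)

catalogue = {
    "temporomandibular_joint_bone": StructuralGroup("temporomandibular_joint_bone"),
    "jaw_angle_bone": StructuralGroup("jaw_angle_bone"),
    "chin_protrusion_bone": StructuralGroup("chin_protrusion_bone"),
}

def assign_region(group_name, region_id):
    """Assign a correlated region (identified by region_id) to a structural group."""
    catalogue[group_name].region_ids.append(region_id)
```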
  • the database can also be used to generate the statistical model according to the first embodiment.
  • the data records stored in the database are used here as training data for generating the statistical model.
  • the correlated region in the three-dimensional X-ray image data may be a bone structure and the correlated region in the MRI image data may be soft tissue surrounding this bone structure.
  • A bone structure that is clearly visible in the X-ray recording can thus be stored as a first correlated region, and a specific tissue that surrounds this bone structure and is highly visible in the MRI recording can be stored as the corresponding second correlated region.
  • The advantage of this embodiment is that in the MRI recording not bone structures but regions of the better visible surrounding soft tissue are defined as correlated regions.
  • The surfaces of the correlated regions can be discretized by a grid. This allows the data sets to be searched more quickly for matching correlated regions.
  • the generated virtual model can be faded into the recording and displayed by means of a display device in superposition with the recording.
  • the display device may be a conventional monitor connected to a computer.
  • The volume of the generated virtual model may be greater than the volume of the recording.
  • The object region of the recording can represent a three-dimensional X-ray recording of a jawbone structure, and the object region of the virtual model can represent a virtual MRI representation of the soft tissue of a complete facial area of a patient.
  • the virtual model can be identified to distinguish it from the image by highlighting the virtual model in color or by shading.
  • FIG. 1 shows a sketch of a virtual dental object representation
  • Fig. 2 is a diagram showing the construction of the database
  • Fig. 3A is a detailed sketch of the three-dimensional X-ray image data of a data set from the database
  • Fig. 3B is a detailed sketch of the MRI image data of this data set
  • Fig. 1 shows a sketch of a virtual three-dimensional dental 3D volume 1, or virtual object representation 1, comprising a three-dimensional recording 2, namely a three-dimensional X-ray recording, and a virtual model 3 of a tissue that surrounds the objects in the three-dimensional recording 2, the virtual model 3 being shown superimposed together with the three-dimensional recording 2 and forming the dental 3D volume 1.
  • The virtual model 3 of the surrounding tissue comprises structures, such as the soft tissue 4 around a jawbone 5, which correspond in their shape and structure to MRI image data that can be generated with a second modality, namely the MRI acquisition procedure.
  • The virtual model 3 also includes other structures, such as the upper jaw 6 and other skull bones, such as the nasal bone 7 and the eye sockets 8, which correspond in their structure and shape to image data of a three-dimensional X-ray recording.
  • The different structures of the virtual model 3 can be displayed alternately and serve as orientation during navigation in the 3D volume 1 as well as for patient communication.
  • the 3D volume 1 is generated using the inventive method.
  • The three-dimensional X-ray recording 2 of the lower jaw 9 is supplemented from a database whose data sets contain image data of dental objects of various modalities from several patients.
  • a single data set may include x-ray image data, MRI image data, optical image data, and ultrasound image data of a mandible or the entire head.
  • A first correlated region 10, namely a characteristic bone structure of the lower jawbone, is compared with data sets from the database, and a data set with a matching second correlated region in the X-ray image data is selected.
  • In a second step, the virtual model 3 of the surrounding tissue is generated based on the MRI image data, the three-dimensional optical image data and the ultrasound image data of this selected data set from the database.
  • The virtual model 3 is superimposed into the three-dimensional recording 2.
  • The image data of the different modalities are correlated to one another in the individual data sets of the database, so that the virtual model 3 is superimposed on the three-dimensional recording 2 in an exact, complementary position.
  • a third correlated region 11 of the MRT image data from the selected data set has a known positional relationship to the second correlated region of the X-ray data of this selected data set.
  • The correlated regions can be divided into a plurality of structural groups, which may be groups of particular bone structures, such as groups of the temporomandibular joint bone 12 at the rear end of the lower jaw, the jaw angle 13 of the lower jawbone 14, or the chin protrusion bone 15 at the front end of the lower jaw.
  • The generated virtual model 3 can also comprise structures corresponding to a recording of a third modality, such as a three-dimensional ultrasound recording or a three-dimensional optical recording.
  • For finding matching correlated regions in order to select the appropriate data set from the database, conventional methods can be applied, such as segmentation of the image data in the respective modality, pattern recognition for locating similar correlated regions, as well as algorithms such as thresholding, smoothing and noise reduction.
  • By segmentation, characteristic structures that can serve as correlated regions can be extracted from the image data of the database and from the three-dimensional recording.
  • As correlated regions in the image data of different modalities, for example, a characteristic bone structure of the lower jawbone in the X-ray image data and the soft tissue surrounding this jawbone in the MRI image data can be used.
  • the virtual model does not provide any diagnostic information, but serves as a navigation aid or for patient communication.
  • The virtual model may, as described above, also be generated using a statistical model.
  • certain characteristic structures are extracted from the image data using a segmentation method.
  • The extracted correlated regions can be represented discretized by a grid, so that with m grid points the surface can be described by a vector of 3*m elements.
  • Several data sets of different objects with combined image data of different modalities are combined using a principal component analysis. The statistical model is thereby generated. For the implementation of the method according to the invention, this statistical model is already available.
  • The three-dimensional recording then provides the input parameters, and the virtual model forms the output values of the statistical model.
  • The generated virtual model can then be displayed together with the three-dimensional recording, whereby the three-dimensional virtual dental 3D volume is formed.
  • FIG. 2 shows a diagram illustrating the structure of the database 20.
  • the database 20 comprises a plurality of data records 21 and 22 of different objects, such as the jaw or the entire head of several patients.
  • The first data set 21 contains three different image data of different modalities, namely three-dimensional X-ray image data 23, MRI image data 24 and three-dimensional ultrasound image data 25 of a first object; the second data set 22 likewise includes three-dimensional X-ray image data 26, MRI image data 27 and ultrasound image data 28 of a second object.
  • the database 20 can contain any number of records.
  • The image data 23-28 are stored correlated to each other, the individual image data 23-28 having a known positional relationship to each other and being provided with correlated regions that allow the unambiguous determination of the position of the image data relative to each other.
  • Such correlated regions can be, for example, a characteristic bone structure 10 of the jawbone, as illustrated in Fig. 1, in the three-dimensional X-ray image data 23 and 26, and the soft tissue surrounding this jawbone in the MRI image data 24 and 27.
  • The first correlated region 10 of the three-dimensional recording 2 from Fig. 1 is compared with the three-dimensional X-ray image data 23 and 26 using pattern recognition.
  • a data set 21 or 22 is then selected which has a coincident or similar second correlated area in the x-ray image data 23 and 26.
  • The virtual model 3 is then created. Components of the virtual model 3 are formed by the respective image data of the second and third modality, namely the MRI image data 24 and ultrasound image data 25 of the first data set 21 or the image data 27, 28 of the second data set 22, and possibly by the additional image data of the first modality, namely the X-ray image data 23 or 26.
  • The use of the image data 23 or 26 of the first modality for generating the virtual model 3 is advantageous if the image data 23 or 26 of the first modality of the selected data set 21 or 22 comprise a larger object region than the three-dimensional recording 2 of the first modality.
  • the correlated regions of the image data 23 to 28 are subdivided into different structural groups of specific bone structures.
  • A first structural group of temporomandibular joint bones 12, as shown in Fig. 1, is stored in a first data record 29.
  • A second structural group of jaw angle bones 13 of the lower jawbone, as shown in Fig. 1, is stored in a second data record 30.
  • A third structural group of chin protrusion bones 15, as shown in Fig. 1, is stored in a third data record 31.
  • Further structural groups could be, for example, teeth, jawbones, nasal bones and jaw joints.
  • The correlated regions in the image data 23 to 28 can be assigned to the data records 29, 30 or 31 of the individual structural groups automatically with computer assistance or by the user.
  • The correlated regions may be identified with computer assistance as a particular type of bone structure and associated with a corresponding data record 29, 30 or 31, or the corresponding regions may be visually identified by the user and associated with a data record 29, 30 or 31 of the structural groups.
  • FIG. 3a shows in detail the three-dimensional X-ray image data 23 of the first data set 21 from FIG. 2, wherein the second correlated region 32 shown in broken lines is a characteristic bone structure at the rear end of the lower jawbone.
  • the second correlated region 32 is compared with the first correlated region 10 of the three-dimensional image 2 of FIG. 1 using pattern recognition.
  • Selecting the appropriate record from the database is an optimization problem. For example, a mean square distance between the individual grid points of the data records from the database and from the recorded image data is then formed. The record from the database where this distance is minimal is then selected.
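  • The selection rule described here can be sketched directly in Python (a non-authoritative illustration; it assumes that all correlated regions have already been discretized on grids with corresponding grid points):

```python
import numpy as np

def select_best_record(first_region_vector, database_vectors):
    """Select the data set whose second correlated region (as a grid-point vector)
    has the minimal mean squared distance to the first correlated region."""
    query = np.asarray(first_region_vector, dtype=float)
    distances = [np.mean((np.asarray(v, dtype=float) - query) ** 2)
                 for v in database_vectors]
    return int(np.argmin(distances))                    # index of the selected data set
```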
  • Fig. 3b shows the MRI image data 24 of the first data set 21 in the database 20 of Fig. 2, in which soft tissue such as different muscle groups 40, brain tissue 41 and different skin layers 42 is clearly visible, and hard tissue such as the bone structures and teeth of Fig. 3a is hardly visible because of the lower water content.
  • the third correlated region 43 of the MRI image data 24 shown in dashed lines is the surrounding soft tissue of the characteristic bone structure of the second correlated region 32 such that there is a fixed positional relationship between the second correlated region 32 and the third correlated region 43.
  • The object region of the three-dimensional recording 2 of Fig. 1, which comprises merely the lower jawbone, differs significantly from the object region of the generated virtual model 3 of Fig. 1, which comprises the complete head of the patient.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Dentistry (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Epidemiology (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Hardware Design (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Processing Or Creating Images (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to a method for generating a virtual dental object representation (1) starting from a recording (2) of a first modality of at least a part of a dental object. By applying a transformation method to the recording (2), a virtual model (3) is generated that is superimposed on the dental object or a part thereof, or is adjacent to the dental object or a part thereof, and that contains additional information about the dental object which differs from the recording (2). Said virtual model (3) complements the recording (2) and together with it forms the virtual dental object representation (1).
PCT/EP2012/065557 2011-08-09 2012-08-09 Base de données et procédé servant à générer une représentation d'objet dentaire virtuel à partir d'une prise de vue WO2013021022A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102011080700.4 2011-08-09
DE102011080700A DE102011080700A1 (de) 2011-08-09 2011-08-09 Datenbank und Verfahren zur Erzeugung einer virtuellen dentalen Objektdarstellung aus einer Aufnahme

Publications (2)

Publication Number Publication Date
WO2013021022A2 true WO2013021022A2 (fr) 2013-02-14
WO2013021022A3 WO2013021022A3 (fr) 2013-07-04

Family

ID=46800160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/065557 WO2013021022A2 (fr) 2011-08-09 2012-08-09 Base de données et procédé servant à générer une représentation d'objet dentaire virtuel à partir d'une prise de vue

Country Status (2)

Country Link
DE (1) DE102011080700A1 (fr)
WO (1) WO2013021022A2 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014223967A1 (de) * 2014-11-25 2016-05-25 Sirona Dental Systems Gmbh Verfahren für die Diagnostik in der Kieferorthopädie

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19924291C1 (de) 1999-05-27 2000-06-08 Sirona Dental Systems Gmbh Verfahren zur Erfassung und Darstellung eines oder mehrerer Objekte, bspw. Zähne
DE19952962B4 (de) 1999-11-03 2004-07-01 Sirona Dental Systems Gmbh Verfahren zur Herstellung einer Bohrhilfe für ein Zahnimplantat

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPO280996A0 (en) * 1996-10-04 1996-10-31 Dentech Investments Pty Ltd Creation and utilization of 3D teeth models
US7234937B2 (en) * 1999-11-30 2007-06-26 Orametrix, Inc. Unified workstation for virtual craniofacial diagnosis, treatment planning and therapeutics
DE10252298B3 (de) * 2002-11-11 2004-08-19 Mehl, Albert, Prof. Dr. Dr. Verfahren zur Herstellung von Zahnersatzteilen oder Zahnrestaurationen unter Verwendung elektronischer Zahndarstellungen
DE10312848A1 (de) * 2003-03-21 2004-10-07 Sirona Dental Systems Gmbh Datenbank, Zahnmodell und Zahnersatzteil, aufgebaut aus digitalisierten Abbildungen realer Zähne
US7880751B2 (en) * 2004-02-27 2011-02-01 Align Technology, Inc. Method and system for providing dynamic orthodontic assessment and treatment profiles
DE102005011066A1 (de) * 2005-03-08 2006-09-14 Sirona Dental Systems Gmbh Verfahren zur Herstellung der Lageübereinstimmung von 3D-Datensätzen in einem dentalen CAD/CAM-System
GB0514554D0 (en) * 2005-07-15 2005-08-24 Materialise Nv Method for (semi-) automatic dental implant planning
US7844429B2 (en) * 2006-07-19 2010-11-30 Align Technology, Inc. System and method for three-dimensional complete tooth modeling

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19924291C1 (de) 1999-05-27 2000-06-08 Sirona Dental Systems Gmbh Verfahren zur Erfassung und Darstellung eines oder mehrerer Objekte, bspw. Zähne
DE19952962B4 (de) 1999-11-03 2004-07-01 Sirona Dental Systems Gmbh Verfahren zur Herstellung einer Bohrhilfe für ein Zahnimplantat

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"A 3D statistical shape model of the pelvic bone for segmentation", PROCEEDINGS OF SPIE - VOLUME 5370 MEDICAL IMAGING 2004, vol. 5370, May 2004 (2004-05-01), pages 1341 - 1351
"Atlas-based 3D-Shape Reconstruction from X-Ray Images", 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, vol. 1, 2006, pages 371 - 374
"Automatic segmentation of mandibles in low-dose CT-data", INT. J. COMPUTER ASSISTED RADIOLOGY AND SURGERY, vol. 1, no. 1, 2006, pages 393 - 395

Also Published As

Publication number Publication date
WO2013021022A3 (fr) 2013-07-04
DE102011080700A1 (de) 2013-02-14

Similar Documents

Publication Publication Date Title
DE102007001684B4 (de) Bildregistrierung
EP3449830B1 (fr) Commande d'un système technique médicale de capteur d'images
EP2189940B1 (fr) Détermination d'éléments de corps d'indicateur et de trajectoires de pré-indicateurs
EP1890261B1 (fr) Recalage des données d'imagerie par résonance magnétique en utilisant des modèles génériques
EP2083390B1 (fr) Procédé destiné à la segmentation d'un ensemble de données d'image en 3D, produit de programme informatique correspondant et système correspondant
EP2863831B1 (fr) Procédé de contrôle de positions dentaires
DE10357206B4 (de) Verfahren und Bildbearbeitungssystem zur Segmentierung von Schnittbilddaten
WO2017085160A1 (fr) Procédé pour visualiser une situation dentaire
EP1930832B1 (fr) Détermination automatique de points de repère de structures anatomiques
DE112004002435B4 (de) Bestimmung von patientenbezogenen Informationen zur Position und Orientierung von MR-Bildern durch Individualisierung eines Körpermodells
DE19829224A1 (de) Verfahren zur Lokalisation von Behandlungszielen im Bereich weicher Körperteile
DE102017214447B4 (de) Planare Visualisierung von anatomischen Strukturen
EP3389496B1 (fr) Procédé pour calibrer un cliché radiographique
EP2584534B1 (fr) Procédé implémenté par ordinateur destiné à générer un modèle en 3D virtuel d'un objet réel tridimensionnel ainsi que le produit formé sur cette base
DE102015222821A1 (de) Verfahren und Vorrichtung zum Betreiben eines dentaldiagnostischen Bilderzeugungssystems
EP1348394A1 (fr) Assistance à la planification ou navigation par des données génériques obtenues de patients avec adaptation bi-dimensionelle
DE102005029903A1 (de) Verfahren und Vorrichtung zur 3D-Navigation auf Schichtbildern
EP1498851A1 (fr) Détermination d'une forme tridimensionnelle d'un corps, en particulier d'une structure anatomique, à partir d'images de projection bidimensionnelles
DE102006017932A1 (de) Verfahren zur Steuerung des Aufnahme- und/oder Auswertebetriebes von Bilddaten bei medizinischen Untersuchungen
EP3682453B1 (fr) Procédé de détermination et de visualisation de mouvements de dents et de changements de position de dents prevus
DE102019126111A1 (de) Verfahren, Computerprogrammprodukt und Simulationssystem zur Erstellung und Ausgabe eines dreidimensionalen Modells eines Gebisses
DE19805917A1 (de) Verfahren zur reproduzierbaren Positions- oder Haltungserkennung oder Lagerung von dreidimensionalen, beweglichen und verformbaren Körpern sowie Vorrichtung zur Durchführung des Verfahrens
WO2013021022A2 (fr) Base de données et procédé servant à générer une représentation d'objet dentaire virtuel à partir d'une prise de vue
DE10210644B4 (de) Verfahren zum Erstellen einer Sequenz
DE102008054298B4 (de) Verfahren und Vorrichtung zur 3D-Visualisierung eines Eingriffspfades eines medizinischen Instrumentes, eines medizinischen Instrumentes und/oder einer bestimmten Gewebestruktur eines Patienten

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12755959

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 12755959

Country of ref document: EP

Kind code of ref document: A2