EP3250111B1 - Estimating and predicting tooth wear using intra-oral 3D scans - Google Patents

Estimating and predicting tooth wear using intra-oral 3D scans

Info

Publication number
EP3250111B1
EP3250111B1 (Application EP16743844.9A)
Authority
EP
European Patent Office
Prior art keywords
teeth
tooth
digital model
module
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP16743844.9A
Other languages
German (de)
English (en)
Other versions
EP3250111A1 (fr)
EP3250111A4 (fr)
Inventor
Evan J. Ribnick
Guruprasad Somasundaram
Brian J. Stankiewicz
Aya EID
Ravishankar Sivalingam
Shannon D. SCOTT
Anthony J. SABELLI
Robert D. Lorentz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3M Innovative Properties Co
Original Assignee
3M Innovative Properties Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3M Innovative Properties Co filed Critical 3M Innovative Properties Co
Priority to EP20166109.7A priority Critical patent/EP3708069A1/fr
Publication of EP3250111A1 publication Critical patent/EP3250111A1/fr
Publication of EP3250111A4 publication Critical patent/EP3250111A4/fr
Application granted granted Critical
Publication of EP3250111B1 publication Critical patent/EP3250111B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/45 - For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538 - Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B5/4542 - Evaluating the mouth, e.g. the jaw
    • A61B5/4557 - Evaluating bruxism
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062 - Arrangements for scanning
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0082 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
    • A61B5/0088 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/45 - For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538 - Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B5/4542 - Evaluating the mouth, e.g. the jaw
    • A61B5/4547 - Evaluating teeth
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 - Details of waveform analysis
    • A61B5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 - Specific aspects of physiological measurement analysis
    • A61B5/7275 - Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 - Specific aspects of physiological measurement analysis
    • A61B5/7278 - Artificial waveform generation or derivation, e.g. synthesising signals from measured signals
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 - Details of notification to user or communication with user or patient; user input means
    • A61B5/742 - Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/743 - Displaying an image simultaneously with additional graphical information, e.g. symbols, charts, function plots
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G06T7/0014 - Biomedical image inspection using an image reference approach
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 - Medical imaging apparatus involving image processing or analysis
    • A61B2576/02 - Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 - Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 - Means or methods for taking digitized impressions
    • A61C9/0046 - Data acquisition means or methods
    • A61C9/0053 - Optical means or methods, e.g. scanning the teeth by a laser or light beam
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30036 - Dental; Teeth

Definitions

  • Tooth wear associated with bruxism and gingival recession are both conditions that, if not treated in a timely manner by dental professionals, can have serious medical consequences.
  • Lateral movements and tooth grinding can cause significant tooth wear and lead to muscle pain, temporomandibular joint issues, and headaches. In some cases, this may lead to exposed dentin, dental decay, and even tooth fracture.
  • the tools available to dental professionals for diagnosing and assessing the severity of tooth wear and gingival recession are limited.
  • these tools include patient questionnaires, clinical examination by a dentist, and bite force measurements. Clinical examinations may be performed using the Individual Tooth-Wear Index, which provides a rating between 0 and 3 based on visual assessment by a dentist. Accordingly, a need exists for additional tools to assess tooth wear, particularly using intra-oral 3D scans.
  • EP 2 258 303 A1 discloses a system for creating an individual, three-dimensional virtual tooth model representing a tooth of a patient.
  • the system includes a memory storing at least one virtual three-dimensional model of a template object corresponding to a tooth, including a template tooth corresponding to the tooth, and a data processing system including a memory storing a virtual three-dimensional model of at least a part of the dentition of the patient.
  • the data processing system further comprises software processing the virtual three-dimensional model of at least a part of the dentition and the virtual model of the template object and responsively deriving the individual, three-dimensional, virtual tooth model.
  • US 2013/0308843 A1 discloses a method for the geometrical analysis of scan data from oral structures.
  • EP 2 604 220 A1 discloses an information processing method that allows the mandibular movement of a patient to be reproduced on a computer.
  • WO 2010/033404 A1 discloses methods and systems for determining the positions of orthodontic appliances.
  • EP 2 349 062 A1 is also relevant for understanding the background of the present invention.
  • a method for estimating teeth wear includes receiving a 3D digital model of teeth and segmenting the 3D digital model of teeth to identify individual teeth within the 3D digital model of teeth.
  • a digital model of a tooth is selected from the segmented 3D digital model of teeth, and an original shape of the selected tooth is predicted to obtain a digital model of a predicted original shape.
  • the digital model of the tooth is compared with the digital model of the predicted original shape to estimate wear areas in the tooth.
  • a method for predicting teeth wear that includes receiving a 3D digital model of teeth.
  • a mapping function is applied to the digital model of the teeth based upon values relating to tooth wear. Wear areas in the teeth are predicted based upon the applying step.
  • Embodiments include analyzing tooth wear from a single 3D scan of the patient's dentition.
  • One approach is based on estimating the original shape of the surface through use of a database (or collection) of known tooth shapes and a learned mathematical model for reconstructing shape, and then comparing this estimated original shape with the current shape of the surface.
  • Another approach is based upon comparing the current shape of the surface with annotated scans where the annotation indicates an amount of tooth wear.
  • FIG. 1 is a diagram of a system 10 for detecting tooth wear using a digital 3D model based upon intra-oral 3D scans.
  • System 10 includes a processor 20 receiving digital 3D models of teeth (12) from intra-oral 3D scans or scans of impressions of teeth.
  • System 10 can also include an electronic display device 16, such as a liquid crystal display (LCD) device, for displaying indications of tooth wear and an input device 18 for receiving user commands or other information.
  • An example of a digital 3D model of a patient's teeth from a scan is shown in FIG. 2.
  • Systems to generate digital 3D images or models based upon image sets from multiple views are disclosed in U.S. Patent Nos. 7,956,862 and 7,605,817 .
  • System 10 can be implemented with, for example, a desktop, notebook, or tablet computer. System 10 can receive the 3D scans locally or remotely via a network.
  • the individual teeth in the model need to be segmented from one another before the desired analysis or manipulation can be performed.
  • a software interface may be presented in order for a user to perform this segmentation, or some parts of it, manually.
  • this process can be quite labor intensive and tedious. As such, the automation of this task is desirable.
  • An example of teeth that have been segmented in a digital model is shown in FIG. 3 .
  • the segmentation provides for separating individual teeth in the digital 3D model, as represented by the shading in FIG. 3 , and each tooth in the model can essentially be digitally separated from the other teeth for further processing to detect tooth wear.
  • Using a segmented digital 3D model for comparing or analyzing individual teeth is more accurate than comparing whole or partial arches within the model.
  • The technique combines two separate algorithms and draws on the strengths of both of them.
  • the first algorithm is a geometric hill-climbing approach which takes into account topological structures such as height and curvature.
  • the second algorithm is a machine learning approach which classifies each point on the surface as belonging to either a boundary or a non-boundary.
  • Alternatively, the second algorithm is interstice detection, which identifies a set of planes (or points) that approximate the intersticial spaces between teeth.
  • the second algorithm can be complementary to the first algorithm (geometric hill-climbing) and combined with the first algorithm to produce a resulting segmentation.
  • The first algorithm can also be combined with user input estimating centroids of teeth in the digital 3D model.
  • only one algorithm can be used to segment the digital 3D model such as any one of the algorithms described herein.
  • the 3D scans addressed herein are represented as triangular meshes.
  • The triangular mesh is a common representation of 3D surfaces and has two components.
  • The first component, referred to as the vertices of the mesh, is simply the set of coordinates of the 3D points that have been reconstructed on the surface - i.e., a point cloud.
  • The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface.
  • Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches.
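  • To make this representation concrete, the following minimal Python sketch (all names are hypothetical, not from the patent) stores a mesh as an N x 3 vertex array and an M x 3 face-index array and computes per-face normals:

    import numpy as np

    class TriMesh:
        """Minimal triangular mesh: vertices are reconstructed 3D points
        (a point cloud); faces are triples of vertex indices."""
        def __init__(self, vertices, faces):
            self.vertices = np.asarray(vertices, dtype=float)  # (N, 3)
            self.faces = np.asarray(faces, dtype=int)          # (M, 3)

        def face_normals(self):
            # Each face is a planar triangular patch; its normal is the
            # normalized cross product of two edge vectors.
            v0, v1, v2 = (self.vertices[self.faces[:, k]] for k in range(3))
            n = np.cross(v1 - v0, v2 - v0)
            return n / np.linalg.norm(n, axis=1, keepdims=True)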
  • FIG. 4 is a flow chart of a method 22 for segmenting teeth in a digital 3D model.
  • Method 22 can be implemented in software or firmware modules, for example, for execution by processor 20.
  • Method 22 can alternatively be implemented in hardware modules or a combination of software and hardware.
  • Method 22 includes receiving a digital 3D model of a patient's teeth (step 24) and optionally aligning the model (step 25). Method 22 then involves segmenting the model by geometric hill-climbing (step 26) and point classification (step 28). Optionally, post processing on boundaries of the segmentation by point classification is performed (step 32). As an alternative to point classification, the model can be segmented by interstice detection (step 29). As another alternative to point classification, method 22 can receive user input identifying centroids of each tooth in the model (step 31).
  • the results of the segmentation methods are iteratively merged (step 30).
  • the results of segmentation by hill-climbing are merged with the results of segmentation by point classification or interstice detection or user input identifying the centroids.
  • the merged segmentation can optionally be refined based upon manual, for example user-entered, input (step 34).
  • the results of the segmentation are stored (step 38).
  • the segmentation results in a separate mesh for each tooth from the digital 3D model, as illustrated in FIG. 3 .
  • the optional alignment step 25 can be implemented using a Support Vector Regression (SVR) method to find the occlusal plane fitted to a mesh of the teeth in the digital 3D model.
  • the alignment can be used to have the teeth in the digital 3D model essentially aligned with the Y axis.
  • The alignment can use the LIBSVM toolbox and the ε-SVR method.
  • the training is based on the assumption that teeth are roughly pointing up along the Y axis.
  • The output is a set of sample points from the occlusal plane, which is given to a simple principal component analysis (PCA) method to find the normal direction.
  • SVR uses a linear loss function with a zero part within the margins, which performs better for teeth datasets than the quadratic loss function in regular least square regression methods. It helps to decrease the effect of gingiva cut-lines, which can be very jagged and bumpy in mesh scans.
  • Table 1 provides exemplary pseudocode for implementing the alignment step.
  • Table 1 - Pseudocode for Normal Direction Extraction Input a 3D mesh with a set of vertices V specified in 3D coordinate system X,Y and Z. Y represents the rough direction of vertical axis in which the teeth point upwards.
  • Output: the normal vector perpendicular to the occlusal plane, which represents the correct upward direction of the teeth. Assumptions: Teeth are roughly pointing up along the Y axis. The mesh has been truncated below the gum line.
  • Method steps 1 Subtract the mean of data points to centralize the data points around (0,0,0). 2 Apply the Support Vector Regression with linear kernel and margin value ε to find the occlusal plane. 3 Find the normal direction of the occlusal plane by geometrical methods or applying a simple PCA.
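  • As an illustration of Table 1, a minimal Python sketch of the same three steps appears below, assuming scikit-learn's SVR and PCA in place of the LIBSVM toolbox; the epsilon value and the choice of regressing height Y on the (X, Z) coordinates are illustrative assumptions:

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.decomposition import PCA

    def occlusal_normal(vertices, epsilon=0.5):
        """Steps 1-3 of Table 1: centre the vertices, fit the occlusal
        plane with linear epsilon-SVR, then take the plane's normal."""
        V = vertices - vertices.mean(axis=0)                  # step 1
        X, y = V[:, [0, 2]], V[:, 1]                          # height Y vs (X, Z)
        svr = SVR(kernel='linear', epsilon=epsilon).fit(X, y) # step 2
        plane_pts = np.column_stack([X[:, 0], svr.predict(X), X[:, 1]])
        pca = PCA(n_components=3).fit(plane_pts)              # step 3
        return pca.components_[-1]  # least-variance direction = plane normal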
  • One of the algorithms for segmentation is based upon geometric operations on the mesh. Specifically, the main idea behind this approach is that, if one starts from any point on the surface and moves upwards through a series of points, one will converge to a high point that is a local maximum. In most cases it would be expected that all points on a tooth (or on the same cusp of a tooth) will converge to the same local maximum.
  • This type of segmentation can produce very accurate boundaries between teeth, but it typically results in an over-segmentation in which a single tooth may be divided into multiple segments.
  • the mesh is preprocessed using Laplacian smoothing. This preprocessing is an effective way of removing high-frequency noise in the surface reconstruction.
  • The parameter λ can be any value greater than zero or, alternatively, λ can be equal to zero.
  • Angular divergence is a measure of overall curvature around a point. Consider a face F comprised of vertices v_i, v_j, and v_k, with normal vectors n_i, n_j, and n_k, respectively. The angular divergence of the i-th vertex v_i is the mean of the angular divergences of the faces of which v_i is a part.
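  • The divergence formula itself is not reproduced in this text, so the Python fragment below is only one plausible reading, assuming the angular divergence of a face accumulates the pairwise deviation of its unit vertex normals:

    import numpy as np

    def face_divergence(n_i, n_j, n_k):
        # Assumed form: pairwise deviation of the face's unit normals
        # (1 - cosine similarity); the patent's exact formula may differ.
        return sum(1.0 - float(np.dot(a, b))
                   for a, b in [(n_i, n_j), (n_j, n_k), (n_i, n_k)])

    def vertex_divergence(incident_faces, face_div):
        # A vertex's divergence is the mean over its incident faces.
        return float(np.mean([face_div[f] for f in incident_faces]))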
  • segmentation is performed according to a hill-climbing procedure.
  • the algorithm can be understood as follows. For each vertex on the surface, the algorithm initializes a hill-climb, in which at each iteration it moves to the connected neighbor (as defined by the faces) that has the highest energy function value. The algorithm continues climbing until it reaches a local maximum that has higher energy than all of its neighbors. All vertices that were passed through along this route are assigned to this local maximum, and all such paths that converge to this local maximum define a segment. This process is repeated until all vertices on the mesh have been traversed.
  • This segmentation assigns vertices to segments defined by local energy maxima that can be reached through a monotonically-increasing path through the energy function.
  • The energy function f_i is defined such that each iteration of hill-climbing moves upwards in height but is discouraged from crossing an area of high curvature by the angular divergence term. This helps ensure that the boundaries between teeth are not crossed.
  • An example of a segmentation produced by this algorithm is shown in FIG. 5.
  • the algorithm over-segments the teeth by separating each cusp of a tooth into its own segment - this can be understood intuitively as a result of the hill-climbing procedure, since each cusp will have its own unique local maximum.
  • the digital model of tooth 40 is segmented into five sections.
  • the boundaries produced by this approach are quite precise and accurately separate teeth from one another.
  • Table 2 provides exemplary pseudocode for implementing the geometric hill-climbing algorithm.
  • Table 2 - Pseudocode for Hill-Climbing Segmentation Input a 3D mesh with a set of vertices V specified in 3D coordinate system X,Y and Z.
  • Y represents the vertical axis or the general direction in which the teeth point upwards.
  • the mesh also has a set of triangulations or faces F based on the vertices.
  • Output: Segmented mesh, where for each vertex v_i in the mesh, a label l_i corresponding to the segment to which that vertex belongs is assigned.
  • Method steps 1 Perform mesh Laplacian smoothing to reduce error 2 For each vertex v_i in V, compute the surface normal at that vertex 3 For each face f_i in F, compute the divergence of the face from the normal directions n_i, n_j, and n_k of vertices i, j, and k of the face 4 Apply the divergence value of every face to all the individual vertices of the face 5 Compute the energy function value at each vertex as y + λ·D_f 6 For each vertex determine the maximum function value in a local neighborhood 7 Assign all vertices to a segment assigned to the local maximum value in step 6 8 Repeat steps 6 to 7 until a local maximum is reached 9 Assign the appropriate cluster labels to each vertex
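  • A compact Python sketch of this hill-climbing loop appears below; it assumes a per-vertex adjacency list derived from the faces and uses the energy y + λ·D_f from step 5 of Table 2 (λ and the data structures are illustrative):

    import numpy as np

    def hill_climb_segments(adjacency, height, divergence, lam=1.0):
        """Assign each vertex to the local maximum of the energy
        f = y + lam * D reached by always stepping to the
        highest-energy connected neighbour."""
        energy = height + lam * divergence
        labels = np.full(len(energy), -1, dtype=int)
        for start in range(len(energy)):
            path, v = [], start
            while True:
                path.append(v)
                best = max(adjacency[v], key=lambda u: energy[u])
                if energy[best] <= energy[v]:   # local maximum reached
                    break
                v = best
            labels[path] = v  # all vertices on this route join the segment
        return labels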
  • the segmentation by point classification is a data-driven approach. Unlike the geometric hill-climbing approach, this approach relies on manually provided groundtruth segmentation.
  • Groundtruth can be obtained from a user providing nearly accurate segmentation manually using mesh manipulation tools such as the MeshLab system. A selection of an individual tooth can be made using a face selection tool. Individual teeth are selected in this manner and saved as individual mesh files. Using the original mesh and the individual teeth files, a labeling of the vertices in the original mesh can then be inferred. Once groundtruth for a full scan is completed, the inferred labels of all the segments can be visualized.
  • From the groundtruth, the boundary vertices between segments can be determined. For each vertex, the distribution of vertex labels around that vertex is examined. If the distribution is unimodal (i.e., the vertex labels are predominantly the same), then that vertex is considered an interior vertex; otherwise, the vertex is considered a boundary vertex. This data can be manually entered one time, for example, as training data and then used repeatedly in the point classification algorithm.
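  • A sketch of this boundary-vertex labeling in Python follows, assuming per-vertex groundtruth segment labels and a vertex adjacency list; the dominance threshold standing in for the unimodality test is an assumption:

    from collections import Counter

    def boundary_vertices(labels, adjacency, dominance=0.9):
        """A vertex is interior when one segment label dominates its
        neighbourhood; otherwise it is marked as a boundary vertex."""
        boundary = []
        for v, nbrs in enumerate(adjacency):
            counts = Counter(labels[u] for u in list(nbrs) + [v])
            top = counts.most_common(1)[0][1]
            if top / sum(counts.values()) < dominance:
                boundary.append(v)
        return boundary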
  • Given the groundtruth boundary vertex labels from multiple training meshes, the algorithm provides a function that is capable of predicting whether a vertex on a mesh lies in the interior of a tooth or on the boundary between teeth.
  • the algorithm can classify or label points in the mesh as being on a tooth or on a boundary between teeth. This process involves two tasks: feature extraction and classification.
  • FIG. 6 illustrates detection of boundary vertices 42 between teeth in a digital 3D model.
  • Table 3 provides exemplary pseudocode for implementing the point classification (machine learning) training data algorithm.
  • Table 3 - Pseudocode for Machine Learning Training Input: Multiple 3D meshes with sets of vertices V specified in 3D coordinate system X, Y and Z. Y represents the vertical axis or the general direction in which the teeth point upwards. Each mesh also has a set of triangulations or faces F based on the vertices. Also the groundtruth segmentation in the form of the vertices corresponding to boundaries and those in the interior, as indicated by manual annotation.
  • Output A predictive model that is capable of generating the boundary vertex prediction labels for a query set of vertices.
  • the point classification algorithm extracts many characteristic features for every vertex in the mesh. It is often difficult to determine which features are useful in a segmentation algorithm.
  • Features which can be used for segmentation in this framework include, but are not limited to, multi-scale surface curvature, singular values extracted from PCA of local shape, shape diameter, distances from medial surface points, average geodesic distances, shape contexts, and spin images.
  • The algorithm implements the following features: absolute and mean curvature, direction of the normal at the vertex, local covariance of the mesh around the vertex and its principal eigenvalues, spin images, Fourier features, shape contexts, and PCA features.
  • Given the feature set X for a vertex, the function f is defined as follows: f: X -> {1,0}; that is, f maps the set of features X to either 1 or 0. A value of 1 indicates the vertex is a boundary vertex, and the value 0 indicates otherwise.
  • This function can be one or a combination of many classification methods such as support vector machines, decision trees, conditional random fields, and the like. Additionally, when segmentation is cast as a classification problem, there is a class imbalance: the number of interior vertices is much greater than the number of boundary vertices, with a typical ratio of 100:1. In such extreme class imbalance situations, regular classifiers are not optimal.
  • one option involves using classifier ensembles such as boosting.
  • the classification algorithm uses RUSBoosting on decision stumps as a classifier.
  • RUSBoost stands for random under-sampling boosting and is known to handle class imbalance very well. Additionally, RUSBoost is implemented in the MATLAB "fitensemble" function. Based on preliminary analysis, RUSBoost was performed on 700 decision stumps. This number was chosen using cross-validation on the training set with the resubstitution loss as the metric. For our experiments, we used a "leave-scan-out" cross-validation scheme. Our dataset consisted of 39 scans, and for every test scan the remaining 38 scans were used for training. The resulting predictions were compared to the groundtruth boundary labels of the test scan. A confusion matrix can then be obtained by comparing the groundtruth labels with the predicted labels; from this we obtained the false alarm rate and the hit rate. With cross-validation testing on 39 scans we obtained an 80% hit rate and a 1.7% false alarm rate on average.
  • Table 4 provides exemplary pseudocode for implementing the point classification (machine learning) algorithm.
  • Table 4 - Pseudocode for Machine Learning Prediction Input a 3D mesh with a set of vertices V specified in 3D coordinate system X,Y and Z. Y represents the vertical axis or the general direction in which the teeth point upwards. The mesh also has a set of triangulations or faces F based on the vertices.
  • Output: Binarized mesh where for each vertex v_i in the mesh, a label l_i indicates whether the vertex belongs to a boundary or not. Assumptions: Teeth are roughly pointing up along the Y axis. The mesh has been truncated below the gum line.
  • Method steps 1 For each vertex v_i in V, compute the following features: a. Normal direction b. Absolute, mean and Gaussian curvature c. Shape context d. Mesh Fourier e. Spin image f. Mesh local covariance 2 Construct a data matrix X which is M × N, where M is the number of vertices in the mesh and N is the total number of feature dimensions when all features in step 1 are concatenated 3 Predict, using the learned decision tree RUSBoost classifier, the labels corresponding to whether a vertex lies on the boundary or not
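  • The MATLAB "fitensemble" step can be sketched in Python with the imbalanced-learn package, which provides a RUSBoost implementation (parameter names assume a recent imbalanced-learn release); the synthetic feature matrix below merely stands in for the M × N matrix of Table 4:

    import numpy as np
    from imblearn.ensemble import RUSBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for the M x N vertex-feature matrix, with the roughly
    # 100:1 interior-to-boundary imbalance described above.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5050, 32))
    y = np.r_[np.ones(50, dtype=int), np.zeros(5000, dtype=int)]

    clf = RUSBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # decision stumps
        n_estimators=700,                               # as in the text
    ).fit(X, y)
    is_boundary = clf.predict(X)  # 1 = boundary vertex, 0 = interior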
  • the second algorithm for segmentation can use interstice detection (step 29 in method 22).
  • Table 5 provides exemplary pseudocode for implementing the interstice detection algorithm.
  • Table 5 - Pseudocode for Interstice Detection Input a 3D mesh with a set of vertices V specified in 3D coordinate system X,Y and Z.
  • Y represents the vertical axis or the general direction in which the teeth point upwards.
  • the mesh also has a set of triangulations or faces F based on the vertices.
  • Output a set of planes that approximate the intersticial spaces between each pair of teeth.
  • Assumptions: Teeth are roughly pointing up along the Y axis.
  • Method steps 1 Form a plan-view range image of the mesh. That is, a range image from the top view, where each pixel represents the height of the surface at the corresponding point.
  • 2 Estimate a one-dimensional parameterization of the dental arch using the Locally-Linear Embedding (LLE) algorithm, which results in a curve that represents the general shape of the arch and passes roughly through the centers of the teeth.
  • 3 Compute a set of evenly-spaced sample points along the one-dimensional parameterization.
  • 4 For each sample point along the curve, compute the sum of heights in the range image along a line normal to the curve at that point.
  • 5 Intersticial spaces are identified as sample points that are local minima in the sum of heights computed in step 4. The orientation of the intersticial space is given by the direction of the normal to the one-dimensional parameterization curve at the corresponding sample point.
  • 6 Detected intersticial spaces, and their orientations, are mapped back to the three-dimensional coordinates of the original mesh.
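  • Steps 2 through 5 can be sketched in Python with scikit-learn's Locally-Linear Embedding; summing heights along each normal line is simplified here to sampling the height profile along the arch, so the details are assumptions:

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from scipy.signal import argrelmin

    def interstice_samples(xz, heights, n_samples=200, n_neighbors=12):
        """1-D LLE parameterisation of the arch (step 2), evenly spaced
        samples (step 3), and local minima of the height profile as
        interstice candidates (steps 4-5, simplified)."""
        t = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                   n_components=1).fit_transform(xz).ravel()
        order = np.argsort(t)                       # walk along the arch
        idx = order[np.linspace(0, len(order) - 1, n_samples).astype(int)]
        profile = heights[idx]                      # proxy for summed heights
        return idx[argrelmin(profile, order=3)[0]]  # local minima = interstices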
  • FIGS. 7A and 7B illustrate morphological clean up to fix boundaries between teeth in a digital 3D model, with FIG. 7B illustrating clean up of the boundaries shown in FIG. 7A. This morphological clean up can be used for the optional step 32 in method 22, after the segmentation by point classification.
  • The hill-climbing approach captures the general geometry of cusps and has a tendency to form good boundaries around teeth, but it over-segments and thus creates more false boundaries.
  • The classification approach, by contrast, has a somewhat lower hit rate on boundaries but a very low false alarm rate.
  • A method to merge the results mitigates the weaknesses of both approaches while preserving their strengths.
  • a hierarchical merging algorithm is used, which merges the segments in the hill-climbing approach using the boundary predictions of the classification approach. Every boundary predicted by the hill-climbing approach is given a score based on the predicted boundary vertices from the classification approach.
  • A hierarchical merging is then performed. All boundaries with a score less than a threshold are discarded, the corresponding segments are merged, and the boundary scores are corrected accordingly.
  • This threshold is gradually increased. For example, all boundaries that have a score less than 5 are discarded first. The corresponding segments are merged, and the process is repeated while increasing the threshold step-by-step to, for example, 50. This heuristic provides correct segmentation of the teeth in one of the merge steps in most cases.
  • Sample results of the classification or machine learning (ML), hill-climbing (HC), and merging steps are shown in FIGS. 9A-9H.
  • the machine learning output ( FIG. 9A ) shows the mesh labeling for the boundary vertices and the interior vertices.
  • The second mesh (FIG. 9B) is the result of the hill-climbing. As shown in FIG. 9B, the hill-climbing over-segments each tooth, but in general there is a reduced chance of a segment being shared across teeth. This is also a behavior associated with the choice of the parameter λ.
  • the meshes displayed in FIGS. 9C-9H indicate iteratively the result of each merge step.
  • Merge 1 corresponds to discarding boundaries with a score less than 5, merge 2 to scores less than 10, and so on.
  • the correct segmentation was achieved at step 6.
  • Successive merge steps indicate how aggressively nearby segments are merged and, therefore, in some cases changes are only noticeable at later merge steps.
  • the score used for merging can represent, for example, the number of points classified as a boundary from the point classification algorithm within a particular vicinity of a boundary determined from the hill-climbing algorithm.
  • An exemplary score of 5 means at least 5 points classified as a boundary are within a particular vicinity of a boundary determined by the hill-climbing algorithm.
  • the particular vicinity used can be based upon, for example, empirical evidence, the typical width or size of a true boundary, or other factors.
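  • A sketch of the hierarchical merge in Python follows, assuming boundary scores have been counted as just described; for brevity it omits the score correction after each merge round, which the full algorithm performs:

    def hierarchical_merge(segments, boundary_scores, thresholds=range(5, 55, 5)):
        """Discard boundaries scoring below each threshold (5, 10, ... 50)
        and merge the segments they separated, via union-find."""
        parent = {s: s for s in segments}

        def find(s):
            while parent[s] != s:
                s = parent[s]
            return s

        for thresh in thresholds:              # gradually raise the threshold
            for (a, b), score in boundary_scores.items():
                if score < thresh and find(a) != find(b):
                    parent[find(a)] = find(b)  # merge the two segments
        return {s: find(s) for s in segments}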
  • In some cases, the best result is achieved earlier than the sixth merging step, and it is possible to get an over-merged result at step 6.
  • an under-merged or over-segmented result can occur even after step 6.
  • Using a cursor control device and user interface, a user could manually select ("click on") and merge the segments that require merging to extract the teeth correctly, for example.
  • the final segmented digital 3D model can then be stored in an electronic storage device for later processing.
  • Table 6 provides exemplary pseudocode for implementing the algorithm for merging hill-climbing segmentation with point classification (machine learning) segmentation.
  • Table 7 provides exemplary pseudocode for implementing the algorithm for merging hill-climbing segmentation with interstice detection segmentation.
  • Output Segmented mesh, where for each vertex v i in the mesh, a label l i corresponding to the segment to which that vertex belongs is assigned. Assumptions: Teeth are roughly pointing up along the Y axis. The mesh has been truncated below the gum line. Method steps: 1 Convert hill-climbing label assignments to boundaries between segments and interior vertices of segments resulting in a set of boundaries B 2 Eliminate small boundary prediction regions in the machine learning prediction by way of morphological erosion.
  • Output: Segmented mesh, where for each vertex v_i in the mesh, a label l_i corresponding to the segment to which that vertex belongs is assigned. Assumptions: Teeth are roughly pointing up along the Y axis. Method steps: 1 Each detected intersticial space defines a plane in the 3D space of the mesh. For each segment found in hill-climbing, compute on which side of each interstice plane the majority of its vertices reside. This is referred to as the "polarity" of each segment with respect to each intersticial plane. 2 Merge together segments that have the same polarities with respect to nearby intersticial planes.
  • the algorithm can merge the hill-climbing segmentation with user input identifying centroids of teeth (step 31 in method 22).
  • This segmentation method requires input from a user at the beginning of the process.
  • Using input device 18, such as a cursor control device, the user identifies the centroid of each tooth in the digital 3D model of teeth.
  • the centroid can include the actual centroid or an estimation of the centroid as perceived by the user.
  • This user entered information is used as the initialization for the step of the segmentation which merges the hill-climbing segments using the Kmeans method.
  • These user-identified centroids need to be close to actual centroids of the teeth in order for the segmentation process to work well and not require post-processing by the user.
  • The only parameter required for this method to be trained is ε in the SVR for normal direction extraction, described above for the alignment process.
  • the user-entered information to identify the centroids of each tooth is then merged with the results of the hill-climbing segmentation using the Kmeans clustering method.
  • the vertices should first be replaced by the corresponding local maximum from the hill-climbing step.
  • Kmeans method is applied on the new set of vertices to cluster them in k segments, where k is equal to the number of inputs ("clicks") of the user at the beginning of the process.
  • the user's inputs (estimation of teeth centroids) are used as the centroid starting locations of the Kmeans method.
  • This merging method can result in successful segmentation as follows: clustering is applied on the local maxima (mostly located on the teeth cusps) and not the full mesh, yielding accuracy and speed benefits.
  • The local maxima of larger clusters carry higher weight in the Kmeans method, and the centroid starting locations entered by the user help avoid convergence to other possible local optima of the Kmeans method.
  • Table 8 provides exemplary pseudocode for implementing the algorithm for merging hill-climbing segmentation with user-entered estimations of teeth centroids.
  • Input a 3D mesh with a set of vertices V specified in 3D coordinate system X,Y and Z, the segmentation result from the hill-climbing segmentation algorithm, in which the local maximum coordination that has been reached by each vertex is reported, and the estimation of centroids of teeth, which has been received from the user at the beginning of the process.
  • Output: Segmented mesh, where to each vertex v_i in the mesh, a label l_i corresponding to the segment to which that vertex belongs is assigned.
  • Method steps 1 Represent/substitute each vertex with the local maximum it has reached. 2 Apply the Kmeans clustering method on the new vertices, with the user's centroid estimation as the centroid starting locations of the Kmeans. 3 Assign all vertices to a segment assigned to the corresponding local maximum value in step 2. 4 Assign the appropriate cluster labels to each vertex.
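  • The Kmeans merge of Table 8 can be sketched with scikit-learn by seeding the cluster centres with the user's clicks; the array shapes are assumptions:

    import numpy as np
    from sklearn.cluster import KMeans

    def merge_with_centroids(local_maxima, user_centroids):
        """Cluster each vertex's hill-climbing local maximum (one (x, y, z)
        row per vertex) into k segments, with k equal to the number of
        user clicks and the clicks as the starting centroids."""
        seeds = np.asarray(user_centroids, dtype=float)
        km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1)
        return km.fit_predict(np.asarray(local_maxima, dtype=float))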
  • a single scan of a patient's dentition for a person presumed to have tooth wear is compared to an estimated reconstruction of the original "virgin" tooth shape for each tooth or particular teeth (i.e., the shape before any tooth wear occurred), as predicted by a mathematical model as applied to the current patient's teeth.
  • This approach can be summarized according to the following steps: a 3D scan of the patient's teeth is acquired; the individual teeth are segmented from one another using a segmentation algorithm; for each tooth comprising the patient's dentition, the original "virgin" shape of the tooth is predicted using a mathematical model of tooth shape learned from a large database of teeth; and the current scan of each tooth is compared to the predicted virgin tooth shape in order to assess the degree of wear that has occurred.
  • FIG. 10 is a flow chart of a method 50 to implement this approach of estimating tooth wear using 3D scans.
  • Method 50 can be implemented in software or firmware modules, for example, for execution by processor 20.
  • Method 50 can alternatively be implemented in hardware modules or a combination of software and hardware.
  • Method 50 includes receiving a segmented model (step 52), for example from the segmentation methodology described above.
  • a tooth is selected from the model (step 54), and the original "virgin" shape of the tooth is predicted based upon a database of tooth scans (step 56).
  • a large database of 3D tooth models can be accessed and used, where the degree of wear in each of the teeth in the database is known.
  • An aggregate generic mathematical model can be formed of the canonical original "virgin" shape of a tooth location. This can be accomplished for each tooth.
  • Multiple "virgin" models can be formed, which can include clustering of the space of tooth shape for that particular tooth location.
  • the original "virgin" shape of a tooth from the current scan can be predicted.
  • One approach is as follows. First, the appropriate model from the database for the current tooth is determined, since multiple clustered models may exist for each tooth location, depending on the variability of tooth shape at this bite location. If multiple models exist for this tooth, the appropriate model is determined by computing the similarity of this tooth to each model. Then, a mapping is computed from the current tooth to the model tooth shape. This mapping can be accomplished through use of a non-rigid registration algorithm. Then, once the new tooth is mapped to the model space, its original "virgin" shape is associated with that of the model. Using the inverse of the mapping estimated previously, this model is mapped back to the space of the current tooth, resulting in a prediction of the original shape of this tooth.
  • The original "virgin" tooth shape can be compared with the actual current shape in order to assess the amount of wear exhibited (step 60).
  • these two models may need to be registered (step 58), using a 3D registration algorithm, so that they are aligned with one another as closely as possible.
  • An example of a registration algorithm is disclosed in the application referenced above.
  • the areas in which the actual and predicted "virgin" models are in disagreement must be located and compared (step 62). These represent the areas of the tooth that have been worn.
  • Wear areas can optionally be estimated using local smooth contours on the tooth (step 63), and this estimation can be used to supplement the estimated wear areas from step 62.
  • local discontinuities in the tooth surface can be detected by analysis of the smooth contours in the digital model localized on or near the top surface of the tooth. A discontinuity in the model of that surface satisfying particular criteria can tend to indicate a worn area.
  • the results of the estimated tooth wear, such as the heights or volumes of the worn areas, can be computed and displayed (step 64).
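  • One simple way to realize steps 60 through 64 is a nearest-neighbour surface comparison; the sketch below, with an assumed tolerance, flags vertices of the registered current tooth that deviate from the predicted virgin shape and reports their wear depths:

    import numpy as np
    from scipy.spatial import cKDTree

    def wear_areas(current_vertices, virgin_vertices, tol=0.1):
        """Distance from each current-tooth vertex to the nearest point
        on the predicted virgin shape; vertices beyond `tol` (an assumed
        threshold) are flagged as worn areas."""
        dist, _ = cKDTree(virgin_vertices).query(current_vertices)
        worn = dist > tol
        return worn, dist[worn]  # worn-vertex mask and per-vertex wear depth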
  • FIG. 11 is a diagram of a user interface 68 for displaying estimated tooth wear, for example on display device 16.
  • User interface 68 includes a section 70 for displaying a predicted original shape of the selected tooth, a section 72 for displaying the actual shape as shown in a 3D scan, and a section 74 for displaying a comparison of the shapes to indicate tooth wear.
  • Section 74 can display, for example, the model of the actual shape superimposed on the model of the predicted original shape.
  • Another approach involves predicting tooth wear by determining, from a single 3D scan of a person's dentition, a score or rating of the amount of tooth wear for that person.
  • This approach takes advantage of a large number of annotated 3D scans of dentitions that have been given scores related to tooth wear. For example, the Smith and Knight tooth wear index could be used for annotating scans. Given these annotations, this approach learns a mapping function that uses low-level mesh features such as curvature, spin images, and other features to predict the Smith and Knight tooth wear index for a particular tooth.
  • Such an approach has the benefit of predicting the onset of conditions such as bruxism at an early stage, without the requirement of multiple scans spread out over a long period of time.
  • FIG. 12 is a flow chart of a method 80 to implement this approach of predicting tooth wear using 3D scans.
  • Method 80 can be implemented in software or firmware modules, for example, for execution by processor 20.
  • Method 80 can alternatively be implemented in hardware modules or a combination of software and hardware.
  • Method 80 includes optionally receiving a segmented model (step 82), for example from the segmentation methodology described above. Segmentation is optional in that mapping can be applied to the full mesh or a portion of it, representing a full or partial arch of teeth in the digital model of the teeth.
  • a tooth or arch is selected from the model (step 84), and a mapping function is applied to the selected tooth or arch to predict tooth wear (step 86).
  • the results of the predicted tooth wear can be displayed (step 88).
  • Step 86 can be implemented as follows.
  • Many low level mesh features can be computed using well known computer vision and geometric methods. Some examples are features such as multi-scale surface curvature, singular values extracted from Principal Component Analysis of local shape, shape diameter, distances from medial surface points, average geodesic distances, shape contexts, and spin images.
  • Given the ensemble feature set X for a mesh, a function f is defined as follows: f: X -> {0,1,2,3}; that is, the function f maps the set of features X to a Smith and Knight tooth wear index, which takes one of 4 values. In terms of classification, this is a 4-class classification problem. Many different types of classifiers are possible to model this function f.
  • This function can be one or a combination of many classification methods such as support vector machines, decision trees, conditional random fields, or other methods.
  • Support vector machines are known to provide a high degree of separation between classes with the use of kernels, by comparing the features in the appropriate kernel space.
  • Decision trees and ensemble versions of decision trees/stumps such as boosting, bootstrapping, and other versions can be used to combine multiple weak classifiers into strong classifiers with very high performance.
  • Conditional random fields provide good performance by taking neighborhood and group labeling into account. This can be used for localized classification, such as classifying the top portion of teeth, i.e. the cusps. Bag-of-features approaches can also be used to classify objects globally by the weighted assimilation of local mesh features computed over the mesh.
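  • As a sketch of the 4-class mapping f, the following Python fragment trains a support vector machine on per-mesh ensemble features against annotated Smith and Knight indices; the data here is synthetic, standing in for the annotated scan database:

    import numpy as np
    from sklearn.svm import SVC

    # Synthetic stand-ins: one ensemble-feature row per scanned tooth or
    # arch, each annotated with a Smith and Knight index in {0, 1, 2, 3}.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 64))
    y = rng.integers(0, 4, size=120)

    f = SVC(kernel='rbf').fit(X, y)     # the mapping f: X -> {0, 1, 2, 3}
    predicted_index = f.predict(X[:1])  # wear index for a query mesh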
  • FIG. 13 is a diagram of a user interface 90 displaying predicted tooth wear, for example on display device 16.
  • User interface 90 includes a section 92 for displaying predicted tooth wear by showing, for example, an annotated scan of the selected tooth.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Rheumatology (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Image Analysis (AREA)

Claims (15)

  1. A method for estimating tooth wear, comprising the steps of:
    receiving a 3D digital model of teeth;
    segmenting the 3D digital model of teeth to identify individual teeth within the 3D digital model of teeth;
    selecting a digital model of a tooth (40) from the segmented 3D digital model of teeth;
    predicting an original shape of the selected tooth to obtain a digital model of a predicted original shape; and
    comparing the digital model of the tooth (40) with the digital model of the predicted original shape to estimate wear areas in the selected tooth (40).
  2. The method of claim 1, wherein the segmenting step comprises:
    performing a first segmentation method that over-segments at least some of the teeth within the 3D digital model of teeth;
    performing a second segmentation method that classifies points within the 3D digital model of teeth as being either on an interior of a tooth or on a boundary between teeth in the 3D digital model of teeth; and
    combining results of the first and second segmentation methods to generate a segmented 3D digital model of teeth.
  3. The method of claim 2, wherein the first segmentation method comprises a geometric hill-climbing approach that takes into account topological structures, and wherein the second segmentation method comprises machine learning or interstice detection.
  4. The method of claim 1, wherein the predicting step comprises selecting a digital model of a tooth from known tooth shapes based upon characteristics of a patient corresponding to the 3D digital model of teeth.
  5. The method of claim 1, further comprising estimating wear areas in the selected tooth by detecting discontinuities in the digital model of the tooth (40) that satisfy particular criteria.
  6. The method of claim 1, wherein the comparing step comprises detecting surface differences between the digital model of the tooth (40) and the digital model of the predicted original shape.
  7. The method of claim 1, further comprising displaying an indication of the estimated wear areas (74).
  8. The method of any one of the preceding claims, wherein a patient's tooth wear is estimated from a single scan of the patient's dentition.
  9. A system (10) for estimating tooth wear, comprising:
    a module for receiving a 3D digital model of teeth;
    a module for segmenting the 3D digital model of teeth to identify individual teeth within the 3D digital model of teeth;
    a module for selecting a digital model of a tooth (40) from the segmented 3D digital model of teeth;
    a module for predicting an original shape of the selected tooth to obtain a digital model of a predicted original shape; and
    a module for comparing the digital model of the tooth (40) with the digital model of the predicted original shape to estimate wear areas in the tooth.
  10. The system (10) of claim 9, wherein the segmenting module comprises:
    a module for performing a first segmentation method that over-segments at least some of the teeth within the 3D digital model of teeth;
    a module for performing a second segmentation method that classifies points within the 3D digital model of teeth as being either on an interior of a tooth or on a boundary between teeth in the 3D digital model of teeth; and
    a module for combining results of the first and second segmentation methods to generate a segmented 3D digital model of teeth.
  11. The system (10) of claim 9, wherein the predicting module comprises selecting a digital model of a tooth (40) from known tooth shapes based upon characteristics of a patient corresponding to the 3D digital model of teeth.
  12. The system (10) of claim 9, further comprising a module for estimating wear areas in the selected tooth by detecting discontinuities in the digital model of the tooth that satisfy particular criteria.
  13. The system (10) of claim 9, wherein the comparing module comprises a module for detecting surface differences between the digital model of the tooth and the digital model of the predicted original shape.
  14. The system (10) of claim 9, further comprising a module (68) for displaying an indication of the estimated wear areas (74).
  15. The system (10) of any one of claims 9 to 14, wherein the system (10) is adapted to estimate a patient's tooth wear from a single scan of the patient's dentition.
EP16743844.9A 2015-01-30 2016-01-14 Estimating and predicting tooth wear using intra-oral 3D scans Active EP3250111B1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20166109.7A EP3708069A1 (fr) 2015-01-30 2016-01-14 Estimation et prédiction de l'usure des dents au moyen d'images scanner 3d intrabuccales

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/609,529 US9737257B2 (en) 2015-01-30 2015-01-30 Estimating and predicting tooth wear using intra-oral 3D scans
PCT/US2016/013329 WO2016122890A1 (fr) 2016-01-14 2016-08-04 Estimating and predicting tooth wear using intra-oral 3D scans

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP20166109.7A Division EP3708069A1 (fr) 2015-01-30 2016-01-14 Estimating and predicting tooth wear using intra-oral 3D scans
EP20166109.7A Division-Into EP3708069A1 (fr) 2015-01-30 2016-01-14 Estimating and predicting tooth wear using intra-oral 3D scans

Publications (3)

Publication Number Publication Date
EP3250111A1 (fr) 2017-12-06
EP3250111A4 (fr) 2018-07-25
EP3250111B1 (fr) 2020-06-17

Family

ID=56544172

Family Applications (2)

Application Number Title Priority Date Filing Date
EP20166109.7A Pending EP3708069A1 (fr) 2015-01-30 2016-01-14 Estimating and predicting tooth wear using intra-oral 3D scans
EP16743844.9A Active EP3250111B1 (fr) 2015-01-30 2016-01-14 Estimating and predicting tooth wear using intra-oral 3D scans

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP20166109.7A Pending EP3708069A1 (fr) 2015-01-30 2016-01-14 Estimating and predicting tooth wear using intra-oral 3D scans

Country Status (5)

Country Link
US (2) US9737257B2 (fr)
EP (2) EP3708069A1 (fr)
AU (1) AU2016212031B2 (fr)
DK (1) DK3250111T3 (fr)
WO (1) WO2016122890A1 (fr)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8108189B2 (en) * 2008-03-25 2012-01-31 Align Technologies, Inc. Reconstruction of non-visible part of tooth
EP3679888B1 (fr) 2011-02-23 2021-10-20 3Shape A/S Method of modifying the gingival part of a virtual model of a set of teeth
US9737257B2 (en) * 2015-01-30 2017-08-22 3M Innovative Properties Company Estimating and predicting tooth wear using intra-oral 3D scans
US9814549B2 (en) * 2015-09-14 2017-11-14 DENTSPLY SIRONA, Inc. Method for creating flexible arch model of teeth for use in restorative dentistry
WO2017207454A1 (fr) 2016-05-30 2017-12-07 3Shape A/S Predicting the development of a dental condition
EP3496654B1 (fr) * 2016-08-15 2020-05-06 Trophy Dynamic dental arch map
EP4241725A3 (fr) * 2017-03-20 2023-11-01 Align Technology, Inc. Generating a virtual representation of an orthodontic treatment of a patient
CN108986123A (zh) * 2017-06-01 2018-12-11 无锡时代天使医疗器械科技有限公司 Method for segmenting a three-dimensional digital dental model
US10327693B2 (en) 2017-07-07 2019-06-25 3M Innovative Properties Company Tools for tracking the gum line and displaying periodontal measurements using intra-oral 3D scans
TWI644655B (zh) 2017-11-23 2018-12-21 勘德股份有限公司 Digital dental model segmentation method and digital dental model segmentation device
US10916053B1 (en) * 2019-11-26 2021-02-09 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
US11403813B2 (en) 2019-11-26 2022-08-02 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
US11270523B2 (en) * 2017-11-29 2022-03-08 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
US11553988B2 (en) 2018-06-29 2023-01-17 Align Technology, Inc. Photo of a patient with new simulated smile in an orthodontic treatment review software
EP3899973A1 (fr) 2018-12-21 2021-10-27 The Procter & Gamble Company Apparatus and method for operating a personal grooming appliance or household cleaning appliance
US11488062B1 (en) * 2018-12-30 2022-11-01 Perimetrics, Inc. Determination of structural characteristics of an object
CN109864829B (zh) * 2019-01-28 2024-06-18 苏州佳世达光电有限公司 Scanning system and scanning method
US11030801B2 (en) 2019-05-17 2021-06-08 Standard Cyborg, Inc. Three-dimensional modeling toolkit
JP6800358B1 (ja) * 2020-02-03 2020-12-16 株式会社松風 Design method and design device for a dental prosthetic device
CN112991273B (zh) * 2021-02-18 2022-12-16 山东大学 Automatic detection method and system for orthodontic features of three-dimensional tooth models
US20240081946A1 (en) * 2022-09-08 2024-03-14 Enamel Pure Systems and methods for dental treatment and remote oversight
CN117934562A (zh) * 2022-10-26 2024-04-26 上海时代天使医疗器械有限公司 Method for detecting morphological differences between three-dimensional digital tooth models

Family Cites Families (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE232701T1 (de) * 1995-07-21 2003-03-15 Cadent Ltd Method and system for acquiring three-dimensional images of teeth
US6152731A (en) * 1997-09-22 2000-11-28 3M Innovative Properties Company Methods for use in dental articulation
US6227850B1 (en) * 1999-05-13 2001-05-08 Align Technology, Inc. Teeth viewing system
US6406292B1 (en) * 1999-05-13 2002-06-18 Align Technology, Inc. System for determining final position of teeth
US6227851B1 (en) * 1998-12-04 2001-05-08 Align Technology, Inc. Manipulable dental model system for fabrication of a dental appliance
TW461806B (en) 1999-08-02 2001-11-01 Advance Kk Method of manufacturing dental prosthesis, method of placing object and measuring device
US6648640B2 (en) * 1999-11-30 2003-11-18 Ora Metrix, Inc. Interactive orthodontic care system based on intra-oral scanning of teeth
US7160110B2 (en) 1999-11-30 2007-01-09 Orametrix, Inc. Three-dimensional occlusal and interproximal contact detection and display using virtual tooth models
US6371761B1 (en) * 2000-03-30 2002-04-16 Align Technology, Inc. Flexible plane for separating teeth models
JP2004504077A (ja) 2000-04-19 2004-02-12 オラメトリックス インコーポレイテッド Interactive orthodontic care system based on intra-oral scanning of teeth
US7471821B2 (en) 2000-04-28 2008-12-30 Orametrix, Inc. Method and apparatus for registering a known digital object to scanned 3-D model
US7027642B2 (en) * 2000-04-28 2006-04-11 Orametrix, Inc. Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects
US7040896B2 (en) 2000-08-16 2006-05-09 Align Technology, Inc. Systems and methods for removing gingiva from computer tooth models
US7080979B2 (en) * 2001-04-13 2006-07-25 Orametrix, Inc. Method and workstation for generating virtual tooth models from three-dimensional tooth data
US7362890B2 (en) * 2001-05-24 2008-04-22 Astra Tech Inc. Registration of 3-D imaging of 3-D objects
US20030040009A1 (en) * 2001-08-14 2003-02-27 University Of Southern California Saliva-based methods for preventing and assessing the risk of diseases
US7077647B2 (en) * 2002-08-22 2006-07-18 Align Technology, Inc. Systems and methods for treatment analysis by teeth matching
US7156661B2 (en) * 2002-08-22 2007-01-02 Align Technology, Inc. Systems and methods for treatment analysis by teeth matching
US7029279B2 (en) * 2002-10-07 2006-04-18 Mark Schomann Prosthodontia system
US7736857B2 (en) * 2003-04-01 2010-06-15 Proactive Oral Solutions, Inc. Caries risk test for predicting and assessing the risk of disease
US7695278B2 (en) 2005-05-20 2010-04-13 Orametrix, Inc. Method and system for finding tooth features on a virtual three-dimensional model
US20070024611A1 (en) 2005-07-27 2007-02-01 General Electric Company System and method for aligning three-dimensional models
US7605817B2 (en) 2005-11-09 2009-10-20 3M Innovative Properties Company Determining camera motion
US7840042B2 (en) * 2006-01-20 2010-11-23 3M Innovative Properties Company Superposition for visualization of three-dimensional data acquisition
US20070207441A1 (en) * 2006-03-03 2007-09-06 Lauren Mark D Four dimensional modeling of jaw and tooth dynamics
US8275180B2 (en) 2007-08-02 2012-09-25 Align Technology, Inc. Mapping abnormal dental references
US8099268B2 (en) * 2007-05-25 2012-01-17 Align Technology, Inc. Tooth modeling
US8075306B2 (en) 2007-06-08 2011-12-13 Align Technology, Inc. System and method for detecting deviations during the course of an orthodontic treatment to gradually reposition teeth
CN101689309A (zh) * 2007-06-29 2010-03-31 3M创新有限公司 Synchronized views of video data and three-dimensional model data
KR100971762B1 (ko) 2008-08-28 2010-07-26 주식회사바텍 Method and apparatus for generating virtual teeth, and recording medium storing a program implementing the method
US8244028B2 (en) 2010-04-30 2012-08-14 Align Technology, Inc. Virtual cephalometric imaging
DE102011010975A1 (de) * 2011-02-10 2012-08-16 Martin Tank Method and analysis system for the geometric analysis of scan data of oral structures
EP3569167B1 (fr) * 2011-02-18 2021-06-16 3M Innovative Properties Co. Orthodontic digital setups
JP4997340B1 (ja) 2011-08-23 2012-08-08 株式会社松風 Occlusal wear evaluation device, occlusal wear evaluation method, and occlusal wear evaluation program
EP2626036B1 (fr) * 2012-02-10 2018-04-25 3Shape A/S Virtually designing a post and core restoration using a digital 3D shape
US9626462B2 (en) * 2014-07-01 2017-04-18 3M Innovative Properties Company Detecting tooth wear using intra-oral 3D scans
US10192003B2 (en) * 2014-09-08 2019-01-29 3M Innovative Properties Company Method of aligning intra-oral digital 3D models
US11147652B2 (en) * 2014-11-13 2021-10-19 Align Technology, Inc. Method for tracking, predicting, and proactively correcting malocclusion and related issues
US9737257B2 (en) * 2015-01-30 2017-08-22 3M Innovative Properties Company Estimating and predicting tooth wear using intra-oral 3D scans
US9770217B2 (en) * 2015-01-30 2017-09-26 Dental Imaging Technologies Corporation Dental variation tracking and prediction
US10032271B2 (en) * 2015-12-10 2018-07-24 3M Innovative Properties Company Method for automatic tooth type recognition from 3D scans

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2349062A1 (fr) * 2008-09-19 2011-08-03 3M Innovative Properties Company Methods and systems for determining the positions of orthodontic appliances
EP2604220A1 (fr) * 2010-08-10 2013-06-19 Hidefumi Ito Information processing device, information processing method, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YOKESH KUMAR ET AL: "Automatic Feature Identification in Dental Meshes", COMPUTER-AIDED DESIGN AND APPLICATIONS, vol. 9, no. 6, 1 January 2012 (2012-01-01), pages 747 - 769, XP055388319, DOI: 10.3722/cadaps.2012.747-769 *

Also Published As

Publication number Publication date
WO2016122890A1 (fr) 2016-08-04
EP3250111A1 (fr) 2017-12-06
EP3708069A1 (fr) 2020-09-16
EP3250111A4 (fr) 2018-07-25
US10405796B2 (en) 2019-09-10
AU2016212031B2 (en) 2018-05-10
AU2016212031A1 (en) 2017-08-10
US20170311873A1 (en) 2017-11-02
US20160220173A1 (en) 2016-08-04
DK3250111T3 (da) 2020-09-21
US9737257B2 (en) 2017-08-22

Similar Documents

Publication Publication Date Title
US10405796B2 (en) Estimating and predicting tooth wear using intra-oral 3D scans
US10410346B2 (en) Detecting tooth wear using intra-oral 3D scans
Silva et al. Automatic segmenting teeth in X-ray images: Trends, a novel data set, benchmarking and future perspectives
US20180300877A1 (en) Method for automatic tooth type recognition from 3d scans
US7945080B2 (en) Method for visualizing damage in the myocardium
Hogeweg et al. Clavicle segmentation in chest radiographs
Ribeiro et al. Handling inter-annotator agreement for automated skin lesion segmentation
CN113782184A (zh) 一种基于面部关键点与特征预学习的脑卒中辅助评估系统
Dharmalingham et al. A model based segmentation approach for lung segmentation from chest computer tomography images
Sundararajan et al. A multiresolution support vector machine based algorithm for pneumoconiosis detection from chest radiographs
Mortaheb et al. Metal artifact reduction and segmentation of dental computerized tomography images using least square support vector machine and mean shift algorithm
CN116958169A (zh) 一种三维牙颌模型牙齿分割方法
US20220122261A1 (en) Probabilistic Segmentation of Volumetric Images
Dhar et al. Automatic tracing of mandibular canal pathways using deep learning
Tatjana et al. Computer-aided Analysis and Interpretation of HRCT Images of the Lung
Zhu et al. ViSTooth: A Visualization Framework for Tooth Segmentation on Panoramic Radiograph
Do Tran et al. Global Analysis of Three-Dimensional Shape Symmetry: Human Skulls (Part II)
Atuhaire Reconstruction of three-dimensional facial geometric features related to fetal alcohol syndrome using adult surrogates
Sivasankaran et al. A Rapid Advancing Image Segmentation Approach in Dental to Predict Cryst.
Gaikwad et al. Paper classification of dental images using teeth instance segmentation for dental treatment
Samanu et al. Semi-automatic spine extraction for disc space narrowing diagnosis
Wu Analysis of Human Face Shape Abnormalities Using Machine Learning
Su et al. A Knowledge-Based Lung Nodule Detection System for Helical CT Images

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20170728

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: SCOTT, SHANNON D.

Inventor name: SIVALINGAM, RAVISHANKAR

Inventor name: STANKIEWICZ, BRIAN J.

Inventor name: SABELLI, ANTHONY J.

Inventor name: EID, AYA

Inventor name: LORENTZ, ROBERT D.

Inventor name: SOMASUNDARAM, GURUPRASAD

Inventor name: RIBNICK, EVAN J.

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20180625

RIC1 Information provided on ipc code assigned before grant

Ipc: A61B 5/00 20060101AFI20180619BHEP

Ipc: G06F 19/00 20110101ALI20180619BHEP

Ipc: A61C 19/04 20060101ALI20180619BHEP

Ipc: G06T 7/11 20170101ALI20180619BHEP

Ipc: G06T 7/00 20170101ALI20180619BHEP

Ipc: A61C 9/00 20060101ALI20180619BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190614

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/00 20170101ALI20191122BHEP

Ipc: A61C 9/00 20060101ALI20191122BHEP

Ipc: G16H 30/40 20180101ALI20191122BHEP

Ipc: A61B 5/00 20060101AFI20191122BHEP

Ipc: G16H 50/50 20180101ALI20191122BHEP

INTG Intention to grant announced

Effective date: 20191220

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016038253

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1280354

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200715

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20200915

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200917

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200918

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200917

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1280354

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201019

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201017

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016038253

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20210318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20210114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210114

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210131

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210114

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201017

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210131

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DK

Payment date: 20221220

Year of fee payment: 8

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20160114

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 602016038253

Country of ref document: DE

Owner name: SOLVENTUM INTELLECTUAL PROPERTIES CO. (N.D.GES, US

Free format text: FORMER OWNER: 3M INNOVATIVE PROPERTIES COMPANY, SAINT PAUL, MINN., US

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231219

Year of fee payment: 9