WO2021147333A1 - Method for generating images of dental orthodontic treatment effects using an artificial neural network - Google Patents
Method for generating images of dental orthodontic treatment effects using an artificial neural network
- Publication number
- WO2021147333A1 (PCT/CN2020/113789)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- orthodontic treatment
- neural network
- patient
- tooth
- digital model
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C7/00—Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
- A61C7/002—Orthodontic computer assisted systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
- A61C9/0053—Optical means or methods, e.g. scanning the teeth by a laser or light beam
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
- G06T7/0016—Biomedical image inspection using an image reference approach involving temporal comparison
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Definitions
- the present application generally relates to a method for generating images of the effects of orthodontic treatment using artificial neural networks.
- One aspect of the present application provides a method for generating images of orthodontic treatment effects using artificial neural networks, comprising: obtaining a toothy face photo of a patient before orthodontic treatment; using a trained feature-extraction deep neural network to extract a mouth region mask and a first set of tooth contour features from the toothy face photo of the patient before orthodontic treatment; obtaining a first three-dimensional digital model representing the patient's original tooth layout and a second three-dimensional digital model representing the patient's target tooth layout; obtaining a first pose of the first three-dimensional digital model based on the first set of tooth contour features and the first three-dimensional digital model; obtaining a second set of tooth contour features based on the second three-dimensional digital model in the first pose; and using a trained picture-generation deep neural network to generate a toothy face image of the patient after orthodontic treatment based on the pre-treatment toothy face photo, the mask, and the second set of tooth contour features.
- the picture generation deep neural network may be a CVAE-GAN network.
- the sampling method adopted by the CVAE-GAN network may be a differentiable sampling method.
- the feature extraction deep neural network may be a U-Net network.
- the first pose may be obtained from the first set of tooth contour features and the first three-dimensional digital model using a nonlinear projection optimization method, and the second set of tooth contour features may be obtained by projecting the second three-dimensional digital model in the first pose.
- the method for generating an image of the effect of orthodontic treatment using an artificial neural network may further include: using a face key point matching algorithm to crop a first mouth region picture from the toothy face photo of the patient before orthodontic treatment, where the mouth region mask and the first set of tooth contour features are extracted from the first mouth region picture.
- the toothy face photo of the patient before orthodontic treatment may be a complete frontal face photo of the patient.
- the edge contour of the mask matches the inner edge contour of the lips in the toothy face photo of the patient before orthodontic treatment.
- the first set of tooth contour features may include the edge contour lines of the teeth visible in the toothy face photo of the patient before orthodontic treatment, and the second set of tooth contour features may include the edge contour lines of the teeth of the second three-dimensional digital model in the first pose.
- the tooth contour feature may be a tooth edge feature map.
- FIG. 1 is a schematic flowchart of a method for generating an appearance image of a patient after orthodontic treatment by using an artificial neural network in an embodiment of the application;
- Figure 2 is a picture of the first mouth area in an embodiment of the application
- FIG. 3 is a mask generated based on the first mouth region picture shown in FIG. 2 in an embodiment of the application;
- FIG. 4 is a first tooth edge feature map generated based on the first mouth region picture shown in FIG. 2 in an embodiment of the application;
- FIG. 5 is a structural diagram of a feature extraction deep neural network in an embodiment of this application.
- FIG. 5A schematically shows the structure of the convolutional layer of the feature extraction deep neural network shown in FIG. 5 in an embodiment of the present application
- FIG. 5B schematically shows the structure of the deconvolution layer of the feature extraction deep neural network shown in FIG. 5 in an embodiment of the present application
- Fig. 6 is the second tooth edge feature map in an embodiment of the application.
- FIG. 7 is a structural diagram of a deep neural network used to generate pictures in an embodiment of this application.
- Fig. 8 is a picture of the second mouth area in an embodiment of the application.
- the inventors of the present application have found through extensive research that, with the rise of deep learning, generative adversarial network technology in some fields can already produce pictures indistinguishable from real ones. In the field of orthodontics, however, a robust deep-learning-based image generation technique is still lacking. After extensive design and experimental work, the inventors developed a method of using artificial neural networks to generate an image of the patient's appearance after orthodontic treatment.
- FIG. 1 is a schematic flowchart of a method 100 for generating an appearance image of a patient after orthodontic treatment by using an artificial neural network in an embodiment of the application.
- the toothy face photo of the patient before orthodontic treatment may be a complete frontal photo of the patient's toothy smile; such a photo reflects the difference before and after orthodontic treatment most clearly. The photo may also show only part of the face, and may be taken from angles other than the front.
- the face key point matching algorithm is used to crop the first mouth region picture from the toothy face photo of the patient before orthodontic treatment.
- compared with a full face photo, the mouth region picture has fewer features; performing subsequent processing on the mouth region picture alone simplifies computation, makes the artificial neural network easier to train, and makes it more robust.
- for face key point matching algorithms, see "Displaced Dynamic Expression Regression for Real-Time Facial Tracking and Animation" by Chen Cao, Qiming Hou, and Kun Zhou, ACM Transactions on Graphics (TOG) 33, 4 (2014), 43, and "One Millisecond Face Alignment with an Ensemble of Regression Trees" by Vahid Kazemi and Josephine Sullivan, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1867-1874, 2014.
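The application leaves the cropping step to the cited landmark algorithms. As an illustrative sketch only (assuming landmarks in the common 68-point scheme, where indices 48-67 cover the mouth), the crop box around detected mouth key points might be computed as:

```python
import numpy as np

def mouth_crop_box(landmarks: np.ndarray, margin: float = 0.4):
    """Bounding box around mouth landmarks, expanded by `margin`.

    `landmarks` is an (N, 2) array of mouth key points (e.g. points 48-67
    of the common 68-point face landmark scheme). Expanding the box lets
    the crop retain part of the nose and chin, as in the patent's Fig. 2.
    Returns (x0, y0, x1, y1) in pixel coordinates.
    """
    (x0, y0), (x1, y1) = landmarks.min(axis=0), landmarks.max(axis=0)
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin
    return (int(x0 - dx), int(y0 - dy), int(x1 + dx), int(y1 + dy))
```

The `margin` value is a free parameter here; the application notes only that the mouth region can be shrunk or enlarged as needed.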
- Fig. 2 is a picture of a patient's mouth area before orthodontic treatment in an embodiment of this application.
- although the mouth region picture in FIG. 2 includes part of the nose and part of the chin, as mentioned above, the mouth region can be reduced or enlarged according to specific needs.
- the trained feature extraction deep neural network is used to extract the mouth region mask and the first set of tooth contour features based on the first mouth region picture.
- the range of the mouth area mask may be defined by the inner edge of the lips.
- the mask may be a black and white bitmap, and the undesired part of the picture can be removed through the mask operation.
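A minimal sketch of the mask operation described above, assuming the mask is a 2-D black-and-white bitmap with white (nonzero) inside the lips' inner edge:

```python
import numpy as np

def apply_mask(picture: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the pixels selected by a black-and-white mask.

    `picture` is an (H, W, 3) image and `mask` an (H, W) bitmap with 255
    inside the mouth area and 0 elsewhere; masked-out pixels become black.
    """
    return picture * (mask[..., None] > 0)
```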
- FIG. 3 is a mouth area mask obtained based on the mouth area picture of FIG. 2 in an embodiment of this application.
- the tooth contour feature may include the contour line of each tooth visible in the picture, which is a two-dimensional feature.
- the tooth contour feature may be a tooth contour feature map, which only includes the contour information of the tooth.
- the tooth contour feature may be a tooth edge feature map, which not only includes the contour information of the tooth, but also the edge feature inside the tooth, for example, the edge line of the spot on the tooth. Please refer to FIG. 4, which is a tooth edge feature map obtained based on the mouth region image of FIG. 2 in an embodiment of this application.
- the feature extraction neural network may be a U-Net network. Please refer to FIG. 5, which schematically shows the structure of the feature extraction neural network 200 in an embodiment of the present application.
- the feature extraction neural network 200 may include a 6-layer convolution 201 (downsampling) and a 6-layer deconvolution 203 (upsampling).
- each layer of convolution 2011 may include a convolution layer 2013 (conv), a ReLU activation function 2015, and a maximum pooling layer 2017 (max pool).
- each layer of deconvolution 2031 may include a sub-pixel convolution layer 2033 (sub-pixel), a convolution layer 2035 (conv), and a ReLU activation function 2037.
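The sub-pixel layer in each deconvolution block performs pixel-shuffle upsampling. A minimal NumPy sketch of that rearrangement (the surrounding convolutions and ReLU are omitted):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Sub-pixel upsampling: rearrange a (C*r*r, H, W) feature map into
    (C, H*r, W*r). This channel-to-space shuffle is the operation behind
    the 'sub-pixel' layer of each upsampling block.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    out = x.reshape(c, r, r, h, w)        # split channels into (c, r, r)
    out = out.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return out.reshape(c, h * r, w * r)
```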
- the training set used to train the feature extraction neural network can be obtained as follows: obtain multiple toothy face photos; crop mouth region pictures from these photos; and, based on these mouth region pictures, generate their respective mouth region masks and tooth edge feature maps with the Photoshop lasso annotation tool. These mouth region pictures, together with the corresponding mouth region masks and tooth edge feature maps, serve as the training set for the feature extraction neural network.
- to improve the robustness of the feature extraction neural network, the training set can also be augmented, including Gaussian smoothing, rotation, and horizontal flipping.
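An illustrative augmentation sketch for one training pair. Parameter ranges are not specified in the application, and SciPy's `ndimage` is an assumed tool; the key point is that the same geometric transform must hit both the picture and its mask/feature-map labels so the pair stays aligned:

```python
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, mask: np.ndarray, angle: float,
            flip: bool, sigma: float):
    """Produce one augmented (picture, label) pair.

    Gaussian smoothing is applied to the photo only; rotation and the
    optional horizontal flip are applied to photo and label alike
    (nearest-neighbor for the label so it stays a clean bitmap).
    """
    img = ndimage.gaussian_filter(image, sigma=sigma) if sigma > 0 else image
    img = ndimage.rotate(img, angle, reshape=False, order=1)
    msk = ndimage.rotate(mask, angle, reshape=False, order=0)
    if flip:
        img, msk = img[:, ::-1], msk[:, ::-1]
    return img, msk
```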
- a first three-dimensional digital model representing the patient's original tooth layout is obtained.
- the patient's original tooth layout is the tooth layout before orthodontic treatment.
- a three-dimensional digital model representing the original tooth layout of the patient can be obtained by directly scanning the jaw of the patient.
- a solid model of the patient's jaw, such as a plaster model, can be scanned to obtain a three-dimensional digital model representing the patient's original tooth layout.
- the impression of the patient's jaw can be scanned to obtain a three-dimensional digital model representing the patient's original tooth layout.
- the projection optimization algorithm is used to compute the first pose of the first three-dimensional digital model that matches the first set of tooth contour features.
- the optimization goal of the nonlinear projection optimization algorithm can be expressed by equation (1):
- the correspondence between the points of the first three-dimensional digital model and the first group of tooth profile features can be calculated based on the following equation (2):
- t_i and t_j represent the tangent vectors at the two points p_i and p_j, respectively.
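Equations (1) and (2) are not reproduced in this text. As a heavily simplified sketch of the nonlinear projection optimization with fixed point correspondences (the application additionally re-estimates correspondences via Eq. (2) using the tangent vectors above), a rigid pose could be fitted like this:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fit_pose(model_pts: np.ndarray, image_pts: np.ndarray, project):
    """Find a rigid pose (3 rotation + 3 translation parameters) whose
    projection of 3-D contour points `model_pts` best matches detected 2-D
    contour points `image_pts` (one-to-one correspondences assumed).

    `project` maps camera-frame (N, 3) points to (N, 2) pixels; an
    orthographic or pinhole model would be supplied by the caller.
    """
    def residual(theta):
        R = Rotation.from_rotvec(theta[:3]).as_matrix()
        t = theta[3:]
        return (project(model_pts @ R.T + t) - image_pts).ravel()

    return least_squares(residual, np.zeros(6)).x
```

This is a sketch under stated assumptions, not the patented formulation; the actual objective (1) operates on contour features rather than pre-matched points.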
- a second three-dimensional digital model representing the target tooth layout of the patient is obtained.
- the method for obtaining a three-dimensional digital model representing the target tooth layout of the patient based on the three-dimensional digital model representing the patient's original tooth layout is well known in the industry, and will not be repeated here.
- the second three-dimensional digital model in the first pose is projected to obtain the second set of tooth contour features.
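One plausible way to obtain projected tooth contour lines, offered here as an assumption rather than the application's actual procedure, is silhouette-edge extraction on the posed mesh:

```python
import numpy as np

def silhouette_edges(vertices: np.ndarray, faces,
                     view_dir=np.array([0.0, 0.0, 1.0])):
    """Find silhouette edges of a triangle mesh under a viewing direction.

    An edge belongs to the silhouette when its two adjacent triangles face
    opposite ways relative to `view_dir`. Interior feature edges (which the
    second tooth edge feature map of Fig. 6 may also contain) are not
    handled by this sketch.
    """
    signs = {}  # edge (i, j), i < j  ->  list of face-normal dot products
    for tri in faces:
        a, b, c = vertices[list(tri)]
        d = float(np.cross(b - a, c - a) @ view_dir)
        for e in [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]:
            signs.setdefault(tuple(sorted(e)), []).append(d)
    return [e for e, ds in signs.items() if len(ds) == 2 and ds[0] * ds[1] < 0]
```

Projecting the vertices of the returned edges with the first pose would then yield the 2-D contour lines.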
- the second set of tooth contour features includes the edge contour lines of all teeth of the complete upper and lower dentition in the target tooth layout and in the first pose.
- FIG. 6 is the second tooth edge feature map in an embodiment of this application.
- the CVAE-GAN network can be used as a deep neural network for generating pictures.
- FIG. 7 schematically shows the structure of a deep neural network 300 for generating pictures in an embodiment of the present application.
- the deep neural network 300 for generating pictures includes a first sub-network 301 and a second sub-network 303.
- a part of the first sub-network 301 is responsible for processing shape, and the second sub-network 303 is responsible for processing texture.
- accordingly, the mask-area part of the toothy face photo of the patient before orthodontic treatment (or of the first mouth region picture) can be input into the second sub-network 303, so that the deep neural network 300 used for generating pictures can generate texture for the mask area; and the mask and the second tooth edge feature map are input into the first sub-network 301, so that the network can partition the mask area of the patient's toothy face picture into regions, that is, which part is teeth, which part is gums, which part is tooth gaps, which part is tongue (when the tongue is visible), and so on.
- the first sub-network 301 includes a 6-layer convolution 3011 (downsampling) and a 6-layer deconvolution 3013 (upsampling).
- the second sub-network 303 includes a 6-layer convolution 3031 (downsampling).
- the deep neural network 300 used to generate pictures may adopt a differentiable sampling method to facilitate end-to-end training.
- for the sampling method, see "Auto-Encoding Variational Bayes" by Diederik P. Kingma and Max Welling, ICLR 2014.
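The differentiable sampling referred to is the reparameterization trick of that paper; a minimal sketch:

```python
import numpy as np

def reparameterized_sample(mu, log_var, rng=None):
    """Differentiable ("reparameterized") sampling from N(mu, sigma^2).

    Instead of sampling z directly, draw eps ~ N(0, I) and compute
    z = mu + sigma * eps, so gradients can flow through mu and log_var,
    allowing end-to-end training of the picture-generation network.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps
```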
- the training of the deep neural network 300 for generating pictures may be similar to the training of the feature extraction neural network 200 described above, and will not be repeated here.
- networks such as cGAN, cVAE, MUNIT, and CycleGAN can also be used as networks for generating pictures.
- the mask-area part of the toothy face photo of the patient before orthodontic treatment can be input to the deep neural network 300 to generate the mask-area part of the patient's toothy face image after orthodontic treatment. Then, based on the pre-treatment toothy face photo and the generated mask-area part, the toothy face image of the patient after orthodontic treatment is synthesized.
- alternatively, the mask-area part of the first mouth region picture may be input to the deep neural network 300 to generate the mask area of the patient's post-treatment toothy face image. Then, based on the first mouth region picture and the generated mask-area part, the second mouth region picture is synthesized; and based on the pre-treatment toothy face photo and the second mouth region picture, the toothy face image of the patient after orthodontic treatment is synthesized.
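The synthesis step (pasting the generated mask-area pixels back into the original picture) can be sketched as:

```python
import numpy as np

def composite(original: np.ndarray, generated: np.ndarray,
              mask: np.ndarray) -> np.ndarray:
    """Paste generated mask-area pixels back into the original picture.

    Inside the mask (nonzero), take the network's output; outside, keep
    the original photo. With the mouth region picture as `original`, the
    result is the second mouth region picture.
    """
    return np.where(mask[..., None] > 0, generated, original)
```

Production systems would likely feather the mask boundary to hide seams; the application does not describe the blending step, so a hard paste is shown here.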
- FIG. 8 is the second mouth region picture in an embodiment of this application.
- the toothy face images of patients after orthodontic treatment produced by the method of the present application are very close to the actual outcome and have high reference value. Such images can effectively help patients build confidence in the treatment and promote communication between orthodontists and patients.
- the various diagrams may show exemplary architectures or other configurations of the disclosed methods and systems, which are helpful in understanding the features and functions that can be included in the disclosed methods and systems.
- the claimed content is not limited to the exemplary architecture or configuration shown, and the desired features can be implemented with various alternative architectures and configurations.
- the order of the blocks given here should not be construed as limiting the various embodiments to performing the functions in the same order, unless clearly indicated by the context.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Epidemiology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Primary Health Care (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Radiology & Medical Imaging (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Human Computer Interaction (AREA)
- Veterinary Medicine (AREA)
- Dentistry (AREA)
- Animal Behavior & Ethology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Biodiversity & Conservation Biology (AREA)
- Surgery (AREA)
- Urology & Nephrology (AREA)
- Physical Education & Sports Medicine (AREA)
Abstract
A method for generating images of dental orthodontic treatment effects using an artificial neural network, including: obtaining a toothy face photo of a patient before orthodontic treatment; using a trained feature-extraction deep neural network to extract a mouth region mask and a first set of tooth contour features from the toothy face photo of the patient before orthodontic treatment; obtaining a first three-dimensional digital model representing the patient's original tooth layout and a second three-dimensional digital model representing the patient's target tooth layout; obtaining a first pose of the first three-dimensional digital model based on the first set of tooth contour features and the first three-dimensional digital model; obtaining a second set of tooth contour features based on the second three-dimensional digital model in the first pose; and using a trained picture-generation deep neural network to generate a toothy face image of the patient after orthodontic treatment based on the pre-treatment toothy face photo, the mask, and the second set of tooth contour features.
Description
The present application generally relates to a method for generating images of dental orthodontic treatment effects using an artificial neural network.
Today, more and more people are coming to understand that orthodontic treatment is not only good for health but can also improve one's personal image. For patients unfamiliar with orthodontic treatment, showing them in advance how their teeth and face will look when treatment is complete can help them build confidence in the treatment and promote communication between the orthodontist and the patient.
At present there is no comparable imaging technique that can predict the effect of orthodontic treatment, and traditional techniques based on texture-mapped three-dimensional models often cannot deliver high-quality, realistic renderings. It is therefore necessary to provide a method for generating an image of a patient's appearance after orthodontic treatment.
Summary of the Invention
One aspect of the present application provides a method for generating images of dental orthodontic treatment effects using an artificial neural network, including: obtaining a toothy face photo of a patient before orthodontic treatment; using a trained feature-extraction deep neural network to extract a mouth region mask and a first set of tooth contour features from the toothy face photo of the patient before orthodontic treatment; obtaining a first three-dimensional digital model representing the patient's original tooth layout and a second three-dimensional digital model representing the patient's target tooth layout; obtaining a first pose of the first three-dimensional digital model based on the first set of tooth contour features and the first three-dimensional digital model; obtaining a second set of tooth contour features based on the second three-dimensional digital model in the first pose; and using a trained picture-generation deep neural network to generate a toothy face image of the patient after orthodontic treatment based on the pre-treatment toothy face photo, the mask, and the second set of tooth contour features.
In some embodiments, the picture-generation deep neural network may be a CVAE-GAN network.
In some embodiments, the sampling method adopted by the CVAE-GAN network may be a differentiable sampling method.
In some embodiments, the feature-extraction deep neural network may be a U-Net network.
In some embodiments, the first pose is obtained from the first set of tooth contour features and the first three-dimensional digital model using a nonlinear projection optimization method, and the second set of tooth contour features is obtained by projecting the second three-dimensional digital model in the first pose.
In some embodiments, the method for generating images of dental orthodontic treatment effects using an artificial neural network may further include: using a face key point matching algorithm to crop a first mouth region picture from the toothy face photo of the patient before orthodontic treatment, where the mouth region mask and the first set of tooth contour features are extracted from the first mouth region picture.
In some embodiments, the toothy face photo of the patient before orthodontic treatment may be a complete frontal face photo of the patient.
In some embodiments, the edge contour of the mask matches the inner edge contour of the lips in the toothy face photo of the patient before orthodontic treatment.
In some embodiments, the first set of tooth contour features includes the edge contour lines of the teeth visible in the toothy face photo of the patient before orthodontic treatment, and the second set of tooth contour features includes the edge contour lines of the teeth of the second three-dimensional digital model in the first pose.
In some embodiments, the tooth contour features may be a tooth edge feature map.
The above and other features of the present disclosure will become more fully apparent from the following description and the appended claims, taken in conjunction with the accompanying drawings. It should be understood that these drawings depict only several embodiments of the present disclosure and are therefore not to be considered limiting of its scope; through the use of the drawings, the present disclosure will be described with additional specificity and detail.
FIG. 1 is a schematic flowchart of a method for generating an image of a patient's appearance after orthodontic treatment using an artificial neural network in an embodiment of the present application;
FIG. 2 is a first mouth region picture in an embodiment of the present application;
FIG. 3 is a mask generated based on the first mouth region picture shown in FIG. 2 in an embodiment of the present application;
FIG. 4 is a first tooth edge feature map generated based on the first mouth region picture shown in FIG. 2 in an embodiment of the present application;
FIG. 5 is a structural diagram of a feature-extraction deep neural network in an embodiment of the present application;
FIG. 5A schematically shows the structure of a convolution layer of the feature-extraction deep neural network shown in FIG. 5 in an embodiment of the present application;
FIG. 5B schematically shows the structure of a deconvolution layer of the feature-extraction deep neural network shown in FIG. 5 in an embodiment of the present application;
FIG. 6 is a second tooth edge feature map in an embodiment of the present application;
FIG. 7 is a structural diagram of a deep neural network for generating pictures in an embodiment of the present application; and
FIG. 8 is a second mouth region picture in an embodiment of the present application.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It should be readily understood that the aspects of the present disclosure, as generally described herein and illustrated in the drawings, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and form part of this disclosure.
本申请的发明人经过大量的研究工作发现,随着深度学习技术的兴起,在一些领域,对抗生成网络技术已经能够生成以假乱真的图片。然而,在牙科正畸领域,还缺乏基于深度学习的生成图像的鲁棒技术。经过大量的设计和实验工作,本申请的发明人开发出了一种利用人工神经网络产生牙科正畸治疗后患者外观图像的方法。
Referring to FIG. 1, a schematic flowchart of a method 100 for generating an image of a patient's appearance after orthodontic treatment using an artificial neural network according to one embodiment of the present application is shown.
In 101, a photo of the patient's face with teeth exposed before orthodontic treatment is obtained.
Since people usually care about how they look when smiling with teeth exposed, in one embodiment the photo of the patient's face with teeth exposed before orthodontic treatment may be a complete frontal photo of the patient's face taken while smiling with teeth exposed; such a photo shows the difference before and after treatment quite clearly. In light of the present application, it can be understood that the photo may also show only part of the face, and may be taken from angles other than the front.
In 103, a facial landmark matching algorithm is used to crop a first mouth-region image from the photo of the patient's face with teeth exposed before orthodontic treatment.
Compared with a complete face photo, a mouth-region image contains fewer features; performing subsequent processing based on the mouth-region image alone simplifies computation, makes the artificial neural network easier to train, and makes it more robust.
For facial landmark matching algorithms, reference may be made to "Displaced Dynamic Expression Regression for Real-Time Facial Tracking and Animation" by Chen Cao, Qiming Hou, and Kun Zhou, ACM Transactions on Graphics (TOG) 33, 4 (2014), 43, and "One Millisecond Face Alignment with an Ensemble of Regression Trees" by Vahid Kazemi and Josephine Sullivan, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1867-1874, 2014.
In light of the present application, it can be understood that the extent of the mouth region may be defined freely. Referring to FIG. 2, a mouth-region image of a patient before orthodontic treatment according to one embodiment of the present application is shown. Although the mouth-region image of FIG. 2 includes part of the nose and part of the chin, the extent of the mouth region may, as noted above, be narrowed or enlarged according to specific needs.
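As an illustration of the cropping step, the sketch below computes a square mouth-region crop box from 68-point facial landmarks that are assumed to have already been obtained (for example from the Kazemi-Sullivan regression-tree detector cited above). The iBUG 68-point indexing and the margin ratio are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def mouth_crop_box(landmarks: np.ndarray, margin: float = 0.6):
    """Given 68-point face landmarks as (x, y) rows, return a square crop
    box around the mouth (points 48-67 in the iBUG 68-point convention),
    expanded by `margin` of the mouth size on each side."""
    mouth = landmarks[48:68]
    x0, y0 = mouth.min(axis=0)
    x1, y1 = mouth.max(axis=0)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half = max(x1 - x0, y1 - y0) * (0.5 + margin)  # square box with margin
    return int(cx - half), int(cy - half), int(cx + half), int(cy + half)

# Synthetic landmarks: mouth points spanning x in [40, 80], y in [100, 120].
pts = np.zeros((68, 2))
pts[48:68, 0] = np.linspace(40, 80, 20)
pts[48:68, 1] = np.linspace(100, 120, 20)
print(mouth_crop_box(pts))  # (16, 66, 104, 154)
```

The returned box can then be used to slice the photo array, yielding a mouth-region image like FIG. 2.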
In 105, a trained feature-extraction deep neural network is used to extract a mouth-region mask and a first set of tooth contour features from the first mouth-region image.
In one embodiment, the extent of the mouth-region mask may be bounded by the inner edges of the lips.
In one embodiment, the mask may be a black-and-white bitmap; through a mask operation, the parts of an image that are not to be displayed can be removed. Referring to FIG. 3, a mouth-region mask obtained from the mouth-region image of FIG. 2 according to one embodiment of the present application is shown.
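The mask operation described above can be sketched in a few lines of NumPy, assuming a single-channel 0/255 mask and an RGB image (the array sizes below are illustrative):

```python
import numpy as np

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the pixels where the binary mask is nonzero;
    everything outside the mask is set to black."""
    return image * (mask[..., None] > 0)

img = np.full((4, 4, 3), 200, dtype=np.uint8)   # dummy gray image
m = np.zeros((4, 4), dtype=np.uint8)
m[1:3, 1:3] = 255                                # inner 2x2 region kept
out = apply_mask(img, m)
print(out[0, 0], out[1, 1])  # [0 0 0] [200 200 200]
```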
The tooth contour features may include the contour line of each tooth visible in the image; they are two-dimensional features. In one embodiment, the tooth contour features may be a tooth contour feature map, which contains only the contour information of the teeth. In another embodiment, the tooth contour features may be a tooth edge feature map, which contains not only the contour information of the teeth but also edge features inside the teeth, for example the edge lines of spots on a tooth. Referring to FIG. 4, a tooth edge feature map obtained from the mouth-region image of FIG. 2 according to one embodiment of the present application is shown.
In one embodiment, the feature-extraction neural network may be a U-Net network. Referring to FIG. 5, the structure of a feature-extraction neural network 200 according to one embodiment of the present application is schematically shown.
The feature-extraction neural network 200 may include six convolution (downsampling) layers 201 and six deconvolution (upsampling) layers 203.
Referring to FIG. 5A, each convolution layer 2011 (down) may include a convolution layer 2013 (conv), a ReLU activation function 2015, and a max pooling layer 2017 (max pool).
Referring to FIG. 5B, each deconvolution layer 2031 (up) may include a sub-pixel convolution layer 2033 (sub-pixel), a convolution layer 2035 (conv), and a ReLU activation function 2037.
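A minimal PyTorch sketch of one downsampling stage (FIG. 5A) and one upsampling stage (FIG. 5B). The channel counts, kernel sizes, and the realization of the sub-pixel convolution as `PixelShuffle` are assumptions for illustration; the patent only specifies the layer ordering.

```python
import torch
import torch.nn as nn

class Down(nn.Module):
    """One downsampling stage: conv -> ReLU -> max pool (as in FIG. 5A)."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.block(x)

class Up(nn.Module):
    """One upsampling stage: sub-pixel conv -> conv -> ReLU (as in FIG. 5B)."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_in * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),  # sub-pixel upsampling, doubles resolution
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 3, 64, 64)
y = Down(3, 16)(x)   # -> (1, 16, 32, 32)
z = Up(16, 8)(y)     # -> (1, 8, 64, 64)
print(y.shape, z.shape)
```

A full U-Net would chain six of each and add skip connections between stages of matching resolution.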
In one embodiment, a training set for the feature-extraction neural network may be obtained as follows: obtain a number of photos of faces with teeth exposed; crop mouth-region images from these photos; and, using the Photoshop lasso annotation tool, generate a mouth-region mask and a tooth edge feature map for each of them. These mouth-region images, together with their corresponding mouth-region masks and tooth edge feature maps, may serve as the training set for the feature-extraction neural network.
In one embodiment, to improve the robustness of the feature-extraction neural network, the training set may also be augmented, including by Gaussian smoothing, rotation, and horizontal flipping.
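A SciPy sketch of the augmentation just described. One point worth making explicit: the geometric transforms (rotation, flip) must be applied identically to the photo, the mask, and the edge feature map, while Gaussian smoothing applies to the photo only. The angle, flip, and sigma values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def augment(image, mask, edges, angle=10.0, flip=True, sigma=1.0):
    """Augment one (photo, mask, edge map) training triple."""
    image = gaussian_filter(image, sigma=(sigma, sigma, 0))  # photo only
    # Same rotation for all three; nearest-neighbor keeps annotations binary.
    image = rotate(image, angle, reshape=False, order=1)
    mask = rotate(mask, angle, reshape=False, order=0)
    edges = rotate(edges, angle, reshape=False, order=0)
    if flip:
        image, mask, edges = (np.flip(a, axis=1) for a in (image, mask, edges))
    return image, mask, edges

img = np.random.rand(32, 32, 3)
msk = np.zeros((32, 32)); msk[8:24, 8:24] = 1
edg = np.zeros((32, 32)); edg[10, 10:20] = 1
a_img, a_msk, a_edg = augment(img, msk, edg)
print(a_img.shape, a_msk.shape, a_edg.shape)
```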
In 107, a first three-dimensional digital model representing the patient's original tooth layout is obtained.
The patient's original tooth layout is the tooth layout before orthodontic treatment.
In some embodiments, the three-dimensional digital model representing the patient's original tooth layout may be obtained by directly scanning the patient's dentition. In other embodiments, it may be obtained by scanning a physical model of the patient's dentition, for example a plaster model. In still other embodiments, it may be obtained by scanning an impression of the patient's dentition.
In 109, a projection optimization algorithm is used to compute a first pose of the first three-dimensional digital model that matches the first set of tooth contour features.
In one embodiment, the optimization objective of the nonlinear projection optimization algorithm may be expressed by Equation (1):
In one embodiment, the point correspondence between the first three-dimensional digital model and the first set of tooth contour features may be computed based on Equation (2):
where t_i and t_j denote the tangent vectors at the points p_i and p_j, respectively.
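Equations (1) and (2) are not reproduced in this text, so the sketch below is only a generic illustration of this kind of nonlinear projection optimization: fit a rigid pose by least squares so that the projected 3D contour points match the extracted 2D contour features. The pinhole projection model, the fixed point correspondences, and all parameter values are assumptions, not the patent's formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points3d, pose, focal=800.0):
    """Pinhole projection under a rigid pose: 3 rotation + 3 translation params."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    p = points3d @ R.T + pose[3:]
    return focal * p[:, :2] / p[:, 2:3]

def fit_pose(model_pts, image_pts, pose0):
    """Least-squares fit of the pose that makes the projected model contour
    match the 2D tooth contour features (correspondences assumed known)."""
    residual = lambda pose: (project(model_pts, pose) - image_pts).ravel()
    return least_squares(residual, pose0).x

# Synthetic check: generate observations from a known pose, then recover it.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (40, 3))
true_pose = np.array([0.1, -0.05, 0.02, 0.3, -0.2, 8.0])
obs = project(pts, true_pose)
est = fit_pose(pts, obs, np.array([0, 0, 0, 0, 0, 6.0]))
err = np.abs(project(pts, est) - obs).max()
print(round(float(err), 6))
```

In the patent's pipeline, `image_pts` would come from the first tooth edge feature map and `model_pts` from contour samples on the first three-dimensional digital model.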
In 111, a second three-dimensional digital model representing the patient's target tooth layout is obtained.
Methods of deriving a three-dimensional digital model representing the patient's target tooth layout from the model representing the original tooth layout are well known in the industry and are not described here.
In 113, the second three-dimensional digital model in the first pose is projected to obtain a second set of tooth contour features.
In one embodiment, the second set of tooth contour features includes the edge contour lines of all the teeth of the complete upper and lower dental arches in the target tooth layout, in the first pose.
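By way of illustration, projected 2D contour points can be rasterized into the kind of binary edge feature map shown in FIG. 6. The map size and the circular stand-in contour below are assumptions.

```python
import numpy as np

def contour_feature_map(points2d, size=(256, 256)):
    """Rasterize projected 2D contour points into a binary edge feature map
    (the 'second tooth edge feature map' fed to the image generator)."""
    fmap = np.zeros(size, dtype=np.uint8)
    xy = np.round(points2d).astype(int)
    keep = ((xy[:, 0] >= 0) & (xy[:, 0] < size[1])
            & (xy[:, 1] >= 0) & (xy[:, 1] < size[0]))
    fmap[xy[keep, 1], xy[keep, 0]] = 255  # note (row, col) = (y, x)
    return fmap

# A circular contour as a stand-in for one projected tooth outline.
t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([128 + 40 * np.cos(t), 128 + 40 * np.sin(t)], axis=1)
fm = contour_feature_map(circle)
print(fm.shape, int(fm.max()))  # (256, 256) 255
```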
Referring to FIG. 6, a second tooth edge feature map according to one embodiment of the present application is shown.
In 115, a trained image-generation deep neural network is used to generate an image of the patient's face with teeth exposed after orthodontic treatment, based on the photo of the patient's face with teeth exposed before orthodontic treatment, the mask, and the second set of tooth contour features.
In one embodiment, a CVAE-GAN network may be used as the image-generation deep neural network. Referring to FIG. 7, the structure of an image-generation deep neural network 300 according to one embodiment of the present application is schematically shown.
The image-generation deep neural network 300 includes a first sub-network 301 and a second sub-network 303, where part of the first sub-network 301 handles shape and the second sub-network 303 handles texture. Accordingly, the masked-region portion of the pre-treatment photo of the patient's face with teeth exposed, or of the first mouth-region image, may be fed into the second sub-network 303, so that the image-generation deep neural network 300 can produce texture for the masked region of the post-treatment image; the mask and the second tooth edge feature map are fed into the first sub-network 301, so that the network 300 can partition the masked region of the post-treatment image, i.e., determine which part is teeth, which part is gums, which part is interdental gap, which part is tongue (where the tongue is visible), and so on.
The first sub-network 301 includes six convolution (downsampling) layers 3011 and six deconvolution (upsampling) layers 3013. The second sub-network 303 includes six convolution (downsampling) layers 3031.
In one embodiment, the image-generation deep neural network 300 may use a differentiable sampling method to facilitate end-to-end training. For a similar sampling method, see "Auto-Encoding Variational Bayes" by Diederik Kingma and Max Welling, ICLR, 2013.
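The differentiable sampling referred to here is commonly realized with the reparameterization trick from the Kingma and Welling paper cited above: instead of sampling the latent code directly, sample noise and transform it deterministically, so gradients can flow through the mean and variance during end-to-end training. A NumPy sketch, with shapes and values chosen for illustration:

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    """Reparameterized sampling: z = mu + sigma * eps, eps ~ N(0, I).
    The randomness lives only in eps, so mu and log_var stay on the
    differentiable path of the network."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(4)
log_var = np.log(np.full(4, 0.25))  # sigma = 0.5
z = sample_latent(mu, log_var, rng)
print(z.shape)  # (4,)
```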
The image-generation deep neural network 300 may be trained in a manner similar to the training of the feature-extraction neural network 200 described above, which is not repeated here.
In light of the present application, it can be understood that, in addition to CVAE-GAN networks, networks such as cGAN, cVAE, MUNIT, and CycleGAN may also be used as the image-generation network.
In one embodiment, the masked-region portion of the pre-treatment photo of the patient's face with teeth exposed may be fed into the image-generation deep neural network 300 to generate the masked-region portion of the post-treatment image, and the post-treatment image of the patient's face with teeth exposed may then be composited from the pre-treatment photo and the generated masked-region portion.
In another embodiment, the masked-region portion of the first mouth-region image may be fed into the image-generation deep neural network 300 to generate the masked-region portion of the post-treatment image; a second mouth-region image may then be composited from the first mouth-region image and the generated masked-region portion, and the post-treatment image of the patient's face with teeth exposed composited from the pre-treatment photo and the second mouth-region image.
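The compositing described in these two embodiments amounts to a mask-guided paste: inside the mask take the generated pixels, outside keep the original photo. A minimal sketch, assuming 8-bit images and a 0/255 single-channel mask:

```python
import numpy as np

def composite(original, generated, mask):
    """Paste the generated mouth interior back into the original image."""
    m = (mask[..., None] > 0).astype(original.dtype)
    return original * (1 - m) + generated * m

orig = np.full((4, 4, 3), 50, dtype=np.uint8)    # stand-in original photo
gen = np.full((4, 4, 3), 220, dtype=np.uint8)    # stand-in generated region
msk = np.zeros((4, 4), dtype=np.uint8)
msk[1:3, 1:3] = 255                              # mouth interior
out = composite(orig, gen, msk)
print(out[0, 0], out[2, 2])  # [50 50 50] [220 220 220]
```

In practice, feathering the mask edge would hide the seam, but the hard paste above captures the structure of the step.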
Referring to FIG. 8, a second mouth-region image according to one embodiment of the present application is shown. The image of the patient's face with teeth exposed after orthodontic treatment produced by the method of the present application is very close to the actual outcome and thus has high reference value. Such an image can effectively help the patient build confidence in the treatment and facilitate communication between the orthodontist and the patient.
In light of the present application, it can be understood that, although a complete post-treatment face image lets the patient appreciate the treatment outcome well, it is not required; in some cases, a post-treatment mouth-region image is sufficient for the patient to understand the treatment outcome.
While various aspects and embodiments of the present application have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art in light of this application. The various aspects and embodiments disclosed herein are for purposes of illustration only and are not intended to be limiting; the scope and spirit of the present application are to be determined solely by the appended claims.
Likewise, the various diagrams may depict an example architecture or other configuration for the disclosed methods and systems, which is done to aid in understanding the features and functionality that can be included in the disclosed methods and systems. The claimed subject matter is not restricted to the illustrated example architectures or configurations; the desired features can be implemented using a variety of alternative architectures and configurations. Additionally, with regard to flow diagrams, functional descriptions, and method claims, the order of the blocks presented herein shall not be read to require that various embodiments perform the recited functionality in the same order, unless the context clearly dictates otherwise.
Unless otherwise expressly stated, the terms and phrases used herein, and variations thereof, are to be construed as open-ended rather than limiting. In some instances, the presence of broadening words and phrases such as "one or more," "at least," "but not limited to," or other like phrases shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
Claims (10)
- A method for generating an image of an orthodontic treatment outcome using an artificial neural network, comprising: obtaining a photo of a patient's face with teeth exposed before orthodontic treatment; extracting, using a trained feature-extraction deep neural network, a mouth-region mask and a first set of tooth contour features from the photo of the patient's face with teeth exposed before orthodontic treatment; obtaining a first three-dimensional digital model representing the patient's original tooth layout and a second three-dimensional digital model representing the patient's target tooth layout; obtaining a first pose of the first three-dimensional digital model based on the first set of tooth contour features and the first three-dimensional digital model; obtaining a second set of tooth contour features based on the second three-dimensional digital model in the first pose; and generating, using a trained image-generation deep neural network, an image of the patient's face with teeth exposed after orthodontic treatment, based on the photo of the patient's face with teeth exposed before orthodontic treatment, the mask, and the second set of tooth contour features.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 1, wherein the image-generation deep neural network is a CVAE-GAN network.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 2, wherein the sampling method employed by the CVAE-GAN network is a differentiable sampling method.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 1, wherein the feature-extraction deep neural network is a U-Net network.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 1, wherein the first pose is obtained from the first set of tooth contour features and the first three-dimensional digital model using a nonlinear projection optimization method, and the second set of tooth contour features is obtained by projection based on the second three-dimensional digital model in the first pose.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to any one of claims 1 to 5, further comprising: cropping, using a facial landmark matching algorithm, a first mouth-region image from the photo of the patient's face with teeth exposed before orthodontic treatment, wherein the mouth-region mask and the first set of tooth contour features are extracted from the first mouth-region image.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 6, wherein the photo of the patient's face with teeth exposed before orthodontic treatment is a complete frontal photo of the patient's face.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 6, wherein the edge contour of the mask coincides with the inner edge contour of the lips in the photo of the patient's face with teeth exposed before orthodontic treatment.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 8, wherein the first set of tooth contour features comprises the edge contour lines of the teeth visible in the photo of the patient's face with teeth exposed before orthodontic treatment, and the second set of tooth contour features comprises the edge contour lines of the teeth of the second three-dimensional digital model in the first pose.
- The method for generating an image of an orthodontic treatment outcome using an artificial neural network according to claim 9, wherein the tooth contour features are tooth edge feature maps.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/531,708 US20220084653A1 (en) | 2020-01-20 | 2021-11-19 | Method for generating image of orthodontic treatment outcome using artificial neural network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010064195.1A CN113223140A (zh) | 2020-01-20 | 2020-01-20 | Method for generating image of orthodontic treatment outcome using artificial neural network
CN202010064195.1 | 2020-01-20 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/531,708 Continuation-In-Part US20220084653A1 (en) | 2020-01-20 | 2021-11-19 | Method for generating image of orthodontic treatment outcome using artificial neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021147333A1 true WO2021147333A1 (zh) | 2021-07-29 |
Family
ID=76992788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/113789 WO2021147333A1 (zh) | Method for generating image of orthodontic treatment outcome using artificial neural network
Country Status (3)
Country | Link |
---|---|
US (1) | US20220084653A1 (zh) |
CN (1) | CN113223140A (zh) |
WO (1) | WO2021147333A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11842484B2 (en) * | 2021-01-04 | 2023-12-12 | James R. Glidewell Dental Ceramics, Inc. | Teeth segmentation using neural networks |
US11606512B2 (en) * | 2020-09-25 | 2023-03-14 | Disney Enterprises, Inc. | System and method for robust model-based camera tracking and image occlusion removal |
US20220222814A1 (en) * | 2021-01-14 | 2022-07-14 | Motahare Amiri Kamalabad | System and method for facial and dental photography, landmark detection and mouth design generation |
CN116563475B (zh) * | 2023-07-07 | 2023-10-17 | 南通大学 | 一种图像数据处理方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665533A (zh) * | 2018-05-09 | 2018-10-16 | 西安增材制造国家研究院有限公司 | Method for reconstructing a dentition from dental CT images and three-dimensional scan data
CN109528323A (zh) * | 2018-12-12 | 2019-03-29 | 上海牙典软件科技有限公司 | Artificial-intelligence-based orthodontic method and apparatus
CN109729169A (zh) * | 2019-01-08 | 2019-05-07 | 成都贝施美医疗科技股份有限公司 | AR-based intelligent tooth beautification assistance method using a C/S architecture
US20190350680A1 (en) * | 2018-05-21 | 2019-11-21 | Align Technology, Inc. | Photo realistic rendering of smile image after treatment |
- 2020-01-20: CN application CN202010064195.1A filed (published as CN113223140A, pending)
- 2020-09-07: PCT application PCT/CN2020/113789 filed (published as WO2021147333A1)
- 2021-11-19: US application US17/531,708 filed (published as US20220084653A1, pending)
Also Published As
Publication number | Publication date |
---|---|
CN113223140A (zh) | 2021-08-06 |
US20220084653A1 (en) | 2022-03-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20915778 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20915778 Country of ref document: EP Kind code of ref document: A1 |
Ref document number: 20915778 Country of ref document: EP Kind code of ref document: A1 |