WO2024071571A1 - Method for segmenting a three-dimensional oral model, and system therefor - Google Patents

Method for segmenting a three-dimensional oral model, and system therefor

Info

Publication number
WO2024071571A1
WO2024071571A1 (PCT/KR2023/008155)
Authority
WO
WIPO (PCT)
Prior art keywords
crown
data
segmentation
region
oral model
Prior art date
Application number
PCT/KR2023/008155
Other languages
English (en)
Korean (ko)
Inventor
최병선
김완
용태훈
안홍기
Original Assignee
오스템임플란트 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 오스템임플란트 주식회사
Publication of WO2024071571A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0082 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
    • A61B 5/0088 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0062 Arrangements for scanning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30036 Dental; Teeth

Definitions

  • the present disclosure relates to a method for segmenting a three-dimensional oral model. More specifically, it relates to a segmentation method and device that are applied to a 3D oral model and partition it into a 3D region for each tooth.
  • Digital dentistry refers to IT technology that helps treat patients who visit the dentist by digitizing and analyzing the patient's oral information.
  • digital dentistry can also be used in orthodontic treatment.
  • an improved tooth arrangement can be predicted reliably and conveniently compared to the current tooth condition.
  • teeth can be freely moved in all directions in a virtual space implemented as a three-dimensional space, allowing various orthodontic treatment plans to be established.
  • a 3D oral model in which gums or other anatomical structures and teeth are realized in 3D is provided to the surgeon, and the surgeon can select teeth to be treated using the 3D oral model. Additionally, the surgeon can edit the teeth to be treated in the 3D oral model.
  • the 3D area of each crown included in the 3D oral model must be distinguished from the gums or other anatomical structures. In other words, 3D segmentation of the tooth area for the 3D oral model is required.
  • the technical problem that the present disclosure aims to solve is to provide a method for automatically segmenting the crown area of a 3D oral model and a device to which the method is applied.
  • the technical problems of the present disclosure are not limited to the technical problems mentioned above, and other technical problems not mentioned can be clearly understood by those skilled in the art from the description below.
  • a segmentation method of a three-dimensional oral model according to an embodiment is performed by a computing system and may include: acquiring at least one set of 3D tooth scan data; outputting, by a first artificial neural network that receives the 3D tooth scan data, region separation data representing a region-separated 3D oral model in which the gum region and the crown region of the 3D oral model expressed by the 3D tooth scan data are separated; analyzing the region-separated 3D oral model to identify its occlusal surface and generating an occlusal crown image representing only the crown region of the occlusal surface; outputting, by a second artificial neural network that receives the occlusal crown image, center point data indicating the center point of the crown region of each tooth on the occlusal crown image; and outputting, by a third artificial neural network that receives the 3D tooth scan data and the center point data, segmentation data representing a 3D segmentation model in which the crown region of each tooth included in the 3D oral model is segmented tooth by tooth. A sketch of this pipeline follows below.
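For orientation, the following is a minimal Python sketch of how the three-stage pipeline summarized above could be orchestrated. Every name, shape, and callable here is an illustrative assumption, not the patent's API.

```python
# Hypothetical orchestration of the three-stage pipeline summarized above.
# All names and shapes are illustrative assumptions, not the patent's API.
from typing import Callable
import numpy as np

def segment_oral_model(
    scan_vertices: np.ndarray,                          # (N, 3) scanned vertices
    stage1_net: Callable[[np.ndarray], np.ndarray],     # gum/crown separation
    render_occlusal: Callable[[np.ndarray, np.ndarray], np.ndarray],
    stage2_net: Callable[[np.ndarray], np.ndarray],     # crown center points
    stage3_net: Callable[[np.ndarray, np.ndarray], np.ndarray],
) -> np.ndarray:
    """Return a per-vertex tooth label for a 3D oral scan."""
    region_labels = stage1_net(scan_vertices)           # (N,) 0 = gum, 1 = crown
    occlusal_image = render_occlusal(scan_vertices, region_labels)  # (H, W)
    center_points = stage2_net(occlusal_image)          # (T, 2) one per crown
    return stage3_net(scan_vertices, center_points)     # (N,) tooth labels
```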
  • the step of outputting the region separation data may include downsampling the 3D tooth scan data and inputting the downsampled 3D tooth scan data into the first artificial neural network.
  • the step of outputting the region separation data may include masking the gum area of the region separation 3D oral model to have the first attribute, or masking the crown area of the region separation 3D oral model to have the second attribute. It may include performing masking processing to have , adjusting the region separation data so that the region separation data reflects the results of the masking process, and outputting the adjusted region separation data.
  • the step of generating the occlusal crown image may include analyzing the region-separated 3D oral model according to the region separation data to identify the occlusal surface of the region-separated 3D oral model, positioning a virtual camera so that its field of view (FOV) is perpendicular to the occlusal surface, and generating the occlusal crown image using the image of the occlusal surface captured by the virtual camera.
  • the step of outputting the region separation data may include performing masking processing so that the gum region of the region-separated 3D oral model has a first attribute or the crown region of the region-separated 3D oral model has a second attribute, and adjusting the region separation data so that it reflects the result of the masking processing.
  • the step of generating the occlusal crown image using the image of the occlusal surface captured by the virtual camera may include rendering the gum region in the image of the occlusal surface transparent, using the result of the masking processing, to generate the occlusal crown image.
  • the step of outputting the center point data may include selecting the second artificial neural network from among a plurality of candidate artificial neural networks using the shape of the crown area of the occlusal crown image.
  • the step of outputting the center point data may include selecting the second artificial neural network from among a plurality of candidate artificial neural networks using the shape of the crown region in a pre-designated area of the occlusal crown image.
  • the step of outputting the center point data may include calculating the reliability of the center point data using the output data of the second artificial neural network and, when the reliability is below a reference value, outputting a user interface for manually inputting center points on the occlusal crown image.
  • the step of outputting the segmentation data may include downsampling the 3D tooth scan data and inputting the downsampled 3D tooth scan data together with the center point data into the third artificial neural network.
  • the step of outputting the segmentation data may include masking each crown region so that the crown regions of the 3D segmentation model according to the segmentation data are visually distinguished from one another, adjusting the segmentation data so that it reflects the result of the masking process, and outputting the adjusted segmentation data. At this time, the step of outputting the adjusted segmentation data may include identifying each crown position using the result of the masking process, labeling a dental formula number for each crown region based on the identified crown positions, and outputting labeling result data for the dental formula number of each crown region along with the adjusted segmentation data.
  • the segmentation method of the 3D oral model may further include generating an output screen for 3D orthodontic simulation using the segmentation data.
  • a segmentation method of a 3D oral model according to another embodiment includes acquiring at least one set of 3D tooth scan data, and outputting, by a first artificial neural network that receives the 3D tooth scan data, region separation data representing a region-separated 3D oral model.
  • a segmentation device for a three-dimensional oral model according to another embodiment for solving the above technical problem may include a memory into which at least one set of 3D tooth scan data, a first artificial neural network, a second artificial neural network, a third artificial neural network, and a 3D oral model segmentation program are loaded, and a processor that executes the 3D oral model segmentation program.
  • the 3D oral model segmentation program may include: an instruction for the first artificial neural network that receives the 3D tooth scan data to output region separation data representing a region-separated 3D oral model in which the gum region and the crown region of the 3D oral model are separated; an instruction for analyzing the region-separated 3D oral model to identify the occlusal surface of the 3D oral model and generating an occlusal crown image representing only the crown region of the occlusal surface; an instruction for the second artificial neural network that receives the occlusal crown image to output center point data indicating the center point of the crown region of each tooth on the occlusal crown image; and an instruction for the third artificial neural network that receives the 3D tooth scan data and the center point data to output segmentation data representing a 3D segmentation model in which the crown region of each tooth included in the 3D oral model is segmented tooth by tooth.
  • the 3D oral model segmentation program may further include an instruction for generating an output screen for 3D orthodontic simulation using the output segmentation data.
  • segmentation results for each tooth region of a 3D oral model can be provided in a highly automated manner.
  • the hyperparameter complexity of the artificial neural networks is prevented from increasing excessively, so the cost of meeting the machine learning requirements of the artificial neural networks can be reduced.
  • segmentation errors can be minimized because each step receives data that has already undergone the preprocessing required for that step.
  • FIG. 1 illustrates an example environment in which a segmentation system for a three-dimensional oral model can be applied, according to an embodiment of the present disclosure.
  • Figure 2 is a flowchart of a segmentation method of a 3D oral cavity model according to an embodiment of the present disclosure.
  • Figure 3 is a flowchart of a segmentation method of a 3D oral cavity model according to another embodiment of the present disclosure.
  • FIG. 4 is a diagram illustrating down-sampling of 3D scan data performed in some embodiments of the present disclosure.
  • FIG. 5 is a diagram for explaining the operation of a first artificial neural network performed in some embodiments of the present disclosure.
  • FIGS. 6 and 7 are diagrams for explaining an occlusal crown image generation operation performed in some embodiments of the present disclosure.
  • FIGS. 8 to 9 are diagrams for explaining a center point data generation operation performed in some embodiments of the present disclosure.
  • FIGS. 10 and 11 are flowcharts for explaining the operation of a third artificial neural network performed in some embodiments.
  • FIG. 12 is a conceptual diagram of the segmentation method of a 3D oral model described with reference to FIG. 2.
  • Figure 13 is a hardware configuration diagram of an oral model segmentation device according to another embodiment of the present disclosure.
  • a segmentation method of a 3D oral model according to an embodiment of the present disclosure will be described with reference to FIG. 1.
  • the oral model segmentation system 100 can perform segmentation of the oral cavity model through interaction with the user device 220.
  • the oral model segmentation system 100 may acquire 3D tooth scan data stored from the scan data storage device 230, analyze the oral model, and output segmentation data using the analyzed results.
  • the 3D tooth scan data may be scan data captured by a 3D scanner.
  • the segmentation function for the 3D oral model may be divided by sub-module and performed across a plurality of cloud computing nodes.
  • in the following description, the subject performing some operations is a computing device.
  • this computing device will be referred to as the ‘oral model segmentation device.’
  • Figure 2 is a flowchart of a 3D oral model segmentation method according to an embodiment of the present disclosure.
  • the oral model segmentation system 100 can acquire 3D tooth scan data (S110).
  • the 3D tooth scan data may be received from a 3D scanner connected to the oral model segmentation system 100.
  • the 3D scanner may be an oral scanner that photographs the patient's oral cavity.
  • the 3D tooth scan data may be understood as data for rendering a 3D oral model.
  • the 3D tooth scan data may be input to a first machine-learned artificial neural network.
  • the first artificial neural network is machine-learned to distinguish between the gum area and the crown area of the 3D oral model.
  • the first artificial neural network may be trained via supervised learning using training data in which the crown region is masked on a 3D oral model.
  • since the 3D tooth scan data expresses a full 3D model, its data size will be significant. In that case, the first artificial neural network that receives the 3D tooth scan data must have high structural complexity; for example, a first artificial neural network that receives 3D tooth scan data with a large data size will have a large number of layers and a large number of nodes in each layer.
  • such a first artificial neural network will have high complexity in terms of hyperparameters, and machine-learning it to a level of performance that can be commercialized would require a very large amount of training data. Considering that it costs a considerable amount of money to prepare training data with the crown region masked on a 3D oral model, and that machine learning over a large body of training data also incurs a significant computing load, it is undesirable, in terms of economic efficiency, for the first artificial neural network to have a high level of structural complexity. A minimal sketch of the supervised setup follows below.
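As a concrete illustration of the supervised setup described above, here is a minimal training-step sketch. The per-vertex MLP architecture and the use of PyTorch are assumptions; the patent does not specify either.

```python
# A minimal supervised-training sketch for the gum/crown separation network
# (assumed per-vertex binary classification; architecture is illustrative).
import torch
import torch.nn as nn

class Stage1Net(nn.Module):
    """Toy per-vertex classifier: shared MLP over (x, y, z) coordinates."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),            # logits: gum vs. crown
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(xyz)             # (N_vertices, 2)

net = Stage1Net()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on dummy data standing in for a downsampled scan whose
# crown region has been masked (labeled) in the training set.
xyz = torch.randn(2048, 3)               # downsampled scan vertices
labels = torch.randint(0, 2, (2048,))    # 0 = gum, 1 = crown
optimizer.zero_grad()
loss = loss_fn(net(xyz), labels)
loss.backward()
optimizer.step()
```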
  • the 3D tooth scan data may be down-sampled and then input to the first artificial neural network.
  • FIG. 4 shows an exemplary first 3D oral cavity model 310 represented by original 3D scan data and an exemplary second 3D oral cavity model 320 represented by downsampled 3D scan data.
  • it will be understood that the second 3D oral model 320 expresses the tooth scan result relatively simply, with curved lines replaced by straight lines and curved surfaces by flat surfaces.
  • the degree of downsampling may be determined to correspond to the structural complexity of the first artificial neural network, so that the size of the 3D tooth scan data is reduced accordingly.
  • in some embodiments, a plurality of first artificial neural networks having different structural complexities are machine-learned and stored, and the 3D tooth scan data is downsampled to a level corresponding to the structural complexity of the first artificial neural network selected from among the plurality of first artificial neural networks.
  • because the above-described downsampling reduces the data size of the 3D tooth scan data, the amount of computation involving the first artificial neural network is reduced, and as a result the time required for that computation may also be reduced. This has the effect of reducing the overall time required for 3D oral model segmentation. A sketch of the downsampling step follows below.
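The following sketch shows what this downsampling step could look like with an off-the-shelf mesh decimation routine; the choice of Open3D and quadric decimation is an assumption, since the patent does not name a library or algorithm.

```python
# A sketch of the downsampling step using Open3D's quadric decimation
# (Open3D and the triangle budgets are assumptions, not the patent's choice).
import open3d as o3d

def downsample_scan(path: str, target_triangles: int) -> o3d.geometry.TriangleMesh:
    """Reduce mesh resolution to match the selected network's complexity."""
    mesh = o3d.io.read_triangle_mesh(path)
    # As the triangle budget shrinks, curved lines become straighter and
    # curved surfaces flatter, as illustrated by model 320 vs. model 310.
    return mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_triangles)

# E.g., a lighter first network could pair with a smaller triangle budget:
# small_net_input = downsample_scan("scan.stl", 5_000)
# large_net_input = downsample_scan("scan.stl", 50_000)
```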
  • the oral model segmentation device may input the 3D tooth scan data into the first artificial neural network and obtain region separation data output from the first artificial neural network (S120).
  • the region separation data may be understood as representing a region separation 3D oral model in which the gum region and crown region of the 3D oral model expressed by the 3D tooth scan data are separated.
  • the oral model segmentation device may input downsampled 3D tooth scan data into the first artificial neural network.
  • the first artificial neural network 330 may receive downsampled 3D tooth scan data and output the region separation data.
  • FIG. 5 shows the results of rendering downsampled 3D dental scan data as an exemplary 3D oral cavity model 320 and the results of rendering the region separation data as an exemplary 3D oral cavity model 340.
  • the 3D oral cavity model 340 in which the crown and gums are separated will be referred to as the ‘region-separated 3D oral cavity model’.
  • the region-separated 3D oral model may be one in which the gum region is masked to have the first attribute.
  • the region-separated 3D oral model may be masked so that the gum region has a first color, a first pattern, or a first transparency.
  • the first color, the first pattern, and the first transparency may be colors, patterns, and transparency that are difficult for the original three-dimensional oral model to have.
  • the region-separated 3D oral model may be one in which the crown region is masked to have a second attribute.
  • the region-separated 3D oral model may be masked so that the crown region has a second color, a second pattern, or a second transparency.
  • the second color, the second pattern, and the second transparency may also be colors, patterns, and transparency that are difficult for the original three-dimensional oral model to have.
  • the masking process for the region-separated three-dimensional oral model may be a visual masking process.
  • the second and third artificial neural networks, which will be described later, are CNN (Convolutional Neural Network)-based artificial neural networks for processing visual information.
  • the region-separated three-dimensional oral cavity model 340 of FIG. 5 may be understood as having its crown region masked by being colored with a second color, as in the sketch below.
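To make the visual masking concrete, here is a small sketch that paints the two regions with attribute colors unlikely to occur in a real scan; the specific colors and the per-vertex RGB representation are illustrative assumptions.

```python
# A sketch of visual masking: paint gum and crown regions with first/second
# attribute colors that are unlikely to occur in a real oral scan. The color
# choices and per-vertex RGB representation are illustrative assumptions.
import numpy as np

def mask_regions(vertex_colors: np.ndarray, region_labels: np.ndarray) -> np.ndarray:
    """Overwrite per-vertex RGB colors according to gum/crown labels."""
    masked = vertex_colors.copy()
    masked[region_labels == 0] = [0.0, 1.0, 1.0]   # gum   -> first color (cyan)
    masked[region_labels == 1] = [1.0, 0.0, 1.0]   # crown -> second color (magenta)
    return masked
```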
  • the oral model segmentation device analyzes the region-separated three-dimensional oral model according to the region separation data, uses the results of the analysis to identify the occlusal surface of the region-separated three-dimensional oral model, and can generate an occlusal crown image expressing only the crown area of the occlusal surface (S130).
  • the oral model segmentation device can generate the occlusal crown image, a two-dimensional image of the crowns viewed with an image plane parallel to the occlusal surface (i.e., a line of sight perpendicular to it), by executing logic based on a 3D model viewer.
  • the operation of the oral model segmentation device performing S130 will be described in more detail with reference to FIGS. 6 and 7.
  • the oral model segmentation device analyzes the region-separated three-dimensional oral model 350 viewed from the facial side in the lingual direction, and can confirm the occlusal line 351 between the maxillary and mandibular crowns and the center line 352 of the crowns.
  • the occlusal line of the crown may be a line extending in the left and right directions from the point where the upper and lower front teeth meet.
  • the center line of the crown may be a line extending upward and downward from the point where the upper and lower front teeth meet.
  • the oral model segmentation device analyzes the region-separated three-dimensional oral model 360 viewed from the right posterior tooth in the lingual direction, and can further confirm the occlusal line 361 of the maxillary and mandibular crowns and the center line 362 of the crowns.
  • the oral model segmentation device analyzes the region-separated three-dimensional oral model 370 viewed from the left posterior tooth in the lingual direction, and can further confirm the occlusal line 371 of the maxillary and mandibular crowns and the center line 372 of the crowns.
  • the oral model segmentation device can uniquely specify the occlusal surface of the region-separated three-dimensional oral model 340 using the analysis results of the region-separated three-dimensional oral model 350, 360, 370 viewed from the three directions.
  • Figure 6 shows an occlusal crown image 510 representing only the crown area of the occlusal surface specified in this way.
  • the oral model segmentation device aligns the line of sight of the virtual camera 530 with the vertical axis passing through the center of the occlusal surface image 520 specified by the above-described method, and positions the virtual camera 530 so that no part of the occlusal surface image is cut off and the image fits a predefined size. The virtual camera 530 can then capture the occlusal crown image 510, which is a snapshot image of the occlusal surface (a geometric sketch of this placement follows below).
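The camera placement can be sketched as follows; fitting the occlusal plane by least variance (SVD) and the framing margin are assumptions standing in for the patent's three-view analysis.

```python
# A sketch of positioning the virtual camera: its line of sight is aligned
# with the axis perpendicular to the occlusal plane through its center, and
# it is backed off far enough that no crown is cropped. The plane-fitting
# and framing heuristics are assumptions, not the patent's procedure.
import numpy as np

def occlusal_camera(crown_vertices: np.ndarray, margin: float = 1.2):
    """Return (eye, forward, up) for a camera facing the occlusal plane."""
    center = crown_vertices.mean(axis=0)
    # Fit the occlusal plane: its normal is the direction of least variance.
    _, _, vt = np.linalg.svd(crown_vertices - center)
    normal = vt[2]                       # unit normal of the fitted plane
    # (The sign of `normal` may need flipping to point away from the jaw.)
    radius = np.linalg.norm(crown_vertices - center, axis=1).max()
    eye = center + normal * radius * margin   # far enough to avoid cropping
    up = vt[0]                           # any in-plane axis works as "up"
    return eye, -normal, up
```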
  • the captured occlusal crown image 510 is input to the second artificial neural network 540.
  • the oral model segmentation device identifies the center point of each crown region on the occlusal crown image 510 using the second artificial neural network, which selects as a center point the center point of the previously learned crown image with the most similar individual characteristics (S140).
  • similar to the first artificial neural network, the second artificial neural network may also be trained via supervised learning, using training data of occlusal crown images in which the center point of each crown region is labeled.
  • the second artificial neural network can be machine-learned in groups based on the arrangement and shape of the crowns. For example, there may be a first-type second artificial neural network machine-learned using a first group of training data having a first type of crown arrangement and shape, a second-type second artificial neural network machine-learned using a second group of training data having a second type of crown arrangement and shape, and a third-type second artificial neural network machine-learned using a third group of training data having a third type of crown arrangement and shape.
  • the oral model segmentation device may be able to select one second artificial neural network among the first to third types of second artificial neural networks using the shape of the crown area of the occlusal crown image.
  • the oral model segmentation device may select the second artificial neural network from among the first- to third-type second artificial neural networks using the shape of the crown region in a pre-designated area of the occlusal crown image.
  • the second artificial neural network 540 receives the occlusal crown image 510 and can output center point data 550 indicating the center point of the crown region of each tooth on the occlusal crown image. FIG. 9 shows the result 560 in which each crown center point 560a, 560b, 560c, 560d according to the center point data 550 is displayed on the occlusal crown image 510 (one possible decoding of such output is sketched below).
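A sketch of decoding crown center points, assuming the second network's output can be treated as a keypoint heatmap; the patent only says it outputs center point data, so this representation is an assumption.

```python
# A sketch of decoding crown center points from a keypoint heatmap.
# Treating the network output as a heatmap is an assumption.
import numpy as np
from scipy.ndimage import maximum_filter

def decode_centers(heatmap: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return (row, col) coordinates of local maxima above threshold."""
    peaks = (heatmap == maximum_filter(heatmap, size=11)) & (heatmap > threshold)
    return np.argwhere(peaks)            # (T, 2), one row per detected crown
```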
  • the oral model segmentation device calculates the reliability of the center point data using the output data of the second artificial neural network, and when the reliability is below a reference value, it may output a user interface for manually inputting center points on the occlusal crown image.
  • the reliability may be calculated to be lower as the center point data deviates, by more than a certain level, from the reference center point data learned by the artificial neural network.
  • by requesting the user to manually point out the center points in this case, the oral model segmentation device makes it possible for the 3D segmentation to be completed accurately through the subsequent operations, as sketched below.
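A minimal sketch of this reliability gate, assuming the peak scores of the network's output can serve as a confidence proxy (the averaging heuristic is an assumption):

```python
# A sketch of the reliability gate: use the network's peak scores as a
# confidence proxy and fall back to manual input below a reference value.
import numpy as np

def centers_or_manual(heatmap: np.ndarray, centers: np.ndarray,
                      reference: float = 0.7):
    scores = heatmap[tuple(centers.T)]   # peak value at each detected center
    if scores.mean() < reference:
        return None                      # caller opens the manual-input UI
    return centers
```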
  • the oral model segmentation device inputs the 3D tooth scan data into the machine-learned third artificial neural network (S160) together with the center point data, and as a result, the third artificial neural network can output segmentation data (S150).
  • the segmentation data may be understood as representing a 3D segmentation model in which the crown area of each tooth included in the 3D oral model is segmented for each tooth.
  • similar to the first and second artificial neural networks, the third artificial neural network may also be trained via supervised learning, using training data in which the 3D region of each tooth crown is distinguished on the 3D oral model and the center point on the occlusal crown image is labeled.
  • the oral model segmentation device uses the third artificial neural network to select, as the crown state of the acquired 3D tooth scan data, the crown state of the previously learned crown image with the most similar individual characteristics, and can generate and output segmentation data by additionally inputting the center point data output by the second artificial neural network (S150).
  • FIG. 10 shows the third artificial neural network 910 receiving the downsampled 3D tooth scan data 320 and the center point data 560 and outputting segmentation data representing a 3D segmentation model 920 in which the crown regions 920a to 920d of each tooth are segmented.
  • the oral model segmentation device masks each crown region of the three-dimensional segmentation model 920 so that the crown regions are visually distinguished from one another, adjusts the segmentation data so that it reflects the result of the masking process, and can output the adjusted segmentation data.
  • Figure 9 shows a three-dimensional segmentation model 920 of segmentation data adjusted as described above.
  • the oral model segmentation device uses the result of the masking process to identify each crown position, labels a dental formula number (tooth number) for each crown region based on the identified crown positions, and may output labeling result data for the dental formula number of each crown region together with the adjusted segmentation data. A position-based numbering sketch follows below.
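As an illustration of position-based tooth numbering, the sketch below orders detected crown centers around the arch and maps ordinal positions to FDI numbers; the angle-sort heuristic and quadrant mapping are assumptions, not the patent's rule.

```python
# A sketch of dental-formula (FDI) labeling from crown positions. The
# angle-sort heuristic and quadrant mapping are illustrative assumptions.
import numpy as np

def label_fdi(crown_centers: np.ndarray, upper_jaw: bool = True) -> np.ndarray:
    """Assign FDI tooth numbers to crowns given (x, y) occlusal positions."""
    centroid = crown_centers.mean(axis=0)
    d = crown_centers - centroid
    angles = np.arctan2(d[:, 1], d[:, 0])
    order = np.argsort(angles)           # sweep around the dental arch
    left, right = (2, 1) if upper_jaw else (3, 4)
    half = len(order) // 2
    fdi = np.empty(len(order), dtype=int)
    # First half of the sweep: one quadrant, counting inward to the midline.
    for i, idx in enumerate(order[:half]):
        fdi[idx] = right * 10 + (half - i)
    # Second half: the opposite quadrant, counting outward from the midline.
    for i, idx in enumerate(order[half:]):
        fdi[idx] = left * 10 + (i + 1)
    return fdi
```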
  • the segmentation method of the 3D oral model according to this embodiment has been described above. As described, no user manipulation is required to obtain a fully segmented 3D oral model from 3D scan data. Meanwhile, in the segmentation method of a 3D oral model according to another embodiment of the present disclosure, as shown in FIG. 11, a fourth artificial neural network 930 may receive the 3D oral model 340 in which the crown and gums are separated, together with the center point data 560, and may output segmentation data representing a 3D segmentation model 940 in which the crown regions 940a to 940d of each tooth are segmented.
  • the fourth artificial neural network may also be trained via supervised learning, using training data in which the 3D region of each tooth crown is distinguished on a 3D oral model in which the crown and gums are separated, and the center point on the occlusal crown image is labeled.
  • the fourth artificial neural network receives region separation data as a result of step S120 (S170). That is, the fourth artificial neural network receives the region separation data and the center point data and outputs the segmentation data.
  • instead of the 3D tooth scan data, which can be considered the original data, the fourth artificial neural network receives region separation data in which the crown region and gum region have already been distinguished by the first artificial neural network as a kind of preprocessing.
  • the region separation data may be understood to contain more information than the 3D tooth scan data, because information distinguishing the crown region from the gum region has been added. The fourth artificial neural network therefore receives more information than the third artificial neural network, which may be advantageous for segmentation accuracy; the input sketch below illustrates the difference.
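The difference between the two embodiments' stage-3 inputs can be sketched as a feature-layout choice; the exact encoding is an assumption, not the patent's data format.

```python
# A sketch contrasting the stage-3 inputs of the two embodiments: the third
# network consumes raw (downsampled) coordinates plus center points, while
# the fourth consumes coordinates augmented with the stage-1 gum/crown label
# as an extra per-vertex feature channel. The layout is an assumption.
import numpy as np

def third_net_input(xyz: np.ndarray, centers: np.ndarray):
    return xyz, centers                                   # (N, 3) and (T, 2)

def fourth_net_input(xyz: np.ndarray, region_labels: np.ndarray,
                     centers: np.ndarray):
    feats = np.concatenate([xyz, region_labels[:, None]], axis=1)  # (N, 4)
    return feats, centers
```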
  • the downsampled 3D tooth scan data 320 is input to the first artificial neural network 330, and the first artificial neural network 330 outputs region separation data representing the 3D oral model 340 with the crown and gums separated.
  • an occlusal crown image 520 is created using a 3D oral model 340 in which the crown and gums are separated.
  • the occlusal crown image 520 is input to the second artificial neural network 540, and the second artificial neural network 540 outputs center point data indicating the center point 560 of each crown region.
  • the 3D tooth scan data 320 and the center point data indicating the center point 560 of each tooth crown region are input to the third artificial neural network 910, and the third artificial neural network 910 outputs segmentation data representing a 3D segmentation model 920 in which the crown region of each tooth is segmented.
  • the downsampled 3D tooth scan data 320 is input to the first artificial neural network 330, and the first artificial neural network 330 outputs region separation data representing the 3D oral model 340 with the crown and gums separated.
  • an occlusal crown image 520 is created using a 3D oral model 340 in which the crown and gums are separated.
  • the occlusal crown image 520 is input to the second artificial neural network 540, and the second artificial neural network 540 outputs center point data indicating the center point 560 of each crown region.
  • region separation data representing the three-dimensional oral model 340 in which the crown and gums are separated, and center point data indicating the center point 560 of each crown region, are input to the fourth artificial neural network 930, and the fourth artificial neural network 930 outputs segmentation data representing a 3D segmentation model 940 in which the crown region of each tooth is segmented.
  • segmentation data obtained according to the embodiments described so far with reference to FIGS. 1 to 13 can be used to generate an output screen for 3D orthodontic simulation.
  • the segmentation data may be used in the process of displaying the results of each tooth placement movement for a 3D orthodontic simulation.
  • the oral model segmentation system 100 of this embodiment includes one or more processors 1100, a system bus 1600, a communication interface 1200, and a computer program 1500 performed by the processor 1100.
  • the oral model segmentation system 100 of this embodiment may include a memory 1400 for loading a computer program 1500 and a storage 1300 for storing a computer program 1500.
  • the processor 1100 controls the overall operation of each component of the oral model segmentation system 100.
  • the processor 1100 may perform operations on at least one application or program to execute methods/operations according to various embodiments of the present disclosure.
  • the memory 1400 stores various data, commands and/or information.
  • the memory 1400 may load one or more computer programs 1500 from the storage 1300 to execute methods/operations according to various embodiments of the present disclosure.
  • the bus 1600 provides communication between components of the oral model segmentation system 100.
  • the communication interface 1200 supports Internet communication of the 3D oral model segmentation system 100. Additionally, the communication interface 1200 may be connected to a 3D scanner device (not shown).
  • Storage 1300 may non-temporarily store one or more computer programs 1500.
  • the computer program 1500 may include one or more instructions implementing methods/operations according to various embodiments of the present disclosure.
  • the processor 1100 can perform methods/operations according to various embodiments of the present disclosure by executing the one or more instructions.
  • the memory 1400 can be loaded with the patient's 3D tooth scan data, data defining the first artificial neural network, data defining the second artificial neural network, data defining the third artificial neural network, and the computer program 1500 that performs 3D oral model segmentation.
  • the computer program 1500 may include instructions for performing one or more operations in which methods/operations according to various embodiments of the present disclosure are implemented.
  • the computer program 1500 may include an instruction for the first artificial neural network that receives the 3D tooth scan data to separate the gum region and crown region of the 3D oral model represented by the 3D tooth scan data and output region separation data, an instruction for generating an occlusal crown image, an instruction for the second artificial neural network that receives the occlusal crown image to output center point data, and an instruction for the third artificial neural network that receives the 3D tooth scan data and the center point data to output segmentation data representing a 3D segmentation model in which the crown region of each tooth included in the 3D oral model is segmented tooth by tooth.
  • the technical ideas of the present disclosure described so far can be implemented as computer-readable code on a computer-readable medium.
  • the computer program recorded on the computer-readable recording medium can be transmitted to another computing device through a network such as the Internet, installed on the other computing device, and thus used on the other computing device.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

Disclosed are a method for segmenting a three-dimensional oral model and an apparatus to which the method is applied. The method for segmenting a three-dimensional oral model, according to one embodiment, may comprise steps in which: a first artificial neural network having received three-dimensional tooth scan data outputs region separation data representing a region-separated three-dimensional oral model in which a gum region and a crown region of a three-dimensional oral model represented by the three-dimensional tooth scan data are identified; the region-separated three-dimensional oral model according to the region separation data is analyzed to identify an occlusal surface of the region-separated three-dimensional oral model, and an occlusal crown image representing only the crown region of the occlusal surface is generated; a second artificial neural network having received the occlusal crown image outputs center point data indicating the center point of a crown region of each tooth in the occlusal crown image; and a third artificial neural network having received the three-dimensional tooth scan data and the center point data outputs segmentation data representing a three-dimensional segmentation model in which the crown region of each tooth included in the three-dimensional oral model is segmented per tooth.
PCT/KR2023/008155 2022-09-27 2023-06-14 Method for segmenting a three-dimensional oral model, and system therefor WO2024071571A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220122544A KR20240043443A (ko) 2022-09-27 2022-09-27 3차원 구강 모델의 세그멘테이션 방법 및 그 시스템
KR10-2022-0122544 2022-09-27

Publications (1)

Publication Number Publication Date
WO2024071571A1 (fr)

Family

ID=90478169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/008155 WO2024071571A1 (fr) Method for segmenting a three-dimensional oral model, and system therefor

Country Status (2)

Country Link
KR (1) KR20240043443A (fr)
WO (1) WO2024071571A1 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102380166B1 (ko) 2020-04-21 2022-03-29 서울대학교산학협력단 치주염 자동 판단 방법 및 이를 구현하는 프로그램

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160140326A (ko) * 2015-05-27 2016-12-07 주식회사 디오코 치아 교정 시뮬레이션 장치에서의 치아 자동 교정 방법, 그 방법이 적용된 치아 교정 시뮬레이션 장치, 및 이를 저장하는 컴퓨터로 판독 가능한 기록 매체
KR20190020756A (ko) * 2016-06-21 2019-03-04 노벨 바이오케어 서비시스 아게 치아 수복물의 형상, 위치 및 배향 중 적어도 하나를 추정하는 방법
KR20220069655A (ko) * 2020-11-20 2022-05-27 주식회사 쓰리디산업영상 Ct 영상에서의 치아 분할 시스템 및 방법
US20220215531A1 (en) * 2021-01-04 2022-07-07 James R. Glidewell Dental Ceramics, Inc. Teeth segmentation using neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yu-Bing Chang; J. J. Xia; J. Gateno; Zixiang Xiong; Xiaobo Zhou; S. T. C. Wong: "An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion", IEEE Transactions on Medical Imaging, vol. 29, no. 9, 1 September 2010, pp. 1652-1663, XP011311066, ISSN: 0278-0062 *

Also Published As

Publication number Publication date
KR20240043443A (ko) 2024-04-03

Similar Documents

Publication Publication Date Title
US20220218449A1 (en) Dental cad automation using deep learning
WO2022164126A1 (fr) Dispositif et procédé de mise en correspondance automatique de données de balayage oral et d'image de tomodensitométrie au moyen d'une segmentation en couronne de données de balayage oral
WO2021157966A1 (fr) Procédé de fourniture d'informations concernant l'orthodontie à l'aide d'un algorithme d'intelligence artificielle d'apprentissage profond, et dispositif l'utilisant
WO2020184875A1 (fr) Procédé de sélection de nombre de dents utilisant une image panoramique et dispositif de traitement d'image médicale associé
WO2014123395A1 (fr) Affichage d'image pour afficher une image 3d et des images en coupe
WO2022108082A1 (fr) Système et procédé de segmentation de dents dans une image de tomodensitométrie
WO2021210723A1 (fr) Procédé et appareil de détection automatique de points caractéristiques de données d'image médicale tridimensionnelle par apprentissage profond
WO2022177095A1 (fr) Procédé et application à base d'intelligence artificielle pour la fabrication d'une prothèse 3d pour la restauration dentaire
WO2021006472A1 (fr) Procédé d'affichage de densité osseuse multiple pour établir un plan de procédure d'implant et dispositif de traitement d'image associé
WO2021006471A1 (fr) Procédé de planification de chirurgie implantaire par mise en place automatique d'une structure d'implant, procédé de fourniture d'interface utilisateur associé, et dispositif de traitement d'image dentaire associé
WO2017039220A1 (fr) Procédé de traitement d'image pour plan orthodontique, dispositif et support d'enregistrement associés
WO2020226473A1 (fr) Procédé de fourniture d'informations dentaires supplémentaires et appareil associé
WO2021025296A1 (fr) Procédé de recommandation automatique d'un modèle de couronne et appareil de cao de prothèse pour sa mise en œuvre
WO2024046400A1 (fr) Procédé et appareil de génération de modèle de dent, dispositif électronique et support de stockage
WO2022124462A1 (fr) Procédé pour détecter automatiquement un point d'intérêt dans des données de tomographie dentaire tridmensionnelles et support d'enregistrement lisible par ordinateur sur lequel est enregistré un programme pour exécuter celui-ci sur un ordinateur
WO2021034138A1 (fr) Procédé d'évaluation de la démence et appareil utilisant un tel procédé
WO2023018206A1 (fr) Procédé et appareil destinés à la recommandation d'un plan de traitement d'orthodontie par séparation d'un objet dentaire à partir de données de balayage oral en trois dimensions et à la détermination automatique d'une anomalie de position d'une dent et support d'enregistrement lisible par ordinateur
WO2020189917A1 (fr) Procédé d'établissement d'un plan de mise en place d'implant utilisant un axe central d'implant, et appareil de traitement d'image dentaire associé
WO2022154523A1 (fr) Procédé et dispositif de mise en correspondance de données de balayage buccal tridimensionnel par détection de caractéristique 3d basée sur l'apprentissage profond
WO2020218734A1 (fr) Procédé d'affichage d'une zone de sous-coupe lors de la conception d'une prothèse et dispositif cad de prothèse pour le mettre en œuvre
WO2024071571A1 (fr) Procédé de segmentation de modèle buccal tridimensionnel et système s'y rapportant
WO2024111914A1 (fr) Procédé de conversion d'images médicales au moyen d'une intelligence artificielle à polyvalence améliorée et dispositif associé
WO2021054700A1 (fr) Procédé pour fournir des informations de lésion dentaire et dispositif l'utilisant
WO2020209496A1 (fr) Procédé de détection d'objet dentaire, et procédé et dispositif de mise en correspondance d'image utilisant un objet dentaire
WO2023182702A1 (fr) Dispositif et procédé de traitement de données de diagnostic par intelligence artificielle pour des images numériques de pathologie

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23872698

Country of ref document: EP

Kind code of ref document: A1