WO2020218560A1 - Tooth position analysis device, tooth region extraction model generation method, tooth position analysis method, program, and recording medium - Google Patents


Info

Publication number
WO2020218560A1
WO2020218560A1 (application PCT/JP2020/017802)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
tooth
mesh
image
oral scanner
Prior art date
Application number
PCT/JP2020/017802
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
匡治 小林
義典 小岩
浩太郎 樋口
雄之 石田
卓史 小野
一博 須賀
Original Assignee
株式会社カイ
国立大学法人東京医科歯科大学
学校法人工学院大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社カイ, 国立大学法人東京医科歯科大学, 学校法人工学院大学
Priority to JP2021516290A (published as JPWO2020218560A1)
Publication of WO2020218560A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/51 Apparatus or devices for radiation diagnosis specially adapted for dentistry
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation

Definitions

  • the present invention relates to a tooth position analyzer, a tooth region extraction model generation method, a tooth position analysis method, a program, and a recording medium.
  • Because periodontal disease is implicated in various systemic diseases, the importance of orthodontics is increasing. In orthodontics, it is necessary to periodically check changes in the positions of the entire upper teeth, the entire lower teeth, and each individual tooth, such as how much they have moved and how much they have rotated. In recent years, therefore, attempts have been made to analyze these positional changes by various image processing techniques.
  • an object of the present invention is to provide a new system capable of easily analyzing changes in tooth position.
  • The tooth movement analyzer of the present invention includes an input unit, a tooth region extraction unit, a joining unit, a correction unit, a reconstruction unit, and an alignment unit.
  • The input unit receives a three-dimensional CT image of the target oral cavity and an oral scanner three-dimensional mesh; the three-dimensional CT image is a three-dimensional image in which two-dimensional images of the entire upper teeth or the entire lower teeth are stacked.
  • The oral scanner three-dimensional mesh is data on the crown surface of the upper teeth or the crown surface of the lower teeth, composed of a set of three-dimensional coordinate points.
  • The tooth region extraction unit decomposes the three-dimensional CT image into two-dimensional images of the X cross section, the Y cross section, and the Z cross section, and extracts colored two-dimensional images in which the tooth regions, each including a crown and a root, are color-coded for each tooth. The joining unit combines the colored two-dimensional images to generate a color-coded colored three-dimensional image.
  • The correction unit corrects errors in the colored three-dimensional image.
  • The reconstruction unit reconstructs, from the color-coded tooth regions in the corrected colored three-dimensional image, a three-dimensional mesh as data of the tooth surface shape composed of a set of three-dimensional coordinate points.
  • The alignment unit aligns the reconstructed three-dimensional mesh and the oral scanner three-dimensional mesh with each other by their crown surfaces, and uses the result as reference information on their positional relationship.
  • The method for analyzing tooth movement of the present invention includes an input step, a tooth region extraction step, a joining step, a correction step, a reconstruction step, and an alignment step.
  • In the input step, a three-dimensional CT image of the target oral cavity and an oral scanner three-dimensional mesh are input; the three-dimensional CT image is a three-dimensional image in which two-dimensional images of the entire upper teeth or the entire lower teeth are stacked.
  • The oral scanner three-dimensional mesh is data on the crown surface of the upper teeth or the crown surface of the lower teeth, composed of a set of three-dimensional coordinate points.
  • In the tooth region extraction step, the three-dimensional CT image is decomposed into two-dimensional images of the X cross section, the Y cross section, and the Z cross section, and colored two-dimensional images are extracted in which the tooth regions, each including a crown and a root, are color-coded for each tooth. In the joining step, the colored two-dimensional images are combined to generate a color-coded colored three-dimensional image.
  • In the correction step, errors in the colored three-dimensional image are corrected.
  • In the reconstruction step, a three-dimensional mesh is reconstructed, from the color-coded tooth regions in the corrected colored three-dimensional image, as data of the tooth surface shape composed of a set of three-dimensional coordinate points.
  • In the alignment step, the reconstructed three-dimensional mesh and the oral scanner three-dimensional mesh are aligned with each other by their crown surfaces, and the result is used as reference information on their positional relationship.
  • the program of the present invention is a program for causing a computer to execute each step of the method for analyzing tooth movement of the present invention.
  • the recording medium of the present invention is a computer-readable recording medium on which the program of the present invention is recorded.
  • According to the present invention, a reconstructed three-dimensional mesh in which the surface shape of each tooth is reconstructed is generated in advance using a three-dimensional CT image, and this mesh is aligned with the oral scanner three-dimensional mesh to obtain reference information on their positional relationship.
  • The movement and rotation of the teeth can then be analyzed easily, simply by acquiring oral scanner three-dimensional meshes over time after the start of treatment and comparing them with the reference information.
  • The three-dimensional CT image may be acquired only the first time, for example; from the second time onward, only the simple and inexpensive oral scanner three-dimensional mesh needs to be acquired. The present invention is therefore an extremely useful technique in orthodontics and the like.
  • FIG. 1 is a block diagram showing an example of the analyzer of the first embodiment.
  • FIG. 2 is a block diagram showing an example of the model generator of the first embodiment.
  • FIG. 3 is a schematic view showing an example of various images of the second embodiment.
  • FIG. 4 is a schematic diagram of the coordinates in the second embodiment.
  • FIG. 1A is a block diagram showing a configuration of an example of the tooth position analyzer 20 of the present embodiment.
  • The analyzer 20 has an input unit 21, a tooth region extraction unit 22, a joining unit 23, a correction unit 24, a reconstruction unit 25, and an alignment unit 26, and may further include a storage unit 27, a movement analysis unit 28, and an output unit 29.
  • the analyzer 20 is also referred to as, for example, a tooth position analysis system.
  • the analyzer 20 may be, for example, one device including the above-mentioned parts, or a device in which the above-mentioned parts can be connected via a communication network.
  • The communication network is not particularly limited; a known network can be used, and it may be wired or wireless.
  • Examples of the communication network include an Internet line, a telephone line, a LAN (Local Area Network), and WiFi (Wireless Fidelity).
  • In the analyzer 20, the processing of each unit may be performed on the cloud.
  • FIG. 1 (B) illustrates a block diagram of the hardware configuration of the analyzer 20.
  • the analyzer 20 includes, for example, a CPU 101, a memory 102, a bus 103, an input device 104, a display 105, a communication device 106, and a storage device 207. Each part of the analyzer 20 is connected to each other by an interface (I / F) via a bus 103.
  • the CPU 101 is a processor that controls the entire analyzer 20, and is not limited to the CPU, and may be another processor.
  • In the analyzer 20, for example, the program of the present invention and other programs are executed by the CPU 101, and various information is read and written.
  • the analyzer 20 can be connected to the communication network by, for example, the communication device 106 connected to the bus 103, and can also be connected to an external device via the communication network.
  • the external device is not particularly limited, and examples thereof include a server, a PC, and a tablet.
  • the connection method with the external device is not particularly limited, and may be, for example, a wired connection or a wireless connection.
  • the wired connection may be, for example, a cord connection or a cable connection for using a communication network.
  • the wireless connection may be, for example, a connection using a communication network or a connection using wireless communication.
  • the communication line network is not particularly limited, and for example, a known communication line network can be used, which is the same as described above.
  • the memory 102 includes, for example, a main memory, and the main memory is also referred to as a main storage device.
  • the main memory is, for example, a RAM (random access memory).
  • the memory 102 further includes, for example, a ROM (read-only memory).
  • the storage device 207 is also referred to as a so-called auxiliary storage device with respect to the main memory (main storage device), for example.
  • the storage device 207 includes, for example, a storage medium and a drive for reading and writing to the storage medium.
  • the storage medium is not particularly limited, and may be, for example, an internal type or an external type, and examples thereof include HD (hard disk), CD-ROM, CD-R, CD-RW, MO, DVD, flash memory, and memory card.
  • the drive is not particularly limited.
  • a hard disk drive (HDD) in which a storage medium and a drive are integrated can be exemplified.
  • The program 208 of the present invention and the learning model 209 described later are stored in the storage device 207; as described above, when they are executed, the CPU 101 reads the program 208 and the learning model 209 from the storage device 207 into the memory 102.
  • the storage device 207 may include, for example, a storage unit 27, and may store information input to the input unit 21, information obtained by the analyzer 20, and the like.
  • the input device 104 is, for example, a scanner, a touch panel, a keyboard, or the like.
  • Examples of the display 105 include an LED display and a liquid crystal display.
  • The method for analyzing tooth movement in the present embodiment can be carried out using, for example, the analyzer 20 of FIG. 1.
  • the analysis method of this embodiment is not limited to the use of the analyzer 20.
  • the method for analyzing tooth movement of the present invention includes an input step, a tooth region extraction step, a joining step, a correction step, a reconstruction step, and an alignment step.
  • the input step is a step of inputting a three-dimensional CT image of the target oral cavity and an oral scanner three-dimensional mesh, and can be executed by, for example, the input unit 21 of the analyzer 20.
  • the three-dimensional mesh is data composed of a set of three-dimensional coordinate points related to the surface shape, and examples thereof include polygon data and point cloud data.
  • the polygon data is, for example, a set of triangles, and the triangles are composed of three three-dimensional coordinate points.
  • the point cloud data is, for example, a set of three-dimensional coordinate points on a surface.
  • the target is, for example, a patient undergoing dental treatment.
  • the three-dimensional CT image is a CT image in the oral cavity obtained by a CT (Computed Tomography) apparatus, and is composed of two-dimensional images stacked.
  • the type of the CT apparatus is not particularly limited, and examples thereof include a cone beam (CB) type CT apparatus specialized in dentistry, and the three-dimensional image obtained by this is referred to as, for example, a CBCT three-dimensional image.
  • The image is not limited to the CBCT three-dimensional image; a three-dimensional image obtained by a CT device other than CBCT (for example, a helical CT device) may also be used, and as a specific example, DICOM data can be used.
  • The oral scanner three-dimensional mesh is data composed of a set of three-dimensional coordinate points obtained by optically scanning the three-dimensional surface shape in the oral cavity with an oral scanner; examples include polygon data and point cloud data, as described above.
  • As the oral scanner three-dimensional mesh, data on the crown surface of the upper teeth, the crown surface of the lower teeth, or the crown surfaces of both the upper and lower teeth is used.
  • The oral scanner is not particularly limited; for example, the product Trophy 3DI Pro (Yoshida Co., Ltd.) can be used.
  • The tooth region extraction step is a step of decomposing the three-dimensional CBCT image into a two-dimensional image of the X cross section, a two-dimensional image of the Y cross section, and a two-dimensional image of the Z cross section, and extracting colored two-dimensional images in which the tooth regions, including the crown and the root, are color-coded for each tooth.
  • This step can be performed, for example, by the tooth region extraction unit 22 of the analyzer 20.
  • The colored two-dimensional images can be extracted (this is also referred to as generation) by applying the learning model 209 to the decomposed two-dimensional images, for example.
  • The training data for generating the learning model 209 can be, for example, sets of the decomposed two-dimensional images of the X cross section, the Y cross section, and the Z cross section of a coloring learning image, that is, a CBCT three-dimensional image in which the tooth regions are color-coded so that adjacent teeth have different colors.
  • Accordingly, when an uncolored set of decomposed two-dimensional images of the X cross section, the Y cross section, and the Z cross section is provided to the learning model 209, it can generate colored two-dimensional images in which the tooth regions are color-coded for each tooth.
  • An example of the decomposition into a two-dimensional image and the learning model 209 will be described later in Modification 1.
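The decomposition of a CT volume into cross-sectional two-dimensional images can be sketched as follows. This is a minimal illustration assuming the volume is stored as a NumPy array with axis order (Z, Y, X); the specification does not prescribe an implementation.

```python
import numpy as np

def decompose_volume(volume):
    """Decompose a 3D CT volume into lists of 2D cross sections.

    Assumes axis order (Z, Y, X); each list holds one 2D slice per index.
    """
    z_slices = [volume[k, :, :] for k in range(volume.shape[0])]  # Z cross sections
    y_slices = [volume[:, j, :] for j in range(volume.shape[1])]  # Y cross sections
    x_slices = [volume[:, :, i] for i in range(volume.shape[2])]  # X cross sections
    return x_slices, y_slices, z_slices

# Example: a 4 x 5 x 6 volume yields 6 X-slices, 5 Y-slices, and 4 Z-slices.
vol = np.zeros((4, 5, 6))
xs, ys, zs = decompose_volume(vol)
```

Each slice list would then be fed to the learning model one image at a time.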
  • The joining step is a step of combining the color-coded colored two-dimensional images to generate a color-coded colored three-dimensional image, and can be executed by, for example, the joining unit 23 of the analyzer 20.
  • the method of combining the colored two-dimensional images to generate the colored three-dimensional image is not particularly limited, and a general method can be adopted. For example, it can be performed by an OR operation process in a logical operation process between images.
  • The OR operation sets the result to 1 if either of the voxel (pixel) bits at the same position in the two images is 1, and to 0 only when both are 0.
  • the correction step is a step of correcting an error in the colored three-dimensional image, and can be executed by, for example, the correction unit 24 of the analyzer 20.
  • Errors in the colored three-dimensional image include, for example, holes or protrusions that cannot actually exist, and coloring that should exist in only one place appearing in two or more regions.
  • Small holes, small protrusions, and the like can be removed from the colored three-dimensional image by using a three-dimensional morphological filter.
  • the threshold value is not particularly limited and can be set arbitrarily.
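One way such a three-dimensional morphological filter might be realized is with binary opening and closing, as in the sketch below. SciPy's `ndimage` routines are an assumption here; the specification does not name a library.

```python
import numpy as np
from scipy import ndimage

def clean_tooth_mask(mask, iterations=1):
    """Remove small protrusions (opening) and fill small holes (closing)
    in a binary 3D tooth mask using 3D morphological operations."""
    opened = ndimage.binary_opening(mask, iterations=iterations)
    closed = ndimage.binary_closing(opened, iterations=iterations)
    return closed.astype(np.uint8)

# Example: a 5x5x5 tooth cube plus one stray voxel that cannot be a tooth.
mask = np.zeros((9, 9, 9), dtype=np.uint8)
mask[2:7, 2:7, 2:7] = 1
mask[0, 0, 0] = 1          # isolated error voxel
cleaned = clean_tooth_mask(mask)
```

The `iterations` parameter plays the role of the size threshold mentioned above: larger values remove larger defects.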
  • The reconstruction step is a step of reconstructing, from the color-coded tooth regions in the corrected colored three-dimensional image, a three-dimensional mesh as data on the surface shape of the teeth composed of a set of three-dimensional coordinate points.
  • the step can be performed, for example, by the reconstruction unit 25 of the analyzer 20.
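The specification does not name a reconstruction algorithm (marching cubes would be a common choice for producing polygon data). As a simplified stand-in, the sketch below extracts the surface voxels of one color-coded tooth as a set of three-dimensional coordinate points, i.e., point cloud data:

```python
import numpy as np
from scipy import ndimage

def surface_point_cloud(label_volume, label):
    """Return the 3D coordinates of surface voxels for one color-coded tooth.

    A voxel is 'surface' if it belongs to the tooth but borders a
    non-tooth voxel (i.e., it disappears under binary erosion).
    """
    mask = (label_volume == label)
    interior = ndimage.binary_erosion(mask)
    surface = mask & ~interior
    return np.argwhere(surface)  # (N, 3) array of 3D coordinate points

# Example: a 3x3x3 tooth labeled 7 inside a 5x5x5 volume.
vol = np.zeros((5, 5, 5), dtype=np.int32)
vol[1:4, 1:4, 1:4] = 7
pts = surface_point_cloud(vol, 7)
```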
  • The alignment step is a step of aligning the reconstructed three-dimensional mesh and the oral scanner three-dimensional mesh by the crown surface, and using the result as reference information on their positional relationship.
  • This step can be performed, for example, by the alignment unit 26 of the analyzer 20.
  • The reconstructed three-dimensional mesh is data composed of a set of three-dimensional coordinate points; specifically, for the entire upper teeth or the entire lower teeth, it is data representing the surface of each entire tooth (crown and root).
  • the three-dimensional mesh is, for example, polygon data or point cloud data.
  • The analysis method of the present embodiment may further include, for example, a storage step in which the identification information of the target and the reference information on the positional relationship are stored in association with each other. This step can be performed, for example, by the storage unit 27 of the analyzer 20.
  • In the present embodiment, the reconstructed three-dimensional mesh and the oral scanner three-dimensional mesh are acquired as information at the start of treatment, and aligning them yields the reference information on their positional relationship. Therefore, after the start of treatment, the movement and rotation of the target's tooth positions can be analyzed using the reference information simply by acquiring a new oral scanner three-dimensional mesh. More specifically, the reconstructed three-dimensional mesh is, as described above, data composed of a set of three-dimensional coordinate points representing the surfaces of the crown and the root (for example, polygon data or point cloud data).
  • The oral scanner three-dimensional mesh is data (for example, polygon data or point cloud data) composed of a set of three-dimensional coordinate points representing the surface of the crown. The reconstructed three-dimensional mesh and the oral scanner three-dimensional mesh can therefore be aligned based on their common crown surfaces. By using this positional relationship as reference information, the position can be analyzed by extracting the change in the oral scanner three-dimensional mesh after the start of treatment.
  • the present embodiment may further include, for example, a movement analysis step for analyzing the movement of the target tooth after the start of treatment.
  • In the movement analysis, as the input step, a new oral scanner three-dimensional mesh of the target is input by the input unit 21.
  • The new oral scanner three-dimensional mesh Sn is, for example, an image acquired after the start of treatment, whereas the image that is the source of the reference information is the oral scanner three-dimensional mesh S1 acquired before the start of treatment; here, n is a positive integer of 2 or more.
  • In the movement analysis step, the new oral scanner three-dimensional mesh Sn and the oral scanner three-dimensional mesh S1 of the reference information are first aligned in the global coordinates of the corresponding jaw; then, for each tooth, the new oral scanner three-dimensional mesh Sn and the reconstructed three-dimensional mesh D1 of the reference information are aligned in the local coordinates of that tooth, and the movement and rotation angle of each tooth are analyzed based on the global coordinates and the local coordinates.
  • This step can be performed, for example, by the movement analysis unit 28 of the analyzer 20.
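Once a per-tooth 4 × 4 rigid transform has been obtained by the alignment, the amount of movement and the rotation angle could be read off as follows. This is an illustrative decomposition; the specification does not give a formula.

```python
import numpy as np

def movement_and_rotation(T):
    """Extract the translation distance (in mesh units) and the rotation
    angle in degrees from a 4x4 rigid transformation matrix."""
    translation = np.linalg.norm(T[:3, 3])
    # Rotation angle from the trace of the 3x3 rotation block:
    # trace(R) = 1 + 2*cos(theta) for a rotation by theta.
    cos_theta = np.clip((np.trace(T[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_theta))
    return translation, angle_deg

# Example: a 90-degree rotation about Z combined with a translation of length 5.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
T[:3, :3] = Rz
T[:3, 3] = [3.0, 4.0, 0.0]
dist, angle = movement_and_rotation(T)
```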
  • Modification 1: In the first modification, the generation of a learning model that can be used for extracting the colored two-dimensional images is illustrated. Note that this is an example; the analyzer and the analysis method of the present invention are not limited to these examples.
  • FIG. 2A is a block diagram showing a configuration of an example of a model generation device 10 that generates a learning model 209.
  • the model generation device 10 has an input unit 11, a preprocessing unit 12, and a learning model generation unit 13, and may further have an output unit.
  • the model generation device 10 is also referred to as, for example, a model generation system.
  • the model generation device 10 may be, for example, one device including the above-mentioned parts, or may be a device in which the above-mentioned parts can be connected via a communication network.
  • the communication network is the same as described above.
  • FIG. 2 (B) illustrates a block diagram of the hardware configuration of the model generator 10.
  • the model generation device 10 includes, for example, a CPU (central processing unit) 101, a memory 102, a bus 103, an input device 104, a display 105, a communication device 106, and a storage device 107. Each part of the model generation device 10 is connected to each other by an interface (I / F) via a bus 103.
  • the program of the present invention and other programs are executed by the CPU 101, and various information is read and written.
  • the model generator 10 can be connected to the communication network by, for example, the communication device 106 connected to the bus 103, and can also be connected to an external device via the communication network.
  • the external device is not particularly limited, and examples thereof include a server, a PC, and a tablet.
  • the connection method with the external device is not particularly limited, and is, for example, the same as described above.
  • the memory 102 includes, for example, a main memory, and the main memory is also referred to as a main storage device.
  • the main memory is, for example, a RAM (random access memory).
  • the memory 102 further includes, for example, a ROM (read-only memory).
  • the storage device 107 is also referred to as a so-called auxiliary storage device with respect to the main memory (main storage device), for example.
  • the storage device 107 includes, for example, a storage medium and a drive for reading and writing to the storage medium.
  • the storage medium is not particularly limited, and is, for example, the same as described above.
  • the storage device 107 may be, for example, a hard disk drive (HDD) in which a storage medium and a drive are integrated.
  • The program 108 is stored in the storage device 107; as described above, when it is executed, the CPU 101 reads the program 108 from the storage device 107 into the memory 102.
  • the storage device 107 may store, for example, information input to the input unit 11, information obtained by the model generation device 10, and the like.
  • The method of generating the learning model 209 for extracting the tooth regions can be carried out using, for example, the model generation device 10 of FIG. 2.
  • the generation method of this example is not limited to the use of the model generation device 10.
  • The method of generating the tooth region extraction model of this example includes an input step, a preprocessing step, and a learning model generation step.
  • The input step is a step of inputting, as training data, a plurality of CT three-dimensional images (as a specific example, CBCT three-dimensional images) in which two-dimensional images are stacked, and can be executed by, for example, the input unit 11 of the model generation device 10.
  • the CBCT three-dimensional image is a CT image in the oral cavity obtained by a cone beam (CB) type CT device specialized in dentistry, and is composed of two-dimensional images stacked.
  • CB cone beam
  • The image is not limited to the CBCT three-dimensional image; a three-dimensional image obtained by a CT device other than CBCT (for example, a helical CT device) may also be used, and as a specific example, DICOM data can also be used.
  • The preprocessing step is a step of generating a coloring learning image in which the tooth regions of the CBCT three-dimensional image are color-coded so that adjacent teeth have different colors. It can be executed by the preprocessing unit 12 of the model generation device 10.
  • adjacent teeth may have different colors.
  • Distant teeth that are not adjacent to each other may be colored in the same color, but so that each tooth can be identified, it is preferable that all teeth be colored in different colors.
  • For example, the colors may be specified per tooth, such as "red" for the maxillary right central incisor and "pink" for the maxillary left central incisor.
  • The learning model generation step is a step of decomposing the coloring learning image into a two-dimensional image of the X cross section, a two-dimensional image of the Y cross section, and a two-dimensional image of the Z cross section to generate a learning model that recognizes tooth regions.
  • This step can be executed, for example, by the learning model generation unit 13 of the model generation device 10.
  • the learning model generation unit 13 is not particularly limited, and for example, an existing learning model generation system can be used.
  • Since the coloring learning image is a three-dimensional image, it can be decomposed into two-dimensional images along arbitrary X, Y, and Z axes.
  • When the learning model generated in this example (for example, the learning model 209) is given the two-dimensional images of the X cross section, the Y cross section, and the Z cross section obtained by decomposing a target three-dimensional CBCT image, it can generate colored two-dimensional images in which the region of each tooth, composed of the crown and the root, is color-coded for each tooth.
  • By combining these colored two-dimensional images, a colored three-dimensional image color-coded for each tooth can be generated for the target. With this colored three-dimensional image, for example, it becomes possible to easily analyze the movement of tooth positions, as described above.
  • FIG. 3 shows a schematic diagram of each image.
  • (A) is an example of the three-dimensional CBCT image, which is not color-coded.
  • (B) is a three-dimensional surface image reconstructed from the three-dimensional CBCT image by the various processes described above.
  • (C) is an example of the oral scanner three-dimensional mesh.
  • In the reconstructed image of FIG. 3B, the entire surface of each tooth, consisting of the crown and the root, appears as a three-dimensional mesh, whereas in the oral scanner three-dimensional mesh of FIG. 3C, the surface shape of the crown appears as a three-dimensional mesh.
  • the three-dimensional oral scanner mesh shown in FIG. 3C is also referred to as a crown surface mesh.
  • the oral scanner three-dimensional mesh is, for example, data obtained by the oral scanner device, and the oral scanner device can output the measurement result as a surface mesh.
  • In the analyzer 20 of the first embodiment, for example, a color-coded colored three-dimensional image is generated from the three-dimensional CBCT image input by the input unit 21 via the tooth region extraction unit 22, the joining unit 23, and the correction unit 24, and the reconstruction unit 25 reconstructs a three-dimensional mesh (for example, a surface three-dimensional mesh that is a set of triangles composed of three-dimensional coordinate points) from the colored three-dimensional image.
  • FIG. 4A shows the whole of the reconstructed three-dimensional mesh, and FIG. 4B shows one tooth in the reconstructed three-dimensional mesh.
  • the coordinate system of the reconstructed three-dimensional mesh is composed of the global coordinates and the local coordinates.
  • the global coordinates are set for each of the upper jaw and the lower jaw, and are hereinafter referred to as the upper jaw global coordinates and the lower jaw global coordinates.
  • the origin of the maxillary global coordinates is determined, for example, as the center of gravity of all the tooth regions of the maxilla in the color-coded colored three-dimensional image obtained in the joining step. Then, as shown in FIG. 4A, the X-axis, the Y-axis, and the Z-axis (not shown) are aligned with the coordinate axes of the reconstructed three-dimensional mesh with respect to the origin.
  • the local coordinates are set for each tooth.
  • the origin of the local coordinates is determined, for example, as the center of gravity of the area of one colored tooth in the colored three-dimensional image.
  • The X-axis, the Y-axis (not shown), and the Z-axis are each aligned, with respect to the origin, to the axial directions obtained from a principal component analysis of the tooth region.
  • the direction vector having the center of gravity as the origin and having the largest variance value in the tooth region is called the first principal component vector.
  • the Z-axis direction in the local coordinates of FIG. 4 corresponds to the first principal component vector.
  • The second principal component vector is defined as a vector perpendicular to the first principal component vector, in the direction in which the variance value of the tooth region is the largest.
  • the X-axis in the local coordinates of FIG. 4 corresponds to the second principal component vector.
  • the third principal component vector is defined as a vector in the direction perpendicular to the first principal component vector and the second principal component vector and in the direction in which the variance value of the tooth region is the largest.
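The three principal component vectors described above can be computed, for example, by an eigendecomposition of the covariance matrix of the tooth-region coordinates. The sketch below is an assumed implementation; the specification does not name one.

```python
import numpy as np

def local_axes(points):
    """Return the centroid and principal-component axes of a tooth region.

    points: (N, 3) array of the tooth's 3D coordinate points.
    Axes are returned as rows sorted by decreasing variance, so the first
    row corresponds to the first principal component vector (the local
    Z-axis), the second to the local X-axis, and so on.
    """
    centroid = points.mean(axis=0)           # origin of the local coordinates
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return centroid, eigvecs[:, order].T

# Example: a point set elongated along the x direction.
pts = np.array([[3.0, 0, 0], [-3, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 0.5], [0, 0, -0.5]])
centroid, axes = local_axes(pts)
```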
  • The relationship between the global coordinates and the local coordinates can be expressed by a 4×4 matrix that combines a rotation and a translation.
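As a worked illustration of such a matrix (an editorial sketch, not taken from the patent), a 3×3 rotation R and a translation vector t can be packed into a single 4×4 homogeneous transform and applied to a point:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and a translation t into one 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return T[:3, :3] @ p + T[:3, 3]
```

Storing rotation and translation in one matrix lets coordinate changes be chained by plain matrix multiplication, which is what the later alignment steps rely on.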
  • A three-dimensional CBCT image C1 and an oral scanner three-dimensional mesh S1 are acquired for the oral cavity of the subject X. Then, the learning model generated from the three-dimensional CBCT image C0 is used to extract the colored two-dimensional images, generate and correct the colored three-dimensional image, and produce the reconstructed three-dimensional mesh D1.
  • The reconstructed three-dimensional mesh D1 is a crown-root surface mesh as described above, and the oral scanner three-dimensional mesh S1 is a crown surface mesh as described above.
  • The alignment can be performed by, for example, the ICP (Iterative Closest Point) method.
  • The alignment yields a 4×4 coordinate transformation matrix between the two surface meshes.
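A minimal point-to-point ICP in the spirit of the step above can be sketched with NumPy. This is an editorial illustration under simplifying assumptions (brute-force nearest neighbours, no outlier rejection); production implementations use spatial indexing and robust correspondence filtering:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iterations=30):
    """Point-to-point ICP; returns the 4x4 transform mapping source onto target."""
    T = np.eye(4)
    cur = source.copy()
    for _ in range(iterations):
        # brute-force nearest neighbours (fine for small meshes)
        d2 = ((cur[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)
        R, t = best_fit_transform(cur, target[idx])
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                   # accumulate the 4x4 transform
    return T
```

The returned 4×4 matrix plays the role of the coordinate transformation matrix between the surface meshes mentioned in the text.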
  • Next, the global coordinates of the second oral scanner three-dimensional mesh S2 (the surface mesh of the crowns) are set.
  • Since the reconstructed three-dimensional mesh D1 is composed of, for example, 32 upper and lower teeth, there are 32 crown-root surface meshes. The reconstructed three-dimensional mesh D1 therefore has one set of global coordinates and 32 sets of local coordinates, one per crown-root surface mesh. That is, there are 32 transformation matrices of the local coordinates with respect to the global coordinates.
  • By combining the alignment result between the global coordinates of the oral scanner three-dimensional mesh S1 and those of the oral scanner three-dimensional mesh S2 with the alignment result between the global coordinates of the oral scanner three-dimensional mesh S1 and those of the reconstructed three-dimensional mesh D1, the global coordinates of the oral scanner three-dimensional mesh S2 and the reconstructed three-dimensional mesh D1 can be aligned.
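The chaining of the two alignment results amounts to a composition of 4×4 homogeneous transforms. The sketch below is an editorial illustration; the function name and the transform-direction conventions are assumptions, not taken from the patent:

```python
import numpy as np

def compose_alignment(T_s1_to_s2, T_s1_to_d1):
    """Transform taking S2's global coordinates into D1's coordinates.

    T_s1_to_s2: 4x4 transform from S1's global coordinates to S2's.
    T_s1_to_d1: 4x4 transform from S1's global coordinates to D1's.
    Going S2 -> S1 (inverse of the first) and then S1 -> D1 gives S2 -> D1.
    """
    return T_s1_to_d1 @ np.linalg.inv(T_s1_to_s2)
```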
  • This state is called the initial posture.
  • At the time of the second consultation, each tooth has moved, so the oral scanner three-dimensional mesh S2 and the reconstructed three-dimensional mesh D1 are aligned with an offset corresponding to the amount of tooth movement. Therefore, the 32 crown-root surface meshes of the reconstructed three-dimensional mesh D1 are taken out one at a time from the initial posture and individually aligned with the oral scanner three-dimensional mesh S2.
  • Each time a crown-root surface mesh of the reconstructed three-dimensional mesh D1 is aligned, the transformation matrix from the global coordinates of the oral scanner three-dimensional mesh S2 to the local coordinates of that crown-root surface mesh is updated.
  • After the update, the crown-root surface mesh (reconstructed three-dimensional mesh) D1 becomes the reconstructed three-dimensional mesh D2. That is, the reconstructed three-dimensional mesh D2 reflects the state in which each tooth has moved by the time of the second consultation.
  • In this way, the global coordinates and the local coordinates of the second oral scanner three-dimensional mesh S2 (the surface mesh of the crowns) are set, and the 4×4 transformation matrices of the local coordinates with respect to the global coordinates are obtained for both the first and the second consultation. Since a 4×4 transformation matrix is composed of two elements, a rotation and a translation, the amounts of rotation and translation between the first and second consultations can be calculated; these represent the movement and rotation of the position of each target tooth at the second consultation relative to the first. Similarly, for the third and subsequent consultations, movement and rotation can be analyzed simply by acquiring a new oral scanner three-dimensional mesh.
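Extracting the rotation and translation of one tooth between two consultations from its 4×4 matrices can be sketched as follows. This is an editorial illustration; `tooth_motion` is a hypothetical name, and the rotation angle is recovered from the trace of the relative rotation matrix:

```python
import numpy as np

def tooth_motion(T_first, T_second):
    """Rotation angle (degrees) and translation vector of one tooth.

    T_first, T_second: 4x4 transforms of the tooth's local coordinates with
    respect to the global coordinates at the first and second consultations.
    """
    delta = T_second @ np.linalg.inv(T_first)      # relative motion
    R, t = delta[:3, :3], delta[:3, 3]
    # For a rotation matrix, trace(R) = 1 + 2*cos(theta).
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta)), t
```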
  • The program of the present embodiment is a program that causes a computer to execute the tooth movement analysis method of the present invention.
  • The program of this embodiment may be recorded on, for example, a computer-readable recording medium.
  • The recording medium is not particularly limited, and examples thereof include the storage media described above.
  • As described above, in the present invention, a reconstructed three-dimensional mesh in which the surface shape of each tooth is reconstructed is generated in advance from a three-dimensional CT image, and its position is analyzed against the oral scanner three-dimensional mesh.
  • In the present invention, a three-dimensional CT image need be acquired only at the first consultation, for example; from the second consultation onward, only the simple and inexpensive oral scanner three-dimensional mesh needs to be acquired. The present invention is therefore an extremely useful technique in orthodontics and related fields.

PCT/JP2020/017802 2019-04-26 2020-04-24 Tooth position analysis device, tooth region extraction model generation method, tooth position analysis method, program, and recording medium WO2020218560A1 (ja)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021516290A JPWO2020218560A1 (ja) 2019-04-26 2020-04-24

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019085115 2019-04-26
JP2019-085115 2019-04-26

Publications (1)

Publication Number Publication Date
WO2020218560A1 true WO2020218560A1 (ja) 2020-10-29

Family

ID=72942783

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/017802 WO2020218560A1 (ja) 2019-04-26 2020-04-24 Tooth position analysis device, tooth region extraction model generation method, tooth position analysis method, program, and recording medium

Country Status (2)

Country Link
JP (1) JPWO2020218560A1 (ja)
WO (1) WO2020218560A1 (ja)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113397585A (zh) * 2021-07-27 2021-09-17 朱涛 Tooth model generation method and system based on oral CBCT and intraoral scan data
CN113842216A (zh) * 2021-12-01 2021-12-28 极限人工智能有限公司 Upper and lower tooth occlusion simulation method, device, and electronic apparatus
CN114612642A (zh) * 2022-03-04 2022-06-10 杭州隐捷适生物科技有限公司 Automatic proximal-surface repair method for three-dimensional dental crown mesh models based on adjacency information
CN115471506A (zh) * 2022-09-13 2022-12-13 福州海狸家口腔科技有限公司 Position adjustment method for intraoral scan models, storage medium, and electronic apparatus
CN115583016A (zh) * 2022-10-08 2023-01-10 北京缔佳医疗器械有限公司 Undercut filling method and apparatus for dental model production, storage medium, and electronic apparatus
CN116524118A (zh) * 2023-04-17 2023-08-01 杭州雅智医疗技术有限公司 Multimodal rendering method based on three-dimensional tooth CBCT data and oral scan models
WO2024082284A1 (zh) * 2022-10-21 2024-04-25 深圳先进技术研究院 Automatic orthodontic tooth arrangement method and system based on mesh-feature deep learning
WO2024188024A1 (zh) * 2023-03-10 2024-09-19 先临三维科技股份有限公司 Tooth color determination method, apparatus, device, and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008284350A (ja) * 2000-04-25 2008-11-27 Align Technology Inc System and method for dental treatment planning
JP2009090138A (ja) * 2004-09-24 2009-04-30 Icat:Kk Cross-section information detection device
JP2011147728A (ja) * 2010-01-25 2011-08-04 Nihon Univ Image generation device, image generation method, and program
WO2012096312A1 (ja) * 2011-01-11 2012-07-19 株式会社アドバンス Intraoral imaging and display system
KR20130008236A (ko) * 2011-07-12 2013-01-22 (주)쓰리디아이티 Method for generating image-matching information for orthognathic surgery and orthognathic simulation method using the same
WO2013018522A1 (ja) * 2011-07-29 2013-02-07 メディア株式会社 Method for examining periodontal disease
JP2014524795A (ja) * 2011-07-08 2014-09-25 デュレ,フランソワ Three-dimensional measuring device used in the dental field
JP2015523871A (ja) * 2012-05-17 2015-08-20 DePuy Synthes Products, LLC Method of surgical planning
JP2016140761A (ja) * 2015-01-30 2016-08-08 デンタル・イメージング・テクノロジーズ・コーポレーション Dental variation tracking and prediction


Also Published As

Publication number Publication date
JPWO2020218560A1 (ja) 2020-10-29

Similar Documents

Publication Publication Date Title
WO2020218560A1 (ja) Tooth position analysis device, tooth region extraction model generation method, tooth position analysis method, program, and recording medium
US11735306B2 (en) Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
JP7168644B2 Intraoral image selection and locking
US20220148173A1 (en) Intraoral scanning system with excess material removal based on machine learning
EP3846735B1 (en) Automated orthodontic treatment planning using deep learning
Bardua et al. A practical guide to sliding and surface semilandmarks in morphometric analyses
JP7458711B2 Automation of dental CAD using deep learning
CN113728363B Method for generating a dental model based on an objective function
US20180028294A1 (en) Dental cad automation using deep learning
US20210073998A1 (en) Apparatuses and methods for three-dimensional dental segmentation using dental image data
US10368719B2 (en) Registering shape data extracted from intra-oral imagery to digital reconstruction of teeth for determining position and orientation of roots
US20130169639A1 (en) System and method for interactive contouring for 3d medical images
JP2025525610A Dental restoration automation
DE112014001417T5 Method and apparatus for shape analysis, storage and retrieval of 3D models with application in automated dental restoration design
KR102346199B1 Panoramic image generation method and image processing device therefor
WO2025097057A1 (en) Modeling and visualization of facial structure for dental treatment planning
JP2024031920A Method for automatically generating a prosthesis from three-dimensional scan data, and computer-readable recording medium storing a program for executing the method on a computer
US12186153B2 (en) Automated tooth administration in a dental restoration workflow
KR20200134037A Method for automatically generating a nerve canal line and medical image processing device therefor
Rekik et al. TSegLab: Multi-stage 3D dental scan segmentation and labeling
KR20220145758A Method for evaluating mandibular asymmetry using a computing device and surgical simulation method using the same
US11361524B1 (en) Single frame control view
Hassani Najafabadi Enhancing Quality of Low-Dose CT Scans Via Generative Diffusion Models
양수 Morphology-aware Neural Implicit Representation Learning for 3D Mesh Generation of Dental Crown Prosthesis
Choi et al. Development of automatic 3D model comparison (ModelMatch3D) for forensic identification and testing using odontology data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20794835

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021516290

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20794835

Country of ref document: EP

Kind code of ref document: A1