CN117442370A: Method, device, equipment and storage medium for oral occlusion registration (Google Patents)


Info

Publication number
CN117442370A
CN117442370A (application CN202311665171.1A)
Authority
CN
China
Prior art keywords
occlusion
data
determining
initial model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311665171.1A
Other languages
Chinese (zh)
Inventor
吴刚
陈冬灵
王家锁
Current Assignee
Shenzhen UP3D Tech Co., Ltd.
Original Assignee
Shenzhen UP3D Tech Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen UP3D Tech Co., Ltd.
Priority: CN202311665171.1A
Publication of CN117442370A
Legal status: Pending


Classifications

    • A61C 11/00 Dental articulators, i.e. for simulating movement of the temporo-mandibular joints; articulation forms or mouldings
    • A61C 19/05 Measuring instruments specially adapted for dentistry, for determining occlusion
    • A61C 9/0053 Taking digitized impressions by optical means or methods, e.g. scanning the teeth by a laser or light beam
    • G06T 7/344 Image registration using feature-based methods involving models
    • G06T 7/35 Image registration using statistical methods
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region; detection of occlusion
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • G06T 2207/30036 Biomedical image processing: dental; teeth


Abstract

Embodiments of the present application relate to the technical field of oral medicine, and in particular to an oral occlusion registration method comprising the following steps: acquiring target data of an initial model; determining an effective area of the initial model according to the target data; determining an occlusal wear pattern of the initial model based on the effective area of the initial model; determining the occlusal movement of the upper and lower jaws in the initial model according to the wear pattern to obtain occlusal motion data; and performing a registration operation on the initial model according to the occlusal motion data. The method automatically extracts the effective area of the three-dimensional digital model to determine the patient's occlusal wear pattern and accurately registers the model accordingly, so the whole process requires no manual intervention, which effectively avoids errors caused by manual operation and improves both the registration result and the registration efficiency.

Description

Method, device, equipment and storage medium for oral occlusion registration
Technical Field
Embodiments of the present application relate to the technical field of oral medicine, and in particular to an oral occlusion registration method, apparatus, device and storage medium.
Background
With the continuous development of science and technology, digital applications are penetrating every industry. The traditional physical modelling approach is gradually being replaced by three-dimensional digital models built from data collected inside the patient's oral cavity with an intraoral scanner, which has become a common practice in modern medicine.
When digitally modelling a patient's oral cavity, the complexity and diversity of real scenes, such as scanner deviation, data loss or data errors, often mean that the three-dimensional digital model fails to reflect the patient's oral occlusal relationship correctly, so the model must be re-registered. In the related art, the occlusal relationship is re-registered either physically, by re-mounting a plaster model of the patient's mouth on an articulator, or digitally, by manually moving the coordinates of one jaw's three-dimensional data within the digital model. Both approaches judge the occlusal relationship by manual experience, which places high demands on the operator; in practice the occlusal relationship often cannot be judged accurately, the registration result is poor, and medical errors easily occur.
Disclosure of Invention
An object of the embodiments of the present application is to provide an oral occlusion registration method, so as to solve the technical problem that the related art relies on manual experience when re-registering an oral three-dimensional digital model, places high demands on the operator, and easily produces poor registration results.
In a first aspect, embodiments of the present application provide an oral bite registration method, the method comprising:
acquiring target data of an initial model;
determining an effective area of the initial model according to the target data;
determining an occlusal wear pattern of the initial model based on the effective area of the initial model;
determining the occlusal movement of the upper and lower jaws in the initial model according to the wear pattern to obtain occlusal motion data;
and performing a registration operation on the initial model according to the occlusal motion data.
With reference to the first aspect, in a possible implementation manner, the target data includes three-dimensional model data of the initial model, and the determining, according to the target data, an effective area of the initial model includes: performing point cloud meshing on the initial model according to the three-dimensional model data to obtain a target model; determining a non-working area of the target model; and deleting the non-working area of the target model to obtain the effective area.
With reference to the first aspect, in a possible implementation manner, the effective area includes a tooth area and a gum area, and the determining an occlusal wear pattern of the initial model based on the effective area includes: determining the tooth data and the gum data according to the effective area; performing a tooth-gum separation operation on the effective area according to the tooth data and the gum data, and determining characteristic information of each tooth; extracting an occlusal wear region from the effective area according to the characteristic information of each tooth; and determining the wear pattern based on the occlusal wear region.
With reference to the first aspect, in one possible implementation manner, the occlusal wear pattern corresponds to a plurality of wear surfaces, and the determining the occlusal movement of the upper and lower jaws in the initial model according to the wear pattern to obtain occlusal motion data includes: determining the inclination angles of the wear surfaces according to the wear pattern; and performing an occlusion simulation operation according to the inclination angles to determine the occlusal movement of the upper and lower jaws in the initial model, thereby obtaining the occlusal motion data. The performing a registration operation on the initial model according to the occlusal motion data includes: calculating, from the occlusal motion data, the position at which the contact area of the wear surfaces is largest during the movement of the upper and lower jaws, and taking it as the target registration position; and performing the registration operation on the initial model according to the target registration position.
With reference to the first aspect, in one possible implementation manner, the performing an occlusion simulation operation according to the inclination angles of the wear surfaces to determine the occlusal movement of the upper and lower jaws in the initial model and obtain the occlusal motion data includes: determining limit values of the collision region of the upper and lower jaws according to the inclination angles of the wear surfaces; determining the occlusal characteristics and the wear regions of the simulated occlusion according to the limit values; and moving the teeth laterally according to the occlusal characteristics and the wear regions to obtain the occlusal motion data.
With reference to the first aspect, in a possible implementation manner, the performing a tooth-gum separation operation on the effective area according to the tooth data and the gum data and determining characteristic information of each tooth includes: determining a target image corresponding to the effective area; inputting the target image into a preset semantic segmentation model, so that the semantic segmentation model performs semantic segmentation on the target image and outputs a target image of each tooth; and determining the characteristic information of each tooth according to the target image of each tooth.
With reference to the first aspect, in one possible implementation manner, before the target image is input into the preset semantic segmentation model, the method further includes: training a network according to a preset training set to obtain the semantic segmentation model, wherein the training set comprises a sample set and a label set corresponding to the sample set, and the network comprises a backbone network, a pyramid pooling module and a decoding network; the backbone network performs feature extraction on an input sample, the pyramid pooling module pools the feature extraction result, and the decoding network outputs the semantic segmentation result.
In a second aspect, an embodiment of the present application further proposes an oral occlusion registration apparatus, including:
the data acquisition module is used for acquiring target data of the initial model;
the data determining module is used for determining an effective area of the initial model according to the target data;
a pattern determining module for determining an occlusal wear pattern of the initial model based on the effective area of the initial model;
and a registration module for performing a registration operation on the initial model according to the occlusal wear pattern.
In a third aspect, embodiments of the present application also propose a computer device comprising a memory and a processor, the memory being connected to the processor, the processor being arranged to execute one or more computer programs stored in the memory, the processor, when executing the one or more computer programs, causing the computer device to implement the method as described in the first aspect.
In a fourth aspect, embodiments of the present application also propose a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform a method according to the first aspect.
The embodiment of the application can realize the following technical effects:
based on the method provided by the embodiment of the application, when the method is used for model registration, firstly, target data of an initial model are acquired; determining an effective area of the initial model according to the target data; determining a bite abrasion composition of the initial model based on an effective area of the initial model; determining the occlusion movement process of the upper jaw and the lower jaw in the initial model according to the occlusion abrasion composition to obtain occlusion movement data; and executing registration operation on the initial model according to the occlusion motion data. According to the method, the effective area of the three-dimensional digital model is automatically extracted to determine the occlusion abrasion composition of the patient, and the model is accurately registered according to the occlusion abrasion composition, so that the whole process does not need to be manually participated, errors caused by manual operation are effectively avoided, and the registration effect and the registration efficiency are improved.
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below obviously show only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an oral occlusion registration method according to an embodiment of the present application;
fig. 2 is a schematic structural view of an oral occlusion registration device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that, provided they do not conflict, the various features in the embodiments of the present application may be combined with each other, and such combinations fall within the scope of protection of the present application. In addition, although functional blocks are divided in the device diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order from the block division or the flowchart. Moreover, the words "first", "second", "third" and the like used herein do not limit the data or the order of execution, but merely distinguish identical or similar items having substantially the same function and effect.
An embodiment of the present application proposes an oral occlusion registration method, referring to fig. 1, fig. 1 is a schematic flow chart of an oral occlusion registration method according to an embodiment of the present application, where the method includes:
step S10, acquiring target data of an initial model;
it should be noted that, the execution body of the embodiment is a computer device, where all relevant software required for running the oral occlusion registration program is stored in the computer device, and the computer device may implement the oral occlusion registration method provided in the embodiment of the present application by executing the oral occlusion registration program.
The initial model is a three-dimensional oral model that needs to undergo data registration; it is a three-dimensional digital model built by scanning image data inside the patient's oral cavity. For example, digital data of the oral cavity is acquired with an oral scanning device (a 3D scanner or a CT scanner), and that data is then processed by medical imaging software to create the initial model.
It is readily understood that, in the ideal case, the initial model accurately reflects the three-dimensional information inside the patient's oral cavity and thereby characterizes the patient's occlusal relationship. However, owing to the complexity of real application scenarios, such as scan deviation or abnormal data, the three-dimensional oral model created by scanning often differs from the patient's actual oral condition; concretely, the occlusal relationship reflected by the model does not correspond to the patient's actual occlusal relationship.
In some embodiments, the target data of the initial model primarily comprises digitized data characterizing geometric and structural features of the patient's oral cavity, which describe specific relationships inside the mouth such as position, shape, size and surface features. As a possible implementation, the initial model is represented as a mesh model, which consists of a set of vertices, edges connecting the vertices, and faces bounded by the edges; each vertex corresponds to a set of three-dimensional coordinates, and together the vertex coordinates determine the faces of the initial model.
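To make the mesh representation concrete, the sketch below builds a toy triangle mesh as a vertex array plus face index triples and derives the edge list from the faces. The array layout and the `face_edges` helper are illustrative assumptions, not part of the patent.

```python
import numpy as np

# Hypothetical minimal mesh: vertices as an (N, 3) array of 3-D coordinates,
# faces as integer index triples into the vertex array (triangle mesh).
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.5],
])
faces = np.array([[0, 1, 2], [1, 3, 2]])

def face_edges(faces):
    """Collect the undirected edges shared by the faces of the mesh."""
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((int(min(u, v)), int(max(u, v))))
    return sorted(edges)

edges = face_edges(faces)   # the two triangles share the edge (1, 2)
```

The shared edge appears once, so two triangles yield five unique edges rather than six.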
Step S20, determining an effective area of the initial model according to the target data;
the effective area of the initial model refers to an area related to main features of the model in the three-dimensional oral cavity model, and correspondingly, an area unrelated to the main features of the model is an ineffective area of the initial model. For example, in the present embodiment, the effective area of the model mainly refers to an area where the three-dimensional oral cavity model can reflect the actual situation in the oral cavity of the patient, such as the geometric features and surface details in the oral cavity of the patient, and the ineffective area refers to an area where the actual geometric features or surface structures are not reflected in the three-dimensional oral cavity model, which may include noise generated when the oral cavity of the patient is scanned, a base of the oral cavity scanner, a shielding area during the scanning process, and so on. It is easy to understand that the invalid region of the model has no meaning for medical analysis and may generate a certain interference to the normal processing process of the model, so that the valid region of the model needs to be deleted, so that only the valid region of the model is reserved, and the processing of the subsequent steps is facilitated.
Step S30, determining an occlusal wear pattern of the initial model based on the effective area of the initial model;
the abrasion composition of the initial model refers to image information of an occlusion abrasion surface which can be represented by the initial model, the occlusion abrasion surface can represent an occlusion abrasion area in the oral cavity of a patient, namely, the occlusion surface of the oral cavity of the patient generates a plane due to mutual abrasion between occlusion surfaces of teeth of the upper jaw and the lower jaw, which are mutually collided, in a long-time chewing process of the oral cavity of the patient, and the occlusion surface refers to surfaces of teeth of the patient, which are mutually contacted in chewing and occluding processes.
In this embodiment, after the effective area of the initial model is determined, all the feature points related to the occlusal wear surface of the oral cavity of the patient are determined in the effective area of the initial model, and three-dimensional coordinate data of all the feature points are packed, so that the wear pattern of the initial model is determined.
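The patent does not give a formula for identifying a wear surface, but one plausible sketch is to fit a plane to candidate feature points and use the residual as a flatness score, since a worn facet is approximately planar. The SVD-based `facet_flatness` helper below is an illustrative assumption:

```python
import numpy as np

def facet_flatness(points):
    """Fit a plane to candidate facet points via SVD of the centred
    coordinates; the smallest singular value gives the out-of-plane
    residual, normalised here to an RMS distance."""
    centred = points - points.mean(axis=0)
    s = np.linalg.svd(centred, compute_uv=False)
    return s[-1] / np.sqrt(len(points))

# A perfectly flat 3x3 patch versus the same patch with alternating bumps.
flat = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
bumpy = flat + np.array([[0.0, 0.0, 0.5 * (i % 2)] for i in range(9)])
```

A low score marks a candidate wear facet; a threshold on this score could select the feature points to pack into the wear pattern.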
Step S40, determining the occlusal movement of the upper and lower jaws in the initial model according to the wear pattern to obtain occlusal motion data;
Step S50, performing a registration operation on the initial model according to the occlusal motion data.
The registration operation performed on the initial model refers to the coordinate alignment of different feature points in the initial model according to a determined registration relationship. In this embodiment, a suitable target position is determined in the initial model mainly according to the wear pattern, and each feature point on the wear surfaces of the initial model is aligned in three-dimensional coordinates based on that target position, so that the initial model can accurately reflect the occlusal relationship inside the patient's oral cavity.
Further, in the above embodiment, the target data includes three-dimensional model data of the initial model, and the determining, according to the target data, an effective area of the initial model includes: performing point cloud meshing on the initial model according to the three-dimensional model data to obtain a target model; determining a non-working area of the target model; and deleting the non-working area of the target model to obtain the effective area of the initial model.
It should be noted that the target model is a mesh model obtained through point cloud meshing. In this embodiment, the purpose of meshing the initial model is to convert its discrete point cloud data into continuous mesh data, so that the subsequent steps can analyze and process the model and determine its features more reliably.
In some embodiments, performing point cloud meshing on the initial model includes: preprocessing the three-dimensional model data, for example by filtering, denoising and smoothing; determining the mesh type according to the preprocessing result, common mesh types including triangular, tetrahedral and hexahedral meshes; and converting the point cloud data of the three-dimensional model into a mesh model with a preset mesh-generation algorithm according to the chosen type. As a preferred implementation, the mesh type of the target model in this embodiment may be a triangular mesh, and the mesh-generation algorithm may include Delaunay triangulation, the Bowyer-Watson algorithm, and the like.
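As a hedged sketch of the meshing step, the snippet below triangulates a point cloud with SciPy's Delaunay implementation on a 2-D projection, a common "2.5-D" shortcut. The projection choice and the SciPy dependency are assumptions; a production pipeline would use a true surface-reconstruction method.

```python
import numpy as np
from scipy.spatial import Delaunay  # assumed dependency

# 2.5-D shortcut: triangulate the x-y projection of the cloud, then the
# resulting triangles index directly into the original 3-D points.
rng = np.random.default_rng(1)
cloud = rng.uniform(size=(50, 3))     # stand-in for a scanned point cloud
tri = Delaunay(cloud[:, :2])          # Delaunay triangulation of the projection
faces = tri.simplices                 # (n_faces, 3) vertex index triples
```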
In some embodiments, a selection rule for the non-working area of the three-dimensional oral model may be preset. At run time, the regions matching the rule are identified as the non-working area by computer vision techniques, the non-working area is then deleted from the initial model with the UP3D model editor or similar three-dimensional modelling software, and the remaining region is the effective area of the model.
Further, in the above embodiment, the effective area includes a tooth area and a gum area, and the determining the wear pattern of the initial model based on the effective area includes: determining the tooth data and the gum data according to the effective area; performing a tooth-gum separation operation on the effective area according to the tooth data and the gum data, and determining characteristic information of each tooth; extracting an occlusal wear region from the effective area according to the characteristic information of each tooth; and determining the wear pattern based on the occlusal wear region.
Specifically, performing the tooth-gum separation operation on the effective area according to the tooth data and the gum data and determining the characteristic information of each tooth includes: determining a target image corresponding to the effective area; inputting the target image into a preset semantic segmentation model, so that the model performs semantic segmentation on the target image and outputs a target image of each tooth; and determining the characteristic information of each tooth according to the target image of each tooth.
The semantic segmentation model may specifically be PSPNet (Pyramid Scene Parsing Network), a deep learning model for scene parsing that aggregates context information from regions at different scales through a pyramid pooling module, improving the accuracy of scene parsing. A typical PSPNet includes: an input layer, which accepts an image of size H×W, where H and W are the image height and width; an initial convolution stage, which convolves the input image to extract image features; a pyramid pooling module (Pyramid Pooling Module) comprising several pooling layers, each pooling the feature map at a different scale, with the pooling results stacked along the depth direction to form a pyramid structure so that the model can capture context at multiple scales; and a global context pooling module (Global Context Pyramid Module), which, on top of the pyramid pooling module, fuses the result of each pooling layer with the result of a global average pooling layer, effectively strengthening the model's ability to mine global context information.
In this embodiment, after the target image is determined, it is first fed into the input layer of PSPNet, and the feature map of the last convolution layer is obtained through the initial convolution stage; the pyramid pooling module then computes representations of the different sub-regions, which are combined through upsampling and a concatenation layer into the final feature representation containing both local and global context information; finally PSPNet outputs the semantic segmentation result, i.e. the target image of each tooth, which is input to a convolution layer to obtain the characteristic information of each tooth.
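The pyramid pooling idea can be illustrated without a deep-learning framework: average-pool one feature map at several grid sizes and concatenate the results. The NumPy sketch below shows only the pooling arithmetic, omitting PSPNet's learned convolutions and upsampling; the bin sizes are illustrative.

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Average-pool a square feature map at several grid sizes and
    concatenate the flattened results (the core of a pyramid pooling
    module, minus the learned 1x1 convolutions)."""
    h, w = feat.shape
    out = []
    for b in bins:
        for i in range(b):
            for j in range(b):
                cell = feat[i * h // b:(i + 1) * h // b,
                            j * w // b:(j + 1) * w // b]
                out.append(cell.mean())
    return np.array(out)

fmap = np.arange(16, dtype=float).reshape(4, 4)
desc = pyramid_pool(fmap)   # 1 global + 4 quadrant + 16 cell averages
```

The 1×1 bin is the global average (the coarsest context), while the 4×4 bin preserves local detail; stacking them gives the multi-scale descriptor.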
More specifically, before the target image is input into the preset semantic segmentation model, the method further includes: training a network according to a preset training set to obtain the semantic segmentation model, wherein the training set comprises a sample set and a label set corresponding to the sample set, and the network comprises a backbone network, a pyramid pooling module and a decoding network; the backbone network performs feature extraction on an input sample, the pyramid pooling module pools the feature extraction result, and the decoding network outputs the semantic segmentation result.
In some embodiments, the data set is in VOC format: the sample set contains a number of original images of patients' oral cavities, and the label set contains the label corresponding to each original image in the sample set. As one possible implementation, each original image is an ordinary RGB image, and its corresponding label is a grayscale or 8-bit indexed-color image. Specifically, the original image has shape [height, width, 3] and the label has shape [height, width]; in the label, the content of each pixel is a number, e.g. 0, 1, 2, 3, 4, 5, ..., indicating the category to which that pixel belongs.
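The shape convention above can be made concrete with a miniature, entirely hypothetical sample pair: a small RGB image of shape [height, width, 3] and its per-pixel integer label map of shape [height, width], where each label value is a category index (here 0 is assumed to be background/gum and 1, 2 are tooth classes).

```python
import numpy as np

# Hypothetical miniature sample: a 4x4 RGB image and its per-pixel label map.
height, width = 4, 4
image = np.zeros((height, width, 3), dtype=np.uint8)   # shape [height, width, 3]
label = np.array([[0, 0, 1, 1],                        # 0 = background/gum (assumed)
                  [0, 1, 1, 1],                        # 1, 2, ... = tooth categories
                  [0, 1, 2, 2],
                  [0, 0, 2, 2]], dtype=np.uint8)       # shape [height, width]

# Each pixel value in the label is the category index of that pixel.
classes_present = np.unique(label)
```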
The semantic segmentation model classifies every pixel of the original target image; that is, the predicted probability that each pixel belongs to each category is compared against the label, which is what allows the network to be trained.
In some embodiments, the loss functions used for training the target model include Cross Entropy Loss (the cross-entropy loss function) and Dice Loss (the Dice-coefficient loss function). The cross-entropy loss is used when the semantic segmentation model classifies pixels with Softmax, while Dice Loss uses the segmentation evaluation metric directly as the loss; the Dice coefficient is a set-similarity measure, commonly used to compute the similarity of two samples, with values in the range [0, 1].
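A minimal NumPy sketch of the Dice coefficient and Dice Loss mentioned above, using the standard formulation 2|A∩B| / (|A| + |B|); the patent does not specify which Dice variant is used, so this particular form (with a small epsilon for numerical stability) is an assumption.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient 2*|A∩B| / (|A| + |B|) for soft or binary masks; range [0, 1]."""
    pred = pred.astype(float).ravel()
    target = target.astype(float).ravel()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    """Dice Loss uses the segmentation metric directly as a loss: 1 - Dice."""
    return 1.0 - dice_coefficient(pred, target)
```

A perfect overlap gives a coefficient of 1 (loss 0); fully disjoint masks give a coefficient near 0 (loss near 1).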
Further, in the above embodiment, the determining, according to the bite abrasion composition, an occlusal movement process of the upper and lower jaws in the initial model to obtain occlusion motion data includes: determining the inclination angle of each wear surface according to the bite abrasion composition; and performing a bite simulation operation according to the inclination angles of the wear surfaces to determine the occlusal movement process of the upper and lower jaws in the initial model and obtain the occlusion motion data. The performing a registration operation on the initial model according to the occlusion motion data includes: calculating, from the occlusion motion data, the maximum wear-surface contact area during the movement of the upper and lower jaws and determining a target registration position; and performing the registration operation on the initial model according to the target registration position.
As one possible implementation, the movement of the upper and lower jaws in the initial model is inferred in reverse from the bite abrasion composition. From this inter-jaw movement, the occlusion motion data of the upper and lower jaws during the bite can be computed, i.e. the maximum contact area of each wear surface of the upper and lower jaws during the bite movement, from which the optimal registration position, i.e. the target registration position, can be determined.
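As a toy illustration of the "maximum contact area" criterion (the patent does not disclose its actual search procedure, so everything here, including the facet representation as (centroid, area) pairs and the discrete candidate offsets, is a hypothetical sketch), one can score each candidate lower-jaw offset by the total area of wear-facet pairs brought into near-contact, and pick the offset that maximizes it:

```python
import numpy as np

def contact_area(upper_facets, lower_facets, offset, tol=0.1):
    """Total area of wear-facet pairs whose centroids come within `tol`
    of each other after shifting the lower jaw by `offset`.
    Facets are hypothetical (centroid, area) pairs."""
    total = 0.0
    for cu, au in upper_facets:
        for cl, al in lower_facets:
            if np.linalg.norm(cu - (cl + offset)) < tol:
                total += min(au, al)  # overlapping area bounded by the smaller facet
    return total

def best_registration(upper_facets, lower_facets, candidates):
    """Pick the candidate offset giving the maximum wear-facet contact area."""
    return max(candidates, key=lambda off: contact_area(upper_facets, lower_facets, off))
```

A real implementation would score mesh-to-mesh proximity over a continuous rigid transform rather than centroid distances over discrete offsets; the sketch only shows the maximize-contact selection rule.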
More specifically, the performing a bite simulation operation according to the inclination angles of the wear surfaces to determine the occlusal movement process of the upper and lower jaws in the initial model and obtain the occlusion motion data includes: determining limit values of the upper- and lower-jaw collision regions according to the inclination angles of the wear surfaces; determining the bite features and wear regions of the bite simulation according to those limit values; and moving the teeth laterally according to the bite features and wear regions to obtain the occlusion motion data.
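The inclination angle that drives the steps above can be illustrated with a small geometric sketch (an assumption for illustration, not the patent's method): treat a wear facet as a triangle, compute its unit normal via a cross product, and measure the angle between that normal and the vertical axis, taking the occlusal plane to be horizontal with z pointing up.

```python
import numpy as np

def facet_normal(p0, p1, p2):
    """Unit normal of a triangular wear facet given its three vertices."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def inclination_deg(p0, p1, p2, up=np.array([0.0, 0.0, 1.0])):
    """Inclination of the facet relative to the (assumed horizontal) occlusal
    plane: the angle between the facet normal and the vertical axis."""
    n = facet_normal(p0, p1, p2)
    cos_a = abs(np.dot(n, up))  # abs() ignores the normal's orientation
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

A horizontal facet yields 0 degrees; a facet raised by one unit over a one-unit run yields 45 degrees.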
It should be noted that the steps in the foregoing embodiments do not necessarily have a fixed order; as those skilled in the art will understand from this description, in different embodiments the steps may be executed in different orders, in parallel, interleaved, and so on.
As another aspect of the embodiments of the present application, an oral occlusion registration apparatus is provided. The oral occlusion registration apparatus may be a software module comprising several instructions stored in a memory; the processor may access the memory and invoke the instructions to carry out the oral occlusion registration method set forth in the foregoing embodiments.
In some embodiments, the oral occlusion registration apparatus may also be built from hardware; for example, it may be built from one or more chips that cooperate to perform the oral occlusion registration method of the above embodiments. As another example, the oral occlusion registration apparatus may be built from various types of logic devices, such as general-purpose processors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), microcontrollers, ARM (Acorn RISC Machine) processors or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, or any combination of these components.
Specifically, referring to fig. 2, fig. 2 is a schematic structural diagram of an oral occlusion registration device, and as shown in the figure, the oral occlusion registration device includes:
a data acquisition module 210, configured to acquire target data of the initial model;
a data determining module 220, configured to determine an effective area of the initial model according to the target data;
a composition determination module 230, configured to determine a bite abrasion composition of the initial model based on the effective area of the target model;
the information calculation module 240, configured to determine occlusion motion data of the upper and lower jaws in the initial model according to the bite abrasion composition;
a registration module 250 is configured to perform a registration operation on the initial model according to the occlusion motion data.
In a possible implementation, the target data includes three-dimensional model data of the initial model, and when determining the effective area of the initial model according to the target data, the data determining module 220 is specifically configured to: perform point cloud gridding processing on the initial model according to the three-dimensional model data to obtain a target model; determine the non-working area of the target model; and delete the non-working area of the target model to obtain the effective area of the initial model.
In one possible implementation, the effective area includes a tooth area and a gum area, and when determining the bite abrasion composition of the initial model based on the effective area of the target model, the composition determination module 230 is specifically configured to: determine tooth data and gum data according to the effective area; perform a gingival separation operation on the effective area according to the tooth data and gum data to determine the characteristic information of each tooth; extract a bite abrasion region from the effective area according to the characteristic information of each tooth; and determine the bite abrasion composition based on the bite abrasion region.
In one possible implementation, when determining the occlusal movement process of the upper and lower jaws in the initial model according to the bite abrasion composition to obtain occlusion motion data, the information calculation module 240 is specifically configured to: determine the inclination angle of each wear surface according to the bite abrasion composition; and perform a bite simulation operation according to the inclination angles of the wear surfaces to determine the occlusal movement process of the upper and lower jaws in the initial model and obtain the occlusion motion data. When performing the registration operation on the initial model according to the occlusion motion data, the registration module 250 is specifically configured to: calculate, from the occlusion motion data, the maximum wear-surface contact area during the movement of the upper and lower jaws and determine a target registration position; and perform the registration operation on the initial model according to the target registration position.
In one possible implementation, when performing the gingival separation operation on the effective area according to the tooth data and gum data to determine the characteristic information of each tooth, the composition determination module 230 is specifically configured to: determine a target image corresponding to the effective area; input the target image into a preset semantic segmentation model so that the model performs semantic segmentation on it and outputs a target image for each tooth; and determine the characteristic information of each tooth from the target image of each tooth.
In one possible implementation, before inputting the target image into the preset neural network, the composition determination module 230 is further configured to: train a target model on a preset training set to obtain the semantic segmentation model, where the training set includes a sample set and a corresponding label set, and the target model includes a backbone network, a pyramid pooling module, and a decoding network; the backbone network performs feature extraction on input samples, the pyramid pooling module pools the feature extraction results, and the decoding network outputs the semantic segmentation result.
It should be noted that the above oral occlusion registration apparatus can execute the oral occlusion registration method provided in the embodiments of the present application, and has the functional modules and beneficial effects corresponding to that method. Technical details not described in the apparatus embodiments may be found in the oral occlusion registration method provided in the embodiments of the present application.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device includes one or more processors 41 and memory 42. The memory 42 is connected to the one or more processors 41, for example via a bus to the processor 41.
The processor 41 is configured to support the computer device to perform the respective functions of the methods in the method embodiments described above. The processor 41 may be a central processing unit (central processing unit, CPU), a network processor (network processor, NP), a hardware chip or any combination thereof. The hardware chip may be an application specific integrated circuit (application specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
The memory 42 is used for storing program codes and the like. Memory 42 may include Volatile Memory (VM), such as random access memory (random access memory, RAM); the memory may also include a nonvolatile memory (NVM), such as read-only memory (ROM), flash memory (flash memory), hard disk (HDD) or Solid State Drive (SSD); the memory may also comprise a combination of the above types of memories.
The memory 42 may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the oral occlusion registration method in the embodiments of the present application. By running the non-volatile software programs, instructions and modules stored in the memory, the processor executes the functional applications and data processing of the oral occlusion registration method and apparatus, that is, it implements the functions of the modules and units of the method and apparatus embodiments described above.
The memory 42 may include a program storage area, which may store the operating system and the application programs required for at least one function, and a data storage area, which may store data created through use of the oral occlusion registration apparatus, and the like. In some embodiments, the memory optionally includes memory located remotely from the processor, connected to the oral occlusion registration apparatus via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 42, which when executed by the one or more processors 41, perform the method of oral occlusion registration in any of the method embodiments described above, e.g., perform the method steps described in the method embodiments described above, implementing the functions of the modules described in the apparatus embodiments described above.
The present application also provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of the previous embodiments.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program stored in a computer-readable storage medium which, when executed, may include the flows of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure describes only preferred embodiments of the present application and is not intended to limit its scope; equivalent changes made within the scope of the claims of the present application still fall within the scope of the present application.

Claims (10)

1. A method of dental occlusion registration, the method comprising:
acquiring target data of an initial model;
determining an effective area of the initial model according to the target data;
determining a bite abrasion composition of the initial model based on an effective area of the initial model;
determining the occlusion movement process of the upper jaw and the lower jaw in the initial model according to the occlusion abrasion composition to obtain occlusion movement data;
and executing registration operation on the initial model according to the occlusion motion data.
2. The method of claim 1, wherein the target data comprises three-dimensional model data of the initial model, and wherein the determining the active area of the initial model from the target data comprises:
performing point cloud gridding treatment on the initial model according to the three-dimensional model data to obtain a target model;
determining a non-working area of the target model;
and deleting the non-working area of the target model to obtain an effective area of the initial model.
3. The method of claim 1, wherein the effective area comprises a tooth area and a gum area, and wherein the determining the bite abrasion composition of the initial model based on the effective area of the initial model comprises:
determining tooth data and gum data according to the effective area;
performing a gingival separation operation on the effective area according to the tooth data and the gingival data, and determining characteristic information of each tooth;
extracting a bite abrasion region from the effective region according to the characteristic information of each tooth;
and determining the bite abrasion composition based on the bite abrasion region.
4. The method according to claim 1, wherein the bite abrasion pattern corresponds to a plurality of abrasion surfaces, and the determining a bite movement process of the upper and lower jaws in the initial model according to the bite abrasion pattern, to obtain bite movement data, includes:
determining the inclination angle of the abrasion surface according to the occlusion abrasion composition;
according to the inclination angle of the wearing surface, carrying out occlusion simulation operation, and determining the occlusion movement process of the upper jaw and the lower jaw in the initial model to obtain occlusion movement data;
the performing a registration operation on the initial model according to the bite motion data includes:
calculating the maximum area contacted by the wearing surface in the movement process of the upper jaw and the lower jaw according to the occlusion movement data, and determining a target registration position;
and executing registration operation on the initial model according to the target registration position.
5. The method according to claim 4, wherein the performing bite simulation operation according to the inclination angle of the wearing surface, determining a bite movement process of the upper and lower jaws in the initial model, and obtaining bite movement data includes:
determining limit values of the upper and lower jaw collision areas according to the inclination angle of the occlusion wearing surface;
determining the occlusion characteristics and the abrasion areas of the occlusion simulation composition according to the limit values;
and laterally moving the teeth according to the occlusion characteristics and the abrasion region to obtain the occlusion motion data.
6. A method according to claim 3, wherein said performing a gingival separation operation on said active region based on said tooth data and gingival data, determining characteristic information for each tooth, comprises:
determining a target image corresponding to the effective area;
inputting the target image into a preset semantic segmentation model, so that the semantic segmentation model performs semantic segmentation processing on the target image, and outputting the target image of each tooth;
and determining characteristic information of each tooth according to the target image of each tooth.
7. The method of claim 6, further comprising, prior to inputting the target image into a pre-set neural network:
training a target model according to a preset training set to obtain the semantic segmentation model, wherein the data set comprises a sample set and a label set corresponding to the sample set, the target model comprises a main network, a pooling module and a decoding network, the main network is used for carrying out feature extraction processing on an input sample, the pyramid pooling module is used for carrying out pooling processing on a feature extraction result, and the decoding module is used for outputting a semantic segmentation result.
8. An oral bite registration device, comprising:
the data acquisition module is used for acquiring target data of the initial model;
the data determining module is used for determining an effective area of the initial model according to the target data;
a composition determining module for determining a wear composition of the initial model based on an effective area of the target model;
the information calculation module is used for determining occlusion motion data of the upper jaw and the lower jaw in the initial model according to the occlusion abrasion composition;
and the registration module is used for executing registration operation on the initial model according to the occlusion motion data.
9. A computer device comprising a memory and a processor, the memory being connected to the processor, the processor being for executing one or more computer programs stored in the memory, the processor, when executing the one or more computer programs, causing the computer device to implement the method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-7.
CN202311665171.1A 2023-12-05 2023-12-05 Method, device, equipment and storage medium for registration of occlusion of oral cavity Pending CN117442370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311665171.1A CN117442370A (en) 2023-12-05 2023-12-05 Method, device, equipment and storage medium for registration of occlusion of oral cavity

Publications (1)

Publication Number Publication Date
CN117442370A true CN117442370A (en) 2024-01-26

Family

ID=89583881

Country Status (1)

Country Link
CN (1) CN117442370A (en)

Similar Documents

Publication Publication Date Title
EP3620130A1 (en) Automated orthodontic treatment planning using deep learning
KR102273438B1 (en) Apparatus and method for automatic registration of oral scan data and computed tomography image using crown segmentation of oral scan data
EP3591616A1 (en) Automated determination of a canonical pose of a 3d dental structure and superimposition of 3d dental structures using deep learning
CA3114650C (en) Method and apparatus for generating three-dimensional model, device, and storage medium
JP2007068992A5 (en)
CN112515787B (en) Three-dimensional dental data analysis method
CN110264573B (en) Three-dimensional reconstruction method and device based on structured light, terminal equipment and storage medium
CN110223376B (en) Three-dimensional particle reconstruction method based on single accumulated particle material image
CN110176064B (en) Automatic identification method for main body object of photogrammetric generation three-dimensional model
JP2022549281A (en) Method, system and computer readable storage medium for registering intraoral measurements
JP7078642B2 (en) How to get 3D model data of multiple components of an object
CN113344950A (en) CBCT image tooth segmentation method combining deep learning with point cloud semantics
US20240070882A1 (en) Method and device for matching three-dimensional oral scan data via deep-learning based 3d feature detection
CN115471663A (en) Three-stage dental crown segmentation method, device, terminal and medium based on deep learning
Ben-Hamadou et al. Teeth3ds: a benchmark for teeth segmentation and labeling from intra-oral 3d scans
KR102255592B1 (en) method of processing dental CT images for improving precision of margin line extracted therefrom
CN117442370A (en) Method, device, equipment and storage medium for registration of occlusion of oral cavity
CN116485809B (en) Tooth example segmentation method and system based on self-attention and receptive field adjustment
CN114445309A (en) Defect image generation method for depth learning and system for defect image generation method for depth learning
KR102496449B1 (en) Automated method for tooth segmentation of three dimensional scan data using tooth boundary curve and computer readable medium having program for performing the method
Dhar et al. Automatic tracing of mandibular canal pathways using deep learning
CN113140016B (en) Metal artifact correction method and system for CBCT equipment
CN116524118B (en) Multi-mode rendering method based on three-dimensional tooth CBCT data and oral cavity scanning model
EP4307229A1 (en) Method and system for tooth pose estimation
CN117710493A (en) Tooth colorimetric method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination