CN114898046A - Model generation method, related device, equipment and storage medium


Info

Publication number
CN114898046A
Authority
CN
China
Prior art keywords
caries
tooth
image data
data
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210585198.9A
Other languages
Chinese (zh)
Inventor
张凯
陈兴春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202210585198.9A
Publication of CN114898046A
Legal status: Withdrawn

Classifications

    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/0012: Image analysis; inspection of images; biomedical image inspection
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/64: Scenes; scene-specific elements; type of objects; three-dimensional objects
    • G16H 10/60: ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
    • G06T 2207/10004: Image acquisition modality; still image; photographic image
    • G06T 2207/10012: Image acquisition modality; stereo images
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30036: Subject of image; biomedical image processing; dental; teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The application discloses a model generation method, a related apparatus, a device and a storage medium. The model generation method includes: acquiring first data collected from a defective object to be processed before cleaning, and acquiring second data collected from the defective object to be processed after cleaning; and generating, based on the first data and the second data, a three-dimensional model of the portion of the defective object to be filled. The above scheme can help improve the efficiency of defect filling.

Description

Model generation method, related device, equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular to a model generation method and a related apparatus, device, and storage medium.
Background
Due to factors such as daily use and natural aging, damage to human organs, man-made objects and devices, road surfaces, and the like is inevitable. Taking human organs as an example, the incidence of dental caries keeps rising for many reasons, such as excessive sugar intake and poor tooth-use habits.
At present, defect filling is generally a complex operation. Taking the filling of dental caries as an example, it can only be completed through complicated treatments such as caries removal, layered resin filling, indirect pulp capping with light-cured calcium hydroxide, and grinding and polishing adjustments with instruments such as high-speed turbine dental drills, dental curettes, hooked forceps, and residual-root forceps, which greatly affects efficiency. In view of this, how to improve the efficiency of defect filling has become an urgent problem to be solved.
Disclosure of Invention
The application provides a model generation method, a related device, equipment and a storage medium.
A first aspect of the present application provides a model generation method, including: acquiring first data collected from a defective object to be processed before cleaning, and acquiring second data collected from the defective object to be processed after cleaning; and generating, based on the first data and the second data, a three-dimensional model of the portion of the defective object to be filled.
Therefore, from the first data and second data collected before and after the defective object to be processed is cleaned, a three-dimensional model of the portion to be filled can be obtained by modeling, which can greatly reduce filling operations and thus improve the efficiency of defect filling.
Wherein acquiring the first data collected from the defective object to be processed before cleaning includes: acquiring image data of each object to be processed collected before cleaning; determining the defective object to be processed based on the image data; and using the image data of the defective object to be processed as the first data.
Therefore, by acquiring image data of each object to be processed collected before cleaning, determining the defective object based on that image data, and using the image data of the defective object as the first data, image recognition can greatly reduce the inspection operations on the objects to be processed over the whole filling process, further improving efficiency.
Wherein the object to be processed is a tooth, the defective object to be processed is a carious tooth, and determining the defective object to be processed based on the image data includes: obtaining a caries confidence for each tooth based on the image data; and determining the carious teeth among the teeth based on the caries confidences.
Therefore, by obtaining a caries confidence for each tooth from the image data and determining caries based on those confidences, the accuracy of caries determination can be improved.
Wherein the image data includes visible light image data and depth image data, and obtaining the caries confidence of each tooth based on the image data includes: performing recognition based on the visible light image data to obtain a first caries confidence for each tooth; performing recognition based on the depth image data to obtain a second caries confidence for each tooth; and combining the first caries confidence and the second caries confidence to obtain the caries confidence of each tooth.
Therefore, by recognizing caries from the two dimensions of visible light and depth and combining the two resulting confidences, the accuracy of caries determination can be further improved.
Wherein determining caries among the teeth based on the caries confidences includes: determining whether the caries confidence is above a first threshold; in the case that the caries confidence is greater than the first threshold, determining that the tooth is carious; and in the case that the caries confidence is not greater than the first threshold but greater than a second threshold, displaying the image data of the tooth and receiving a recognition result input based on the image data, wherein the first threshold is greater than the second threshold and the recognition result includes whether the tooth is carious.
Therefore, when the caries confidence of a tooth is above the first threshold, the tooth is determined to be carious without manual confirmation by the user; when the confidence falls between the two thresholds, the image data of the tooth is displayed and a recognition result is received, so that ambiguous cases are resolved through a simple interaction with the user. This greatly streamlines the workflow of confirming caries while improving its accuracy; that is, the efficiency and accuracy of caries confirmation can be improved at the same time.
Wherein the object to be treated is a tooth, the defective object to be treated is a carious tooth, and the model generation method further includes: saving the image data of teeth not determined to be carious to the dental archive of the object to which the teeth belong.
Therefore, saving the image data of teeth not determined to be carious to the dental archive improves the convenience of subsequent follow-up visits.
Wherein, after generating the three-dimensional model of the portion to be filled based on the first data and the second data, the method further includes: in response to the portion to be filled having been filled, acquiring third data collected after the caries filling is completed, and saving the third data to the dental archive of the object to which the carious tooth belongs.
Therefore, after the three-dimensional model is generated, the third data collected after filling is saved to the dental archive, improving the convenience of subsequent follow-up visits.
A second aspect of the present application provides a model generation apparatus, including a data acquisition module and a model generation module. The data acquisition module is configured to acquire first data collected from a defective object to be processed before cleaning and second data collected from the defective object after cleaning; the model generation module is configured to generate, based on the first data and the second data, a three-dimensional model of the portion of the defective object to be filled.
A third aspect of the present application provides a model generation device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the model generation method of the first aspect.
Therefore, a three-dimensional model of the portion to be filled can be obtained by modeling, which greatly reduces defect filling operations and thus improves the efficiency of defect filling.
The model generation device further comprises an image acquisition device, the image acquisition device is coupled to the processor, and the image acquisition device is used for acquiring image data of the object to be processed.
Wherein the image capture device includes a visible light camera and a depth camera.
The model generation device further comprises a three-dimensional printing device, the three-dimensional printing device is coupled to the processor, and the three-dimensional printing device is used for printing on the basis of the three-dimensional model of the part to be filled of the defected object to be processed.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the model generation method of the first aspect described above.
According to the above schemes, first data collected before the defective object to be processed is cleaned and second data collected after cleaning are acquired, and a three-dimensional model of the portion to be filled is generated based on the first data and the second data. Therefore, both the efficiency of defect filling and the user experience can be improved.
Drawings
FIG. 1 is a schematic flow chart diagram of an embodiment of a model generation method of the present application;
FIG. 2 is a schematic flow chart of an embodiment for determining caries in individual teeth;
FIG. 3 is a schematic flow chart diagram of another embodiment of a model generation method of the present application;
FIG. 4 is a block diagram of an embodiment of a model generation apparatus according to the present application;
FIG. 5 is a block diagram of an embodiment of a model generation device according to the present application;
FIG. 6 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a model generation method according to the present application.
Specifically, the method may include the steps of:
step S11: acquiring first data acquired by the defected object to be processed before cleaning, and acquiring second data acquired by the defected object to be processed after cleaning.
In the embodiments of the disclosure, the object to be processed may be set according to the specific application scenario. Taking the object to be processed as a road surface, for example, a defective object to be processed is a road surface with potholes, cracks, and the like; taking the object to be processed as a vessel, a defective object to be processed is a vessel with chips, cracks, and the like; taking the object to be processed as a tooth, a defective object to be processed is a tooth with cavities, fissures, and the like, which is collectively referred to as caries in the embodiments disclosed in the present application. Other scenarios can be deduced by analogy and are not enumerated here.
In one implementation scenario, for example where the object to be treated is a tooth, the caries cleaning operation may include, but is not limited to, removing necrotic tissue and the like. For the detailed meaning of necrotic tissue, refer to the medical literature on dental caries, which is not repeated here.
In one implementation scenario, in order to determine the defective object to be processed before cleaning, image data of each object to be processed collected before cleaning may be acquired, the defective object may be determined based on the image data, and the image data of the defective object may be used as the first data. In this way, image recognition can greatly reduce the inspection operations on the objects to be processed over the whole filling process, further improving efficiency.
In a specific implementation scenario, taking the object to be processed as a tooth, image data collected before the teeth are cleaned may be acquired, the carious teeth may be determined based on the image data, and the image data of the caries may be used as the first data. Alternatively, in a vessel repair scenario, image data of the vessel before cleaning (e.g., image data of its outer and inner surfaces) may be acquired, the defective vessel may be determined based on the image data, and its image data may be used as the first data. Other cases can be deduced by analogy and are not enumerated here.
In a specific implementation scenario, the image data may include visible light image data, such as RGB image data or grayscale image data, which is not limited herein. Still taking the object to be processed as a tooth, once the visible light image data is obtained, recognition may be performed on it to determine the carious teeth.
For example, in order to improve recognition efficiency, a caries identification model may be trained in advance; the caries identification model may include, but is not limited to, a convolutional neural network, and its network structure is not limited here. On this basis, the visible light image data can be recognized by the caries identification model to obtain a caries recognition result, which includes whether each tooth is carious. Specifically, several sample visible light images may be collected in advance and annotated with sample marks for the respective teeth, each sample mark indicating whether the tooth is actually carious (e.g., a sample mark of 1 may indicate a carious tooth, and a sample mark of 0 a sound tooth). The caries identification model can then recognize the sample visible light images to obtain predicted marks indicating whether each tooth is predicted to be carious, and the network parameters of the model can be adjusted based on the difference between the sample marks and the predicted marks. The difference may be measured with a loss function such as cross-entropy loss, and the parameters may be adjusted with an optimization method such as gradient descent, which are not described in detail here.
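For illustration only, the following is a minimal training sketch of the procedure just described, assuming PyTorch and a ResNet-18 backbone; the network choice, data layout, and hyperparameters are illustrative assumptions and are not fixed by this application.

```python
# A minimal training sketch (assumptions: PyTorch, ResNet-18 backbone,
# per-tooth image crops with binary labels; none of these are fixed here).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(num_classes=2)      # outputs: [not carious, carious]
criterion = nn.CrossEntropyLoss()           # measures sample-mark vs predicted-mark difference
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient-descent optimization

def train_step(images: torch.Tensor, sample_marks: torch.Tensor) -> float:
    """images: (N, 3, H, W) visible-light tooth crops;
    sample_marks: (N,) int64 labels, 1 = carious, 0 = sound."""
    optimizer.zero_grad()
    predicted = model(images)               # predicted marks (logits) per tooth
    loss = criterion(predicted, sample_marks)
    loss.backward()                         # adjust network parameters from the difference
    optimizer.step()
    return loss.item()
```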
Alternatively, instead of recognizing the visible light image with a caries identification model as described above, the visible light image may be processed with conventional image processing to determine the carious teeth. Specifically, the visible light image data may be processed with an edge operator to obtain the edge of each tooth. The tooth region enclosed by that edge may then be processed with, for example, a watershed algorithm: if prominent (thick or long) lines remain in the tooth region after processing, the tooth surface has conspicuous fissures and the tooth may be considered carious; conversely, if no lines (or only thin, short ones) remain, the tooth surface has no prominent fissures and the tooth may be considered sound.
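A minimal sketch of such a conventional pipeline is given below, assuming OpenCV; note it substitutes a Canny edge operator plus a Hough line check for the edge-operator-and-watershed processing described above, so the specific operators, thresholds, and decision rule are stand-in assumptions.

```python
# A minimal conventional-image-processing sketch (assumptions: OpenCV;
# Canny + Hough lines stand in for the edge-operator/watershed processing
# described above, and all thresholds are illustrative).
import cv2
import numpy as np

def tooth_region_is_carious(gray_tooth: np.ndarray) -> bool:
    """gray_tooth: grayscale crop of a single tooth region."""
    edges = cv2.Canny(gray_tooth, 50, 150)   # edge operator over the tooth region
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=20, maxLineGap=5)
    # Prominent (long/numerous) lines suggest conspicuous surface fissures;
    # few or no lines suggest a sound tooth surface.
    return lines is not None and len(lines) > 3
```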
In a specific implementation scenario, to compensate for the possibility that caries identification from visible light images alone may be misled (e.g., surface stains, such as smoking stains, being recognized as fissure lines on the tooth surface), the image data may also include depth image data. For example, the depth image data may be obtained by imaging the teeth with a 3D structured-light camera, a 3D ToF (Time-of-Flight) camera, an ultrasonic scanning device, or the like. On this basis, recognition can be performed on the depth image data to determine the carious teeth. For example, if recognition based on the depth image data finds that a tooth has a complete shape, the tooth may be considered sound; conversely, if it finds that the tooth has a cavity, the tooth may be considered carious.
In a specific embodiment, still taking the object to be processed as a tooth, a caries confidence may be determined for each tooth based on the image data, and the carious teeth may be determined based on the caries confidences. Note that the higher the caries confidence, the more likely the tooth is carious; conversely, the lower the confidence, the less likely. The detailed calculation of the caries confidence and the method of determining caries from it are described in the embodiments below and not repeated here. In this way, determining caries through confidences can improve the accuracy of caries determination.
In one implementation scenario, still taking the object to be processed as a tooth, after the carious teeth are determined, the image data of teeth not determined to be carious may be saved to the dental archive of the object to which the teeth belong. For example, the dental archive may store image data indexed by acquisition time. In this way, saving the image data of sound teeth to the dental archive improves the convenience of subsequent follow-up visits.
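As a trivial illustration of an archive indexed by acquisition time, consider the following sketch; the class, field names, and in-memory storage format are assumptions made for illustration only.

```python
# A minimal dental-archive sketch (assumptions: in-memory dict keyed by
# subject, records sorted by acquisition time; field names are illustrative).
from datetime import datetime
from typing import Dict, List, Tuple

class DentalArchive:
    """Stores image data per subject, indexed by acquisition time."""
    def __init__(self) -> None:
        self._records: Dict[str, List[Tuple[datetime, bytes]]] = {}

    def save(self, subject_id: str, acquired_at: datetime, image_bytes: bytes) -> None:
        self._records.setdefault(subject_id, []).append((acquired_at, image_bytes))
        self._records[subject_id].sort(key=lambda rec: rec[0])  # keep chronological order

    def history(self, subject_id: str) -> List[Tuple[datetime, bytes]]:
        return self._records.get(subject_id, [])
```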
In one implementation, after the carious teeth are determined and the first data of the caries is acquired, data may further be collected from the cleaned caries in response to completion of caries cleaning, to obtain the second data. The second data may include depth image data, or both visible light image data and depth image data, which is not limited herein.
Step S12: and generating a three-dimensional model of the part to be filled of the defected object to be processed based on the first data and the second data.
In one implementation scenario, as described above, the first data may include depth image data and the second data may also include depth image data. Taking the object to be processed as a tooth, for distinction, the depth image data before caries cleaning may be called the first depth image, and the depth image data after caries cleaning the second depth image. Specifically, a first partial image of the portions where the caries adjoins (e.g., occludes with) other teeth may be extracted from the first depth image, and a second partial image of the cavity portion may be extracted from the second depth image. On this basis, the first partial image and the second partial image can be fused to obtain a fused partial image. To improve the usability of the three-dimensional model, the fused partial image can be further fine-tuned, and the three-dimensional model of the portion to be filled obtained from the fine-tuned result. The fine-tuning may include, but is not limited to, removing outliers from the fused partial image, which is not limited herein.
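The following minimal sketch illustrates one way such a fusion and outlier fine-tuning step could look, assuming the two partial images are aligned depth maps in a common frame with NaN marking missing pixels; the fusion rule and the z-score filter are illustrative assumptions, not the method fixed by this application.

```python
# A minimal fusion/fine-tuning sketch (assumptions: aligned depth maps in a
# common coordinate frame, NaN marks missing pixels, and a 3-sigma z-score
# filter stands in for the unspecified outlier removal).
import numpy as np

def fuse_partial_depths(contact_surface: np.ndarray, cavity_surface: np.ndarray) -> np.ndarray:
    """contact_surface: first partial image (pre-cleaning contact with neighboring teeth);
    cavity_surface: second partial image (post-cleaning cavity). Same shape, float dtype."""
    # Prefer the pre-cleaning contact surface where it exists; fall back to the cavity.
    fused = np.where(np.isnan(contact_surface), cavity_surface, contact_surface)
    # Fine-tune: drop outlier depth values so the model surface stays usable.
    valid = fused[~np.isnan(fused)]
    mu, sigma = valid.mean(), valid.std()
    fused[np.abs(fused - mu) > 3 * sigma] = np.nan
    return fused
```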
In another implementation scenario, still taking the object to be processed as a tooth, the depth image data in the first data (before caries cleaning) may again be called the first depth image, and the depth image data in the second data (after caries cleaning) the second depth image. To improve model generation efficiency, a generative model may be trained in advance; it may include, but is not limited to, a convolutional neural network, and its network structure is not limited here. The trained generative model can process the first depth image and the second depth image to obtain the three-dimensional model of the portion to be filled. Specifically, sample image pairs may be obtained from historical filling data, each pair consisting of a first sample depth image before caries cleaning and a second sample depth image after caries cleaning. In addition, a sample three-dimensional model of the portion of the caries to be filled can be obtained, for example, by manual delineation by a doctor, or by differencing the third sample depth image taken after the caries is completely filled against the second sample depth image, which is not limited herein. The generative model can then process the first and second sample depth images to obtain a predicted three-dimensional model of the portion to be filled, and its network parameters can be adjusted based on the difference between the sample three-dimensional model and the predicted one. The difference may be measured with a loss function such as dice loss, and the parameters adjusted with an optimization method such as gradient descent, which are not described in detail here.
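As an illustration of this generative route, the sketch below pairs a dice loss with a toy network that maps the two depth images to a voxel occupancy grid of the to-be-filled volume; the voxel output, layer sizes, and data layout are assumptions, since this application does not fix the network structure.

```python
# A minimal generative-route sketch (assumptions: PyTorch, the model outputs a
# (N, D, H, W) voxel occupancy grid of the to-be-filled volume, and the layer
# sizes are arbitrary; the actual network structure is not fixed here).
import torch
import torch.nn as nn

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred/target: (N, D, H, W) occupancy grids of the portion to be filled."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

class FillVolumeNet(nn.Module):
    """Stacks the pre-cleaning and post-cleaning depth images as two input channels."""
    def __init__(self, depth_slices: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, depth_slices, 3, padding=1),  # channels become D depth slices
            nn.Sigmoid(),
        )

    def forward(self, first_depth: torch.Tensor, second_depth: torch.Tensor) -> torch.Tensor:
        """first_depth/second_depth: (N, 1, H, W) -> (N, D, H, W) occupancy grid."""
        return self.net(torch.cat([first_depth, second_depth], dim=1))
```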
In an implementation scenario, after the three-dimensional model of the portion to be filled is generated, printing may be performed based on the model to obtain a three-dimensional printed piece, and filling of the portion to be filled is completed with that printed piece. Taking the object to be processed as a tooth, afterwards, in response to the portion to be filled having been filled, third data collected after the caries filling is completed may be acquired and saved to the dental archive of the object to which the carious tooth belongs. Note that in existing practice, before filling, a doctor often needs to visually inspect whether caries exists, clean it if so (e.g., remove necrotic tissue), and then manually fill the caries with filling material, repeatedly correct it, cure it, and then repeatedly polish and correct again, so the whole filling process is time-consuming and laborious, and the experience is poor. In contrast, after a three-dimensional printed piece is obtained from the three-dimensional model, it can be placed directly onto the carious tooth and then cured, without repeated correction, which greatly simplifies the whole filling process, reduces discomfort during filling, and reduces the foreign-body sensation of the filled tooth. In addition, the workload pressure on doctors and medical institutions can be reduced.
According to the above scheme, first data collected before the defective object to be processed is cleaned and second data collected after cleaning are acquired, and a three-dimensional model of the portion to be filled is generated based on the first data and the second data, so that the efficiency of defect filling can be improved.
Referring to FIG. 2, FIG. 2 is a schematic flow chart of one embodiment of determining caries among individual teeth, taking the object to be treated as a tooth as an example. Specifically, the embodiment of the present disclosure may include the following steps:
step S21: based on the image data, caries confidence levels for individual teeth are obtained separately.
In one implementation scenario, the image data may include visible light image data and, as described above, recognition may be performed on it to obtain the caries confidence of each tooth. Illustratively, the visible light image can be recognized by a caries identification model to yield the caries confidence of a tooth. Unlike the caries identification model of the previously disclosed embodiment, during training the sample mark may represent the actual confidence that a tooth is carious (e.g., a sample mark of 0 represents an actual confidence of 0%, and a sample mark of 1 an actual confidence of 100%); the caries identification model then recognizes the sample visible light image to obtain a predicted mark representing the predicted confidence that the tooth is carious, and the network parameters are adjusted based on the difference between the sample mark and the predicted mark. For the rest of the caries identification model, refer to the description of the embodiments above, which is not repeated here.
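A minimal sketch of this confidence-style training follows, assuming PyTorch; the scalar confidence head and the binary cross-entropy loss are stand-in assumptions, since the loss is not specified here.

```python
# A minimal confidence-training sketch (assumptions: PyTorch; the sample mark
# is the actual confidence in [0, 1], and BCE on the predicted confidence
# stands in for the unspecified loss; the 128-d feature input is illustrative).
import torch
import torch.nn as nn

confidence_head = nn.Sequential(nn.Flatten(), nn.Linear(128, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(confidence_head.parameters(), lr=1e-3)

def confidence_step(features: torch.Tensor, sample_marks: torch.Tensor) -> float:
    """features: (N, 128) per-tooth feature vectors;
    sample_marks: (N, 1) actual confidences (0.0 or 1.0)."""
    optimizer.zero_grad()
    predicted_conf = confidence_head(features)   # predicted confidence per tooth
    loss = criterion(predicted_conf, sample_marks)
    loss.backward()
    optimizer.step()
    return loss.item()
```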
In another implementation scenario, the image data may include depth image data and, as described above, recognition may be performed on it to obtain the caries confidence of each tooth. Illustratively, the more pronounced the cavity in the depth image data, the greater the confidence. Alternatively, to improve the accuracy of the confidence obtained from the depth image data, a caries identification model may likewise be trained in advance on sample depth image data annotated with sample marks and used to recognize the depth image data; the training process parallels the one described above and is not repeated here.
In yet another implementation scenario, to improve the accuracy of the confidence, the image data may include both visible light image data and depth image data: recognition is performed on the visible light image data to obtain a first caries confidence for each tooth, recognition is performed on the depth image data to obtain a second caries confidence for each tooth, and the two are combined to obtain the caries confidence of each tooth. The processes for obtaining the first and second caries confidences are described above and not repeated here. For each tooth, the first and second caries confidences may be weighted to obtain the caries confidence. For example, if the depth image data is trusted more, the weight of the second caries confidence may be set greater than that of the first; if the visible light image data is trusted more, the weight of the first caries confidence may be set greater than that of the second; and if the two are trusted equally, the weights may be set equal, which is not limited herein. In this way, recognizing caries from the two dimensions of visible light and depth and combining the resulting confidences can further improve the accuracy of caries determination.
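For concreteness, a minimal sketch of the weighted combination is given below; the particular weight values remain a design choice, as noted above, and the function name is illustrative.

```python
# A minimal confidence-combination sketch (the weight values are a design
# choice as described above; equal weights are used here only as a default).
def combine_caries_confidence(first_conf: float, second_conf: float,
                              w_visible: float = 0.5, w_depth: float = 0.5) -> float:
    """first_conf: confidence from visible-light recognition;
    second_conf: confidence from depth recognition. Weights must sum to 1."""
    assert abs(w_visible + w_depth - 1.0) < 1e-9
    return w_visible * first_conf + w_depth * second_conf
```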
Step S22: caries is determined in each tooth based on caries confidence.
In one implementation scenario, a first threshold and a second threshold may be preset, with the first threshold higher than the second. For example, the first threshold may be set to 90% and the second to 70%; or the first to 95% and the second to 80%, which is not limited herein. On this basis, it can be determined whether the caries confidence is above the first threshold. If the caries confidence is greater than the first threshold, the tooth can be directly determined to be carious; if the caries confidence is not greater than the first threshold but greater than the second threshold, the image data of the tooth may be displayed and a recognition result input based on the image data (e.g., by a dentist) may be received, the recognition result including whether the tooth is carious. In this way, teeth whose confidence exceeds the first threshold are confirmed as carious without manual confirmation, while ambiguous cases between the two thresholds are resolved through a simple interaction with the user, which greatly streamlines the workflow of confirming caries and improves its accuracy; that is, the efficiency and accuracy of caries confirmation can be improved at the same time.
In one particular implementation scenario, a confirmation prompt may be displayed alongside the image data and may include, but is not limited to, the caries confidence of the tooth. For example, the prompt may read "the caries confidence of the identified tooth is 77%; please confirm whether it is carious", which is not limited herein. The recognition result may be determined to include that the tooth is carious in response to the user selecting "yes", or that it is not carious in response to the user selecting "no".
In one particular implementation scenario, a tooth may also be directly determined to be sound in response to its caries confidence being not greater than the second threshold. That is, when the caries confidence is high (above the first threshold), the tooth can be directly determined to be carious; when it is low (not above the second threshold), the tooth can be directly determined to be sound; only when the confidence is moderate (between the two thresholds) can the determination not be made directly, and user-assisted confirmation is required. This greatly saves the time needed to confirm caries and further improves overall efficiency.
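The full two-threshold triage just described can be summarized in the following minimal sketch; the 0.90/0.70 thresholds are one of the example settings above, and the manual-review callback is an illustrative stand-in for displaying the image and receiving the user's recognition result.

```python
# A minimal two-threshold triage sketch (thresholds 0.90/0.70 are one of the
# example settings above; ask_user stands in for displaying the image data
# and receiving the recognition result input by the user).
from typing import Callable

def triage_tooth(caries_conf: float, ask_user: Callable[[], bool],
                 first_threshold: float = 0.90, second_threshold: float = 0.70) -> bool:
    """Returns True if the tooth is treated as carious."""
    if caries_conf > first_threshold:    # high confidence: carious, no manual confirmation
        return True
    if caries_conf <= second_threshold:  # low confidence: directly treated as sound
        return False
    return ask_user()                    # moderate band: user-assisted confirmation
```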
According to the above scheme, a caries confidence is obtained for each tooth based on the image data, and the carious teeth are determined based on the caries confidences, so that determining caries through confidences can improve the accuracy of caries determination.
Referring to fig. 3, fig. 3 is a schematic flow chart of another embodiment of the model generation method of the present application. Specifically, the method may include the steps of:
step S31: and acquiring image data of each object to be processed, which is acquired before cleaning.
Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Step S32: and respectively acquiring the defect confidence coefficients of the objects to be processed based on the image data.
It should be noted that, the specific process of obtaining the defect confidence coefficient may refer to the specific steps of obtaining the caries confidence coefficient in the foregoing disclosed embodiments, for example, the first defect confidence coefficient of each object to be processed may be obtained by performing recognition based on visible light image data, the second defect confidence coefficient of each object to be processed may be obtained by performing recognition based on depth image data, and the defect confidence coefficient of each object to be processed may be obtained by combining the first defect confidence coefficient and the second defect confidence coefficient. Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Step S33: and determining whether each object to be processed is defective or not based on the defect confidence, if not, executing step S34, and if so, executing step S35.
It should be noted that, in the specific process of determining whether each of the objects to be processed is defective or not based on the defect confidence, referring to the specific steps of determining whether each of the teeth is carious based on the caries confidence, for example, in the case where the defect confidence is greater than a first threshold, the object to be processed may be directly determined to be defective, and in the case where the defect confidence is not greater than the first threshold and is greater than a second threshold, the image data of the object to be processed may be displayed, and the recognition result input based on the image data may be received, and the first threshold is greater than the second threshold, the recognition result includes whether the object to be processed is defective or not. Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Step S34: and storing the image data of the objects to be processed which are determined not to be defective into an object file of the objects to be processed.
As described in the foregoing disclosure, taking the object to be processed is a tooth as an example, if the tooth is not determined to be a decayed tooth, the image data thereof may be stored in the tooth file of the object to which the tooth belongs.
Step S35: the image data of the defective object to be processed is used as first data.
Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Step S36: and acquiring second data acquired after the defect object to be processed is cleaned.
Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Step S37: and generating a three-dimensional model of the part to be filled of the defected object to be processed based on the first data and the second data.
Reference may be made to the related description in the foregoing embodiments, which are not repeated herein. In addition, taking the example that the object to be processed is a tooth, after the three-dimensional model is generated, three-dimensional printing can be performed based on the three-dimensional model to obtain a three-dimensional printed product, so that the three-dimensional printed product can be transplanted to a decayed tooth, and light curing treatment and the like are performed, thereby completing filling operation of the part to be filled.
Step S38: and responding to the filled part, acquiring third data acquired after the filling of the defective object to be processed is finished, and storing the third data to an object file of the defective object to be processed.
As described in the foregoing disclosure, taking the object to be treated as a tooth as an example, after the caries is filled, the third data can be collected and saved to the tooth file of the object to which the caries belongs. Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
By the above scheme, defect filling operations can be greatly reduced, and thus the efficiency of defect filling can be improved.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a framework of an embodiment of a model generation apparatus 40 according to the present application. The model generation apparatus 40 includes a data acquisition module 41, configured to acquire first data collected from a defective object to be processed before cleaning and second data collected from the defective object after cleaning, and a model generation module 42, configured to generate, based on the first data and the second data, a three-dimensional model of the portion of the defective object to be filled.
According to the above scheme, first data collected before the defective object to be processed is cleaned and second data collected after cleaning are acquired, and a three-dimensional model of the portion to be filled is generated based on the first data and the second data. Therefore, the efficiency of defect filling can be improved.
In some disclosed embodiments, the data acquisition module 41 includes an image data acquisition sub-module configured to acquire image data of each object to be processed collected before cleaning, a defect determination sub-module configured to determine the defective object to be processed based on the image data, and a first data acquisition sub-module configured to use the image data of the defective object as the first data.
Therefore, through image recognition, the inspection operations can be greatly reduced over the whole filling process, further improving efficiency.
In some disclosed embodiments, the object to be processed is a tooth, the defective object to be processed is a carious tooth, and the defect determination sub-module includes a measurement unit configured to obtain the caries confidence of each tooth based on the image data, and a determination unit configured to determine the carious teeth based on the caries confidences.
Therefore, determining caries through confidences can improve the accuracy of caries determination.
In some disclosed embodiments, the measurement unit includes a first measurement subunit configured to perform recognition based on the visible light image data to obtain a first caries confidence for each tooth, a second measurement subunit configured to perform recognition based on the depth image data to obtain a second caries confidence for each tooth, and a combining subunit configured to combine the first and second caries confidences into the caries confidence of each tooth.
Therefore, recognizing caries from the two dimensions of visible light and depth and combining the resulting confidences can further improve the accuracy of caries determination.
In some disclosed embodiments, the determination unit includes a judgment subunit configured to determine whether the caries confidence is above a first threshold, a first response subunit configured to determine that the tooth is carious if the caries confidence is greater than the first threshold, and a second response subunit configured to display the image data of the tooth and receive a recognition result input based on the image data if the caries confidence is not greater than the first threshold but greater than a second threshold, wherein the first threshold is greater than the second threshold and the recognition result includes whether the tooth is carious.
Therefore, confident cases are confirmed as carious without manual confirmation by the user, ambiguous cases are resolved through a simple interaction with the user, and the workflow of confirming caries is greatly streamlined while its accuracy improves; that is, the efficiency and accuracy of caries confirmation can be improved at the same time.
In some disclosed embodiments, the object to be processed is a tooth, the defective object to be processed is a carious tooth, and the model generation apparatus 40 includes a data archive module configured to save the image data of teeth not determined to be carious to the dental archive of the object to which the teeth belong.
Therefore, saving the image data of teeth not determined to be carious to the dental archive improves the convenience of subsequent follow-up visits.
In some disclosed embodiments, the data archive module is further configured to, in response to the portion to be filled having been filled, acquire third data collected after the caries filling is completed and save it to the dental archive of the object to which the carious tooth belongs.
Therefore, after the three-dimensional model is generated, the third data collected after filling is saved to the dental archive, improving the convenience of subsequent follow-up visits.
Referring to FIG. 5, FIG. 5 is a block diagram of an embodiment of a model generation device 50 according to the present application. The model generation device 50 includes a memory 51 and a processor 52 coupled to each other, where the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of any of the model generation method embodiments above; refer to the foregoing disclosed embodiments, which are not repeated here.
Specifically, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the model generation method embodiments above. The processor 52 may also be called a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip with signal processing capability, and may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor or any conventional processor. In addition, the processor 52 may be implemented jointly by multiple integrated circuit chips.
In one implementation scenario, with continued reference to FIG. 5, the model generation device 50 further includes an image capture device 53 coupled to the processor 52 and configured to capture image data of the object to be processed. Specifically, the image capture device 53 may include a visible light camera 531 that captures visible light image data and a depth camera 532 that captures depth image data. The depth camera 532 may specifically include, but is not limited to: a 3D structured-light camera, a 3D ToF (Time-of-Flight) camera, an ultrasonic scanning device, and the like, which is not limited herein.
In one implementation scenario, the model generation device 50 further includes a three-dimensional printing device 54 coupled to the processor 52 and configured to print based on the three-dimensional model of the portion to be filled of the defective object to be processed.
In one implementation scenario, the model generation device 50 further includes human-machine interaction circuitry (not shown), which may include, but is not limited to, display circuitry (e.g., a display screen) and input circuitry (e.g., a keyboard). Alternatively, the model generation device 50 further includes communication circuitry (not shown), which may include, but is not limited to, a USB (Universal Serial Bus) interface, an HDMI (High-Definition Multimedia Interface) interface, and the like, through which the device is communicatively connected to peripherals (e.g., a display or a keyboard).
By the above scheme, defect filling operations can be greatly reduced, and thus the efficiency of defect filling can be improved.
Referring to FIG. 6, FIG. 6 is a block diagram of an embodiment of a computer-readable storage medium 60 of the present application. The computer-readable storage medium 60 stores program instructions 601 executable by a processor, the program instructions 601 being used to implement the steps of any of the model generation method embodiments above.
With the above scheme, the manual operations involved in defect filling can be greatly reduced, and the efficiency of defect filling can therefore be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only one kind of logical functional division, and other divisions are possible in actual implementation; for instance, units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections between devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs the user of the personal information processing rules and obtains the individual's independent consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains the individual's separate consent before processing the sensitive personal information and additionally satisfies the requirement of 'express consent'. For example, for a personal information collection device such as a camera, a clear and prominent notice is set up to inform users that they are entering a personal information collection range and that personal information will be collected; if an individual voluntarily enters the collection range, the individual is regarded as consenting to the collection of his or her personal information. Alternatively, on a device that processes personal information, and on the condition that the personal information processing rules are communicated by prominent identification or notice, personal authorization is obtained by means such as a pop-up window or by asking the individual to upload his or her own personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information to be processed.

Claims (13)

1. A method of model generation, comprising:
acquiring first data collected from a defective object to be processed before cleaning, and acquiring second data collected from the defective object to be processed after cleaning; and
generating a three-dimensional model of a part to be filled of the defective object to be processed based on the first data and the second data.
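Claim 1 does not fix how the two acquisitions are combined. One plausible reading, sketched below under the assumption that both datasets have been reconstructed into boolean voxel occupancy grids of the same shape, is that the part to be filled is the material present before cleaning but absent after cleaning; the reconstruction step itself is outside this sketch:

```python
import numpy as np

def part_to_be_filled(first_occupancy: np.ndarray,
                      second_occupancy: np.ndarray) -> np.ndarray:
    """Hypothetical derivation of the part to be filled.

    first_occupancy:  boolean voxel grid reconstructed from the first data
                      (defective object before cleaning).
    second_occupancy: boolean voxel grid reconstructed from the second data
                      (same object after cleaning).
    Returns a boolean voxel grid marking the material removed by cleaning,
    i.e. the cavity that the filling must occupy.
    """
    return first_occupancy & ~second_occupancy
```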
2. The method of claim 1, wherein the acquiring first data collected from the defective object to be processed before cleaning comprises:
acquiring image data of each object to be processed collected before cleaning;
determining the defective object to be processed based on the image data; and
taking the image data of the defective object to be processed as the first data.
3. The method of claim 2, wherein the object to be processed is a tooth, the defective object to be processed is a decayed tooth, and the determining the defective object to be processed based on the image data comprises:
obtaining a caries confidence for each tooth based on the image data; and
determining the decayed teeth among the teeth based on the caries confidences.
4. The method of claim 3, wherein the image data includes visible light image data and depth image data, and the obtaining a caries confidence for each tooth based on the image data comprises:
performing recognition based on the visible light image data to obtain a first caries confidence for each tooth;
performing recognition based on the depth image data to obtain a second caries confidence for each tooth; and
combining the first caries confidence and the second caries confidence to obtain the caries confidence for each tooth.
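The claim leaves the combination function open. A minimal sketch, assuming a simple weighted average of the two per-tooth confidences (the weight w_visible is a hypothetical parameter, not taken from the disclosure):

```python
def fuse_caries_confidence(first_conf: float,
                           second_conf: float,
                           w_visible: float = 0.5) -> float:
    """Combine the visible-light confidence (first_conf) and the depth
    confidence (second_conf) into a single caries confidence in [0, 1].

    A weighted average is only one plausible choice; the claim does not
    specify how the two confidences are combined.
    """
    return w_visible * first_conf + (1.0 - w_visible) * second_conf
```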
5. The method of claim 3, wherein the determining the decayed teeth among the teeth based on the caries confidences comprises:
determining whether the caries confidence is greater than a first threshold;
in a case where the caries confidence is greater than the first threshold, determining that the tooth is decayed; and
in a case where the caries confidence is not greater than the first threshold but is greater than a second threshold, displaying the image data of the tooth and receiving a recognition result input based on the image data;
wherein the first threshold is greater than the second threshold, and the recognition result indicates whether the tooth is decayed.
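Claim 5 describes a dual-threshold triage; the sketch below is illustrative, with ask_reviewer standing in for displaying the tooth's image data and receiving the manually input recognition result. The claim is silent on confidences at or below the second threshold; treating them as not decayed here is an assumption:

```python
from typing import Callable

def decide_decayed(conf: float,
                   first_threshold: float,
                   second_threshold: float,
                   ask_reviewer: Callable[[], bool]) -> bool:
    """Dual-threshold caries decision (illustrative; assumes
    first_threshold > second_threshold, as required by the claim)."""
    if conf > first_threshold:
        return True               # high confidence: automatically marked decayed
    if conf > second_threshold:
        return ask_reviewer()     # uncertain band: defer to the manual result
    return False                  # assumption: below both thresholds, not decayed
```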
6. The method of claim 2, wherein the object to be processed is a tooth, the defective object to be processed is a decayed tooth, and the method further comprises:
storing the image data of each tooth that is not determined to be decayed into a tooth file of the subject to whom the tooth belongs.
7. The method of claim 6, wherein after the generating a three-dimensional model of the part to be filled of the defective object to be processed based on the first data and the second data, the method further comprises:
in response to completion of filling of the part to be filled, acquiring third data collected from the decayed tooth after filling, and saving the third data to the tooth file of the subject to whom the decayed tooth belongs.
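Claims 6 and 7 imply a per-subject dental record that accumulates image data across stages. A minimal hypothetical sketch (the ToothFile structure and the stage labels are assumptions, not part of the claims):

```python
from dataclasses import dataclass, field

@dataclass
class ToothFile:
    """Hypothetical per-subject record implied by claims 6 and 7."""
    subject_id: str
    records: list = field(default_factory=list)

    def add(self, tooth_id: int, stage: str, image_data) -> None:
        # stage might be, e.g., "not-decayed" (claim 6) or
        # "post-filling" for the third data (claim 7)
        self.records.append({"tooth": tooth_id,
                             "stage": stage,
                             "data": image_data})
```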
8. A model generation apparatus, comprising:
a data acquisition module configured to acquire first data collected from a defective object to be processed before cleaning, and to acquire second data collected from the defective object to be processed after cleaning; and
a model generation module configured to generate a three-dimensional model of a part to be filled of the defective object to be processed based on the first data and the second data.
9. A model generation apparatus comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the model generation method of any one of claims 1 to 7.
10. The model generation apparatus of claim 9, further comprising an image acquisition device coupled to the processor and configured to acquire image data of an object to be processed.
11. The model generation apparatus of claim 10, wherein the image capture device comprises a visible light camera and a depth camera.
12. The model generation apparatus of claim 9, further comprising a three-dimensional printing device coupled to the processor and configured to print based on the three-dimensional model of the part to be filled of the defective object to be processed.
13. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the model generation method of any one of claims 1 to 7.
CN202210585198.9A 2022-05-25 2022-05-25 Model generation method, related device, equipment and storage medium Withdrawn CN114898046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585198.9A CN114898046A (en) 2022-05-25 2022-05-25 Model generation method, related device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210585198.9A CN114898046A (en) 2022-05-25 2022-05-25 Model generation method, related device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114898046A true CN114898046A (en) 2022-08-12

Family

ID=82725334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585198.9A Withdrawn CN114898046A (en) 2022-05-25 2022-05-25 Model generation method, related device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114898046A (en)

Similar Documents

Publication Publication Date Title
US20210100643A1 (en) Automated detection, generation and/or correction of dental features in digital models
JP7386215B2 (en) Method and device for removal of dental mesh orthodontic appliances
CN104220022B (en) Tooth preparations guiding piece
US8954181B2 (en) Systems, methods, apparatuses, and computer-readable storage media for designing and manufacturing custom dental preparation guides
US8521317B2 (en) Systems, methods, apparatuses, and computer-readable storage media for designing and manufacturing prosthetic dental items
CN107427189A (en) Intraoral image automatically selecting and locking
JP2009523552A (en) Visualization of 3D data acquisition
KR101057762B1 (en) Apparatus for processing a tooth model for prosthetics, a method thereof and a recording medium having recorded thereon a program for implementing the method
KR20180121689A (en) Identification of areas of interest during intraoral scans
US11250580B2 (en) Method, system and computer readable storage media for registering intraoral measurements
US20160256246A1 (en) Negative and positive merge modelling
CN113262070A (en) Dental surgery equipment positioning method and system based on image recognition and storage medium
CN110236673A (en) Design method and device before a kind of bilateral jaw defect Reconstruction based on database
CN115457198A (en) Tooth model generation method and device, electronic equipment and storage medium
CN106562800A (en) Full oral digital information acquisition and saving method and system
EP4364094A1 (en) Non-invasive periodontal examination
US20230390031A1 (en) Systems and methods for library-based tooth selection in digital dental appliance design
US8352059B2 (en) Method for the manufacturing of a reproduction of an encapsulated head of a foetus and objects obtained by the method
CN114898046A (en) Model generation method, related device, equipment and storage medium
CN116823729A (en) Alveolar bone absorption judging method based on SegFormer and oral cavity curved surface broken sheet
KR102472128B1 (en) Method for providing information for dental treatment, and electronic apparatus performing the same method
CN107361842A (en) Medical image-processing apparatus, medical diagnostic imaging apparatus and image processing method
CN115186448A (en) Tooth restoration data determination method, device, equipment and storage medium
CN111710426A (en) Method, device, system and computer readable storage medium for filling holes in tooth model
KR102026085B1 (en) Method and apparatus for surface completion in ct data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220812