CN114549509A - Method, device, equipment and storage medium for detecting orthodontic effect - Google Patents

Method, device, equipment and storage medium for detecting orthodontic effect

Info

Publication number
CN114549509A
Authority
CN
China
Prior art keywords
tooth
target
model
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210194371.2A
Other languages
Chinese (zh)
Inventor
华昀峰 (Hua Yunfeng)
田彦 (Tian Yan)
江腾飞 (Jiang Tengfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN202210194371.2A priority Critical patent/CN114549509A/en
Publication of CN114549509A publication Critical patent/CN114549509A/en
Priority to PCT/CN2023/078959 priority patent/WO2023165505A1/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G06F 18/00 Pattern recognition)
    • G06T 17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes (G06T 17/00 Three dimensional [3D] modelling)
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10004: Still image; photographic image
    • G06T 2207/10012: Stereo images
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30036: Dental; teeth (G06T 2207/30004 Biomedical image processing)
    • G06T 2210/41: Medical (indexing scheme for image generation or computer graphics)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The present disclosure relates to a method, an apparatus, a device and a storage medium for detecting an orthodontic effect. The method for detecting the orthodontic effect comprises: acquiring a tooth image, wherein the tooth image comprises a target tooth; identifying the target tooth in the tooth image based on a pre-trained neural network model, and determining characteristic information of the target tooth; determining pose information of the target tooth according to the characteristic information and the tooth model corresponding to the target tooth; and mapping the tooth model into the tooth image based on the pose information, and generating a detection result of the target orthodontic effect based on the tooth image. With the method provided by the disclosure, frequent follow-up visits are not needed during dental diagnosis and treatment: by simply collecting tooth images, the data relevant to the orthodontic effect can be acquired automatically and in real time and the orthodontic effect detected. The operation is simple, convenient and easy to implement, and it also allows the relevant medical staff to keep track of the correction progress and adjust the correction plan in time.

Description

Method, device, equipment and storage medium for detecting orthodontic effect
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting an orthodontic effect.
Background
With the development of society, orthodontic treatment is attracting increasing attention as a form of oral aesthetic treatment. Among the available approaches, three-dimensional digital orthodontics, a fully digital correction technique, is widely used. During oral diagnosis and treatment, a doctor typically acquires a tooth image of a user through the handle of an intraoral scanner, views the tooth image shown on a display, and, after diagnosis, judges whether the user's teeth are missing or defective.
In the traditional orthodontic workflow, the user's teeth must be scanned with an intraoral scanner at every follow-up visit to obtain and process a current tooth model, which is then compared with a preset model to produce an error report and evaluate the treatment effect.
Disclosure of Invention
In order to solve this technical problem, the present disclosure provides a method, an apparatus, a device, and a medium for detecting an orthodontic effect, which can automatically detect the orthodontic effect in real time by collecting a tooth image during dental diagnosis and treatment, and which is simple to operate and easy to implement.
In a first aspect, the disclosed embodiments provide a method for detecting an orthodontic effect, including:
acquiring a tooth image, wherein the tooth image comprises a target tooth;
identifying a target tooth in the tooth image based on a pre-trained neural network model, and determining characteristic information of the target tooth;
determining pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth;
and mapping the tooth model into the tooth image based on the pose information, and generating a detection result of the target orthodontic effect based on the tooth image.
Optionally, before acquiring the dental image, the method further includes:
acquiring tooth scanning data, and constructing a three-dimensional tooth model according to the tooth scanning data, wherein the three-dimensional tooth model comprises a tooth model corresponding to at least one tooth;
the tooth scanning data is acquired by a three-dimensional scanner.
Optionally, the feature information includes a number of the target tooth, and each tooth model included in the three-dimensional tooth model has a number.
Optionally, the determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth includes:
acquiring a tooth model with the same number as the target tooth from the three-dimensional tooth model;
and determining the pose information of the target tooth according to the characteristic information and the tooth model.
Optionally, the feature information further includes coordinates of the target tooth.
Optionally, the determining pose information of the target tooth according to the feature information and the tooth model includes:
determining a coordinate system of the tooth model, and converting the coordinates of the target tooth into the coordinate system;
and determining the pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model.
Optionally, the pose information includes orientation and position information.
Optionally, the determining pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model includes:
fitting the contour line of the target tooth and the contour line of the tooth model based on the converted coordinates of the target tooth and the position information of the tooth model to determine the position information of the target tooth;
determining an orientation of the target tooth according to the orientation of the tooth model;
and obtaining the pose information of the target tooth in the coordinate system according to the orientation and the position information of the target tooth.
Optionally, the feature information further includes feature points, and the feature points are used for identifying local features of the target tooth.
Optionally, after determining pose information of the target tooth, the method further includes:
and acquiring other tooth images including the target tooth, and updating the pose information according to a first characteristic point in characteristic information corresponding to the other tooth images and a second characteristic point in the characteristic information corresponding to the tooth images.
Optionally, the mapping the tooth model to the tooth image based on the pose information and generating the detection result of the target orthodontic effect based on the tooth image includes:
mapping the tooth model to the tooth image based on the pose information to obtain a mapping image;
calculating a pixel difference between the tooth image and the mapping image;
and generating a detection result of the target tooth correction effect according to the pixel difference.
In a second aspect, the present disclosure provides a device for detecting an orthodontic effect, including:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a tooth image which comprises a target tooth;
the recognition unit is used for recognizing the target teeth in the tooth images based on a pre-trained neural network model and determining the characteristic information of the target teeth;
the determining unit is used for determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth;
a detection unit configured to map the tooth model into the tooth image based on the pose information, and generate a detection result of the target orthodontic effect based on the tooth image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the above-described method of detecting an orthodontic effect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the above-mentioned method for detecting an orthodontic effect.
The embodiment of the disclosure provides a method for detecting a tooth straightening effect, which comprises the following steps: acquiring a tooth image; identifying the tooth image based on a pre-trained neural network model, and determining the characteristic information of the target tooth in the tooth image; determining pose information of the target tooth according to the characteristic information and the tooth model corresponding to the target tooth; and mapping the tooth model into the tooth image based on the pose information, and generating a detection result of the target orthodontic effect based on the tooth image. With the method provided by the disclosure, during dental diagnosis and treatment the user only needs to capture a two-dimensional image that includes the teeth. Such an image can be obtained with any device that has a shooting function, such as a terminal, so the user can complete the detection at home without a professional intraoral scanner. Frequent follow-up visits are therefore unnecessary: by collecting images of the teeth being corrected, the data relevant to the orthodontic effect can be acquired automatically and in real time and the orthodontic effect detected. The operation is simple and easy to implement, and it also allows the relevant medical staff to keep track of the correction process and adjust the correction plan in time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present disclosure, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a method for detecting orthodontic effects according to an embodiment of the disclosure;
fig. 2 is a schematic diagram of an application scenario provided by the embodiment of the present disclosure;
FIG. 3 is a schematic view of a dental image provided by an embodiment of the present disclosure;
FIG. 4 is a schematic view of a tooth model provided in accordance with an embodiment of the present disclosure;
fig. 5 is another schematic flowchart of a method for detecting an orthodontic effect according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a device for detecting orthodontic effects according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
With the development of society, orthodontic treatment is attracting increasing attention as a form of oral aesthetic treatment. Orthodontic treatment works primarily by applying orthodontic forces to the teeth so that the periodontal tissue is remodelled, thereby altering the position of the teeth within the alveolar bone. It can improve malocclusion caused by crowded or abnormally arranged teeth and thus achieve long-term stabilization of the periodontal tissue. "Three-dimensional digital orthodontics" is a fully digital correction technology that is widely used; compared with traditional orthodontics it corrects malpositioned teeth, buck teeth and the like more accurately, produces a more natural result, takes less time and is less prone to relapse, which makes it popular with dental practitioners. During oral diagnosis and treatment, a doctor typically acquires a tooth image of a user through the handle of an intraoral scanner, views the tooth image shown on a display, and, after diagnosis, judges whether the user's teeth are missing or defective. In the traditional orthodontic workflow, the user needs to go to the hospital several times for follow-up. At every follow-up visit, the user's teeth must be scanned with an intraoral scanner to obtain and process a current tooth model, which is then compared with a preset model to check whether the correction progress matches the pre-designed plan, producing an error report and an assessment of the treatment effect.
In order to solve this technical problem, the embodiments of the present disclosure provide a method for detecting a tooth straightening effect: during dental diagnosis and treatment, a tooth image is photographed in real time and the current tooth straightening effect is analysed based on the tooth image and an expected model. The method is simple and convenient to operate, and the correction data it produces can be provided to the relevant medical staff so that subsequent adjustments can be made accordingly. Frequent follow-up visits are not needed, which reduces the waste of manpower and material resources to a certain extent. The details are set forth in one or more of the following examples.
Fig. 1 shows a method for detecting a tooth straightening effect provided by an embodiment of the present disclosure. The method is used to detect the current tooth straightening effect so that the straightening condition can be understood, and specifically includes the following steps S110 to S140 shown in fig. 1:
and S110, acquiring a tooth image, wherein the tooth image comprises a target tooth.
Specifically, the method for detecting the orthodontic effect may be performed by a terminal or a server. The terminal or the server detects the orthodontic effect of the target tooth from the tooth model and the tooth image. The target tooth is a single tooth; one tooth image may include a plurality of teeth, and each tooth can be treated as the target tooth when it is detected. For example, in the application scenario shown in fig. 2, the server 22 stores the tooth model; the server 22 receives the tooth image transmitted by the terminal 21 and then detects the orthodontic effect based on the tooth image and the tooth model. The tooth image may be captured by the terminal 21, acquired by the terminal 21 from another device, or obtained by the terminal 21 by performing image processing on a preset image, where the preset image may itself be captured by the terminal 21 or acquired from another device. The other devices are not particularly limited here. In another application scenario, the terminal 21 stores the tooth model, and after the terminal 21 acquires the tooth image it detects the orthodontic effect according to the tooth image and the tooth model.
It is to be understood that the method for detecting the orthodontic effect provided by the embodiments of the present disclosure is not limited to the scenarios described above. The following description takes as an example the case in which the terminal 21 alone detects the orthodontic effect.
Understandably, the terminal acquires a tooth image including a target tooth through an acquisition device. The tooth image includes at least one relatively complete tooth; if the tooth image includes a plurality of teeth, the orthodontic condition of each of them needs to be analysed. The tooth image is shown in fig. 3, which is a schematic diagram of tooth images provided by the embodiment of the disclosure: tooth image 310 includes two teeth whose partial outlines are clear, marked as tooth 311 and tooth 312, and tooth image 320 includes tooth 321. The acquisition device may be embedded in the terminal or be an external device used together with the terminal. The terminal may be a mobile phone or a tablet computer; the embedded acquisition device may be one carried by the terminal itself, such as a camera configured on the terminal, which captures a two-dimensional image of the teeth or captures video data of the teeth, each frame of which is a two-dimensional tooth image. The external device may be any device with an acquisition function, such as an oral-cavity imaging instrument, which can likewise capture frame-by-frame images of the teeth in the oral cavity; the specific external device is not limited. The manner in which the tooth image is obtained is therefore not limited and may be chosen by the user.
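Where the tooth image comes from video captured by the terminal, the per-frame extraction described above can be sketched as follows. This is a minimal illustration only; the file name, the sampling interval and the use of OpenCV are assumptions, not part of the disclosure.

```python
import cv2

def extract_frames(video_path: str, every_n: int = 10):
    """Yield every n-th frame of a recorded tooth video as a 2D tooth image."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            yield frame  # each retained frame is a candidate two-dimensional tooth image
        index += 1
    cap.release()

# Hypothetical usage: frames = list(extract_frames("intraoral_clip.mp4"))
```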
Optionally, before the terminal acquires the tooth image, the terminal needs to acquire a tooth model, and the method specifically includes: acquiring tooth scanning data, and constructing a three-dimensional tooth model according to the tooth scanning data, wherein the three-dimensional tooth model comprises a tooth model corresponding to at least one tooth; the tooth scan data is acquired by a three-dimensional scanner.
Optionally, the tooth models corresponding to each tooth in the three-dimensional tooth model are numbered.
It can be understood that before the terminal captures the tooth image, the teeth are scanned with a three-dimensional scanning device to obtain tooth scanning data. For example, when a user's teeth need to be corrected, the relevant medical staff scan all of the user's teeth with a three-dimensional scanner to generate tooth scanning data, and a tooth model of the user's teeth before correction is constructed from these data; all of the single-tooth models together form the user's three-dimensional tooth model. The pose of each tooth model before correction can also be adjusted to obtain a corrected three-dimensional tooth model; all the tooth models in the corrected three-dimensional tooth model form the expected model, which represents the ideal post-correction result. The following embodiments are described using the corrected tooth model as an example. Because the whole three-dimensional tooth model is obtained from scanning data, it has to be divided into single-tooth models, for example with a neural network model; that is, the three-dimensional tooth model is composed of a plurality of single-tooth models, and each tooth model is numbered. The numbering makes it convenient to locate a single tooth model: for example, if the user has 28 teeth and every tooth needs to be corrected, the 28 single-tooth models are numbered 1 to 28. It is also possible to number only the single-tooth models that require straightening and leave the remaining teeth unnumbered.
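As a concrete illustration of how the numbered single-tooth models might be held in memory, a minimal sketch is given below. The data layout, the 1-to-28 numbering and the mesh representation are assumptions made for illustration, not the structure used by the disclosure.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ToothModel:
    number: int           # e.g. 1..28, matching the numbering described above
    vertices: np.ndarray  # (N, 3) points in the model's world coordinate system
    faces: np.ndarray     # (M, 3) triangle indices of the single-tooth mesh

@dataclass
class ThreeDToothModel:
    teeth: dict = field(default_factory=dict)  # tooth number -> ToothModel

    def add(self, tooth: ToothModel) -> None:
        self.teeth[tooth.number] = tooth

    def by_number(self, number: int) -> ToothModel:
        # locating a single tooth model by its number, as described above
        return self.teeth[number]
```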
Illustratively, referring to fig. 4, fig. 4 is a schematic diagram of a tooth model provided in an embodiment of the present disclosure and shows a three-dimensional tooth model. Tooth model 411 and tooth model 412 in fig. 4 correspond to tooth 311 and tooth 312 in tooth image 310 of fig. 3 respectively; that is, the straightened tooth model corresponding to tooth 311 is 411, the straightened tooth model corresponding to tooth 312 is 412, and the straightened tooth model corresponding to tooth 321 in tooth image 320 is 413.
And S120, identifying the target teeth in the tooth image based on the pre-trained neural network model, and determining the characteristic information of the target teeth.
Optionally, the feature information includes feature points, coordinates of the target tooth, and a number of the target tooth.
Understandably, following S110, after the terminal acquires the tooth image it identifies the tooth image based on the pre-trained neural network model to determine the feature information of the target tooth in the tooth image. The pre-trained neural network model may be any model with feature extraction and recognition functions; the specific model is not limited, as long as it can identify the teeth and output the feature information. The feature information includes feature points, a number and coordinates. If the tooth image includes a plurality of teeth, the feature information of each of them is identified, or a single set of feature information covering all the teeth is obtained, which can also be understood as the feature information corresponding to the tooth image. Referring to fig. 3, tooth image 310 includes tooth 311 and tooth 312, and tooth image 320 includes tooth 321; if single teeth are detected, both tooth 311 and tooth 312 in tooth image 310 may be marked as target teeth, and tooth 321 in tooth image 320 is marked as the target tooth. The following embodiments take tooth 321 as the target tooth. The feature information corresponding to tooth 321 includes its feature points, which may be points in the tooth image where the grey value changes sharply or points on the tooth edge with large curvature. The coordinates of tooth 321 specifically refer to the coordinate points that make up the contour line (edge line) of tooth 321. The number of tooth 321 is used to locate the specific single-tooth model in the pre-constructed three-dimensional tooth model; that is, the number of tooth 321 is the same as the number of one single-tooth model in the three-dimensional tooth model, so that the tooth model corresponding to the target tooth can be determined. Here the tooth model corresponding to tooth 321 is 413 in fig. 4.
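A minimal sketch of turning such a per-tooth prediction into the feature information described above (number, contour coordinates and feature points) is shown below. The function `predict_tooth_mask_and_number` is a hypothetical placeholder for the pre-trained neural network, and the use of OpenCV corner detection for the feature points is an assumption of this sketch, not the disclosure's method.

```python
import cv2
import numpy as np

def tooth_features(image_bgr, predict_tooth_mask_and_number):
    """Return the feature information of the target tooth in one image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # hypothetical pre-trained network: returns a (H, W) uint8 mask of the
    # target tooth and its tooth number (an integer label)
    mask, number = predict_tooth_mask_and_number(image_bgr)

    # coordinates: the contour line (edge line) of the target tooth in image coordinates
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)

    # feature points: corners where the grey value changes sharply, restricted to the tooth
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                      minDistance=5, mask=mask)
    feature_points = corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))

    return {"number": number, "contour": contour, "feature_points": feature_points}
```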
And S130, determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth.
It is understood that, following S120, the tooth model corresponding to the target tooth is determined according to the number in the feature information of the target tooth, and the pose information of the target tooth is then determined according to the feature points and/or coordinates in the feature information together with the related information of the tooth model. The pose information includes orientation and spatial position information: the orientation refers to the posture of the tooth, for example whether it faces upward or downward relative to the three-dimensional tooth model, and the spatial position information can be understood as three-dimensional coordinate information.
Optionally, after determining the pose information of the target tooth, the method further includes: and acquiring other tooth images including the target tooth, and updating the pose information according to the first characteristic point in the characteristic information corresponding to the other tooth images and the second characteristic point in the characteristic information corresponding to the tooth images.
Understandably, if only one image including the target tooth is obtained, the pose information of the target tooth is calculated directly from the coordinates in the feature information and the related information of the tooth model corresponding to the target tooth. If a plurality of images including the target tooth are acquired, then after the pose information has been calculated from the coordinates corresponding to any one of these tooth images and the related information of the tooth model, the pose information is updated based on the feature points corresponding to the plurality of tooth images; that is, the determined pose information is optimised by matching the feature points detected in the different tooth images, which improves accuracy. Here the feature points corresponding to the plurality of tooth images include the feature points contained in the chosen tooth image and those detected in the other tooth images. Alternatively, several pose estimates of the target tooth may be calculated from the feature information of the several tooth images. For example, if 10 tooth images including tooth 321 are acquired, 10 pose estimates for tooth 321 are calculated from the coordinates in the feature information of each of the 10 tooth images, and these 10 estimates are then jointly optimised according to the feature points in the feature information of the 10 tooth images to obtain the optimal pose information, improving the accuracy of the detection result.
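One way to combine several per-image pose estimates into a single refined pose, weighting each estimate by how many feature points it matched, is sketched below. Weighted quaternion averaging stands in for the unspecified joint optimisation in the text and is an assumption of this illustration, not the disclosure's algorithm.

```python
import numpy as np

def fuse_poses(quaternions, translations, match_counts):
    """quaternions: (K, 4) unit quaternions; translations: (K, 3); match_counts: (K,)."""
    w = np.asarray(match_counts, dtype=float)
    w = w / w.sum()

    # weighted quaternion average: dominant eigenvector of sum_i w_i * q_i q_i^T
    A = sum(wi * np.outer(q, q) for wi, q in zip(w, np.asarray(quaternions)))
    eigvals, eigvecs = np.linalg.eigh(A)
    q_fused = eigvecs[:, np.argmax(eigvals)]

    # weighted average of the translation components
    t_fused = (w[:, None] * np.asarray(translations)).sum(axis=0)
    return q_fused, t_fused
```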
And S140, mapping the tooth model to the tooth image based on the pose information, and generating a detection result of the target tooth correction effect based on the tooth image.
Understandably, after the terminal determines the pose information, the tooth model corresponding to the number of the target tooth is projected onto the tooth image according to the pose information. When the tooth image contains only one target tooth, the resulting pose information can also be understood as the pose information corresponding to the tooth image. This establishes the relationship between the target tooth in the tooth image and the tooth model, and hence between the image coordinate system of the tooth image and the world coordinate system of the tooth model. The tooth model is then projected into the tooth image, and a detection result of the target orthodontic effect is generated based on the mapped tooth image. After the three-dimensional tooth model is mapped into the two-dimensional tooth image, the image contains both the two-dimensional projection of the tooth model and the target tooth; the projected model and the target tooth overlap over a large area, but differences remain at the pixel level, and these differences reflect the orthodontic condition of the teeth during the correction process. For example, referring to fig. 3, the tooth model is projected into tooth image 320 to obtain mapping image 330, which includes the two-dimensional tooth model 331 and the target tooth 321, with the two-dimensional tooth model 331 covering most of the target tooth 321. The pixel difference between the tooth image and the mapping image can then be calculated: the difference between the pixels of the target tooth in the tooth image and those of the tooth model in the mapping image reflects the tooth correction condition, and the detection result may indicate whether the correction effect meets expectations. After the detection result is obtained, the data involved in the detection process and the detection result can be sent to the relevant medical staff for review; they judge whether the result meets expectations, which makes it easy to follow the subsequent correction progress and to adjust the correction approach in time.
Optionally, mapping the tooth model into the tooth image based on the pose information and generating a detection result of the target orthodontic effect based on the tooth image specifically includes: mapping the tooth model into the tooth image based on the pose information to obtain a mapping image; calculating a pixel difference between the tooth image and the mapping image; and generating a detection result of the target tooth correction effect according to the pixel difference.
It can be understood that generating the detection result in S140 specifically includes the following: based on the pose information, the tooth model corresponding to the target tooth is mapped into the tooth image to obtain a mapping image. Referring to fig. 3, tooth image 320 includes tooth 321; after the pose information of tooth 321 is obtained, the tooth model 413 corresponding to tooth 321 is mapped into tooth image 320 according to the pose information, and the mapped tooth image, that is, the mapping image, is recorded as 330. The mapping image 330 includes the mapped two-dimensional tooth model 331 and the target tooth 321, most of which is covered by the tooth model 331. After the mapping image 330 is obtained, the pixel difference between the target tooth 321 and the tooth model 331 in the mapping image 330 is calculated, or the pixel difference between tooth image 320 and mapping image 330 is calculated directly; in other words, the fitting error between the target tooth 321 and the tooth model 331 is calculated to obtain the pixel difference. Finally, the pixel difference is analysed to determine whether the tooth correction meets expectations, that is, whether the current correction state conforms to the constructed corrected tooth model or deviates from it, and the detection result of the target tooth correction effect is generated from the pixel difference and this analysis.
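A minimal sketch of this mapping-and-comparison step is given below: the 3D tooth model is projected into the image with the estimated pose, and the pixel difference between the projected model and the detected target tooth is measured. The pinhole camera intrinsics, the convex-hull silhouette and the IoU score are simplifying assumptions made for illustration.

```python
import cv2
import numpy as np

def pixel_difference(model_vertices, rvec, tvec, camera_matrix, tooth_mask):
    """model_vertices: (N, 3); tooth_mask: (H, W) uint8, 255 inside the target tooth."""
    h, w = tooth_mask.shape

    # map the 3D tooth model into the 2D image plane using the estimated pose
    pts_2d, _ = cv2.projectPoints(model_vertices.astype(np.float64),
                                  rvec, tvec, camera_matrix, None)
    pts_2d = pts_2d.reshape(-1, 2).astype(np.int32)

    # rasterise an approximate silhouette of the mapped model
    model_mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(model_mask, cv2.convexHull(pts_2d), 255)

    # pixel difference: pixels where the mapped model and the target tooth disagree
    diff = cv2.bitwise_xor(model_mask, tooth_mask)
    overlap = cv2.bitwise_and(model_mask, tooth_mask)
    union = cv2.bitwise_or(model_mask, tooth_mask)
    iou = float(np.count_nonzero(overlap)) / max(np.count_nonzero(union), 1)
    return int(np.count_nonzero(diff)), iou
```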
The method for detecting the orthodontic effect provided by the embodiment of the disclosure comprises: acquiring a tooth image, wherein the tooth image comprises a target tooth; identifying the target tooth in the tooth image based on a pre-trained neural network model, and determining the feature information of the target tooth; determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth; and mapping the tooth model into the tooth image based on the pose information, and generating a detection result of the target orthodontic effect based on the tooth image. With this method, during dental diagnosis and treatment the user can capture tooth images with a mobile terminal, such as a mobile phone, and complete the detection at home without a professional intraoral scanner. Frequent follow-up visits are therefore unnecessary: the data relevant to the orthodontic effect can be conveniently acquired in real time, the orthodontic effect is analysed by automatic real-time comparison, and a detection result is generated. The operation is simple, convenient and easy to implement. It also enables the relevant medical staff to carry out remote diagnosis, improves the user's treatment compliance, allows the correction plan to be adjusted in time, and improves the success rate of orthodontic treatment.
On the basis of the foregoing embodiment, optionally, determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth specifically includes the following steps S510 to S520 as shown in fig. 5:
and S510, acquiring a tooth model with the same number as that of the target tooth from the three-dimensional tooth model.
It can be understood that, after the terminal constructs the three-dimensional tooth model from the tooth scanning data, the tooth model with the same number as the target tooth is selected from the three-dimensional tooth model. The three-dimensional tooth model includes the single-tooth models of all the teeth to be straightened, and each single-tooth model has a number. For example, if the number of the target tooth 321 in tooth image 320 of fig. 3 is 16, the single-tooth model with the same number 16, that is, tooth model 413 in fig. 4, is obtained from the three-dimensional tooth model.
And S520, determining the pose information of the target tooth according to the characteristic information and the tooth model.
Understandably, after the tooth model corresponding to the target tooth is determined based on the above S510, the pose information of the target tooth is determined according to the coordinates in the feature information and the related information of the tooth model.
Optionally, determining pose information of the target tooth according to the feature information and the tooth model specifically includes: determining a coordinate system of the tooth model, and converting the coordinates of the target tooth into the coordinate system; and determining the pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model.
Understandably, determining the pose information of the target tooth according to the feature information and the tooth model specifically includes the following. First, the coordinate system of the tooth model is determined; this coordinate system is a world coordinate system, fixed when the teeth are scanned by the three-dimensional scanner. Then the coordinates of the target tooth are converted into that world coordinate system: the coordinates associated with the tooth image, including those of the target tooth, are expressed in the image coordinate system, and converting the coordinates of the target tooth into the world coordinate system realises the conversion between the image coordinate system and the world coordinate system. The whole tooth image can thus be understood as being unified into the world coordinate system, which makes it convenient to determine the pose information of the target tooth in that system. Finally, the pose information of the target tooth in the world coordinate system is determined based on the converted coordinates of the target tooth and the pose information of the tooth model in the world coordinate system, where the coordinates of the target tooth specifically refer to the coordinates that make up the contour line or edge line of the target tooth in the tooth image.
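As an illustration of linking the image coordinate system of the photo to the world coordinate system of the scanned model, the sketch below recovers the rigid transform between the two frames from a few 2D-3D point correspondences. The availability of such correspondences and the use of OpenCV's solvePnP are assumptions of this sketch; the disclosure does not prescribe this particular solver.

```python
import cv2
import numpy as np

def image_to_model_pose(points_2d, points_3d, camera_matrix):
    """points_2d: (N, 2) image coordinates; points_3d: (N, 3) model/world coordinates, N >= 4."""
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64),
                                  camera_matrix, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    # a model (world) point X maps into the image as K @ (R @ X + t), which is the
    # conversion between the image coordinate system and the world coordinate system
    return R, tvec.reshape(3)
```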
Optionally, determining pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model specifically includes: fitting the contour line of the target tooth and the contour line of the tooth model based on the converted coordinates of the target tooth and the position information of the tooth model to determine the position information of the target tooth; determining the orientation of the target tooth according to the orientation of the tooth model; and obtaining pose information of the target tooth in the coordinate system according to the orientation and the position information of the target tooth.
It can be understood that determining the pose information of the target tooth in the coordinate system based on the converted coordinates of the target tooth and the pose information of the tooth model specifically includes the following. The pose information comprises position information and orientation: the position information is a three-dimensional space coordinate, and the orientation determines the posture of the tooth within the whole tooth model. The contour line of the target tooth is fitted to the contour line of the tooth model based on the converted two-dimensional coordinates of the target tooth's contour and the position information of the tooth model in the world coordinate system, which yields the position information of the target tooth in that coordinate system. The orientation of the target tooth in the coordinate system is then determined from the orientation in the pose information of the tooth model. Finally, the pose information of the target tooth in the coordinate system is obtained from the orientation and the position information of the target tooth. Once the pose information of the target tooth is determined, the conversion relationship between the tooth image and the tooth model is established, which facilitates the subsequent mapping of the tooth model into the tooth image for analysing the orthodontic effect.
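A minimal sketch of the contour-fitting step is given below: the projected contour of the tooth model is aligned to the contour detected in the image by minimising point-to-contour distances. Optimising only an in-plane offset and a scale factor is a deliberate simplification; the disclosure does not specify the fitting procedure.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def fit_contour(projected_model_contour, image_contour):
    """Both arguments are (N, 2) arrays of 2D points in image coordinates."""
    tree = cKDTree(image_contour)

    def residuals(params):
        dx, dy, s = params
        moved = projected_model_contour * s + np.array([dx, dy])
        dist, _ = tree.query(moved)  # distance to the nearest detected contour point
        return dist

    result = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0]))
    dx, dy, s = result.x
    return {"offset": (float(dx), float(dy)), "scale": float(s),
            "fit_error": float(np.mean(result.fun))}
```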
According to the method for detecting the orthodontic effect provided by this embodiment, the tooth model with the same number as the target tooth is obtained from the three-dimensional tooth model, that is, the tooth model corresponding to the target tooth is located; the pose information of the target tooth in the coordinate system of the tooth model is then determined from the coordinates of the target tooth's contour line in the feature information and the pose information of the tooth model. This establishes the connection between the tooth model and the target tooth, making it convenient to map the tooth model into the tooth image according to the pose information and to analyse the orthodontic effect of each tooth.
Fig. 6 is a schematic structural diagram of a device for detecting an orthodontic effect according to an embodiment of the disclosure. The apparatus for detecting an orthodontic effect according to the embodiment of the present disclosure may perform the processing procedure according to the above method for detecting an orthodontic effect, and as shown in fig. 6, the apparatus 600 for detecting an orthodontic effect includes:
an acquisition unit 610 for acquiring a dental image, the dental image including a target tooth;
the recognition unit 620 is configured to recognize a target tooth in the tooth image based on a pre-trained neural network model, and determine feature information of the target tooth;
a determining unit 630, configured to determine pose information of the target tooth according to the feature information and a tooth model corresponding to the target tooth;
a detection unit 640 for mapping the tooth model into the tooth image based on the pose information, and generating a detection result of the target orthodontic effect based on the tooth image.
Optionally, the apparatus 600 further comprises a construction unit, which is used before the tooth image is acquired and is specifically configured to:
acquiring tooth scanning data, and constructing a three-dimensional tooth model according to the tooth scanning data, wherein the three-dimensional tooth model comprises a tooth model corresponding to at least one tooth;
the tooth scanning data is acquired by a three-dimensional scanner.
Optionally, the characteristic information in the apparatus 600 includes the number of the target tooth; the tooth model corresponding to each tooth in the three-dimensional tooth model has a number.
Optionally, when determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth, the determining unit 630 is specifically configured to:
acquiring a tooth model with the same number as the target tooth from the three-dimensional tooth model;
and determining the pose information of the target tooth according to the characteristic information and the tooth model.
Optionally, the feature information in the apparatus 600 further includes coordinates of the target tooth.
Optionally, when determining the pose information of the target tooth according to the feature information and the tooth model, the determining unit 630 is specifically configured to:
determining a coordinate system of the tooth model, and converting the coordinates of the target tooth into the coordinate system;
and determining the pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model.
Optionally, the pose information in the apparatus 600 includes orientation and position information.
Optionally, when determining the pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model, the determining unit 630 is specifically configured to:
fitting the contour line of the target tooth and the contour line of the tooth model based on the converted coordinates of the target tooth and the position information of the tooth model to determine the position information of the target tooth;
determining an orientation of the target tooth according to the orientation of the tooth model;
and obtaining the pose information of the target tooth in the coordinate system according to the orientation and the position information of the target tooth.
Optionally, the feature information in the apparatus 600 further includes feature points, and the feature points are used to identify local features of the target tooth.
Optionally, the apparatus 600 further includes an updating unit, which is configured, after the pose information of the target tooth has been determined, to:
and acquiring other tooth images including the target tooth, and updating the pose information according to a first characteristic point in characteristic information corresponding to the other tooth images and a second characteristic point in the characteristic information corresponding to the tooth images.
Optionally, when mapping the tooth model into the tooth image based on the pose information and generating a detection result of the target orthodontic effect based on the tooth image, the detection unit 640 is specifically configured to:
mapping the tooth model to the tooth image based on the pose information to obtain a mapping image;
calculating a pixel difference between the tooth image and the mapping image;
and generating a detection result of the target tooth correction effect according to the pixel difference.
The device for detecting orthodontic effect of the embodiment shown in fig. 6 can be used to implement the technical solution of the above method embodiments, and the implementation principle and technical effect are similar, and are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device provided by the embodiment of the present disclosure may execute the processing flow of the method for detecting an orthodontic effect provided by the above embodiments. As shown in fig. 7, the electronic device 700 includes: a processor 710, a communication interface 720 and a memory 730, wherein a computer program is stored in the memory 730 and configured to be executed by the processor 710 to implement the method for detecting an orthodontic effect described above.
In addition, the embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method for detecting the orthodontic effect according to the above embodiment.
Furthermore, the embodiments of the present disclosure also provide a computer program product, which includes a computer program or instructions, when executed by a processor, to implement the method for detecting an orthodontic effect as described above.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for detecting an orthodontic effect, comprising:
acquiring a tooth image, wherein the tooth image comprises a target tooth;
identifying a target tooth in the tooth image based on a pre-trained neural network model, and determining characteristic information of the target tooth;
determining pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth;
and mapping the tooth model into the tooth image based on the pose information, and generating a detection result of the target orthodontic effect based on the tooth image.
2. The method of claim 1, wherein prior to acquiring the dental image, the method further comprises:
acquiring tooth scanning data, and constructing a three-dimensional tooth model according to the tooth scanning data, wherein the three-dimensional tooth model comprises a tooth model corresponding to at least one tooth;
the tooth scanning data is acquired by a three-dimensional scanner.
3. The method according to claim 2, wherein the feature information includes a number of the target tooth, and a number exists for each tooth model included in the three-dimensional tooth model; determining pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth, wherein the determining pose information of the target tooth comprises the following steps:
acquiring a tooth model with the same number as the target tooth from the three-dimensional tooth model;
and determining the pose information of the target tooth according to the characteristic information and the tooth model.
4. The method of claim 3, wherein the feature information further comprises coordinates of the target tooth; determining pose information of the target tooth according to the feature information and the tooth model, wherein the determining pose information of the target tooth comprises:
determining a coordinate system of the tooth model, and converting the coordinates of the target tooth into the coordinate system;
and determining the pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model.
5. The method according to claim 4, characterized in that the pose information comprises orientation and position information; the determining the pose information of the target tooth in the coordinate system based on the transformed coordinates of the target tooth and the pose information of the tooth model comprises:
fitting the contour line of the target tooth and the contour line of the tooth model based on the converted coordinates of the target tooth and the position information of the tooth model to determine the position information of the target tooth;
determining an orientation of the target tooth according to the orientation of the tooth model;
and obtaining the pose information of the target tooth in the coordinate system according to the orientation and the position information of the target tooth.
6. The method of claim 1, wherein the feature information further comprises feature points for identifying local features of the target tooth;
after determining pose information of the target tooth, the method further comprises:
and acquiring other tooth images including the target tooth, and updating the pose information according to a first characteristic point in characteristic information corresponding to the other tooth images and a second characteristic point in the characteristic information corresponding to the tooth images.
7. The method according to claim 1, wherein the mapping the tooth model into the tooth image based on the pose information and generating the detection result of the target orthodontic effect based on the tooth image comprises:
mapping the tooth model to the tooth image based on the pose information to obtain a mapping image;
calculating a pixel difference between the tooth image and the mapping image;
and generating a detection result of the target tooth correction effect according to the pixel difference.
8. A device for detecting orthodontic effects, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a tooth image which comprises a target tooth;
the recognition unit is used for recognizing the target teeth in the tooth images based on a pre-trained neural network model and determining the characteristic information of the target teeth;
the determining unit is used for determining the pose information of the target tooth according to the feature information and the tooth model corresponding to the target tooth;
a detection unit configured to map the tooth model into the tooth image based on the pose information, and generate a detection result of the target orthodontic effect based on the tooth image.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of detecting an orthodontic effect according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for detecting an orthodontic effect according to any one of claims 1 to 7.
CN202210194371.2A 2022-03-01 2022-03-01 Method, device, equipment and storage medium for detecting orthodontic effect Pending CN114549509A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210194371.2A CN114549509A (en) 2022-03-01 2022-03-01 Method, device, equipment and storage medium for detecting orthodontic effect
PCT/CN2023/078959 WO2023165505A1 (en) 2022-03-01 2023-03-01 Tooth correction effect evaluation method, apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210194371.2A CN114549509A (en) 2022-03-01 2022-03-01 Method, device, equipment and storage medium for detecting orthodontic effect

Publications (1)

Publication Number Publication Date
CN114549509A true CN114549509A (en) 2022-05-27

Family

ID=81662379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210194371.2A Pending CN114549509A (en) 2022-03-01 2022-03-01 Method, device, equipment and storage medium for detecting orthodontic effect

Country Status (2)

Country Link
CN (1) CN114549509A (en)
WO (1) WO2023165505A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116593151A (en) * 2023-07-17 2023-08-15 创新奇智(青岛)科技有限公司 Dental socket chest expander testing method and device, electronic equipment and readable storage medium
WO2023165505A1 (en) * 2022-03-01 2023-09-07 先临三维科技股份有限公司 Tooth correction effect evaluation method, apparatus, device, and storage medium
WO2024153136A1 (en) * 2023-01-17 2024-07-25 上海时代天使医疗器械有限公司 Tooth deviation estimation method and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117857912B (en) * 2024-01-15 2024-06-07 中山大学附属口腔医院 Information acquisition method and system for dentistry

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320325A (en) * 2018-01-04 2018-07-24 华夏天宇(北京)科技发展有限公司 The generation method and device of dental arch model
CN111784754B (en) * 2020-07-06 2024-01-12 浙江得图网络有限公司 Tooth orthodontic method, device, equipment and storage medium based on computer vision
CN111931843A (en) * 2020-08-10 2020-11-13 深圳爱舒笑科技有限公司 Method for monitoring tooth position based on image processing
CN112807108B (en) * 2021-01-27 2022-03-01 清华大学 Method for detecting tooth correction state in orthodontic correction process
CN114549509A (en) * 2022-03-01 2022-05-27 先临三维科技股份有限公司 Method, device, equipment and storage medium for detecting orthodontic effect

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023165505A1 (en) * 2022-03-01 2023-09-07 先临三维科技股份有限公司 Tooth correction effect evaluation method, apparatus, device, and storage medium
WO2024153136A1 (en) * 2023-01-17 2024-07-25 上海时代天使医疗器械有限公司 Tooth deviation estimation method and system
CN116593151A (en) * 2023-07-17 2023-08-15 创新奇智(青岛)科技有限公司 Dental socket chest expander testing method and device, electronic equipment and readable storage medium
CN116593151B (en) * 2023-07-17 2023-09-12 创新奇智(青岛)科技有限公司 Dental socket chest expander testing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2023165505A1 (en) 2023-09-07

Similar Documents

Publication Publication Date Title
CN114549509A (en) Method, device, equipment and storage medium for detecting orthodontic effect
US11889991B2 (en) Using an intraoral mirror with an integrated camera to record dental status, and applications thereof
US12070367B2 (en) Correction of unclear features in three-dimensional models of dental sites
Hutton et al. An evaluation of active shape models for the automatic identification of cephalometric landmarks
CN105007856B (en) Method and system for being engaged registration
CN108986209B (en) Method and system for evaluating implant planting precision
WO2021078065A1 (en) Breast three-dimensional point cloud reconstruction method and apparatus, and storage medium and computer device
US20100124367A1 (en) Image registration
JP2017529919A (en) A method for aligning an intraoral digital 3D model
CN110868913B (en) Tool for tracking gum line and displaying periodontal measurements using intraoral 3D scan
CN108904084B (en) System and method for acquiring intraoral digitized impressions
CN111931843A (en) Method for monitoring tooth position based on image processing
US20230215027A1 (en) Oral image marker detection method, and oral image matching device and method using same
CN112790887A (en) Oral imaging method, system and device, and readable storage medium
JP2006331009A (en) Image processor and image processing method
CN118252639B (en) False tooth edge line determination method, device, medium and equipment
CN114830179A (en) Determining spatial relationships between upper teeth and facial bones
CN115620871A (en) Corrector information generation method and device for invisible orthodontic platform
Ahn et al. Marginal bone destructions in dental radiography using multi-template based on internet services

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination