CN117323038A - Orthodontic treatment monitoring method, orthodontic treatment monitoring device, orthodontic treatment monitoring equipment and storage medium - Google Patents


Info

Publication number
CN117323038A
CN117323038A (application CN202311397505.1A)
Authority
CN
China
Prior art keywords
data
teeth
target
reference data
registration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311397505.1A
Other languages
Chinese (zh)
Inventor
田彦
王昊
王嘉磊
张健
江腾飞
赵晓波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN202311397505.1A priority Critical patent/CN117323038A/en
Publication of CN117323038A publication Critical patent/CN117323038A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61CDENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
    • A61C7/002Orthodontic computer assisted systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The present disclosure relates to an orthodontic treatment monitoring method, an orthodontic treatment monitoring device, orthodontic treatment monitoring equipment, and a storage medium. The orthodontic treatment monitoring method comprises: obtaining observation data of the teeth of a target user at a second time; acquiring reference data of the teeth of the target user at a first time; registering corresponding teeth in the observation data and the reference data to obtain target registration data, wherein the target registration data comprises target observation data corresponding to the observation data and target reference data corresponding to the reference data; and estimating an orthodontic result of the target teeth based on the corresponding teeth in the target observation data and the target reference data. The method can evaluate the degree of correction of a patient's teeth at each stage of the orthodontic process, allowing the patient to follow the treatment effect in real time.

Description

Orthodontic treatment monitoring method, orthodontic treatment monitoring device, orthodontic treatment monitoring equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for orthodontic treatment monitoring.
Background
In current orthodontic practice, braces are typically worn for several months before being removed so that a professional doctor can evaluate the treatment effect. During this period the patient has no quantitative way to assess the degree of correction, which undermines confidence in continuing treatment.
Existing orthodontic treatment monitoring methods mainly rely on software programs that encode machine-understandable orthodontic rules or are data-driven. Such programs can only estimate the final displacement vectors, rotation vectors, and similar information of the teeth after treatment is finished; they cannot show the degree of correction at each stage during treatment, so per-stage progress cannot be known in real time. An orthodontic treatment monitoring method that evaluates the patient's degree of correction at each stage in real time is therefore urgently needed.
Disclosure of Invention
To solve the above technical problem, embodiments of the present disclosure provide an orthodontic treatment monitoring method, an orthodontic treatment monitoring device, an electronic device, and a storage medium, which can evaluate the degree of correction of a patient's teeth at each stage of the orthodontic process and allow the patient to follow the treatment effect in real time.
In a first aspect, embodiments of the present disclosure provide an orthodontic treatment monitoring method, comprising:
obtaining observation data of the teeth of the target user at a second time;
acquiring reference data of the teeth of the target user at a first time;
registering the observation data with the corresponding teeth on the reference data to obtain target registration data, wherein the target registration data comprises target observation data corresponding to the observation data and target reference data corresponding to the reference data;
And estimating an orthodontic result of the target tooth based on the target observation data and the corresponding tooth in the target reference data.
In a second aspect, embodiments of the present disclosure provide an orthodontic treatment monitoring device comprising:
the second acquisition unit is used for acquiring observation data of the teeth of the target user at a second time;
the first acquisition unit is used for acquiring reference data of the teeth of the target user at a first time;
the registration unit is used for registering the observation data and the corresponding teeth on the reference data to obtain target registration data, wherein the target registration data comprises target observation data corresponding to the observation data and target reference data corresponding to the reference data;
and the estimation unit is used for estimating the orthodontic result of the target teeth based on the corresponding teeth in the target observation data and the target reference data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the orthodontic treatment monitoring method described above.
In a fourth aspect, embodiments of the present disclosure provide a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of an orthodontic treatment monitoring method as described above.
An embodiment of the present disclosure provides an orthodontic treatment monitoring method comprising: obtaining observation data of the teeth of a target user at a second time; acquiring reference data of the teeth of the target user at a first time; registering corresponding teeth in the observation data and the reference data to obtain target registration data, wherein the target registration data comprises target observation data corresponding to the observation data and target reference data corresponding to the reference data; and estimating an orthodontic result of the target teeth based on the corresponding teeth in the target observation data and the target reference data. The method acquires point cloud data of the target user's teeth at different times, eliminates the jaw pose by registering the fixed teeth shared by the two point clouds, and monitors the pose transformation of the orthodontic teeth on that basis. This effectively improves the accuracy of tooth pose estimation and lets the user follow the degree of correction at each stage of the orthodontic process in real time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an orthodontic treatment monitoring method according to an embodiment of the disclosure;
fig. 2 is a schematic diagram of a refinement flow of S110 in the orthodontic treatment monitoring method shown in fig. 1;
fig. 3 is a schematic structural diagram of an orthodontic treatment monitoring method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural view of an orthodontic treatment monitoring device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
At present, orthodontic treatment monitoring can be performed by rendering images and matching them against observations. However, this approach is sensitive to the instance segmentation result: if the tooth boundaries produced by instance segmentation are inaccurate, the accuracy of tooth pose estimation drops significantly, accurate monitoring of the treatment effect is compromised, and the user experience suffers.
To address this technical problem, embodiments of the present disclosure provide an orthodontic treatment monitoring method that estimates the pose transformation of teeth before and after orthodontic treatment using 2D/3D instance segmentation and point cloud registration. Specifically, observation data of the teeth at a second time and reference data of the teeth at a first time are obtained; global registration is performed on the fixed teeth corresponding to the observation data and the reference data to eliminate the jaw pose; on that basis, local registration is performed on the target teeth corresponding to the observation data and the reference data; and finally the transformation of the target teeth before and after treatment is determined, so that the orthodontic effect can be understood immediately and the user experience is improved. The method is described in detail in one or more of the following embodiments.
Fig. 1 is a flow chart of an orthodontic treatment monitoring method provided by an embodiment of the present disclosure. The method may be executed by a terminal or a server. In one feasible application scenario, the server determines the transformation of the target teeth before and after orthodontic treatment from the acquired observation data and reference data. The method specifically includes the following steps S110 to S140 shown in Fig. 1:
s110, acquiring observation data of the teeth of the target user at a second time.
The observation data is tooth scan data obtained by scanning the tooth surfaces of the target user at the second time with a second scanning device.
It will be appreciated that, in response to a scan instruction, the second scanning device scans the tooth surfaces of the target user at the second time, acquires reconstruction images, and reconstructs from them a three-dimensional digitized model of the target user's teeth at the second time. The observation data includes this three-dimensional digitized model. The number of target teeth scanned is typically more than one, preferably the full dentition, i.e. all teeth in the mouth of the target user.
S120, acquiring reference data of the first time of the teeth of the target user.
The reference data is tooth scan data obtained by scanning the tooth surfaces of the target user at the first time with a first scanning device. The acquisition specifically comprises the following steps:
acquiring three-dimensional dental model data of the teeth of the target user at the first time; and performing three-dimensional instance segmentation on the three-dimensional dental model data to obtain first single-tooth point cloud data, wherein the reference data comprises a plurality of pieces of first single-tooth point cloud data.
It can be appreciated that the first scanning device scans the tooth surfaces of the target user at the first time, acquires dental reconstruction images, and reconstructs from them a three-dimensional digitized model of the teeth (i.e., the three-dimensional dental model data) at the first time. The three-dimensional dental model data is stored for later recall; the reference data comprises this model.
It is understood that, in response to an acquisition instruction, the pre-stored three-dimensional dental model data of the target user at the first time is acquired. Three-dimensional instance segmentation is then performed on this data to obtain the first single-tooth point cloud data together with the class of each single tooth. The segmentation may be performed by a 3D instance segmenter implemented on a multi-view dental mesh. The three-dimensional dental model data specifically comprises the point cloud of the target user's teeth at the first time, denoted Source. Each target tooth (i.e., each target single tooth) has corresponding point cloud data in the reference data; before segmentation this point cloud exists only as a whole, and after segmentation the point cloud of each single tooth is stored independently as first single-tooth point cloud data. The reference data comprises a plurality of pieces of first single-tooth point cloud data, and all of them together form the first point cloud data of the target teeth. The class of each tooth is used to perform point cloud registration by tooth class.
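As an illustrative sketch (not the patent's implementation), the segmenter's output can be thought of as a per-point tooth label, and the single-tooth point clouds are then just the groups of points sharing a label. The helper and the label scheme below are assumptions for illustration only:

```python
import numpy as np

def split_by_tooth_label(points, labels):
    """Group a segmented jaw point cloud into per-tooth clouds.

    points: (N, 3) array of 3D coordinates (the whole jaw scan).
    labels: (N,) array of integer tooth labels (e.g. FDI numbers);
            a background label such as 0 could be filtered out first.
    Returns a dict mapping tooth label -> (M, 3) point array.
    """
    return {int(t): points[labels == t] for t in np.unique(labels)}

# Tiny illustrative example: two "teeth" with three points each.
pts = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0],
                [5, 0, 0], [5, 1, 0], [5, 2, 0]], dtype=float)
lbl = np.array([11, 11, 11, 21, 21, 21])
per_tooth = split_by_tooth_label(pts, lbl)
```

Each value of `per_tooth` then plays the role of one piece of first single-tooth point cloud data, while stacking all values recovers the whole-jaw point cloud.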
It can be understood that each piece of single-tooth point cloud data can undergo pose transformation as an independent unit, and several pieces of first single-tooth point cloud data can also undergo pose transformation as a whole, i.e., the first single-tooth point cloud data of several (for example, all) target single teeth undergo the same pose transformation. A 3D instance segmentation technique takes a 3D model as input and outputs an object label for each three-dimensional point of the model, together with the 3D bounding box and category of each object.
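The "same pose transformation applied to all single teeth" amounts to moving every tooth cloud with one rigid 4x4 matrix, i.e. treating the dentition as a single rigid body. A minimal sketch, assuming the usual homogeneous-coordinate convention (the helper names are illustrative):

```python
import numpy as np

def apply_pose(points, T):
    """Apply a 4x4 homogeneous rigid transform T to an (N, 3) cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

def apply_global_pose(per_tooth, T):
    """Move every single-tooth cloud with the same pose, i.e. treat
    the whole dentition as one rigid body (a jaw-level transform)."""
    return {tooth: apply_pose(p, T) for tooth, p in per_tooth.items()}

# Example: translate the whole jaw by (1, 0, 0).
T = np.eye(4)
T[0, 3] = 1.0
clouds = {11: np.zeros((2, 3)), 21: np.ones((2, 3))}
moved = apply_global_pose(clouds, T)
```

Transforming each tooth independently would instead call `apply_pose` with a per-tooth matrix, which is exactly the later fine-registration case.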
It should be noted that the teeth of the target user are scanned at the first time to obtain tooth scan data and, after a period of orthodontic treatment, scanned again at the second time. That is, the first time may be a moment before treatment or a moment during treatment, and the second time may be a moment during treatment or a moment after treatment; the first time differs from, and precedes, the second time. Orthodontic treatment generally proceeds in stages: the first time may be the start of the first stage (the start of the entire procedure, understood as "before orthodontics") and the second time the end of any stage, or the first time may be the start of one stage and the second time the end of that same stage, understood as "after orthodontics". Note that the end time of one stage is usually the start time of the next.
It is understood that the tooth scan data acquired at the first time and the tooth scan data acquired at the second time each include scan data of the target teeth. The first scanning device may be an intraoral scanner (such as an intraoral three-dimensional scanner) that scans the teeth inside the oral cavity. The second scanning device may be, but is not limited to, an extraoral scanner (such as an extraoral three-dimensional scanner) that scans the teeth from outside the mouth to obtain extraoral scan data. An extraoral scanner has higher scanning efficiency and global accuracy than an intraoral scanner, but cannot completely capture image data of every surface of every tooth in the oral cavity; an intraoral scanner has lower scanning efficiency and global accuracy, but can capture every surface of each tooth with high per-tooth accuracy. In short, the first scanning device yields intraoral tooth data by scanning inside the user's mouth, and the second scanning device yields extraoral tooth data by scanning outside the mouth.
In one possible application scenario, during treatment the first scanning device scans the user's oral cavity at the current orthodontic time (the second time) to obtain intraoral scan data of the teeth; intraoral scan data obtained by the same first scanning device at the previous orthodontic time (the first time) is acquired; and the pose changes of the orthodontic teeth in the user's mouth between these two times are compared.
At least one of the embodiments described below is explained in detail taking extraoral scan data of the teeth after orthodontic treatment and intraoral scan data of the teeth before orthodontic treatment as an example.
And S130, registering the observation data and the corresponding teeth on the reference data to obtain target registration data.
Wherein the target registration data includes target observation data corresponding to the observation data and target reference data corresponding to the reference data.
Here, "the target observation data corresponds to the observation data" means that the target observation data is obtained by changing the pose of the observation data through registration, or is the observation data itself (i.e., no pose change occurred during registration). Likewise, "the target reference data corresponds to the reference data" means that the target reference data is obtained by changing the pose of the reference data through registration, or is the reference data itself.
Optionally, in S130, the registration is performed on the observation data and the teeth corresponding to the reference data to obtain target registration data, which may be specifically implemented by the following steps:
performing coarse registration on the observation data and the reference data to obtain a coarse-registration pose change (specifically, a coarse-registration pose change matrix); performing fine registration on the observation data and the reference data on the basis of the coarse registration to obtain a fine-registration pose change (specifically, a fine-registration pose change matrix); and obtaining the target registration data from the observation data and the reference data based on the coarse-registration and fine-registration pose changes. Specifically, the observation data and the reference data are first registered based on a first part of the corresponding target teeth and then further registered based on a second part of the corresponding target teeth, where the first part and the second part are different subsets of the target teeth.
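The two-stage scheme above composes into a single rigid transform per tooth. A minimal sketch, assuming 4x4 homogeneous matrices acting on column vectors (the function name and example values are illustrative only):

```python
import numpy as np

def compose_pose(T_fine, T_coarse):
    """Total pose of one tooth: the jaw-level coarse transform is
    applied first, then the tooth's own fine transform; with the
    column-vector convention the later matrix multiplies on the left."""
    return T_fine @ T_coarse

# Hypothetical example: jaw shifted by (1, 0, 0) during coarse
# registration, tooth additionally shifted by (0, 2, 0) during fine
# registration.
T_coarse = np.eye(4)
T_coarse[:3, 3] = [1.0, 0.0, 0.0]
T_fine = np.eye(4)
T_fine[:3, 3] = [0.0, 2.0, 0.0]
T_total = compose_pose(T_fine, T_coarse)
point = T_total @ np.array([0.0, 0.0, 0.0, 1.0])  # origin after both
```

Keeping the two factors separate is what lets the method report the jaw-level and per-tooth contributions independently.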
It can be understood that the coarse-registration pose change obtained by coarsely registering the observation data and the reference data includes a coarse-registration pose change of the observation data and a coarse-registration pose change of the reference data: each is the pose change of the respective data produced by the coarse registration, and is zero when that data does not change pose during coarse registration.
Optionally, the coarse registration of the observation data and the reference data to obtain the coarse registration pose change of the observation data and the reference data may be specifically implemented by the following steps:
registering the fixed teeth corresponding to the observation data and the reference data to obtain the pose change of those fixed teeth (specifically, a pose change matrix); the coarse-registration pose change of the observation data and the reference data is this pose change of the corresponding fixed teeth. The fixed teeth are a subset of the target teeth whose movement across the stages of orthodontic treatment is very small or zero, i.e., whose amount of change is smaller than a first preset threshold. The target teeth include the fixed teeth. Note that the fixed teeth may also be set empirically.
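The "amount of change smaller than a first preset threshold" criterion can be sketched as follows. The helper, the centroid-shift criterion, and the threshold value are illustrative assumptions (the text notes fixed teeth may instead simply be chosen from clinical experience):

```python
import numpy as np

def pick_fixed_teeth(ref_clouds, obs_clouds, threshold=0.2):
    """Heuristically pick 'fixed' teeth: those whose centroid moved
    less than `threshold` (same units as the scan, e.g. mm) between
    the reference and observation scans."""
    fixed = []
    for tooth in ref_clouds.keys() & obs_clouds.keys():
        shift = np.linalg.norm(ref_clouds[tooth].mean(axis=0)
                               - obs_clouds[tooth].mean(axis=0))
        if shift < threshold:
            fixed.append(tooth)
    return sorted(fixed)

# Hypothetical example: tooth 11 moved 1.0 units, tooth 16 only 0.05.
ref = {11: np.zeros((4, 3)), 16: np.zeros((4, 3))}
obs = {11: np.zeros((4, 3)) + [1.0, 0.0, 0.0],
       16: np.zeros((4, 3)) + [0.05, 0.0, 0.0]}
fixed = pick_fixed_teeth(ref, obs, threshold=0.2)
```

Only the teeth returned here would then be fed to the jaw-level coarse registration.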
Optionally, registering the fixed teeth corresponding to the observed data and the reference data to obtain pose changes of the fixed teeth corresponding to the observed data and the reference data may be specifically implemented by the following steps:
coarsely registering the fixed teeth corresponding to the observation data and the reference data to obtain a first pose change (specifically, a first pose change matrix); finely registering those fixed teeth on the basis of the coarse registration to obtain a second pose change (specifically, a second pose change matrix); the pose change of the corresponding fixed teeth comprises the first pose change and the second pose change.
It will be appreciated that a coarse registration algorithm such as Random Sample Consensus (RANSAC) registers the corresponding fixed teeth in the observation data and the reference data, yielding the first pose change. A fine registration algorithm such as Iterative Closest Point (ICP) then registers the same fixed teeth on the basis of the coarse result, yielding the second pose change. The pose change of the corresponding fixed teeth, i.e., the coarse-registration pose change of the observation data and the reference data, comprises the first pose change and the second pose change.
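RANSAC and ICP are standard algorithms; as a hedged illustration of the fine-registration step only, the following is a minimal point-to-point ICP in plain NumPy (brute-force nearest neighbours plus the closed-form SVD/Kabsch fit), a sketch rather than the patent's implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src -> dst
    for corresponding (N, 3) point sets; returns a 4x4 matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cd - R @ cs
    return T

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate brute-force nearest-
    neighbour matching with the closed-form rigid fit above."""
    T_total = np.eye(4)
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every current source point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        T = best_rigid_transform(cur, matched)
        cur = cur @ T[:3, :3].T + T[:3, 3]
        T_total = T @ T_total
    return T_total

# Example: four points shifted by 0.1 along x; ICP should recover it.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src + np.array([0.1, 0.0, 0.0])
T_est = icp(src, dst)
```

In practice the coarse (RANSAC) result supplies the initial alignment, without which ICP's nearest-neighbour matching can converge to a wrong local minimum.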
It may be understood that the coarse-registration pose change of the observation data and the reference data is taken as a global pose change for globally coarse-registering the two; in other words, all corresponding single teeth in the observation data and the reference data are coarsely registered based on this one global pose change.
Optionally, on the basis of the coarse registration, the observation data and the reference data are subjected to fine registration to obtain a fine registration pose change, which is specifically realized by the following steps:
finely registering the observation data and the reference data on the basis of the coarse registration to obtain the fine-registration pose change of the corresponding teeth; specifically, finely registering the corresponding single teeth in the observation data and the reference data on the basis of the coarse registration.
It can be understood that, on the basis of the coarse registration, corresponding single teeth in the observation data and the reference data are registered according to tooth class to obtain the fine-registration pose change of each pair of single teeth, where corresponding single teeth are teeth of the same class, i.e., teeth with the same tooth position number. Specifically, first single-tooth observation data corresponding to a first single tooth is determined in the observation data, first single-tooth reference data corresponding to the same tooth is determined in the reference data, and the two are finely registered on the basis of the above coarse registration to obtain a first single-tooth fine-registration pose change (specifically, a matrix). Likewise, second single-tooth observation data and second single-tooth reference data corresponding to a second single tooth are determined and finely registered on the basis of the above coarse registration to obtain a second single-tooth fine-registration pose change (specifically, a matrix). The fine-registration pose change thus comprises the single-tooth fine-registration pose changes obtained by finely registering all single teeth respectively.
Then, the observation data and the reference data are registered based on the coarse-registration and fine-registration pose changes to obtain the target observation data and the target reference data. Note that fine registration may be performed per single tooth or per combination of several single teeth, where the number of single teeth in a combination is generally smaller than the total number of single teeth of the target teeth.
The following embodiment is described in detail taking the transformation of the observation data into the coordinate system of the reference data as an example; the reference data keeps its pose unchanged. The fixed teeth corresponding to the observation data and the reference data are first coarsely registered with the RANSAC registration algorithm and then finely registered with the ICP registration algorithm, yielding the pose change of the fixed-tooth point cloud in the observation data; this is taken as the coarse-registration pose change of the observation data (i.e., of every single tooth in the observation data). On this basis, the single teeth of the observation data and the reference data are finely registered one by one, yielding the fine-registration pose change of each single tooth in the observation data. Each single tooth in the observation data is then registered to its counterpart in the reference data based on its coarse-registration and fine-registration pose changes, producing the target registration data (the transformed observation data serves as the target observation data, and the unchanged reference data serves as the target reference data).
And S140, estimating the orthodontic result of the target teeth based on the corresponding teeth in the target observation data and the target reference data.
Optionally, in S140, the orthodontic result of the target tooth is estimated based on the corresponding tooth in the target observation data and the target reference data, which may be specifically implemented by the following steps:
and comparing the target observation data with the corresponding teeth in the target reference data to obtain an orthodontic result of the target teeth.
It can be understood that, on the basis of S130, the description takes the pose change of the observation data as an example. The observation data is globally transformed through the coarse registration, which eliminates the pose difference caused by the jaw pose between the observation data and the reference data and improves the accuracy of the orthodontic degree estimation. Then, according to the category of each single tooth, the coarsely registered observation data is locally and finely registered, using an ICP algorithm, with the single teeth of the corresponding category on the reference data, to obtain the target reference data. Then, the target observation data is compared with the single teeth of the corresponding category on the target reference data to obtain the pose difference of each target single tooth, which represents the correction condition of that target single tooth. The pose differences of the plurality of target single teeth can then be integrated, so as to conveniently learn the overall treatment condition of the target user's teeth at this orthodontic stage, namely the overall correction condition of the target teeth. Of course, the local fine registration and comparison can also be performed by taking a combination of a plurality of target single teeth as a unit, so as to obtain the pose difference of the corresponding combinations on the target observation data and the target reference data. It should be noted that the orthodontic result of the target teeth may be the orthodontic result of one, several, or all of the orthodontic teeth. It is understood that the specific embodiment in which the pose change is applied to the reference data to estimate the orthodontic result of the orthodontic teeth is the same as that for the observation data, and is not described again here.
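The per-tooth pose difference described above can be summarized, for example, as a rotation angle plus a translation distance extracted from the residual 4x4 transform. The metric below is one hypothetical choice for illustration; the patent does not fix a particular way of reporting the correction condition.

```python
import numpy as np

def pose_difference(T):
    """Summarize a 4x4 residual pose change as (rotation in degrees, translation norm).

    Hypothetical per-tooth 'correction condition' metric: the rotation angle is
    recovered from the trace of the rotation block, the distance from the
    translation column.
    """
    R, t = T[:3, :3], T[:3, 3]
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), float(np.linalg.norm(t))
```

A tooth whose residual transform is the identity would report (0.0, 0.0), i.e. no remaining correction relative to the reference.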
It is understood that both the observation data and the reference data may undergo pose changes during the registration process. It should be noted that the pose change processing in the present application does not imply an absolute change of position and posture; for example, if neither pose needs to be changed further during registration, the pose change is simply 0 and no pose adjustment is performed.
According to the orthodontic treatment monitoring method provided by the embodiments of the present disclosure, the teeth are scanned in real time at the current time to obtain the observation data of the target user at the second time, and the reference data obtained by scanning the target user's teeth at the earlier first time is acquired. Without requiring gum registration, the observation data and the reference data are registered in 3D space to eliminate the jaw pose; on that basis, the pose difference between the observation data and the reference data is determined, so that the tooth movement caused by orthodontic treatment can be monitored, effectively improving the accuracy of orthodontic tooth pose estimation and the user experience.
On the basis of the foregoing embodiment, fig. 2 is a schematic diagram of a refinement flow of S110 in the orthodontic treatment monitoring method shown in fig. 1. Optionally, the step of acquiring observation data of the target user's teeth at the second time specifically includes steps S210 to S240 shown in fig. 2:
s210, acquiring a texture map sequence and a speckle map sequence of the teeth of the target user at a second time.
It will be appreciated that the observation data specifically includes a texture map sequence and a speckle map sequence. The texture map sequence includes a plurality of frames of texture maps and can be understood as a monocular RGB sequence, i.e. a series of RGB maps; the speckle map sequence includes a plurality of frames of speckle maps, i.e. a series of speckle maps. Here L denotes the number of image frames in each sequence. The speckle map sequence may be a binocular speckle map sequence or a monocular speckle map sequence, which is not particularly limited.
S220, performing two-dimensional instance segmentation on the texture map sequence to obtain a segmentation mask sequence.
It can be understood that, on the basis of S210, instance segmentation is performed on each frame of texture map (RGB image) in the texture map sequence by using a 2D instance segmentation technique (e.g. Swin Transformer) to obtain the instance segmentation result of each RGB image and the category of each tooth; the categories are subsequently used for point cloud registration to eliminate the jaw pose. The RGB image instance segmentation results can also be understood as an instance segmentation mask sequence, and the category information of each instance is assigned by post-processing, that is, the category of each tooth is determined. Here, the 2D instance segmentation technique takes one frame of RGB image as input and outputs, for that RGB image, the object label to which each pixel belongs, the 2D bounding box of each object, and the category of each object.
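The per-pixel labels and 2D bounding boxes described above can be represented compactly as an integer label map. Below is a minimal, hypothetical sketch of extracting per-tooth masks and boxes from such a map; the actual network output format is not specified by the patent.

```python
import numpy as np

def instances_from_label_map(label_map):
    """Extract per-tooth pixel masks and 2D bounding boxes from a label map.

    label_map: HxW int array; 0 = background, k > 0 = tooth instance k.
    Returns {k: (mask, (x_min, y_min, x_max, y_max))}. A simplified stand-in
    for the per-pixel labels and 2D boxes output by the segmentation network.
    """
    result = {}
    for k in np.unique(label_map):
        if k == 0:
            continue                      # skip background
        mask = label_map == k
        ys, xs = np.nonzero(mask)
        result[int(k)] = (mask, (int(xs.min()), int(ys.min()),
                                 int(xs.max()), int(ys.max())))
    return result
```

Each mask in the result corresponds to one entry of the segmentation mask sequence used in S240 to delimit a single-tooth region.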
S230, performing depth estimation on the speckle pattern sequence to generate a depth pattern sequence.
It can be understood that, on the basis of S220, a depth estimation algorithm is used to perform depth estimation on each frame of speckle map to obtain the depth map corresponding to that frame, and the multi-frame depth maps form a depth map sequence, i.e. a series of depth maps, where L also denotes the number of depth map frames. It can be understood that the depth estimation algorithm may be depth estimation based on a monocular RGB image, depth estimation based on binocular RGB images, or the like; the specific algorithm is not limited here and can be determined by the user according to requirements.
S240, obtaining second point cloud data according to the segmentation mask sequence and the depth map sequence.
Wherein the observation data includes the second point cloud data.
It is understood that the second point cloud data refers to the three-dimensional point set of the target user's tooth surfaces at the second time, where c denotes the number of teeth; that is, the second point cloud data includes the point cloud data of c tooth surfaces. Each tooth has its corresponding point cloud data, denoted second single-tooth point cloud data, and the c pieces of second single-tooth point cloud data form the second point cloud data, which is included in the observation data.
Optionally, in S240, second point cloud data is obtained according to the segmentation mask sequence and the depth map sequence, which is specifically implemented by the following steps:
converting each frame of depth map in the depth map sequence into corresponding point cloud data to obtain multi-frame point cloud data; splicing and fusing the multi-frame point cloud data to obtain fused point cloud data; determining a single-tooth region from the depth map sequence based on the segmentation mask sequence; and carrying out back projection on the single-tooth area in the fusion point cloud data to obtain second single-tooth point cloud data, wherein the observation data comprise a plurality of second single-tooth point cloud data.
It can be understood that, on the basis of S230, each frame of depth map in the depth map sequence is converted into corresponding point cloud data, so as to obtain multi-frame point cloud data; that is, each frame of depth map has corresponding point cloud data. Then, the multi-frame point cloud data is spliced through an ICP splicing algorithm to obtain point cloud splicing data, and fusion optimization is performed on the point cloud splicing data to obtain the fused point cloud data. The single-tooth regions in the depth map sequence are acquired by using the segmentation mask sequence, where a single-tooth region refers to the region of a single tooth, that is, the region where each tooth is located in the depth map. After the single-tooth regions and the fused point cloud data are obtained, each single-tooth region is back-projected into the fused point cloud data to determine the point cloud data of each tooth (namely, the second single-tooth point cloud data).
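The conversion of a masked depth map into a single-tooth point cloud can be sketched with a pinhole camera model. The intrinsics (fx, fy, cx, cy) and the pinhole assumption are illustrative; the patent does not specify the camera model.

```python
import numpy as np

def depth_to_points(depth, mask, fx, fy, cx, cy):
    """Back-project masked pixels of a depth map into 3D camera coordinates.

    depth: HxW array of depth values; mask: HxW boolean single-tooth region.
    Assumes a pinhole camera with intrinsics (fx, fy, cx, cy) — an
    illustrative sketch, not the patent's implementation.
    """
    v, u = np.nonzero(mask & (depth > 0))   # pixel rows/cols inside the tooth mask
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) single-tooth point cloud
```

Running this per frame and per mask, then splicing the per-frame clouds, corresponds to the multi-frame conversion and fusion described above.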
For example, referring to fig. 3, fig. 3 is a schematic diagram of an orthodontic treatment monitoring method provided by an embodiment of the present disclosure. The second scanning device collects the teeth at the orthodontic t stage to obtain observation data, where the observation data includes a binocular speckle map sequence and a monocular RGB map sequence; depth estimation is performed on the binocular speckle map sequence through an algorithm library to obtain a depth map sequence; RGB instance segmentation (2D instance segmentation) is performed on the monocular RGB map sequence to obtain an instance segmentation mask sequence; the point cloud data corresponding to each frame of depth map is spliced and fused through a splicing algorithm in the algorithm library to obtain fused point cloud data; the orthodontic t-stage observed single-tooth point cloud data is determined from the fused point cloud data by back-projecting the instance segmentation mask corresponding to each depth map; the first scanning device acquires the pre-orthodontic 3D reference data (dental model data); 3D instance segmentation is performed on the reference data to obtain the pre-orthodontic-t-stage dental model single-tooth point cloud data; the fixed teeth in the pre-orthodontic-t-stage dental model single-tooth point cloud data and the orthodontic t-stage observed single-tooth point cloud data are matched to determine the jaw pose; the orthodontic t-stage observed single-tooth point cloud data is globally transformed through the jaw pose to obtain aligned orthodontic t-stage observed single-tooth point cloud data; and the pose transformation of the orthodontic teeth is estimated by applying an ICP algorithm to the aligned orthodontic t-stage observed single-tooth point cloud data and the pre-orthodontic-t-stage dental model single-tooth point cloud data.
According to the orthodontic treatment monitoring method, the depth map sequence is obtained from the speckle map sequence, and the resulting depth maps are of good quality and can be used for accurate single-tooth pose estimation. Then, a splicing algorithm is used to process the depth map sequence to obtain the observation data, which facilitates subsequent point cloud matching in three-dimensional space and improves the accuracy of single-tooth pose estimation before and after orthodontic treatment.
Fig. 4 is a schematic structural diagram of an orthodontic treatment monitoring device according to an embodiment of the present disclosure. The orthodontic treatment monitoring device provided in the embodiments of the present disclosure may perform the process flow provided in the embodiment of the orthodontic treatment monitoring method, as shown in fig. 4, the orthodontic treatment monitoring device 400 includes a second acquisition unit 410, a first acquisition unit 420, a registration unit 430, and an estimation unit 440, where:
a second acquisition unit 410 for acquiring observation data of a second time of the teeth of the target user;
a first obtaining unit 420, configured to obtain reference data of the first time of the teeth of the target user;
a registration unit 430, configured to register the observation data with the corresponding teeth on the reference data to obtain target registration data, where the target registration data includes target observation data corresponding to the observation data and target reference data corresponding to the reference data;
And an estimating unit 440 for estimating an orthodontic result of the target tooth based on the target observation data and the corresponding tooth in the target reference data.
Optionally, the second obtaining unit 410 is configured to:
acquiring a texture map sequence and a speckle map sequence of the teeth of the target user at a second time;
performing two-dimensional instance segmentation on the texture map sequence to obtain a segmentation mask sequence;
performing depth estimation on the speckle pattern sequence to generate a depth pattern sequence;
and obtaining second point cloud data according to the segmentation mask sequence and the depth map sequence, wherein the observation data comprise the second point cloud data.
Optionally, the second obtaining unit 410 is configured to:
converting each frame of depth map in the depth map sequence into corresponding point cloud data to obtain multi-frame point cloud data;
splicing and fusing the multi-frame point cloud data to obtain fused point cloud data;
determining a single-tooth region from the depth map sequence based on the segmentation mask sequence;
and carrying out back projection on the single-tooth region in the fused point cloud data to obtain second single-tooth point cloud data, wherein the second point cloud data comprises a plurality of pieces of second single-tooth point cloud data.
Optionally, the first obtaining unit 420 is configured to:
Acquiring three-dimensional dental model data of the teeth of the target user at the first time;
and carrying out three-dimensional instance segmentation on the three-dimensional dental model data to obtain first single-tooth point cloud data, wherein the reference data comprises a plurality of pieces of first single-tooth point cloud data.
Optionally, the registration unit 430 is configured to:
performing coarse registration on the observation data and the reference data to obtain coarse registration pose changes of the observation data and the reference data;
performing fine registration on the observation data and the reference data on the basis of the coarse registration to obtain fine registration pose changes;
and registering the observation data and the reference data based on the coarse registration pose change and the fine registration pose change to obtain the target registration data.
Optionally, the registration unit 430 is configured to:
registering the observation data and the fixed teeth corresponding to the reference data to obtain pose changes of the observation data and the fixed teeth corresponding to the reference data, wherein the rough registration pose changes of the observation data and the reference data are the pose changes of the observation data and the fixed teeth corresponding to the reference data, and the fixed teeth are part of the target teeth.
Optionally, the registration unit 430 is configured to:
roughly registering the fixed teeth corresponding to the observed data and the reference data to obtain first pose changes of the fixed teeth corresponding to the observed data and the reference data;
fine registration is carried out on the observation data and the fixed teeth corresponding to the reference data on the basis of coarse registration, so that second pose changes of the fixed teeth corresponding to the observation data and the reference data are obtained;
the pose changes of the corresponding fixed teeth on the observed data and the reference data include the first pose change and the second pose change.
Optionally, the registration unit 430 is configured to:
and carrying out fine registration on the observation data and the teeth corresponding to the reference data on the basis of the coarse registration to obtain fine registration pose changes of the teeth corresponding to the observation data and the reference data.
Optionally, the estimation unit 440 is configured to:
and comparing the target observation data with corresponding teeth in the target reference data to obtain an orthodontic result of the target teeth.
Optionally, the observation data in the apparatus 400 is tooth scan data obtained based on the second scanning device scanning the tooth surfaces in the target user's mouth at the second time; the reference data is tooth scan data obtained based on the first scanning device scanning the tooth surfaces in the target user's mouth at the first time.
The orthodontic treatment monitoring device of the embodiment shown in fig. 4 may be used to implement the technical solution of the above-mentioned method embodiment, and its implementation principle and technical effects are similar, and will not be described here again.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now in particular to fig. 5, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processor, a graphics processor, etc.) 501 that may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503 to implement an orthodontic treatment monitoring method of an embodiment as described in the present disclosure. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow diagrams, thereby implementing the orthodontic treatment monitoring method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Alternatively, the electronic device may perform other steps described in the above embodiments when the above one or more programs are executed by the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus comprising the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A method of orthodontic treatment monitoring, comprising:
obtaining observation data of the teeth of the target user at a second time;
acquiring reference data of the target user's teeth at a first time;
registering the observation data with the corresponding teeth on the reference data to obtain target registration data, wherein the target registration data comprises target observation data corresponding to the observation data and target reference data corresponding to the reference data;
and estimating an orthodontic result of the target tooth based on the target observation data and the corresponding tooth in the target reference data.
2. The method of claim 1, wherein the acquiring observations of the target user's teeth at the second time comprises:
acquiring a texture map sequence and a speckle map sequence of the teeth of the target user at a second time;
performing two-dimensional instance segmentation on the texture map sequence to obtain a segmentation mask sequence;
performing depth estimation on the speckle pattern sequence to generate a depth pattern sequence;
and obtaining second point cloud data according to the segmentation mask sequence and the depth map sequence, wherein the observation data comprise the second point cloud data.
3. The method according to claim 2, wherein the obtaining second point cloud data according to the segmentation mask sequence and the depth map sequence includes:
Converting each frame of depth map in the depth map sequence into corresponding point cloud data to obtain multi-frame point cloud data;
splicing and fusing the multi-frame point cloud data to obtain fused point cloud data;
determining a single-tooth region from the depth map sequence based on the segmentation mask sequence;
and carrying out back projection on the single-tooth area in the fusion point cloud data to obtain second single-tooth point cloud data, wherein the second point cloud data comprises a plurality of second single-tooth point cloud data.
4. The method of claim 1, wherein the obtaining the reference data for the first time of the target user's teeth comprises:
acquiring three-dimensional dental model data of the teeth of the target user at the first time;
and carrying out three-dimensional instance segmentation on the three-dimensional dental model data to obtain first single-tooth point cloud data, wherein the reference data comprises a plurality of pieces of first single-tooth point cloud data.
5. The method of claim 1, wherein registering the observation data with the corresponding teeth on the reference data to obtain target registration data comprises:
performing coarse registration on the observation data and the reference data to obtain coarse registration pose changes of the observation data and the reference data;
Performing fine registration on the observation data and the reference data on the basis of the coarse registration to obtain fine registration pose changes;
and registering the observation data and the reference data based on the coarse registration pose change and the fine registration pose change to obtain the target registration data.
6. The method of claim 5, wherein the coarsely registering the observation data and the reference data to obtain coarse registration pose changes of the observation data and the reference data comprises:
registering the observation data and the fixed teeth corresponding to the reference data to obtain pose changes of the observation data and the fixed teeth corresponding to the reference data, wherein the rough registration pose changes of the observation data and the reference data are the pose changes of the observation data and the fixed teeth corresponding to the reference data, and the fixed teeth are part of the target teeth.
7. The method of claim 6, wherein said registering the observation data with the fixed teeth corresponding to the reference data to obtain pose changes of the fixed teeth corresponding to the observation data and the reference data comprises:
coarsely registering the fixed teeth corresponding to the observation data and the reference data to obtain first pose changes of the fixed teeth corresponding to the observation data and the reference data;
and performing fine registration on the fixed teeth corresponding to the observation data and the reference data on the basis of the coarse registration to obtain second pose changes of the fixed teeth corresponding to the observation data and the reference data;
wherein the pose changes of the fixed teeth corresponding to the observation data and the reference data include the first pose changes and the second pose changes.
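Fine registration of rigid structures such as fixed teeth is commonly done with ICP-style iteration, whose inner update is the closed-form Kabsch/SVD fit. The sketch below shows only that inner step under the assumption of known point correspondences; the patent does not name a specific algorithm:

```python
import numpy as np

def rigid_fit(src, dst):
    """One point-to-point alignment step (Kabsch/SVD): the rigid transform
    (R, t) that best maps `src` onto `dst` in the least-squares sense,
    i.e. the inner update of an ICP-style fine registration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full fine registration would alternate nearest-neighbor correspondence search with this fit until convergence.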
8. The method of claim 5, wherein the fine registration of the observation data and the reference data based on the coarse registration results in a fine registration pose change, comprising:
and carrying out fine registration on the observation data and the teeth corresponding to the reference data on the basis of the coarse registration to obtain fine registration pose changes of the teeth corresponding to the observation data and the reference data.
9. The method of claim 1, wherein the estimating orthodontic results for the target teeth based on the target observation data and corresponding teeth in the target reference data comprises:
comparing the target observation data with the corresponding teeth in the target reference data to obtain the orthodontic result of the target teeth.
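After registration, one simple way to compare corresponding teeth is to measure how far each observed tooth has moved relative to its reference counterpart. The centroid-displacement measure below is an illustrative assumption, not the patent's stated metric:

```python
import numpy as np

def tooth_movement(obs_pts, ref_pts):
    """Crude per-tooth progress measure after registration: centroid
    displacement between the observed tooth and its reference counterpart."""
    return float(np.linalg.norm(obs_pts.mean(axis=0) - ref_pts.mean(axis=0)))

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
obs = ref + np.array([0.5, 0.0, 0.0])    # tooth drifted 0.5 units along x
```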
10. The method of claim 1, wherein the observation data is tooth scan data obtained by a second scanning device scanning the tooth surface of the target user at the second time, and the reference data is tooth scan data obtained by a first scanning device scanning the tooth surface of the target user at the first time.
11. An orthodontic treatment monitoring device, comprising:
a second acquisition unit configured to acquire observation data of the teeth of a target user at a second time;
a first acquisition unit configured to acquire reference data of the teeth of the target user at a first time;
a registration unit configured to register the observation data with the corresponding teeth on the reference data to obtain target registration data, wherein the target registration data comprises target observation data corresponding to the observation data and target reference data corresponding to the reference data;
and an estimation unit configured to estimate an orthodontic result of the target teeth based on the corresponding teeth in the target observation data and the target reference data.
12. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the orthodontic treatment monitoring method of any one of claims 1 to 10.
13. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the orthodontic treatment monitoring method according to any one of claims 1 to 10.
CN202311397505.1A 2023-10-25 2023-10-25 Orthodontic treatment monitoring method, orthodontic treatment monitoring device, orthodontic treatment monitoring equipment and storage medium Pending CN117323038A (en)

Publications (1)

Publication Number Publication Date
CN117323038A true CN117323038A (en) 2024-01-02

Family

ID=89282907




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination