CN115565684A - Oral cavity structure information generation method, system, storage medium and oral cavity instrument

Oral cavity structure information generation method, system, storage medium and oral cavity instrument

Info

Publication number
CN115565684A
Authority
CN
China
Prior art keywords
oral cavity
data
coordinate
feature
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211209794.3A
Other languages
Chinese (zh)
Inventor
王猛
王静
王明政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Times Angel Biotechnology Co ltd
Original Assignee
Wuxi Times Angel Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Times Angel Biotechnology Co ltd
Priority to CN202211209794.3A
Publication of CN115565684A

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses an oral cavity structure information generation method, an oral cavity structure information generation system, a storage medium and an oral cavity instrument. The oral cavity structure information generation method comprises the following steps: acquiring oral cavity three-dimensional model data, and judging whether a specified intraoral tissue region in the oral cavity three-dimensional model satisfies a preset integrity condition; if not, acquiring corresponding intraoral image data, extracting oral cavity feature data from the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain the oral cavity structure information. The oral cavity structure information generation method provided by the invention simplifies the processing logic, improves data integrity and accuracy, and yields more intuitive and accurate structure information.

Description

Oral cavity structure information generation method, system, storage medium and oral cavity instrument
Technical Field
The invention relates to the technical field of oral cavity digital models, in particular to an oral cavity structure information generation method, an oral cavity structure information generation system, a storage medium and an oral cavity instrument.
Background
Oral health is attracting increasing attention in modern society, and it covers not only the hygiene achieved by controlling plaque and removing dirt and food debris, but also the maintenance of oral function, typified by improving the occlusal relationship and the orofacial muscle functions. Regarding the latter, on the one hand, a poor occlusal relationship and poor orofacial muscle function affect dental hygiene, chewing function and the development and function of the temporomandibular joint, and easily lead to malocclusion and snoring, harming both the health and the appearance of the human body; on the other hand, adjusting the occlusal relationship and the orofacial muscle function depends on high-precision analysis of oral cavity structure information, particularly of the intraoral tissue structure characteristics, and a large error leads to poor treatment results and poor user experience. It follows that, in order to effectively maintain oral health, the analysis and extraction of oral cavity structure information are crucial.
The common approach in the prior art is to make a silicone impression of the patient's oral cavity and from it produce a plaster model or a three-dimensional oral cavity model, or to obtain an oral scan model of the patient with an intraoral scanner, and then analyse the oral cavity structure information on that model. However, during model extraction, on the one hand, too much attention is paid to the shape and position of the dental crowns, so other intraoral features are lost and it is difficult to provide a comprehensive data reference for comprehensive diagnosis; on the other hand, owing to operational errors, patient comfort and similar factors, the acquisition of regional features is very likely to be incomplete, and a new model then has to be made or acquired for the missing local features. Furthermore, the steps of manufacturing a silicone impression or an oral scan model are complicated, while simply using an intraoral image cannot meet the accuracy requirement. It is therefore necessary to provide an oral cavity structure information generation method that solves the above problems.
Disclosure of Invention
One objective of the present invention is to provide an oral cavity structure information generation method, so as to solve the problems that, in the prior art, oral cavity feature analysis cannot simultaneously achieve comprehensive feature data, high precision, patient comfort and simplified steps, and cannot automate the judgment logic and the information generation.
An object of the present invention is to provide an oral cavity structure information generating system.
It is an object of the present invention to provide a storage medium.
It is an object of the present invention to provide an oral appliance.
In order to achieve one of the above objects, an embodiment of the present invention provides an oral cavity structure information generating method, including: acquiring oral three-dimensional model data, and judging whether an appointed intraoral tissue area in the oral three-dimensional model meets a preset integrity condition or not; if not, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to the intraoral tissue region in the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain the oral cavity structure information.
As a further improvement of an embodiment of the present invention, the step of acquiring the oral three-dimensional model data and determining whether the specified intraoral tissue region in the oral three-dimensional model satisfies a preset integrity condition specifically includes: acquiring oral cavity three-dimensional model data, and calculating to obtain a position relation between a target tooth position and a model boundary and/or a position relation between an upper jaw model boundary and a lower jaw model boundary according to target tooth position characteristics in the oral cavity three-dimensional model data and model boundary characteristics corresponding to the target tooth position characteristics; and comparing the position relation with a preset position condition corresponding to the position relation, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition.
As a further improvement of an embodiment of the present invention, the "calculating a position relationship between a target tooth position and a model boundary according to a target tooth position feature in the oral cavity three-dimensional model data and a model boundary feature corresponding to the target tooth position feature" specifically includes: determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to represent the target tooth position feature; according to the reference feature coordinates, determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along a first direction so as to represent the model boundary features; calculating to obtain a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate so as to represent the position relation between the target tooth position and the model boundary; the step of comparing the position relationship with a preset position condition corresponding to the position relationship to determine whether an intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model meets a preset integrity condition specifically includes: and comparing the characteristic distance value with a distance integrity criterion value representing the preset position condition, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition or not according to a comparison result.
As a further improvement of an embodiment of the present invention, the reference feature coordinate is located in a preset model coordinate system, and the origin of the model coordinate system is located on the occlusal plane; the "calculating the reference feature coordinates of the target tooth position based on the preset feature recognition rule" specifically includes: determining all spatial points on the labial side of the dental crown of the target tooth position to form a spatial point set; selecting the spatial point with the minimum coordinate value in the spatial point set as a pole to establish a polar coordinate system, and arranging the other spatial points in ascending order of polar angle and polar radius to form a feature traversal sequence; extracting the coordinates of the first two spatial points in the feature traversal sequence, sequentially calculating a first polar coordinate vector and a second polar coordinate vector with the pole as a starting point, calculating the cross product of the first polar coordinate vector and the second polar coordinate vector, and judging whether the cross product is less than 0; if so, updating the starting point to the first spatial point corresponding to the first polar coordinate vector; if not, updating the starting point to the second spatial point corresponding to the second polar coordinate vector; traversing all the other spatial points behind the second spatial point in the feature traversal sequence, selectively updating the starting point according to the polar coordinate vector formed by each spatial point and the starting point and the cross product between the polar coordinate vectors, and determining at least two spatial points meeting the judgment condition as convex hull reference points; and fitting convex hull contour data of the target tooth position according to the convex hull reference points and the pole, and calculating the reference feature coordinates of the target tooth position according to the convex hull contour data.
As a further improvement of the embodiment of the present invention, the "traversing all the other spatial points located behind the second spatial point in the feature traversal sequence, selectively updating the starting point according to the polar coordinate vector formed by each spatial point and the starting point and the cross product between the polar coordinate vectors, and determining at least two spatial points meeting the judgment condition as convex hull reference points" specifically includes: extracting a third spatial point located behind the second spatial point in the feature traversal sequence, sequentially calculating a reference polar coordinate vector between the starting point and the reference spatial point (the one of the first spatial point and the second spatial point that is not the starting point) and a third polar coordinate vector between the third spatial point and the starting point, calculating the cross product of the reference polar coordinate vector and the third polar coordinate vector, and judging whether the cross product is less than 0; if so, updating the starting point to the reference spatial point, and determining the reference spatial point as a convex hull reference point; if not, keeping the starting point unchanged, discarding the reference spatial point, taking the third spatial point as the new reference spatial point, and selectively updating the starting point and determining the convex hull reference points according to the cross product of the new reference polar coordinate vector and the polar coordinate vector formed by the starting point and the spatial point following the new reference spatial point; and iterating in this way until all the spatial points in the feature traversal sequence have been judged, thereby obtaining at least two convex hull reference points.
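The traversal rule described in the two preceding paragraphs (sorting the remaining points by polar angle around a pole and discarding points according to the sign of a cross product) corresponds to the classic cross-product convex hull construction. The sketch below illustrates that general idea on 2D projected crown points; the sign convention, sorting key and function names are illustrative assumptions rather than the patent's exact procedure.

```python
import math

def cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_reference_points(points):
    """Graham-scan style hull of 2D projected crown points (labial side)."""
    # Pole: the point with the smallest coordinate value (lowest y, then x).
    pole = min(points, key=lambda p: (p[1], p[0]))
    rest = [p for p in points if p != pole]
    # Ascending order by polar angle, then polar radius, around the pole.
    rest.sort(key=lambda p: (math.atan2(p[1] - pole[1], p[0] - pole[0]),
                             math.hypot(p[0] - pole[0], p[1] - pole[1])))
    hull = [pole]
    for p in rest:
        # Discard points that would produce a clockwise turn (cross product <= 0
        # under this sign convention), mirroring the "update the starting point /
        # discard the reference point" rule in the text.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull  # the pole plus the convex hull reference points

# Example: crown outline samples projected to 2D (made-up coordinates).
pts = [(0.0, 0.0), (2.0, 0.1), (1.0, 1.5), (2.2, 1.0), (0.3, 0.9), (1.1, 0.4)]
print(convex_hull_reference_points(pts))
```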
As a further improvement of an embodiment of the present invention, the "calculating the reference feature coordinates of the target tooth position based on a preset feature recognition rule" specifically includes: and determining and taking the coordinates of the gingival margin midpoint of the target tooth site as the reference feature coordinates of the target tooth site.
As a further improvement of an embodiment of the present invention, the "determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along a first direction according to the reference feature coordinates to characterize the model boundary features" specifically includes: taking the reference characteristic coordinate as a starting point, and taking a reference extension line in a direction away from the target tooth position along the first direction, and continuously analyzing the formation condition of a projection end point of a reference extension end on the jaw face of the oral cavity three-dimensional model, except the reference characteristic coordinate, on the reference extension line; and when the reference extension end does not form a projection end point with the maxillofacial surface of the oral cavity three-dimensional model any more, determining the finally formed projection end point coordinate as the boundary characteristic coordinate.
As a further improvement of an embodiment of the present invention, the intraoral tissue region includes the vestibular sulcus, the oral cavity feature data include vestibular sulcus height data, the first direction is the projection of the extension direction of the long axis of the tooth body on the jaw face of the oral cavity three-dimensional model, and the target tooth positions include a maxillary incisor tooth position and a mandibular incisor tooth position.
As a further improvement of an embodiment of the present invention, the intraoral tissue region includes the vestibular sulcus and/or the tooth root ridge, the oral cavity feature data include dental arch width data, the first direction is the projection of the extension direction of the long axis of the tooth body on the jaw face of the oral cavity three-dimensional model, and the target tooth positions include a left molar tooth position and a right molar tooth position of a first jaw which correspond to each other; wherein the first jaw is the upper jaw and/or the lower jaw.
As a further improvement of an embodiment of the present invention, the intraoral tissue region includes the labial frenum, the oral cavity feature data include labial frenum width data, and the "determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to characterize the target tooth position feature" specifically includes: determining an incisor tooth position on the left side of a first jaw and an incisor tooth position on the right side of the first jaw on the oral cavity three-dimensional model as the target tooth positions, and calculating, based on a preset feature recognition rule, the reference feature coordinates of the target tooth positions and a plurality of relative feature coordinates located between the two reference feature coordinates on the target tooth positions, so as to characterize the target tooth position features; wherein the first jaw is the upper jaw and/or the lower jaw; the "determining, according to the reference feature coordinates, boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along a first direction to characterize the model boundary features" specifically includes: taking the reference feature coordinates and the relative feature coordinates respectively as starting points, drawing reference extension lines along the first direction in the direction away from the target tooth positions, and continuously analyzing whether the reference extension ends of the reference extension lines (i.e. the ends other than the reference feature coordinates or the relative feature coordinates) form projection end points on the oral cavity three-dimensional model; wherein the first direction is the projection of the extension direction of the long axis of the tooth body on the jaw face of the oral cavity three-dimensional model; and when a reference extension end no longer forms a projection end point with the oral cavity three-dimensional model, determining the coordinate of the last formed projection end point as the boundary feature coordinate.
As a further improvement of an embodiment of the present invention, before the comparing the characteristic distance value with the distance integrity criterion value representing the preset position condition and determining whether the intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model satisfies the preset integrity condition according to the comparison result, the method includes: acquiring at least two groups of three-dimensional training model data and at least two groups of distance training data corresponding to the three-dimensional training model data; wherein the distance training data comprises a distance between conditional training coordinates on the three-dimensional training model corresponding to the reference feature coordinates and target tissue coordinates on the three-dimensional training model corresponding to the intraoral tissue region; a line connecting the conditional training coordinate and the target tissue coordinate extends along the first direction; and calculating the average distance data and the training distance standard deviation of the distance training data, and calculating to obtain the distance integrity criterion value according to the difference between the average distance data and the product of the training distance standard deviation and a preset distribution probability coefficient.
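The criterion value described above is simply the mean of the training distances minus the product of their standard deviation and the distribution probability coefficient. A minimal sketch with made-up millimetre values (the coefficient 2 applies to the distance criterion; the height criterion described later uses 3):

```python
from statistics import mean, pstdev

def integrity_criterion(training_distances, coefficient=2.0):
    """Criterion value = average distance - coefficient * standard deviation."""
    avg = mean(training_distances)
    std = pstdev(training_distances)   # population standard deviation
    return avg - coefficient * std

# Hypothetical sulcus-bottom distance training data in millimetres.
first_sulcus_bottom_distances = [8.1, 7.9, 8.4, 8.0, 7.6, 8.2]
print(integrity_criterion(first_sulcus_bottom_distances, coefficient=2.0))
```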
As a further refinement of an embodiment of the invention, the intraoral tissue region includes the vestibular sulcus, the oral cavity feature data include vestibular sulcus height data, the target tooth positions include a first incisor tooth position of the upper jaw and a second incisor tooth position of the lower jaw, and the distance training data include a first sulcus bottom distance parameter corresponding to the first incisor tooth position of the upper jaw and a second sulcus bottom distance parameter corresponding to the second incisor tooth position of the lower jaw; the "calculating the average distance data and the training distance standard deviation of the distance training data" specifically includes: calculating the first sulcus bottom average distance value of all the first sulcus bottom distance parameters in all the distance training data and the second sulcus bottom average distance value of all the second sulcus bottom distance parameters in all the distance training data to obtain the average distance data; and calculating the first sulcus bottom distance standard deviation of all the first sulcus bottom distance parameters in all the distance training data and the second sulcus bottom distance standard deviation of all the second sulcus bottom distance parameters in all the distance training data to obtain the training distance standard deviation.
As a further improvement of an embodiment of the present invention, the distance training data include a sulcus bottom distance parameter corresponding respectively to the upper left incisor tooth position, the upper left central incisor tooth position, the upper right incisor tooth position, the lower left central incisor tooth position and the lower right central incisor tooth position; the sulcus bottom distance parameter represents the distance between the conditional training coordinate of the gingival margin midpoint corresponding to the incisor tooth position and the target tissue coordinate of the sulcus bottom feature point corresponding to that gingival margin midpoint.
As a further improvement of an embodiment of the present invention, the intraoral tissue region includes a vestibular sulcus and/or a tooth root ridge, the oral cavity characteristic data includes arch width data, the target tooth positions include upper two-sided distal molar tooth positions and lower two-sided distal molar tooth positions, the distance training data includes an upper arch width parameter corresponding to the upper two-sided distal molar tooth positions, and a lower arch width parameter corresponding to the lower two-sided distal molar tooth positions; the "calculating the average distance data and the training distance standard deviation of the distance training data" specifically includes: calculating upper average width values of all upper arch width parameters in all distance training data and lower average width values of all lower arch width parameters in all distance training data to obtain average distance data; and calculating the upper width standard deviation of all upper arch width parameters in all distance training data and the lower width standard deviation of all lower arch width parameters in all distance training data to obtain the training distance standard deviation.
As a further improvement of an embodiment of the present invention, the distribution probability coefficient is 2.
As a further improvement of an embodiment of the present invention, the "obtaining a feature distance value between the reference feature coordinate and the boundary feature coordinate by calculation" specifically includes: and calculating the distance values of the reference characteristic coordinates and the boundary characteristic coordinates in the extending direction of the tooth central line to obtain the characteristic distance values.
As a further improvement of an embodiment of the present invention, the first direction is a projection of an extending direction of a long axis of a tooth body on the oral cavity three-dimensional model; the boundary characteristic coordinates and the reference characteristic coordinates are located in a preset model coordinate system, the model coordinate system at least comprises a first coordinate axis and a third coordinate axis, the third coordinate axis extends along the extension direction of the tooth center line, and the first coordinate axis extends along the extension direction of the middle incisor width; the "calculating a distance value between the reference feature coordinate and the boundary feature coordinate in the extending direction of the tooth centerline" specifically includes: and calculating a coordinate difference value of the reference feature coordinate and the boundary feature coordinate on the third coordinate axis.
As a further improvement of an embodiment of the present invention, the "comparing the feature distance value with a distance integrity criterion value representing the preset position condition, and judging, according to the comparison result, whether the intraoral tissue region corresponding to the target tooth position in the oral cavity three-dimensional model satisfies the preset integrity condition" specifically includes: judging whether the feature distance value is smaller than the distance integrity criterion value; if so, judging that the position in the oral cavity three-dimensional model to which the feature distance value points is a feature missing position; if not, comparing the next feature distance value with the distance integrity criterion value; and judging, according to the number of feature missing positions, whether the intraoral tissue region corresponding to the target tooth position in the oral cavity three-dimensional model satisfies the integrity condition.
As a further improvement of an embodiment of the present invention, the "determining whether an intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model satisfies the integrity condition according to the number of the feature missing positions" specifically includes: judging the numerical value magnitude relation between the number of the characteristic missing positions and the allowable error numerical value; if the number of the characteristic missing positions is larger than or equal to the allowable error number value, judging that the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model does not meet the integrity condition; if the number of the characteristic missing positions is smaller than the allowable error number value, judging that the intra-oral tissue region corresponding to the target tooth position in the oral three-dimensional model meets the integrity condition; wherein, the allowable error quantity value is an integer which is more than or equal to one half of the target tooth position quantity.
As a further improvement of an embodiment of the present invention, the "obtaining a position relationship between a maxilla model boundary and a mandible model boundary by calculating according to a target dental position feature in the oral cavity three-dimensional model data and a model boundary feature corresponding to the target dental position feature" specifically includes: determining at least incisor tooth positions on the oral cavity three-dimensional model as target tooth positions, and calculating reference feature coordinates of the target tooth positions based on a preset feature recognition rule so as to represent the target tooth position features; determining an upper boundary extreme value coordinate and a lower boundary extreme value coordinate of the oral cavity three-dimensional model along a first direction according to the reference characteristic coordinate; wherein the upper limit value coordinate is positioned at the upper jaw of the oral cavity three-dimensional model, the distance between the upper limit value coordinate and the occlusal plane has a maximum value, and the lower limit value coordinate is positioned at the lower jaw of the oral cavity three-dimensional model, the distance between the lower limit value coordinate and the occlusal plane has a maximum value; calculating to obtain a characteristic height value between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate so as to represent the position relation between the upper jaw model boundary and the lower jaw model boundary; the step of comparing the position relationship with a preset position condition corresponding to the position relationship to determine whether an intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model meets a preset integrity condition specifically includes: and comparing the characteristic height value with a height integrity criterion value representing the preset position condition, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to a comparison result.
As a further improvement of an embodiment of the present invention, the first direction is the projection of the extension direction of the long axis of the tooth body on the jaw face of the oral cavity three-dimensional model; the reference feature coordinate, the upper boundary extreme value coordinate and the lower boundary extreme value coordinate are located in a preset model coordinate system, the origin of the model coordinate system is located on the straight line where the incisor crest is located, and the model coordinate system includes at least a first coordinate axis and a third coordinate axis, the third coordinate axis extending along the extension direction of the tooth centerline and the first coordinate axis extending along the extension direction of the central incisor width; the "determining, according to the reference feature coordinates, the upper boundary extreme value coordinate and the lower boundary extreme value coordinate of the oral cavity three-dimensional model along the first direction" specifically includes: calculating an upper boundary coordinate set corresponding to the upper jaw target tooth positions on the oral cavity three-dimensional model and a lower boundary coordinate set corresponding to the lower jaw target tooth positions on the oral cavity three-dimensional model so as to characterize the model boundary features; and determining, by traversal, the upper boundary coordinate having the maximum coordinate value on the third coordinate axis in the upper boundary coordinate set as the upper boundary extreme value coordinate, and the lower boundary coordinate having the minimum coordinate value on the third coordinate axis in the lower boundary coordinate set as the lower boundary extreme value coordinate; the "calculating the feature height value between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate" specifically includes: calculating the coordinate difference between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate on the third coordinate axis as the feature height value.
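Under the coordinate convention just described, the feature height value reduces to a difference of third-axis (z) coordinates between the two boundary extreme value coordinates. A small sketch with hypothetical boundary coordinate sets given as (x, y, z) tuples:

```python
def feature_height_value(upper_boundary_coords, lower_boundary_coords):
    """Height between the maxillary and mandibular model boundaries.

    Takes the upper boundary coordinate with the maximum third-axis (z) value and
    the lower boundary coordinate with the minimum z value, then returns their
    z difference, as described above.
    """
    upper_extreme = max(upper_boundary_coords, key=lambda c: c[2])
    lower_extreme = min(lower_boundary_coords, key=lambda c: c[2])
    return upper_extreme[2] - lower_extreme[2]

# Hypothetical boundary coordinate sets (mm) around the incisor region.
upper = [(0.0, 1.0, 12.3), (1.5, 1.1, 13.0), (-1.5, 1.2, 12.8)]
lower = [(0.0, 0.9, -10.5), (1.4, 1.0, -11.2), (-1.6, 1.1, -10.9)]
print(feature_height_value(upper, lower))   # compared with the height criterion
```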
As a further improvement of an embodiment of the present invention, before the comparing of the feature height value with the height integrity criterion value representing the preset position condition and the judging, according to the comparison result, of whether the intraoral tissue region corresponding to the target tooth position in the oral cavity three-dimensional model satisfies the preset integrity condition, the method includes: acquiring at least two groups of three-dimensional training model data and at least two groups of height training data corresponding to the three-dimensional training model data; wherein the height training data comprise the distance between an upper extreme tissue coordinate corresponding to the intraoral tissue region on one side of the first direction of the three-dimensional training model and a lower extreme tissue coordinate corresponding to the intraoral tissue region on the other side of the first direction of the three-dimensional training model; and calculating the average height data and the training height standard deviation of the height training data, and calculating the height integrity criterion value according to the difference between the average height data and the product of the training height standard deviation and a preset distribution probability coefficient; wherein the distribution probability coefficient is 3.
As a further improvement of an embodiment of the present invention, the "extracting oral cavity feature data corresponding to the intraoral tissue region in the intraoral image data" specifically includes: calling a corresponding first neural network model and a corresponding second neural network model according to the intraoral organization region; inputting the intraoral image data into the first neural network model to extract an interested region to obtain characteristic region image data; inputting the feature region image data into the second neural network model for feature recognition to obtain the oral cavity feature data corresponding to the oral tissue region.
As a further improvement of an embodiment of the present invention, the first neural network model and the second neural network model are convolutional neural network models, and the intraoral tissue region includes the vestibular sulcus; before the step of calling the first neural network model and the second neural network model corresponding to the intraoral tissue region, the method includes: acquiring at least two groups of training image data and region-of-interest marks corresponding to the training image data, wherein the region-of-interest mark delimits an extension range in the corresponding training image that covers at least the upper lip, the lower lip and the dentition in the training image; calling a preset convolutional neural network model, and performing iterative training with the training image data and the region-of-interest marks as model inputs to obtain a first training parameter and the corresponding first neural network model; acquiring at least two groups of training image data together with tooth position marks and vestibular sulcus bottom marks corresponding to the training image data; and calling a preset convolutional neural network model, and performing iterative training with the training image data, the tooth position marks and the vestibular sulcus bottom marks as model inputs to obtain a second training parameter and the corresponding second neural network model.
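As context for the two convolutional models described above, the following PyTorch sketch shows one possible arrangement: a first network that regresses a region-of-interest box covering the lips and dentition, and a second network that regresses feature points (such as vestibular sulcus bottom marks) inside the cropped region. All class names, layer sizes and the keypoint count are illustrative assumptions, not architectures disclosed in the patent, and both networks would still need to be trained as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoiNet(nn.Module):
    """First network: regresses a normalised region-of-interest box (x1, y1, x2, y2)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 4), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.backbone(x))

class FeatureNet(nn.Module):
    """Second network: regresses K feature points (e.g. vestibular sulcus bottom marks)."""
    def __init__(self, num_points=6):
        super().__init__()
        self.num_points = num_points
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, num_points * 2))

    def forward(self, x):
        return self.head(self.backbone(x)).view(-1, self.num_points, 2)

# Inference sketch: localise the region of interest, crop it, then recognise features.
img = torch.rand(1, 3, 256, 256)                 # stand-in intraoral photograph
box = RoiNet()(img)[0]                           # normalised (x1, y1, x2, y2)
x1, y1, x2, y2 = (box * 256).long().tolist()
crop = img[:, :, min(y1, y2):max(y1, y2) + 1, min(x1, x2):max(x1, x2) + 1]
crop = F.interpolate(crop, size=(128, 128))      # resize the feature region image
keypoints = FeatureNet()(crop)                   # (1, K, 2) coordinates in the crop
```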
As a further improvement of an embodiment of the present invention, the "establishing a pixel size relationship between the intraoral image data and the oral cavity three-dimensional model data" specifically includes: determining at least one target tooth position corresponding to each other in the oral photography image and the oral three-dimensional model as a relative reference tooth position, calculating size data of a crown of the relative reference tooth position in at least one same direction in the oral photography image and the oral three-dimensional model, and respectively obtaining reference pixel size data and reference physical size data; and fitting to obtain a size mapping factor according to the reference pixel size data and the reference physical size data so as to represent the pixel size relation.
As a further refinement of an embodiment of the present invention, the at least one same direction includes a crown width direction; the size mapping factor is a quotient of the reference physical size data and the reference pixel size data.
As a further improvement of an embodiment of the present invention, the "calculating size data of the dental crown at the relative reference dental position in at least one same direction in the intraoral photographic image and the oral three-dimensional model to obtain reference pixel size data and reference physical size data respectively" specifically includes: determining a first reference characteristic point and a second reference characteristic point of the relative reference tooth position on the intraoral camera image, and calculating and obtaining reference pixel size data according to the number of pixels between the first reference characteristic point and the second reference characteristic point; wherein the first reference characteristic point and the second reference characteristic point are positioned at two sides of the long axis of the tooth body of the relative reference tooth position, and the distance between the first reference characteristic point and the corresponding incisal end of the dental crown of the relative reference tooth position is equal to the distance between the second reference characteristic point and the corresponding incisal end of the dental crown of the relative reference tooth position; determining a third reference characteristic point and a fourth reference characteristic point of the relative reference tooth position on the oral three-dimensional model, and calculating and obtaining the reference physical size data according to the Euclidean distance between the third reference characteristic point and the fourth reference characteristic point; the third reference characteristic point and the fourth reference characteristic point are positioned on two sides of the long axis of the tooth body of the relative reference tooth position, and the distance between the third reference characteristic point and the corresponding tangent end of the tooth crown of the relative reference tooth position is equal to the distance between the fourth reference characteristic point and the corresponding tangent end of the tooth crown of the relative reference tooth position.
As a further improvement of an embodiment of the present invention, the "determining at least one target tooth position corresponding to each other in the intraoral photographic image and the oral three-dimensional model as a relative reference tooth position" specifically includes: judging whether the oral photographic image and the upper jaw left middle incisor tooth position and the upper jaw right middle incisor tooth position in the oral three-dimensional model both contain complete crowns or not; if so, determining the left-side central incisor tooth position and the right-side central incisor tooth position of the upper jaw, which correspond to each other in the oral cavity three-dimensional model and the oral cavity image, as relative reference tooth positions; the "determining a first reference feature point and a second reference feature point of the relative reference tooth position on the intraoral camera image, calculating and obtaining the reference pixel size data according to the number of pixels between the first reference feature point and the second reference feature point" specifically includes: determining the first reference feature point at a distal boundary of an incisor tooth crown in a maxillary left side of the intraoral photographic image and the second reference feature point at a distal boundary of an incisor tooth crown in a maxillary right side of the intraoral photographic image; and calculating the number of pixels between the first reference characteristic point and the second reference characteristic point, and taking one half of the number of the pixels as the reference pixel size data.
As a further improvement of an embodiment of the present invention, the distance from the first reference feature point to the gingival end of the corresponding incisor site is three times the distance from the first reference feature point to the coronal end of the corresponding incisor site.
As a further improvement of an embodiment of the present invention, the "determining at least one target tooth position corresponding to each other in the oral cavity three-dimensional model and the intraoral photographic image as a relative reference tooth position" specifically includes: judging whether the left maxillary middle incisor tooth position and the right maxillary middle incisor tooth position in the oral photography image and the oral three-dimensional model both comprise complete crowns; if only the left maxillary central incisor tooth position in the oral photography image and the oral three-dimensional model contains a complete dental crown, determining the left maxillary central incisor tooth position corresponding to each other in the oral photography image and the oral three-dimensional model as a relative reference tooth position; the "determining a first reference feature point and a second reference feature point of the relative reference tooth position on the intraoral photographic image, and calculating and obtaining the reference pixel size data according to the number of pixels between the first reference feature point and the second reference feature point" specifically includes: determining the first reference feature point at a distal boundary of an incisor crown in the maxillary left side of the intraoral photographic image and the second reference feature point at a mesial boundary of the incisor crown in the maxillary left side of the intraoral photographic image; the number of pixels between the first reference feature point and the second reference feature point is calculated, and the number of pixels is taken as the reference pixel size data.
As a further improvement of an embodiment of the present invention, the "reconstructing a portion representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain the oral cavity structure information" specifically includes: determining at least two intraoral tissue feature points from the oral cavity feature data in the intraoral image data; and according to the intraoral tissue characteristic points, fitting tissue space characteristic points corresponding to the intraoral tissue characteristic points at the characteristic missing positions in the oral cavity three-dimensional model according to the pixel size relation, and forming a part representing the oral cavity characteristic data on the oral cavity three-dimensional model data.
As a further refinement of an embodiment of the invention, the intraoral tissue region includes a vestibular sulcus, the oral cavity characteristic data includes vestibular sulcus height data, and the intraoral tissue characteristic points include an upper sulcus bottom characteristic point and a lower sulcus bottom characteristic point.
As a further improvement of an embodiment of the present invention, the "fitting tissue spatial feature points corresponding to the intraoral tissue feature points following the pixel size relationship" specifically includes: calculating the distance between the upper sulcus bottom characteristic point and the gingival margin midpoint of the corresponding tooth position to obtain an upper sulcus bottom distance parameter, and calculating to obtain a corresponding upper sulcus bottom mapping parameter according to the upper sulcus bottom distance parameter and a size mapping factor representing the size relation of the pixels; fitting to obtain an upper space characteristic point representing the position of the upper gully bottom according to the reference characteristic coordinates of the target tooth position corresponding to the upper gully bottom characteristic point and the upper gully bottom mapping parameters; calculating the distance between the lower sulcus bottom characteristic point and the gingival margin midpoint of the corresponding tooth position to obtain a lower sulcus bottom distance parameter, and calculating to obtain a corresponding lower sulcus bottom mapping parameter according to the lower sulcus bottom distance parameter and the size mapping factor; and fitting according to the reference feature coordinates of the target tooth positions corresponding to the lower sulcus bottom feature points and the lower sulcus bottom mapping parameters to obtain lower spatial feature points representing the lower sulcus bottom position.
As a further refinement of an embodiment of the present invention, the intraoral tissue region includes vestibular sulcus and/or tooth root elevations, the oral cavity characteristic data includes arch width data, and the intraoral tissue characteristic points include left-side and right-side sulcus characteristic points.
As a further improvement of an embodiment of the present invention, the "determining at least two intraoral tissue feature points according to the oral cavity feature data in the intraoral image data" specifically includes: determining all intraoral tissue feature points in the intraoral photographic image according to the oral cavity feature data; the "forming the part characterizing the oral cavity feature data on the oral cavity three-dimensional model data" specifically includes: and fitting a tissue space distribution curve or a tissue space distribution curve according to the tissue space characteristic points, and using the tissue space distribution curve or the tissue space distribution curve as a part representing the oral cavity characteristic data on the oral cavity three-dimensional model data.
In order to achieve one of the above objects, an embodiment of the present invention provides an oral cavity structure information generating method, including: acquiring three-dimensional model data of an oral cavity and corresponding intraoral image data; extracting oral cavity characteristic data corresponding to a target intraoral tissue area in an oral cavity three-dimensional model in the intraoral image data, establishing a pixel size relation between the intraoral image data and the oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity characteristic data on the oral cavity three-dimensional model data to obtain the oral cavity structure information.
In order to achieve one of the above objects, an embodiment of the present invention provides an oral cavity structure information generating system, which includes a processor, a memory and a communication bus, where the processor and the memory complete communication with each other through the communication bus; the memory is used for storing application programs; the processor is configured to implement the steps of the oral cavity structure information generating method according to any one of the above technical solutions when executing the application program stored in the memory.
In order to achieve one of the above objects, an embodiment of the present invention provides a storage medium, on which an application program is stored, and when the application program is executed, the steps of the oral cavity structure information generating method according to any one of the above aspects are implemented.
In order to achieve one of the above objects, an embodiment of the present invention provides an oral cavity apparatus which is constructed according to oral cavity structure information generated by the oral cavity structure information generation method according to any one of the above aspects.
As a further improvement of an embodiment of the present invention, the oral appliance is used for training orofacial muscle function and/or for treating oral breathing.
Compared with the prior art, the oral cavity structure information generation method provided by the invention analyses the relative position relationship between the tooth position features and the model boundary features and, when the oral cavity three-dimensional model is judged not to satisfy the preset condition, calls the corresponding intraoral image data to complete the missing features, thereby obtaining complete oral cavity structure information based on the oral cavity three-dimensional model data. This lowers the requirements on the extraction of the oral cavity three-dimensional model and simplifies the steps and the operation logic while high-precision, complete and comprehensive oral cavity structure features are still obtained. The combination of the intraoral image data and the oral cavity three-dimensional model data mainly serves to reconstruct features on the oral cavity three-dimensional model according to the established pixel size relationship, and the output oral cavity structure information is based on three-dimensional data, so it is more intuitive and accurate, convenient for medical workers to analyse further, and also convenient for patients or other relevant personnel to consult.
Drawings
FIG. 1 is a schematic representation of a three-dimensional model of the oral cavity when the oral appliance is not installed in accordance with one embodiment of the present invention.
FIG. 2 is a schematic structural view of a three-dimensional model of the oral cavity during installation of an oral appliance in accordance with one embodiment of the present invention.
Fig. 3 is a schematic configuration diagram of an oral cavity configuration information generating system according to an embodiment of the present invention.
Fig. 4 is a schematic step diagram of a method for generating oral cavity structure information according to an embodiment of the present invention.
Fig. 5 is a schematic step diagram of a method for generating oral cavity structure information according to another embodiment of the present invention.
Fig. 6 is a schematic diagram of a first structure of another three-dimensional model of the oral cavity under a forward viewing angle in accordance with another embodiment of the present invention.
Fig. 7 is a schematic step diagram of a first example of a method for generating oral cavity structure information according to another embodiment of the present invention.
Fig. 8 is a partial step diagram of a specific example of the first embodiment of the oral cavity structure information generation method according to another embodiment of the present invention.
Fig. 9 is a partial step diagram of another specific example of the first embodiment of the oral cavity structure information generation method according to another embodiment of the present invention.
Fig. 10 is a partially enlarged view of a portion corresponding to Z1 in fig. 6 when the first embodiment of the oral cavity structure information generating method according to another embodiment of the present invention is performed.
Fig. 11 is a schematic diagram of a part of the steps of a second example of a method for generating oral cavity structure information according to another embodiment of the present invention.
Fig. 12 is a schematic structural diagram of another oral cavity three-dimensional model in a side view when the second embodiment of the oral cavity structural information generating method is performed according to another embodiment of the present invention.
Fig. 13 is a schematic structural diagram of another oral cavity three-dimensional model in a forward view when the second embodiment of the oral cavity structural information generating method is performed according to another embodiment of the present invention.
Fig. 14 is a schematic diagram of a part of the steps of a third example of a method for generating oral cavity structure information according to another embodiment of the present invention.
Fig. 15 is a schematic structural diagram of another oral cavity three-dimensional model in a forward view when the third embodiment of the oral cavity structural information generating method is performed according to another embodiment of the present invention.
Fig. 16 is a schematic view of a part of the steps of a fourth example of the oral cavity structure information generation method according to another embodiment of the present invention.
Fig. 17 is a diagram illustrating a state where distance training data is distributed on three-dimensional training model data when the fourth example of the method for generating oral cavity structure information is performed according to another embodiment of the present invention.
Fig. 18 is a schematic view of a part of the steps of a fifth example of the oral cavity structure information generating method according to another embodiment of the present invention.
Fig. 19 is a schematic step diagram of a method for generating oral cavity structure information according to still another embodiment of the present invention.
Fig. 20 is a schematic diagram of a part of the steps of a first example of a method for generating oral cavity configuration information according to still another embodiment of the present invention.
Fig. 21 is a schematic diagram of a second structure of another oral three-dimensional model in a forward viewing angle when the first embodiment of the method for generating oral structural information is performed according to still another embodiment of the present invention.
Fig. 22 is a schematic diagram showing a part of steps of a second example of a method for generating oral cavity configuration information according to still another embodiment of the present invention.
Fig. 23 is a schematic diagram illustrating a part of the steps of a first example of a method for generating oral cavity structure information according to an embodiment of the present invention.
Fig. 24 is a schematic view of a part of the steps of a second example of the oral cavity structure information generation method according to the embodiment of the present invention.
Fig. 25 is a partially enlarged schematic view of a portion Z2 in fig. 21 and a corresponding portion Z3 in an intraoral photographic image when a second example of an oral cavity structure information generation method is executed according to an embodiment of the present invention.
Fig. 26 is a partial step diagram of a first specific example of a second embodiment of a method for generating oral cavity structure information according to an embodiment of the present invention.
Fig. 27 is a partial step diagram of a second specific example of the oral cavity structure information generation method according to the embodiment of the present invention.
Fig. 28 is a schematic view of a part of the steps of a third example of the oral cavity structure information generating method according to the embodiment of the present invention.
Fig. 29 is a schematic step diagram of a method for generating oral cavity structure information according to still another embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments shown in the accompanying drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
It should be noted that the term "comprises/comprising" or any other variation thereof is intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Moreover, the terms "first," "second," "third," "fourth," "fifth," etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The main idea of the invention is as follows: after the oral three-dimensional model or its data is obtained, in order to compensate for incomplete content in the oral three-dimensional model, such as missing features at a target tooth position or in other regions, the intraoral photographic image corresponding to the oral three-dimensional model, or its data, is called. On the one hand, a correspondence is established between the intraoral photographic image data and the oral three-dimensional model data; on the other hand, the required information is extracted from the intraoral photographic image data, and the oral three-dimensional model data is then complemented or augmented. This reduces the practical operating requirements of oral three-dimensional model extraction and outputs more intuitive model data for analysis and diagnosis by medical workers. Preferably, before the intraoral image data is called, the method further includes a step of judging the integrity of the oral three-dimensional model, so that the generation process of the oral structure information is simplified and the step of generating the oral structure information from the oral three-dimensional model can be carried out automatically.
The following will further explain various embodiments, technical principles and corresponding technical effects of the present invention by referring to the accompanying drawings. In an embodiment of the present invention, an oral cavity apparatus is provided, fig. 1 shows an installation and cooperation environment of the oral cavity apparatus or an oral cavity apparatus model, and what is shown in fig. 1 can be interpreted as an internal environment of an oral cavity of an actual human body, and can also be an extracted oral cavity entity model or a modeled oral cavity three-dimensional model. Taking the structure shown in fig. 1 as an oral cavity three-dimensional model 100 as an example, the left figure shows a rendered three-dimensional structure, and the right figure shows a contour structure corresponding to at least a part of the structure of the oral cavity three-dimensional model 100.
In a specific example of the present invention, the three-dimensional model 100 of the oral cavity specifically includes teeth 11, labial and buccal mucosa 12, vestibular sulcus 13, root ridge 14, and labial frenulum 15. The vestibular sulcus 13, also called the labial and buccal gingival sulcus, can be interpreted as the upper and lower boundaries of the oral cavity; it is horseshoe-shaped as a whole and is a groove-shaped tissue structure formed where the labial and buccal mucosa 12 transitions to the alveolar mucosa. Specifically, in the three-dimensional model 100 of the oral cavity, the teeth are arranged in the normal order with the human head in an upright position; in this case, according to the FDI (Fédération Dentaire Internationale) tooth position notation, tooth 11 is positioned at the upper left of tooth 31, and tooth 41 is positioned at the lower left of tooth 21. Based on this, it can be defined that the vestibular sulcus 13 includes an upper vestibular sulcus 131 and a lower vestibular sulcus 132, wherein the vestibular sulcus 13 located away from the No. 41 and No. 31 teeth relative to the No. 11 and No. 21 teeth is the upper vestibular sulcus 131, and the vestibular sulcus 13 located away from the No. 11 and No. 21 teeth relative to the No. 41 and No. 31 teeth is the lower vestibular sulcus 132. Certainly, when the three-dimensional model 100 of the oral cavity is observed from different viewing angles, or is set to different positions and postures, the definitions of the upper vestibular sulcus 131 and the lower vestibular sulcus 132 and of the upper and lower positions may be adjusted correspondingly, which is known to those skilled in the art and will not be described again here.
As shown in fig. 1 and 2, the oral appliance 200 provided by the present invention may be a physical oral appliance, or a corresponding three-dimensional model or physical model. The oral appliance 200 is customized and constructed according to the oral cavity structure information, so that when it is matched or installed with the oral cavity three-dimensional model 100 or the corresponding actual human oral environment, the oral appliance 200 fits as closely as possible against oral tissues such as at least one of the teeth 11, the labial and buccal mucosa 12, the vestibular sulcus 13, the root ridge 14 and the labial frenulum 15, thereby improving comfort and fit while still achieving the functions of the oral appliance 200.
The oral appliance 200 may be configured in a variety of types; for example, it may be a dental malocclusion appliance or retainer, or an appliance for training orofacial muscle function (also known as orofacial myofunctional therapy, OMT) and/or for treating mouth breathing. Corresponding to the different types of oral appliance 200, the oral cavity structure information supporting the construction process may also differ. In a first case, the oral cavity structure information includes morphological information of all tissues in the oral cavity, specifically including at the same time the dentition crown morphological features, the morphological distribution of the tooth root ridges 14 and the morphological distribution of the vestibular sulcus 13. In a second case, where the oral appliance 200 is a dental malocclusion appliance or retainer, the oral structure information includes at least part of the dentition crown morphological features, and preferably the morphological distribution of the root ridges 14. In a third case, where the oral appliance 200 is an orofacial muscle barrier, the oral structure information includes at least part of the vestibular sulcus morphological features, and preferably the dentition crown morphological features.
Regardless of which of the above application scenarios the oral appliance 200 belongs to, at least one dimension of the oral appliance 200 should be designed larger than the corresponding dimension of the oral three-dimensional model 100, so that the oral appliance 200 maintains at least a certain clearance (yielding) distance from the tooth root ridge 14, preventing the oral model from being damaged, or the wearing comfort from being reduced, by excessive squeezing of the gum. Based on this, in one embodiment, the oral cavity three-dimensional model 100 includes a right distal root ridge 141 and a left distal root ridge 142, and the oral appliance 200 includes a right end 21 corresponding to the right distal root ridge 141 and a left end 22 corresponding to the left distal root ridge 142. In embodiments where the oral appliance 200 is configured for training orofacial muscle function and/or for treating mouth breathing, the right end 21 and the left end 22 may specifically be the ends on the side of the buccal shield away from the labial shield, or the ends on the side of the buccal shield away from the breathing orifice.
The right end 21 may be defined as the end of the oral appliance 200 on the side of the right distal root ridge 141 facing away from the soft palate, and the distance of the right end 21 from the right distal root ridge 141, measured relative to the soft palate, may be defined as the "clearance distance"; likewise, the left end 22 may be defined as the end of the oral appliance 200 on the side of the left distal root ridge 142 facing away from the soft palate, and the distance of the left end 22 from the left distal root ridge 142, measured relative to the soft palate, may be defined as the "clearance distance".
The "offset distance" is freely selected according to the specific type or function of the oral appliance 200, for example, when the oral appliance 200 is configured as a dental deformity appliance or holder, the distance between the right end 21 and the left end 22 may be equal to or less than the distance between the right distal root ridge 141 and the left distal root ridge 142, so as to constrain the teeth of the corresponding tooth position to be displaced or held in the original position. Also for example, when the oral cavity instrument 200 is configured for training orofacial muscle function and/or for treating oral breathing, or is configured as other equipment for forming a barrier in the mouth, the distance between the right end 21 and the left end 22 may be greater than the distance between the right distal root ridge 141 and the left distal root ridge 142, and preferably, the difference between the distance between the right end 21 and the left end 22 and the distance between the right distal root ridge 141 and the left distal root ridge 142 is greater than or equal to 3mm, so as not to interfere excessively with soft tissues such as the gums at the root ridges 142, thereby affecting the wearing experience or causing wear of the oral cavity model.
The left and right distal root ridges 142, 141, which are the most distant tooth positions in orientation relative to the midline of the teeth, may generally refer to the root ridge of the maxillary second molar or the root ridge of the mandibular second molar for adults, and may generally refer to the root ridge of the maxillary second deciduous molar or the root ridge of the mandibular second deciduous molar for children. The root ridge 14 at any of the above-mentioned dental sites may be interpreted as an intraoral tissue that wraps the root and protrudes in a direction away from the soft palate with respect to the labial surface of the crown, and may specifically be a gingival part located outside the root canal and an alveolar bone part wrapped by the gingival part.
Further, when the oral appliance 200 is mounted on or matched with the three-dimensional model 100 of the oral cavity or the actual oral environment of the human body, the upper end portion close to the upper jaw may be fitted to the upper vestibular sulcus 131, and the lower end portion close to the lower jaw may be fitted to the lower vestibular sulcus 132, that is, the distance between the upper end portion and the lower end portion of the oral appliance 200 may be equal to the distance between the upper vestibular sulcus 131 and the lower vestibular sulcus 132.
Regarding the above size relationship, considering that when the corresponding three-dimensional model 100 of the oral cavity is extracted from the actual human oral environment, the distance between the upper vestibular sulcus 131 and the lower vestibular sulcus 132 may, due to stretching, be larger than the distance between the two vestibular sulci of the human body in the normal living state, the above "equal" relationship may also be "slightly smaller". Of course, in order to improve the training effect on orofacial muscles and/or the therapeutic effect on mouth breathing, the above "equal" relationship may likewise be "slightly larger". Preferably, the sectional shape and the extending distribution curve of the upper end portion of the oral appliance 200 may also be fitted to the distribution curve and tissue shape of the upper vestibular sulcus 131, and the sectional shape and the extending distribution curve of the lower end portion of the oral appliance 200 may also be fitted to the distribution curve and tissue shape of the lower vestibular sulcus 132, so that the oral appliance 200 as a whole is also configured in a horseshoe shape.
Under this overall horseshoe-shaped configuration, in order to avoid the labial frenulum 15 in the mouth and prevent the oral appliance 200 from pressing on it, the middle part of the upper end portion and the middle part of the lower end portion of the oral appliance 200 may each be provided with an avoiding portion recessed toward the geometric center of the oral appliance 200. Meanwhile, considering that the shape of the labial frenulum 15 may differ between different oral models and actual human oral environments, the width of the avoiding portion along the lengthwise extension direction of the oral appliance 200 should be at least greater than or equal to the width of the labial frenulum 15 on the three-dimensional oral model 100, so as to avoid unnecessary constraint on the soft tissue of the labial frenulum 15 and the resulting pain to the wearer.
On the one hand, the above description of the features of the oral appliance 200 can be taken as a limitation on the morphological features of the oral appliance 200 itself, so as to achieve the corresponding technical effects described above; on the other hand, in one embodiment, the oral appliance 200 is configured to be constructed from oral structure information generated according to an oral structure information generation method. Based on this, the above description of the oral appliance 200 can also be interpreted as a beneficial effect of the oral cavity structure information or of the oral cavity structure information generation method; in other words, when the steps of the oral cavity structure information generation method provided by the present invention are executed, oral cavity structure information can be generated such that the correspondingly produced oral appliance has any of the above features and technical solutions.
The invention also provides a storage medium, which may be embodied as a computer-readable storage medium. The storage medium may be provided in a computer and store an application program; in this case, the storage medium may be any available medium that can be accessed by the computer, or a storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium such as a floppy disk, a hard disk or a magnetic tape, an optical medium such as a DVD (Digital Video Disc), or a semiconductor medium such as an SSD (Solid State Disk). When executed, the application program performs the steps of a method for generating oral structure information, at least including: obtaining intraoral image data and oral three-dimensional model data, establishing a pixel size relationship, and reconstructing oral feature data on the oral three-dimensional model data. Preferably, a step of judging the integrity of the oral cavity three-dimensional model data is also executed.
An embodiment of the present invention further provides an oral cavity structure information generating system 300 as shown in fig. 3, which includes a processor 31, a memory 33 and a communication bus 34. The processor 31 and the memory 33 communicate with each other through the communication bus 34. To further extend the functionality of the oral structure information generation system 300, the system may also include a communication interface 32, through which the oral structure information generation system 300 communicates with other systems or devices such as medical worker operating systems/devices, patient clients, manufacturer/warehouse management systems, or manufacturing/warehousing devices. Similarly, the processor 31, the communication interface 32 and the memory 33 can communicate with each other via the communication bus 34.
Correspondingly, the memory 33 is used to store an application program, and the processor 31 is configured to execute the application program stored in the memory 33; this may be the application program stored in the storage medium described above, i.e. the storage medium may be configured to be at least included in the memory 33. On this basis, when executing the application program, the processor 31 can implement an oral cavity structure information generating method, which specifically includes obtaining intraoral image data and oral three-dimensional model data, establishing a pixel size relationship, reconstructing oral feature data on the oral three-dimensional model data, and the like, and preferably also includes the step of judging the integrity of the oral cavity three-dimensional model data.
The communication bus 34 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus 34 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 3, but this does not indicate that there is only one bus or one type of bus.
The memory 33 may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk memory. The processor 31 may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), etc., and may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Of course, although the above is provided as an oral cavity structure information generating system 300, it can be seen from the description of the system 300 that its internal components, combined according to the embodiments, may also be integrated into a single device. On this basis, the oral cavity structure information generating system 300 may refer not only to a large system such as a fieldbus control system, but also to a small circuit system or to the control system within an oral cavity structure information generating device.
As shown in fig. 4, an embodiment of the present invention provides an oral cavity structure information generating method, in which an application program or a command corresponding to the method can be loaded on the storage medium and/or the oral cavity structure information generating system 300, so as to achieve the technical effect of generating oral cavity structure information. The method for generating the oral cavity structure information may specifically include the following steps.
And step 40, acquiring oral three-dimensional model data, and judging whether an appointed intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
The oral cavity three-dimensional model data refers to data information capable of establishing an oral cavity three-dimensional model, and the source of the data information can be multi-angle electronic data information such as oral cavity photographic images, oral cavity scanning models and the like, or entity models such as oral cavity silica gel models, oral cavity plaster models and the like. For electronic data information, the oral cavity three-dimensional model data which is as complete as possible (but not required to completely contain all the intraoral tissue characteristics) can be formed through compound recombination and repair; for the solid model, reconstruction and occlusion processing may be performed by CT (Computed Tomography). Preferably, the three-dimensional model of the oral cavity may be a three-dimensional model characterizing the structural features of the mandibular surface of the patient in the occluded state.
The "designated intraoral tissue region" may be interpreted as a region on a three-dimensional model of the oral cavity determined to be suitable for different clinical purposes or task types. The determined standard can be defined by medical workers, or can be automatically determined by the task type preset by the system. For the former, the steps of "acquiring intraoral organization selected instruction" may be specifically included; for the latter, the step "determining the task type and the corresponding intraoral tissue area according to the analysis instruction" may be specifically included. The basis for determining the intraoral tissue area can be determined according to the positions of different tooth positions in the oral three-dimensional model, or determined after segmentation according to indexes such as point cloud characteristics, gray scale characteristics and the like of other oral tissues. Such intraoral tissue regions include, but are not limited to, vestibular sulcus, labial frenulum, dental arch, and the like.
The integrity condition can be a preset index requirement for measuring the overall integrity of the oral three-dimensional model. The integrity score may be obtained by manual assessment and then compared by the system against the integrity condition. Based on this, step 40 may comprise: acquiring an integrity score based on the oral three-dimensional model data, and judging whether at least the intraoral tissue region in the oral three-dimensional model meets a preset integrity condition.
The judgment result of the integrity condition can also be obtained by comparing after being evaluated in advance by artificial intelligence and machine learning means such as a pre-constructed neural network model. Based on this, step 40 may include: analyzing the oral three-dimensional model to obtain an integrity score at least corresponding to the intraoral tissue region, and judging whether at least the intraoral tissue region in the oral three-dimensional model meets a preset integrity condition. Wherein the integrity score points to at least the intraoral tissue region.
The process of evaluating the integrity of the oral three-dimensional model includes, but is not limited to, performing positional relationship analysis, gray scale analysis, point cloud density analysis, image or model continuity analysis on at least the intraoral tissue region.
Of course, the setting of the integrity condition is not limited to the local integrity judgment of some important intraoral tissue regions, and may be the setting of the entire oral three-dimensional model. Correspondingly, the integrity evaluation process may be performed on the whole oral three-dimensional model.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
The intraoral image data corresponds to the oral three-dimensional model data, and such correspondence is not limited to the fact that the two must point to the same object, but rather, so long as substantially the same structural features can be characterized. For example, the oral cavity three-dimensional model data may be established based on an oral cavity scanning model with a human body actual oral cavity environment as an object, and the intraoral photographic image may be established with an oral cavity silica gel model, an oral cavity plaster model and the like as an object, as long as the oral cavity silica gel model or the oral cavity plaster model points to the intraoral tissue structure of the same oral cavity with the oral cavity scanning model. It should be noted that the intraoral photographic image data needs to include an intraoral tissue region to which the oral characteristic data to be reconstructed corresponds.
The purpose of establishing the pixel size relationship is not limited to performing corresponding scaling on the intraoral photographic image and the oral cavity three-dimensional model to match the intraoral photographic image and the oral cavity three-dimensional model, and the pixel size relationship can be established only by mapping and reflecting the oral cavity characteristic data on the oral cavity three-dimensional model or the data thereof in a manner of keeping a relative position relationship.
Therefore, the completeness of at least part of the region on the oral three-dimensional model can be judged, the at least part of the region on the oral three-dimensional model can be selectively reconstructed, the missing oral characteristic data or information on the oral three-dimensional model can be completed, and therefore complete oral structural information can be generated for analysis and diagnosis of medical workers.
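To make the control flow of the above embodiment easier to follow, the following is a minimal Python sketch of step 40 and step 43. All helper functions are placeholders standing in for the concrete techniques discussed later (integrity judgment, feature extraction, pixel size mapping, reconstruction); their names, signatures and toy data structures are assumptions, not the patent's actual modules.

```python
def select_intraoral_tissue_region(model_data):
    # Placeholder: in practice the region is chosen by the medical worker or by task type.
    return model_data.get("target_region", "upper_vestibular_sulcus")

def meets_integrity_condition(model_data, region):
    # Placeholder for step 40: e.g. the positional comparison of steps 41-42 below.
    return region in model_data.get("complete_regions", set())

def extract_oral_feature_data(images, region):
    # Placeholder: feature points of `region` detected in the intraoral photographs.
    return [pt for img in images for pt in img.get(region, [])]

def establish_pixel_size_relation(images, model_data):
    # Placeholder: millimetres of model space represented by one image pixel.
    return model_data.get("mm_per_pixel", 0.1)

def reconstruct_features(model_data, features, mm_per_pixel):
    # Placeholder: append the transferred feature points to the model data.
    rebuilt = dict(model_data)
    rebuilt["reconstructed_points"] = [(u * mm_per_pixel, v * mm_per_pixel)
                                       for (u, v) in features]
    return rebuilt

def generate_oral_structure_info(model_data, load_intraoral_images):
    region = select_intraoral_tissue_region(model_data)   # designated tissue region
    if meets_integrity_condition(model_data, region):     # step 40
        return model_data                                  # model already usable as-is
    images = load_intraoral_images()                       # step 43 starts here
    features = extract_oral_feature_data(images, region)
    scale = establish_pixel_size_relation(images, model_data)
    return reconstruct_features(model_data, features, scale)

if __name__ == "__main__":
    model = {"target_region": "upper_vestibular_sulcus", "complete_regions": set()}
    photos = [{"upper_vestibular_sulcus": [(412, 230), (430, 228)]}]
    print(generate_oral_structure_info(model, lambda: photos))
```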
As shown in fig. 5, another embodiment of the present invention provides an oral cavity structure information generating method, which provides steps 41 and 42 as a specific implementation of step 40 and preferably judges whether the oral cavity three-dimensional model meets the integrity condition by analyzing the target tooth position feature and the model boundary feature. It is to be understood that the application program or instructions corresponding to this method may also be loaded in the storage medium and/or the oral cavity structure information generation system 300. The method for generating the oral cavity structure information may specifically include the following steps.
And step 41, acquiring oral cavity three-dimensional model data, and calculating to obtain a position relation between a target tooth position and a model boundary and/or a position relation between an upper jaw model boundary and a lower jaw model boundary according to the target tooth position characteristic and the model boundary characteristic corresponding to the target tooth position characteristic in the oral cavity three-dimensional model data.
The target tooth position feature has two levels of meaning. First, "target" indicates that the target tooth position feature actually points to the part of the oral cavity three-dimensional model that needs to be analyzed, i.e. it corresponds to a specific position requiring analysis. There are various solutions for the position analysis and determination steps, in other words for how the target tooth position feature is determined. In one technical solution, the target tooth position may be designated by a medical worker; for example, if the caries condition of a specific tooth position needs to be examined, that tooth position, say the first molar position, may be selected as the target tooth position and its features further analyzed; for another example, if the value of the vestibular sulcus in a certain dimension needs to be checked, the tooth positions around the whole or part of the vestibular sulcus can be selected as the target tooth positions, from which the target tooth position features are then obtained. In another technical solution, the position pointed to by the target tooth position feature may be determined according to a target task type; the target task type may be a task of making an oral appliance such as a dentognathic appliance or an orofacial muscle barrier, or a task of checking the integrity of the intraoral dentition (for example, whether any tooth is missing). For the former, the method may specifically include the steps of: acquiring type information of a target appliance and determining the target intraoral tissue; and determining the corresponding tooth position as the target tooth position according to the target intraoral tissue.
Secondly, the "feature" represents the functional role, that is, the target tooth position feature is actually a feature information capable of representing the relative position condition of the target tooth position, such as the coordinates of a certain feature point on the crown of the target tooth position or the coordinates of a certain feature point of the root-ridge of the target tooth position. The feature point and its coordinates may be coordinates of not only an edge of a certain intraoral tissue at a dental site or a certain point on a surface, but also a geometric center point of a certain intraoral tissue at a dental site, or coordinates of a set center point of a whole formed by a plurality of intraoral tissues at a dental site. Before forming the target dental position feature, a process of determining a target dental position number may be generally included (when determining that the target dental position points to the model position, the process may also be multiplexed), and thus, the method may specifically include the steps of: traversing all the tooth position data in the oral cavity three-dimensional model data, and determining the number of each tooth position; and respectively determining a characteristic coordinate point on each tooth position according to the number of each tooth position as the target tooth position characteristic.
The model boundary feature, at the level of "boundary", may be a certain boundary point or a boundary distribution curve on the three-dimensional model of the oral cavity. At the level of the model boundary and its correspondence with the target tooth position feature, it may be a point or coordinate located within a certain area or volume around the target tooth position feature and lying on the boundary of the oral cavity three-dimensional model; it may also be a model boundary point or coordinate obtained by traversal search from the target tooth position feature along a preset direction. The correspondence between the two can then be used to reflect the positional relationship between the tooth position and the model boundary, or the positional relationship between the maxillary model boundary and the mandibular model boundary, to further judge whether the oral cavity three-dimensional model satisfies the integrity condition at least at the corresponding oral tissue region, and even to perform operations such as reconstructing features on the oral cavity three-dimensional model.
As shown in fig. 6, the first tooth position 11a in the oral cavity three-dimensional model 100 may be determined as the target tooth position, and its first gingival margin midpoint 11A may be determined as the coordinate point characterizing the target tooth position feature. When the embodiment of traversing for the model boundary feature corresponding to the target tooth position feature along a preset direction is adopted, the first model boundary point 10A corresponding to the first gingival margin midpoint 11A may be retrieved on the oral cavity three-dimensional model 100 along the first reference direction D11, and the model boundary feature may be characterized by the first model boundary point 10A. Of course, there may be multiple sets of target tooth positions, model boundaries, target tooth position features and model boundary features: for example, with the second tooth position 11b as the target tooth position, the second gingival margin midpoint 11B and the second model boundary point 10B may be obtained in turn based on the same scheme, and with the fourth tooth position 11d as the target tooth position, the fourth gingival margin midpoint and the fourth model boundary point may be determined in turn based on the same scheme at the lower jaw of the oral cavity three-dimensional model 100. Based on this, the positional relationship between the maxillary model boundary and the mandibular model boundary in step 41 can be calculated using the first model boundary point 10A or the second model boundary point 10B together with the fourth model boundary point. It is understood here that the first reference direction D11 may of course include a corresponding first reference opposite direction D11', and the two may be defined as a single direction and applied adaptively: when retrieving the maxillary model boundary feature, the search proceeds along the first reference direction D11 in a direction away from the occlusal plane (an imaginary plane formed from the mesial contact point of the maxillary central incisors to the mesiobuccal cusps of the bilateral first molars), and when retrieving the mandibular model boundary feature, the search proceeds along the first reference opposite direction D11' in a direction away from the occlusal plane.
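The traversal search just described can be sketched in Python as follows. This is only one plausible numerical reading of the step: the model is assumed to be available as a point cloud, the boundary point is approximated as the last cloud point encountered near a ray cast from the gingival margin midpoint along the reference direction, and the lateral tolerance, point cloud and coordinates are made-up assumptions.

```python
import numpy as np

def find_boundary_point(points, start, direction, lateral_tol=0.5):
    """Approximate the model boundary point reached from `start` along `direction`."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    rel = points - np.asarray(start, dtype=float)
    along = rel @ d                                # signed distance along the ray
    lateral = np.linalg.norm(rel - np.outer(along, d), axis=1)
    mask = (along > 0) & (lateral <= lateral_tol)  # points ahead of and near the ray
    if not mask.any():
        return None                                # the model ends at the start point
    idx = np.argmax(np.where(mask, along, -np.inf))
    return points[idx]                             # farthest point reached = boundary

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform([-5, -5, 0], [5, 5, 12], size=(5000, 3))  # toy "model" points
    gingival_margin_midpoint = np.array([0.0, 0.0, 2.0])          # stands in for 11A
    d11 = np.array([0.0, 0.0, 1.0])                # away from the occlusal plane
    print(find_boundary_point(cloud, gingival_margin_midpoint, d11))
```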
The three-dimensional model 100 of the oral cavity includes a first model boundary 10a corresponding to the first model boundary point 10A and a second model boundary 10b corresponding to the second model boundary point 10B. The first model boundary 10a and the second model boundary 10b may be interpreted as model boundaries formed by fitting the model boundary points obtained by traversal search from a plurality of tooth position feature coordinate points, or as model boundaries that can intersect the reference lines corresponding to the plurality of tooth position feature coordinate points to form the corresponding model boundary points. The former interpretation makes the boundary recognizable on the user side, while the latter reflects its actual presence in the three-dimensional model 100 of the mouth.
Correspondingly, the first tooth position 11a in the three-dimensional model 100 of the oral cavity may again be determined as the target tooth position, but the coordinate point representing the target tooth position feature may instead be determined from the geometric center of the region enclosed by the crown convex-hull contour of the first tooth position 11a; that is, the target tooth position feature may be represented by the first crown labial surface center point 11A'. Meanwhile, a second reference direction D12, distinct from the first reference direction D11, may be used to perform a traversal search inclined with respect to the lengthwise extension direction of the tooth, so that the first reference boundary point 10A' corresponding to the first crown labial surface center point 11A' can be retrieved to represent the model boundary feature. The second reference direction D12 may be used for all teeth when determining the model boundary features; of course, the direction used for each tooth to determine the model boundary feature may also differ from tooth to tooth.
In one embodiment, the actual orientations reflected by the second reference direction D12 at the individual tooth positions may differ from one another, thereby generating derivative directions such as the third reference direction D13. The determination of the second reference direction D12 and the third reference direction D13 may comprise the steps of: determining a global reference center point C1 of the oral cavity three-dimensional model on the labial surface side of the dental crowns; and taking the line through the global reference center point C1 and the first crown labial surface center point 11A' as a reference line, with the extending direction of this reference line as the second reference direction D12 corresponding to the first tooth position 11a. On this basis, for the second tooth position 11b, the global reference center point C1 and the corresponding second crown labial surface center point can be used as the reference line to determine the third reference direction D13. The global reference center point C1 may preferably be the midpoint of a line segment formed at least partially on the dental midline, and may specifically be the midpoint of the projection of the dental midline onto the three-dimensional oral cavity model 100.
In another embodiment, the direction for the traversal search of the model boundary feature may also be determined by a combination of a local center point and an edge point. Preferably, the local center point may be a crown labial surface center point, and the edge point may be a gingival margin midpoint. For example, the third tooth position 11c in the three-dimensional model of the oral cavity 100 may be determined as the target tooth position, the third crown labial surface center point C2 corresponding to the third tooth position 11c may be determined as the local center point according to the method steps provided above, and the third gingival margin midpoint 11C corresponding to the third tooth position 11c may be determined as the edge point according to the method steps provided above. Then, the line through the third crown labial surface center point C2 and the third gingival margin midpoint 11C is taken as the reference line, and the extending direction of this reference line is taken as the fourth reference direction D14 corresponding to the third tooth position 11c. In this way, the third model boundary point 10C corresponding to the third tooth position 11c can be obtained.
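In both of the schemes above, the reference direction is simply the normalized vector of the line through two reference points. A short sketch under that assumption follows; the coordinate values are made up purely to exercise the function.

```python
import numpy as np

def reference_direction(from_point, to_point):
    """Unit vector of the reference line through two reference points."""
    v = np.asarray(to_point, dtype=float) - np.asarray(from_point, dtype=float)
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("reference points coincide; no direction defined")
    return v / n

if __name__ == "__main__":
    c1 = np.array([0.0, 0.0, 0.0])                    # global reference center point C1
    crown_center_11a = np.array([-4.0, 18.0, 6.0])    # first crown labial surface center
    print(reference_direction(c1, crown_center_11a))  # stands in for D12
    crown_center_11c = np.array([-19.0, 11.0, 5.0])   # local center point C2
    gingival_margin_11c = np.array([-20.0, 10.0, 9.0])  # edge point 11C
    print(reference_direction(crown_center_11c, gingival_margin_11c))  # stands in for D14
```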
At this point, as can be appreciated by those skilled in the art, the invention has various embodiments at least including a gingival margin midpoint, a crown labial surface midpoint and the like in terms of determining and characterizing the target tooth position; on the determination of the boundary characteristics of the model, at least various embodiments such as region division determination, direction determination and the like are included; the embodiment of determining the boundary characteristics of the model according to the direction at least comprises various embodiments of determining according to a fixed direction, determining according to a central point of a lip surface of a crown and a global reference central point, determining according to a central point of a gingival margin and a local reference central point and the like. In addition, the embodiments that can be easily inferred from the various embodiments and examples are all included in the technical scope of the present invention.
Notably, the calculation in step 41 covers three technical solutions: the first calculates the positional relationship between the target tooth position and the model boundary; the second calculates the positional relationship between the maxillary model boundary and the mandibular model boundary; and the third calculates both the positional relationship between the target tooth position and the model boundary and the positional relationship between the maxillary model boundary and the mandibular model boundary.
And 42, comparing the position relation with a preset position condition corresponding to the position relation, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition.
As shown in fig. 6, the preset position condition may take various forms, such as orientations like "above", "below", "left" and "right", or such as an interval or threshold value expressing a numerical magnitude relationship. For the former, the judgment may be performed on a single feature: for example, the position of the retrieved model boundary feature coordinate point on the oral three-dimensional model 100 is determined; on the premise that the intraoral tissue region points to the height of the vestibular sulcus, for the upper jaw, when the model boundary feature coordinate point is located below a set standard coordinate point, it is judged that at least the preset position condition is not met and the integrity condition may not be met; on the premise that the intraoral tissue region points to the width of the vestibular sulcus, when the model boundary feature coordinate point on the left side of the oral cavity (the side where the first tooth position 11a and the third tooth position 11c are located) is located to the left of a set standard coordinate point, it is likewise judged that at least the preset position condition is not met and the integrity condition may not be met. It can be seen that the preset position condition is set according to the orientation indicated by the intraoral tissue region as well as the position of the target tooth position in the three-dimensional model 100 of the mouth. In addition, the judgment can be performed according to a geometric figure or distribution range formed by the model boundary feature coordinate point and the target tooth position feature coordinate point: for example, whether the connecting line between the model boundary feature coordinate point and the target tooth position feature coordinate point intersects a preset line segment or preset region; or, for example, a circle is drawn with this connecting line as its diameter, and it is judged whether the area of the circle falls within a preset region or overlaps a preset line segment.
For the latter, preferably, the Euclidean distance between the model boundary feature coordinate point and the target tooth position feature coordinate point, or the length of the projection of their connecting line in a certain direction, is used as the analysis object, and its numerical value is compared with a set threshold or range to determine whether the intraoral tissue region corresponding to the target tooth position meets the preset position condition. The integrity condition may preferably be a condition derived from whether the preset position condition is met. For example, a threshold may be set for the number of positions that do not meet the preset position condition; when the number of intraoral tissue positions failing the preset position condition reaches this threshold, it is determined that the oral three-dimensional model cannot be directly put into use, or that it is meaningless to continue analyzing it.
In this regard, the present invention provides various embodiments for calculating the "positional relationship"; the integrity condition and the position condition are at least indexes corresponding to that positional relationship. For example, in a preferred embodiment, taking the first tooth position 11a as an example, if the parameter characterizing the positional relationship is the length of the projection, in the first reference direction D11, of the line between the first gingival margin midpoint 11A and the first model boundary point 10A, then the corresponding preset position condition should likewise be a projection length in the first reference direction D11, namely that between the gingival margin midpoint of the central incisor corresponding to the first tooth position 11a (which may be specified as the maxillary left central incisor when more accuracy is required) and an intraoral tissue feature point along that direction. In short, the preset position condition should be consistent with the positional relationship at least in terms of direction, tooth position and point selection. The integrity condition is preferably a judgment condition formed by integrating all the judgment results against the preset position condition, thereby establishing an indirect association with the positional relationship.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
When it is determined through steps 41 to 42 that the vestibular sulcus feature above the maxillary central incisors is missing and the preset integrity condition is not satisfied, the acquired intraoral image data should at least include the vestibular sulcus tissue region above the maxillary central incisors. It can be seen that the contribution of step 42 to step 43 is more than the bare judgment of "whether the integrity condition is satisfied": it also supplies the specific basis and context of that judgment, including at least the location in the oral cavity of the part or tooth position that does not satisfy the preset position condition.
As illustrated in fig. 6, taking the first tooth position 11a as an example, when it is determined during the traversal search along the first reference direction D11 that the first gingival margin midpoint 11A and the first model boundary point 10A do not satisfy the preset position condition (for example, the preset position for the first model boundary point 10A should be at the point 13A or beyond along the first reference direction D11), the oral cavity feature data corresponding to the vestibular sulcus is obtained from the intraoral imaging data, either as the first vestibular sulcus bottom coordinate point 13A or as the distance between the first vestibular sulcus bottom coordinate point 13A and the first gingival margin midpoint 11A. At this time, according to the position of the first vestibular sulcus bottom coordinate point 13A in the intraoral imaging data and the pixel size relationship, at least a first vestibular sulcus bottom coordinate point 14A is reconstructed on the oral cavity three-dimensional model 100, forming the "dot" in fig. 6 pointed to by the first vestibular sulcus bottom coordinate point 13A. Obviously, the intraoral photographic image does not need to be magnified to the same size as the three-dimensional model 100 of the oral cavity in this process. Further, based on the same or similar steps, a first vestibular sulcus bottom reference point 13A', a second vestibular sulcus bottom coordinate point 13B, a third vestibular sulcus bottom coordinate point 13C, and the like may be formed correspondingly.
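The reconstruction step can be sketched numerically as follows, under stated assumptions: the gingival margin midpoint is assumed visible in both the photograph and the model and is used as a shared anchor, the pixel size relationship is reduced to a single mm-per-pixel scale, and the image axes are assumed to map onto fixed model axes. All coordinates, the scale value and the axis mapping are illustrative, not values from the patent.

```python
import numpy as np

def reconstruct_point(model_anchor, image_anchor_px, image_point_px,
                      mm_per_pixel, image_axes_in_model):
    """Map an image pixel into model space, preserving its offset from the anchor."""
    offset_px = np.asarray(image_point_px, float) - np.asarray(image_anchor_px, float)
    offset_mm = offset_px * mm_per_pixel          # pixel size relationship
    # image_axes_in_model: 3x2 matrix whose columns are the image u/v axes
    # expressed as unit vectors in model coordinates.
    return np.asarray(model_anchor, float) + image_axes_in_model @ offset_mm

if __name__ == "__main__":
    margin_midpoint_model = np.array([-4.0, 18.0, 7.0])   # gingival margin midpoint on the model
    margin_midpoint_px = np.array([412.0, 305.0])         # the same point in the photograph
    sulcus_bottom_px = np.array([415.0, 230.0])           # vestibular sulcus bottom in the photo
    # Assumed mapping: photo u -> model +x, photo v (downward) -> model -z.
    axes = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, -1.0]])
    print(reconstruct_point(margin_midpoint_model, margin_midpoint_px,
                            sulcus_bottom_px, mm_per_pixel=0.1,
                            image_axes_in_model=axes))
```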
Of course, this does not mean that the present invention excludes the technical solution of unifying the size and scale of the intraoral photographic image and the oral three-dimensional model 100. In particular, when a fitted curved surface (the first tissue spatial distribution curved surface Sa shown in fig. 6, or the third tissue spatial distribution curved surface at the third tooth position 11c) or a fitted curve (the first tissue spatial distribution curve 13a shown in fig. 6, or the second tissue spatial distribution curve 13b) is required, the pixel size relationship can be established for a plurality of items of intraoral feature data, thereby achieving an effect similar to a scaling transformation of the intraoral tissue region.
As shown in fig. 7, the present invention provides a first example of the oral cavity structure information generation method according to the above embodiment, and the first example specifically includes the following steps.
Step 410, obtaining oral cavity three-dimensional model data.
Step 411, determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to represent the target tooth position feature.
The reference feature coordinates may be the coordinates of the above-mentioned gingival margin midpoint or of the crown labial surface center point, thereby characterizing at least the position of the target tooth position. Of course, the coordinates of a plurality of points distributed along the gingival margin, or of a plurality of points distributed on the labial surface of the crown, can also be used to characterize the morphological features of the target tooth position; the position feature of the target tooth position may further be characterized by using, as the reference feature coordinates, the incisal edge midpoint coordinates of an incisor, the cusp tip coordinates of a canine, or the occlusal surface center point coordinates of a molar.
The preset feature recognition rule may be to determine the crown contour feature of the target tooth position according to the pixel gray values or the relative spatial positions of the point cloud set, and then, among the spatial points on the convex hull contour, to find the coordinates of the spatial point farthest from the occlusal plane as the reference feature coordinates reflecting the gingival margin midpoint coordinates. Of course, when the reference feature coordinates point to other positions of the target tooth position, the extraction process may have other derived technical solutions.
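A compact sketch of that rule is given below, assuming the occlusal plane has already been aligned with z = 0 in the model coordinate system; the toy contour points are made up for illustration only.

```python
import numpy as np

def gingival_margin_midpoint(contour_points):
    """Among crown contour points, return the one farthest from the occlusal plane z = 0."""
    pts = np.asarray(contour_points, dtype=float)
    return pts[np.argmax(np.abs(pts[:, 2]))]

if __name__ == "__main__":
    contour = [[-4.2, 17.9, 0.5], [-4.0, 18.1, 7.2], [-3.8, 18.0, 3.1]]
    print(gingival_margin_midpoint(contour))   # -> the point with |z| = 7.2
```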
And step 412, determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along the first direction according to the reference feature coordinates to represent the boundary features of the model.
Referring to fig. 6, the first direction may be correspondingly defined as D1, and when different schemes for determining the direction are adopted, the first direction D1 may point to any one of the first reference direction D11, the first reference reverse direction D11', the second reference direction D12, the third reference direction D13, or the fourth reference direction D14, or point to other directions which are not explicitly mentioned in the foregoing but which are within the ability of those skilled in the art.
The boundary feature coordinates can be the model boundary points or the model boundary coordinate points, so that feature points corresponding to the target tooth position features and representing the model boundary features can be obtained through traversal retrieval, the relative position relationship between the target tooth position and the model boundary can be formed by using the reference feature coordinates and the boundary feature coordinates, the abstract position relationship is quantized into a calculable and definite numerical value, and the model integrity judgment can be conveniently carried out by using the abstract position relationship.
And 413, calculating a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate to represent the position relation between the target tooth position and the model boundary.
The feature distance value may be the Euclidean distance between the reference feature coordinate and the boundary feature coordinate, or the length of the projection of the line connecting the two points in a certain direction; the present invention does not limit this. Quantifying the positional relationship between the target tooth position and the model boundary with the feature distance value makes it convenient to set the preset position condition and facilitates the computation in the subsequent judgment. Compared with fitting the positional relationship with other spatial indexes such as volume or area, this also avoids the loss of data credibility caused, during transformation and mapping, by the difference in dimensionality between the oral photographic image and the oral three-dimensional model.
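Both variants of the feature distance value can be written in a few lines, as in the sketch below; the coordinates and the direction are illustrative assumptions.

```python
import numpy as np

def euclidean_distance(ref, boundary):
    """Straight-line distance between the reference and boundary feature coordinates."""
    return float(np.linalg.norm(np.asarray(boundary, float) - np.asarray(ref, float)))

def projected_distance(ref, boundary, direction):
    """Length of the projection of the connecting line onto a given direction."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    return float(abs((np.asarray(boundary, float) - np.asarray(ref, float)) @ d))

if __name__ == "__main__":
    ref_11A = np.array([-4.0, 18.0, 7.0])        # gingival margin midpoint 11A (assumed)
    boundary_10A = np.array([-4.2, 18.5, 11.0])  # model boundary point 10A (assumed)
    d11 = np.array([0.0, 0.0, 1.0])              # first reference direction D11 (assumed)
    print(euclidean_distance(ref_11A, boundary_10A))
    print(projected_distance(ref_11A, boundary_10A, d11))
```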
Step 421, comparing the characteristic distance value with a distance integrity criterion value representing a preset position condition.
In other words, the technical scheme provided by the invention sets a distance integrity criterion value to represent the preset position condition. This point can be interpreted as that the distance integrity criterion value is a part of the preset position condition, and the invention does not exclude setting integrity criterion values of multiple dimensions under the preset position condition, so as to comprehensively judge whether the position relationship between the target tooth position and the model boundary satisfies the preset position relationship from multiple dimensions. The integrity criterion may be replaced or substituted according to the type of the target tooth position and the target intraoral tissue, for example, when determining the protrusion of the jaw (corresponding to the incisor tooth positions of the upper jaw and the incisor tooth positions of the lower jaw) or the arc of the dental arch (corresponding to all the tooth positions of the upper jaw or all the tooth positions of the lower jaw), an arc integrity criterion is set.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
As for the association between the integrity condition and the comparison result: when only the completeness of the intraoral tissue region corresponding to a single target tooth position needs to be judged, the comparison result can directly serve as a sufficient condition for deciding whether the integrity condition is met; when the intraoral tissue regions corresponding to multiple target tooth positions need to be judged using their respective feature distance values, an integrity condition needs to be set over all the comparison results, so as to measure whether the intraoral tissue regions corresponding to the multiple target tooth positions, taken as a whole, meet the expected integrity requirement.
As shown in fig. 6, when the maxillary left central incisor, maxillary left lateral incisor and maxillary left canine positions are used as the target tooth positions, the comparison results corresponding to all three may fail the preset position condition, and the intraoral tissue regions corresponding to these three tooth positions may be judged incomplete. When the maxillary right central incisor, maxillary right lateral incisor and maxillary right canine positions are used as the target tooth positions and only the comparison result for the maxillary right lateral incisor fails the preset position condition, then, if the preset integrity condition is set loosely, the intraoral tissue region corresponding to these three tooth positions as a whole may still be considered to meet the integrity condition. It should be understood that the above setting of the preset integrity condition focuses on the number of positions failing the preset position condition; it is of course also possible to measure whether the preset integrity condition is satisfied according to the differences between the feature distance values and the integrity criterion value, for example by calculating indexes such as their mean and variance.
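The aggregation just described can be sketched as follows. The criterion value, the allowed number of failures and the sample feature distance values (keyed by illustrative FDI tooth numbers) are assumptions chosen only to mirror the left/right example above.

```python
def position_ok(feature_distance_mm: float, criterion_mm: float) -> bool:
    # The model boundary should lie at least `criterion_mm` beyond the gingival margin midpoint.
    return feature_distance_mm >= criterion_mm

def region_complete(feature_distances, criterion_mm, max_failures=0):
    """Judge the intraoral tissue region as a whole from per-tooth comparison results."""
    failures = sum(1 for d in feature_distances if not position_ok(d, criterion_mm))
    return failures <= max_failures

if __name__ == "__main__":
    # Feature distance values (mm) for maxillary left and right anterior tooth positions.
    left = {"21": 3.1, "22": 2.0, "23": 1.7}      # all three fall short of the criterion
    right = {"11": 6.2, "12": 4.9, "13": 5.8}     # only the lateral incisor falls short
    print(region_complete(left.values(), criterion_mm=5.0))                   # False
    print(region_complete(right.values(), criterion_mm=5.0, max_failures=1))  # True
```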
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
The first embodiment provided by the invention can simplify the expression of the position relation, the abstract content is embodied as the distance between two coordinates, and the integrity of the oral cavity three-dimensional model can be judged based on the simple coordinate relation, so that the judgment of complex abstract content is realized on the basis of simplifying the algorithm logic.
Preferably, the "calculating the reference feature coordinates of the target tooth position based on the preset feature recognition rule" specifically includes: and determining and taking the coordinates of the gingival margin midpoint of the target tooth site as the reference feature coordinates of the target tooth site. Because the gingival margin midpoint is positioned near the position, which is farthest away from the occlusal plane, on the dental crown of the target dental site, the relative distance from the gingival margin midpoint to other tissues in the oral cavity on the oral cavity three-dimensional model is relatively shorter, and the number of traversing space feature points on the oral cavity three-dimensional model can be reduced by taking the coordinate of the point as a reference feature coordinate, so that the operation speed is increased.
In a specific example of this first embodiment, the reference feature coordinates are located in a predetermined model coordinate system whose origin lies in the occlusal plane. On this basis, as shown in figs. 7 and 8, step 411 may specifically include the following steps. It should be noted that steps 410 to 43 are also included in this specific example but are not described further below.
And step 51, determining all spatial points on the crown of the target tooth position on the labial surface side to form a spatial point set.
The oral cavity three-dimensional model is externally presented as a three-dimensional figure composed of a plurality of surfaces, but in essence it is formed by fitting a plurality of spatial points with computer software. On this basis, the oral cavity three-dimensional model can be analyzed at a set density to obtain point cloud data, or point set data, distributed on the model, namely the spatial point set.
It should be noted that the "determining all the spatial points on the target dental crown on the labial surface side" may be interpreted as at least being able to confirm the spatial points on the target dental crown. On one hand, the step 51 may be to analyze the point cloud data of the oral three-dimensional model according to a preset density, and perform the following steps 52 to 56 to extract and analyze the spatial point set corresponding to the target tooth position; on the other hand, before the step 51, a neural network algorithm may be invoked or otherwise a region of interest on the oral three-dimensional model is selected, for example, a region where the target tooth position or the dentition as a whole is located is selected, and the spatial point set can also be obtained through analysis, and the algorithm complexity is reduced.
Whether all spatial points on the labial surface side are selected depends on the needs of the oral cavity feature data; this embodiment mainly focuses on features of the intraoral tissues on the labial side of the dentition, such as the vestibular sulcus, the maxillofacial protrusion amplitude and the labial frenulum. For molars, the labial surface may also be interpreted as the buccal surface. Of course, in other embodiments, for example when attention needs to be paid to the alveolar bone development condition or to intraoral tissues such as the hard palate and soft palate, all spatial points on the crown of the target tooth position on the lingual or palatal side may instead be calculated based on the preset feature recognition rule to form the spatial point set.
And step 52, selecting the spatial point with the minimum coordinate value in the spatial point set as the pole to establish a polar coordinate system, and arranging the other spatial points in ascending order of polar angle and polar radius to form a feature traversal sequence.
Referring to figs. 6 and 10, the above technical solutions that take the reference feature coordinates and the boundary feature coordinates as operation objects all assume that the coordinates lie in a preset model coordinate system. The model coordinate system may be a number axis, a planar rectangular coordinate system, a spatial rectangular coordinate system or the like; preferably, it includes at least two coordinate axes so as to reflect the relative positional relationship between the reference feature coordinates and the boundary feature coordinates. So that the upper jaw and the lower jaw, and preferably also the left and right sides of the dentition, are mutually symmetric, the origin of the model coordinate system may be set in the occlusal plane, preferably at the intersection of the occlusal plane and the dental midline. Of course, in other embodiments the origin may be set at the global reference center point C1. On this basis, the spatial point with the minimum coordinate value is, among all the spatial points on the labial surface side of the crown of the target tooth position, at least the one characterized as closest to the occlusal plane. This spatial point can then be regarded as lying on the margin of the crown of the target tooth position, that is, as a point on the crown contour.
For example, the model coordinate system may be a planar rectangular coordinate system including a first coordinate axis Rx and a third coordinate axis Rz, or a spatial rectangular coordinate system further including a second coordinate axis Ry, with the origin Ro located in the occlusal plane (in fig. 6 the model coordinate system is drawn outside the oral cavity three-dimensional model to keep the drawing clear). As can be seen from the enlarged partial view of fig. 10, when the first tooth position 11a is selected as the target tooth position, the origin Ro is located on the incisal-end side of the first tooth position 11a; when the first tooth position 11a is the maxillary left central incisor and the origin Ro is preferably the intersection of the occlusal plane and the dental midline, the origin Ro is located on the mesial-surface side of the first tooth position 11a. On this basis, a model coordinate system can be established in which the first coordinate axis Rx extends in the width direction of the first tooth position 11a and the third coordinate axis Rz extends in the length direction of the first tooth position 11a (that is, along the long axis of the tooth body or, particularly for central incisors, along the dental midline).
The "minimum coordinate value" may refer to a minimum coordinate value in the first coordinate axis Rx direction, a minimum coordinate value in the third coordinate axis Rz direction, a minimum sum of the coordinate value in the first coordinate axis Rx direction and the coordinate value in the third coordinate axis Rz direction, or a minimum other parameter generated after the two coordinate values are subjected to different weighting or other operations. Based on the establishment of the model coordinate system, after the coordinate values in the directions of the first coordinate axis Rx and the third coordinate axis Rz are considered comprehensively, the vertex of the approximate incisor angle on the dental crown of the target dental site is usually selected as the pole, and particularly, the vertex of the central incisor is selected as the pole. When only the coordinate values in the direction of the third coordinate axis Rz are considered, the fifth spatial coordinate point m5 may be selected as the pole. In particular, the point m0 in fig. 10 can also be chosen as the pole when considering the special case that the first tooth site 11a is occluded. To illustrate the general applicability of the invention, we will describe below in terms of the special case of occlusion being considered in the selection of poles, but not in terms of occlusion being considered in the selection of other spatial points.
A polar coordinate system is established with m0 as the pole; its horizontal axis may be parallel to the first coordinate axis Rx, forming a first polar coordinate horizontal axis Px, and its vertical axis may be parallel to the third coordinate axis Rz. The horizontal and vertical axes of the polar coordinate system may share the positive directions of the first coordinate axis Rx and the third coordinate axis Rz; that is, the first polar coordinate horizontal axis Px extends from the mesial surface toward the distal surface of the crown of the target tooth position, and the vertical axis of the polar coordinate system extends from the incisal end of the crown toward the gingiva.
"Arranged in ascending order of polar angle and polar radius" preferably means that the spatial points are sorted primarily by ascending polar angle and, when two spatial points share the same polar angle, secondarily by ascending polar radius. For example, the sixth spatial point m6 and the seventh spatial point m7 have the same polar angle, but since the polar radius of m6 is shorter than that of m7, m6 is arranged before m7. As another example, the fourth spatial point m4 has a smaller polar radius than the third spatial point m3, but since the polar angle of m4 is larger than that of m3, m3 is arranged before m4. A feature traversal sequence can thus be formed that contains, in order, at least the first spatial point m1, the second spatial point m2, the third spatial point m3, the fourth spatial point m4, the fifth spatial point m5, the sixth spatial point m6 and the seventh spatial point m7, which may be denoted as [m1, m2, m3, m4, m5, m6, m7].
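A minimal sketch of step 52 follows, assuming the spatial points have already been reduced to (Rx, Rz) pairs on the labial side of the crown and assuming one particular reading of "minimum coordinate value" for the pole; all names are illustrative.

```python
# Sketch of step 52: build the feature traversal sequence by sorting the
# spatial points around the pole in ascending order of polar angle, breaking
# ties by polar radius. The pole choice is one of the options of step 52.
import math

def feature_traversal_sequence(points):
    pole = min(points, key=lambda p: (p[1], p[0]))       # smallest Rz, then Rx
    rest = [p for p in points if p != pole]

    def polar_key(p):
        dx, dz = p[0] - pole[0], p[1] - pole[1]
        return (math.atan2(dz, dx), math.hypot(dx, dz))  # (polar angle, polar radius)

    return pole, sorted(rest, key=polar_key)

pole, sequence = feature_traversal_sequence(
    [(3.0, 0.2), (1.0, 0.0), (2.5, 1.0), (2.0, 2.0), (1.2, 3.0)])
# `sequence` plays the role of [m1, m2, ...] in the description above.
```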
And step 53, extracting the coordinates of the first two spatial points in the feature traversal sequence, sequentially calculating a first polar coordinate vector and a second polar coordinate vector with the pole as the starting point, calculating the cross product of the first polar coordinate vector and the second polar coordinate vector, and judging whether the cross product is less than 0.
The first polar coordinate vector may be represented as the vector pointing from the pole m0 to the first spatial point m1, the second polar coordinate vector as the vector pointing from the pole m0 to the second spatial point m2, and the cross product of the two vectors as CP(1).
If the cross product CP(1) < 0, the rotation from segment m0m1 to segment m0m2 is regarded as counter-clockwise; in that case it can be judged that the first spatial point m1 and the second spatial point m2 may both lie on the margin of the crown of the target tooth position, that is, both belong to the convex hull reference points of the convex hull contour. If instead CP(1) > 0, the rotation from segment m0m1 to segment m0m2 is regarded as clockwise, and it must be further checked whether the first spatial point m1 and the second spatial point m2 include a convex hull reference point.
If yes, jumping to step 54A and updating the starting point to the first spatial point corresponding to the first polar coordinate vector;
if not, jumping to step 54B and updating the starting point to the second spatial point corresponding to the second polar coordinate vector.
When the counter-clockwise rotation condition is satisfied, the first spatial point m1 can be judged to be closer to the pole m0 than the second spatial point m2, and the arrangement of m1 and m2 relative to the pole m0 conforms to the arrangement of convex hull reference points on the convex hull contour data; m1 is therefore taken as the next starting point, and the judgment continues as to whether m2 and the subsequent spatial points fall within the convex hull contour data. When the counter-clockwise rotation condition is not satisfied, it can be judged that an error exists in a step such as point selection and that m1 does not belong to the convex hull reference points of the convex hull contour data, so the traversal continues with m2 as the next starting point.
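The turn test underlying steps 53 to 54B can be sketched as follows. Which sign counts as counter-clockwise depends on the axis orientation and operand order adopted in step 53, so the comparison against 0 is written here as this embodiment describes it rather than as a universal rule; the example points are assumed values.

```python
# Turn test used in steps 53 to 54B; this embodiment treats a negative cross
# product as the condition under which both candidate points are kept.
def cross_product(start, p1, p2):
    # Cross product of the vectors start->p1 and start->p2 in the (Rx, Rz) plane.
    (ox, oz), (x1, z1), (x2, z2) = start, p1, p2
    return (x1 - ox) * (z2 - oz) - (z1 - oz) * (x2 - ox)

m0, m1, m2 = (1.0, 0.0), (3.0, 0.2), (2.5, 1.0)     # illustrative points
cp1 = cross_product(m0, m1, m2)
starting_point = m1 if cp1 < 0 else m2               # steps 54A / 54B
```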
And step 55, traversing all the remaining spatial points after the second spatial point in the feature traversal sequence, selectively updating the starting point according to the cross products between the polar coordinate vectors formed by each spatial point and the starting point, and determining at least two spatial points that meet the judgment condition as convex hull reference points.
After the starting point following the pole m0 has been selected, the cross product judgment may be continued either with the second spatial point m2 and all the spatial points after it as objects, or with only the spatial points after the second spatial point m2 as objects, finally yielding at least two convex hull reference points that satisfy the condition and can be considered to fall into the convex hull contour data.
And step 56, fitting convex hull contour data of the target tooth position according to the convex hull reference points and the pole, and calculating the reference feature coordinates of the target tooth position according to the convex hull contour data.
The at least two convex hull reference points together with the pole m0 constitute convex hull contour data that can enclose, or be fitted into, a convex hull; the larger the number of convex hull reference points, the more closely the enclosed convex hull fits the crown contour of the target tooth position. On this basis, preferably, when the gingival margin midpoint coordinates of the crown of the target tooth position are selected as the reference feature coordinates, the convex hull reference point or convex hull contour fitting point with the largest coordinate value on the first coordinate axis Rx may be selected as the reference feature coordinates. The convex hull contour fitting points can be obtained by the following steps: interpolating and fitting a convex hull contour curve from the convex hull contour data consisting of the pole and the convex hull reference points, the convex hull contour curve then comprising the convex hull reference points and the convex hull contour fitting points obtained by interpolation. For example, when the first spatial point m1 and the second spatial point m2 are both convex hull reference points, a first convex hull contour fitting point may be interpolated between the pole m0 and m1, and/or a second convex hull contour fitting point may be interpolated between m1 and m2, so as to obtain a smoother convex hull contour curve.
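A possible sketch of step 56 follows, using simple linear interpolation between neighbouring convex hull reference points to generate convex hull contour fitting points (a spline could be substituted for a smoother curve); selecting the point with the largest Rx value as the reference feature coordinate mirrors the preferred option above. Function names and the sampling density are assumptions.

```python
# Sketch of step 56: interpolate convex hull contour fitting points and pick
# the point with the largest Rx value as the reference feature coordinate.
import numpy as np

def fit_contour(hull_points, samples_per_segment=5):
    # hull_points: ordered (Rx, Rz) convex hull reference points, pole first.
    contour = []
    for (x0, z0), (x1, z1) in zip(hull_points, hull_points[1:]):
        t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
        # Linear interpolation between neighbouring reference points.
        contour.extend(zip(x0 + t * (x1 - x0), z0 + t * (z1 - z0)))
    contour.append(hull_points[-1])
    return contour

def reference_feature_coordinate(contour):
    # Point with the maximum coordinate value on the first coordinate axis Rx.
    return max(contour, key=lambda p: p[0])
```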
As shown in figs. 7, 8 and 9, as another specific example independent of the one above, or as a preferred implementation of step 55 in the above specific example, step 55 may further include the following steps. It can be understood that steps 410 to 43 in fig. 7, combined with the above description, form one technical solution; steps 51 to 56 in fig. 8, as part of step 411 in fig. 7, combined with the above description, form another embodiment; and steps 551 to 553 in fig. 9, as part of step 55 in fig. 8 and further of step 411 in fig. 7, combined with the above description, form yet another embodiment. It should be noted that steps 410 to 43 and steps 51 to 56 are included in this specific example but are not described further below.
And step 551, extracting the third spatial point located after the second spatial point in the feature traversal sequence, sequentially calculating a reference polar coordinate vector between the starting point and the reference spatial point (whichever of the first and second spatial points is not the starting point) and a third polar coordinate vector between the starting point and the third spatial point, calculating the cross product of the reference polar coordinate vector and the third polar coordinate vector, and judging whether the cross product is less than 0.
As shown in figs. 6 and 10, for example, when the cross product CP(1) of the first polar coordinate vector (from the pole m0 to the first spatial point m1) and the second polar coordinate vector (from the pole m0 to the second spatial point m2) is judged to be less than 0, the first spatial point m1 is selected as the starting point after the pole m0 and the second spatial point m2 is defined as the reference spatial point. At this time, the reference polar coordinate vector (from m1 to m2) and the third polar coordinate vector (from m1 to m3) are further calculated, their cross product CP(2) is evaluated, and whether CP(2) is less than 0 is judged, thereby determining the rotation direction from segment m1m2 to segment m1m3.
If yes, jumping to step 552A: updating the starting point to the reference spatial point and determining that the reference spatial point is a convex hull reference point.
If not, jumping to step 552B: not updating the starting point, deleting the reference spatial point, taking the third spatial point as the new reference spatial point, and, according to the cross product of the new reference polar coordinate vector and the polar coordinate vector formed by the starting point and the spatial point following the new reference spatial point, selectively updating the starting point and determining the convex hull reference points.
If the cross product CP(2) < 0, the second spatial point m2 serving as the reference spatial point is determined to be the new starting point after the first spatial point m1, and m2 is determined to be one of the convex hull reference points. If the cross product CP(2) > 0, the second spatial point m2 serving as the reference spatial point is deleted; the third polar coordinate vector (from m1 to m3) is then taken with the first spatial point m1 as the starting point, its cross product CP with the polar coordinate vector formed by the first spatial point m1 and the fourth spatial point m4 is iterated, and whether to update the starting point and the convex hull reference points is decided according to the subsequent cross product judgment.
And step 553, repeating the iteration until all the spatial points in the feature traversal sequence have been judged, obtaining at least two convex hull reference points.
The implementation for the fourth spatial point m4, the fifth spatial point m5, the sixth spatial point m6 and the seventh spatial point m7 is described below. After step 552, taking the situation shown in figs. 6 and 10 as an example, the second spatial point m2 is determined as the starting point after the first spatial point m1. On this basis, the third spatial point m3 and the fourth spatial point m4 are extracted to form a fourth polar coordinate vector (from the starting point m2 to m3) and a fifth polar coordinate vector (from m2 to m4) respectively; with the third spatial point m3 as the reference spatial point, the corresponding cross product CP(3) is calculated and its relation to 0 is judged. Since CP(3) < 0, the third spatial point m3 is updated to be the new starting point and is determined to be a convex hull reference point.
The fourth spatial point m4 and the fifth spatial point m5 are then extracted to form a sixth polar coordinate vector (from the starting point m3 to m4) and a seventh polar coordinate vector (from m3 to m5) respectively; with the fourth spatial point m4 as the reference spatial point, the corresponding cross product CP(4) is calculated and its relation to 0 is judged. Since CP(4) > 0, the starting point is not updated, the fourth spatial point m4 serving as the reference spatial point is deleted, the fifth spatial point m5 is taken as the new reference spatial point, and the sixth spatial point m6, the spatial point following m5, is extracted to form an eighth polar coordinate vector (from m3 to m6). According to the cross product CP(5) of the seventh polar coordinate vector, now serving as the new reference polar coordinate vector, and the eighth polar coordinate vector, and its relation to 0, it is decided whether to update the starting point or determine a convex hull reference point.
Since CP(5) < 0, the fifth spatial point m5 is updated to be the new starting point and is determined to be a convex hull reference point. The sixth spatial point m6 and the seventh spatial point m7 are then extracted to form a ninth polar coordinate vector (from the starting point m5 to m6) and a tenth polar coordinate vector (from m5 to m7) respectively; with the sixth spatial point m6 as the reference spatial point, the corresponding cross product CP(6) is calculated and its relation to 0 is judged. Since CP(6) > 0, the starting point is not updated, the sixth spatial point m6 serving as the reference spatial point is deleted, and the seventh spatial point m7 is taken as the new reference spatial point.
Through the above steps, at least the first spatial point m1, the second spatial point m2, the third spatial point m3 and the fifth spatial point m5 are extracted as convex hull reference points. The above technical solution is iterated repeatedly until all the spatial points in the feature traversal sequence have been judged and all the convex hull reference points for the target tooth position have been obtained, after which the corresponding convex hull contour and its data can be formed.
Although the above process is expressed as a nested loop, in actual operation it can be implemented with algorithms such as pushing spatial points onto a stack and popping them from the top of the stack; the invention does not exhaustively enumerate the specific operation modes.
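In line with the push/pop remark above, the traversal of steps 53 to 55 (and 551 to 553) can be sketched in a stack-based, Graham-scan-like form. This is a simplified reading rather than the embodiment's exact single-pass procedure, and the accepted turn sign is passed in so it can match the convention of step 53; all names are illustrative.

```python
# Stack-based, Graham-scan-like sketch of the convex hull traversal.
def convex_hull_reference_points(pole, sequence, keeps_hull=lambda cp: cp < 0):
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    stack = [pole]
    for point in sequence:                    # feature traversal sequence [m1, m2, ...]
        # Discard reference points that no longer satisfy the turn condition
        # once the next spatial point is taken into account (cf. step 552B).
        while len(stack) >= 2 and not keeps_hull(cross(stack[-2], stack[-1], point)):
            stack.pop()
        stack.append(point)                   # cf. step 552A
    return stack                              # pole plus retained convex hull reference points
```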
As shown in figs. 7 and 11, the present invention provides a second embodiment of the oral cavity structure information generating method based on the above embodiment, which refines step 412 and specifically provides steps 4121A and 4122A. It can be understood that the steps other than the refined steps will not be expanded upon below. This second embodiment specifically includes the following steps.
Step 410, acquiring three-dimensional model data of the oral cavity.
Step 411, determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to represent the target tooth position feature.
Step 4121A, taking the reference feature coordinates as the starting point, making a reference extension line along the first direction away from the target tooth position, and continuously analyzing whether the reference extension end of the reference extension line (that is, the end other than the reference feature coordinates) forms a projection end point on the maxillofacial surface of the oral cavity three-dimensional model.
The reference extension line may be an auxiliary line actually displayed on the oral cavity three-dimensional model, so that the process of retrieving and determining the boundary feature coordinates is presented to the medical practitioner for monitoring. Alternatively, the reference extension line may be interpreted as logic inherent in the traversal process when the system executes the oral cavity structure information generation method: the traversal of spatial points on the oral cavity three-dimensional model and the judgment of end-point formation proceed approximately along the direction of the reference extension line. In short, the reference extension line does not necessarily need to exist physically; it exists as inherent logic, such as an auxiliary line or a direction reference line, in the oral cavity structure information generation method provided by the invention.
Taking the fifth tooth position 11E as the target tooth position and the coordinates of the fifth gingival margin midpoint 11E corresponding to it as the reference feature coordinates: if the first direction is the extending direction of the long axis of the tooth body, a fifth standard reference extension line Le is generated accordingly, one end of which is the fifth gingival margin midpoint 11E while the projection of the other end on the oral cavity three-dimensional model 100 corresponds to the projection end point. Of course, when the first direction is instead defined as the extending direction of the long axis of the target tooth position itself, or as the projection of that long-axis direction onto the maxillofacial surface of the oral cavity three-dimensional model 100, the correspondingly generated reference extension line forms a certain inclination angle with the fifth standard reference extension line Le. All three solutions are feasible and can form the required projection end points; for the last solution, the first direction corresponds to a fifth projection direction D1e and a fifth reference extension line Le' can be constructed, and when the target tooth position lies in the lower jaw it corresponds to a fifth projection opposite direction D1e' opposite to the fifth projection direction D1e.
Similarly, taking the sixth tooth position 11F as the target tooth position, if the sixth tooth position 11F is taken as the target tooth position, the sixth gingival margin midpoint 11F is correspondingly included. When the first direction is the extending direction of the long axis of the tooth body, the sixth standard reference extending line Lf is generated through the above technical scheme. When the first direction is the projection of the extending direction of the long axis of the tooth body on the maxillofacial surface of the three-dimensional model 100 of the oral cavity, the sixth reference extending line Lf' is generated through the above technical scheme, and the corresponding first direction is the sixth projection direction D1f. When the target tooth position is in the lower jaw, the first direction corresponds to a sixth projection opposite direction D1f'.
It is to be understood that although the reference feature coordinates are determined as the gingival margin midpoint coordinates of the target tooth position in the following description of the present invention, the technical solutions for determining the reference feature coordinates as other coordinates on the target tooth position can be derived by those skilled in the art according to the foregoing description.
In step 4122A, when the reference extension end no longer forms a projection end point with the maxillofacial surface of the oral cavity three-dimensional model, the finally formed projection end point coordinate is determined as a boundary feature coordinate.
As any reference extension line gradually extends away from the target tooth position, it continuously generates projection end points on the oral cavity three-dimensional model 100; once the reference extension line extends beyond the boundary of the oral cavity three-dimensional model 100, it can no longer generate projection end points with the model. On this basis, according to the formation of projection end points, when no new projection end point is formed any more, the last-formed projection end point is considered to already lie on the boundary of the oral cavity three-dimensional model 100.
For example, the fifth reference extension line Le' or the fifth standard reference extension line Le extends gradually from the fifth gingival margin midpoint 11E and continuously generates projection end points on the oral cavity three-dimensional model 100, the last of which is the corresponding fifth model boundary point 10E. Likewise, the sixth reference extension line Lf' or the sixth standard reference extension line Lf extends gradually from the sixth gingival margin midpoint 11F, and the last projection end point generated on the oral cavity three-dimensional model 100 is the corresponding sixth model boundary point 10F.
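One way to realise steps 4121A and 4122A is sketched below, assuming the oral cavity three-dimensional model is available as a point cloud and assuming a projection end point is considered to "form" while the extension end stays within a tolerance of the model surface; the representation, step size and tolerance are all assumptions rather than part of this disclosure.

```python
# Sketch of steps 4121A / 4122A over an (N, 3) point-cloud representation.
import numpy as np

def boundary_feature_coordinate(reference_coord, first_direction, model_points,
                                step=0.2, tolerance=1.0, max_steps=500):
    direction = np.asarray(first_direction, dtype=float)
    direction /= np.linalg.norm(direction)
    position = np.asarray(reference_coord, dtype=float)
    last_projection = None

    for _ in range(max_steps):
        position = position + step * direction          # extend the reference line
        distances = np.linalg.norm(model_points - position, axis=1)
        nearest = int(np.argmin(distances))
        if distances[nearest] > tolerance:
            # No projection end point forms any more: the previous one lies
            # on the model boundary and gives the boundary feature coordinate.
            break
        last_projection = model_points[nearest]
    return last_projection
```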
And 413, calculating a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate to represent the position relation between the target tooth position and the model boundary.
Step 421, comparing the characteristic distance value with a distance integrity criterion value representing a preset position condition.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
Therefore, based on the above technical solution, the boundary position of the model can be determined with a simple algorithm, avoiding the excessive data volume, long computation time and difficult fitting that come with analyzing point cloud data; moreover, the model boundary feature corresponding to the tooth position feature can be actively established on the basis of that tooth position feature, which facilitates the subsequent judgment of integrity and the determination of the missing position.
In one application scenario, preferably, the intraoral tissue region includes the vestibular sulcus, the oral cavity feature data includes vestibular sulcus height data, the first direction is the projection of the extending direction of the long axis of the tooth body on the maxillofacial surface of the oral cavity three-dimensional model, and the target tooth positions include maxillary incisor tooth positions and mandibular incisor tooth positions. In this way, the corresponding model boundary features can be selected according to the anatomical features of the vestibular sulcus: the highest point of the upper vestibular sulcus usually lies above the maxillary central or lateral incisor along the long-axis direction of the tooth body, and the lowest point of the lower vestibular sulcus usually lies below the mandibular central or lateral incisor along the long-axis direction of the tooth body, so that it can be judged whether the oral cavity three-dimensional model includes the region necessary for calculating the vestibular sulcus height data. It is worth emphasizing that the incisor tooth positions include both the central incisor and the lateral incisor positions, covering the maxillary left, mandibular left, maxillary right and mandibular right sides.
In another application scenario, preferably, the intraoral tissue region includes the vestibular sulcus, or the tooth root ridge, or both. The oral cavity feature data includes dental arch width data, the first direction is the projection of the extending direction of the long axis of the tooth body on the maxillofacial surface of the oral cavity three-dimensional model, and the target tooth positions include mutually corresponding left and right molar tooth positions of a first maxillofacial region, where the first maxillofacial region is the upper jaw, the lower jaw, or both. In this way, with respect to the anatomical features of the entire dental arch, the model boundary features can be selected on the basis that the farthest point on the left side of the dental arch usually lies to the left of the long-axis projection direction of the maxillary (or mandibular) left second molar, and the farthest point on the right side usually lies to the right of the long-axis projection direction of the maxillary (or mandibular) right second molar, so that it can be judged whether the oral cavity three-dimensional model includes the region necessary for calculating the dental arch width data.
In yet another application scenario, preferably, the intraoral tissue region includes the labial frenulum, and the oral cavity feature data includes labial frenulum width data. On this basis, as shown in figs. 7 and 14, the present invention provides a third embodiment of the oral cavity structure information generation method based on the above embodiment, which refines steps 411 and 412 to suit this application scenario and specifically provides steps 411B, 4121B and 4122B. It can be understood that the steps other than the refined steps will not be expanded upon below. This third embodiment specifically includes the following steps.
Step 410, acquiring three-dimensional model data of the oral cavity.
And step 411B, determining the central incisor tooth position on the left side of the first maxillofacial region and the central incisor tooth position on the right side of the first maxillofacial region on the oral cavity three-dimensional model as target tooth positions, and calculating, based on a preset feature recognition rule, the reference feature coordinates of the target tooth positions and a plurality of relative feature coordinates lying between the two reference feature coordinates on the target tooth positions, so as to characterize the target tooth position features.
The first maxillofacial region is the upper jaw, the lower jaw, or both. Taking fig. 14 as an example with the upper jaw as the first maxillofacial region, the central incisor tooth position on the right side of the first maxillofacial region points to the fifth tooth position 11E and the central incisor tooth position on the left side points to the seventh tooth position 11G; the coordinates of the fifth gingival margin midpoint 11E may be used as the reference feature coordinates of the former tooth position, and the coordinates of the seventh gingival margin midpoint 11G as the reference feature coordinates of the latter. Since incisors located on the same maxillofacial region are arranged in sequence in the direction perpendicular to the dental midline or the long axis of the tooth body, a plurality of spatial points distributed on the target tooth positions necessarily lie between the fifth gingival margin midpoint 11E and the seventh gingival margin midpoint 11G, similar to the extraction of crown margin points such as the convex hull reference points. As a preferred embodiment, the coordinates of the spatial points that lie both between the fifth gingival margin midpoint 11E and the seventh gingival margin midpoint 11G and on the gingival margin of the fifth tooth position 11E or of the seventh tooth position 11G may be selected as the relative feature coordinates. It can be understood that the spatial points to which the relative feature coordinates point are also located on the target tooth positions, so the relative feature coordinates and the reference feature coordinates can jointly characterize the target tooth positions.
For example, the coordinates of the fifth gingival margin mark point 11Em located on the fifth tooth position 11E may be selected as relative feature coordinates. Of course, the spatial point to which a relative feature coordinate points is not limited to the fifth gingival margin mark point 11Em and may be any point on the crown, not only on the gingival margin.
And step 4121B, taking the reference feature coordinates and the relative feature coordinates respectively as starting points, making reference extension lines along the first direction away from the target tooth positions, and continuously analyzing whether the reference extension ends of the reference extension lines (that is, the ends other than the reference feature coordinates or the relative feature coordinates) form projection end points on the oral cavity three-dimensional model.
Here, the first direction is the projection of the extending direction of the long axis of the tooth body on the maxillofacial surface of the oral cavity three-dimensional model. On this basis, a fifth reference extension line Le' can be generated corresponding to the fifth tooth position 11E and the fifth gingival margin midpoint 11E, a seventh reference extension line Lg' can be generated corresponding to the seventh tooth position 11G and the seventh gingival margin midpoint 11G, and a fifth marked reference extension line Lem' can be generated, as one of the reference extension lines, corresponding to the fifth tooth position 11E and the fifth gingival margin mark point 11Em. In this example the target tooth positions corresponding to the reference extension lines lie in the upper jaw, so the first direction D1 points upward from the incisal edge of the central incisor tooth position; when the target tooth positions are selected as the mandibular central incisors, the extending direction of the corresponding reference extension lines may be the first opposite direction D1' (for example, any of the above-mentioned projection opposite directions).
In step 4122B, when the reference extension end no longer forms a projection end point with the oral three-dimensional model, the finally formed projection end point coordinate is determined as a boundary feature coordinate.
Projection end points are continuously generated on the oral cavity three-dimensional model 100 while the reference extension lines extend. Based on the fact that the boundary of the oral cavity three-dimensional model 100 has been reached once the above at least three reference extension lines no longer generate new projection end points with the model, a fifth model boundary point 10E corresponding to the fifth tooth position 11E, a seventh model boundary point 10G corresponding to the seventh tooth position 11G, and a fifth mark boundary point 10Em corresponding to the fifth gingival margin mark point 11Em on the fifth tooth position 11E can be determined, and the coordinates of these model boundary points and the mark boundary point are together taken as the boundary feature coordinates.
And 413, calculating a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate to represent the position relation between the target tooth position and the model boundary.
Step 421, comparing the characteristic distance value with a distance integrity criterion value representing a preset position condition.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
The principle of this technical solution is based on the anatomical features of the labial frenulum, which is generally located in the middle region above the two maxillary central incisors and in the middle region below the two mandibular central incisors; selecting the central incisor tooth positions as the target tooth positions therefore bounds the region in which the labial frenulum lies and increases the operation speed. Since the model boundary features corresponding to the relative feature coordinates are also obtained in the retrieval, whether the labial frenulum region is missing from the oral cavity three-dimensional model can further be judged from the positional relationship between the relative feature coordinates and the model boundary features. The technical solution for judging whether such a loss occurs can be the same oral cavity structure information generation method as that used to judge whether part of the vestibular sulcus is missing, and is not repeated here.
On the basis, the person skilled in the art can obtain the position relation between the target tooth position and the model boundary corresponding to different areas on the oral cavity three-dimensional model based on the anatomical features of different tissues in the oral cavity. The technical solutions corresponding to all intraoral tissues are not exhaustive in the present invention.
As shown in figs. 7 and 16, the present invention provides a fourth embodiment of the oral cavity structure information generation method based on the above embodiment, which adds steps 61 to 62 ahead of step 421A; steps 61 and 62 may be arranged at any position before step 421A, and the invention imposes no limit on this. The invention is described below taking, as an example, the fourth embodiment formed by arranging these preceding steps before step 410. It can be understood that the steps other than the preceding steps will not be expanded upon below. This fourth embodiment specifically includes the following steps.
Step 61, at least two sets of three-dimensional training model data and at least two sets of distance training data corresponding to the three-dimensional training model data are obtained.
The "preset position condition" may be directly obtained and set according to experiment and network data, or a more accurate and actual preset position condition may be obtained by using the technical solution provided in the fourth embodiment of the present invention.
In this embodiment, the preset position condition is preferably calculated from three-dimensional training model data that at least include the features corresponding to the complete intraoral tissue region. For example, when the generation of oral cavity structure information focuses on the vestibular sulcus, the three-dimensional training model data used to construct the preset position condition preferably include all vestibular sulcus features and tooth position features, while omissions are allowed at positions and forms such as the tooth root ridge. It should be noted that although the data used to formulate the preset position condition in this fourth embodiment are three-dimensional training model data, when only local features such as the labial frenulum or the maxillofacial protrusion amplitude are of concern, part or local intraoral photographic image data of the three-dimensional training model may be used instead. When planar image data are applied, steps of establishing a pixel size relationship and converting two-dimensional features into three-dimensional features need to be added.
Wherein the distance training data comprises a distance between conditional training coordinates on the three-dimensional training model corresponding to the reference feature coordinates and target tissue coordinates on the three-dimensional training model corresponding to the intraoral tissue region. A line connecting the conditional training coordinates and the target tissue coordinates extends along the first direction.
As shown in figs. 6 and 17, taking the vestibular sulcus region as the intraoral tissue region and the first tooth position 11A, corresponding to the maxillary left central incisor tooth position, as an example: in the actual oral cavity three-dimensional model 100, the reference feature coordinates corresponding to the first tooth position 11A point to the coordinates of the first gingival margin midpoint 11A; in the three-dimensional training model, the conditional training coordinates corresponding to the maxillary left central incisor tooth position point to the coordinates of the first training gingival margin midpoint 1121U, and the target tissue coordinates point to the coordinates of the first training vestibular sulcus bottom point 1321U. On this basis, the distance training data can be determined jointly from the coordinates of the first training gingival margin midpoint 1121U and the first training vestibular sulcus bottom point 1321U.
The correspondence between the above data is established on the basis of the first direction D1, with the first gingival margin midpoint 11A and the first training gingival margin midpoint 1121U respectively taken as references. Since a natural correspondence exists between gingival margin midpoints, the correspondence between the retrieved first gingival margin midpoint 11A and the first training gingival margin midpoint 1121U is established quickly and stably by traversing along the first direction D1.
For a single tooth position, the distance training data are a plurality of data formed across a plurality of sets of three-dimensional training model data. In other words, one piece of distance training data corresponding to a given tooth position may be obtained in the first three-dimensional training model, and another piece corresponding to the same tooth position may be obtained in the second three-dimensional training model.
Step 62, calculating the average distance data and the training distance standard deviation of the distance training data, and calculating the distance integrity criterion value as the difference between the average distance data and the product of the training distance standard deviation and a preset distribution probability coefficient.
The distribution probability coefficient points to a predetermined probability distribution, which may be a discrete distribution such as a geometric, binomial or Poisson distribution, or a continuous distribution such as a uniform, exponential or normal distribution. The distance integrity criterion value established in this way, used to represent the preset position condition, can estimate and cover to a greater extent the various situations that may occur on the oral cavity three-dimensional model, thereby reducing the probability of misjudgment.
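In its simplest reading, step 62 reduces to "mean minus coefficient times standard deviation"; a minimal sketch with illustrative names and numbers is:

```python
# Sketch of steps 61-62: distance integrity criterion from distance training data.
from statistics import mean, stdev

def distance_integrity_criterion(training_distances, probability_coefficient=2.0):
    # mean of the training distances minus coefficient * standard deviation
    return mean(training_distances) - probability_coefficient * stdev(training_distances)

# Sulcus-bottom distances (in mm, assumed) gathered from several training models.
zeta = distance_integrity_criterion([12.1, 11.4, 12.8, 11.9, 12.3])
```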
Step 410, obtaining oral cavity three-dimensional model data.
And 411, determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to represent the target tooth position feature.
And step 412, determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along the first direction according to the reference feature coordinates to represent the boundary features of the model.
And 413, calculating a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate to represent the position relation between the target tooth position and the model boundary.
Step 421, comparing the characteristic distance value with a distance integrity criterion value representing a preset position condition.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
In this way, an integrity criterion value with a high reference value is formed from three-dimensional training model data of high completeness, so that whether features are missing from the region corresponding to the target tooth position can be judged more reliably. Meanwhile, when implementing the fourth embodiment, other positional relationships and preset position conditions can be established correspondingly, by adjusting the first direction, between the actual oral cavity three-dimensional model and the three-dimensional training model, so that integrity judgments of the intraoral tissue region can also be completed for dimensions such as dental arch width, dental arch curvature, maxillofacial protrusion amplitude and labial frenulum width.
In one application scenario, preferably, the intraoral tissue region includes the vestibular sulcus, the oral cavity feature data includes vestibular sulcus height data, the target tooth positions include a maxillary first incisor tooth position and a mandibular second incisor tooth position, and the distance training data include a first sulcus bottom distance parameter corresponding to the maxillary first incisor tooth position and a second sulcus bottom distance parameter corresponding to the mandibular second incisor tooth position. On this basis, and building on the fourth embodiment described above, as shown in fig. 16, step 62 may specifically include the following steps as part of it.
Step 621A, calculating the first sulcus bottom average distance value of all first sulcus bottom distance parameters in all the distance training data and the second sulcus bottom average distance value of all second sulcus bottom distance parameters in all the distance training data, to obtain the average distance data.
Step 622A, calculating the first sulcus bottom distance standard deviation of all first sulcus bottom distance parameters in all the distance training data and the second sulcus bottom distance standard deviation of all second sulcus bottom distance parameters in all the distance training data, to obtain the training distance standard deviation.
Take as the object the three-dimensional training model data, including one set of distance training data, represented in fig. 17, with the maxillary left central incisor taken as the maxillary first incisor and the mandibular right lateral incisor as the mandibular second incisor. The first sulcus bottom distance parameter points to the distance between the first training gingival margin midpoint 1121U and the first training vestibular sulcus bottom point 1321U, which are distributed in sequence along the first direction D1 of the maxillary first incisor tooth position; the second sulcus bottom distance parameter points to the distance between the second training gingival margin midpoint 1142L and the second training vestibular sulcus bottom point 1342L, which are distributed in sequence along the first direction D1 of the mandibular second incisor tooth position.
On this basis, the distance training data of all the three-dimensional training model data (say n sets in total) can be integrated to obtain n first sulcus bottom distance parameters corresponding to the maxillary left central incisor and n second sulcus bottom distance parameters corresponding to the mandibular right lateral incisor. The averages of these can then be calculated as the first sulcus bottom average distance value and the second sulcus bottom average distance value to form the average distance data, and their standard deviations as the first sulcus bottom distance standard deviation and the second sulcus bottom distance standard deviation to form the training distance standard deviation.
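The aggregation of steps 621A and 622A can be sketched as follows, assuming each training model contributes one dictionary of sulcus bottom distance parameters (the keys are illustrative) and at least two training models are available:

```python
# Sketch of steps 621A / 622A: per-tooth-position mean and standard deviation.
from statistics import mean, stdev

def aggregate_distance_training_data(training_sets):
    average_distance_data, training_distance_std = {}, {}
    for key in training_sets[0]:
        values = [s[key] for s in training_sets]
        average_distance_data[key] = mean(values)   # e.g. first sulcus bottom average distance
        training_distance_std[key] = stdev(values)  # e.g. first sulcus bottom distance std dev
    return average_distance_data, training_distance_std

means, stds = aggregate_distance_training_data([
    {"first_sulcus_bottom": 12.1, "second_sulcus_bottom": 10.4},
    {"first_sulcus_bottom": 11.6, "second_sulcus_bottom": 10.9},
])
```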
In a preferred embodiment, the distance training data include sulcus bottom distance parameters for all eight incisor tooth positions: for the maxillary left central incisor tooth position, the distance between the first training gingival margin midpoint 1121U and the first training vestibular sulcus bottom point 1321U in fig. 17 along its first direction D1; for the maxillary left lateral incisor tooth position, the distance between the fifth training gingival margin midpoint 1122U and the fifth training vestibular sulcus bottom point 1322U along its first direction D1; for the maxillary right incisor tooth positions, the distances between the fourth training gingival margin midpoint 1111U and the fourth training vestibular sulcus bottom point 1311U, and between the third training gingival margin midpoint 1112U and the third training vestibular sulcus bottom point 1312U, each along its first direction D1; and, for each of the four mandibular incisor tooth positions, the distance between the corresponding training gingival margin midpoint and training vestibular sulcus bottom point in fig. 17 along its first direction D1, including the distance between the second training gingival margin midpoint 1142L and the second training vestibular sulcus bottom point 1342L. The first direction D1 corresponding to different target tooth positions may differ; for brevity these are not individually labeled here. A more accurate and credible distance integrity criterion value can therefore be calculated from the sulcus bottom distance parameters of the eight incisor tooth positions.
It is emphasized that a sulcus bottom distance parameter characterizes the distance between the conditional training coordinates of the gingival margin midpoint corresponding to an incisor tooth position and the target tissue coordinates of the vestibular sulcus feature point corresponding to that gingival margin midpoint. In other scenarios, therefore, the sulcus bottom distance parameters, or other parameters, of other target tooth positions may be calculated with the same logic.
In another application scenario, preferably, the intraoral tissue region includes the vestibular sulcus, or the tooth root ridge, or both; the oral cavity feature data includes dental arch width data; the target tooth positions include the distal molar tooth positions on both sides of the upper jaw and the distal molar tooth positions on both sides of the lower jaw; and the distance training data include an upper dental arch width parameter corresponding to the maxillary distal molar tooth positions and a lower dental arch width parameter corresponding to the mandibular distal molar tooth positions. On this basis, and building on the fourth embodiment described above, as shown in fig. 16, step 62 may specifically include the following steps as part of it.
And step 621B, calculating the upper average width value of all upper dental arch width parameters in all the distance training data and the lower average width value of all lower dental arch width parameters in all the distance training data, to obtain the average distance data.
And step 622B, calculating the upper width standard deviation of all upper dental arch width parameters in all the distance training data and the lower width standard deviation of all lower dental arch width parameters in all the distance training data, to obtain the training distance standard deviation.
The steps 621B and 622B correspond to the steps 621A and 622A, except that the dental arch width parameters are used as the calculation objects so as to judge the integrity of the vestibular sulcus or the tooth root bulge. The basic idea is consistent with the previous application scenario and is not repeated here.
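A minimal sketch of steps 621B and 622B, assuming the upper and lower dental arch width parameters from several sets of distance training data are already available as plain lists of millimetre values; the numbers below are purely illustrative, and the use of the population standard deviation is an assumption since the text does not specify which estimator is intended.

```python
import statistics

# Hypothetical upper and lower dental arch width parameters (mm) taken from
# several sets of distance training data; the values are invented.
upper_arch_widths = [52.1, 53.4, 51.8, 54.0]
lower_arch_widths = [47.6, 48.9, 47.1, 49.3]

upper_average_width = statistics.mean(upper_arch_widths)   # step 621B
lower_average_width = statistics.mean(lower_arch_widths)   # step 621B
upper_width_std = statistics.pstdev(upper_arch_widths)     # step 622B (population std assumed)
lower_width_std = statistics.pstdev(lower_arch_widths)     # step 622B (population std assumed)

average_distance_data = (upper_average_width, lower_average_width)
training_distance_std = (upper_width_std, lower_width_std)
print(average_distance_data, training_distance_std)
```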
The distal molar tooth positions on both sides of the upper jaw are preferably the maxillary left second molar tooth position and the maxillary right second molar tooth position; the distal molar tooth positions on both sides of the lower jaw are preferably the mandibular left second molar tooth position and the mandibular right second molar tooth position. Of course, in the case of missing teeth, supernumerary teeth, third molar eruption, or the like, the tooth position farthest from the tooth centerline may be selected as the "distal molar tooth position".
For any specific example or derivative technical solution in the fourth embodiment, it is further preferable that the distribution probability coefficient is 2.
For example, in an application scenario where the oral characteristic data includes vestibular sulcus height data, taking a calculated first sulcus-bottom average distance value μ21 and a calculated first sulcus-bottom distance standard deviation σ21 as the objects, then in a preferred embodiment where the distribution probability coefficient m = 2, the first distance integrity criterion value ζ21 corresponding to the first incisor tooth position of the upper jaw may be configured to at least satisfy:
ζ21=μ21-2*σ21。
In this way, the first characteristic distance value corresponding to the first incisor tooth position can be compared with the first distance integrity criterion value ζ21, thereby achieving the effect of "comparing the positional relationship with the preset positional condition corresponding to the positional relationship".
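The relation ζ21 = μ21 - 2*σ21 and the subsequent comparison can be expressed compactly as follows; the helper names and sample numbers are illustrative assumptions, while the "smaller than the criterion value means feature missing" rule follows the judgment described in steps 4211 and 4212A below.

```python
def distance_integrity_criterion(mean_distance, distance_std, m=2):
    # zeta = mu - m * sigma, as in the relation ζ21 = μ21 - 2*σ21 above
    return mean_distance - m * distance_std

def is_feature_missing(feature_distance_value, criterion_value):
    # A characteristic distance smaller than the criterion value marks a
    # feature missing position (the judgment of steps 4211 / 4212A below).
    return feature_distance_value < criterion_value

zeta_21 = distance_integrity_criterion(8.4, 1.1)          # illustrative μ21 = 8.4 mm, σ21 = 1.1 mm
print(round(zeta_21, 3), is_feature_missing(5.0, zeta_21))  # 6.2 True
```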
In a preferred embodiment, corresponding to step 413 described in any one of the technical solutions herein, for the step in which "the feature distance value between the reference feature coordinate and the boundary feature coordinate is calculated", it may be preferably configured to include: and calculating the distance values of the reference characteristic coordinates and the boundary characteristic coordinates in the extending direction of the long axis of the tooth body to obtain the characteristic distance values.
Corresponding to fig. 6, for example, the characteristic distance value corresponding to the first tooth site 11A may be a distance value between the coordinates of the first gingival margin midpoint 11A and the coordinates of the first model boundary point 10A in the first reference direction D11.
Based on this, in one embodiment, the feature distance value calculation step can be further simplified by establishing a special coordinate system. In this embodiment, the first direction D1 can be defined as a projection of an extending direction of a long axis of a tooth body on the three-dimensional model 100 of the oral cavity. Correspondingly, the boundary characteristic coordinates and the reference characteristic coordinates are located in a preset model coordinate system, the model coordinate system at least comprises a first coordinate axis Rx and a third coordinate axis Rz, the third coordinate axis Rz extends along the extension direction of the tooth central line, and the first coordinate axis Rx extends along the extension direction of the central incisor width. At this time, the "calculating a distance value of the reference feature coordinate and the boundary feature coordinate in the extending direction of the tooth centerline" may be specifically configured to include: and calculating a coordinate difference value of the reference feature coordinate and the boundary feature coordinate on the third coordinate axis. For example, the difference between the coordinates of the first gingival margin midpoint 11A and the coordinates of the first model boundary point 10A on the third coordinate axis Rz is calculated.
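A minimal sketch of this simplified calculation, assuming the reference feature coordinate and the boundary feature coordinate are given as (Rx, Ry, Rz) tuples in the model coordinate system; the concrete coordinate values are illustrative only.

```python
def feature_distance_value(reference_feature_coord, boundary_feature_coord, axis=2):
    # Coordinate difference on the third coordinate axis Rz (assumed to be index 2).
    return abs(reference_feature_coord[axis] - boundary_feature_coord[axis])

gingival_margin_midpoint = (1.3, 4.0, 6.2)   # illustrative coordinates of a point such as 11A
model_boundary_point = (1.1, 4.4, 13.8)      # illustrative coordinates of a point such as 10A
print(round(feature_distance_value(gingival_margin_midpoint, model_boundary_point), 3))  # 7.6
```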
As shown in fig. 7 and 18, the present invention provides a fifth example of the oral cavity structure information generating method based on the above embodiment, which details the step 421, specifically, the step 4211, the step 4212A, and the step 4212B, and details the step 422, and provides the step 4220 for the step 4212A and the step 4212B. It is understood that the following description will not be expanded for the other steps than the refining step. Similarly, since steps 4211 to 4220 are attributed to steps 421 and 422 in order, the description of steps 421 and 422 themselves will be omitted below. This fifth embodiment specifically includes the following steps.
Step 410, obtaining oral cavity three-dimensional model data.
Step 411, determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to represent the target tooth position feature.
And step 412, determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along the first direction according to the reference feature coordinates to represent the boundary features of the model.
And 413, calculating a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate to represent the position relation between the target tooth position and the model boundary.
Step 4211, determine whether the characteristic distance value is less than the distance integrity criterion value.
If yes, jumping to a step 4212A, and judging that the position in the oral cavity three-dimensional model pointed by the characteristic distance value is a characteristic missing position.
If not, the process goes to step 4212B, and the comparison of the next characteristic distance value with the distance integrity criterion value is continued.
Step 4220, determining whether the intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model satisfies an integrity condition according to the number of the feature missing positions.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral characteristic data corresponding to an intraoral tissue region in the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral three-dimensional model data, and reconstructing a part representing the oral characteristic data on the oral three-dimensional model data to obtain oral structure information.
Based on the above-described fifth embodiment, it is possible to previously determine the feature missing positions, and evaluate whether the oral three-dimensional model falls within the range in which the integrity condition is not satisfied, based on the number of feature missing positions.
As shown in fig. 6, continuing with the first characteristic distance value of the first tooth position 11a (i.e., the first incisor tooth position, or the left-side middle incisor tooth position of the upper jaw) and the definition of the first distance integrity criterion value ζ21: when the first characteristic distance value is smaller than ζ21, the first tooth position 11a is determined to be a feature missing position. Continuing with the fourth characteristic distance value of the fourth tooth position 11d (i.e., the sixth incisor tooth position, or the central incisor tooth position on the right side of the lower jaw) and the fourth distance integrity criterion value ζ41: when the fourth characteristic distance value is greater than or equal to ζ41, it is determined that the fourth tooth position 11d is not a feature missing position, and the next target tooth position is examined.
Fig. 6 shows the distribution of the intraoral tissues at the eight incisor tooth positions of the upper jaw and the lower jaw in total. It can be seen that the positions corresponding to the maxillary left lateral incisor, the maxillary left middle incisor and the maxillary right lateral incisor are feature missing positions, so the "number of feature missing positions" here is 3. Based on this, a fixed threshold or a dynamic threshold can be set for this number, so as to further judge whether the intraoral tissue region, corresponding to the vestibular sulcus height, to which the eight incisor tooth positions point meets the integrity condition.
As to the specific judgment rule of the integrity condition provided in step 4220, in a specific example based on this embodiment, the following steps may be specifically configured. It is to be understood that the determination rule is not limited to the fifth embodiment, and those skilled in the art can combine the determination rule with the step 42, the step 422 or the step 4220 in other embodiments, examples or specific examples to form the preferred technical solution of the corresponding embodiments.
Step 4221, determining a numerical relationship between the number of the feature missing positions and the allowable error number value.
If the number of the feature missing positions is greater than or equal to the allowable error number value, step 4222A is skipped to determine that the intra-oral tissue region corresponding to the target tooth position in the three-dimensional model of the oral cavity does not satisfy the integrity condition.
If the number of the feature missing positions is smaller than the allowable error number value, step 4222B is executed to determine that the intra-oral tissue region corresponding to the target tooth position in the three-dimensional model of the oral cavity satisfies the integrity condition.
Therefore, whether the intraoral tissue region meets the integrity condition can be directly judged according to the comparison between the preset allowable error quantity value and the quantity of the feature missing positions, and the abstract concept of the integrity condition is converted into data content which can be obtained through actual operation.
For any technical solution that includes the above specific example, it is further preferable that the allowable error number value is an integer equal to or greater than one half of the number of target tooth positions. In this way, the allowable error number value is set as a threshold that changes dynamically with the number of target tooth positions, which can reflect the proportion of the feature missing positions among all the target tooth positions for which characteristic distance values have been calculated, and macroscopically controls the requirement on the integrity of the oral cavity three-dimensional model.
For example, if eight tooth positions, which are total of the upper jaw left and right middle incisor tooth positions, the upper jaw left and right lateral incisor tooth positions, the lower jaw left and right middle incisor tooth positions, and the lower jaw left and right lateral incisor tooth positions, are selected as the target tooth positions, when the number of the characteristic missing positions is greater than or equal to 4, it is determined that the target tooth positions do not satisfy the integrity condition in the oral tissue region of the oral cavity three-dimensional model corresponding to the target tooth positions. As described above, in the three-dimensional model of the oral cavity shown in fig. 6, since there are 3 feature missing positions at the maxillary left lateral incisors, the maxillary left middle incisors and the maxillary right lateral incisors, respectively, which are less than one-half of the total number of target sites, it can be considered that the completeness condition is satisfied.
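The counting logic of steps 4211 to 4222 and the dynamic allowable error number value can be sketched as follows; rounding half of the number of target tooth positions up to an integer is an assumption consistent with, but not mandated by, the eight-tooth example above, and all numeric values are illustrative.

```python
import math

def integrity_satisfied(feature_distance_values, criterion_values):
    """Count feature missing positions (distance below its criterion value) and
    compare against an allowable error value of half the number of target teeth."""
    missing = sum(1 for d, zeta in zip(feature_distance_values, criterion_values) if d < zeta)
    allowable_errors = math.ceil(len(feature_distance_values) / 2)   # dynamic threshold
    return missing < allowable_errors, missing

# Illustrative values for the eight incisor tooth positions (mm):
distances = [5.0, 5.2, 8.9, 9.1, 5.4, 8.8, 9.0, 9.2]
criteria = [6.2] * 8
print(integrity_satisfied(distances, criteria))   # 3 missing out of 8 -> condition satisfied
```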
Preferably, after it is judged that the intraoral tissue region of the oral cavity three-dimensional model data corresponding to the target tooth positions meets the integrity condition, feature extraction and fitting can further be performed on the oral cavity three-dimensional model itself. For feature extraction, because the intraoral tissue regions corresponding to the maxillary right middle incisor tooth position and to the incisor tooth positions of the lower jaw all contain relatively complete vestibular sulcus features, the vestibular sulcus height data contained in the oral cavity feature data can be obtained accordingly; that is, the oral cavity feature data are extracted according to the non-feature-missing positions. For fitting, the structures on the left and right sides of the maxillofacial region are generally symmetrical to each other, so at least the vestibular sulcus feature corresponding to the incisor tooth position on the right side of the upper jaw can be mirrored and then filled into the feature missing position corresponding to the incisor tooth position on the left side of the upper jaw; that is, according to the non-feature-missing positions, the oral cavity structure information can be reconstructed based on the oral cavity anatomical features.
As shown in fig. 19, still another embodiment of the present invention provides an oral cavity structure information generating method, in which an application program or a command corresponding to the method can be loaded on the storage medium and/or the oral cavity structure information generating system 300, so as to achieve the technical effect of generating oral cavity structure information. This further embodiment is mainly refined from the previous embodiment in steps 41 and 42, steps 410 to 413 'being provided corresponding to step 41, and steps 421' to 422 being provided corresponding to step 42. The method for generating the oral cavity structure information may specifically include the following steps.
Step 410, acquiring three-dimensional model data of the oral cavity.
And 411', determining at least incisor teeth positions on the oral three-dimensional model as target teeth positions, and calculating reference feature coordinates of the target teeth positions based on a preset feature recognition rule to represent target tooth position features.
And step 412', determining the upper boundary extreme value coordinate and the lower boundary extreme value coordinate of the three-dimensional model of the oral cavity along the first direction according to the reference characteristic coordinate.
And 413', calculating to obtain a characteristic height value between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate so as to represent the position relation between the upper jaw model boundary and the lower jaw model boundary.
And step 421', comparing the feature height value with the height integrity criterion value representing the preset position condition to obtain a comparison result.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
The upper boundary extreme value coordinate is located on the upper jaw of the oral cavity three-dimensional model and is the point whose distance from the occlusal plane is the largest; the lower boundary extreme value coordinate is located on the lower jaw of the oral cavity three-dimensional model and is likewise the point whose distance from the occlusal plane is the largest.
The present invention provides a solution that is different from the above-mentioned embodiment and that determines whether or not the integrity condition is satisfied by using the positional relationship between the maxillary model boundary and the mandibular model boundary. The technical solutions provided in the foregoing can be alternatively applied to the steps of determining the target tooth position, calculating the reference feature coordinate, and the like. Further, other further improved solutions described in the previous embodiment may also be alternatively applied to the further embodiment after being adjusted appropriately, thereby forming more derivative solutions.
Referring to fig. 21, in this further embodiment, at least eight incisor teeth positions are determined as target teeth positions, and the maxillary model boundary feature and the mandibular model boundary feature are searched in a traversal manner according to the corresponding at least eight reference feature coordinates, so as to finally determine the upper boundary extreme value coordinate point 101M and the lower boundary extreme value coordinate point 102M which are farthest from the occlusal plane, respectively. The position relationship between the upper boundary extreme value coordinate point 101M and the lower boundary extreme value coordinate point 102M can be used for representing the position relationship between the boundaries of the upper jaw model and the lower jaw model, so that the position relationship between abstract boundaries can be converted into the position relationship between specific coordinate points, and the algorithm design and judgment are facilitated.
As shown in fig. 19 and 20, the present invention provides a first example of an oral cavity structure information generating method based on the above-described embodiment, which refines step 412', specifically, steps 4121' and 4122', and refines step 413', and provides steps 4130' corresponding to steps 4121' to 4122 '. It is understood that the following description will not be expanded for the other steps than the refining step. Similarly, since steps 4121 'to 4130' are sequentially assigned to steps 412 'and 413', the description of steps 412 'and 413' themselves will be omitted hereinafter. This first embodiment specifically includes the following steps.
Step 410, obtaining oral cavity three-dimensional model data.
Step 411', determining at least incisor teeth positions on the oral cavity three-dimensional model as target teeth positions, and calculating reference feature coordinates of the target teeth positions based on a preset feature recognition rule to represent target teeth position features.
And 4121', calculating an upper boundary coordinate set corresponding to the upper jaw target tooth position on the oral cavity three-dimensional model and a lower boundary coordinate set corresponding to the lower jaw target tooth position on the oral cavity three-dimensional model to represent the boundary characteristics of the models.
In step 4122', the upper boundary coordinate having the largest coordinate value corresponding to the third coordinate axis in the upper boundary coordinate set is traversed and determined as the upper boundary extreme coordinate, and the lower boundary coordinate having the smallest coordinate value corresponding to the third coordinate axis in the lower boundary coordinate set is traversed and determined as the lower boundary extreme coordinate.
Step 4130', calculating a coordinate difference value of the upper boundary extreme value coordinate and the lower boundary extreme value coordinate on the third coordinate axis as a feature height value.
And step 421', comparing the feature height value with the height integrity criterion value representing the preset position condition to obtain a comparison result.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
The first direction is a projection of the extending direction of the tooth body long axis on the jaw face of the oral cavity three-dimensional model. The reference feature coordinate, the upper boundary extreme value coordinate and the lower boundary extreme value coordinate are located in a preset model coordinate system; the origin of the model coordinate system is located on the straight line where the incisal edges of the incisors lie; the model coordinate system at least comprises a first coordinate axis and a third coordinate axis, the third coordinate axis extending along the extending direction of the tooth centerline and the first coordinate axis extending along the extending direction of the central incisor width.
As shown in fig. 6 and 21, the first coordinate axis is Rx and the third coordinate axis is Rz; the origin Ro of the model coordinate system is located on the straight line on which the incisal edges of the central incisors or the lateral incisors lie, and is preferably located between the mesial angle of the maxillary left middle incisor and the mesial angle of the maxillary right middle incisor. Of course, the origin Ro of the model coordinate system may also be interpreted as preferably being located at the intersection of the tooth centerline and the occlusal plane.
In this first embodiment, the boundary feature coordinates, which are respectively farthest from the occlusal plane, are determined by traversing all the upper boundary coordinates and all the lower boundary coordinates. When the target tooth site is determined as an incisor or the target intraoral tissue region is determined as a vestibular sulcus region, the eight boundary feature coordinates corresponding to the eight incisor tooth sites may be determined based on the above technical solution. Further, a space point coordinate with the maximum coordinate value on the third coordinate axis Rz is selected from the four boundary characteristic coordinates on the upper jaw as an upper boundary extreme value coordinate 101M, and a space point coordinate with the minimum coordinate value on the third coordinate axis Rz is selected from the four boundary characteristic coordinates on the lower jaw as a lower boundary extreme value coordinate 102M, so that a coordinate difference value of the two on the third coordinate axis Rz is calculated and serves as a characteristic height value Δ H.
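A minimal sketch of steps 4121' to 4130', assuming the upper and lower boundary coordinate sets are lists of (Rx, Ry, Rz) tuples with the third coordinate axis at index 2; the coordinates are illustrative and do not correspond to fig. 21.

```python
def feature_height_value(upper_boundary_coords, lower_boundary_coords, axis=2):
    """Traverse the boundary coordinate sets, take the maximal Rz value on the
    upper jaw and the minimal Rz value on the lower jaw, and return their
    coordinate difference as the feature height value ΔH."""
    upper_extreme = max(c[axis] for c in upper_boundary_coords)   # a point like 101M
    lower_extreme = min(c[axis] for c in lower_boundary_coords)   # a point like 102M
    return upper_extreme - lower_extreme

# Illustrative boundary coordinates for four maxillary and four mandibular incisors:
upper = [(0.0, 1.0, 14.2), (2.1, 1.1, 15.0), (4.0, 1.2, 14.8), (6.2, 1.0, 14.5)]
lower = [(0.1, 1.0, -12.9), (2.2, 1.1, -13.6), (4.1, 1.2, -13.1), (6.0, 1.0, -12.7)]
print(round(feature_height_value(upper, lower), 3))   # ΔH
```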
As shown in fig. 19 and 22, the present invention provides a second example of the oral cavity structure information generating method based on the above embodiment, and the second example provides the steps 61 'to 62' for the step 421', and the steps 61' to 62 'may be disposed at any position before the step 421', which is not limited by the present invention. Hereinafter, the present invention will be described by taking a second embodiment formed by disposing the preceding steps before step 410 as an example. It is understood that the following description will not be made for the other steps than the preceding step. This second embodiment specifically includes the following steps.
At step 61', at least two sets of three-dimensional training model data are obtained, as well as at least two sets of height training data corresponding to the three-dimensional training model data.
Step 62', calculating the average height data and the training height standard deviation of the height training data, and calculating the height integrity criterion value according to the difference between the average height data and the product of the training height standard deviation and a preset distribution probability coefficient.
Step 410, obtaining oral cavity three-dimensional model data.
Step 411', determining at least incisor teeth positions on the oral cavity three-dimensional model as target teeth positions, and calculating reference feature coordinates of the target teeth positions based on a preset feature recognition rule to represent target teeth position features.
And step 412', determining the upper boundary extreme value coordinate and the lower boundary extreme value coordinate of the three-dimensional model of the oral cavity along the first direction according to the reference characteristic coordinate.
And 413', calculating to obtain a characteristic height value between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate so as to represent the position relation between the upper jaw model boundary and the lower jaw model boundary.
And step 421', comparing the feature height value with the height integrity criterion value representing the preset position condition to obtain a comparison result.
And step 422, judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to the comparison result.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
The height training data comprises the distance between an upper tissue extreme value coordinate and a lower tissue extreme value coordinate on the three-dimensional training model, both corresponding to the intraoral tissue region, the upper tissue extreme value coordinate being located on one side of the first direction and the lower tissue extreme value coordinate being located on the other side of the first direction.
Referring to fig. 6, 17 and 21, when the intraoral tissue region is determined to be a vestibular sulcus region, and when the third training vestibular sulcus bottom point 1312U is the highest point on the upper vestibular sulcus relative to the occlusal plane of the three-dimensional training model data, and the sixth training vestibular sulcus bottom point 1341L is the lowest point on the lower vestibular sulcus relative to the occlusal plane of the three-dimensional training model data, it may be considered that the coordinate of the third training vestibular sulcus bottom point 1312U is the upper tissue extremal coordinate, and the coordinate of the sixth training vestibular sulcus bottom point 1341L is the lower tissue extremal coordinate, so as to obtain the distance therebetween as height training data. Of course, the above-mentioned determination of the upper extreme coordinate and the determination of the lower extreme coordinate of the tissue only represent a specific situation, and for different three-dimensional training model data, there may be other situations where the above-mentioned extreme coordinate points to.
The second example is not necessarily independent of the first example of this embodiment; by combining the two, the correspondence between the three-dimensional training model data and the oral cavity three-dimensional model data can be better established, providing a more specific and accurate scheme for the calculation of the integrity condition. For example, in the second example, the calculation process of the height training data may be kept consistent with that of the feature height value: preferably, when the feature height value ΔH is configured as the coordinate difference value of the upper boundary extreme value coordinate point 101M and the lower boundary extreme value coordinate point 102M in the direction of the third coordinate axis Rz, and the third coordinate axis Rz extends along the tooth centerline direction, the height training data may correspondingly be configured as the distance value between the upper tissue extreme value coordinate and the lower tissue extreme value coordinate in the tooth centerline extending direction, or as the projection length of the connecting line between the two extreme value coordinate points onto the tooth centerline direction. Of course, if the step of calculating the feature height value is adjusted, the selection of the height training data may be adjusted accordingly.
The definitions of the average height data and the training height standard deviation may be similar to those of the average distance data and the training distance standard deviation described above, and the distribution probability coefficient here may likewise be similar to the distribution probability coefficient corresponding to the distance training data described above. The difference between the two lies only in the data basis: in one case the data basis points to the positional relationship between the dentition features and the intraoral tissue features, while in the other case it points to the positional relationship between the upper and lower intraoral tissue features, or between the left and right intraoral tissue features, or between at least two intraoral tissues at different positions.
For any specific example or derivative technical solution in the second embodiment, it is further preferable that the distribution probability coefficient is 3.
For example, in the application scenario where the oral cavity characteristic data includes vestibular sulcus height data, if the average height data corresponding to all the height training data Dul is calculated as μDul and the training height standard deviation as σDul, then in the preferred embodiment with a distribution probability coefficient p = 3, the height integrity criterion value ζDul can be configured to at least satisfy:
ζDul = μDul - 3*σDul.
Thus, the characteristic height value ΔH may be compared with the height integrity criterion value ζDul, achieving the effect of "comparing the position relation with the preset position condition corresponding to the position relation". It should be noted that, since the feature height value and the height integrity criterion value are already data characterizing the integrity of the corresponding intraoral tissue, it is not necessary to count their number or to set an allowable error number value. If the comparison result is that the characteristic height value is smaller than the height integrity criterion value, it can be directly judged that the oral cavity three-dimensional model does not satisfy the preset integrity condition.
When the two examples are combined, the step of judging whether the preset integrity condition is met can specifically be executed as follows: if the number of the feature missing positions is greater than or equal to the allowable error number value, or the feature height value is smaller than the height integrity criterion value, it is judged that the oral cavity three-dimensional model does not meet the preset integrity condition; if the number of the feature missing positions is smaller than the allowable error number value and the feature height value is greater than or equal to the height integrity criterion value, it is judged that the oral cavity three-dimensional model meets the preset integrity condition. The feature extraction step performed after the preset integrity condition is satisfied can be referred to the first embodiment described above and is not repeated here.
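The combined judgment can be summarised in a few lines; the function name and the sample values are illustrative assumptions.

```python
def model_meets_integrity(num_missing, allowable_errors, feature_height, height_criterion):
    # The model fails the preset integrity condition if either criterion fails.
    if num_missing >= allowable_errors or feature_height < height_criterion:
        return False
    return True

print(model_meets_integrity(3, 4, 28.6, 25.0))   # True with these illustrative values
```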
Based on any of the above embodiments, as shown in fig. 4 and fig. 23, the present invention provides a first embodiment of an oral cavity structure information generating method, which details step 43, and specifically provides steps 431 to 433. It will be understood that for other steps than the refining step, which will not be described further below, for example, the step 40 may specifically include the step 41 and the step 42. This first embodiment specifically includes the following steps.
And step 40, acquiring oral three-dimensional model data, and judging whether an appointed intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral characteristic data corresponding to an intraoral tissue region in the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral three-dimensional model data, and reconstructing a part representing the oral characteristic data on the oral three-dimensional model data to obtain oral structure information. The step 43 specifically includes:
step 431, calling the corresponding first neural network model and the second neural network model according to the intraoral organization region.
Step 432, inputting the intraoral image data into the first neural network model for region of interest extraction, and obtaining feature region image data.
And 433, inputting the image data of the characteristic region into a second neural network model for characteristic identification to obtain oral characteristic data corresponding to the oral tissue region.
Therefore, the step of extracting the features in the oral image data can be divided into two steps, so that the basic data quantity of feature identification is reduced, and the speed of obtaining the oral feature data by analyzing the oral image data is increased. The step of extracting the features by using the neural network model can adapt to various analysis scenes, and compared with the method of simply using parameters such as chroma, RGB value or gray level in an image, the method has higher accuracy and universality.
In a specific example of this first embodiment, model training steps 71 to 74 may be provided for step 43 (i.e., steps 431 to 433 described above) as the training steps of the models; specifically, steps 71 to 74 may be disposed at any position before step 431, which is not limited by the present invention. Hereinafter, the present invention will be described taking as an example the specific example formed by disposing these preceding steps before step 40. It is understood that the steps other than these preceding steps will not be described again below. This specific example may include the following steps.
At step 71, at least two sets of training image data are acquired, along with region of interest markers corresponding to the training image data.
And 72, calling a preset convolutional neural network model, and performing iterative training by taking the training image data and the region-of-interest mark as model input to obtain a first training parameter and a corresponding first neural network model.
Step 73, acquiring at least two sets of training image data, along with the dentition marks and the vestibular sulcus marks corresponding to the training image data.
And step 74, calling a preset convolution neural network model, and performing iterative training by taking the training image data, the dentition mark and the vestibular sulcus mark as model input to obtain a second training parameter and a corresponding second neural network model.
And step 40, acquiring the data of the oral three-dimensional model, and judging whether the designated intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral characteristic data corresponding to an intraoral tissue region in the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral three-dimensional model data, and reconstructing a part representing the oral characteristic data on the oral three-dimensional model data to obtain oral structure information. The step 43 specifically includes:
And 431, calling a corresponding first neural network model and a corresponding second neural network model according to the intraoral tissue region.
Step 432, inputting the intraoral image data into the first neural network model for region of interest extraction, and obtaining feature region image data.
And step 433, inputting the image data of the characteristic region into a second neural network model for characteristic identification to obtain the oral characteristic data corresponding to the oral tissue region.
The extension range of the region-of-interest mark in the corresponding training image covers at least the upper lip, the lower lip and the dentition in that training image.
A first neural network model and a second neural network model are built by utilizing the convolutional neural network, so that the application of local feature analysis can be better adapted. On one hand, the data volume in the operation process can be reduced by utilizing the characteristics of sparse connection, weight sharing and downsampling; on the other hand, the translation invariance of the method can be utilized to ensure the accuracy of feature identification.
In this embodiment, the extension range of the region-of-interest mark is limited so that the positions for feature extraction are confined to the regions of the upper lip, the lower lip and the dentition. This avoids the influence of facial features of the person on the feature extraction process of the intraoral photographic image, and also avoids the increase in the amount of computation caused by extracting unnecessary features.
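The two-stage structure of steps 431 to 433 can be sketched as a simple pipeline; the two stand-in callables below are hypothetical placeholders for the trained first and second neural network models, and the crop window and the returned feature dictionary are invented for illustration only.

```python
import numpy as np

def extract_oral_feature_data(intraoral_image, first_model, second_model):
    """Two-stage sketch: the first model extracts the region of interest, the
    second model recognizes features inside that smaller region."""
    feature_region_image = first_model(intraoral_image)     # step 432: ROI extraction
    oral_feature_data = second_model(feature_region_image)  # step 433: feature recognition
    return oral_feature_data

# Hypothetical stand-ins; in practice both would be trained convolutional networks.
first_model = lambda image: image[40:220, 60:300]               # pretend ROI crop
second_model = lambda roi: {"vestibular_sulcus_height_px": 85}  # pretend recognition output

photo = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder intraoral photograph
print(extract_oral_feature_data(photo, first_model, second_model))
```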
Based on any of the above embodiments, as shown in fig. 24, the present invention provides a second embodiment of an oral cavity structure information generating method, which details step 43, and specifically provides steps 434 to 435. It will be understood that for other steps than the refining step, which will not be described further below, for example, step 40 may specifically include step 41 and step 42. This second embodiment specifically includes the following steps.
And step 40, acquiring oral three-dimensional model data, and judging whether an appointed intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information. The step 43 specifically includes:
And step 434, determining at least one target tooth position corresponding to each other in the oral cavity three-dimensional model and the intraoral image as a relative reference tooth position, and calculating size data of the dental crown of the relative reference tooth position in at least one same direction in the oral cavity three-dimensional model and the intraoral image, so as to obtain reference physical size data and reference pixel size data respectively.
Step 435, fitting to obtain a size mapping factor according to the reference pixel size data and the reference physical size data to represent the pixel size relationship.
Two points need to be explained. First, this second embodiment, the first embodiment and the third embodiment can be combined with one another to form a more complete step 43; of course, the three can also be regarded as separate embodiments, each achieving the corresponding technical effect. Second, the following description refers to fig. 25, in which a part Z2 and a part Z3 are corresponding partial schematic views: the part Z2 corresponds to fig. 21 and focuses on a portion of the intraoral tissue structure of the oral cavity three-dimensional model at the middle incisor tooth positions, while the part Z3 is the corresponding portion of the intraoral tissue structure of the intraoral photographic image at the middle incisor tooth positions. Fig. 25 is referred to for the specific description of the step of establishing the pixel size relationship, and the present solution is not limited in terms of the selection of the target tooth position or the like.
As shown in fig. 21 and 25, for example, if the incisor tooth position on the left side of the upper jaw in Z2 and Z3 is selected as the relative reference tooth position, the pixel size relationship can be established from the size data of this tooth position in one same direction. For example, the length, the width, or the labial surface area of the dental crown may be used as the size data, thereby obtaining the reference pixel size data corresponding to the intraoral photographic image and the reference physical size data corresponding to the oral cavity three-dimensional model.
Specifically, the unit of the reference pixel size data is a pixel, or may be interpreted as the number of pixels, and the unit of the reference physical size is a millimeter, or is interpreted as the length of the size data. Therefore, the mapping relation between the plane image and the three-dimensional image can be established, and the subsequent reconstruction of the oral cavity characteristics is facilitated.
Preferably, the at least one same direction includes the crown width direction. In this way, the pixel size relationship is prevented from being affected when the crown length or the crown labial surface area differs between the intraoral image and the oral cavity three-dimensional model owing to special conditions such as gingival recession, and the crown width, which is numerically stable, is used as the data basis for establishing the pixel size relationship.
Preferably, the size mapping factor is a quotient of the reference physical size data and the reference pixel size data. Therefore, the physical size represented by a single pixel in the intraoral image can be calculated, namely the length value (unit is millimeter) of the single pixel corresponding to the oral cavity three-dimensional model, and the stability of the characteristic corresponding relation in the reconstruction process is ensured by using the data similar to the scale.
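A minimal sketch of the size mapping factor as this quotient; the crown width of 8.6 mm spanning 215 pixels is an invented example, not measured data.

```python
def size_mapping_factor(reference_physical_size_mm, reference_pixel_size_px):
    # Physical length (mm) represented by a single pixel of the intraoral image.
    return reference_physical_size_mm / reference_pixel_size_px

factor = size_mapping_factor(8.6, 215.0)      # illustrative: 8.6 mm crown width over 215 px
sulcus_height_mm = 85 * factor                # convert a pixel measurement back to millimetres
print(round(factor, 4), round(sulcus_height_mm, 2))
```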
On the basis, the invention provides a specific example based on the second embodiment, specifically refines the process of calculating the reference pixel size data and the reference physical size data, and completes the establishment of the mapping relationship between the reference pixel size data and the reference physical size data by selecting the reference feature points at the corresponding tooth positions on the intraoral photographic image and the oral cavity three-dimensional model. As shown in fig. 26 or fig. 27, this specific example is substantially identical to the other steps in the second embodiment described above, but the following detailed steps are provided for the step 434.
And step 81, determining a first reference characteristic point and a second reference characteristic point relative to the reference tooth position on the intraoral picture, and calculating and obtaining reference pixel size data according to the number of pixels between the first reference characteristic point and the second reference characteristic point.
And step 82, determining a third reference characteristic point and a fourth reference characteristic point relative to the reference tooth position on the oral cavity three-dimensional model, and calculating and obtaining reference physical size data according to the Euclidean distance between the third reference characteristic point and the fourth reference characteristic point.
The first reference characteristic point and the second reference characteristic point are positioned on two sides of a long axis of the tooth body of the relative reference tooth position, and the distance between the first reference characteristic point and the corresponding crown incisal end of the relative reference tooth position is equal to the distance between the second reference characteristic point and the corresponding crown incisal end of the relative reference tooth position. The third reference characteristic point and the fourth reference characteristic point are positioned on two sides of the long axis of the tooth body of the relative reference tooth position, and the distance between the third reference characteristic point and the corresponding incisal end of the tooth crown of the relative reference tooth position is equal to the distance between the fourth reference characteristic point and the corresponding incisal end of the tooth crown of the relative reference tooth position.
As shown in fig. 25, if the relative reference tooth position determined on the intraoral image is the maxillary left middle incisor tooth position, the first reference feature point and the second reference feature point may be points located on the mesial boundary and the distal boundary on the two sides of the maxillary left middle incisor tooth long axis 1121r, respectively. If the relative reference tooth position determined is the maxillary right middle incisor tooth position, the first reference feature point and the second reference feature point may be points located on the mesial boundary and the distal boundary on the two sides of the maxillary right middle incisor tooth long axis 1111r, respectively. If the relative reference tooth positions determined are the maxillary left middle incisor tooth position and the maxillary right middle incisor tooth position, the first reference feature point and the second reference feature point may be points located on the two sides of the two tooth long axes, on the distal boundary of the maxillary left middle incisor tooth position and the distal boundary of the maxillary right middle incisor tooth position, respectively. In addition, the distances between the two reference feature points and the incisal end of the dental crown are equal, so that the two reference feature points can accurately reflect the width feature of the dental crown.
Correspondingly, the oral cavity three-dimensional model also includes a maxillary left middle incisor tooth long axis 1121r' and a maxillary right middle incisor tooth long axis 1111r', and at least three selection schemes for the third reference feature point and the fourth reference feature point can be obtained based on a technical scheme similar to that of the first reference feature point and the second reference feature point.
Under the technical route of this specific example, as shown in fig. 24 and 26, the present invention further provides a first specific example of a second embodiment. This first specific example essentially refines step 434, in which a crown completeness determination step 801 is added, and refines step 802A performed after the crown completeness is determined, and steps 811A and 812A attributed to step 81, to achieve the determination of the relative reference tooth position and the calculation of the reference pixel size. It will be understood that for other steps than the refining step, which will not be described further below, for example, step 40 may specifically include step 41 and step 42. Similarly, since steps 801 to 82 belong to step 434, the description of step 434 itself will be omitted below; since step 811A and step 812A pertain to step 81, the description of step 81 itself is to be omitted below. The first specific example specifically includes the following steps.
And step 40, acquiring the data of the oral three-dimensional model, and judging whether the designated intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral characteristic data corresponding to an intraoral tissue region in the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral three-dimensional model data, and reconstructing a part representing the oral characteristic data on the oral three-dimensional model data to obtain oral structure information. The step 43 specifically includes:
Step 430, corresponding intraoral image data is acquired.
Step 801, judging whether the maxillary left middle incisor tooth position and the maxillary right middle incisor tooth position in the oral photography image and the oral three-dimensional model both comprise complete crowns.
If so, jumping to step 802A, and determining the left middle incisor tooth position and the right middle incisor tooth position of the upper jaw, which correspond to each other in the oral photography image and the oral cavity three-dimensional model, as relative reference tooth positions.
Step 811A determines a first reference feature point at the far-middle boundary of the maxillary left-side middle incisor crown of the intraoral photographic image and a second reference feature point at the far-middle boundary of the maxillary right-side middle incisor crown of the intraoral photographic image.
In step 812A, the number of pixels between the first reference feature point and the second reference feature point is calculated, and one-half of the number of pixels is used as the reference pixel size data.
And step 82, determining a third reference characteristic point and a fourth reference characteristic point relative to the reference tooth position on the oral cavity three-dimensional model, and calculating and obtaining reference physical size data according to the Euclidean distance between the third reference characteristic point and the fourth reference characteristic point.
Step 435, fitting to obtain a size mapping factor according to the reference pixel size data and the reference physical size data to represent the pixel size relationship.
In this way, the specific example of the second embodiment can cope with special situations such as missing teeth, and adaptively change the establishment strategy of the pixel size relationship, thereby having a wider application range. It is understood that the specific example of the preferred embodiment described above uses the incisor tooth position as a reference to obtain more definite data, but the present invention does not exclude the solution of using other tooth positions for the pixel size relationship fitting.
The determination of whether the crown is complete may be performed based on the fitting of the convex hull contour, for example, whether the area or distribution of the convex hull contour is uniform, whether the convex hull contour can be fitted, and the like, which is not described herein again.
When the crowns are both complete, the whole width across the crowns of the two maxillary middle incisors can be used as the basis for fitting the pixel size relationship, which gives higher accuracy. As shown in fig. 25, in the intraoral image, the far-middle boundary pixel point 1121s of the maxillary left middle incisor may be determined as the first reference feature point, the far-middle boundary pixel point 1111s of the maxillary right middle incisor may be determined as the second reference feature point, and the number of pixels between the two pixel points is calculated; this number may directly serve as the reference pixel size data. Preferably, half of this number of pixels is taken as the reference pixel size data, which can reflect the average pixel width of a single middle incisor.
Correspondingly, steps corresponding to step 811A and step 812A may also be provided for step 82. The specific step expressions are not repeated here; as can be seen from fig. 25, in the oral cavity three-dimensional model, the far-middle boundary space point 1121s' of the maxillary left middle incisor may be determined as the third reference feature point, the far-middle boundary space point 1111s' of the maxillary right middle incisor may be determined as the fourth reference feature point, and the Euclidean distance between the two space points is calculated.
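A minimal sketch of the two measurements, assuming the image points are given as (row, column) pixel coordinates and the model points as 3D coordinates in millimetres; halving the physical distance to mirror step 812A is an assumption, and all point values are invented for illustration.

```python
import math

def pixel_count(p1, p2):
    # Pixel distance between two image points given as (row, column) coordinates.
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def euclidean_distance(q1, q2):
    # Euclidean distance between two 3D model points (mm).
    return math.dist(q1, q2)

# Illustrative stand-ins for boundary pixel points 1121s / 1111s and the
# corresponding model space points 1121s' / 1111s'.
px_left, px_right = (210, 180), (210, 395)
sp_left, sp_right = (-8.6, 0.4, 0.0), (8.7, 0.5, 0.1)

reference_pixel_size = pixel_count(px_left, px_right) / 2            # half, per step 812A
reference_physical_size = euclidean_distance(sp_left, sp_right) / 2  # halved by assumption
print(round(reference_pixel_size, 2), round(reference_physical_size, 2))
```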
Correspondingly, as shown in fig. 24 and 27, the present invention further provides a second specific example of the second embodiment. This second specific example provides a further determination step 802B after the crown is determined to be incomplete, and steps 811B and 812B that pertain to step 81. It will be understood that for other steps than the refining steps, which will not be described further below, for example, step 40 may specifically include step 41 and step 42. Similarly, since steps 801 to 82 belong to step 434, the description of step 434 itself will be omitted below; since step 811B and step 812B pertain to step 81, the description of step 81 itself is to be omitted below. This second specific example specifically includes the following steps.
And step 40, acquiring the data of the oral three-dimensional model, and judging whether the designated intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information. The step 43 specifically includes:
step 430, corresponding intraoral image data is obtained.
Step 801, judging whether the maxillary left middle incisor tooth position and the maxillary right middle incisor tooth position in the oral photography image and the oral three-dimensional model both comprise complete crowns.
If not, jumping to step 802B: when only the maxillary left middle incisor tooth position in the oral photography image and the oral cavity three-dimensional model contains a complete dental crown, determining the maxillary left middle incisor tooth positions corresponding to each other in the oral photography image and the oral cavity three-dimensional model as the relative reference tooth position.
Step 811B determines a first reference feature point at the distal boundary of the maxillary left-side middle incisor crown of the intraoral photographic image and a second reference feature point at the mesial boundary of the maxillary left-side middle incisor crown of the intraoral photographic image.
In step 812B, the number of pixels between the first reference feature point and the second reference feature point is calculated, and the number of pixels is used as reference pixel size data.
And 82, determining a third reference characteristic point and a fourth reference characteristic point relative to the reference tooth position on the oral cavity three-dimensional model, and calculating and obtaining reference physical size data according to the Euclidean distance between the third reference characteristic point and the fourth reference characteristic point.
Step 435, fitting to obtain a size mapping factor according to the reference pixel size data and the reference physical size data to represent the pixel size relationship.
In this specific example, crown-missing judgment is likewise performed for the central incisor tooth positions, and technical effects similar to those of the preceding specific example are obtained; the two specific examples can be combined with each other to form a complete implementation falling within the second embodiment. In addition, although the description here assumes that the maxillary left central incisor tooth position contains a complete crown, it can be understood that the technical solution provided by the invention can be implemented for other practical situations to achieve the corresponding technical effects.
In this case, as shown in fig. 25, in the intraoral photographic image, the distal boundary pixel point 1121s of the maxillary left central incisor may be determined as the first reference feature point, the mesial boundary pixel point Cs of the maxillary left central incisor may be determined as the second reference feature point, and the number of pixels between these two pixel points may be calculated.
Correspondingly, step 82 may also include steps corresponding to step 811B and step 812B. The specific wording of those steps is not repeated here. As can be seen from fig. 25, in the oral three-dimensional model, the distal boundary space point 1121s' of the maxillary left central incisor may be determined as the third reference feature point, the mesial boundary space point Cs' of the maxillary left central incisor may be determined as the fourth reference feature point, and the Euclidean distance between these two space points is calculated.
In other cases, the above-mentioned mesial boundary pixel point Cs of the maxillary left central incisor may instead be interpreted as a mesial boundary pixel point of the maxillary right central incisor, and the above-mentioned mesial boundary space point Cs' of the maxillary left central incisor may instead be interpreted as a mesial boundary space point of the maxillary right central incisor. On this basis, the number of pixels between the distal boundary pixel point 1111s of the maxillary right central incisor and its mesial boundary pixel point, as well as the Euclidean distance between the distal boundary space point 1111s' of the maxillary right central incisor and its mesial boundary space point, can also be calculated.
For any of the specific examples or derivations of the second embodiment, it is further preferred that the distance from the first reference feature point to the gingival end of the corresponding incisor crown is three times the distance from the first reference feature point to the incisal end of that crown. In other words, the distance from the first reference feature point to the incisal end of the crown is one quarter of the crown length; a width value extracted at this level provides a more stable basis for establishing the pixel size relationship. On this basis, the second reference feature point, the third reference feature point and the fourth reference feature point can be configured to have the same positional characteristic.
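As a minimal sketch of the positional rule just described, assuming the incisal-end and gingival-end coordinates of a crown boundary are available as pixel coordinates (the values below are hypothetical):

    def quarter_crown_point(incisal_end, gingival_end, t=0.25):
        # Point on the segment from the incisal end toward the gingival end of the crown,
        # located at a fraction t of the crown length from the incisal end;
        # t = 0.25 realises the 1:3 split described above.
        return tuple(i + t * (g - i) for i, g in zip(incisal_end, gingival_end))

    # Hypothetical pixel coordinates of the distal boundary line of a central incisor crown.
    incisal_end_px = (410.0, 355.0)
    gingival_end_px = (414.0, 255.0)
    first_reference_feature_point = quarter_crown_point(incisal_end_px, gingival_end_px)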
Based on any of the above embodiments, as shown in fig. 4 and fig. 28, the present invention provides a third embodiment of the oral cavity structure information generation method, which refines step 43 and specifically provides steps 436 to 437. It will be understood that steps other than these refined steps are not described further below; for example, step 40 may specifically include step 41 and step 42. This third embodiment specifically includes the following steps.
And step 40, acquiring oral three-dimensional model data, and judging whether an appointed intraoral tissue area in the oral three-dimensional model meets a preset integrity condition.
If not, skipping to step 43, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to an intraoral tissue area in the intraoral image data, establishing a pixel size relationship between the intraoral image data and oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain oral cavity structure information. The step 43 specifically includes:
Step 436, determining at least two intraoral tissue feature points from the oral cavity feature data in the intraoral image data.
Step 437, fitting, at the feature missing positions in the oral three-dimensional model and following the pixel size relationship, tissue space feature points corresponding to the intraoral tissue feature points, so as to form the part representing the oral cavity feature data on the oral three-dimensional model data.
On this basis, at least when the oral cavity feature data includes vestibular sulcus height data, dental arch width data, labial frenulum width data or maxillofacial protrusion amplitude data, the most representative tissue space feature points can be reconstructed at the feature missing positions of the oral three-dimensional model. For oral cavity feature data such as dental arch radian data, a larger number of intraoral tissue feature points needs to be determined so that the overall arc curve of the dental arch can be fitted on the oral three-dimensional model.
In an application scenario, preferably, the intraoral tissue region includes a vestibular sulcus, the oral cavity feature data includes vestibular sulcus height data, and the intraoral tissue feature points include an upper sulcus bottom feature point and a lower sulcus bottom feature point. In this way, the most typical feature points can be extracted for reference by healthcare workers.
In this application scenario, as further shown in fig. 28, step 437 may include the following steps.
Step 4371, calculating the distance between the upper sulcus bottom feature point and the gingival margin midpoint of the corresponding tooth position to obtain an upper sulcus bottom distance parameter, and calculating a corresponding upper sulcus bottom mapping parameter from the upper sulcus bottom distance parameter and the size mapping factor representing the pixel size relationship.
Step 4372, fitting an upper space feature point representing the upper sulcus bottom position according to the reference feature coordinates of the target tooth position corresponding to the upper sulcus bottom feature point and the upper sulcus bottom mapping parameter.
Step 4373, calculating the distance between the lower sulcus bottom feature point and the gingival margin midpoint of the corresponding tooth position to obtain a lower sulcus bottom distance parameter, and calculating a corresponding lower sulcus bottom mapping parameter from the lower sulcus bottom distance parameter and the size mapping factor.
Step 4374, fitting a lower space feature point representing the lower sulcus bottom position according to the reference feature coordinates of the target tooth position corresponding to the lower sulcus bottom feature point and the lower sulcus bottom mapping parameter.
In this way, the gingival margin midpoint, which also serves as a reference point in the oral three-dimensional model, can be used to fit the mapping relationship of the upper and lower sulcus bottom feature points, maintaining consistency between the intraoral photographic image and the oral three-dimensional model. Based on the above steps, as shown in fig. 6, at least a first vestibular sulcus bottom coordinate point 13A can be fitted on the oral three-dimensional model to serve as the upper space feature point.
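A minimal sketch of steps 4371 to 4374 is given below. It assumes that the sulcus bottom feature points and gingival margin midpoints are available as pixel coordinates, that the reference feature coordinate of the target tooth position is its gingival margin midpoint in the model coordinate system, and that the fitted point is obtained by offsetting that coordinate along the first direction; these assumptions, the coordinates and the function names are illustrative only.

    import numpy as np

    def fit_sulcus_bottom_point(sulcus_px, gingiva_px, reference_coord, first_direction, factor):
        # Steps 4371/4373: sulcus bottom distance parameter (pixels), then mapping parameter (mm).
        distance_px = np.linalg.norm(np.asarray(sulcus_px, float) - np.asarray(gingiva_px, float))
        mapping_mm = distance_px * factor
        # Steps 4372/4374: offset the reference feature coordinate (gingival margin midpoint of the
        # target tooth position) along the first direction by the mapped distance.
        direction = np.asarray(first_direction, float)
        direction = direction / np.linalg.norm(direction)
        return np.asarray(reference_coord, float) + mapping_mm * direction

    # Hypothetical data for the upper sulcus bottom of a maxillary central incisor.
    upper_space_feature_point = fit_sulcus_bottom_point(
        sulcus_px=(452.0, 120.0), gingiva_px=(455.0, 240.0),
        reference_coord=(0.0, 6.5, 9.0), first_direction=(0.0, 0.0, 1.0),
        factor=0.11)  # size mapping factor from step 435, in mm per pixel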
Of course, the present invention is not limited to the above application scenario, and in another application scenario, preferably, the intraoral tissue region includes a vestibular sulcus, or a root ridge, or both, the oral cavity feature data includes dental arch width data, and the intraoral tissue feature points include a left-side sulcus feature point and a right-side sulcus feature point. In this application scenario, the establishment of the mapping relationship and the fitting of the feature points are similar to those in the previous application scenario, and are not described herein again.
Of course, the embodiments provided by the present invention are not limited to fitting only some of the spatial feature points in the oral three-dimensional model; all of the spatial feature points may also be fitted, and a tissue spatial distribution curve or a tissue spatial distribution curved surface may even be obtained by fitting. On this basis, in a specific example of the above third embodiment, step 436 and step 437 may be further configured as the following step 436' and step 437'.
Step 436', determining all intraoral tissue feature points from the oral cavity feature data in the intraoral photographic image.
Step 437', according to the intraoral tissue feature points, fitting tissue space feature points corresponding to the intraoral tissue feature points at the feature missing positions in the oral three-dimensional model following the pixel size relationship, and fitting a tissue spatial distribution curve or a tissue spatial distribution curved surface from the tissue space feature points, the curve or curved surface serving as the part representing the oral cavity feature data on the oral three-dimensional model data.
Similarly, as shown in fig. 6, the above-described preferred configuration may be applied to fit the first tissue spatial distribution curve 13a or the second tissue spatial distribution curve 13b, or to fit the first tissue spatial distribution curved surface Sa. Specifically, when fitting the tissue spatial distribution curve or curved surface, the coordinate value of a tissue space feature point in the direction of the second coordinate axis Ry (that is, the direction perpendicular to the paper plane) may be determined from the grey value, the brightness value or the actual depth of the corresponding point in the intraoral image.
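As an illustrative sketch only, once several tissue space feature points have been reconstructed, a tissue spatial distribution curve can be fitted through them, for example with a low-order polynomial; the sample points and the fitting order below are assumptions.

    import numpy as np

    # Hypothetical tissue space feature points reconstructed on the model: x along the first
    # coordinate axis, z along the third coordinate axis (values in mm).
    points = np.array([[-22.0, 14.1], [-12.0, 16.8], [0.0, 17.9], [12.0, 16.5], [22.0, 14.0]])

    # Fit a second-order polynomial z(x) as the tissue spatial distribution curve.
    curve = np.poly1d(np.polyfit(points[:, 0], points[:, 1], deg=2))

    # Sample the fitted curve; the coordinate along the second coordinate axis Ry could additionally
    # be estimated from the grey value, brightness value or depth of the corresponding image points.
    xs = np.linspace(points[:, 0].min(), points[:, 0].max(), 50)
    zs = curve(xs)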
As shown in fig. 29, another embodiment of the present invention provides an oral cavity structure information generating method, in which an application program or a command corresponding to the method can be loaded on the storage medium and/or the oral cavity structure information generating system 300, so as to achieve the technical effect of generating oral cavity structure information. The method for generating the oral cavity structure information may specifically include the following steps.
Step 91, obtaining oral cavity three-dimensional model data and corresponding intraoral image data.
And step 92, extracting oral cavity characteristic data corresponding to a target intraoral tissue area in the oral cavity three-dimensional model in the intraoral image data, establishing a pixel size relation between the intraoral image data and the oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity characteristic data on the oral cavity three-dimensional model data to obtain oral cavity structure information.
Compared with the embodiments provided in fig. 4, fig. 5 or fig. 19 of the present invention, and with the other embodiments or specific examples obtained by combining them, the embodiment described above omits the step of judging the integrity of the oral three-dimensional model: feature extraction is performed directly on the intraoral image data, and the extracted features are reconstructed on the oral three-dimensional model. This avoids processing complex point cloud data of the oral three-dimensional model and allows more intuitive feature points, curves or curved surfaces to be displayed on the oral three-dimensional model.
It will be understood by those skilled in the art that the individual steps of this embodiment may alternatively be embodied as described or limited in the other embodiments, implementations or specific examples provided above. In particular, the extraction of oral cavity feature data, the establishment of the pixel size relationship and the reconstruction of the part representing the oral cavity feature data are not described again here.
Based on any of the above embodiments, the oral cavity structure information generated by the present invention may be interpreted as an oral cavity three-dimensional model including the "part characterizing oral cavity feature data" and data thereof, or may be interpreted as referring to only the "part characterizing oral cavity feature data" (i.e., at least one of the feature points, curves, or curved surfaces).
Of course, in some embodiments, the oral cavity structure information may also be interpreted as numerical data, preferably oral cavity feature data that has been mapped through the pixel size relationship. Specifically, it may be the value of the vestibular sulcus height on the oral three-dimensional model, the value of the dental arch width on the oral three-dimensional model, the value of the labial frenum width on the oral three-dimensional model, the value of the dental arch radian on the oral three-dimensional model, or the value of the maxillofacial protrusion amplitude/radian on the oral three-dimensional model.
Although some drawings of the specification are labeled with a single first direction D1, the disclosure is not limited thereto; this does not mean that different first directions cannot be set for different tooth positions. In particular, the first direction D1 shown in fig. 6, fig. 15, fig. 17 and fig. 25 can be interpreted as an example of the first direction of the central incisors of the oral three-dimensional model viewed from the frontal direction. The first directions for other viewing angles of the oral three-dimensional model, or for other tooth positions, can be derived by combining fig. 12 and fig. 13.
In addition, the multiple implementations corresponding to a given embodiment may be combined in whole or in part; for example, the three implementations corresponding to the embodiment provided in fig. 4 and their specific examples may be combined. The multiple implementations corresponding to one embodiment may also be combined into another embodiment: for example, the embodiment provided in fig. 4 may be combined as a whole with the embodiment provided in fig. 5; the embodiments provided in fig. 4 and/or fig. 5 may be combined as a whole with the embodiment provided in fig. 19 to set a stricter integrity condition judgment; and the first to third implementations corresponding to the embodiment provided in fig. 4 may be combined with the embodiment provided in fig. 19, so as to enrich that embodiment and achieve the corresponding effects.
In conclusion, in the oral cavity structure information generation method provided by the present invention, the relative positional relationship between tooth position features and model boundary features is analyzed, and when the oral three-dimensional model is judged not to satisfy the preset condition, the corresponding intraoral image data is called to complete the missing features, so that complete oral cavity structure information based on the oral three-dimensional model data is obtained. This reduces the requirements placed on the oral three-dimensional model and simplifies the steps and operation logic while still obtaining high-precision oral structure features. The compositing between the intraoral image data and the oral three-dimensional model data mainly reconstructs features on the oral three-dimensional model according to the established pixel size relationship, and the output oral cavity structure information is based on three-dimensional data; it is therefore more intuitive and accurate, convenient for medical workers to analyze further, and also convenient for patients or other related personnel to consult.
It should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only a single technical solution. This manner of description is adopted for clarity only; those skilled in the art should take the specification as a whole, and the technical solutions in the embodiments may also be combined appropriately to form other implementations understandable to those skilled in the art.
The detailed description set out above is only a specific description of feasible embodiments of the present invention and is not intended to limit the scope of protection of the present invention; equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall all fall within the scope of protection of the present invention.

Claims (40)

1. An oral structure information generating method, comprising:
acquiring oral three-dimensional model data, and judging whether an appointed intraoral tissue area in the oral three-dimensional model meets a preset integrity condition;
if not, acquiring corresponding intraoral image data, extracting oral cavity feature data corresponding to the intraoral tissue region in the intraoral image data, establishing a pixel size relationship between the intraoral image data and the oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain the oral cavity structure information.
2. The method for generating oral cavity structural information according to claim 1, wherein the step of acquiring the oral cavity three-dimensional model data and determining whether the specified intraoral tissue region in the oral cavity three-dimensional model satisfies a preset integrity condition specifically comprises:
Acquiring oral cavity three-dimensional model data, and calculating to obtain a position relation between a target tooth position and a model boundary and/or a position relation between an upper jaw model boundary and a lower jaw model boundary according to target tooth position characteristics in the oral cavity three-dimensional model data and model boundary characteristics corresponding to the target tooth position characteristics;
and comparing the position relation with a preset position condition corresponding to the position relation, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition or not.
3. The method for generating oral cavity structure information according to claim 2, wherein the "calculating a positional relationship between a target tooth position and a model boundary according to a target tooth position feature in the oral cavity three-dimensional model data and a model boundary feature corresponding to the target tooth position feature" specifically includes:
determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position, and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to represent the target tooth position feature;
according to the reference feature coordinates, determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along a first direction so as to represent the model boundary features;
Calculating to obtain a characteristic distance value between the reference characteristic coordinate and the boundary characteristic coordinate so as to represent the position relation between the target tooth position and the model boundary;
the step of comparing the position relationship with a preset position condition corresponding to the position relationship to determine whether an intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model meets a preset integrity condition specifically includes:
and comparing the characteristic distance value with a distance integrity criterion value representing the preset position condition, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to a comparison result.
4. The oral cavity structure information generation method according to claim 3, wherein the reference feature coordinate is located in a preset model coordinate system, and an origin of the model coordinate system is located in an occlusal plane; the "calculating the reference feature coordinates of the target tooth positions based on the preset feature recognition rules" specifically includes:
determining all space points on one side of the labial surface on the dental crown of the target tooth position to form a space point set;
Selecting a space point with the minimum coordinate value in the space point set as a pole to establish a polar coordinate system, and arranging the other space points in ascending order of polar angle and polar radius to form a characteristic traversal sequence;
extracting coordinates of the first two space points in the characteristic traversal sequence, sequentially calculating to obtain a first polar coordinate vector and a second polar coordinate vector by taking the pole as a starting point, calculating a cross product of the first polar coordinate vector and the second polar coordinate vector, and judging whether the cross product is less than 0;
if so, updating the starting point to be a first space point corresponding to the first polar coordinate vector; if not, updating the starting point to be a second space point corresponding to the second polar coordinate vector;
traversing all other spatial points behind the second spatial point in the feature traversal sequence, selectively updating the starting point according to a polar coordinate vector formed by the spatial point and the starting point and a cross product between the polar coordinate vectors, and determining at least two spatial points meeting the judgment condition as convex hull reference points;
and fitting convex hull contour data of the target tooth position according to the convex hull reference point and the pole, and calculating reference characteristic coordinates of the target tooth position according to the convex hull contour data.
5. The method for generating oral cavity structure information according to claim 4, wherein the "traversing all other spatial points located after the second spatial point in the feature traversal sequence, updating the starting point according to a polar coordinate vector formed by the spatial point and the starting point and a cross product between the polar coordinate vectors, and determining at least two spatial points that meet the determination condition as convex hull reference points" specifically includes:
extracting a third space point located behind the second space point in the feature traversal sequence, sequentially calculating a reference polar coordinate vector between a reference space point which is not the starting point and the starting point in the first space point and the second space point and a third polar coordinate vector between the third space point and the starting point, calculating a cross product of the reference polar coordinate vector and the third polar coordinate vector, and judging whether the cross product is less than 0;
if so, updating the starting point as the reference space point, and determining the reference space point as the convex hull reference point; if not, the initial point is not updated, the reference space point is deleted, the third space point is used as a new reference space point, and the initial point is selectively updated and the convex hull reference point is determined according to the new reference polar coordinate vector and the cross product of polar coordinate vectors formed by the initial point and the next space point of the new reference space point;
And repeating iteration until all the space points in the characteristic traversal sequence are judged, and obtaining at least two convex hull reference points.
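For illustration only (not forming part of the claims), the traversal recited in claims 4 and 5 corresponds to a cross-product based convex hull construction. A minimal Python sketch of the standard variant of such a scan is given below; the 2D projection of the crown space points, the sign convention of the cross-product test and the sample coordinates are assumptions of this sketch rather than limitations of the claims.

    import math

    def cross(o, a, b):
        # z-component of the cross product of vectors o->a and o->b (2D projection of the space points).
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def convex_hull_reference_points(points):
        # Pole: the point with the minimum coordinate value; remaining points sorted by polar angle,
        # then polar radius; points failing the cross-product test are discarded (cf. claims 4 and 5).
        pole = min(points, key=lambda p: (p[1], p[0]))
        rest = sorted((p for p in points if p != pole),
                      key=lambda p: (math.atan2(p[1] - pole[1], p[0] - pole[0]),
                                     math.hypot(p[0] - pole[0], p[1] - pole[1])))
        hull = [pole]
        for p in rest:
            while len(hull) > 1 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    # Hypothetical labial-side crown points projected onto a 2D plane (mm).
    crown_points = [(0.0, 0.0), (8.5, 0.2), (8.9, 6.0), (4.3, 10.5), (-0.4, 6.2), (4.0, 4.0)]
    convex_hull = convex_hull_reference_points(crown_points)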
6. The method for generating oral cavity structure information according to claim 3, wherein the "calculating the reference feature coordinates of the target tooth site based on the preset feature recognition rule" specifically includes:
and determining and taking the coordinates of the gingival margin midpoint of the target tooth position as the reference feature coordinates of the target tooth position.
7. The method for generating oral cavity structure information according to claim 3, wherein the "determining boundary feature coordinates corresponding to the reference feature coordinates on the oral cavity three-dimensional model along a first direction according to the reference feature coordinates to characterize the model boundary features" specifically comprises:
taking the reference characteristic coordinate as a starting point, and taking a reference extension line in a direction away from the target tooth position along the first direction, and continuously analyzing the formation condition of a projection end point of a reference extension end on the jaw face of the oral cavity three-dimensional model, except the reference characteristic coordinate, on the reference extension line;
and when the reference extension end does not form a projection end point with the maxillofacial surface of the oral cavity three-dimensional model any more, determining the finally formed projection end point coordinate as the boundary characteristic coordinate.
8. The method according to claim 7, wherein the intraoral tissue region includes a vestibular sulcus, the oral cavity feature data includes vestibular sulcus height data, the first direction is a projection of the extending direction of the dental long axis on the maxillofacial surface of the oral three-dimensional model, and the target tooth positions include a maxillary incisor tooth position and a mandibular incisor tooth position.
9. The oral cavity structure information generation method according to claim 7, wherein the intraoral tissue region includes a vestibular sulcus and/or a tooth root ridge, the oral cavity feature data includes dental arch width data, the first direction is a projection of an extension direction of a long axis of a dental body on a maxillofacial surface of the oral cavity three-dimensional model, and the target dental positions include a first maxillofacial left molar dental position and a first maxillofacial right molar dental position which correspond to each other; wherein the first maxillofacial area is an upper jaw and/or a lower jaw.
10. The method for generating oral cavity structural information according to claim 3, wherein the intraoral tissue region includes a labial frenum, the oral cavity feature data includes labial frenum width data, and the "determining at least one tooth position on the oral cavity three-dimensional model as a target tooth position and calculating reference feature coordinates of the target tooth position based on a preset feature recognition rule to characterize the target tooth position feature" specifically includes:
Determining a first jaw face left middle incisor tooth position and a first jaw face right middle incisor tooth position on the oral three-dimensional model as target tooth positions, and calculating reference feature coordinates of the target tooth positions and a plurality of relative feature coordinates between the two reference feature coordinates on the target tooth positions based on a preset feature recognition rule so as to represent the target tooth position features; wherein the first maxillofacial region is an upper jaw and/or a lower jaw;
the "determining, according to the reference feature coordinates, boundary feature coordinates on the oral cavity three-dimensional model corresponding to the reference feature coordinates along a first direction to characterize the model boundary features" specifically includes:
respectively taking the reference characteristic coordinate and the relative characteristic coordinate as starting points, and continuously analyzing the formation condition of projection end points of reference extension ends on the oral cavity three-dimensional model, except the reference characteristic coordinate or the relative characteristic coordinate, on the reference extension line along a first direction to a direction away from the target tooth position; wherein the first direction is the projection of the extending direction of the long axis of the tooth body on the jaw face of the oral cavity three-dimensional model;
And when the reference extension end and the oral cavity three-dimensional model do not form the projection end point any more, determining the finally formed projection end point coordinate as the boundary characteristic coordinate.
11. The method for generating oral cavity structural information according to claim 3, wherein before the comparing the characteristic distance value with the distance integrity criterion value representing the preset position condition and determining whether the intraoral tissue region corresponding to the target tooth site in the oral cavity three-dimensional model satisfies the preset integrity condition according to the comparison result, the method comprises:
acquiring at least two groups of three-dimensional training model data and at least two groups of distance training data corresponding to the three-dimensional training model data; wherein the distance training data comprises a distance between conditional training coordinates on the three-dimensional training model corresponding to the reference feature coordinates and target tissue coordinates on the three-dimensional training model corresponding to the intraoral tissue region; a line connecting the conditional training coordinate and the target tissue coordinate extends along the first direction;
and calculating the average distance data and the training distance standard deviation of the distance training data, and calculating to obtain the distance integrity criterion value according to the difference between the average distance data and the product of the training distance standard deviation and a preset distribution probability coefficient.
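For illustration only (not forming part of the claims), the criterion computation of claim 11 — the average distance minus the product of the training distance standard deviation and a distribution probability coefficient — can be sketched as follows; the sample distances and the use of the population standard deviation are assumptions of this sketch.

    import numpy as np

    def distance_integrity_criterion(distance_training_data, probability_coefficient=2.0):
        # Criterion value = mean(distances) - probability_coefficient * std(distances);
        # claim 15 specifies a distribution probability coefficient of 2 for the distance criterion.
        data = np.asarray(distance_training_data, dtype=float)
        return data.mean() - probability_coefficient * data.std()

    # Hypothetical sulcus bottom distance parameters (mm) from several three-dimensional training models.
    training_distances = [9.1, 8.7, 9.4, 8.9, 9.0, 9.6, 8.5]
    criterion_value = distance_integrity_criterion(training_distances)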
12. The oral cavity structure information generation method according to claim 11, wherein the intraoral tissue region includes a vestibular sulcus, the oral cavity feature data includes vestibular sulcus height data, the target dental position includes an upper first incisor dental position and a lower second incisor dental position, the distance training data includes a first sulcus bottom distance parameter corresponding to the upper first incisor dental position, and a second sulcus bottom distance parameter corresponding to the lower second incisor dental position; the "calculating the average distance data and the training distance standard deviation of the distance training data" specifically includes:
calculating first trench bottom average distance values of all first trench bottom distance parameters in all distance training data and second trench bottom average distance values of all second trench bottom distance parameters in all distance training data to obtain average distance data;
and calculating the standard deviation of the first trench bottom distances of all the first trench bottom distance parameters in all the distance training data and the standard deviation of the second trench bottom distances of all the second trench bottom distance parameters in all the distance training data to obtain the training distance standard deviation.
13. The oral cavity structure information generation method according to claim 12, wherein the distance training data includes sulcus bottom distance parameters respectively corresponding to the maxillary left side incisor tooth position, the maxillary left side central incisor tooth position, the maxillary right side incisor tooth position, the mandibular left side central incisor tooth position, the mandibular right side central incisor tooth position, and the mandibular right side lateral incisor tooth position; the sulcus bottom distance parameter represents the distance between the conditional training coordinate of the gingival margin midpoint corresponding to the incisor tooth position and the target tissue coordinate of the sulcus bottom feature point corresponding to the gingival margin midpoint.
14. The oral cavity structure information generating method according to claim 11, wherein the intraoral tissue region includes a vestibular sulcus and/or a tooth root ridge, the oral cavity characteristic data includes dental arch width data, the target tooth positions include bilateral maxillary distal molar tooth positions and bilateral mandibular distal molar tooth positions, and the distance training data includes an upper dental arch width parameter corresponding to the bilateral maxillary distal molar tooth positions and a lower dental arch width parameter corresponding to the bilateral mandibular distal molar tooth positions; the "calculating the average distance data and the training distance standard deviation of the distance training data" specifically includes:
calculating upper average width values of all upper arch width parameters in all distance training data and lower average width values of all lower arch width parameters in all distance training data to obtain average distance data;
and calculating the upper width standard deviation of all upper dental arch width parameters in all distance training data and the lower width standard deviation of all lower dental arch width parameters in all distance training data to obtain the training distance standard deviation.
15. The oral cavity structure information generation method according to claim 11, wherein the distribution probability coefficient is 2.
16. The method for generating oral cavity structure information according to claim 3, wherein the "calculating a feature distance value between the reference feature coordinate and the boundary feature coordinate" specifically includes:
and calculating the distance values of the reference characteristic coordinates and the boundary characteristic coordinates in the extending direction of the tooth centerline to obtain the characteristic distance values.
17. The oral cavity structure information generating method according to claim 16, wherein the first direction is a projection of an extending direction of a long axis of a dental body on the oral cavity three-dimensional model; the boundary characteristic coordinates and the reference characteristic coordinates are located in a preset model coordinate system, the model coordinate system at least comprises a first coordinate axis and a third coordinate axis, the third coordinate axis extends along the extension direction of the tooth center line, and the first coordinate axis extends along the extension direction of the middle incisor width; the "calculating a distance value between the reference feature coordinate and the boundary feature coordinate in the extending direction of the tooth centerline" specifically includes:
and calculating a coordinate difference value of the reference feature coordinate and the boundary feature coordinate on the third coordinate axis.
18. The method for generating oral cavity structural information according to claim 3, wherein the comparing the characteristic distance value with a distance integrity criterion value representing the preset position condition and determining whether an intraoral tissue region corresponding to the target tooth site in the oral cavity three-dimensional model satisfies a preset integrity condition according to a comparison result specifically includes:
Judging whether the characteristic distance value is smaller than the distance integrity criterion value or not;
if so, judging that the position in the oral cavity three-dimensional model pointed by the characteristic distance value is a characteristic missing position;
if not, continuously comparing the next characteristic distance value with the distance integrity criterion value;
and judging whether the intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets the integrity condition or not according to the number of the characteristic missing positions.
19. The method for generating oral cavity structural information according to claim 18, wherein the "determining whether or not an intraoral tissue region corresponding to the target tooth site in the oral cavity three-dimensional model satisfies the integrity condition according to the number of the feature missing positions" specifically includes:
judging the numerical value magnitude relation between the number of the characteristic missing positions and the allowable error numerical value;
if the number of the characteristic missing positions is larger than or equal to the allowable error number value, judging that the intra-oral tissue region corresponding to the target tooth position in the oral three-dimensional model does not meet the integrity condition;
if the number of the characteristic missing positions is smaller than the allowable error number value, judging that the intra-oral tissue region corresponding to the target tooth position in the oral three-dimensional model meets the integrity condition;
Wherein, the allowable error quantity value is an integer which is more than or equal to one half of the target tooth position quantity.
20. The method for generating the oral cavity structure information according to claim 2, wherein the "calculating a positional relationship between an upper jaw model boundary and a lower jaw model boundary according to the target dental position feature in the oral cavity three-dimensional model data and the model boundary feature corresponding to the target dental position feature" specifically includes:
determining at least incisor tooth positions on the oral cavity three-dimensional model as target tooth positions, and calculating reference feature coordinates of the target tooth positions based on a preset feature recognition rule so as to represent the target tooth position features;
determining an upper boundary extreme value coordinate and a lower boundary extreme value coordinate of the oral cavity three-dimensional model along a first direction according to the reference characteristic coordinate; wherein the upper boundary extreme value coordinate is located at the upper jaw of the oral cavity three-dimensional model and its distance from the occlusal plane has a maximum value, and the lower boundary extreme value coordinate is located at the lower jaw of the oral cavity three-dimensional model and its distance from the occlusal plane has a maximum value;
calculating to obtain a characteristic height value between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate so as to represent the position relation between the upper jaw model boundary and the lower jaw model boundary;
The step of comparing the position relationship with a preset position condition corresponding to the position relationship to determine whether an intraoral tissue region corresponding to the target tooth site in the oral three-dimensional model meets a preset integrity condition specifically includes:
and comparing the characteristic height value with a height integrity criterion value representing the preset position condition, and judging whether an intraoral tissue area corresponding to the target tooth position in the oral three-dimensional model meets a preset integrity condition according to a comparison result.
21. The oral cavity structure information generating method according to claim 20, wherein the first direction is a projection of the extending direction of the dental long axis on the maxillofacial surface of the oral cavity three-dimensional model; the reference feature coordinate, the upper boundary extreme value coordinate and the lower boundary extreme value coordinate are located in a preset model coordinate system, the origin of the model coordinate system is located on the straight line where the incisor crest is located, and the model coordinate system includes at least a first coordinate axis and a third coordinate axis, the third coordinate axis extending along the extension direction of the tooth center line and the first coordinate axis extending along the central incisor width extension direction; the "determining the upper boundary extreme value coordinate and the lower boundary extreme value coordinate of the oral cavity three-dimensional model along the first direction according to the reference feature coordinate" specifically includes:
Calculating an upper boundary coordinate set corresponding to an upper jaw target tooth position on the oral cavity three-dimensional model and a lower boundary coordinate set corresponding to a lower jaw target tooth position on the oral cavity three-dimensional model so as to represent the model boundary characteristics;
traversing and determining an upper boundary coordinate having a maximum coordinate value corresponding to the third coordinate axis in the upper boundary coordinate set as the upper boundary extreme value coordinate, and traversing and determining a lower boundary coordinate having a minimum coordinate value corresponding to the third coordinate axis in the lower boundary coordinate set as the lower boundary extreme value coordinate;
the "obtaining a feature height value between the upper boundary extreme value coordinate and the lower boundary extreme value coordinate by calculation" specifically includes:
and calculating a coordinate difference value of the upper boundary extreme value coordinate and the lower boundary extreme value coordinate on the third coordinate axis to serve as the characteristic height value.
22. The method as claimed in claim 20, wherein before comparing the feature height value with a height integrity criterion value representing the predetermined position condition and determining whether an intraoral tissue region corresponding to the target tooth site in the three-dimensional model of the oral cavity satisfies a predetermined integrity condition according to the comparison result, the method comprises:
Acquiring at least two groups of three-dimensional training model data and at least two groups of height training data corresponding to the three-dimensional training model data; wherein the height training data comprises a distance between an upper extreme tissue coordinate, corresponding to the intraoral tissue region, on the three-dimensional training model on one side of the first direction and a lower extreme tissue coordinate, corresponding to the intraoral tissue region, on the three-dimensional training model on the other side of the first direction;
calculating the average height data and the standard deviation of the training height of the height training data, and calculating to obtain the height integrity criterion value according to the difference between the average height data and the product of the standard deviation of the training height and a preset distribution probability coefficient; wherein the distribution probability coefficient is 3.
23. The method for generating oral cavity structure information according to claim 1, wherein the extracting oral cavity feature data corresponding to the intraoral tissue region in the intraoral image data specifically includes:
calling a corresponding first neural network model and a corresponding second neural network model according to the intraoral organization region;
inputting the intraoral image data into the first neural network model to extract an interested region to obtain characteristic region image data;
Inputting the feature region image data into the second neural network model for feature recognition to obtain the oral cavity feature data corresponding to the intraoral tissue region.
24. The oral structure information generation method according to claim 23, wherein the first neural network model and the second neural network model are convolutional neural network models; the intraoral tissue region comprises the vestibular sulcus; before the "calling the corresponding first neural network model and second neural network model according to the intraoral organization region", the method comprises the following steps:
acquiring at least two groups of training image data and a region of interest mark corresponding to the training image data; wherein the region of interest marks an extension range in the corresponding training image, covering at least the upper lip, the lower lip and the dentition in the training image;
calling a preset convolutional neural network model, and performing iterative training by taking the training image data and the region-of-interest mark as model input to obtain a first training parameter and a corresponding first neural network model;
acquiring at least two groups of training image data, and a tooth position mark and a vestibular sulcus mark corresponding to the training image data;
And calling a preset convolutional neural network model, and performing iterative training by taking the training image data, the tooth position mark and the vestibular sulcus mark as model input to obtain a second training parameter and a corresponding second neural network model.
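For illustration only (not forming part of the claims), the two-stage use of a region-of-interest network and a feature-recognition network described in claims 23 and 24 might be organized as in the sketch below. The PyTorch framework, the toy architecture and the mask-based cropping are all assumptions; the claims do not prescribe any particular framework or network structure.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        # Stand-in convolutional network; the actual architectures are not specified by the claims.
        def __init__(self, out_channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, out_channels, kernel_size=3, padding=1))

        def forward(self, x):
            return self.net(x)

    first_model = TinyCNN(out_channels=1)    # region-of-interest extraction (lips and dentition)
    second_model = TinyCNN(out_channels=2)   # feature recognition (tooth position / vestibular sulcus)

    image = torch.rand(1, 3, 256, 256)                             # intraoral image data as a tensor
    roi_mask = (torch.sigmoid(first_model(image)) > 0.5).float()   # feature region image data (mask)
    feature_maps = second_model(image * roi_mask)                  # oral cavity feature data (per-pixel maps)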
25. The method for generating oral cavity structure information according to claim 1, wherein the establishing a pixel size relationship between the intraoral image data and the oral cavity three-dimensional model data specifically includes:
determining at least one target tooth position corresponding to each other in the oral cavity three-dimensional model and the oral cavity image as a relative reference tooth position, and calculating size data of a crown of the relative reference tooth position in at least one same direction in the oral cavity three-dimensional model and the oral cavity image to respectively obtain reference pixel size data and reference physical size data;
and fitting to obtain a size mapping factor according to the reference pixel size data and the reference physical size data so as to represent the pixel size relation.
26. The oral structure information generation method according to claim 25, wherein the at least one same direction includes a crown width direction; the size mapping factor is a quotient of the reference physical size data and the reference pixel size data.
27. The method for generating oral cavity structural information according to claim 26, wherein the calculating size data of the crown of the relative reference tooth site in at least one same direction in the intraoral photographic image and the oral cavity three-dimensional model to obtain reference pixel size data and reference physical size data, respectively, specifically comprises:
determining a first reference characteristic point and a second reference characteristic point of the relative reference tooth position on the intraoral photographic image, and calculating and obtaining the reference pixel size data according to the pixel number between the first reference characteristic point and the second reference characteristic point; wherein the first reference characteristic point and the second reference characteristic point are positioned at two sides of the long axis of the tooth body of the relative reference tooth position, and the distance between the first reference characteristic point and the corresponding incisal end of the dental crown of the relative reference tooth position is equal to the distance between the second reference characteristic point and the corresponding incisal end of the dental crown of the relative reference tooth position;
determining a third reference characteristic point and a fourth reference characteristic point of the relative reference tooth position on the oral cavity three-dimensional model, and calculating and obtaining the reference physical size data according to the Euclidean distance between the third reference characteristic point and the fourth reference characteristic point; the third reference characteristic point and the fourth reference characteristic point are positioned on two sides of the long axis of the tooth body of the relative reference tooth position, and the distance between the third reference characteristic point and the corresponding crown incisal end of the relative reference tooth position is equal to the distance between the fourth reference characteristic point and the corresponding crown incisal end of the relative reference tooth position.
28. The method for generating oral cavity structural information according to claim 27, wherein the determining at least one target tooth position corresponding to each other in the intraoral photographic image and the oral cavity three-dimensional model as a relative reference tooth position specifically comprises:
judging whether the left maxillary middle incisor tooth position and the right maxillary middle incisor tooth position in the oral photography image and the oral three-dimensional model both comprise complete crowns;
if so, determining the left-side central incisor tooth position and the right-side central incisor tooth position of the upper jaw, which correspond to each other in the oral cavity three-dimensional model and the oral cavity image, as relative reference tooth positions;
the "determining a first reference feature point and a second reference feature point of the relative reference tooth position on the intraoral photographic image, and calculating and obtaining the reference pixel size data according to the number of pixels between the first reference feature point and the second reference feature point" specifically includes:
determining the first reference feature point at the distal boundary of the maxillary left central incisor crown in the intraoral photographic image and the second reference feature point at the distal boundary of the maxillary right central incisor crown in the intraoral photographic image;
And calculating the number of pixels between the first reference characteristic point and the second reference characteristic point, and taking one half of the number of the pixels as the reference pixel size data.
29. The oral cavity structure information generating method according to claim 28, wherein the distance from the first reference feature point to the gingival end of the corresponding incisor crown is three times the distance from the first reference feature point to the incisal end of that crown.
30. The method for generating oral cavity structural information according to claim 27, wherein the determining at least one target tooth position corresponding to each other in the intraoral photographic image and the oral cavity three-dimensional model as a relative reference tooth position specifically comprises:
judging whether the left maxillary middle incisor tooth position and the right maxillary middle incisor tooth position in the oral photography image and the oral three-dimensional model both comprise complete crowns;
if only the left maxillary central incisor tooth position in the oral photography image and the oral three-dimensional model contains a complete dental crown, determining the left maxillary central incisor tooth position corresponding to each other in the oral photography image and the oral three-dimensional model as a relative reference tooth position;
the "determining a first reference feature point and a second reference feature point of the relative reference tooth position on the intraoral camera image, calculating and obtaining the reference pixel size data according to the number of pixels between the first reference feature point and the second reference feature point" specifically includes:
Determining the first reference feature point at the distal boundary of the maxillary left central incisor crown in the intraoral photographic image and the second reference feature point at the mesial boundary of the maxillary left central incisor crown in the intraoral photographic image;
and calculating the number of pixels between the first reference characteristic point and the second reference characteristic point, and taking the number of pixels as the reference pixel size data.
31. The method for generating oral cavity structural information according to claim 1, wherein the "reconstructing a portion representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain the oral cavity structural information" specifically includes:
determining at least two intraoral tissue feature points from the oral cavity feature data in the intraoral image data;
according to the intraoral tissue characteristic points, fitting tissue space characteristic points corresponding to the intraoral tissue characteristic points at the characteristic missing positions in the oral three-dimensional model according to the pixel size relation, and forming a part representing the oral characteristic data on the oral three-dimensional model data.
32. The oral cavity structure information generating method according to claim 31, wherein the intraoral tissue region includes a vestibular sulcus, the oral cavity feature data includes vestibular sulcus height data, and the intraoral tissue feature point includes an upper sulcus bottom feature point and a lower sulcus bottom feature point.
33. The method for generating oral cavity structure information according to claim 32, wherein the "fitting tissue space feature points corresponding to the intraoral tissue feature points following the pixel size relationship" specifically includes:
calculating the distance between the upper sulcus bottom characteristic point and the gingival margin midpoint of the corresponding tooth position to obtain an upper sulcus bottom distance parameter, and calculating to obtain a corresponding upper sulcus bottom mapping parameter according to the upper sulcus bottom distance parameter and a size mapping factor representing the size relation of the pixels;
fitting to obtain an upper space characteristic point representing the position of the upper sulcus bottom according to the reference characteristic coordinates of the target tooth position corresponding to the upper sulcus bottom characteristic point and the upper sulcus bottom mapping parameters;
calculating the distance between the lower sulcus bottom characteristic point and the gingival margin midpoint of the corresponding tooth position to obtain a lower sulcus bottom distance parameter, and calculating to obtain a corresponding lower sulcus bottom mapping parameter according to the lower sulcus bottom distance parameter and the size mapping factor;
and fitting to obtain a lower space characteristic point representing the position of the lower sulcus bottom according to the reference characteristic coordinates of the target tooth position corresponding to the lower sulcus bottom characteristic point and the lower sulcus bottom mapping parameters.
34. The method according to claim 31, wherein the intraoral tissue region includes a vestibular sulcus and/or a tooth root ridge, the oral cavity feature data includes dental arch width data, and the intraoral tissue feature points include a left-side sulcus feature point and a right-side sulcus feature point.
35. The method for generating oral cavity structural information according to claim 31, wherein the "determining at least two intraoral tissue feature points from the oral cavity feature data in the intraoral image data" specifically includes:
determining all intraoral tissue feature points in the intraoral photographic image according to the oral cavity feature data;
the forming of the part characterizing the oral cavity characteristic data on the oral cavity three-dimensional model data specifically comprises:
and fitting a tissue space distribution curve or a tissue space distribution curve according to the tissue space characteristic points, and using the tissue space distribution curve or the tissue space distribution curve as a part representing the oral cavity characteristic data on the oral cavity three-dimensional model data.
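To illustrate the curve-fitting alternative, the sketch below threads a parametric B-spline through the ordered tissue space feature points and resamples it densely for display on the model; the use of scipy.interpolate and the name fit_tissue_distribution_curve are assumptions, not requirements of the claim.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_tissue_distribution_curve(space_points_xyz, n_samples=100):
    """Sketch: fit a smooth spatial distribution curve through tissue space feature points."""
    pts = np.asarray(space_points_xyz, dtype=float)   # shape (N, 3), ordered along the tissue
    k = min(3, len(pts) - 1)                          # spline degree limited by point count
    tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], s=0, k=k)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y, z = splev(u, tck)
    return np.column_stack([x, y, z])                 # densely sampled curve points
```

A tissue space distribution surface would be fitted analogously with a bivariate fit once enough space feature points are available.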
36. An oral cavity structure information generation method, comprising:
acquiring oral cavity three-dimensional model data and corresponding intraoral image data;
extracting, from the intraoral image data, oral cavity feature data corresponding to a target intraoral tissue region in the oral cavity three-dimensional model, establishing a pixel size relation between the intraoral image data and the oral cavity three-dimensional model data, and reconstructing a part representing the oral cavity feature data on the oral cavity three-dimensional model data to obtain the oral cavity structure information.
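Read end to end, this claim amounts to a short pipeline. The outline below is a sketch under the assumption that the pixel size relation is derived from one span (for example a crown width) measurable in both the model and the image; size_mapping_factor, extract_features, reconstruct, reference_span_mm and reference_span_px are placeholder names standing in for the steps of the preceding claims.

```python
def size_mapping_factor(known_span_mm, span_px):
    """Sketch of the pixel size relation: millimetres per pixel, taken from a span
    measurable in both the 3D model (mm) and the intraoral image (pixels)."""
    return known_span_mm / span_px

def generate_oral_structure(model, image, extract_features, reconstruct):
    """Sketch of the overall method: extract feature data, establish the pixel
    size relation, and reconstruct the corresponding part on the model."""
    features = extract_features(image)                  # oral cavity feature data (2D)
    factor = size_mapping_factor(model.reference_span_mm,
                                 features.reference_span_px)
    return reconstruct(model, features, factor)         # oral cavity structure information
```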
37. An oral cavity structure information generation system, comprising a processor, a memory and a communication bus, characterized in that the processor and the memory communicate with each other through the communication bus;
the memory is configured to store an application program;
and the processor is configured to implement the steps of the oral cavity structure information generation method according to any one of claims 1 to 36 when executing the application program stored in the memory.
38. A storage medium having an application program stored thereon, wherein the application program, when executed, implements the steps of the oral cavity structure information generation method according to any one of claims 1 to 36.
39. An oral cavity instrument constructed according to oral cavity structure information generated by the oral cavity structure information generation method according to any one of claims 1 to 36.
40. The oral cavity instrument according to claim 39, wherein the oral cavity instrument is used for training orofacial muscle function and/or for treating mouth breathing.
CN202211209794.3A 2022-09-30 2022-09-30 Oral cavity structure information generation method, system, storage medium and oral cavity instrument Pending CN115565684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211209794.3A CN115565684A (en) 2022-09-30 2022-09-30 Oral cavity structure information generation method, system, storage medium and oral cavity instrument

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211209794.3A CN115565684A (en) 2022-09-30 2022-09-30 Oral cavity structure information generation method, system, storage medium and oral cavity instrument

Publications (1)

Publication Number Publication Date
CN115565684A (en) 2023-01-03

Family

ID=84742938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211209794.3A Pending CN115565684A (en) 2022-09-30 2022-09-30 Oral cavity structure information generation method, system, storage medium and oral cavity instrument

Country Status (1)

Country Link
CN (1) CN115565684A (en)

Similar Documents

Publication Publication Date Title
US11986369B2 (en) Methods and systems for determining a dental treatment difficulty in digital treatment planning
JP5671734B2 (en) Computer-aided creation of custom tooth setup using facial analysis
CN106137414B (en) Method and system for determining target dentition layout
CA3140069A1 (en) Visual presentation of gingival line generated based on 3d tooth model
US8177551B2 (en) Method and system for comprehensive evaluation of orthodontic treatment using unified workstation
US8469705B2 (en) Method and system for integrated orthodontic treatment planning using unified workstation
US8199988B2 (en) Method and apparatus for combining 3D dental scans with other 3D data sets
US10980612B2 (en) Face tracking and reproduction with post-treatment smile
US20140379356A1 (en) Method and system for integrated orthodontic treatment planning using unified workstation
US11779445B2 (en) Systems and methods for determining a bite position between teeth of a subject
WO2007063980A1 (en) Intraoral panoramic image pickup device and intraoral panoramic image pickup system
CN113827362B (en) Tooth movement evaluation method based on alveolar bone morphology under curve natural coordinate system
JP2021524789A (en) Tooth virtual editing method, system, computer equipment and storage medium
WO2021218724A1 (en) Intelligent design method for digital model for oral digital impression instrument
CN112419476A (en) Method and system for creating three-dimensional virtual image of dental patient
US20220361983A1 (en) Methods and systems for determining occlusal contacts between teeth of a subject
US20220257341A1 (en) Virtual articulation in orthodontic and dental treatment planning
KR102204990B1 (en) Method for Inter Proximal Reduction in digital orthodontic guide and digital orthodontic guide apparatus for performing the method
US11833759B1 (en) Systems and methods for making an orthodontic appliance
CN115565684A (en) Oral cavity structure information generation method, system, storage medium and oral cavity instrument
CN105411716B (en) A kind of edentulous jaw alveolar ridge intercuspal position Direct Determination
Adel et al. Quantifying maxillary anterior tooth movement in digital orthodontics: Does the choice of the superimposition software matter?
KR20200021261A (en) Dental CAD apparatus using characteristic of mirroring teeth and operating method the dental CAD apparatus
CN114343906B (en) Method and device for acquiring occlusion vertical distance, medium and electronic equipment
CN113487667B (en) Method and system for measuring palate volume of upper jaw, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination