CN115363795A - Virtual articulator structure and using method thereof, and virtual articulator - Google Patents


Info

Publication number
CN115363795A
Authority
CN
China
Prior art keywords
point set
feature point
lower jaw
upper jaw
virtual articulator
Prior art date
Legal status
Granted
Application number
CN202211127514.4A
Other languages
Chinese (zh)
Other versions
CN115363795B (en)
Inventor
欧贺国
孙靖超
刘超
郑旭
Current Assignee
Romo Technology Beijing Co ltd
Original Assignee
Romo Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Romo Technology Beijing Co ltd
Priority to CN202211127514.4A
Publication of CN115363795A
Application granted
Publication of CN115363795B
Legal status: Active


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C — DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 11/00 — Dental articulators, i.e. for simulating movement of the temporomandibular joints; Articulation forms or mouldings

Abstract

The application provides a virtual articulator construction method, a method of using the virtual articulator, and a virtual articulator. The construction method comprises the following steps: importing a first maxillary feature point set, a first mandibular feature point set, and a first motion trajectory of a first condyle and a second condyle, wherein the first maxillary feature point set and the first mandibular feature point set are determined based on the real positions of the soft/hard tissues of the oral cavity, and the first motion trajectory is determined based on the real occlusion relationship; determining a translation trajectory of the first mandibular feature point set based on the first motion trajectory; importing a first upper jaw digital model and a first lower jaw digital model, which are determined based on the real shapes of the upper jaw and the lower jaw, respectively; and binding the first upper jaw digital model to the first maxillary feature point set and the first lower jaw digital model to the first mandibular feature point set to form the virtual articulator. The technical scheme of the application can accurately simulate and reproduce the occlusal relationship of the upper and lower jaws.

Description

Virtual articulator structure, using method thereof and virtual articulator
Technical Field
The application belongs to the technical field of orthodontics, and in particular provides a construction method and a use method of a virtual articulator, and the virtual articulator itself.
Background
The dental articulator is an auxiliary tool in fields such as orthodontics and oral restoration ("articulation" here is a specialized term of stomatology referring to the occlusion, or bite, of the teeth). It can simulate and reproduce the various occlusal relationships of the upper and lower teeth and jaws, including their static positional relationships and the dynamic positional relationships and motion trajectory characteristics between the upper and lower teeth and jaws during various oral actions (such as opening and closing the mouth and lateral chewing). Using a dental articulator, a series of operations related to orthodontics and oral restoration can be performed, such as occlusal relationship analysis, motion trajectory analysis, restoration collision detection, and orthodontic effect evaluation.
When the occlusal relationship is simulated with a traditional mechanical articulator, the geometric features of the measured person's jaws and craniofacial region and the motion trajectories of certain feature points (such as the condyles) during actual occlusion are first obtained with a facebow; these geometric features and trajectories are then abstracted and transferred into the parameters of the articulator; finally, the relative positions and motion trajectories of the articulator components are adjusted using these parameters, so that the occlusal relationship is simulated and reproduced. This procedure generally takes a long time and easily causes discomfort to the measured person.
For this reason, virtual articulators based on computer interaction technology have been developed in recent years; by converting each physical component of a mechanical articulator into a three-dimensional digital model, they can simulate the various motions of the mechanical articulator. However, because their motion relationships remain consistent with those of the mechanical articulator, they can only reproduce a simplified version of the complex relative motion of the upper and lower jaws during occlusion, and such a simplified motion obviously cannot accurately simulate and reproduce the complex motion of oral tissues, which seriously affects the accuracy of the various analyses, detections, and evaluations related to the occlusal relationship.
Disclosure of Invention
In order to solve the above problems in the prior art, it is necessary to provide a virtual articulator capable of accurately simulating and reproducing complex oral movements, together with a construction method and a use method thereof, so as to improve the accuracy of the analysis of the occlusal relationship and motion trajectories of the upper and lower jaws and of the assessment of orthodontic effect during orthodontic treatment.
One aspect of the present application provides a method for constructing a virtual articulator, comprising the steps of:
s1, importing a first maxillary feature point set, a first mandibular feature point set and a first motion trajectory of a first condyle and a second condyle, wherein the first maxillary feature point set and the first mandibular feature point set are determined based on real positions of oral soft/hard tissues, and the first motion trajectory is determined based on a real occlusion relationship;
s2, determining a translation track of the first mandible feature point set based on the first motion track;
s3, importing a first upper jaw digital model and a first lower jaw digital model, wherein the first upper jaw digital model and the first lower jaw digital model are determined respectively based on the real shapes of the upper jaw and the lower jaw;
and S4, binding the first upper jaw digital model and the first upper jaw feature point set, and binding the first lower jaw digital model and the first lower jaw feature point set to form a virtual articulator.
Further, step S2 includes the steps of:
s21, locking the position relation between the first upper jaw feature point set and the first lower jaw feature point set;
s22, unifying the coordinate proportions of the first upper jaw characteristic point set, the first lower jaw characteristic point set and the first motion track;
s23, positioning the locked first upper jaw characteristic point set and the locked first lower jaw characteristic point set to a first initial position, wherein the initial position is determined based on the starting point of the first motion trail;
and S24, fixing the position of the first upper jaw feature point set, unlocking the first upper jaw feature point set and the first lower jaw feature point set, and enabling the first lower jaw feature point set to translate along the first motion track.
Further, a rotation axis of the first mandibular feature point set is a connecting line of the first condyle and the second condyle.
Further, the relative positions between the feature points in the first maxillary feature point set are unchanged; the relative position between the individual feature points in the first set of mandible feature points is unchanged.
Another aspect of the present application provides a method for using a virtual articulator, comprising the steps of:
a1: constructing a virtual articulator of a first form according to the virtual articulator construction method;
a2: simulating a true bite relationship using the virtual articulator of the first configuration.
Further, the use method of the virtual articulator further comprises the following steps:
a3: importing a second maxillary characteristic point set, a second mandibular characteristic point set and a second motion trail of the first condyle and the second condyle;
a4: determining a translational track of a second set of mandibular feature points based on the second motion track;
a5: importing a second upper jaw digital model and a second lower jaw digital model;
a6: binding the second upper jaw digital model with the second upper jaw characteristic point set, and binding the second lower jaw digital model with the second lower jaw characteristic point set to form a second-form virtual articulator;
a7: and simulating the target occlusion relation by using the virtual articulator in the second form.
Preferably, the second motion trajectory is determined based on a target bite relationship.
Preferably, the second upper jaw feature point set and the second lower jaw feature point set are determined based on the target position of the soft/hard tissues of the oral cavity; and the second upper jaw digital model and the second lower jaw digital model are determined respectively based on the target morphologies of the upper jaw and the lower jaw.
Further, step A4 includes the steps of:
a41, locking the position relation between the second upper jaw characteristic point set and the second lower jaw characteristic point set;
a42, unifying the coordinate proportions of the second upper jaw characteristic point set, the second lower jaw characteristic point set and the second motion track;
a43, positioning the locked second upper jaw characteristic point set and the second lower jaw characteristic point set to a second initial position, wherein the second initial position is determined based on the starting point of the second motion track;
and A44, fixing the position of the second upper jaw feature point set, and unlocking the position of the second upper jaw feature point set and the second lower jaw feature point set, so that the second lower jaw feature point set can perform translation along a second motion track.
In another aspect, the present application provides a virtual articulator comprising a first form and a second form;
the virtual articulator in the first form comprises a first upper jaw digital model, a first lower jaw digital model, a first upper jaw characteristic point set, a first lower jaw characteristic point set and a first motion trail, and the first form is determined according to the virtual articulator construction method;
the virtual articulator in the second form comprises a second upper jaw digital model, a second lower jaw digital model, a second upper jaw feature point set, a second lower jaw feature point set and a second motion track, and the second form is determined according to the virtual articulator using method.
The virtual articulator construction method, the use method, and the virtual articulator provided by the embodiments of the application have at least the following beneficial effects:
(1) According to the technical scheme, the motion trail of the condyles is bound with the characteristic points of the upper jaw and the lower jaw and the digital models of the upper jaw and the lower jaw, so that the complex oral motion process can be accurately simulated and reproduced, and orthodontists can be helped to intuitively and conveniently analyze various oral malformation problems;
(2) According to the technical scheme of the application, the movement of the lower jaw digital model relative to the upper jaw digital model is reproduced as a translation with a superimposed rotation, where the translation follows the motion trajectory obtained from the occlusion relationship and the rotation angle can be adjusted flexibly. The degree of mandibular opening and closing can therefore be adjusted at different positions along the motion trajectory according to the needs of diagnosis and evaluation, allowing a more comprehensive analysis of the relative position of the upper and lower jaws for different condylar positions and rotation angles, and effectively overcoming the limitation that existing virtual articulators can only simulate the motion of a real mechanical articulator;
(3) According to the technical scheme, the real occlusion relation of the oral cavity and the target occlusion relation expected by orthodontic treatment can be simulated respectively through the first form and the second form of the virtual dental articulator, and the application scene of the virtual dental articulator is greatly expanded.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for constructing a virtual articulator according to an embodiment of the present application;
FIG. 2 is a real lateral cephalometric radiograph;
FIG. 3 is a schematic illustration of a plurality of feature points calibrated according to FIG. 2;
fig. 4 is a schematic diagram that visually displays a first motion trajectory of a first condyle and a second condyle according to an embodiment of the present application;
FIG. 5 is a schematic diagram of positioning a first set of maxillary feature points and a first set of mandibular feature points to a first initial position according to an embodiment of the present application;
FIG. 6A is a schematic view of a first mandible digital model according to an embodiment of the present application translating along a first motion trajectory;
FIG. 6B is a schematic view of a first mandible digital model according to an embodiment of the present application superimposed for rotation about an axis of rotation while being translated along a first motion trajectory;
FIG. 7 is a schematic diagram of a comparison of a first motion profile and a second motion profile;
fig. 8A-8B are schematic diagrams illustrating analysis and evaluation of orthodontic objectives expected to be achieved by orthodontic treatment using a virtual articulator of a second configuration, according to embodiments of the present application.
Detailed Description
Hereinafter, the present application will be further described based on preferred embodiments with reference to the accompanying drawings.
In addition, various components on the drawings are enlarged or reduced for convenience of understanding, but this is not intended to limit the scope of the present application.
Singular references also include plural references and vice versa.
In the description of the embodiments of the present application, it should be noted that terms such as "upper", "lower", "inner", and "outer", where used, indicate orientations or positional relationships based on those shown in the drawings, or those in which the products of the embodiments are usually placed when in use. They are used only for convenience and simplicity of description, do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be construed as limiting the application. Moreover, terms such as "first" and "second" are used only to distinguish between different elements and should not be understood as indicating or implying manufacturing order or relative importance; the names used here may differ from those used in the detailed description and the claims.
The terminology used in the description is for the purpose of describing the embodiments of the application and is not intended to limit the application. It should also be noted that, unless otherwise explicitly stated or limited, the terms "disposed", "connected", and "coupled" should be interpreted broadly: a connection may, for example, be fixed, detachable, or integral; it may be mechanical; it may be direct, indirect through an intermediate medium, or internal between two elements. The specific meanings of the above terms in the present application will be understood by those skilled in the art according to the specific circumstances.
The existing virtual articulator can visually reproduce, within application programs in the orthodontic field, the structure and function of a real mechanical articulator. For example, patent 201180020930.8 discloses a computer-implemented method of using a dynamic virtual articulator to simulate the occlusion of teeth when performing computer-aided design of one or more dental restorations for a patient. In that method, the mechanical articulators sold by several companies are precisely modeled in 3D, and the individual parts are assembled according to their actual positional relationships ("assembly" is a term of computer-aided design, meaning that a plurality of 3D-modeled parts are constrained according to their actual relative positions and motion relationships), so that the virtual articulator can simulate and reproduce the actual motion process of the mechanical articulator.
However, the existing virtual articulators have difficulty simulating the occlusion process realistically, mainly for the following reasons. The three-dimensional digital models of the existing virtual articulators are generated by 3D modeling of the physical structures of mechanical articulators, and their motion relationships are likewise kept consistent with those of the mechanical articulators. Owing to its structural limitations, however, a mechanical articulator can only simplify and abstract the complex relative motion performed by the upper and lower jaws during occlusion into a few parameters, which are then transferred to its mechanical structure. For example, a mechanical articulator can only approximate the motion trajectory of the condyles during occlusion as a linear motion about its hinge axis; such a simplified motion obviously cannot accurately simulate and reproduce the complex motion of oral tissues, which seriously affects the accuracy of the analyses, detections, and evaluations related to the occlusal relationship. Meanwhile, when predicting and evaluating the orthodontic effects achieved by different orthodontic schemes, the virtual articulator is required to simulate as many different occlusion processes as possible; a virtual articulator whose form is identical to that of a real mechanical articulator obviously cannot meet this requirement, which limits its application scenarios to a certain extent.
Therefore, in order to solve the above problems of the conventional virtual articulator, a first aspect of an embodiment of the present application provides a method for constructing a virtual articulator, as shown in fig. 1, including the following steps:
s1, importing a first maxillary feature point set, a first mandibular feature point set and a first motion track of a first condyle and a second condyle, wherein the first maxillary feature point set and the first mandibular feature point set are determined based on the real positions of oral soft/hard tissues, and the first motion track is determined based on a real occlusion relationship;
s2, determining a translation track of the first mandible feature point set based on the first motion track;
s3, importing a first upper jaw digital model and a first lower jaw digital model which are determined based on a real upper jaw and a real lower jaw;
and S4, binding the first upper jaw digital model and the first upper jaw feature point set, and binding the first lower jaw digital model and the first lower jaw feature point set to form a virtual articulator.
The following describes the steps S1 to S4 in detail with reference to the drawings and specific embodiments.
Step S1 is used for importing the position information of real oral cavity characteristic points and the real motion trail information of condyles for constructing the virtual articulator.
Specifically, the real oral feature points include a first maxillary feature point set and a first mandibular feature point set, which respectively represent specific positions of the hard tissues (such as the jawbone, skull, and teeth) and soft tissues (such as the cheek skin, tongue, and other intraoral tissues) of the upper and lower jaws in the real oral cavity. Based on the positional relationships of these feature points, various oral problems (such as irregular dentition, overdevelopment or underdevelopment of the jaw, and occlusal deformities) can be measured and evaluated.
Table 1 lists the specific feature points comprised by the first set of maxillary feature points in some embodiments:
Table 1: Feature points included in the first maxillary feature point set

Symbol | Feature point name | Definition
S | Sella point | Center of the image of the sella turcica
N | Nasion | Most anterior point of the frontonasal suture on the median sagittal plane
P | Porion (ear point) | Uppermost point of the external auditory meatus
Ba | Basion | Midpoint of the anterior border of the foramen magnum
Pt | Pterygoid point | Intersection of the posterior wall of the pterygopalatine fossa and the inferior border of the foramen rotundum
Ptm | Pterygomaxillary fissure point | Lowest point of the pterygomaxillary fissure
Or | Orbitale | Lowest point of the infraorbital margin
ANS | Anterior nasal spine | Tip of the anterior nasal spine
PNS | Posterior nasal spine | Tip of the bony spine at the posterior border of the hard palate
A | Upper alveolar base point (subspinale) | Most concave point of the bony contour between the anterior nasal spine and the upper alveolar ridge point
SPr | Upper alveolar ridge point | Most anterior point of the upper alveolar margin
U1 | Upper central incisor point | Incisal edge point of the upper central incisor
G' | Forehead point (glabella) | Most anterior point of the forehead
N' | Soft tissue nasion | Point on the soft tissue profile corresponding to nasion
Prn | Pronasale (nasal tip point) | Most prominent point of the nose
Table 2 lists the specific feature points comprised by the first set of mandible feature points in some embodiments:
Symbol | Feature point name | Definition
Co | Condylion (condylar apex) | Uppermost point of the condyle
Ar | Articulare | Intersection of the inferior border of the cranial base and the posterior border of the condylar neck
Go | Gonion (mandibular angle point) | Most prominent point of the angle of the mandible
B | Lower alveolar base point (supramentale) | Most concave point of the bony contour between the lower alveolar margin and the anterior point of the chin
L1 | Lower central incisor point | Incisal edge point of the lower central incisor
Pog | Pogonion | Most prominent point of the chin
Gn | Gnathion | Lowest point on the anterior outer border of the mandibular symphysis
Me | Menton | Lowest point of the chin
Ax | Condylar pivot point | Pivot (hinge) point of the mandibular condyle
Upr | Maxillary soft tissue points | Set of points on the outer contour of the soft tissue profile from the forehead to the corner of the upper lip
Lpr | Mandibular soft tissue points | Set of points on the outer contour of the soft tissue profile from the corner of the lower lip to the neck
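For illustration only, one classical way the positional relationships of the landmarks listed above are quantified is through cephalometric angles. The sketch below computes the SNA, SNB, and ANB angles from the S, N, A, and B landmarks of Tables 1 and 2, assuming 2D midsagittal coordinates; the function names and the use of these particular angles are an illustrative assumption, not a procedure prescribed by this application.

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` formed by the rays towards p1 and p2."""
    u, v = p1 - vertex, p2 - vertex
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def sna_snb_anb(points):
    """Classical cephalometric angles relating the jaws to the cranial base.

    `points` maps landmark labels (S, N, A, B of Tables 1 and 2) to 2D
    midsagittal coordinates; ANB = SNA - SNB describes the sagittal
    relationship between the upper and lower jaws.
    """
    sna = angle_at(points["N"], points["S"], points["A"])
    snb = angle_at(points["N"], points["S"], points["B"])
    return sna, snb, sna - snb
```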
In some specific embodiments of the present application, importing the first maxillary feature point set and the first mandibular feature point set may be implemented by reading the coordinate information corresponding to each feature point and displaying it visually. Specifically, in some optional embodiments, the coordinate information of each feature point may be obtained from a real lateral cephalometric radiograph. Fig. 2 is a real lateral cephalogram taken by X-ray of a real patient in the bite state, from which the morphology and contour features of each oral soft/hard tissue can be seen; Fig. 3 shows the result of tracing the cephalogram of Fig. 2 and calibrating a plurality of feature points, whose specific meanings have been described above. In actual orthodontic measurement and evaluation, each feature point may be regarded as lying on the cranial median sagittal plane (an imaginary plane that bisects the skull into two mirror-symmetric left and right halves), and the two-dimensional coordinate values of each feature point on this plane can be obtained using the scale information shown in Figs. 2 and 3.
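By way of illustration only, the following minimal sketch shows one way the imported feature points could be represented in software; the CSV file names, the (label, x, y) column layout, and the millimetre unit are assumptions made for the example and are not specified by this application.

```python
import csv
import numpy as np

def load_feature_points(path):
    """Read (label, x, y) rows into a dict of 2D coordinates.

    The feature points are taken to lie on the cranial median sagittal
    plane, so two coordinates per point suffice at this stage; units are
    assumed to be millimetres derived from the cephalogram scale.
    """
    points = {}
    with open(path, newline="") as f:
        for label, x, y in csv.reader(f):
            points[label.strip()] = np.array([float(x), float(y)])
    return points

# Hypothetical exports of the traced lateral cephalogram of Figs. 2-3:
first_maxillary = load_feature_points("first_maxillary_points.csv")    # S, N, ANS, U1, ...
first_mandibular = load_feature_points("first_mandibular_points.csv")  # Co, Go, L1, Me, ...
```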
In addition to the first maxillary feature point set and the first mandibular feature point set, a first motion trajectory of the first condyle and the second condyle is also introduced in step S1.
The condyles, also called condylar processes, are the rounded articular heads at the posterior ends of the mandible; there is one pair of them (referred to in this application as the first condyle and the second condyle for ease of distinction). The two condyles sit in the glenoid fossae of the left and right temporomandibular joints, and when the mandible performs various movements relative to the maxilla, such as opening and closing, chewing, and lateral excursion, it translates and rotates about the condyles under the constraint of the real occlusal relationship between the upper and lower jaws. Obviously, the real occlusal relationship between the upper and lower jaws can therefore be simulated and reproduced through the motion trajectories of the condyles during the various occlusal actions.
Specifically, in an embodiment of the present application, the first motion trajectories of the first condyle and the second condyle are determined based on the real occlusion relationship. At present, there are many devices for acquiring the real motion trajectories of the condyles, known as mandibular motion measurement systems or digital electronic facebows; they track and record the motion trajectories of feature points such as the condyles during various oral movements using ultrasonic, optoelectronic, and other sensors mounted at different parts of the oral cavity. These techniques for acquiring condylar motion trajectories are known to those skilled in the art.
Table 3 shows a specific data format of the first motion trajectories of the first condyle and the second condyle imported in step S1; each row gives the three-dimensional coordinates of one of the successive positions through which a condyle passes during motion, as recorded by the digital electronic facebow (the XY plane being parallel to the cranial median sagittal plane).
TABLE 3 first motion trajectory data of the first and second condyles
(The coordinate rows of Table 3 appear only as images in the original publication; each row lists the x, y, z coordinates of one recorded condylar position.)
Similar to the first maxillary feature point set and the first mandibular feature point set, in some embodiments of the present application the imported first motion trajectories of the first condyle and the second condyle can also be displayed visually. Fig. 4 shows a schematic diagram of such a visual display in a specific embodiment; the motion trajectories shown in the figure can be obtained by fitting the coordinate points in Table 3.
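As an illustrative sketch only, the curve fitting mentioned above could be as simple as resampling the recorded coordinate rows evenly along arc length before display; the function below assumes each trajectory is an (m, 3) array of coordinates in recording order, which is an assumption about the facebow export rather than a requirement of this application.

```python
import numpy as np

def resample_trajectory(raw_points, n=200):
    """Resample a recorded condylar trajectory to n points spaced evenly
    along its arc length, giving a smooth polyline for visual display."""
    pts = np.asarray(raw_points, dtype=float)             # shape (m, 3)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)    # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, pts[:, k]) for k in range(3)])

# first_condyle_raw / second_condyle_raw would hold the coordinate rows of
# Table 3 as read from the digital facebow export (format assumed), e.g.:
# left_track = resample_trajectory(first_condyle_raw)
# right_track = resample_trajectory(second_condyle_raw)
```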
The distance between the first condyle and the second condyle in Fig. 4 is constant at all times and represents the actual distance between the two condyles of the real mandible. In some embodiments this distance may be measured by a CT scan or a similar examination of the mandible; in other embodiments it may be determined from large amounts of statistical data or from empirical values. The line connecting the first condyle and the second condyle forms a rotation axis, and when the mandible performs the various actual occlusal actions, its movement can be decomposed into a translation along the first motion trajectory (specifically, the first condyle and the second condyle each translate along their respective first motion trajectories) and a rotation about the rotation axis.
Step S2 is used to determine the translation trajectory of the first mandibular feature point set. In some embodiments, step S2 further comprises the following steps:
s21, locking the position relation between the first upper jaw feature point set and the first lower jaw feature point set;
s22, unifying the coordinate proportions of the first upper jaw feature point set, the first lower jaw feature point set and the first motion trail;
s23, positioning the locked first upper jaw characteristic point set and the locked first lower jaw characteristic point set to a first initial position, wherein the initial position is determined based on the starting point of the first motion trail;
and S24, fixing the position of the first upper jaw feature point set, unlocking the first upper jaw feature point set and the first lower jaw feature point set, and enabling the first lower jaw feature point set to translate along the first motion track.
Specifically, when the upper and lower jaws perform various occlusion motions, the relative positions of the respective feature points in the first upper jaw feature point set are kept unchanged, and the relative positions of the respective feature points in the first lower jaw feature point set are also kept unchanged, but the first upper jaw feature point set and the first lower jaw feature point set perform relative translation and relative rotation as a whole. In the actual occlusion relationship simulation and reproduction, the first upper jaw feature point set can be generally set to be in a static state, and only the first lower jaw feature point set is controlled to perform corresponding translation and rotation.
In the above steps, the relative position of the first maxillary feature point set and the first mandibular feature point set is first locked and their coordinate scale is unified with that of the first motion trajectory; the two point sets are then moved as a whole to the first initial position.
Specifically, the first initial position represents the position at the starting point of the various occlusal movements; in general, it may be the position at which the real upper and lower jaws are in the closed bite state. In some embodiments, the locked first maxillary and first mandibular feature point sets may be positioned to the first initial position as follows: the two point sets are moved as a whole so that their plane is perpendicular to the line connecting the first condyle and the second condyle at the starting points of the first motion trajectories, with the foot of the perpendicular at the midpoint of that line; the condylar point in the first mandibular feature point set is adjusted to fall on this foot; and the two point sets are then rotated as a whole to be consistent with the horizontal direction of the first motion trajectory. Fig. 5 shows, in a specific embodiment, the first maxillary and first mandibular feature point sets positioned at the first initial position (as described above, every feature point in both sets lies in the median sagittal plane), with the first condyle and the second condyle located at the starting points of their respective first motion trajectories.
After the locked first maxillary feature point set and first mandibular feature point set have been positioned at the first initial position, the first maxillary feature point set may be fixed (i.e., kept still), the two point sets unlocked from each other, and the first mandibular feature point set translated along the first motion trajectory. Specifically, in some embodiments, the translation of the first mandibular feature point set along the first motion trajectory may be achieved by keeping the plane in which the set lies always perpendicular to the line connecting the first condyle and the second condyle (with the foot of the perpendicular at the midpoint of that line) and keeping its condylar point always on that foot.
Through steps S21 to S24, the first upper jaw feature point set is fixed and the first lower jaw feature point set translates along the first motion trajectory.
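The following sketch illustrates one possible software realization of steps S21 to S24 under simplifying assumptions: the feature points are 2D midsagittal coordinates keyed by the labels of Tables 1 and 2, the condylar point is the "Ax" landmark, and the trajectories are the resampled arrays from the earlier sketch. None of the helper names, the assumed vertical direction, or the omission of the plane-orientation constraint is prescribed by this application.

```python
import numpy as np

def unify_scale(points_2d, mm_per_unit):
    """S22: bring the feature-point coordinates to the same millimetre
    scale as the condylar trajectories (scale factor from the cephalogram)."""
    return {k: p * mm_per_unit for k, p in points_2d.items()}

def initial_frame(left_start, right_start):
    """S23: build a frame at the trajectory starting points - origin at the
    midpoint of the condyle line (the foot of the perpendicular), with two
    in-plane axes spanning the plane perpendicular to that line."""
    origin = 0.5 * (left_start + right_start)
    axis = right_start - left_start
    axis /= np.linalg.norm(axis)
    up = np.array([0.0, 0.0, 1.0])                  # assumed vertical of the record
    u = np.cross(up, axis); u /= np.linalg.norm(u)  # horizontal in-plane direction
    v = np.cross(axis, u)                           # vertical in-plane direction
    return origin, u, v

def embed_at_initial_position(points_2d, anchor_2d, origin, u, v):
    """S23 (continued): place a locked 2D point set into 3D so that the given
    anchor (the mandibular condylar point Ax) lands on the foot of the
    perpendicular; using the same anchor for the maxillary set preserves the
    locked relative position of the two sets (S21)."""
    return {k: origin + (p - anchor_2d)[0] * u + (p - anchor_2d)[1] * v
            for k, p in points_2d.items()}

def translate_mandible(points_3d, left_track, right_track, i):
    """S24: translate the (unlocked) mandibular set rigidly so that its
    condylar point follows the midpoint of the condyle line at sample i;
    keeping the set's plane perpendicular to the condyle line is omitted
    here for brevity."""
    shift = 0.5 * (left_track[i] + right_track[i]) - 0.5 * (left_track[0] + right_track[0])
    return {k: p + shift for k, p in points_3d.items()}

# anchor = first_mandibular["Ax"]; both locked sets use the same anchor, e.g.:
# origin, u, v = initial_frame(left_track[0], right_track[0])
# upper_3d = embed_at_initial_position(first_maxillary, anchor, origin, u, v)
# lower_3d = embed_at_initial_position(first_mandibular, anchor, origin, u, v)
```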
Further, in an embodiment of the present application, the first mandibular feature point set can also be rotated, the rotation axis being the line connecting the first condyle and the second condyle. In some embodiments of the present application, the angle through which the first mandibular feature point set rotates may be set from data measured by the digital electronic facebow, so that the real occlusion process measured by the facebow can be reproduced accurately; in other embodiments of the present application, the rotation angle can be adjusted by the user within biomechanical constraints, so that the degree of mandibular opening and closing can be adjusted at different positions along the motion trajectory according to the needs of diagnosis and evaluation, allowing a more comprehensive analysis of the relative position between the upper and lower jaws at different condylar positions and rotation angles.
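As an illustrative sketch of the superimposed rotation, Rodrigues' rotation formula can be applied about the line through the two current condyle positions; the 0 to 35 degree clamp in the usage comment is an assumed biomechanical range, not a value given in this application.

```python
import numpy as np

def rotate_about_condyle_axis(points_3d, condyle_a, condyle_b, angle_deg):
    """Rotate a set of 3D feature points about the axis through the two
    current condyle positions by angle_deg (Rodrigues' formula)."""
    k = condyle_b - condyle_a
    k = k / np.linalg.norm(k)
    theta = np.radians(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    rotated = {}
    for name, p in points_3d.items():
        v = p - condyle_a
        rotated[name] = condyle_a + v * c + np.cross(k, v) * s + k * np.dot(k, v) * (1.0 - c)
    return rotated

# The angle may come from the facebow record for the current trajectory
# sample, or be set interactively and clamped to an assumed safe range:
# angle = float(np.clip(user_angle, 0.0, 35.0))
# lower_open = rotate_about_condyle_axis(lower_3d, left_track[i], right_track[i], angle)
```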
Step S3 is used to further import a first upper jaw digital model and a first lower jaw digital model, wherein the first upper jaw digital model and the first lower jaw digital model are determined based on the real morphology of the upper jaw and the lower jaw, respectively.
At present, there are various methods for obtaining the morphology of the jaws and building a three-dimensional digital model; for example, the soft and hard tissues of the oral cavity can be scanned by CBCT and the three-dimensional jaw digital model built from the scanning result. Depending on the specific requirements of different application scenarios, the digital jaw model can include hard tissue parts such as the teeth and jawbone as well as soft tissue parts such as the gingiva and periodontal ligament. These techniques for creating corresponding digital models based on the true morphology of the upper and lower jaws are well known to those skilled in the art.
After the first upper jaw digital model and the first lower jaw digital model are imported, they are bound in step S4 to the first upper jaw feature point set and the first lower jaw feature point set, respectively, forming the virtual articulator. Specifically, a doctor or an experienced operator may select and mark, on the jaw digital model, points corresponding to at least three non-collinear feature points in the feature point set; the feature points and the marked points are then matched by a matching algorithm known to those skilled in the art, thereby binding the jaw digital model to the feature point set. After the binding is completed, translation and rotation of the first lower jaw digital model relative to the first upper jaw digital model can be driven synchronously by controlling the translation of the first lower jaw feature point set along the first motion trajectory and its rotation about the rotation axis, which completes the construction of the virtual articulator. Fig. 6A shows, in one specific embodiment, the first lower jaw digital model and the first lower jaw feature point set translating along the first motion trajectory, and Fig. 6B shows them translating along the first motion trajectory while a rotation about the rotation axis is superimposed.
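The application leaves the matching algorithm to the skilled person; one common choice is a least-squares rigid fit (the Kabsch algorithm) over the at-least-three non-collinear marked points. The sketch below assumes the marked model points and the feature points are given in corresponding order and in the same millimetre units; it is an illustrative possibility, not the method claimed here.

```python
import numpy as np

def rigid_fit(marked_on_model, target_feature_points):
    """Least-squares rigid transform (Kabsch) mapping points marked on the
    jaw digital model onto the corresponding feature points; at least three
    non-collinear correspondences are required."""
    P = np.asarray(marked_on_model, dtype=float)        # (n, 3) points on the mesh
    Q = np.asarray(target_feature_points, dtype=float)  # (n, 3) feature points
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t  # every model vertex v is then mapped to R @ v + t

# Once bound, every translation/rotation applied to the mandibular feature
# point set is applied as the same rigid motion to the mandibular mesh, so
# the model and the points stay synchronized during the simulation.
```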
Another aspect of the embodiments of the present application provides a method for using a virtual articulator, including the following steps:
a1: constructing a virtual articulator of a first form according to the virtual articulator construction method;
a2: and simulating a real occlusion relationship by using the virtual articulator of the first form.
In step A1, a virtual articulator is constructed using the virtual articulator construction method described above, and this virtual articulator has the first form. As is apparent from the foregoing analysis, the "first form" represents the real state of the various tissue structures of the patient's oral cavity and the real occlusal relationship of the upper and lower jaws. Accordingly, the virtual articulator of the first form can be used in step A2 to simulate the real occlusal relationship, for example by translating the first mandibular digital model and the first mandibular feature point set bound to it along the first motion trajectory while rotating the first mandibular digital model about the line connecting the first condyle and the second condyle, thereby simulating the patient's real occlusal relationship and allowing further analysis of the patient's various oral malformation problems.
After the real occlusal relationship has been analyzed and evaluated with the virtual articulator of the first form, the doctor can diagnose the problems present in the oral cavity and give a corresponding orthodontic prescription. A typical orthodontic prescription may include a determination of the severity of problems such as misaligned dentition, abnormal dental arch development, and abnormal occlusion, the orthodontic treatment required, and the treatment targets to be achieved. For example, when the patient's occlusal deformity needs to be treated, the treatment target may be to adjust the patient's real occlusal relationship to a target occlusal relationship; as another example, when orthodontic treatment needs to be performed on tissues in the patient's oral cavity such as the teeth and jawbone, the treatment target may be a target morphology of the upper and lower jaws, in which case the relative positions of the maxillary and mandibular feature points will correspondingly move to target positions.
Displaying the above treatment targets visually obviously makes it more convenient for the doctor to check and evaluate whether the expected effect can be achieved. For this reason, in an embodiment of the present application, the above method of using the virtual articulator further includes the following steps:
a3: importing a second maxillary feature point set, a second mandibular feature point set and a second motion trail of the first condyle and the second condyle;
a4: determining a translational track of a second set of mandibular feature points based on the second motion track;
a5: importing a second upper jaw digital model and a second lower jaw digital model;
a6: binding the second upper jaw digital model with the second upper jaw characteristic point set, and binding the second lower jaw digital model with the second lower jaw characteristic point set to form a second-form virtual articulator;
a7: and simulating the target occlusion relation by using the virtual articulator of the second form.
Specifically, the second motion profile is determined based on the target bite relationship. The second upper jaw characteristic point set and the second lower jaw characteristic point set are determined based on the target position of the soft/hard tissue of the oral cavity; and the second upper jaw digital model and the second lower jaw digital model are determined respectively based on the target forms of the upper jaw and the lower jaw.
Through the above steps, a second maxillary feature point set, a second mandibular feature point set, a second maxillary digital model, and a second mandibular digital model can be generated from the orthodontic targets specified by the orthodontic prescription, and a second motion trajectory of the first condyle and the second condyle can be determined from the target occlusal relationship; the virtual articulator of the second form can then be constructed through steps similar to steps S1 to S4.
Further, step A4 includes the steps of:
a41, locking the position relation between the second upper jaw characteristic point set and the second lower jaw characteristic point set;
a42, unifying the coordinate proportions of the second upper jaw characteristic point set, the second lower jaw characteristic point set and the second motion trail;
a43, positioning the locked second upper jaw characteristic point set and the second lower jaw characteristic point set to a second initial position, wherein the second initial position is determined based on the starting point of the second motion track;
and A44, fixing the position of the second upper jaw feature point set, and unlocking the position of the second upper jaw feature point set and the second lower jaw feature point set, so that the second lower jaw feature point set can translate along the second motion track.
The steps for constructing the virtual articulator of the second form are similar to those for constructing the virtual articulator of the first form and are not repeated here.
Fig. 7 shows a comparison of the first motion trajectory and the second motion trajectory, and Figs. 8A and 8B are schematic diagrams illustrating the analysis and evaluation, using the virtual articulator of the second form, of the orthodontic targets expected to be achieved by orthodontic treatment. The motion trajectory, feature point sets, and jaw models in Figs. 8A and 8B correspond to those in Figs. 6A and 6B, but are now the second motion trajectory, the second maxillary/mandibular feature point sets, and the second maxillary/mandibular models. Based on this analysis and evaluation of the orthodontic target, the doctor may further adjust it; for example, if the upper and lower dentitions collide while the second mandibular digital model translates and rotates along the second motion trajectory, or if the occlusal relationship is not ideal, the doctor may perform a new tooth arrangement to generate a new second maxillary digital model and/or second mandibular digital model, or readjust the target occlusal relationship to generate a new second motion trajectory. It follows that there can be multiple virtual articulators of the second form, each corresponding to a different orthodontic target.
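As a very coarse illustrative check of the dentition collisions mentioned above, the minimum distance between sampled vertices of the two dentition meshes can be monitored along the second motion trajectory; a production system would use proper mesh-to-mesh collision detection, and the tolerance in the usage comment is an assumed value rather than one given in this application.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_gap(upper_vertices, lower_vertices):
    """Smallest distance between sampled vertices of the upper and lower
    dentition meshes; a value near zero flags likely contact or
    interpenetration at the current pose."""
    tree = cKDTree(np.asarray(upper_vertices, dtype=float))
    d, _ = tree.query(np.asarray(lower_vertices, dtype=float))
    return float(d.min())

# Evaluate the gap at every sample of the second motion trajectory; samples
# whose gap falls below an assumed tolerance (e.g. 0.1 mm) mark poses where
# the target occlusion may need a new tooth arrangement or trajectory, e.g.:
# flagged = [i for i, pose in enumerate(lower_poses) if min_gap(upper_v, pose) < 0.1]
```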
Through the virtual articulator of the second form, orthodontists can evaluate orthodontic targets more intuitively and flexibly and correct unsatisfactory targets in time, ensuring the best orthodontic effect.
Yet another aspect of the present application provides a virtual articulator comprising a first configuration and a second configuration;
the virtual articulator in the first form comprises a first upper jaw digital model, a first lower jaw digital model, a first upper jaw characteristic point set, a first lower jaw characteristic point set and a first motion trail, and the first form is determined according to the virtual articulator construction method;
the virtual articulator in the second form comprises a second upper jaw digital model, a second lower jaw digital model, a second upper jaw feature point set, a second lower jaw feature point set and a second motion track, and the second form is determined according to the virtual articulator using method.
The detailed generation methods of the virtual articulator in the first form and the second form, and the specific implementations of the components they comprise, have been described above and are not repeated here.
While the present invention has been described in detail and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope thereof as defined in the appended claims.

Claims (10)

1. A virtual articulator construction method is characterized by comprising the following steps:
s1, importing a first maxillary feature point set, a first mandibular feature point set and a first motion track of a first condyle and a second condyle, wherein the first maxillary feature point set and the first mandibular feature point set are determined based on the real positions of oral soft/hard tissues, and the first motion track is determined based on a real occlusion relationship;
s2, determining a translation track of the first mandible feature point set based on the first motion track;
s3, importing a first upper jaw digital model and a first lower jaw digital model which are determined based on the real shapes of the upper jaw and the lower jaw respectively;
and S4, binding the first upper jaw digital model and the first upper jaw feature point set, and binding the first lower jaw digital model and the first lower jaw feature point set to form the virtual articulator.
2. The virtual articulator construction method of claim 1, characterized in that step S2 further comprises the steps of:
s21, locking the position relation between the first upper jaw feature point set and the first lower jaw feature point set;
s22, unifying the coordinate proportions of the first upper jaw feature point set, the first lower jaw feature point set and the first motion trail;
s23, positioning the locked first upper jaw feature point set and first lower jaw feature point set to a first initial position, wherein the initial position is determined based on the starting point of the first motion track;
and S24, fixing the position of the first upper jaw feature point set, unlocking the first upper jaw feature point set and the first lower jaw feature point set, and enabling the first lower jaw feature point set to translate along the first motion track.
3. The virtual articulator construction method of claim 2, wherein:
the rotating shaft of the first mandibular feature point set is the connecting line of the first condyle and the second condyle.
4. The virtual articulator construction method of claim 3, wherein:
the relative positions of all the feature points in the first maxillary feature point set are unchanged;
the relative position between the individual feature points in the first set of mandible feature points is unchanged.
5. A use method of a virtual articulator is characterized by comprising the following steps:
a1: constructing a virtual articulator of a first configuration according to a virtual articulator construction method as defined in any one of claims 1 to 4;
a2: and simulating a real occlusion relationship by using the virtual articulator of the first form.
6. The method of using the virtual articulator as defined in claim 5, further comprising the steps of:
a3: importing a second maxillary feature point set, a second mandibular feature point set and a second motion trail of the first condyle and the second condyle;
a4: determining a translational track of a second set of mandibular feature points based on the second motion track;
a5: importing a second upper jaw digital model and a second lower jaw digital model;
a6: binding the second upper jaw digital model with the second upper jaw characteristic point set, and binding the second lower jaw digital model with the second lower jaw characteristic point set to form a second-form virtual articulator;
a7: and simulating the target occlusion relation by using the virtual articulator of the second form.
7. The method of using a virtual articulator as defined in claim 6, wherein:
the second motion profile is determined based on the target bite relationship.
8. The method of using a virtual articulator as defined in claim 6, wherein:
the second upper jaw characteristic point set and the second lower jaw characteristic point set are determined based on the target position of the soft/hard tissue of the oral cavity;
and the second upper jaw digital model and the second lower jaw digital model are determined respectively based on the target morphologies of the upper jaw and the lower jaw.
9. The method for using the virtual articulator as claimed in claim 6, wherein step A4 further comprises the steps of:
a41, locking the position relation between the second upper jaw feature point set and the second lower jaw feature point set;
a42, unifying the coordinate proportions of the second upper jaw characteristic point set, the second lower jaw characteristic point set and the second motion trail;
a43, positioning the locked second upper jaw characteristic point set and the second lower jaw characteristic point set to a second initial position, wherein the second initial position is determined based on the starting point of the second motion track;
and A44, fixing the position of the second upper jaw feature point set, and unlocking the position of the second upper jaw feature point set and the second lower jaw feature point set, so that the second lower jaw feature point set can perform translation along a second motion track.
10. A virtual articulator, characterized by:
comprises a first form and a second form;
the virtual articulator in the first form comprises a first upper jaw digital model, a first lower jaw digital model, a first upper jaw feature point set, a first lower jaw feature point set and a first motion trajectory, and the first form is determined according to the virtual articulator construction method of any one of claims 1-4;
the virtual articulator in the second configuration comprises a second maxillary digital model, a second mandibular digital model, a second maxillary feature point set, a second mandibular feature point set, and a second motion trajectory, and the second configuration is determined according to the virtual articulator use method of any one of claims 6-9.
CN202211127514.4A 2022-09-16 2022-09-16 Virtual dental articulator structure and use method thereof, and virtual dental articulator Active CN115363795B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211127514.4A CN115363795B (en) 2022-09-16 2022-09-16 Virtual dental articulator structure and use method thereof, and virtual dental articulator


Publications (2)

Publication Number Publication Date
CN115363795A true CN115363795A (en) 2022-11-22
CN115363795B CN115363795B (en) 2024-03-01

Family

ID=84072028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211127514.4A Active CN115363795B (en) 2022-09-16 2022-09-16 Virtual dental articulator structure and use method thereof, and virtual dental articulator

Country Status (1)

Country Link
CN (1) CN115363795B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102741880A (en) * 2009-08-21 2012-10-17 阿莱恩技术有限公司 Digital dental modeling
DE102012104373A1 (en) * 2012-05-21 2013-12-05 Albert Mehl Method for simulating motion of e.g. upper jaw, for optimizing tooth restorations of male patient in clinic, involves directly determining movement parameters of upper jaw and/or lower jaw from measurement records of upper and lower jaws
CN102715965A (en) * 2012-06-25 2012-10-10 电子科技大学 Dental jaw movement locus recording device and dental jaw relationship transferring method
PL422015A1 (en) * 2017-06-24 2019-01-02 Walerzak Konrad Nzoz Centrum Leczenia Wad Zgryzu Method for registering movements and geometry of the mandibular joint
KR20220045909A (en) * 2020-10-06 2022-04-13 이우형 A system that can analyze the arch shape and occlusion pattern based on big data cloud that classifies the arch shape and occlusion pattern of the maxillary and mandibular dentition, and the maxillary and mandibular dentition can be merged into the anatomical position
CN114948287A (en) * 2022-05-10 2022-08-30 上海爱乐慕健康科技有限公司 Occlusion induction appliance design and manufacturing method and occlusion induction appliance

Also Published As

Publication number Publication date
CN115363795B (en) 2024-03-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant