CN109907827B - Operation navigation system for mandibular angle osteotomy - Google Patents

Operation navigation system for mandibular angle osteotomy

Info

Publication number
CN109907827B
CN109907827B CN201910305289.0A
Authority
CN
China
Prior art keywords
osteotomy
preoperative
prediction model
patient
navigation system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910305289.0A
Other languages
Chinese (zh)
Other versions
CN109907827A (en)
Inventor
薛红宇
张颂
蔡辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Third Hospital Peking University Third Clinical Medical College
Original Assignee
Peking University Third Hospital Peking University Third Clinical Medical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Third Hospital Peking University Third Clinical Medical College filed Critical Peking University Third Hospital Peking University Third Clinical Medical College
Priority to CN201910305289.0A
Publication of CN109907827A
Application granted
Publication of CN109907827B
Legal status: Active
Anticipated expiration

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses a surgical navigation system for mandibular angle osteotomy, which comprises the following steps: S1, establishing an osteotomy face prediction model learning version from the data of previous mandibular angle osteotomy patients on the basis of a multitask convolutional neural network, training the learning version with the data of newly enrolled mandibular angle osteotomy patients to obtain a stable osteotomy face prediction model, and then superimposing a risk area data set to obtain the osteotomy face prediction model stable version; S2, inputting the data of a patient scheduled for mandibular angle osteotomy into the osteotomy face prediction model and predicting the range of the postoperative 3D facial change; S3, drawing and projecting onto the lens screen a perspective three-dimensional image fitted to the real-time surgical-field image, according to the patient's maximum osteotomy amount and preoperative CT; and S4, continuously superimposing functions onto the osteotomy face prediction model, testing and improving it, and perfecting the surgical navigation system. The system predicts the postoperative effect by building a model, renders it in real time during the operation, and improves surgical precision.

Description

Operation navigation system for mandibular angle osteotomy
Technical Field
The invention relates to the technical field of surgical navigation, in particular to a surgical navigation system for mandibular angle osteotomy.
Background
The change in a patient's facial appearance after mandibular angle osteotomy depends not only on the amount of mandibular bone removed but also, in part, on changes in soft tissue volume caused by changes in local soft tissue tension. The bone removal amount therefore cannot be obtained simply by subtracting the predicted postoperative appearance from the preoperative appearance, and 3D surgical-effect design based on three-dimensional CT and facial 3D scanning systems has so far been unable to accurately predict the bone removal amount or the shape and position of the mandibular angle osteotomy face needed to achieve the predicted postoperative result.
An existing surgical navigation system for mandibular angle osteotomy performs spiral CT scanning of the patient, reconstructs and processes the image data, designs the osteotomy line from prior clinical experience, and marks it on a three-dimensional model of the mandible; a robot-assisted surgical (RAS) system then divides the line into a series of drill points and drills along the osteotomy line on the bone surface to complete the osteotomy. To register the mandible, a marking module is either attached through holes drilled in the mandibular angle region to form a marker complex, or connected to a dental splint customized to the patient's lower dentition, which the patient wears during surgery so that the relative position of the mandible can be determined. Intraoperatively, the system uses augmented reality to identify the marking module, judge the relative position of the mandible, and locate the osteotomy line for the RAS. In clinical use this navigation system shows a small average error, ensures surgical safety, and helps surgeons accumulate experience. It nevertheless has three disadvantages. (1) Additional fixation of marker points is required: when the patient wears a dental splint connected to the marking module, the connection is non-rigid, so there is a considerable risk of relative displacement between the marking module and the mandible during the operation, introducing errors into the osteotomy-line judgment and reducing surgical safety; when the marking module is instead fixed to the mandibular angle region by drilling, the stability of its position relative to the mandible improves greatly, but the intraoral approach to the mandibular angle is narrow and deep, fixing the module is difficult, and the module is large relative to the surgical field, so the cases in which it can be properly placed are severely limited. (2) The osteotomy is performed by mechanical intermittent drilling, and the channels drilled through bone are straight lines, so the method is unsuitable for cases that also require removal of the mandibular outer plate, and the osteotomy plane it produces is flat rather than curved, greatly limiting the system's range of application. (3) The system does not account for soft tissue changes after mandibular angle osteotomy; the osteotomy line is designed entirely from the operator's experience without quantitative indices, so although the system improves surgical safety, it offers no advantage in accurately predicting the postoperative result or improving patient satisfaction.
There is therefore an urgent need for a surgical navigation system for mandibular angle osteotomy based on artificial intelligence and augmented reality technology.
Disclosure of Invention
The invention aims to provide a surgical navigation system for mandibular angle osteotomy that is obtained by establishing, training, and refining an osteotomy face prediction model, thereby providing technical support for the operation, predicting the postoperative effect, improving surgical precision, and reducing surgical risk.
The above object of the present invention is achieved by the following technical solutions:
a surgical navigation system for mandibular angle osteotomy, comprising the steps of:
s1, establishing a osteotomy face prediction model learning version according to related data of a previous mandibular angle osteotomy operation patient based on a multitask convolutional neural network, training the osteotomy face prediction model learning version by using the related data of a newly-assembled mandibular angle osteotomy operation patient to obtain a stable osteotomy face prediction model, and then overlapping a dangerous area data set to obtain an osteotomy face prediction model stable version;
s2, inputting the relevant information of the patient with mandibular angle osteotomy into the osteotomy face prediction model, predicting the resection line and the osteotomy face, and predicting the postoperative face effect;
s3, drawing and projecting a perspective three-dimensional image fitting with a real-time surgical field image on a lens screen according to the maximum osteotomy amount of the mandibular angle osteotomy patient and the preoperative CT;
and S4, continuously performing function superposition on the osteotomy face prediction model, continuously testing and improving, and perfecting the surgical navigation system.
The invention is further configured to: in step S1, the data of previous mandibular angle osteotomy patients include a preoperative CT image, a postoperative CT image, a preoperative facial photograph, and a postoperative facial photograph; the preoperative and postoperative CT images are aligned at pixel level and compared, the resulting difference being the first final osteotomy face, which is quantified to obtain first final osteotomy face parameters; from the preoperative CT image, a first risk area 1 along the nerve courses and a first risk area 2 along the artery and vein courses are obtained, and both are quantified to obtain the parameters of first risk area 1 and first risk area 2.
The invention is further configured to: in step S1, a training set is formed from the final osteotomy face parameter data set, the preoperative facial photograph data set, the postoperative facial photograph data set, and the preoperative CT image data set of previous patients at different viewing angles, and is input into a multitask convolutional neural network for training to obtain the osteotomy face prediction model learning version, i.e., version 1.0 of the surgical navigation system.
The invention is further configured to: a test set is formed from the second final osteotomy face parameter data set, the preoperative facial photograph data set, and the postoperative facial photograph data set of newly enrolled patients at different viewing angles, and the osteotomy face prediction model learning version is tested with it to obtain a stable osteotomy face prediction model.
The invention is further configured to: the risk area data set includes the first risk area data of previous patients and the second risk area data of newly enrolled patients.
The invention is further configured to: in step S2, the data of the patient scheduled for mandibular angle osteotomy include a preoperative CT image, a preoperative facial photograph, and a predicted postoperative facial photograph; from the preoperative CT image, the patient's lower dentition rivet points, a third risk area 1 covering the inferior alveolar nerve and mental nerve courses, and a third risk area 2 covering the facial artery and posterior facial vein courses are obtained; and, avoiding third risk area 1 and third risk area 2, the patient's maximum bone removal range is obtained.
The invention is further configured to: a prediction of the maximum 3D change of the postoperative face is obtained from the maximum bone removal range, the preoperative facial photograph, and the preoperative CT image.
The invention is further configured to: in step S3, tooth rivet points are obtained from the preoperative CT; the preoperative CT, the preoperative photograph, the predicted postoperative photograph data set, and the maximum bone removal range are input into the osteotomy face prediction model stable version to predict the resection line and bone removal face; then, for different viewing angles and in combination with the AR device, the resection line, bone removal face, third risk area 1, and third risk area 2 are rendered in the surgical field in real time, and the visual three-dimensional model is superimposed on the actual surgical field by the AR system and projected onto the lens screen, completing version 3.0 of the surgical navigation system.
The invention is further configured to: in step S4, the 3D effect prediction of the postoperative face is superposed on the 3.0 version of the operation navigation system, the improvement system is perfected, and the 4.0 version of the operation navigation system is completed;
the invention is further configured to: the 4.0 version of the operation navigation system is debugged repeatedly and applied to clinical practical work, and is further upgraded according to the needs of practical conditions, so that the stability of the system is improved, the accuracy of the postoperative effect expectation and the operation process is improved, and the 5.0 version of the operation navigation system is completed.
Compared with the prior art, the invention has the beneficial technical effects that:
1. The osteotomy face data set is constructed and analyzed from historical data, providing a data basis for research on intelligent prediction of the osteotomy face.
2. Further, by learning and modeling the nonlinear influence of soft tissue variables on the final postoperative result, accurate estimation of the osteotomy face from preoperative CT data, preoperative photographs, and the simulated postoperative effect is realized.
3. Furthermore, the system predicts the postoperative effect by building a model and renders it in real time during the operation, improving surgical precision, reducing surgical risk, shortening operating time, reducing surgical complications, and improving patient satisfaction.
4. Furthermore, combined with the AR device, real-time intraoperative warnings are given to avoid touching the risk areas and ensure intraoperative safety.
5. Furthermore, the convolutional network is applied to mandibular angle osteotomy, improving prediction precision and giving a better postoperative result.
Drawings
FIG. 1 is a schematic diagram of a surgical navigation system according to an embodiment of the present invention;
FIG. 2 is a schematic view of a surgical navigation system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of predictive model building according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of osteotomy face prediction according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a general configuration diagram of the surgical navigation system.
Specifically, a surgical navigation system for mandibular angle osteotomy, as shown in fig. 2, includes the steps of:
s1, establishing a osteotomy face prediction model learning version according to the related data of the previous mandibular angle osteotomy patient, and testing the osteotomy face prediction model learning version by using the related data of the newly-assembled mandibular angle osteotomy patient to obtain a stable osteotomy face prediction model;
specifically, as shown in fig. 3, the method includes the following steps:
A1, obtaining first osteotomy face parameters from the preoperative and postoperative CT images of previous mandibular angle osteotomy patients, and, in combination with their preoperative and postoperative photographs, obtaining the osteotomy face prediction model learning version based on a multitask convolutional neural network, i.e., version 1.0 of the surgical navigation system;
A2, collecting the preoperative and postoperative CT images of newly enrolled patients to obtain second osteotomy face parameters, constructing a test set together with their preoperative and postoperative photographs, and testing the osteotomy face prediction model to obtain a stable osteotomy face prediction model; the risk areas of the previous and newly enrolled patients are then combined to obtain the osteotomy face prediction model stable version, completing version 2.0 of the surgical navigation system.
The following detailed description:
the preoperative CT image and the postoperative CT image of a patient with the previous mandibular angle osteotomy are aligned in a pixel level mode, comparison is carried out, the obtained difference value is the final osteotomy face of the previous patient, namely the first final osteotomy face, and the first final osteotomy face is split and calibrated.
First, the first final osteotomy face is split into two components, a first mandibular resection line α1 and a first mandibular outer plate removal plane β1, and the labels of the first mandibular resection line α1, the first mandibular outer plate removal plane β1, and the first lower dentition rivet points are obtained directly on the preoperative CT; the rivet points comprise a plurality of points and determine a first lower dentition rivet point reference plane γ1.
Then, the first final osteotomy face is quantified so that the positional relationships among the first mandibular resection line α1, the first mandibular outer plate removal plane β1, and the first lower dentition rivet point reference plane γ1 are expressed parametrically, taking γ1 as the reference: parameter 11 is the distance from the geometric center of α1 to the geometric center of γ1; parameter 12 is the deflection angle between α1 and γ1; parameter 13 is the distance from the geometric center of β1 to the geometric center of γ1; and parameter 14 is the deflection angle between β1 and γ1.
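One way such distance and deflection-angle parameters could be computed is sketched below, assuming each structure (resection line α1, removal plane β1, reference plane γ1) is given as an N×3 array of labeled 3D points and that plane orientations are taken from a least-squares fit; the helper names are illustrative, not part of the patent.

```python
import numpy as np

def centroid(points: np.ndarray) -> np.ndarray:
    """Geometric center of an (N, 3) point set."""
    return points.mean(axis=0)

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the best-fit plane through an (N, 3) point set,
    taken from the smallest singular vector of the centered points."""
    centered = points - centroid(points)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1] / np.linalg.norm(vt[-1])

def deflection_angle_deg(a: np.ndarray, b: np.ndarray) -> float:
    """Angle in degrees between the best-fit planes of two point sets."""
    cosang = abs(np.dot(fit_plane_normal(a), fit_plane_normal(b)))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def quantify(structure: np.ndarray, reference: np.ndarray) -> tuple:
    """Return (center-to-center distance, deflection angle) relative to the
    reference plane, analogous to parameter pairs such as (11, 12)."""
    dist = float(np.linalg.norm(centroid(structure) - centroid(reference)))
    return dist, deflection_angle_deg(structure, reference)

# e.g. parameters 11 and 12 for the resection line alpha1 vs. gamma1:
alpha1 = np.random.rand(50, 3) * 10.0
gamma1 = np.random.rand(40, 3) * 10.0
p11, p12 = quantify(alpha1, gamma1)
print(p11, p12)
```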
Because the final osteotomy surface is of a three-dimensional structure, the parameters of the same view angle form a data set, and the parameters of different view angles form different data sets.
A training set is formed from the multi-view preoperative CT data sets, multi-view preoperative photograph data sets, multi-view postoperative photograph data sets, and the final osteotomy face data set of the previous patients, and is input into a multitask convolutional neural network for training, giving the osteotomy face prediction model learning version, i.e., version 1.0 of the surgical navigation system.
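A minimal PyTorch sketch of a multitask convolutional network of the kind described is given below, with a shared image encoder and two regression heads (one for resection-line parameters, one for outer-plate removal-plane parameters); the layer sizes, input channels, and head dimensions are illustrative assumptions, not the architecture disclosed in the patent.

```python
import torch
import torch.nn as nn

class OsteotomyFaceNet(nn.Module):
    """Shared CNN encoder with one head per osteotomy-face component."""
    def __init__(self, in_channels: int = 3, line_params: int = 2, plate_params: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.line_head = nn.Linear(64, line_params)    # e.g. parameters 11, 12
        self.plate_head = nn.Linear(64, plate_params)  # e.g. parameters 13, 14

    def forward(self, x: torch.Tensor):
        features = self.encoder(x)
        return self.line_head(features), self.plate_head(features)

# One illustrative training step on stand-in data:
model = OsteotomyFaceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(4, 3, 128, 128)     # stand-in for multi-view photos / CT renderings
target_line = torch.randn(4, 2)          # stand-in (distance, angle) labels
target_plate = torch.randn(4, 2)
pred_line, pred_plate = model(images)
loss = nn.functional.mse_loss(pred_line, target_line) + \
       nn.functional.mse_loss(pred_plate, target_plate)
loss.backward()
optimizer.step()
```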
In this step, the nonlinear influence of soft tissue variables on the final postoperative effect is learned and modeled, and accurate estimation of the osteotomy face based on preoperative CT data, preoperative pictures and simulated postoperative effects is achieved.
For the osteotomy face prediction model learning version, a stability test is required.
Data of newly enrolled mandibular angle osteotomy patients are collected to form a test set.
Similarly, the preoperative and postoperative CT images of a newly enrolled patient are aligned at pixel level and compared; the resulting difference is that patient's final osteotomy face, i.e., the second final osteotomy face, which is then split and labeled.
First, the second final osteotomy face is split into two components, a second mandibular resection line α2 and a second mandibular outer plate removal plane β2, and the labels of α2, β2, and the second lower dentition rivet points are obtained directly on the preoperative CT; the rivet points comprise a plurality of points and determine a second lower dentition rivet point reference plane γ2.
Then, the second final osteotomy face is quantified so that the positional relationships among the second mandibular resection line α2, the second mandibular outer plate removal plane β2, and the second lower dentition rivet point reference plane γ2 are expressed parametrically, taking γ2 as the reference: parameter 21 is the distance from the geometric center of α2 to the geometric center of γ2; parameter 22 is the deflection angle between α2 and γ2; parameter 23 is the distance from the geometric center of β2 to the geometric center of γ2; and parameter 24 is the deflection angle between β2 and γ2.
From the data sets of the above parameters for the different perspectives, a data set of a second final osteotomy face of the newly enrolled patient is constructed.
A test set is formed from the second final osteotomy face data set, the preoperative CT, and the preoperative and postoperative photographs of the newly enrolled patients.
The test set is input into the osteotomy face prediction model learning version for testing, giving a stable osteotomy face prediction model and improving the accuracy and stability of the surgical navigation system.
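A sketch of how such a stability test could be run on the newly enrolled patients is shown below; it assumes the OsteotomyFaceNet model from the earlier sketch, a list of (image, line-parameter, plate-parameter) tuples, and a mean-absolute-error acceptance threshold that is an illustrative assumption rather than a criterion stated in the patent.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def evaluate(model: nn.Module, test_cases, mae_threshold: float = 1.0) -> bool:
    """Return True if the mean absolute parameter error over the test set
    is below the chosen acceptance threshold (an illustrative criterion)."""
    model.eval()
    errors = []
    for image, line_true, plate_true in test_cases:
        line_pred, plate_pred = model(image.unsqueeze(0))
        errors.append(torch.abs(line_pred.squeeze(0) - line_true).mean())
        errors.append(torch.abs(plate_pred.squeeze(0) - plate_true).mean())
    mae = torch.stack(errors).mean().item()
    print(f"test-set mean absolute error = {mae:.3f}")
    return mae < mae_threshold

# Illustrative use with stand-in data and the OsteotomyFaceNet sketched above:
cases = [(torch.randn(3, 128, 128), torch.randn(2), torch.randn(2)) for _ in range(8)]
model_is_stable = evaluate(OsteotomyFaceNet(), cases)
```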
On the preoperative CT of the previous patients, the inferior alveolar nerve course area and the mental nerve course area are marked as first risk area 1, and the facial artery and posterior facial vein course area is marked as first risk area 2; each risk area is quantitatively evaluated, and a first risk area data set is constructed.
The first risk areas are quantified, and the relationship between each risk area and the first lower dentition rivet point reference plane γ1 is expressed parametrically: parameter 15 is the distance from the geometric center of first risk area 1 to the geometric center of γ1; parameter 16 is the deflection angle between first risk area 1 and γ1; parameter 17 is the distance from the geometric center of first risk area 2 to the geometric center of γ1; and parameter 18 is the deflection angle between first risk area 2 and γ1.
Likewise, on the preoperative CT of the newly enrolled patients, the inferior alveolar nerve course area and the mental nerve course area are marked as second risk area 1, and the facial artery and posterior facial vein course area is marked as second risk area 2; each risk area is quantitatively evaluated, and a second risk area data set is constructed.
Similarly, the second risk areas are quantified, and the relationship between each risk area and the second lower dentition rivet point reference plane γ2 is expressed parametrically: parameter 25 is the distance from the geometric center of second risk area 1 to the geometric center of γ2; parameter 26 is the deflection angle between second risk area 1 and γ2; parameter 27 is the distance from the geometric center of second risk area 2 to the geometric center of γ2; and parameter 28 is the deflection angle between second risk area 2 and γ2.
The first and second risk area data sets are superimposed onto the stable osteotomy face prediction model to obtain the osteotomy face prediction model stable version, completing version 2.0 of the surgical navigation system.
S2, inputting the data of the patient scheduled for mandibular angle osteotomy into the osteotomy face prediction model, and predicting the maximum range of the postoperative 3D facial change.
On the preoperative CT of the patient scheduled for mandibular angle osteotomy, i.e., the actual patient, the inferior alveolar nerve course area and the mental nerve course area are marked as third risk area 1, and the facial artery and posterior facial vein course area is marked as third risk area 2; each risk area is quantitatively evaluated, and a third risk area data set is constructed.
The lower dentition rivet points are captured from the patient's preoperative CT image, giving the patient's third lower dentition rivet point reference plane γ3.
From the preoperative CT image, and avoiding third risk area 1 and third risk area 2, the patient's maximum bone removal range is obtained.
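One way a maximum removable-bone region that avoids the two risk areas could be derived is sketched below, assuming the mandible and the risk areas are given as boolean voxel masks on the preoperative CT with isotropic voxels, and that a fixed safety margin (here 3 mm, an illustrative value not taken from the patent) must be kept around each risk area.

```python
import numpy as np
from scipy import ndimage

def max_removal_mask(mandible: np.ndarray,
                     risk_area_1: np.ndarray,
                     risk_area_2: np.ndarray,
                     voxel_size_mm: float = 0.5,
                     margin_mm: float = 3.0) -> np.ndarray:
    """Bone voxels that may be removed: mandible voxels farther than
    `margin_mm` from either risk area (nerve course, artery/vein course)."""
    risk = risk_area_1 | risk_area_2
    # Distance (in voxels) from every voxel to the nearest risk-area voxel.
    dist_vox = ndimage.distance_transform_edt(~risk)
    safe = dist_vox * voxel_size_mm > margin_mm   # assumes isotropic voxels
    return mandible & safe

# Illustrative use with synthetic masks:
shape = (64, 64, 64)
mandible = np.zeros(shape, bool); mandible[20:50, 20:50, 20:50] = True
nerve = np.zeros(shape, bool); nerve[30:33, 20:50, 30:33] = True      # stand-in nerve course
vessel = np.zeros(shape, bool); vessel[45:48, 20:50, 40:43] = True    # stand-in vessel course
removable = max_removal_mask(mandible, nerve, vessel)
print(removable.sum(), "voxels in the maximum bone-removal range")
```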
The patient's maximum bone removal range is quantified: it is split into a third mandibular resection line α3 and a third mandibular outer plate removal plane β3, both of which are marked directly on the preoperative CT image.
The relationships of α3 and β3 to the third reference plane γ3 are then quantitatively evaluated: parameter 31 is the distance from the geometric center of α3 to the geometric center of γ3; parameter 32 is the deflection angle between α3 and γ3; parameter 33 is the distance from the geometric center of β3 to the geometric center of γ3; and parameter 34 is the deflection angle between β3 and γ3.
Each risk area is also quantitatively evaluated, and its relationship to the third reference plane γ3 is expressed parametrically: parameter 35 is the distance from the geometric center of third risk area 1 to the geometric center of γ3; parameter 36 is the deflection angle between third risk area 1 and γ3; parameter 37 is the distance from the geometric center of third risk area 2 to the geometric center of γ3; and parameter 38 is the deflection angle between third risk area 2 and γ3.
The maximum bone removal range is not itself the final osteotomy face performed in surgery.
The patient's preoperative CT, preoperative photograph, and maximum bone removal range are input into the osteotomy face prediction model stable version to obtain a prediction of the maximum 3D change of the postoperative face, i.e., the predicted range over which the postoperative facial appearance can vary.
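One way the facial-change prediction could be posed is sketched below; as an illustration only (not the architecture described in the patent), a small conditional network takes the preoperative photograph together with the maximum bone-removal parameters and outputs a per-pixel change map for the postoperative face.

```python
import torch
import torch.nn as nn

class FacialChangeNet(nn.Module):
    """Toy conditional model: preoperative photo + removal parameters -> change map."""
    def __init__(self, n_params: int = 4):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.param_enc = nn.Linear(n_params, 32)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, photo: torch.Tensor, removal_params: torch.Tensor) -> torch.Tensor:
        feat = self.image_enc(photo)                          # (B, 32, H/4, W/4)
        cond = self.param_enc(removal_params)[:, :, None, None]
        return self.decoder(feat + cond)                      # per-pixel change map

# Illustrative forward pass:
net = FacialChangeNet()
photo = torch.randn(1, 3, 128, 128)    # stand-in preoperative photograph
params = torch.randn(1, 4)             # stand-in parameters 31-34 (maximum removal range)
change_map = net(photo, params)
print(change_map.shape)                # torch.Size([1, 1, 128, 128])
```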
S3, drawing and projecting onto the lens screen a perspective three-dimensional image fitted to the real-time surgical-field image, according to the patient's preoperative CT, resection line, and bone removal face.
Specifically, as shown in fig. 4, the method includes the following steps:
B1, obtaining the patient's lower dentition rivet points and risk areas from the preoperative CT image of the mandibular angle osteotomy patient, and calibrating the maximum bone removal range;
B2, inputting the patient's preoperative CT image, preoperative photograph, predicted postoperative photograph, and maximum bone removal range into the osteotomy face prediction model, predicting the actual patient's resection line and bone removal face, and, in combination with the AR device, rendering the resection line, bone removal face, and risk areas in the surgical field in real time.
The steps are described in detail below. Based on device weight, performance stability, stability of the wearing method, and compliance with surgical sterility, suitable wearable augmented reality (AR) devices and a software platform available for secondary development are tested, selected, and purchased.
Using augmented reality, the osteotomy face and the risk areas are drawn and fitted in the surgical field in real time, improving the precision of the mandibular angle osteotomy, warning the operator, and preventing the risk areas from being touched.
The patient's preoperative CT, preoperative photograph, predicted postoperative photograph, and maximum bone removal range are input into the osteotomy face prediction model stable version to predict the actual patient's resection line and bone removal face; then, for different viewing angles and in combination with the AR device, the resection line, bone removal face, third risk area 1, and third risk area 2 are rendered in the surgical field in real time, and the visual three-dimensional model is superimposed on the actual surgical field by the AR system and projected onto the lens screen, completing version 3.0 of the surgical navigation system.
Specifically, based on the AR device, a visual three-dimensional model of the osteotomy face in the mandibular angle osteotomy is established; combined with the three-dimensional CT image, several lower teeth on the operative side are marked, rivet points are set on them, the risk areas are marked on the preoperative CT image, and the three-dimensional spatial relationships among the candidate rivet points, the risk areas, and the osteotomy face model are determined.
During the actual operation, the AR device worn by the operator films the surgical field with its built-in camera and captures the preset rivet points; according to the established three-dimensional spatial relationships, perspective images of the osteotomy face, risk area 1, and risk area 2 fitted to the real-time surgical-field image are projected onto the AR device screen, so that the resection line, osteotomy face, risk area 1, and risk area 2 are rendered in the surgical field in real time. From a large number of actual intraoperative images, about three rivet points that are easy to capture and do not affect the stability of the spatial-relationship construction are screened out. Based on the automatic image-recognition capability of the AR device, a system function is built that analyzes the surgical-field image in real time and automatically captures the preset rivet points; combined with the previously constructed three-dimensional images of the rivet points, risk area 1, risk area 2, and osteotomy face, the operator wearing the AR device during the actual operation sees the three-dimensional osteotomy face model projected in perspective onto the AR screen and fitted to the mandible observed through the screen from the operator's viewpoint, while perspective images of third risk area 1 and third risk area 2 are projected onto the screen to correspond to the inferior alveolar and mental nerve course areas and the facial artery and posterior facial vein course areas, warning the operator.
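The underlying camera registration can be sketched as follows, assuming the 3D coordinates of the rivet points are known in the CT/model frame and their 2D locations have been detected in the AR camera image; OpenCV's solvePnP estimates the camera pose and projectPoints then maps the osteotomy-face model (and, in the same way, the risk areas) into screen coordinates. The point coordinates and intrinsic parameters below are illustrative assumptions, not values from the patent.

```python
import numpy as np
import cv2

# Rivet points in the CT / model coordinate frame (mm), assumed roughly coplanar -- illustrative.
rivet_model = np.array([[0.0, 0.0, 0.0],
                        [12.0, 1.0, 0.0],
                        [24.0, -2.0, 0.0],
                        [18.0, 8.0, 0.0]], dtype=np.float64)
# Their detected 2D positions in the AR camera image (pixels) -- illustrative.
rivet_image = np.array([[410.0, 300.0],
                        [470.0, 305.0],
                        [530.0, 295.0],
                        [500.0, 340.0]], dtype=np.float64)

# Assumed pinhole intrinsics of the AR device camera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(rivet_model, rivet_image, K, dist)
if not ok:
    raise RuntimeError("pose estimation failed")

# Project the osteotomy-face model vertices into the camera image so they
# can be overlaid on the lens screen from the operator's viewpoint.
osteotomy_face_pts = np.random.rand(200, 3) * 30.0   # stand-in model vertices (mm)
screen_pts, _ = cv2.projectPoints(osteotomy_face_pts, rvec, tvec, K, dist)
print(screen_pts.reshape(-1, 2)[:5])
```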
Meanwhile, during actual operations the navigation system is tested and adjusted so that it positions accurately and projects stably, realizing its navigation function in mandibular angle osteotomy.
S4, continuously superimposing functions onto the osteotomy face prediction model, testing and improving it, and perfecting the surgical navigation system.
The 3D prediction of the postoperative face from step S2 is superimposed on version 3.0 of the surgical navigation system from step S3, the system is improved and perfected, and version 4.0 of the surgical navigation system is completed.
After version 4.0 of the surgical navigation system has been repeatedly debugged, it is applied in clinical practice and further upgraded as actual conditions require, improving the stability of the system and the accuracy of the postoperative prediction and of the surgical procedure, completing version 5.0 of the surgical navigation system.
For the patient, the maximum bone removal range is calibrated from the actual preoperative CT and risk areas 1 and 2; the preoperative photograph, preoperative CT, and maximum bone removal amount are input, and the diagram of the maximum 3D change of the postoperative face is obtained by artificial intelligence, estimating the adjustable range of the predicted postoperative facial contour. This achieves high-precision individualized design of the patient's postoperative result, reduces preoperative communication cost, and improves patient satisfaction.
For the doctor, the surgical navigation system automatically pre-judges the patient's maximum bone removal range by marking the risk areas and predicts the adjustable range of the postoperative facial contour, realizing accurate individualized design of the postoperative result based on a 3D photographing and processing system; in combination with the AR device, the resection line and bone removal face are projected into the surgical field, improving surgical precision, and the risk areas are marked in the surgical field to prompt the doctor, reducing surgical risk, shortening operating time, and reducing surgical complications.
The embodiments described above are preferred embodiments of the present invention, and the scope of the invention is not limited to them: all equivalent changes made according to the structure, shape, and principle of the invention fall within its scope of protection.

Claims (10)

1. A surgical navigation system for mandibular angle osteotomy, characterized by comprising: a prediction model establishing subsystem, a prediction subsystem, a fitting subsystem, an improvement subsystem, and an AR device; wherein the prediction model establishing subsystem is used for establishing an osteotomy face prediction model learning version based on a multitask convolutional neural network from the data of previous mandibular angle osteotomy patients, training the learning version with the data of newly enrolled mandibular angle osteotomy patients to obtain a stable osteotomy face prediction model, and then superimposing the risk area data sets of the previous and newly enrolled mandibular angle osteotomy patients to obtain an osteotomy face prediction model stable version; the prediction subsystem is used for inputting the data of a patient scheduled for mandibular angle osteotomy into the osteotomy face prediction model stable version and predicting the maximum range of the postoperative 3D facial change; the fitting subsystem is used for calibrating the maximum osteotomy amount from the patient's preoperative CT image and risk areas, predicting the postoperative effect in combination with the preoperative CT image and preoperative photograph, and, in combination with the AR device, drawing and projecting onto the lens screen a perspective three-dimensional image fitted to the real-time surgical-field image; and the improvement subsystem is used for continuously superimposing functions onto the osteotomy face prediction model, testing and improving it, and perfecting the surgical navigation system.
2. The surgical navigation system of claim 1, wherein: in the prediction model establishing subsystem, the data of previous mandibular angle osteotomy patients include a preoperative CT image, a postoperative CT image, a preoperative facial photograph, and a postoperative facial photograph;
the preoperative and postoperative CT images are aligned at pixel level and compared, the resulting difference being the first final osteotomy face, which is quantified to obtain first final osteotomy face parameters;
from the preoperative CT image, a first risk area 1 along the nerve courses and a first risk area 2 along the artery and vein courses are obtained, and both are quantified to obtain the parameters of first risk area 1 and first risk area 2.
3. The surgical navigation system of claim 1, wherein: in the prediction model establishing subsystem, a training set is formed from the final osteotomy face parameter data set, the preoperative facial photograph data set, the postoperative facial photograph data set, and the preoperative CT image data set of previous patients at different viewing angles, and is input into a multitask convolutional neural network for training to obtain the osteotomy face prediction model learning version, i.e., version 1.0 of the surgical navigation system.
4. The surgical navigation system of claim 3, wherein: a test set is formed from the second final osteotomy face parameter data set, the preoperative facial photograph data set, and the postoperative facial photograph data set of newly enrolled patients at different viewing angles, and the osteotomy face prediction model learning version is tested with it to obtain a stable osteotomy face prediction model.
5. The surgical navigation system of claim 1, wherein: the risk area data set includes the first risk area data of previous patients and the second risk area data of newly enrolled patients.
6. The surgical navigation system of claim 1, wherein: in the prediction subsystem, the data of the patient scheduled for mandibular angle osteotomy include a preoperative CT image, a preoperative facial photograph, and a predicted postoperative facial photograph; from the preoperative CT image, the patient's lower dentition rivet points, a third risk area 1 covering the inferior alveolar nerve and mental nerve courses, and a third risk area 2 covering the facial artery and posterior facial vein courses are obtained; and, avoiding third risk area 1 and third risk area 2, the patient's maximum bone removal range is obtained.
7. The surgical navigation system of claim 6, wherein: a prediction of the maximum 3D change of the postoperative face is obtained from the maximum bone removal range, the preoperative facial photograph, and the preoperative CT image.
8. The surgical navigation system of claim 6, wherein: in the fitting subsystem, tooth rivet points are obtained from the patient's preoperative CT image; the preoperative CT image, the preoperative photograph, the predicted postoperative photograph data set, and the maximum bone removal range are input into the osteotomy face prediction model stable version to predict the resection line and bone removal face; then, for different viewing angles and in combination with the AR device, the resection line, bone removal face, third risk area 1, and third risk area 2 are rendered in the surgical field in real time, and the visual three-dimensional model is superimposed on the actual surgical field by the AR system and projected onto the lens screen, completing version 3.0 of the surgical navigation system.
9. The surgical navigation system of claim 1, wherein: in the improvement subsystem, the 3D prediction of the postoperative face is superimposed on version 3.0 of the surgical navigation system, the system is improved and perfected, and version 4.0 of the surgical navigation system is completed.
10. The surgical navigation system of claim 9, wherein: version 4.0 of the surgical navigation system is repeatedly debugged and applied in clinical practice, and is further upgraded as actual conditions require, so as to improve the stability of the system and the accuracy of the postoperative prediction and of the surgical procedure, completing version 5.0 of the surgical navigation system.
CN201910305289.0A 2019-04-16 2019-04-16 Operation navigation system for mandibular angle osteotomy Active CN109907827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910305289.0A CN109907827B (en) 2019-04-16 2019-04-16 Operation navigation system for mandibular angle osteotomy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910305289.0A CN109907827B (en) 2019-04-16 2019-04-16 Operation navigation system for mandibular angle osteotomy

Publications (2)

Publication Number Publication Date
CN109907827A (en) 2019-06-21
CN109907827B (en) 2020-07-14

Family

ID=66977351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910305289.0A Active CN109907827B (en) 2019-04-16 2019-04-16 Operation navigation system for mandibular angle osteotomy

Country Status (1)

Country Link
CN (1) CN109907827B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257912B (en) * 2020-10-15 2024-09-06 北京爱康宜诚医疗器材有限公司 Method and device for predicting operation evaluation information, processor and electronic device
CN113143457A (en) * 2021-02-09 2021-07-23 席庆 Maxillofacial operation auxiliary system and method based on MR head-mounted equipment
CN113052864B (en) * 2021-03-02 2022-12-23 四川大学 Method for predicting body appearance after plastic surgery based on machine learning
CN113768627B (en) * 2021-09-14 2024-09-03 武汉联影智融医疗科技有限公司 Visual navigator receptive field acquisition method, device and surgical robot
CN116211458B (en) * 2022-12-12 2023-10-03 高峰医疗器械(无锡)有限公司 Implant planning method, device, equipment and storage medium
CN116883428B (en) * 2023-07-07 2024-05-31 东北大学 Mandible spiral CT image partition segmentation method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2465102A1 (en) * 2001-10-31 2003-05-08 Imagnosis Inc. Medical simulation apparatus and method for controlling 3-dimensional image display in the medical simulation apparatus
WO2015138657A1 (en) * 2014-03-11 2015-09-17 Ohio State Innovation Foundation Methods, devices, and manufacture of the devices for musculoskeletal reconstructive surgery
US20170071671A1 (en) * 2015-09-11 2017-03-16 Siemens Healthcare Gmbh Physiology-driven decision support for therapy planning
CN105608741A (en) * 2015-12-17 2016-05-25 四川大学 Computer simulation method for predicting soft tissue appearance change after maxillofacial bone plastic surgery
CN105943113B (en) * 2016-04-13 2018-05-25 南方医科大学 A kind of preparation method of mandibular angle bone cutting navigation template
WO2019056059A1 (en) * 2017-09-21 2019-03-28 Tmj Orthopaedics Pty Ltd A surgical procedure for cancerous mandibular reconstruction and a temporary mandibular spacer therefor
CN109567942B (en) * 2018-10-31 2020-04-14 上海盼研机器人科技有限公司 Craniomaxillofacial surgical robot auxiliary system adopting artificial intelligence technology

Also Published As

Publication number Publication date
CN109907827A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109907827B (en) Operation navigation system for mandibular angle osteotomy
CN109875683B (en) Method for establishing osteotomy face prediction model in mandibular angle osteotomy
CN109069097B (en) Dental three-dimensional data processing device and method thereof
US10265149B2 (en) Method and system for modeling the mandibular kinematics of a patient
US9173716B2 (en) Computer-aided planning with dual alpha angles in femoral acetabular impingement surgery
US20160331463A1 (en) Method for generating a 3d reference computer model of at least one anatomical structure
WO2013163800A2 (en) Oral surgery auxiliary guidance method
WO2016197326A1 (en) Image correction design system and method for oral and maxillofacial surgery
CN112885436B (en) Dental surgery real-time auxiliary system based on augmented reality three-dimensional imaging
CN114173704A (en) Method for generating dental arch model
US11925519B2 (en) Method for evaluating a dental situation with the aid of a deformed dental arch model
Jeon et al. Quantitative analysis of the mouth opening movement of temporomandibular joint disorder patients according to disc position using computer vision: a pilot study
CN111227933B (en) Prediction and real-time rendering system for mandibular angle osteotomy
CN107802276B (en) Tracing drawing device and method for skull image
KR101801376B1 (en) Skull deformity analyzing system using a 3d topological descriptor and a method for analyzing skull deformity using the same
CN110478042B (en) Interventional operation navigation device based on artificial intelligence technology
CN115105062B (en) Hip and knee joint coordination evaluation method, device and system and storage medium
CN116421341A (en) Orthognathic surgery planning method, orthognathic surgery planning equipment, orthognathic surgery planning storage medium and orthognathic surgery navigation system
KR101796111B1 (en) Skull deformity analyzing system using a 3d morphological descriptor and a method for analyzing skull deformity using the same
RU2610911C1 (en) System and method of virtual smile prototyping based on tactile computer device
CN105286784B (en) Image correction design system and method for facial and jaw surgery
CN109620406B (en) Display and registration method for total knee arthroplasty
CN113143457A (en) Maxillofacial operation auxiliary system and method based on MR head-mounted equipment
CN113781453B (en) Scoliosis advancing and expanding prediction method and device based on X-ray film
US20240320935A1 (en) Systems, Methods and Devices for Augmented Reality Assisted Surgery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant