Background
Augmented reality (AR) is a computer-simulation technology that combines techniques from virtual reality, computer vision, computer networking, human-computer interaction and related fields. It augments perception of the real world by integrating virtual objects into image sequences acquired from camera devices. Most existing tracking-and-registration techniques identify the target from feature points; feature-point extraction is not limited to planar or regular objects, and virtual-real fusion is completed by estimating pose through feature-point extraction, matching and tracking. Augmented reality is changing healthcare practice by providing powerful, intuitive ways to explore and interact with digital medical data, and by integrating those data into the physical world to create a natural, interactive virtual experience.
In surgical operations, preoperative planning has a dominant effect on the outcome of the operation. Sound preoperative planning allows intraoperative emergencies to be anticipated and improves the success rate of the operation. Preoperative planning requires locating the lesion from gray-scale Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) images. Determining the position and size of a lesion in the three-dimensional space of the human body from two-dimensional images, while choosing a surgical path that avoids important organs as far as possible, places heavy demands on the surgeon's ability to read CT and MRI images. During the operation the surgeon can usually see only the organ surfaces exposed in the field of view; to operate precisely, the surgeon must recall the complex anatomy of the human body together with the preoperatively planned surgical path held in memory. If the lesion cannot be located accurately, too little cancerous tissue or too much normal tissue may be resected, leaving residual tumor or causing excessive damage to organ function and greatly reducing the effect of the operation.
Disclosure of Invention
In view of this, the present invention provides a digital visualization method for mandibular-facial lesions based on edge detection, which visualizes mandibular-facial lesions by means of edge detection of the teeth, so that the condition of the lesion can be displayed intuitively to the doctor on a screen, providing the doctor with visual planning and reducing dependence on prior knowledge.
To achieve the above purpose, the invention adopts the following technical scheme:
A mandibular-facial lesion digital visualization method based on edge detection comprises the following steps:
step S1, acquiring medical image data of a patient, and respectively constructing a mandible model, a tooth model and a tumor model;
step S2, unifying the coordinate systems of the mandible model, the tooth model and the tumor model;
step S3, identifying the tooth model, extracting the edge contour feature set of the tooth and storing the edge contour feature set in a database;
step S4, the terminal identifies the maxillofacial teeth by alignment, extracts the edge contour features of the teeth at the current position, matches them against the edge contour feature set with the highest similarity in the database, and automatically projects the corresponding images of the teeth, mandible and tumor onto the real teeth.
Further, step S1 specifically comprises:
acquiring DICOM medical image data of the human mandibular-facial lesion; for the acquired DICOM data, extracting the CT value range of the tumor by setting upper and lower threshold limits, and reconstructing a three-dimensional virtual model of the tumor;
extracting the CT value range of the jaw and teeth by setting upper and lower threshold limits, removing the pixel points at the junction of the upper and lower jaws, and extracting the mandible and teeth from the jaw by a region-growing algorithm;
extracting the CT value range of the teeth by setting upper and lower threshold limits; because the threshold ranges of the mandible and the teeth overlap, removing the pixel points at the junction with the mandible, extracting the teeth from the mandible by a region-growing algorithm, and reconstructing a three-dimensional virtual model of the teeth to serve as the virtual model for feature extraction;
subtracting the pixel points of the tooth model from the extracted mandible and teeth by a Boolean operation, retaining the mandible without the teeth, and reconstructing a three-dimensional virtual model of the mandible.
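The segmentation steps above (threshold windowing, region growing from a seed, Boolean subtraction of masks) can be sketched as follows, assuming the CT data are available as a NumPy voxel array of Hounsfield values; the threshold window and the toy two-blob volume are illustrative assumptions, not values prescribed by the method:

```python
import numpy as np
from collections import deque

def threshold_mask(ct, lower, upper):
    """Select voxels whose CT value (HU) lies within [lower, upper]."""
    return (ct >= lower) & (ct <= upper)

def region_grow(mask, seed):
    """6-connected region growing from a seed voxel within a binary mask."""
    grown = np.zeros_like(mask, dtype=bool)
    if not mask[seed]:
        return grown
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                    and 0 <= nx < mask.shape[2]
                    and mask[nz, ny, nx] and not grown[nz, ny, nx]):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

# Toy volume: two bone-density blobs separated by a low-density gap,
# standing in for two structures after the junction pixels are removed.
ct = np.zeros((3, 5, 5), dtype=np.int16)
ct[1, 0:2, 0:2] = 1200   # first blob (e.g. mandible)
ct[1, 3:5, 3:5] = 1300   # second blob (e.g. maxilla)

bone = threshold_mask(ct, 555, 2000)    # hypothetical HU window
mandible = region_grow(bone, (1, 0, 0)) # grow from a seed in the first blob
remaining = bone & ~mandible            # Boolean subtraction of one mask,
                                        # as used to remove teeth from the mandible
```

In a real pipeline the same operations run on the full DICOM volume, and the resulting masks are surface-reconstructed (e.g. by marching cubes) into the three virtual models.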
Further, step S2 specifically comprises: importing the three model files into three-dimensional modeling software, unifying the coordinate systems of the three models under the same coordinate system, and combining the three models into one group.
Further, step S3 specifically comprises:
taking the three-dimensional virtual model of the teeth as the virtual model for feature extraction, capturing an image of the virtual tooth model at a given viewing angle, extracting the edge contour feature set at that angle by an edge detection algorithm, and storing it in the feature database of the system;
repeating this operation a number of times to obtain a complete edge contour feature set covering multiple directions.
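A minimal sketch of per-view edge feature extraction, assuming rendered screenshots of the tooth model are available as grayscale NumPy arrays; the Sobel gradient here merely stands in for whichever edge detector (e.g. Canny) the system actually uses, and the synthetic "views" are hypothetical:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Binary edge map from gradient magnitude (simple Sobel filtering)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > thresh

def edge_feature(img):
    """Flattened edge map as a simple per-view contour feature vector."""
    return sobel_edges(img).astype(float).ravel()

# Stand-ins for screenshots of the tooth model taken from two viewing
# angles; a real pipeline would rasterize the 3D model per camera pose.
views = []
for shift in (2, 3):                         # two hypothetical view angles
    img = np.zeros((8, 8))
    img[shift:shift + 3, shift:shift + 3] = 1.0  # bright tooth silhouette
    views.append(img)

feature_db = [edge_feature(v) for v in views]    # stored per-view feature set
```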
Further, step S4 specifically comprises:
creating an augmented reality system, adding an augmented reality camera and connecting the database to the augmented reality system;
adding the three-dimensional model group of the teeth, mandible and tumor to the augmented reality system, assigning materials of different colors to the teeth, the mandible and the tumor, and lowering their opacity;
deploying the augmented reality system to an augmented reality device terminal, identifying the tooth region through the device terminal, and extracting the edge contour feature set of the teeth;
matching this feature set against the database, finding the data with the highest similarity to the current contour feature set in the system library to obtain the lesion information corresponding to that feature set, and automatically projecting the lesion information onto the teeth; the user can then observe the position of the lesion on the maxillofacial surface directly through the screen.
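The highest-similarity matching step amounts to a nearest-neighbor lookup over the stored feature vectors; cosine similarity and the view keys below are illustrative assumptions rather than the metric prescribed by the invention:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two contour feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def best_match(query, database):
    """Return the key of the stored feature set most similar to the query;
    the matched key then indexes the lesion model group to project."""
    return max(database, key=lambda k: cosine_similarity(query, database[k]))

# Hypothetical database: per-view edge features keyed by a view/pose id.
db = {
    "view_frontal": np.array([1.0, 0.0, 1.0, 0.0]),
    "view_left":    np.array([0.0, 1.0, 0.0, 1.0]),
}
query = np.array([0.9, 0.1, 1.0, 0.0])  # features from the live camera frame
match = best_match(query, db)           # -> "view_frontal"
```

Because each stored feature set carries its pose coordinates, the matched key determines where the tooth, mandible and tumor models should be projected onto the real teeth.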
A mandibular-facial lesion digital visualization system based on edge detection comprises a database and an augmented reality device terminal. The database stores the edge contour feature sets of the patient's teeth and the corresponding three-dimensional model sets of the teeth, mandible and tumor; the augmented reality device terminal identifies the edge contour features of the teeth and, through augmented reality, projects the corresponding models onto the real teeth.
Compared with the prior art, the invention has the following beneficial effects:
the invention digitally visualizes the maxillofacial lesion by means of edge detection of the teeth, so that the condition of the lesion can be displayed intuitively to the doctor on a screen, providing the doctor with visual planning and reducing dependence on prior knowledge.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides a digital visualization method for mandibular-facial lesions based on edge detection, comprising the following steps:
step S1: medical image data of the patient are obtained by Computed Tomography (CT) and imported into medical image processing software, and the required tissue models are created according to the different threshold ranges that different tissues of the human body occupy in a CT image.
Preferably, the threshold of the human jaw bone is selected with a lower limit of 555. Normally the maxilla and mandible are connected by contiguous pixels in the CT image; to obtain a complete mandible, the pixel points of the connected portion, generally at the mandibular head, must be removed, after which the mandible is retained by region growing in the mandibular area. When extracting the teeth, part of the jaw may be included during threshold selection, so the pixel points at the junction between jaw and teeth must be removed; the junction can be selected on the model according to the visible part of the teeth. Teeth often show artifacts in the CT image that interfere with their selection, and the artifact pixel points must be removed manually. The threshold difference between the tumor and the surrounding soft tissue in the CT image is small, so part of the soft tissue may be selected when obtaining the tumor model; the soft-tissue pixel points around the tumor are removed manually, and the tumor is separated from the soft tissue by region growing to obtain a complete tumor model. Finally, the three models are exported in the STL file format.
step S2: the three model files are imported into three-dimensional modeling software, the coordinate systems of the three models are unified under the same coordinate system, and the models are combined into one group, so as to avoid repeated coordinate alignment caused by differing coordinate-system center points when the models are used in the augmented reality system. The grouped models are exported in the FBX file format, with millimeters (mm) as the unit.
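Unifying the three models under one coordinate system amounts to applying a rigid transform per model before grouping; the following sketch uses hypothetical vertex sets and offsets purely for illustration:

```python
import numpy as np

def to_homogeneous(points):
    """Nx3 vertex array -> Nx4 homogeneous coordinates."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def make_transform(rotation=None, translation=(0.0, 0.0, 0.0)):
    """4x4 rigid transform taking a model's local frame to the shared frame."""
    T = np.eye(4)
    T[:3, :3] = np.eye(3) if rotation is None else rotation
    T[:3, 3] = translation
    return T

# Toy vertex sets (mm) for three models exported with different origins.
teeth    = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mandible = np.array([[0.0, 0.0, 0.0]])
tumor    = np.array([[2.0, 2.0, 2.0]])

# Hypothetical per-model offsets that bring all vertices into one shared
# frame, so the three models can be exported together as a single group.
group = {
    "teeth":    (to_homogeneous(teeth)    @ make_transform().T)[:, :3],
    "mandible": (to_homogeneous(mandible) @ make_transform(translation=(0, 0, 5)).T)[:, :3],
    "tumor":    (to_homogeneous(tumor)    @ make_transform(translation=(-2, -2, -2)).T)[:, :3],
}
```

Grouping after this step means the AR system only ever needs one alignment for the whole tooth-mandible-tumor assembly.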
step S3: edge detection is performed on the virtual tooth model, feature-point information at the tooth edges is extracted, and the contour feature-point information of the current model, together with the corresponding pose coordinates, is stored in the augmented reality system database. The relative position of the current virtual tooth model on the screen is selected, and the edge feature information is extracted to serve as the prompt graphic for edge contour alignment in the augmented reality system.
step S4: an augmented reality camera, the feature information base, and the model group composed of the tooth, mandible and tumor models made in step S2 are added to the augmented reality system. The system is then deployed to augmented reality glasses or a mobile phone; after opening the program, the user aligns the maxillofacial teeth with the contour-alignment prompt graphic on the screen for identification, the edge contour features of the teeth at the current position are extracted and matched against the edge contour feature set with the highest similarity in the system, the corresponding position information is provided, and the preset images of the teeth, mandible and tumor are automatically projected onto the real teeth. The user can observe the position of the lesion on the maxillofacial surface directly through the screen.
Preferably, in this embodiment, the colors and opacities of the model group are set as RGBA (red, green, blue, alpha) values of (255, 255, 255, 70) for the teeth and mandible and (50, 70, 210, 100) for the tumor.
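The effect of these RGBA values on what the user sees can be illustrated with standard "over" alpha compositing; the single-pixel camera frame below is a toy stand-in for the live camera image:

```python
import numpy as np

def over_composite(frame, color_rgba):
    """Alpha-composite a semi-transparent model color over a camera pixel.
    Channels are 0-255, matching the values chosen for the model group."""
    rgb = np.array(color_rgba[:3], dtype=float)
    alpha = color_rgba[3] / 255.0
    return (1.0 - alpha) * frame + alpha * rgb

frame = np.zeros(3)                                      # black camera pixel
teeth_px = over_composite(frame, (255, 255, 255, 70))    # teeth / mandible layer
tumor_px = over_composite(teeth_px, (50, 70, 210, 100))  # tumor drawn on top
```

With the low alpha values above, the camera image remains visible beneath the overlays, while the blue-dominant tumor layer still stands out against the faint white of the teeth and mandible.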
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.