CN113823000A - Head-based enhanced display method, system, device and storage medium - Google Patents
- Publication number
- CN113823000A (application CN202111129700.7A)
- Authority
- CN
- China
- Prior art keywords: dimensional, model, head, face, augmented reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T19/006 — Mixed reality (under G06T19/00, Manipulating 3D models or images for computer graphics)
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration (under G06T7/00, Image analysis)
- G06T2207/10081 — Computed x-ray tomography [CT] (under G06T2207/10, Image acquisition modality)
- G06T2207/30201 — Face (under G06T2207/30196, Human being; Person)
Abstract
The invention provides a head-based enhanced display method, system, device and storage medium. The method comprises the following steps: obtaining a three-dimensional face model and a three-dimensional skull model of the head based on head CT three-dimensional data; acquiring three-dimensional data of at least part of a user's face in real time through an augmented reality device; and, by matching the three-dimensional data acquired in real time against the three-dimensional face model, obtaining a real-time image of the three-dimensional skull model at the current viewing angle, generating an enhanced image based on the real-time image, and displaying it on the augmented reality device. Using augmented reality technology and an intelligent face-matching algorithm, the method matches the pre-scanned data to the real scene conveniently and quickly, without pasted marker points, thereby speeding up the operation, reducing the patient's discomfort, simplifying the surgeon's work and improving surgical safety.
Description
Technical Field
The present invention relates to the field of image recognition of human tissue, and in particular, to a head-based enhanced display method, system, device, and storage medium.
Background
The head houses the most important organs of the human body. At present, in cranial surgery, marker points must be pasted on the patient's head so that the surgeon can clearly determine the surgical target position and avoid the skull, important blood vessels and other organs. This procedure is complex and slow, lengthens surgical preparation, and increases the patient's discomfort.
Accordingly, the present invention provides a head-based enhanced display method, system, device and storage medium.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a head-based enhanced display method, system, device and storage medium that overcome those difficulties. Using augmented reality technology and an intelligent face-matching algorithm, the invention matches pre-scanned data to the real scene conveniently and quickly, without pasted marker points; this speeds up the operation, reduces the patient's discomfort, simplifies the surgeon's work and improves surgical safety.
The embodiment of the invention provides a head-based enhanced display method, which comprises the following steps:
S110, obtaining a three-dimensional face model and a three-dimensional skull model of the head based on head CT three-dimensional data;
S120, acquiring three-dimensional data of at least part of the user's face in real time through an augmented reality device;
S130, obtaining a real-time image of the three-dimensional skull model at the current viewing angle by matching the three-dimensional data acquired in real time against the three-dimensional face model, generating an enhanced image based on the real-time image, and displaying it on the augmented reality device.
Preferably, in step S110, the three-dimensional face model and the three-dimensional skull model are generated in a common three-dimensional coordinate system based on the CT three-dimensional data of the head.
Preferably, step S110 further includes establishing at least one puncture channel based on the three-dimensional face model and the three-dimensional skull model; each puncture channel avoids the skull model, and one end of each channel intersects the surface of the three-dimensional face model to form at least one positioning region.
Preferably, in step S120, the augmented reality device creates three-dimensional data of at least part of the face of the user by scanning time-of-flight data of the face of the user.
Preferably, the step S130 includes the following steps:
S131, comparing the three-dimensional data acquired in real time with the three-dimensional face model to find the region of highest similarity in the three-dimensional face model;
S132, deriving the observation point position and viewing-angle range in the three-dimensional coordinate system from the three-dimensional data;
S133, generating image information of the three-dimensional skull model at the observation point position and within the viewing-angle range, according to the observation point position, the viewing-angle range and the skull model; and
S134, displaying the image information on the augmented reality device.
Preferably, step S131 includes the following steps:
S1311, obtaining at least five first feature points of the CT-derived three-dimensional face model through an intelligent three-dimensional face feature point recognition algorithm;
S1312, obtaining five second feature points of the three-dimensional data with the same intelligent algorithm, each second feature point having a mapping relation to a first feature point;
S1313, performing rough registration of the three-dimensional face model and the three-dimensional data based on the mapping relations between the first and second feature points;
S1314, performing fine registration of the roughly registered three-dimensional face through an ICP (iterative closest point) algorithm; testing shows a matching accuracy better than 0.5 mm.
Preferably, step S130 further includes:
based on the matching of the three-dimensional data acquired in real time against the three-dimensional face model, the augmented reality device displaying the positioning region, enhanced, on a transparent display medium at the current viewing angle relative to the three-dimensional face model; or
based on that matching, the augmented reality device obtaining an image of the combined three-dimensional skull model and puncture channel at the current viewing angle and displaying it on the augmented reality device.
An embodiment of the present invention further provides a head-based enhanced display system for implementing the above head-based enhanced display method, the system comprising:
a model establishing module, for obtaining a three-dimensional face model and a three-dimensional skull model of the head based on the head CT three-dimensional data;
a data acquisition module, for acquiring three-dimensional data of at least part of the user's face in real time through an augmented reality device; and
an augmented reality module, for obtaining a real-time image of the three-dimensional skull model at the current viewing angle by matching the three-dimensional data acquired in real time against the three-dimensional face model, generating an augmented image based on the real-time image, and displaying it on the augmented reality device.
Embodiments of the present invention also provide a head-based enhanced display device, including:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the above-described head-based enhanced display method via execution of the executable instructions.
Embodiments of the present invention also provide a computer-readable storage medium storing a program that, when executed by a processor, implements the steps of the above-described head-based enhanced display method.
The invention thus provides a head-based enhanced display method, system, device and storage medium that use augmented reality technology and an intelligent face-matching algorithm to match pre-scanned data to the real scene conveniently and quickly, without pasted marker points, thereby speeding up the operation, reducing the patient's discomfort, simplifying the surgeon's work and improving surgical safety.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
FIG. 1 is a flow chart of a head-based enhanced display method of the present invention.
Fig. 2 to 3 are schematic diagrams of a first implementation process of the head-based enhanced display method of the present invention.
Fig. 4 to 5 are schematic diagrams illustrating a second implementation process of the head-based enhanced display method according to the present invention.
FIG. 6 is a block schematic diagram of a head-based augmented display system of the present invention.
Fig. 7 is a schematic structural diagram of a head-based enhanced display device of the present invention.
Fig. 8 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the present application pertains can easily carry out the present application. The present application may be embodied in many different forms and is not limited to the embodiments described herein.
Reference throughout this specification to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," or the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics shown may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of different embodiments or examples presented in this application can be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the expressions of the present application, "plurality" means two or more unless specifically defined otherwise.
In order to clearly explain the present application, components that are not related to the description are omitted, and the same reference numerals are given to the same or similar components throughout the specification.
Throughout the specification, when a device is referred to as being "connected" to another device, this includes not only the case of being "directly connected" but also the case of being "indirectly connected" with another element interposed therebetween. In addition, when a device "includes" a certain component, unless otherwise stated, the device does not exclude other components, but may include other components.
When a device is said to be "on" another device, this may be directly on the other device, but may also be accompanied by other devices in between. When a device is said to be "directly on" another device, there are no other devices in between.
Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by those terms, which serve only to distinguish one element from another (for example, a first interface from a second interface). Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species and/or groups, but do not preclude the presence or addition of one or more other such items. The terms "or" and "and/or" are to be construed as inclusive, meaning any one or any combination: thus "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" include plural forms as long as the words do not expressly indicate a contrary meaning. The term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of other features, regions, integers, steps, operations, elements, and/or components.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Terms defined in commonly used dictionaries are to be interpreted as having meanings consistent with the relevant art and with the present disclosure, and must not be given idealized or overly formal meanings unless expressly so defined.
FIG. 1 is a flow chart of a head-based enhanced display method of the present invention. As shown in fig. 1, an embodiment of the present invention provides a head-based enhanced display method, including the steps of:
S110, obtaining a three-dimensional face model and a three-dimensional skull model of the head based on the CT three-dimensional data of the head.
S120, acquiring three-dimensional data of at least part of the user's face in real time through the augmented reality device.
S130, obtaining a real-time image of the three-dimensional skull model at the current viewing angle by matching the three-dimensional data acquired in real time against the three-dimensional face model, generating an enhanced image based on the real-time image, and displaying it on the augmented reality device.
In a preferred embodiment, in step S110, the three-dimensional face model and the three-dimensional skull model are generated in a common three-dimensional coordinate system based on the CT three-dimensional data of the head.
In a preferred embodiment, step S110 further includes establishing at least one puncture channel based on the three-dimensional face model and the three-dimensional skull model; each puncture channel avoids the skull model, and one end of each channel intersects the surface of the three-dimensional face model to form at least one positioning region.
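One simple voxel-level reading of "the puncture channel avoids the skull model" is to sample a straight segment against the bone mask. The sketch below makes that assumption; it ignores the channel's physical radius and any safety margins, and the function name is a hypothetical placeholder, not terminology from the patent:

```python
import numpy as np

def channel_is_clear(skull_mask, entry, target, n_samples=200):
    """Check a straight puncture channel against a binary skull mask.

    skull_mask: 3D bool array from CT threshold segmentation (voxel grid).
    entry, target: segment endpoints in voxel coordinates, assumed in-bounds
    (entry lies on the face surface, target at the lesion).
    Returns True if no sampled point along the segment lies inside bone.
    """
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    ts = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = np.round(entry + ts * (target - entry)).astype(int)
    return not skull_mask[pts[:, 0], pts[:, 1], pts[:, 2]].any()
```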
In a preferred embodiment, in step S120 the augmented reality device builds three-dimensional data of at least part of the user's face by scanning time-of-flight data of the user's face.
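Turning a time-of-flight depth image into a face point cloud is a standard pinhole back-projection. A minimal sketch, assuming a calibrated pinhole camera model; the patent does not specify the sensor or its intrinsics:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a time-of-flight depth image into a 3D point cloud.

    depth: HxW array of metric depths (0 marks invalid pixels).
    fx, fy, cx, cy: pinhole intrinsics; placeholder values, since the
    patent does not describe the scanning component's calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop invalid pixels
```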
In a preferred embodiment, step S130 includes the following steps:
S131, comparing the three-dimensional data acquired in real time with the three-dimensional face model to find the region of highest similarity in the three-dimensional face model.
S132, deriving the observation point position and viewing-angle range in the three-dimensional coordinate system from the three-dimensional data (a sketch of this change of frame follows these steps).
S133, generating image information of the three-dimensional skull model at the observation point position and within the viewing-angle range, according to the observation point position, the viewing-angle range and the skull model; and
S134, displaying the image information on the augmented reality device.
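Step S132 reduces to a change of frame once the registration transform is known. A minimal sketch, under the assumption that poses and the registration result are expressed as 4x4 homogeneous matrices (the patent does not fix a representation):

```python
import numpy as np

def viewpoint_in_model_frame(T_model_to_world, cam_pose_world):
    """Express the AR device's camera pose in CT-model coordinates (S132).

    T_model_to_world: 4x4 registration transform (CT model -> real scene).
    cam_pose_world:   4x4 camera pose of the device in the real scene.
    Rendering the skull model from the returned pose reproduces the view
    of the skull at the current viewing angle (S133).
    """
    return np.linalg.inv(T_model_to_world) @ cam_pose_world
```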
In a preferred embodiment, step S131 includes the following steps:
S1311, obtaining at least five first feature points of the CT-derived three-dimensional face model through an intelligent three-dimensional face feature point recognition algorithm.
S1312, obtaining five second feature points of the three-dimensional data with the same intelligent algorithm, each second feature point having a mapping relation to a first feature point.
S1313, performing rough registration of the three-dimensional face model and the three-dimensional data based on the mapping relations between the first and second feature points. In this embodiment, an existing or open-source intelligent three-dimensional face feature point recognition algorithm is used, which is not described here again.
S1314, performing fine registration of the roughly registered three-dimensional face through an ICP (iterative closest point) algorithm; testing shows a matching accuracy better than 0.5 mm. ICP is an established data registration method that uses nearest-point search to align free-form surfaces.
The basic principle of the ICP algorithm is: under a given constraint, find for each point in the target point cloud P its nearest matching point in the source point cloud Q, obtaining pairs (p_i, q_i), and then compute the optimal matching parameters R and t that minimize the error function

E(R, t) = (1/n) · Σ_{i=1}^{n} || q_i − (R·p_i + t) ||²

where n is the number of nearest-point pairs, p_i is a point in the target point cloud P, q_i is the nearest point to p_i in the source point cloud Q, R is a rotation matrix, and t is a translation vector.
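A compact point-to-point ICP matching the error function above can be sketched with a k-d tree for the nearest-point search and a closed-form SVD (Kabsch) solution for R and t in each iteration. This is a generic illustration, not the patent's implementation; here `source` is the cloud being moved onto `target`:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP: returns R, t minimizing the mean squared
    nearest-point distance after moving `source` onto `target`."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)                       # nearest-point search
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)             # closest target points
        q = target[idx]
        # Closed-form optimal rotation/translation for the current pairs
        p_c, q_c = src.mean(axis=0), q.mean(axis=0)
        H = (src - p_c).T @ (q - q_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
        R_i = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_i = q_c - R_i @ p_c
        src = src @ R_i.T + t_i                  # apply incremental motion
        R, t = R_i @ R, R_i @ t + t_i            # accumulate composite R, t
        err = float(np.mean(dists ** 2))
        if abs(prev_err - err) < tol:            # converged
            break
        prev_err = err
    return R, t
```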
In a preferred embodiment, step S130 further includes:
based on the matching of the three-dimensional data acquired in real time against the three-dimensional face model, the augmented reality device displaying the positioning region, enhanced, on a transparent display medium at the current viewing angle relative to the three-dimensional face model; or
based on that matching, the augmented reality device obtaining an image of the combined three-dimensional skull model and puncture channel at the current viewing angle and displaying it on the augmented reality device.
In this embodiment, the CT three-dimensional data are obtained by computed tomography (CT). CT scans a body region section by section using precisely collimated X-ray beams, gamma rays, ultrasonic waves and the like, together with highly sensitive detectors; it offers short scanning times and clear images, and can be used to examine many diseases. By the radiation used, CT can be classified as X-ray CT (X-CT) or gamma-ray CT (γ-CT).

In X-ray CT, an X-ray beam scans a body slice of given thickness; the X-rays transmitted through the slice are received by a detector, converted into visible light and then, by photoelectric conversion, into electrical signals, which an analog-to-digital converter turns into digital signals for computer processing. For image formation, the selected slice is divided into cuboids of equal volume called voxels. From the scan data, the X-ray attenuation (absorption) coefficient of each voxel is computed and arranged into a digital matrix, which can be stored on magnetic or optical disk. A digital-to-analog converter then turns each number in the matrix into a small block whose gray scale ranges from black to white, i.e. a pixel; the blocks arranged in a matrix form the CT image. The CT image is therefore a reconstructed image, and the attenuation coefficient of each voxel can be computed by various mathematical methods.

In summary, CT measures the differing X-ray absorption and transmittance of different human tissues with highly sensitive instruments, feeds the measured data into a computer, and, after processing, produces cross-sectional or three-dimensional images of the examined region, in which even small pathological changes at any position can be found.
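On a CT volume calibrated in Hounsfield units, the skull and the outer face surface can be separated with simple thresholds. A minimal sketch; the values below (about 300 HU for dense bone, about -300 HU for the skin boundary) are common rules of thumb, not figures stated in the patent:

```python
import numpy as np

def segment_ct(volume_hu, bone_hu=300.0, skin_hu=-300.0):
    """Threshold segmentation of a CT volume in Hounsfield units.

    Returns a bone mask (skull) and a head mask whose outer boundary
    approximates the face surface. Thresholds are assumed defaults.
    """
    skull_mask = volume_hu > bone_hu   # dense bone
    head_mask = volume_hu > skin_hu    # soft tissue plus bone
    return skull_mask, head_mask
```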
The invention combines the pre-scanned CT three-dimensional data with the real-time scene using augmented reality technology together with an in-house intelligent face-matching algorithm, displaying the patient's true CT-derived head structure in the real scene and increasing the safety of head surgery. The workflow is as follows:
1. Pre-scan to obtain a CT image of the patient's head.
2. Obtain a three-dimensional model of the patient's face using the developed intelligent segmentation algorithm.
3. Obtain a two-dimensional image of the patient's skull using a threshold segmentation algorithm, and a three-dimensional skull model using the marching cubes three-dimensional reconstruction algorithm (see the reconstruction sketch after this list).
4. Acquire a three-dimensional model of the patient's face in the operating room using HoloLens glasses (HoloLens is an MR head-mounted display developed by Microsoft).
5. Obtain five three-dimensional face feature points from the CT scan using the intelligent three-dimensional face feature point recognition algorithm.
6. Obtain the three-dimensional face feature points of the patient on site in the operating room using the same algorithm.
7. Roughly register the identified CT face feature points to the operating-room face feature points.
8. Finely register the roughly registered three-dimensional face using the ICP (iterative closest point) algorithm; testing shows a matching accuracy better than 0.5 mm.
9. Because the CT three-dimensional face and the CT skull share one coordinate system, apply the fine-registration transformation matrix (CT face to operating-room face) to the CT skull, and display the transformed CT skull model through the HoloLens glasses; the pre-scanned three-dimensional skull model then appears in the real world, facilitating the subsequent operation.
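Steps 3 and 9 can be sketched together: marching cubes extracts the skull surface from the thresholded CT volume, and, since the CT face and CT skull share one coordinate system, a single 4x4 registration matrix T moves the skull vertices into the operating-room frame. The 300 HU iso-level is an assumed rule of thumb, not a value from the patent:

```python
import numpy as np
from skimage import measure

def reconstruct_and_transform_skull(volume_hu, T, bone_hu=300.0):
    """Skull surface via marching cubes, mapped into the real scene.

    volume_hu: 3D CT volume in Hounsfield units.
    T: 4x4 CT-face -> operating-room transform from fine registration.
    Vertices come out in voxel index space; pass a `spacing` argument to
    measure.marching_cubes for metric units if needed.
    """
    verts, faces, _, _ = measure.marching_cubes(volume_hu, level=bone_hu)
    verts_h = np.c_[verts, np.ones(len(verts))]    # homogeneous coordinates
    verts_or = (verts_h @ T.T)[:, :3]              # into operating-room frame
    return verts_or, faces
```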
The key points of the invention include:
1. reconstructing a three-dimensional face model from CT data;
2. finding the face feature points automatically with an intelligent algorithm instead of artificial marker points;
3. finding the operating-room patient's face feature points automatically with an intelligent algorithm;
4. roughly registering the CT face feature points to the operating-room patient's feature points;
5. finely registering the CT face model to the operating-room patient's face model with the ICP (iterative closest point) algorithm to obtain a transformation matrix;
6. displaying the transformed three-dimensional skull data of the patient in the real world through the HoloLens device.
The invention uses intelligent algorithms and augmented reality technology to increase the accuracy and safety of craniocerebral surgical positioning.
Fig. 2 to 3 are schematic diagrams of a first implementation process of the head-based enhanced display method of the present invention. As shown in fig. 2 and 3, a three-dimensional face model and a three-dimensional skull model are generated in a common coordinate system based on the head CT three-dimensional data. At least one puncture channel is established based on the two models; each channel avoids the skull model, and one end of each channel intersects the surface of the face model to form at least one positioning region.
The time-of-flight data of the face of the user 1 are scanned by a scanning component carried by the augmented reality device 2 (existing MR glasses may be used) to build three-dimensional data of at least part of the face of user 1 (existing infrared or laser point-cloud scanning may be used to acquire the data).
The three-dimensional data acquired in real time are compared with the three-dimensional face model to find the region of highest similarity. At least five first feature points of the CT-derived face model are obtained through the intelligent three-dimensional face feature point recognition algorithm, and five second feature points of the scanned data are obtained with the same algorithm, each second feature point mapping to a first feature point. Rough registration of the face model and the scanned data is performed from these mapping relations, followed by fine registration with the ICP algorithm; testing shows a matching accuracy better than 0.5 mm. The observation point position and viewing-angle range are then derived in the three-dimensional coordinate system from the scanned data, and image information of the skull model at that observation point and within that viewing-angle range is generated. Based on this matching, the augmented reality device 2 obtains an enhanced image 24 of the combined skull model and puncture channel at the current viewing angle; the enhanced image 24 further contains the positioning region 23 formed where the puncture channel intersects the face model surface. The enhanced image 24 is displayed on the augmented reality device 2, so that from the viewer's (the doctor's) perspective, the combined skull-and-channel image 24 of user 1 can be seen together with the positioning region 23 marked in it (the puncture region displayed by the mixed-reality method of the invention).
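The rough registration from the five matched feature-point pairs has a closed-form rigid solution (the Kabsch/SVD method). A sketch, under the assumption that the two landmark sets are supplied in corresponding order, as the mapping relation implies:

```python
import numpy as np

def rough_registration(p, q):
    """Rigid transform from matched landmarks (rough registration).

    p: (5, 3) CT face feature points; q: (5, 3) feature points scanned in
    the operating room, in the same order. Returns R (3x3) and t (3,)
    with q ~= R @ p_i + t. A sketch, not the patent's exact code.
    """
    p_c, q_c = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_c).T @ (q - q_c)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_c - R @ p_c
    return R, t
```

The result then serves as the initial transform for the ICP fine registration sketched earlier.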
The augmented reality (AR) technology used in this embodiment skillfully fuses virtual information with the real world. It draws widely on multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction and sensing: computer-generated virtual information such as text, images, three-dimensional models, music and video is simulated and applied to the real world, and the two kinds of information complement each other, thereby "augmenting" the real scene. AR has three major technical points: three-dimensional (tracking) registration, virtual-real fusion display, and human-computer interaction. A camera and sensors first acquire the real scene and pass the data to a processor for analysis and reconstruction; accessories such as the AR head-mounted display, or the camera, gyroscope and other sensors of a smart mobile device, update the user's spatial position in the real environment in real time, yielding the relative pose of the virtual and real scenes and aligning their coordinate systems; the virtual scene is then fused with the real one by computation, and the composite image is presented to the user. The user can issue control signals through interactive accessories such as a microphone, eye-movement tracker, infrared sensor or camera, enabling human-computer interaction and information updating for interactive AR operation. Three-dimensional registration is the core of AR: two-dimensional or three-dimensional objects in the real scene serve as markers to which the virtual information is aligned and matched, so that the position, size and motion path of virtual objects match the real environment perfectly.
The display system is an essential part of augmented reality technology. To obtain a convincing combination of the virtual and the real, displays with rich color reproduction are the basis; display devices include helmet-mounted displays and non-helmet-mounted display equipment. In video see-through helmet displays, a miniature camera captures images of the external environment, which the computer fuses with the virtual imagery before presenting the combined, superimposed result to the user through an integrated interactive interface; their operating principle is close to that of the immersive helmets used in virtual reality. Optical see-through helmet displays instead place a half-transparent, half-reflective optical combiner in front of the user's eyes, so that the real scene reaches the user directly through the semi-transparent mirror while the virtual imagery is superimposed on it, fully integrating with the real environment and meeting the user's operational requirements. The AR techniques used in this embodiment are all prior-art means and are not described further here.
Fig. 4 to 5 are schematic diagrams of a second implementation process of the head-based enhanced display method of the present invention. As shown in fig. 4 and 5, a three-dimensional face model and a three-dimensional skull model are generated in a common coordinate system based on the head CT three-dimensional data. At least one puncture channel is established based on the two models; each channel avoids the skull model, and one end of each channel intersects the surface of the face model to form at least one positioning region.
The time-of-flight data of the face of the user 1 are scanned by a scanning component carried by the augmented reality device 2 (existing AR glasses may be used) to build three-dimensional data of at least part of the face of user 1 (existing infrared or laser point-cloud scanning may be used to acquire the data).
The three-dimensional data acquired in real time are compared with the three-dimensional face model to find the region of highest similarity. As before, at least five first feature points of the CT-derived face model and five second feature points of the scanned data are obtained with the intelligent three-dimensional face feature point recognition algorithm; rough registration is performed from their mapping relations and refined with the ICP algorithm, with a tested matching accuracy better than 0.5 mm. The observation point position and viewing-angle range are then derived in the three-dimensional coordinate system from the scanned data, and image information of the skull model at that observation point and within that viewing-angle range is generated.
Based on the matching of the three-dimensional data acquired in real time against the three-dimensional face model, the augmented reality device 2 displays the positioning region 23, enhanced, on the transparent display medium (a transparent glass lens) at the current viewing angle relative to the face model; 24 denotes the face of user 1 as seen through the transparent medium. From the viewer's (the doctor's) perspective, the positioning region 23 (the puncture region displayed by the augmented-reality means of the invention) marked on the face of user 1 for the subsequent operation can be seen.
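Overlaying the positioning region on an optical see-through display reduces, in the simplest model, to projecting its 3D points through a virtual pinhole camera at the current viewpoint. A sketch; the intrinsics are placeholders, and a real optical see-through device additionally needs a per-user display calibration:

```python
import numpy as np

def project_region(points_world, cam_pose_world, fx, fy, cx, cy):
    """Project the 3D positioning region into display coordinates.

    points_world:   (N, 3) positioning-region points in the real scene.
    cam_pose_world: 4x4 pose of the device's viewpoint (assumed convention).
    fx, fy, cx, cy: assumed pinhole intrinsics of the virtual display camera.
    """
    T_world_to_cam = np.linalg.inv(cam_pose_world)
    pts_h = np.c_[points_world, np.ones(len(points_world))]
    pts_cam = (pts_h @ T_world_to_cam.T)[:, :3]
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx    # perspective divide
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```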
Mixed reality (MR) is a further development of virtual reality technology. By presenting virtual scene information within the real scene, it builds an interactive feedback loop between the real world, the virtual world and the user, enhancing the realism of the experience. MR encompasses both augmented reality and augmented virtuality, and refers to a new visualization environment created by merging the real and virtual worlds, in which physical and digital objects coexist and interact in real time. An MR system generally has three main features: 1. it combines the virtual and the real; 2. it registers content in virtual three dimensions (3D registration); 3. it runs interactively in real time. MR requires an environment in which real-world objects can interact with virtual ones: if everything is virtual, the domain is VR; if virtual information is merely superimposed on real things, it is AR. The essence of MR is interaction with the real world and timely acquisition of information. MR is thus a combination of technologies that provide not only new viewing methods but also new input methods; the combination of input and output can directly influence workflows and help staff improve efficiency and innovation. The MR techniques used in this embodiment are all prior-art means and are not described further here.
FIG. 6 is a block schematic diagram of a head-based augmented display system of the present invention. As shown in fig. 6, the head-based augmented display system 5 of the present invention includes:
a model establishing module 51, for obtaining a three-dimensional face model and a three-dimensional skull model of the head based on the head CT three-dimensional data;
a data acquisition module 52, for acquiring three-dimensional data 53 of at least part of the user's face in real time through an augmented reality device; and
an augmented reality module, for obtaining a real-time image of the three-dimensional skull model at the current viewing angle by matching the three-dimensional data acquired in real time against the three-dimensional face model, generating an augmented image based on the real-time image, and displaying it on the augmented reality device.
The head-based augmented display system of the invention uses augmented reality technology and an intelligent face-matching algorithm to match pre-scanned data to the real scene conveniently and quickly, without pasted marker points, thereby speeding up the operation, reducing the patient's discomfort, simplifying the surgeon's work and improving surgical safety.
The above-mentioned embodiments are only preferred examples of the present invention, and are not intended to limit the present invention, and any equivalent substitutions, modifications and changes made within the principle of the present invention are within the protection scope of the present invention.
An embodiment of the present invention further provides a head-based enhanced display device, including a processor and a memory storing executable instructions of the processor, wherein the processor is configured to perform the steps of the head-based enhanced display method via execution of the executable instructions.
As shown above, the head-based augmented display system of the embodiment of the present invention uses augmented reality technology and an intelligent face-matching algorithm to match pre-scanned data to the real scene conveniently and quickly, without pasted marker points, thereby speeding up surgery, alleviating patient pain, facilitating the doctor's operation and improving surgical safety.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit", "module" or "platform".
Fig. 7 is a schematic structural diagram of a head-based enhanced display device of the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 7. The electronic device 600 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610, causing the processing unit 610 to perform the steps according to various exemplary embodiments of the present invention described in the head-based enhanced display method section above. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
Embodiments of the present invention also provide a computer-readable storage medium storing a program which, when executed by a processor, performs the steps of the head-based enhanced display method. In some possible embodiments, aspects of the invention may also be implemented as a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to various exemplary embodiments of the invention described in the head-based enhanced display method section above.
Fig. 8 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 8, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In summary, the present invention provides a head-based enhanced display method, system, device and storage medium that use augmented reality technology and an intelligent face-matching algorithm to match pre-scanned data to the real scene conveniently and quickly, without pasted marker points, thereby speeding up surgery, reducing patient discomfort, simplifying the surgeon's work and improving surgical safety.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.
Claims (10)
1. A method for enhanced head-based display, comprising the steps of:
S110, obtaining a three-dimensional face model and a three-dimensional skull model of the head based on head CT three-dimensional data;
S120, acquiring three-dimensional data of at least part of the user's face in real time through an augmented reality device;
S130, obtaining a real-time image of the three-dimensional skull model at the current viewing angle by matching the three-dimensional data acquired in real time against the three-dimensional face model, generating an augmented image based on the real-time image, and displaying it on the augmented reality device.
2. The head-based augmented display method of claim 1, wherein in step S110 the three-dimensional face model and the three-dimensional skull model are generated in a common three-dimensional coordinate system based on the CT three-dimensional data of the head.
3. The head-based augmented display method of claim 1, wherein step S110 further comprises establishing at least one puncture channel based on the three-dimensional face model and the three-dimensional skull model, the puncture channel avoiding the three-dimensional skull model, and one end of the puncture channel meeting the surface of the three-dimensional face model to form at least one positioning region.
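One possible reading of claim 3 in code, assuming the skull and face models are available as point clouds (an illustrative assumption; the patent does not fix a representation): sample the planned channel as a line segment and check that it keeps a safety margin from the skull points, with the channel's outer end marking the positioning region on the face surface. The `margin_mm` value and the point-cloud stand-in for mesh collision testing are both illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def channel_avoids_skull(start, target, skull_points, margin_mm=2.0, samples=200):
    """True if the straight puncture channel from start to target stays at
    least margin_mm away from every skull point (coarse collision test)."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    channel_points = start + t * (target - start)      # sample along the channel
    distances, _ = cKDTree(skull_points).query(channel_points)
    return bool(np.all(distances > margin_mm))

def positioning_region_center(start, face_points):
    """Face-surface point closest to the channel's outer end: a simple proxy
    for where the channel meets the face model to form the positioning region."""
    _, idx = cKDTree(face_points).query(start)
    return face_points[idx]
```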
4. The head-based augmented display method of claim 1, wherein in step S120 the augmented reality device builds the three-dimensional data of at least part of the user's face by time-of-flight scanning of the user's face.
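For concreteness, a time-of-flight depth frame is typically back-projected through the sensor's pinhole intrinsics to obtain the kind of three-dimensional face data claim 4 refers to. The sketch below shows that standard conversion; the intrinsic values in the example are placeholders, since a real device exposes its own calibration.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project an (H, W) time-of-flight depth image in metres into an
    (N, 3) point cloud in the sensor's camera coordinates."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))     # pixel column/row indices
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                    # drop pixels with no depth return

# Synthetic frame with placeholder intrinsics, for illustration only.
cloud = depth_to_point_cloud(np.full((480, 640), 0.5),
                             fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```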
5. The head-based augmented display method of claim 2, wherein step S130 comprises the following steps:
S131, comparing the three-dimensional data acquired in real time with the three-dimensional face model to obtain the region of highest similarity in the three-dimensional face model;
S132, simulating the position of an observation point and the range of the viewing angle in the three-dimensional coordinate system according to the three-dimensional data;
S133, generating image information of the three-dimensional skull model at the observation point position and within the viewing angle range according to the observation point position, the viewing angle range and the three-dimensional skull model; and
S134, displaying the image information on the augmented reality device.
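One way to realise S132–S133, assuming the registration of claim 6 yields a rigid 4x4 transform from device camera coordinates to CT model coordinates: the observation point is the camera origin mapped through that transform, and the skull model's image information follows from a standard pinhole projection. The intrinsic matrix here is an illustrative placeholder, not a value from the patent.

```python
import numpy as np

def observation_pose(T_cam_to_model):
    """S132: observation point and viewing direction in model coordinates,
    given the camera-to-model transform produced by registration."""
    eye = T_cam_to_model[:3, 3]                           # camera origin in model space
    forward = T_cam_to_model[:3, :3] @ np.array([0.0, 0.0, 1.0])
    return eye, forward

def project_skull(skull_vertices, T_cam_to_model, K):
    """S133: pixel coordinates of skull-model vertices under the current view,
    using a pinhole camera with intrinsic matrix K."""
    T_model_to_cam = np.linalg.inv(T_cam_to_model)
    pts_cam = skull_vertices @ T_model_to_cam[:3, :3].T + T_model_to_cam[:3, 3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                  # keep points in front of camera
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Illustrative intrinsics for a 640x480 view.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
```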
6. The head-based augmented display method of claim 5, wherein step S131 comprises the following steps:
S1311, obtaining at least five first feature points of the three-dimensional face model obtained through CT scanning by means of an intelligent three-dimensional face feature point recognition algorithm;
S1312, obtaining five second feature points of the three-dimensional data using the intelligent three-dimensional face feature point recognition algorithm, the second feature points each having a mapping relationship with a corresponding first feature point;
S1313, performing coarse registration of the three-dimensional face model and the three-dimensional data based on the mapping relationship between the first feature points and the second feature points; and
S1314, performing fine registration on the coarsely registered three-dimensional face through an ICP (Iterative Closest Point) algorithm, the matching accuracy obtained in testing being less than 0.5 mm.
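Claim 6's two-stage registration can be illustrated with the textbook pipeline it resembles: a closed-form rigid fit (Kabsch/SVD) over the five feature-point correspondences for coarse registration, followed by a basic point-to-point ICP loop for fine registration. This is a generic sketch of that standard approach, not the patent's specific "intelligent" algorithm, and the 0.5 mm figure is simply the claim's acceptance threshold restated as a check.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Closed-form rigid transform (4x4) mapping src points onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, dst_c - R @ src_c
    return T

def icp_refine(src, dst, T, iterations=30):
    """Point-to-point ICP: alternate nearest-neighbour matching against the
    model cloud with closed-form re-fitting; returns the transform and RMS error."""
    tree = cKDTree(dst)
    for _ in range(iterations):
        moved = src @ T[:3, :3].T + T[:3, 3]
        _, idx = tree.query(moved)                 # closest model point per sample
        T = rigid_fit(src, dst[idx])
    moved = src @ T[:3, :3].T + T[:3, 3]
    distances, _ = tree.query(moved)
    return T, float(np.sqrt(np.mean(distances ** 2)))

# Coarse (S1313): five corresponding feature points from S1311/S1312, e.g.
# T0 = rigid_fit(live_feature_points, ct_feature_points)
# Fine (S1314): T, rms = icp_refine(live_face_points, ct_face_points, T0)
# The claimed acceptance test then amounts to: assert rms < 0.5  # millimetres
```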
7. The head-based augmented display method of claim 3, wherein step S130 further comprises:
based on the matching of the three-dimensional data acquired in real time with the three-dimensional face model, displaying, by the augmented reality device, the positioning region in an augmented manner on a transparent display medium according to the current viewing angle relative to the three-dimensional face model; or
based on the matching of the three-dimensional data acquired in real time with the three-dimensional face model, acquiring, by the augmented reality device, an image of a combined model of the three-dimensional skull model and the puncture channel at the current viewing angle, and displaying the image on the augmented reality device.
8. A head-based augmented display system for implementing the head-based augmented display method of claim 1, comprising:
a model establishing module, configured to obtain a three-dimensional face model and a three-dimensional skull model of the head based on CT three-dimensional data of the head;
a data acquisition module, configured to acquire three-dimensional data of at least part of the user's face in real time through an augmented reality device; and
an augmented reality module, configured to obtain a real-time image of the three-dimensional skull model at the current viewing angle by matching the three-dimensional data acquired in real time with the three-dimensional face model, obtain an augmented image based on the real-time image, and display the augmented image on the augmented reality device.
9. A head-based augmented display device, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the head-based augmented display method of any one of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium storing a program which, when executed by a processor, performs the steps of the head-based augmented display method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111129700.7A CN113823000A (en) | 2021-09-26 | 2021-09-26 | Enhanced display method, system, device and storage medium based on head |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113823000A (en) | 2021-12-21
Family
ID=78921284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111129700.7A | Enhanced display method, system, device and storage medium based on head (pending) | 2021-09-26 | 2021-09-26
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113823000A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114521911A (en) * | 2022-02-22 | 2022-05-24 | 上海爱乐慕健康科技有限公司 | Augmented reality display method and system based on lateral position of skull and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101797182A (en) * | 2010-05-20 | 2010-08-11 | 北京理工大学 | Nasal endoscope minimally invasive operation navigating system based on augmented reality technique |
CN108324246A (en) * | 2018-01-19 | 2018-07-27 | 上海联影医疗科技有限公司 | Medical diagnosis auxiliary system and method |
CN109785374A (en) * | 2019-01-23 | 2019-05-21 | 北京航空航天大学 | A kind of automatic unmarked method for registering images in real time of dentistry augmented reality surgical navigational |
CN113409456A (en) * | 2021-08-19 | 2021-09-17 | 江苏集萃苏科思科技有限公司 | Modeling method, system, device and medium for three-dimensional model before craniocerebral puncture operation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||