CN116909442A - Holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction - Google Patents

Holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction

Info

Publication number
CN116909442A
CN116909442A (application CN202310794023.3A)
Authority
CN
China
Prior art keywords
user
mixed reality
dimensional
virtual image
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310794023.3A
Other languages
Chinese (zh)
Inventor
司伟鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202310794023.3A
Publication of CN116909442A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Abstract

The application relates to the technical field of human-computer interaction, and in particular to a holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction. The method comprises the following steps: first, acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user and the user wears a mixed reality display device; then, based on the user environment information and the user motion information, generating a virtual image using a stereoscopic interpolation volume rendering method; next, projecting the virtual image into the user's field of view with an optical model and fusing the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; and finally, based on the three-dimensional model of mixed reality rendering, capturing the user's gestures with a gesture recognition method and performing mid-air gesture interaction. Through efficient three-dimensional volume rendering and a real-time, accurate interaction design, the application realizes mid-air gesture interaction between the user and the virtual scene.

Description

Holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction
Technical Field
The embodiment of the application relates to the technical field of human-computer interaction, and in particular to a holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction.
Background
With the continuous development and innovation of digital medicine, augmented reality has gradually become a key technology for assisting surgeons in clinical diagnosis and treatment. Thanks to the interdisciplinary fusion of medicine and computer science, multi-modal medical data obtained by medical imaging such as computed tomography (Computed Tomography, CT), magnetic resonance imaging (Magnetic Resonance Imaging, MRI) and ultrasound (US) can be rendered and reconstructed by computer into a model, making three-dimensional visualization of medical data possible and opening a broad space for the development of extended reality (XR) assisted medicine.
Medical interactive visualization systems based on XR and equipped with the HoloLens 2 head-mounted display have demonstrated the potential of presenting high-quality 3D patient data. On the basis of three-dimensional visualization, surgical simulation provides an immersive surgical training environment in which doctors can manipulate 3D objects; it helps doctors perform visual diagnosis while enhancing their perception, and effectively avoids the risks that real surgery may bring. For holographic XR medical visualization systems, efficient preoperative volume rendering and real-time, accurate intraoperative interaction techniques are the key to assisting physicians in surgical diagnosis.
Unlike conventional two-dimensional display techniques for medical data, XR can reveal three-dimensional anatomy more truly and accurately during intraoperative diagnosis and preoperative surgical planning. With the development of GPU technology, volume rendering has made tremendous progress in recent years; it uses direct volume rendering to enhance visual and spatial perception of three-dimensional datasets. However, when performing volume rendering on the GPU, existing methods have not fully achieved the quality and performance required for medical augmented reality. The methods proposed so far are limited to rendering polygonal surface data; although they can depict the relevant anatomy and structures, they are too time-consuming to meet the efficiency requirements that XR-assisted diagnostic systems place on volume rendering.
Disclosure of Invention
The embodiment of the application provides a holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction. Through efficient three-dimensional volume rendering and a real-time, accurate interaction design, they enable a user to move, rotate, zoom and shear virtual tissue from any angle, control virtual surgical instruments in real time and observe the pathological changes of a patient, realizing mid-air gesture interaction between the user and the virtual scene.
To solve the above technical problems, in a first aspect, an embodiment of the present application provides a holographic three-dimensional medical data visualization method supporting mid-air gesture interaction, comprising the following steps: first, acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user and the user wears a mixed reality display device; then, based on the user environment information and the user motion information, generating a virtual image using a stereoscopic interpolation volume rendering method; next, projecting the virtual image into the user's field of view with an optical model and fusing the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; and finally, based on the three-dimensional model of mixed reality rendering, capturing the user's gestures with a gesture recognition method and performing mid-air gesture interaction, thereby realizing interaction between the user and the virtual scene.
In some exemplary embodiments, the mixed reality display device is the head-mounted mixed reality display device HoloLens 2, and the optical model is the optical model in the head-mounted mixed reality display device HoloLens 2.
In some exemplary embodiments, the mixed reality display device incorporates a depth camera, an inertial measurement unit, and a plurality of sensors to obtain user environment information and user motion information.
In some exemplary embodiments, a recognition camera and a tracking camera are also provided within the mixed reality display device; the recognition camera is used to detect and recognize specific images, objects or scenes in the real scene; the tracking camera is used to track the position and orientation of the camera and to place the virtual image in the real world according to the output of the recognition camera, so as to realize virtual-real registration.
In some exemplary embodiments, fusing the virtual image and the real scene to obtain a three-dimensional model of mixed reality rendering includes: converting the virtual image into a three-dimensional model through 3D modeling and rendering techniques, adjusting the attributes of the virtual image in real time, and fusing the three-dimensional model of the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; where the attributes of the virtual image include its position, size, and transparency.
In some exemplary embodiments, the mixed reality rendered three-dimensional model includes three-dimensional models of a plurality of different angles; wherein the three-dimensional models of different angles correspond to the viewing angle of the user.
In some exemplary embodiments, during mid-air gesture interaction, the user uses gestures to move, rotate, zoom and shear virtual tissue in the three-dimensional model of mixed reality rendering from any angle, manipulates virtual surgical instruments in real time, and observes the patient's pathological condition.
In some exemplary embodiments, after the three-dimensional model of mixed reality rendering is obtained, the method further comprises: evaluating the three-dimensional model of mixed reality rendering.
In some exemplary embodiments, evaluating the three-dimensional model of mixed reality rendering includes: evaluating the efficiency and real-time performance of the three-dimensional model of mixed reality rendering using medical images of three-dimensional CT raw data, three-dimensional CT reconstruction data and three-dimensional ultrasound raw data as comparison data.
In a second aspect, an embodiment of the application also provides a holographic three-dimensional medical data visualization system supporting mid-air gesture interaction, comprising a mixed reality display device and a gesture recognition system communicatively connected with it. The mixed reality display device is worn on the user's head and incorporates a depth camera, an inertial measurement unit and a plurality of sensors for acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user. The mixed reality display device also generates a virtual image using a stereoscopic interpolation volume rendering method according to the user environment information and the user motion information, projects the virtual image into the user's field of view with an optical model, and fuses the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering. The gesture recognition system captures the user's gestures with a gesture recognition method and performs mid-air gesture interaction according to the three-dimensional model of mixed reality rendering, thereby realizing interaction between the user and the virtual scene.
The technical scheme provided by the embodiment of the application has at least the following advantages:
the embodiment of the application provides a holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction, the method comprising the following steps: first, acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user and the user wears a mixed reality display device; then, based on the user environment information and the user motion information, generating a virtual image using a stereoscopic interpolation volume rendering method; next, projecting the virtual image into the user's field of view with an optical model and fusing the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; and finally, based on the three-dimensional model of mixed reality rendering, capturing the user's gestures with a gesture recognition method and performing mid-air gesture interaction, thereby realizing interaction between the user and the virtual scene.
To enable doctors to better perceive the spatial structure of a patient's target area, the application provides a holographic three-dimensional medical data visualization system suited to mid-air gesture interaction for surgical subjects. It covers preoperative planning and holographic three-dimensional visualization of tissue structures, uses augmented reality to help the surgeon anticipate tissue structure information, shears tissues and organs under the guidance of virtual information, and combines registration tracking with mid-air gesture interaction to help the operator perform surgery naturally, thereby turning traditional surgery into surgery that can be anticipated and accurately guided.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, which are not to be construed as limiting the embodiments unless specifically indicated otherwise.
FIG. 1 is a schematic flow chart of a holographic three-dimensional medical data visualization method supporting mid-air gesture interaction according to an embodiment of the present application;
fig. 2 is a schematic diagram of Vuforia-based image recognition and virtual-real registration according to an embodiment of the present application;
fig. 3 is a schematic diagram of a holographic gesture interaction principle according to an embodiment of the present application.
Detailed Description
As noted in the background, prior art volume rendering uses direct volume rendering to enhance visual and spatial perception of three-dimensional datasets. When performing volume rendering on the GPU, existing methods have not fully achieved the quality and performance required for medical augmented reality. The methods proposed in the prior art are therefore limited to rendering polygonal surface data and cannot meet the efficiency requirements that XR-assisted diagnostic systems place on volume rendering.
Unlike conventional two-dimensional display techniques for medical data, XR can reveal three-dimensional anatomy more truly and accurately during intraoperative diagnosis and preoperative surgical planning. With the development of GPU technology, volume rendering has made tremendous progress in recent years, using direct volume rendering to enhance visual and spatial perception of three-dimensional datasets. One related art proposes ClearView for segmentation-free, real-time focus-and-context visualization of three-dimensional data. ClearView draws the viewer's attention to a focal region in which the important features of the three-dimensional data are embedded, and the transparency of the surrounding context layers is adjusted according to their curvature characteristics or their distance from the focal layer.
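As a reading of that description, the context-layer transparency can be written as a function of the distance to the focus point and of local surface curvature. The sketch below is one plausible formulation of such a rule, not ClearView's published equations; all parameter names and constants are assumptions made for illustration.

```python
# ClearView-style focus+context: a context-layer fragment becomes more opaque
# the farther it is from the focus, while high-curvature features stay visible.
# This formula is an illustrative reading of the description above.

import numpy as np

def context_alpha(fragment_pos, focus_pos, focus_radius=0.1,
                  curvature=0.0, k_dist=1.0, k_curv=0.5):
    """Opacity of a context-layer fragment near the focal region."""
    dist = np.linalg.norm(np.asarray(fragment_pos) - np.asarray(focus_pos))
    # fully transparent inside the focus radius, fading back in outside it
    fade = np.clip((dist - focus_radius) * k_dist, 0.0, 1.0)
    # high-curvature features stay more visible to preserve shape cues
    return float(np.clip(fade + k_curv * curvature, 0.0, 1.0))
```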
Most early interaction techniques were designed from a device-centric perspective, for example using two-dimensional touch-screen input, a stylus, a keyboard and device sensors to interact with virtual objects. Another related work, after a comprehensive analysis of immersive visualization, observes that an immersive interactive interface can display data using human stereoscopic vision, extending the data expression space from a two-dimensional plane to the three-dimensional space around the user; that it provides mobility, so the user's physical working environment is no longer limited to a fixed office desktop; and that it provides natural interaction modes such as gestures, making interaction more intuitive and allowing the user to interact with data in parallel across multiple channels.
Since existing methods have not fully achieved the quality and performance required for medical augmented reality, the methods proposed so far are limited to rendering polygonal surface data; although they can depict the relevant anatomy and structures, they are too time-consuming to meet the efficiency requirements that XR-assisted diagnostic systems place on volume rendering. Analysis of the relevant literature and cases shows that a medical auxiliary diagnostic system should be efficient and should increase rendering speed as much as possible.
To solve these technical problems, an embodiment of the application provides a holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction, the method comprising the following steps: first, acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user and the user wears a mixed reality display device; then, based on the user environment information and the user motion information, generating a virtual image using a stereoscopic interpolation volume rendering method; next, projecting the virtual image into the user's field of view with an optical model and fusing the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; and finally, based on the three-dimensional model of mixed reality rendering, capturing the user's gestures with a gesture recognition method and performing mid-air gesture interaction, thereby realizing interaction between the user and the virtual scene. Through efficient three-dimensional volume rendering and a real-time, accurate interaction design, the embodiment of the application enables a user to move, rotate, zoom and shear virtual tissue from any angle, operate virtual surgical instruments in real time and observe the pathological changes of a patient, realizing mid-air gesture interaction between the user and the virtual scene.
Embodiments of the present application are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that numerous specific details are set forth in the various embodiments in order to provide a thorough understanding of the present application; the claimed technical solution can, however, be realized without these details, and various changes and modifications can be made based on the following embodiments.
Referring to fig. 1, an embodiment of the present application provides a holographic three-dimensional medical data visualization method supporting mid-air gesture interaction, comprising:
s1, acquiring user environment information and user motion information; the user environment information is image information of a real scene around the user; the user wears a mixed reality display device.
S2, generating a virtual image using a stereoscopic interpolation volume rendering method based on the user environment information and the user motion information.
S3, projecting the virtual image into the user's field of view using the optical model, and fusing the virtual image with the real scene to obtain the three-dimensional model of mixed reality rendering.
S4, based on the three-dimensional model of mixed reality rendering, capturing the user's gestures with a gesture recognition method and performing mid-air gesture interaction, thereby realizing interaction between the user and the virtual scene.
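Read as a per-frame loop, steps S1-S4 chain together as follows. This Python sketch only illustrates that control flow; the classes and method names (`MixedRealityDevice`-style `device.poll_sensors`, `renderer.render_volume`, and so on) are hypothetical placeholders, not APIs from the application or the HoloLens 2 SDK.

```python
# Minimal per-frame control-flow sketch of steps S1-S4.
# All classes and methods below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class FrameInputs:
    environment_images: list   # S1: images of the real scene around the user
    head_pose: tuple           # S1: user motion (position, orientation)
    hand_joints: list          # raw hand-tracking data consumed in S4

def run_frame(device, renderer, recognizer, scene):
    # S1: acquire user environment and motion information from the headset sensors
    inputs = device.poll_sensors()          # -> FrameInputs

    # S2: generate the virtual image with interpolated volume rendering,
    #     view-dependent on the user's current head pose
    virtual_image = renderer.render_volume(scene.volume,
                                           view_pose=inputs.head_pose)

    # S3: project the virtual image into the user's field of view; the optical
    #     fusion with the real scene happens in the headset's waveguide
    device.project(virtual_image)

    # S4: capture gestures and map them to mid-air interactions
    gesture = recognizer.classify(inputs.hand_joints)
    if gesture is not None:
        scene.apply(gesture)                # move / rotate / zoom / shear
```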
In some embodiments, the mixed reality display device is the head-mounted mixed reality display device HoloLens 2, and the optical model is the optical model in the head-mounted mixed reality display device HoloLens 2.
The application provides an interactive three-dimensional medical data holographic visualization system adapted to the HoloLens 2 head-mounted display device. Through efficient three-dimensional volume rendering and a real-time, accurate interaction design, the system enables a surgeon to move, rotate, zoom and shear virtual tissue from any angle, operate virtual surgical instruments in real time, observe the pathological changes of a patient, and intuitively and accurately diagnose a region of interest. In experiments on HoloLens 2 with three-dimensional CT raw data, three-dimensional CT reconstruction data and three-dimensional ultrasound raw data, the method shows advantages in rendering quality and speed over advanced volume rendering algorithm models, and the feasibility of the system is verified with HoloLens 2 rendering speeds of 42.00±5.80 FPS, 60.00±1.80 FPS and 42.05±5.55 FPS respectively.
In some embodiments, the mixed reality display device incorporates a depth camera, an inertial measurement unit, and a plurality of sensors to obtain user environment information and user motion information.
Specifically, the application uses Microsoft's new-generation mixed reality head-mounted display device HoloLens 2 to realize mixed reality display of the three-dimensional model. HoloLens 2 fuses virtual images with the real scene by using multiple sensors, such as a depth camera, an inertial measurement unit and optical sensors, to acquire image information of the real scene and the user's motion information; this information is transmitted to a computer for processing and analysis, and a virtual image is finally generated and projected into the user's field of view through the optical hardware of HoloLens 2, achieving the mixed reality display effect. In the mixed reality display of HoloLens 2, fusing the virtual image with the real scene is the key to realizing mixed reality. To achieve this, HoloLens 2 employs a variety of techniques, including optics, sensing and computer graphics. Optics is the basis of mixed reality display: a transparent waveguide projects the virtual image into the user's field of view, and optical principles such as reflection and refraction fuse the virtual image with the real scene.
Sensing perceives the environment around the user and the user's motion information and transmits this information to a computer for processing and analysis. HoloLens 2 uses various sensors, including a depth camera, an inertial measurement unit and optical sensors, which acquire the user's position, gestures and other information in real time, enabling accurate tracking and positioning of the user so that the position and size of the virtual image can be adjusted to blend seamlessly with the real scene. Computer graphics generates the virtual image and displays it in the user's field of view: in HoloLens 2, the virtual image is turned into a realistic three-dimensional model through 3D modeling and rendering techniques, and attributes such as its position, size and transparency are adjusted in real time to fuse it with the real scene.
In some embodiments, a recognition camera and a tracking camera are also provided in the mixed reality display device; the recognition camera is used to detect and recognize specific images, objects or scenes in the real scene; the tracking camera is used to track the position and orientation of the camera and to place the virtual image in the real world according to the output of the recognition camera, so as to realize virtual-real registration.
The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction designed by the application uses the Vuforia engine, as shown in fig. 2. Through the built-in cameras of the mixed reality device (the recognition camera and the tracking camera), Vuforia can locate specific patterns using its image recognition and tracking functions. The basic architecture of Vuforia comprises a recognition engine and a tracking engine: the recognition engine detects and recognizes a particular image, object or scene in the real world, while the tracking engine tracks the position and orientation of the camera and, based on the output of the recognition engine, accurately places virtual content in the real world. With this method, the application can realize accurate virtual-real registration.
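The virtual-real registration described above amounts to composing two rigid transforms: the marker pose reported by the recognition side and the headset pose reported by the tracking side. Below is a minimal NumPy sketch under the assumption that poses are given as 4×4 homogeneous matrices; the function names are illustrative and are not Vuforia's API.

```python
# Sketch of virtual-real registration: place a virtual model at the pose of a
# recognized image target. The pose inputs mimic what a tracking engine such as
# Vuforia reports; names and conventions here are assumptions for illustration.

import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def register_model(marker_pose_in_camera: np.ndarray,
                   camera_pose_in_world: np.ndarray,
                   model_offset: np.ndarray = np.eye(4)) -> np.ndarray:
    """World-space pose of the virtual model anchored to a tracked marker.

    camera_pose_in_world : from the tracking camera (head tracking)
    marker_pose_in_camera: from the recognition camera (image target)
    model_offset         : fixed offset of the model relative to the marker
    """
    return camera_pose_in_world @ marker_pose_in_camera @ model_offset
```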
In some embodiments, fusing the virtual image and the real scene to obtain a three-dimensional model of mixed reality rendering includes: converting the virtual image into a three-dimensional model through 3D modeling and rendering techniques, adjusting the attributes of the virtual image in real time, and fusing the three-dimensional model of the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; where the attributes of the virtual image include its position, size, and transparency.
In some embodiments, the mixed reality rendered three-dimensional model includes three-dimensional models of a plurality of different angles; wherein the three-dimensional models of different angles correspond to the viewing angle of the user.
Specifically, in the holographic three-dimensional medical data visualization method supporting mid-air gesture interaction provided by the embodiment of the application, mixed reality rendering is performed with HoloLens 2 according to the illumination model. After model rendering is completed, the model is displayed at different angles based on the observation angle and position of the HoloLens 2 wearer.
In some embodiments, during mid-air gesture interaction, the user uses gestures to move, rotate, zoom and shear virtual tissue in the three-dimensional model of mixed reality rendering from any angle, manipulates virtual surgical instruments in real time, and observes the patient's pathology.
Specifically, the holographic three-dimensional medical data visualization method supporting mid-air gesture interaction provided by the application adopts portable gesture-based holographic visualization interaction, a human-computer interaction mode based on holographic projection and gesture recognition. A virtual three-dimensional scene is projected into the real scene, and the user's gestures are captured through gesture recognition to realize interaction, providing a natural and intuitive interaction mode between the user and the virtual scene that supports finer and more flexible operation.
The interaction recognizes gestures by capturing the user's hand movements and converts them into corresponding instructions, thereby realizing control of the virtual scene. Training and recognition on a large amount of gesture data improve the accuracy, real-time performance and stability of gesture recognition.
The principle of the gesture interaction method is shown in fig. 3. The gesture recognition system uses devices such as cameras or depth cameras to capture the user's hand movements and convert them into digital signals. These signals are fed into a computer and processed by algorithms and models to recognize different gestures, which may represent different instructions such as swiping, zooming or rotating. The gesture recognition system is trained extensively to recognize different gestures.
Computer vision techniques help the gesture recognition system recognize specific patterns and shapes in images so that different gestures are recognized accurately. For example, information such as finger positions and the direction, speed and extent of a gesture is detected to help the system determine the user's gesture. The interaction method supports moving and rotating the whole model as well as zooming in and out on local key parts.
In addition, the user may make cuts through gestures, which enables the surgeon to view arbitrary cross sections of the target area and thus observe and analyze the patient's lesions without blind spots.
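As an illustration of how captured hand signals become instructions, the sketch below classifies a two-hand pinch into the zoom and rotate commands mentioned above. The input format (per-hand fingertip positions in metres) and all thresholds are assumptions made for this example, not values from the application.

```python
# Illustrative mapping from tracked hand data to the interaction commands the
# text describes (move, rotate, zoom, shear/cut). Thresholds and input format
# are assumptions for the sketch.

import numpy as np

PINCH_THRESHOLD = 0.02   # metres between thumb tip and index tip (assumed)

def is_pinching(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """A hand counts as 'grabbing' when thumb and index tips nearly touch."""
    return np.linalg.norm(thumb_tip - index_tip) < PINCH_THRESHOLD

def classify_two_hand_gesture(left, right, prev_left, prev_right):
    """Return ('zoom', factor) or ('rotate', degrees) for two pinching hands."""
    d_now = np.linalg.norm(left - right)
    d_prev = np.linalg.norm(prev_left - prev_right)
    if abs(d_now - d_prev) > 0.005:          # hands moving apart/together
        return ('zoom', d_now / max(d_prev, 1e-6))
    # otherwise, rotation: angle between the previous and current hand axes
    v_now, v_prev = right - left, prev_right - prev_left
    cos = np.dot(v_now, v_prev) / (np.linalg.norm(v_now) * np.linalg.norm(v_prev))
    return ('rotate', float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))))
```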
In some embodiments, after the three-dimensional model of mixed reality rendering is obtained, the method further comprises: evaluating the three-dimensional model of mixed reality rendering.
In some embodiments, evaluating the three-dimensional model of mixed reality rendering includes: evaluating the efficiency and real-time performance of the three-dimensional model of mixed reality rendering using medical images of three-dimensional CT raw data, three-dimensional CT reconstruction data and three-dimensional ultrasound raw data as comparison data.
The application uses two types of medical data, CT images and ultrasound images, to test and evaluate the efficiency and real-time performance of the three-dimensional model of mixed reality rendering. For each type of medical data, the experiment proceeds as follows:
(1) Obtain three-dimensional CT raw data of size (270, 512, 512), three-dimensional CT reconstruction data with a slice resolution of 512×512 and a slice thickness of 0.7-0.8 mm, or three-dimensional ultrasound raw data with a resolution of 183×115×126.
(2) Adjust the transfer function to emphasize and classify the features of interest in the data, so as to faithfully restore the materials of regions such as bone, blood vessels or soft tissue (see the sketch after this list).
(3) The surgeon wears HoloLens 2 and interacts with the medical three-dimensional data.
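Step (2) can be illustrated with a simple one-dimensional transfer function that maps CT intensities to colour and opacity. The sketch below uses piecewise-linear interpolation between control points; the Hounsfield-unit breakpoints and RGBA values are typical approximate values chosen for illustration, not parameters disclosed in the application.

```python
# Sketch of step (2): a 1D transfer function mapping CT intensities to colour
# and opacity so that bone, vessels and soft tissue are emphasised differently.
# HU breakpoints below are approximate illustrative values.

import numpy as np

# (HU threshold, RGBA) control points: soft tissue faint, vessels red, bone white
CONTROL_POINTS = [
    (-1000, (0.0, 0.0, 0.0, 0.00)),   # air: fully transparent
    (   40, (0.8, 0.5, 0.4, 0.05)),   # soft tissue: barely visible
    (  300, (0.9, 0.2, 0.2, 0.30)),   # contrast-enhanced vessels
    ( 1000, (1.0, 1.0, 0.95, 0.90)),  # bone: nearly opaque
]

def transfer_function(hu: np.ndarray) -> np.ndarray:
    """Piecewise-linear RGBA lookup for an array of Hounsfield values."""
    xs = np.array([p[0] for p in CONTROL_POINTS], dtype=float)
    ys = np.array([p[1] for p in CONTROL_POINTS], dtype=float)
    # interpolate each RGBA channel independently along the HU axis
    return np.stack([np.interp(hu, xs, ys[:, c]) for c in range(4)], axis=-1)
```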
The efficiency and real-time performance of the medical interactive visualization method are then tested and evaluated as follows.
To verify the effectiveness of the medical interactive visualization method, the application designed experiments comparing three typical medical images (three-dimensional CT raw data, three-dimensional CT reconstruction data and three-dimensional ultrasound raw data) rendered with six volume rendering algorithm models, among them a base model (no illumination), an illumination model, back-to-front direct volume rendering with a cubic convolution interpolation model, an early ray termination model with a cubic convolution interpolation model, and an early ray termination model. The algorithm of the application has the highest contrast resolution: tiny vessels and structures on the heart can be seen more clearly, and appropriate texture analysis can be performed within homogeneous tissue. The algorithm model is well suited to myocardial assessment; a sphere centered on the lesion can focus on the lesion area, and various tissue structures are displayed in a three-dimensional composite manner with their boundaries and contours highlighted.
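The sketch below shows the acceleration the comparison refers to: a minimal front-to-back ray marcher whose compositing stops once accumulated opacity approaches 1 (early ray termination), so occluded samples are never fetched. It assumes a transfer function like the one sketched earlier and uses trilinear sampling (SciPy's `map_coordinates` with `order=1`) as a stand-in for cubic convolution interpolation; it illustrates the technique, not the application's exact algorithm.

```python
# Minimal front-to-back ray marching with early ray termination over an HU grid.

import numpy as np
from scipy.ndimage import map_coordinates

def march_ray(volume, tf, origin, direction, step=0.5, max_steps=1024,
              opacity_cutoff=0.99):
    """Composite one ray through `volume` using transfer function `tf`."""
    color = np.zeros(3)
    alpha = 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        # trilinear sample at the current position (order=1)
        hu = map_coordinates(volume, pos.reshape(3, 1), order=1, cval=-1000.0)[0]
        r, g, b, a = tf(np.array([hu]))[0]
        # front-to-back "over" compositing
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:   # early ray termination: ray is saturated
            break
        pos += step * d
        if not ((0 <= pos) & (pos < np.array(volume.shape))).all():
            break                     # ray has left the volume
    return color, alpha
```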
To verify the real-time performance of the medical interactive visualization method, 20 interactive visualization experiments were carried out, and the average number of frames per second was recorded and calculated. Given that the highest displayable frame rate of HoloLens 2 is 60 FPS, for the three-dimensional CT raw data the base model achieved the highest rendering speed of 59.95±2.25 FPS, close to the maximum HoloLens 2 can display, followed by the early ray termination model (the method of the application) at 40.00±5.80 FPS; however, in terms of faithfully restoring visual appearance the base model cannot provide an effective spatial reference, so the application is clearly superior to it. For the three-dimensional ultrasound raw data and the three-dimensional CT reconstruction data, the model of the application achieved the highest rendering speeds of 60.00±1.80 FPS and 42.05±5.55 FPS respectively; on the ultrasound image the rendering speed reached the highest display frame rate of HoloLens 2. Compared with the other five representative methods, the method therefore achieves the best rendering speed and better real-time performance, and the medical interactive visualization system provided by the application can respond quickly to pose changes of the three-dimensional data.
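The evaluation protocol — logging frame rates over repeated trials and reporting mean ± standard deviation FPS — can be sketched as follows. The `measure_fps` helper and the example values are illustrative, not the instrumentation actually used on HoloLens 2.

```python
# Sketch of the real-time evaluation: frame rates are measured per trial and
# summarised as mean ± standard deviation FPS, the form used in the results above.

import statistics
import time

def measure_fps(render_one_frame, duration_s: float = 10.0) -> float:
    """Average FPS of `render_one_frame` over roughly `duration_s` seconds."""
    frames, start = 0, time.perf_counter()
    while time.perf_counter() - start < duration_s:
        render_one_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

def summarise(trial_fps: list[float]) -> str:
    """Mean ± standard deviation over all trials, e.g. '42.05 ± 5.55 FPS'."""
    mean = statistics.mean(trial_fps)
    sd = statistics.stdev(trial_fps)
    return f"{mean:.2f} ± {sd:.2f} FPS over {len(trial_fps)} trials"

# e.g. twenty trials, as in the experiment (values here are made up):
# print(summarise([59.2, 60.0, 58.7, 59.5, 60.0]))
```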
In summary, to help doctors better perceive the spatial structure of a patient's target area, a holographic three-dimensional medical data visualization system suited to mid-air gesture interaction for surgical medicine was developed. It covers preoperative planning and holographic three-dimensional visualization of tissue structures, uses augmented reality to help the surgeon perceive tissue structure information, shears tissues and organs under the guidance of virtual information, and combines registration tracking with mid-air gesture interaction to help the operator perform surgery naturally and intuitively, thereby turning traditional surgery into surgery that can be anticipated and accurately guided. According to the characteristics of surgical medicine, an applicable medical visualization prototype system was developed, its interaction modes were formulated, and the feasibility of the system was verified with three-dimensional CT raw data, three-dimensional CT reconstruction data and three-dimensional ultrasound raw data. On this basis, in-depth research and exploration will determine its practical clinical value, and continuous improvement and optimization will lay a solid foundation for the future.
The application uses mid-air gesture interaction and three-dimensional medical data visualization technology to assist doctors in visual diagnosis. The method comprises two strategies: spatial interpolation volume rendering based on the illumination model, and holographic mid-air gesture interaction. For each strategy, multiple experiments were performed to verify its effectiveness. Compared with representative methods, the method preserves visual quality while reaching the fastest rendering speed, and the interaction algorithm meets the real-time and precision requirements of assisted diagnosis. The method can therefore provide users with a more intuitive three-dimensional medical data visualization effect in holographic XR environments.
The embodiment of the application also provides a holographic three-dimensional medical data visualization system supporting mid-air gesture interaction, comprising a mixed reality display device and a gesture recognition system communicatively connected with it. The mixed reality display device is worn on the user's head and incorporates a depth camera, an inertial measurement unit and a plurality of sensors for acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user. The mixed reality display device also generates a virtual image using a stereoscopic interpolation volume rendering method according to the user environment information and the user motion information, projects the virtual image into the user's field of view with the optical model, and fuses the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering. The gesture recognition system captures the user's gestures with a gesture recognition method and performs mid-air gesture interaction according to the three-dimensional model of mixed reality rendering, thereby realizing interaction between the user and the virtual scene.
Compared with the prior art, the holographic three-dimensional medical data visualization system supporting mid-air gesture interaction has the following advantages. Although many medical visualization systems exist, they are mainly based on two-dimensional screen display, so holographic display of medical images remains limited and multi-view interactive operation is not possible. The surgeon operates on the screen with a mouse, which greatly limits the surgeon's field of view compared with the mid-air gesture interaction used in our system; this interaction mode is far from the natural approach of a real surgical scene, limits the working space in which the surgeon can analyze medical images, and risks diagnostic errors caused by the visual occlusion inherent in two-dimensional images. In contrast, in our system the surgeon wearing HoloLens 2 can move freely while viewing the virtual-real fused scene and performing surgical tasks. Our system can improve the doctor's comfort during diagnosis and further optimize the treatment outcome for patients. The surgeon can also present medical knowledge to the patient intuitively, helping the patient better understand and accept the treatment and thereby relieving the patient's psychological stress and anxiety.
Through the above technical scheme, the embodiment of the application provides a holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction, the method comprising the following steps: first, acquiring user environment information and user motion information, where the user environment information is image information of the real scene around the user and the user wears a mixed reality display device; then, based on the user environment information and the user motion information, generating a virtual image using a stereoscopic interpolation volume rendering method; next, projecting the virtual image into the user's field of view with an optical model and fusing the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; and finally, based on the three-dimensional model of mixed reality rendering, capturing the user's gestures with a gesture recognition method and performing mid-air gesture interaction, thereby realizing interaction between the user and the virtual scene.
To enable doctors to better perceive the spatial structure of a patient's target area, the application provides a holographic three-dimensional medical data visualization system suited to mid-air gesture interaction for surgical subjects. It covers preoperative planning and holographic three-dimensional visualization of tissue structures, uses augmented reality to help the surgeon anticipate tissue structure information, shears tissues and organs under the guidance of virtual information, and combines registration tracking with mid-air gesture interaction to help the operator perform surgery naturally, thereby turning traditional surgery into surgery that can be anticipated and accurately guided.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a storage medium, the program comprising instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps of the methods of the embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the application, and that various changes in form and detail may be made without departing from the spirit and scope of the application; the scope of the application is therefore defined by the appended claims.

Claims (10)

1. A holographic three-dimensional medical data visualization method supporting mid-air gesture interaction, characterized by comprising the following steps:
acquiring user environment information and user motion information; the user environment information is image information of a real scene around the user; the user wears a mixed reality display device;
generating a virtual image using a stereoscopic interpolation volume rendering method based on the user environment information and the user motion information;
projecting the virtual image into the field of view of the user using an optical model, and fusing the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering;
based on the three-dimensional model of mixed reality rendering, capturing gestures of the user through a gesture recognition method and performing mid-air gesture interaction, thereby realizing interaction between the user and a virtual scene.
2. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 1, wherein the mixed reality display device is the head-mounted mixed reality display device HoloLens 2;
the optical model is the optical model in the head-mounted mixed reality display device HoloLens 2.
3. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 1, wherein the mixed reality display device incorporates a depth camera, an inertial measurement unit and a plurality of sensors to obtain said user environment information and said user motion information.
4. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 3, wherein a recognition camera and a tracking camera are further provided in the mixed reality display device; the recognition camera is used for detecting and recognizing specific images, objects or scenes in the real scene; the tracking camera is used for tracking the position and orientation of the camera, and placing the virtual image in the real world according to the output of the recognition camera so as to realize virtual-real registration.
5. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 1, wherein fusing the virtual image and the real scene to obtain a three-dimensional model of mixed reality rendering comprises:
converting the virtual image into a three-dimensional model through 3D modeling and rendering techniques, adjusting the attributes of the virtual image in real time, and fusing the three-dimensional model of the virtual image with the real scene to obtain a three-dimensional model of mixed reality rendering; wherein the attributes of the virtual image include a position, a size, and a transparency of the virtual image.
6. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 1, wherein the three-dimensional model of mixed reality rendering comprises three-dimensional models of a plurality of different angles; wherein,
the three-dimensional models of different angles correspond to the viewing angle of the user.
7. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 1, wherein, during mid-air gesture interaction, the user uses gestures to move, rotate, zoom and shear virtual tissue in the three-dimensional model of mixed reality rendering from any angle, manipulates virtual surgical instruments in real time, and observes the patient's pathological condition.
8. The holographic three-dimensional medical data visualization method supporting mid-air gesture interaction of claim 1, further comprising, after obtaining the three-dimensional model of mixed reality rendering:
evaluating the three-dimensional model of mixed reality rendering.
9. The method of claim 8, wherein evaluating the three-dimensional model of mixed reality rendering comprises:
evaluating the efficiency and real-time performance of the three-dimensional model of mixed reality rendering using medical images of three-dimensional CT raw data, three-dimensional CT reconstruction data and three-dimensional ultrasound raw data as comparison data.
10. A holographic three-dimensional medical data visualization system supporting mid-air gesture interaction, characterized by comprising a mixed reality display device and a gesture recognition system communicatively connected with the mixed reality display device;
the mixed reality display device is worn on the head of a user; the mixed reality display device incorporates a depth camera, an inertial measurement unit and a plurality of sensors for acquiring user environment information and user motion information; the user environment information is image information of the real scene around the user;
the mixed reality display device also generates a virtual image using a stereoscopic interpolation volume rendering method according to the user environment information and the user motion information; the virtual image is projected into the field of view of the user with an optical model, and the virtual image is fused with the real scene to obtain a three-dimensional model of mixed reality rendering;
the gesture recognition system is used for capturing gestures of the user through a gesture recognition method and performing mid-air gesture interaction according to the three-dimensional model of mixed reality rendering, thereby realizing interaction between the user and the virtual scene.
CN202310794023.3A 2023-06-29 2023-06-29 Holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction Pending CN116909442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310794023.3A CN116909442A (en) Holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310794023.3A CN116909442A (en) Holographic three-dimensional medical data visualization method and system supporting mid-air gesture interaction

Publications (1)

Publication Number Publication Date
CN116909442A (en) 2023-10-20

Family

ID=88355567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310794023.3A Pending CN116909442A (en) 2023-06-29 2023-06-29 Holographic three-dimensional medical data visualization method and system capable of isolating gesture interaction

Country Status (1)

Country Link
CN (1) CN116909442A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination