CN114723893A - Organ tissue spatial relationship rendering method and system based on medical images - Google Patents
- Publication number
- CN114723893A (application number CN202210447078.2A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- organ
- image
- dimensional
- spatial relationship
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/55—Radiosity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Generation (AREA)
Abstract
The invention belongs to the technical field of medical image rendering, and discloses a method and system for rendering the spatial relationships of organ tissues based on medical images. The method comprises the following steps: normalizing the gray values of an input image and denoising it, then extracting the contours of the target organs and a region of interest (ROI); segmenting the image, generating three-dimensional voxel data for the corresponding organ tissues, and extracting a surface three-dimensional mesh model from the voxel data; selecting, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode; and writing the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, then obtaining the final reconstructed image through the rendering pipeline. The scheme can intelligently identify different organ tissues and display the spatial wrapping relationships among organs semi-transparently, greatly improving physicians' diagnostic efficiency.
Description
Technical Field
The invention belongs to the technical field of medical image rendering, and in particular relates to a method and system for rendering the spatial relationships of organs and tissues based on medical images.
Background
Common medical images are two-dimensional image sequences. Physicians judge the spatial relationships of a patient's organs and tissues from these images, which is particularly important for surgical planning, and the way the images are rendered strongly affects how efficiently that judgment can be made.
The existing rendering techniques for expressing the spatial relationships of organ tissues, and their problems, are mainly as follows:
1. Single-image rendering: a physician must browse every image in the sequence to reach a judgment, which is very time-consuming; each image also expresses only a single viewing angle, so misjudgments are easy to make.
2. Multi-planar reconstruction: a two-dimensional post-processing method that derives arbitrary coronal, sagittal, transverse, and oblique planes of the tissues and organs from the original transverse-axial images. Although it can display a section at any orientation, it is still two-dimensional section rendering, and accurate judgment requires a reader with strong professional training and a strong sense of spatial relationships.
3. Three-dimensional volume rendering: a common approach is volume rendering based on ray tracing. It can accurately display the shape and adjacency of tissues and organs, but it lacks organ classification and identification, so different organs often receive the same rendering effect and the reader must infer the organ boundaries alone. Volume rendering is also computationally demanding and can easily cause performance problems.
4. Three-dimensional surface rendering: compared with volume rendering, surface reconstruction computes faster and demands less of the computer. The prior art also uses artificial-intelligence algorithms to segment and classify the image, solving the missing-classification problem, but when each segmented organ tissue is rendered, the spatial relationships of intertwined and interleaved organs still cannot be expressed accurately.
Disclosure of Invention
The invention aims to provide a medical image-based method and system for rendering the spatial relationships of organ tissues, which can intelligently identify different organ tissues and display the spatial wrapping relationships among organs semi-transparently, greatly improving physicians' diagnostic efficiency.
The invention provides a method for rendering the spatial relationships of organ tissues based on medical images, comprising the following steps:
S1, normalizing the gray values of the input image and denoising it, and extracting the contours of the target organs and a region of interest (ROI);
S2, segmenting the image, generating three-dimensional voxel data of the corresponding organ tissues, and extracting a surface three-dimensional mesh model from the voxel data;
S3, selecting, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode;
and S4, writing the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, and obtaining the final reconstructed image through the rendering pipeline.
Preferably, S2 specifically comprises:
S21, stacking the segmentation results of all the two-dimensional slice images along the cross-sectional axis and resampling to the original image size to obtain the three-dimensional voxel data of the organ tissues;
and S22, extracting an isosurface from the three-dimensional voxel data of the organ tissues and generating the surface three-dimensional mesh model.
Preferably, S21 is preceded by: stacking nine adjacent two-dimensional slice images along the Z axis of the sequence into a nine-channel two-dimensional image and feeding it into a deep learning network; the resulting prediction is the segmentation result for the fifth (center) slice.
Preferably, S22 is followed by: applying Laplacian smoothing to the surface three-dimensional mesh model so that it looks more realistic.
Preferably, the rendering pipeline in S4 specifically comprises: the rendering program further performing primitive assembly, rasterization, and frame buffering.
Preferably, the rendering modes in S3 specifically include:
a dynamic hash mapping mode, which builds a hash table from organ names to rendering order and adjusts it dynamically with the viewing angle to achieve fast drawing;
a depth peeling mode, which renders layers at different depths in multiple passes from far to near and composites them according to color and transparency;
and a color blending mode, which computes, at each pixel, a transparency-weighted average of the overlapping objects' colors.
Preferably, the target organs comprise: skin, bone, lung, heart, bronchus, and artery.
The invention also provides a medical image-based system for rendering the spatial relationships of organ tissues, used to implement the above rendering method and comprising:
a preprocessing module, which normalizes the gray values of the input image, denoises it, and extracts the contours of the target organs and a region of interest (ROI);
an organ data extraction module, which segments the image, generates three-dimensional voxel data of the corresponding organ tissues, and extracts a surface three-dimensional mesh model from the voxel data;
a rendering module, which selects, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode;
and an image reconstruction module, which writes the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader and obtains the final reconstructed image through the rendering pipeline.
The invention also provides an electronic device comprising a memory and a processor, the processor implementing the steps of the above medical image-based organ tissue spatial relationship rendering method when executing a computer program stored in the memory.
The invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the above rendering method.
Compared with the prior art, the medical image-based method and system for rendering the spatial relationships of organ tissues provided by the invention proceed as follows: normalize the gray values of the input image and denoise it, then extract the contours of the target organs and a region of interest (ROI); segment the image, generate three-dimensional voxel data of the corresponding organ tissues, and extract a surface three-dimensional mesh model from the voxel data; select, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode; and write the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, obtaining the final reconstructed image through the rendering pipeline. The scheme can intelligently identify different organ tissues and display the spatial wrapping relationships among organs semi-transparently, greatly improving physicians' diagnostic efficiency.
Drawings
Fig. 1 is a flowchart of an organ-tissue spatial relationship rendering method based on medical images according to the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a possible electronic device provided in the present invention;
fig. 3 is a schematic diagram of a hardware structure of a possible computer-readable storage medium according to the present invention.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
As shown in fig. 1, a method for rendering the spatial relationships of organs and tissues based on medical images according to a preferred embodiment of the invention comprises: S1, normalizing the gray values of the input image and denoising it, and extracting the contours of the target organs and a region of interest (ROI); S2, segmenting the image, generating three-dimensional voxel data of the corresponding organ tissues, and extracting a surface three-dimensional mesh model from the voxel data; S3, selecting, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode; and S4, writing the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, and obtaining the final reconstructed image through the rendering pipeline. The scheme can intelligently identify different organ tissues and display the spatial wrapping relationships among organs semi-transparently, greatly improving physicians' diagnostic efficiency.
First, an image sequence is obtained and fed into the preprocessing module of the computer. Preprocessing specifically comprises normalizing the gray values of the input image, denoising it, and extracting the contours of the target organs and a region of interest (ROI).
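As a minimal sketch of this preprocessing step: the patent does not name specific algorithms, so the example below assumes min-max scaling for gray value normalization and a small 3x3 median filter for denoising; both choices and the tiny test image are illustrative only.

```python
def normalize_gray(image, lo=None, hi=None):
    """Min-max normalize gray values to [0, 1] (an assumed scheme,
    not specified by the patent)."""
    flat = [px for row in image for px in row]
    lo = min(flat) if lo is None else lo
    hi = max(flat) if hi is None else hi
    span = (hi - lo) or 1.0
    return [[(px - lo) / span for px in row] for row in image]

def denoise_median3(image):
    """A tiny 3x3 median filter as a stand-in denoising step
    (interior pixels only; border pixels are copied unchanged)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 window values
    return out

# Illustrative 3x3 slice with a single noise spike in the center.
ct_slice = [[0, 40, 80], [40, 4000, 80], [80, 80, 120]]
clean = denoise_median3(ct_slice)  # spike replaced by local median
norm = normalize_gray(clean)       # values mapped into [0, 1]
```

The median filter removes the isolated spike before normalization, so the spike cannot compress the gray range of the whole slice.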
Then the image is segmented in the organ data extraction module, three-dimensional voxel data of the corresponding organ tissues are generated, and a surface three-dimensional mesh model is extracted from the voxel data.
The extracted target organs include skin, bone, lung, heart, bronchus, and artery. Data for other organ tissues can also be extracted according to specific requirements.
The data then enter the rendering module. First, the surface three-dimensional mesh models are given semi-transparent colors; then, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode is selected.
Finally, the surface mesh models and the chosen rendering modes are written into a vertex shader and a fragment shader, and the final reconstructed image is obtained through the rendering pipeline.
Preferably, S21 is preceded by an image segmentation step: stacking nine adjacent two-dimensional slice images along the Z axis of the sequence into a nine-channel two-dimensional image and feeding it into a deep learning network; the resulting prediction is the segmentation result for the fifth (center) slice. Segmentation separates the whole image by organ tissue, which reduces the total data volume and breaks the problem into parts, making subsequent processing more efficient.
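The nine-slice stacking described above can be sketched in plain Python. The deep learning network itself is out of scope here, so a trivial thresholding stub stands in for it; the stub, the threshold, and the toy 2x2 slices are illustrative assumptions, not from the patent.

```python
def stack_nine_slices(slices, center):
    """Stack the 9 slices around `center` along the sequence Z axis into a
    nine-channel 2D image; the prediction target is the center (5th) slice."""
    assert 4 <= center <= len(slices) - 5, "need 4 neighbors on each side"
    return [slices[center + offset] for offset in range(-4, 5)]

def predict_center_mask(nine_channel_image):
    # Placeholder for the segmentation network: it simply thresholds the
    # center channel, standing in for the real deep-learning prediction.
    center = nine_channel_image[4]
    return [[1 if px > 0.5 else 0 for px in row] for row in center]

# Illustrative sequence of 12 single-channel 2x2 slices.
sequence = [[[i / 12.0] * 2] * 2 for i in range(12)]
channels = stack_nine_slices(sequence, center=6)  # 9-channel input
mask = predict_center_mask(channels)              # mask for slice index 6
```

Sliding this window along the Z axis yields one mask per interior slice; stacking those masks is exactly the input S21 expects.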
Preferably, S22 is followed by: applying Laplacian smoothing to the surface three-dimensional mesh model so that it looks more realistic.
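One common form of Laplacian smoothing moves each vertex a fraction of the way toward the centroid of its mesh neighbors, damping the staircase spikes typical of isosurface extraction. The relaxation factor and the toy five-vertex mesh below are illustrative assumptions; the patent does not specify its exact smoothing variant.

```python
def laplacian_smooth(vertices, neighbors, iterations=1, lam=0.5):
    """Move each vertex a fraction `lam` toward the centroid of its
    neighbors; `neighbors[i]` lists the vertex indices adjacent to i."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v[:])
                continue
            cx = sum(verts[j][0] for j in nbrs) / len(nbrs)
            cy = sum(verts[j][1] for j in nbrs) / len(nbrs)
            cz = sum(verts[j][2] for j in nbrs) / len(nbrs)
            new.append([v[0] + lam * (cx - v[0]),
                        v[1] + lam * (cy - v[1]),
                        v[2] + lam * (cz - v[2])])
        verts = new
    return verts

# A spike vertex (index 0) surrounded by four flat neighbors is pulled down.
verts = [[0.0, 0.0, 1.0], [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]]
nbrs = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
smoothed = laplacian_smooth(verts, nbrs, iterations=1, lam=0.5)
```

Repeating the pass shrinks spikes further, at the cost of some volume loss, which is why a small number of iterations is typical.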
Preferably, the rendering pipeline in S4 specifically comprises: the rendering program further performing primitive assembly, rasterization, and frame buffering.
Preferably, the rendering modes in S3 specifically include:
a dynamic hash mapping mode, which builds a hash table from organ names to rendering order and adjusts it dynamically with the viewing angle to achieve fast drawing; this mode renders fastest and demands the least of the hardware, and is commonly used on mobile devices.
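The dynamic hash mapping mode can be sketched as follows, with a Python dict playing the role of the hash table. The patent does not specify how the rendering order is derived from the viewing angle, so sorting organs far-to-near by the projection of their centroids onto the view direction is an assumption, and the organ centroids are invented for illustration.

```python
def build_render_order(organ_centroids, view_dir):
    """Return {organ: order} with the organ farthest along `view_dir`
    drawn first (order 0), re-sorted whenever the view changes."""
    def depth(centroid):
        # Projection of the centroid onto the viewing direction:
        # larger projection means farther along the view ray.
        return sum(c * d for c, d in zip(centroid, view_dir))
    ranked = sorted(organ_centroids,
                    key=lambda o: depth(organ_centroids[o]),
                    reverse=True)  # far-to-near
    return {organ: i for i, organ in enumerate(ranked)}

# Illustrative centroids: skin outermost, heart innermost along z.
centroids = {"skin": (0, 0, 3), "lung": (0, 0, 1), "heart": (0, 0, 0)}
order_front = build_render_order(centroids, view_dir=(0, 0, 1))
order_back = build_render_order(centroids, view_dir=(0, 0, -1))
```

Rebuilding the table is a cheap per-frame sort over a handful of organs, which is why this mode suits low-powered devices.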
a depth peeling mode, which renders layers at different depths in multiple passes from far to near and composites them according to color and transparency; its result is closest to a real semi-transparent object (similar to looking through several layers of colored glass), giving the best visual effect, but it demands the most of the hardware and is commonly used on high-performance workstation computers.
and a color blending mode, which computes, at each pixel, a transparency-weighted average of the overlapping objects' colors. Its performance cost and rendering quality are a compromise between the other two modes: it achieves a good rendering effect while keeping performance acceptable, and serves as the default choice.
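The transparency-weighted combination described above can be illustrated with standard back-to-front alpha ("over") compositing, which is one common way to blend translucent surfaces at a pixel; the layer colors and opacities below are invented for illustration and are not taken from the patent.

```python
def blend_over(layers):
    """Composite (r, g, b, alpha) layers given back-to-front;
    each nearer layer's color is weighted by its alpha and blended
    over the accumulated color behind it. Returns the final (r, g, b)."""
    r = g = b = 0.0
    for lr, lg, lb, la in layers:
        r = lr * la + r * (1 - la)
        g = lg * la + g * (1 - la)
        b = lb * la + b * (1 - la)
    return (r, g, b)

# An opaque red organ surface seen through a 50%-transparent gray "skin".
pixel = blend_over([(1.0, 0.0, 0.0, 1.0),   # opaque red, farthest
                    (0.5, 0.5, 0.5, 0.5)])  # translucent gray, nearest
```

Because the result depends on layer order, this mode benefits directly from the far-to-near ordering maintained by the hash-table mode above.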
Fig. 2 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the invention. As shown in fig. 2, the electronic device includes a memory 1310, a processor 1320, and a computer program 1311 stored in the memory 1310 and executable on the processor 1320. When the processor 1320 executes the computer program 1311, the following steps are implemented: S1, normalizing the gray values of the input image and denoising it, and extracting the contours of the target organs and a region of interest (ROI);
S2, segmenting the image, generating three-dimensional voxel data of the corresponding organ tissues, and extracting a surface three-dimensional mesh model from the voxel data;
S3, selecting, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode;
and S4, writing the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, and obtaining the final reconstructed image through the rendering pipeline.
Please refer to fig. 3, a schematic diagram of an embodiment of a computer-readable storage medium according to the invention. As shown in fig. 3, this embodiment provides a computer-readable storage medium 1400 on which a computer program 1411 is stored; when executed by a processor, the program 1411 implements the steps: S1, normalizing the gray values of the input image and denoising it, and extracting the contours of the target organs and a region of interest (ROI);
S2, segmenting the image, generating three-dimensional voxel data of the corresponding organ tissues, and extracting a surface three-dimensional mesh model from the voxel data;
S3, selecting, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode;
and S4, writing the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, and obtaining the final reconstructed image through the rendering pipeline.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications. It is intended that the scope of the invention be defined by the claims and their equivalents.
Claims (10)
1. A method for rendering the spatial relationships of organ tissues based on medical images, characterized by comprising the following steps:
S1, normalizing the gray values of the input image and denoising it, and extracting the contours of the target organs and a region of interest (ROI);
S2, segmenting the image, generating three-dimensional voxel data of the corresponding organ tissues, and extracting a surface three-dimensional mesh model from the voxel data;
S3, selecting, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode;
and S4, writing the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader, and obtaining the final reconstructed image through the rendering pipeline.
2. The medical image-based organ tissue spatial relationship rendering method according to claim 1, wherein S2 specifically comprises:
S21, stacking the segmentation results of all the two-dimensional slice images along the cross-sectional axis and resampling to the original image size to obtain the three-dimensional voxel data of the organ tissues;
and S22, extracting an isosurface from the three-dimensional voxel data of the organ tissues and generating the surface three-dimensional mesh model.
3. The medical image-based organ tissue spatial relationship rendering method according to claim 2, wherein S21 is preceded by: stacking nine adjacent two-dimensional slice images along the Z axis of the sequence into a nine-channel two-dimensional image and feeding it into a deep learning network; the resulting prediction is the segmentation result for the fifth (center) slice.
4. The medical image-based organ tissue spatial relationship rendering method according to claim 2, wherein S22 is followed by: applying Laplacian smoothing to the surface three-dimensional mesh model so that it looks more realistic.
5. The medical image-based organ tissue spatial relationship rendering method according to claim 1, wherein the rendering pipeline in S4 specifically comprises: the rendering program further performing primitive assembly, rasterization, and frame buffering.
6. The medical image-based organ tissue spatial relationship rendering method according to claim 1, wherein the rendering modes in S3 specifically include:
a dynamic hash mapping mode, which builds a hash table from organ names to rendering order and adjusts it dynamically with the viewing angle to achieve fast drawing;
a depth peeling mode, which renders layers at different depths in multiple passes from far to near and composites them according to color and transparency;
and a color blending mode, which computes, at each pixel, a transparency-weighted average of the overlapping objects' colors.
7. The medical image-based organ tissue spatial relationship rendering method according to claim 1, wherein the target organs comprise: skin, bone, lung, heart, bronchus, and artery.
8. A medical image-based system for rendering the spatial relationships of organ tissues, used to implement the method according to any one of claims 1-7, comprising:
a preprocessing module, which normalizes the gray values of the input image, denoises it, and extracts the contours of the target organs and a region of interest (ROI);
an organ data extraction module, which segments the image, generates three-dimensional voxel data of the corresponding organ tissues, and extracts a surface three-dimensional mesh model from the voxel data;
a rendering module, which selects, for each organ tissue's surface mesh model and its usage scenario, a corresponding rendering mode;
and an image reconstruction module, which writes the surface mesh models and the chosen rendering modes into a vertex shader and a fragment shader and obtains the final reconstructed image through the rendering pipeline.
9. An electronic device comprising a memory and a processor, wherein the processor implements the steps of the medical image-based organ tissue spatial relationship rendering method according to any one of claims 1-7 when executing a computer program stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the medical image-based organ tissue spatial relationship rendering method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210447078.2A CN114723893A (en) | 2022-04-26 | 2022-04-26 | Organ tissue spatial relationship rendering method and system based on medical images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210447078.2A CN114723893A (en) | 2022-04-26 | 2022-04-26 | Organ tissue spatial relationship rendering method and system based on medical images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114723893A true CN114723893A (en) | 2022-07-08 |
Family
ID=82245801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210447078.2A Pending CN114723893A (en) | 2022-04-26 | 2022-04-26 | Organ tissue spatial relationship rendering method and system based on medical images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114723893A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024093911A1 (en) * | 2022-10-31 | 2024-05-10 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging method and ultrasonic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||