CN111833427A - Method and device for volume rendering of three-dimensional image
Method and device for volume rendering of three-dimensional image
- Publication number
- CN111833427A (application number CN202010707018.0A)
- Authority
- CN
- China
- Prior art keywords
- points
- point
- dimensional image
- target area
- volume rendering
- Prior art date: 2020-07-21
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Generation (AREA)
Abstract
The invention provides a method and a device for performing volume rendering on a three-dimensional image. The method comprises: determining depth information of a plurality of points located on a boundary of a target region of a three-dimensional image; determining a plurality of sets of start points and end points of the target region based on the depth information of the plurality of points; and performing volume rendering on the target region based on the sets of start and end points. The technical solution of the invention can improve volume rendering efficiency, ensure smooth image display, and improve interaction responsiveness.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for performing volume rendering on a three-dimensional image.
Background
Three-dimensional reconstruction and visualization of images can help people obtain detailed information in an image and guide corresponding judgments. As a three-dimensional visualization method, volume rendering can fully mine the information in an image and present its detailed features. However, existing volume rendering methods suffer from a heavy computational load, low rendering speed, and interaction that easily becomes unsmooth.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for volume rendering a three-dimensional image, which can improve volume rendering efficiency, ensure smooth image display, and improve interaction responsiveness.
In a first aspect, an embodiment of the present invention provides a method for volume rendering a three-dimensional image, including: determining depth information of a plurality of points located on a boundary of a target area of a three-dimensional image; determining a plurality of sets of start points and end points of the target area based on the depth information of the plurality of points; the target region is volume rendered based on the sets of start and end points.
In some embodiments of the present invention, determining depth information for a plurality of points located on a boundary of a target area of a three-dimensional image comprises: determining a triangular patch in a plurality of voxel units on the boundary of the target region by using a surface rendering algorithm; depth information for a plurality of points is determined based on the triangular patch.
In some embodiments of the present invention, determining depth information for a plurality of points based on a triangular patch comprises: slicing the target area by depth peeling to obtain a plurality of surfaces having depth values; depth information for the plurality of points is determined based on intersections of the plurality of surfaces having depth values and the triangular patch.
In some embodiments of the invention, the surface rendering algorithm comprises a marching cubes algorithm or a marching tetrahedra algorithm.
In some embodiments of the present invention, the method of volume rendering a three-dimensional image further comprises: and segmenting the three-dimensional image by using the deep learning network model to determine the target area.
In some embodiments of the invention, the algorithm employed for volume rendering includes a ray casting algorithm, wherein determining sets of start and end points for the target region based on depth information of the plurality of points comprises: when a ray determined based on the ray casting algorithm passes through a first point and a second point of the plurality of points, taking the point with the smaller depth value among the two as the start point and the point with the larger depth value as the end point.
In some embodiments of the invention, the three-dimensional image is a medical image of the lung and the target region is an organ of the lung.
In a second aspect, an embodiment of the present invention provides an apparatus for volume rendering a three-dimensional image, including: a first determination module for determining depth information of a plurality of points located on a boundary of a target area of a three-dimensional image; a second determining module for determining a plurality of sets of start and end points of the target area based on the depth information of the plurality of points; and the volume rendering module is used for performing volume rendering on the target region based on the plurality of groups of starting points and end points.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program for executing the method for volume rendering a three-dimensional image according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including: a processor; a memory for storing processor executable instructions, wherein the processor is adapted to perform the method of volume rendering a three-dimensional image as described in the first aspect above.
The embodiments of the invention provide a method and a device for volume rendering of a three-dimensional image: by determining multiple sets of start and end points of the target region, casting rays that travel from start point to end point, and taking sampling points only between them, volume rendering efficiency can be improved, smooth image display can be ensured, and interaction responsiveness can be improved.
Drawings
Fig. 1 is a flowchart illustrating a method for volume rendering a three-dimensional image according to an exemplary embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a triangular patch located in a voxel according to another exemplary embodiment of the present invention.
Fig. 3 is a flowchart illustrating a method for volume rendering a three-dimensional image according to another exemplary embodiment of the present invention.
Fig. 4 is a schematic structural diagram illustrating an apparatus for volume rendering a three-dimensional image according to an exemplary embodiment of the present invention.
Fig. 5 is a block diagram illustrating an electronic device for volume rendering a three-dimensional image according to an exemplary embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Since the embodiments of the present application relate to medical image processing and the application of neural networks, related terms and concepts, such as those concerning neural networks, are briefly described below for ease of understanding.
(1) Digital radiography: Digital Radiography (DR) is a digital imaging technology developed on the basis of digital fluorography (DF). It uses an image intensifier tube as the information carrier to receive the X-rays that have penetrated the human body, captures the resulting image with a video camera, and converts it into a digital signal.
(2) CT (Computed Tomography): electronic computed tomography uses precisely collimated X-ray beams, gamma rays, ultrasonic waves, etc., together with a highly sensitive detector, to scan cross-sections of a part of the human body one by one, and can be used for the examination of various diseases. According to the radiation used, CT can be classified into X-ray CT (X-CT) and gamma-ray CT (γ-CT). Chest CT is a method of examining the chest by X-ray computed tomography.
(3) Deep learning: Deep Learning (DL) is a type of machine learning that implements artificial intelligence in computing systems by building artificial neural networks (ANNs) with hierarchical structures.
(4) Convolutional neural network: Convolutional Neural Networks (CNNs) are a class of feedforward neural networks that contain convolution computations and have a deep structure, and are among the representative algorithms of deep learning. Constructed in imitation of the biological visual perception mechanism, a convolutional neural network can perform both supervised and unsupervised learning. Owing to the parameter sharing of convolution kernels in its hidden layers and the sparsity of inter-layer connections, it can learn grid-like features such as pixels and audio with a small amount of computation, achieves stable results, and imposes no additional feature-engineering requirements on the data.
(5) Image segmentation: Segmentation includes semantic segmentation, an extension of background separation that requires separating image parts with different semantics, and instance segmentation, an extension of the detection task that requires describing an object's contour (finer than a detection box). Segmentation is a pixel-level description of an image that gives each pixel a class meaning; it is suitable for scenes with high understanding requirements, such as separating road from non-road in autonomous driving.
(6) Ray casting: the ray casting (RayCasting) method is a direct volume rendering algorithm based on an image sequence. A ray is emitted from each pixel of the image along a fixed direction (usually the viewing direction) and traverses the whole image sequence. During traversal, the sequence is sampled to obtain color information, and the color values are accumulated according to a light absorption model; once the ray has passed through the entire sequence, the accumulated color value is the color of that pixel in the rendered image.
(7) Marching cubes: the marching cubes (MC) algorithm is a classic surface rendering algorithm and the common basis of voxel-based surface rendering algorithms. Marching tetrahedra (MT) is an improvement on the MC algorithm that introduces more triangular patches; to increase rendering speed, a suitable method is needed to process the geometric mesh, and the GPU's geometry shader provides a good solution to this problem. The basic principle: the surface of an object is a closed gray-level iso-surface whose gray value is called the threshold; inside the surface the gray values of all voxels are above this threshold, and outside it they are below.
(8) Depth peeling is a technique for ordering depth values. Its principle is intuitive: standard depth testing outputs the point with the smallest Z value in the scene, i.e., the vertex closest to the viewer; but there are also a second-closest vertex, a third-closest vertex, and so on. To display them, multi-pass rendering can be used. The first rendering pass proceeds in the normal way and yields the z-value of each vertex on the surface nearest to the viewer. In the second pass, each vertex's depth value is compared with the previous depth values, so that the Z values obtained in the first pass are peeled away; subsequent passes proceed by analogy.
The three-dimensional visualization of the medical image can show the three-dimensional morphology of tissues, organs or other target areas in the medical image, and is helpful for a doctor to obtain the detailed characteristics of the part to be diagnosed, so as to judge the condition of a patient according to the detailed characteristics. Volume rendering is a common method for three-dimensional visualization of medical images, and can obtain two-dimensional images with detail characteristics, high quality and three-dimensional effect.
A conventional volume rendering method generally casts rays from fixed positions, for example from screen pixels, along the viewing direction. It then resamples each ray, obtains the color values of the resampled points from the data field, and composites them to obtain the color value of the screen pixel that emitted the ray, and thus the color of the whole screen.
Although such a volume rendering method preserves the detailed features in the medical image, it is prone to large data storage requirements, long computation times, low rendering speed, and unsmooth interaction.
Fig. 1 is a flowchart illustrating a method for volume rendering a three-dimensional image according to an exemplary embodiment of the present invention. The method of fig. 1 may be performed by a computing device, such as a server or a terminal. As shown in fig. 1, the method of volume rendering a three-dimensional image includes the following.
110: depth information of a plurality of points located on a boundary of a target area of a three-dimensional image is determined.
In particular, the method for volume rendering three-dimensional images can be used in medical science, scientific research or other fields requiring three-dimensional visualization of images. The embodiment of the present invention does not specifically limit the type of the three-dimensional image, that is, the method for volume rendering a three-dimensional image according to the embodiment of the present invention may be applied to three-dimensional visualization of various types of images.
In one embodiment, the three-dimensional image is a medical image, which may be obtained by computed tomography (CT), computed radiography (CR), digital radiography (DR), nuclear magnetic resonance (NMR), or ultrasound. The medical images may be images of different parts of the human body, such as medical images of the lungs, brain, etc.
For convenience of description, the method for volume rendering a three-dimensional image according to an embodiment of the present invention is described in detail below, taking the three-dimensional image as a lung medical image as an example.
The three-dimensional image includes a target region and a background region. The target region may also be referred to as a region of interest (ROI); for example, in a lung medical image, the ROI may represent the region where the lung organs are located, and the background region represents the area around the lung organs.
Alternatively, the region of interest may be a lesion region of an organ or tissue of the human body, such as the region where a lung nodule is located, or another region of the lung that the user needs to observe.
The three-dimensional image may be viewed as a three-dimensional data field and the plurality of points on the boundary of the target area may be points located on the surface of the target area. The depth information of a point may refer to a depth value of the point in the three-dimensional data field. The depth value may represent the distance of the point from the screen or the eyes of the viewer, e.g., the closer the point is to the screen, the smaller the depth value.
120: sets of start and end points of the target area are determined based on the depth information of the plurality of points.
In particular, the target in the target region (e.g., the lung) may be a volumetric structure. When the observer views the target area through the screen, the target area has a first surface close to the observer and a second surface far from the observer. It is possible to determine whether a point is located on the first surface or the second surface by the depth information of the point. A point on the first surface may be a starting point and a point on the second surface may be an ending point. A start point may correspond to an end point, forming a set of start and end points.
130: the target region is volume rendered based on the sets of start and end points.
Specifically, during volume rendering, rays may be cast starting from the viewpoint position; the point at which a ray first enters the target region is the start point, and the point at which it passes through the target region's surface for the second time (i.e., leaves it) is the end point. Obviously, the depth value of the start point is smaller than that of the end point. Here, the viewpoint is a point set in the simulated three-dimensional space. Sampling points between the start point and the end point can be determined according to a preset step length, and the color values of the sampling points (which may include the start point and the end point) can be obtained from the three-dimensional data field. Compositing (e.g., integrating) the color values of these sampling points yields a two-dimensional image with a three-dimensional effect, which visually presents the detailed features of the target region.
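For illustration only, the following is a minimal sketch of this sampling-and-compositing step, assuming a scalar volume stored in a NumPy array, nearest-neighbor sampling, and a toy transfer function; none of these details are prescribed by the patent.

```python
import numpy as np

def transfer_function(density):
    """Toy transfer function mapping a density value to (color, opacity)."""
    d = float(np.clip(density, 0.0, 1.0))
    return np.array([d, 0.5 * d, 1.0 - d]), 0.1 * d

def composite_ray(volume, start, end, step=0.5):
    """Sample between the start and end points and composite front to back."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    direction = (end - start) / length
    color, opacity = np.zeros(3), 0.0
    t = 0.0
    while t <= length and opacity < 0.99:       # early termination when opaque
        p = start + t * direction
        idx = tuple(np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1))
        c, a = transfer_function(volume[idx])
        color += (1.0 - opacity) * a * c        # accumulate per the absorption model
        opacity += (1.0 - opacity) * a
        t += step
    return color, opacity

volume = np.random.rand(64, 64, 64).astype(np.float32)
print(composite_ray(volume, start=[32, 32, 5], end=[32, 32, 58]))
```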
Because the start point of volume rendering lies on the surface of the target region rather than at the screen pixel position (the intersection of the ray with the boundary of the three-dimensional data field), the sampling process starts only from the surface of the target region. Rays travel from start point to end point, and sampling points are taken only on the segment between them, which reduces the amount of computation in volume rendering, shortens computation time, improves volume rendering efficiency, ensures smooth image display, and improves interaction responsiveness.
The embodiment of the invention thus provides a method for volume rendering a three-dimensional image in which multiple sets of start and end points of the target region are determined; when rays are cast, each ray passes through a start point and an end point, and sampling points are extracted between them, which improves volume rendering efficiency, ensures smooth image display, and improves interaction responsiveness.
According to an embodiment of the present invention, determining depth information of a plurality of points located on a boundary of a target area of a three-dimensional image includes: determining a triangular patch in a plurality of voxel units on the boundary of the target region by using a surface rendering algorithm; depth information for a plurality of points is determined based on the triangular patch.
Specifically, a three-dimensional data field can be seen as composed of a plurality of voxels, each of which may be a cube with 8 vertices; here each voxel unit refers to one voxel. For voxels located near the boundary of the target region, some of the voxel's vertices may lie in the target region while others lie in the background region, i.e., the voxel intersects the surface of the target region.
The intersections of the target region's surface with a voxel can be calculated by linear interpolation; each intersection lies on an edge of the cube (voxel), and the intersections are connected in a certain pattern to form triangular patches. The triangular patches approximate the surface of the target region arbitrarily closely, and some points on them lie exactly on that surface.
There may be multiple triangular patches in a cube, as shown in FIG. 2, with bolded vertices located in the target region and non-bolded vertices located in the background region.
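As a sketch of the classification and interpolation steps just described (assuming a NumPy scalar field; the standard 256-entry marching-cubes triangle table is omitted for brevity):

```python
import numpy as np

# Corner offsets of a unit cube, in a common marching-cubes numbering.
CORNERS = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
           (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

def cube_index(volume, x, y, z, threshold):
    """8-bit configuration index: bit i is set if corner i lies inside the surface."""
    index = 0
    for i, (dx, dy, dz) in enumerate(CORNERS):
        if volume[x + dx, y + dy, z + dz] > threshold:
            index |= 1 << i
    return index  # 0 or 255: the cube is entirely outside/inside, no patch

def edge_intersection(p0, v0, p1, v1, threshold):
    """Linear interpolation of the iso-surface crossing along one cube edge."""
    t = (threshold - v0) / (v1 - v0)
    return np.asarray(p0, float) + t * (np.asarray(p1, float) - np.asarray(p0, float))

volume = np.random.rand(8, 8, 8)
print(cube_index(volume, 3, 3, 3, threshold=0.5))
print(edge_intersection([0, 0, 0], 0.2, [1, 0, 0], 0.8, threshold=0.5))  # -> [0.5 0. 0.]
```

The index (1 to 254) selects which cube edges the iso-surface crosses; each crossing is then placed on its edge by the linear interpolation above.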
In one embodiment, the surface rendering algorithm includes the marching cubes (MC) algorithm.
In another embodiment, the surface rendering algorithm comprises a marching tetrahedra (MT) algorithm. In this embodiment, the voxel unit is a tetrahedron: a cubic voxel may be divided into several tetrahedra, and the intersections of each tetrahedron with the surface of the target region are connected to form triangular patches.
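For illustration, one common six-tetrahedron decomposition of a cubic voxel, built around the main diagonal between corners 0 and 6 in the corner numbering of the sketch above, is shown below; the patent does not fix a particular split, so this choice is an assumption.

```python
# Each entry lists four corner indices of one tetrahedron; all six share the
# main diagonal between corners 0 and 6, so together they tile the cube.
TETRAHEDRA = [
    (0, 1, 2, 6), (0, 2, 3, 6), (0, 3, 7, 6),
    (0, 7, 4, 6), (0, 4, 5, 6), (0, 5, 1, 6),
]
```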
Based on the triangular patches, points located on the surface of the target region can be determined, and the depth information of these points can then be obtained.
In one embodiment, determining depth information for a plurality of points based on the triangular patches comprises: slicing the target region by depth peeling to obtain a plurality of surfaces having depth values; depth information for the plurality of points is determined based on the intersections of these surfaces and the triangular patches.
Specifically, depth peeling is a technique for ordering depth values. Slicing the target region by depth peeling yields N curved surfaces ordered from near to far: the first surface may be closest to the screen and the Nth farthest from it. The depth values of the points on each surface can be obtained during depth peeling.
Each surface corresponds to a rendering pass, which has two depth buffers and one color buffer: one depth buffer stores the depth values of the points on the previous layer's surface, the other stores the depth values of the points on the current surface, and the color buffer stores the color values of the points on the current surface.
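The following CPU-side sketch mimics this multi-pass logic on per-pixel fragment lists: each pass keeps, for every pixel, the nearest depth strictly behind the depth stored in the previous pass. A GPU implementation would use the two depth buffers described above; the data layout here is illustrative only.

```python
import numpy as np

def peel_layers(fragment_depths, num_layers):
    """fragment_depths: one unsorted list of fragment depths per pixel."""
    layers = []
    previous = [-np.inf] * len(fragment_depths)
    for _ in range(num_layers):
        current = [
            min((d for d in frags if d > previous[i]), default=np.inf)
            for i, frags in enumerate(fragment_depths)
        ]
        layers.append(current)   # layers[0] is the nearest surface, etc.
        previous = current
    return layers

pixels = [[0.8, 0.2, 0.5], [0.9, 0.4]]   # two pixels: three and two fragments
print(peel_layers(pixels, num_layers=3))
```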
Combining the N curved surfaces with the triangular patches, the intersections of the curved surfaces and the patches are points on the surface of the target region, and the depth information of these intersections can be determined from the depth buffers obtained during depth peeling.
In this embodiment, the boundary of the target region can be accurately delineated by surface rendering, and the depth information of points on the surface of the target region can be determined by combining the triangular patches obtained by surface rendering with the N curved surfaces obtained by depth peeling.
In one embodiment, the intersections of the triangular patches with the first M curved surfaces (those closer to the screen) can be used as start points, and the intersections with the remaining N − M curved surfaces (those farther from the screen) as end points, where M is greater than or equal to 1 and less than N. Each start point corresponds to an end point.
According to an embodiment of the present invention, the method for volume rendering a three-dimensional image further includes: and segmenting the three-dimensional image by using the deep learning network model to determine the target area.
Specifically, the deep learning network model may be composed of any of a deep convolutional network, a recurrent neural network, a fully convolutional network, and the like. That is, the deep learning network model may be any model that can segment the target region; embodiments of the present invention do not limit its specific type.
In one embodiment, the deep learning network model is a U-net network model.
Alternatively, in another embodiment, the target region in the three-dimensional image may be obtained by means of manual segmentation, or by means of semi-automatic segmentation.
According to an embodiment of the present invention, the algorithm used for volume rendering includes a ray casting (RayCasting) algorithm.
In 120, when a ray determined based on the ray casting algorithm passes through a first point and a second point of the plurality of points, the point with the smaller depth value among the two is taken as the start point, and the point with the larger depth value as the end point.
Taking a sphere as the target region: after points on the sphere's surface are obtained by the depth peeling algorithm and volume rendering is performed by ray casting, a ray starting from each screen pixel passes through one point on the front spherical surface and one point on the rear spherical surface. Since the depth value of the point on the front surface is smaller than that of the point on the rear surface, the point on the front surface is the start point and the point on the rear surface is the end point.
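A minimal sketch of this selection rule, assuming each ray yields exactly two boundary intersections with known depth values:

```python
def classify_endpoints(first_point, first_depth, second_point, second_depth):
    """Smaller depth value -> start point; larger depth value -> end point."""
    if first_depth <= second_depth:
        return first_point, second_point
    return second_point, first_point

start, end = classify_endpoints((10, 10, 4), 0.2, (10, 10, 20), 0.8)
print(start, end)
```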
Each ray passes through exactly one start point and one end point. For example, if the screen size is 1080 × 1080, then 1080 × 1080 rays are emitted from the viewpoint position, and the intersections with the curved surfaces are determined for each ray.
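As an illustration of emitting one ray per screen pixel, the sketch below uses a simple pinhole camera looking down the +z axis; the camera model is an assumption, since the patent does not specify one.

```python
import numpy as np

def pixel_rays(width, height, viewpoint, fov_deg=60.0):
    """Yield (origin, direction) for one perspective ray per pixel."""
    half = np.tan(np.radians(fov_deg) / 2.0)
    origin = np.asarray(viewpoint, float)
    for y in range(height):
        for x in range(width):
            u = (2.0 * (x + 0.5) / width - 1.0) * half    # image-plane x
            v = (1.0 - 2.0 * (y + 0.5) / height) * half   # image-plane y
            d = np.array([u, v, 1.0])
            yield origin, d / np.linalg.norm(d)

# A 1080 x 1080 screen yields 1080 * 1080 rays:
rays = pixel_rays(1080, 1080, viewpoint=[0.0, 0.0, -5.0])
print(next(rays))
```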
Fig. 3 is a flowchart illustrating a method for volume rendering a three-dimensional image according to another exemplary embodiment of the present invention. Fig. 3 is an example of the embodiment of Fig. 1; the parts that are the same are not repeated here, and the description focuses on the differences. As shown in fig. 3, the method includes the following.
310: and segmenting the three-dimensional image by using the deep learning network model to determine the target area.
In particular, the deep learning network model may be a U-net network model and the three-dimensional image may be a medical image, such as a CT image. The target region may be a region in which a pulmonary organ is located.
320: a triangular patch of a plurality of voxel units on a boundary of the target region is determined using a surface rendering algorithm.
Specifically, the voxel unit herein may refer to a cubic voxel. A Marching Cubes (MC) algorithm may be employed to determine triangular patches in voxels near the boundary of the target region.
330: the target area is sliced by depth stripping to obtain a plurality of surfaces with depth values.
Slicing the target region by depth peeling yields N curved surfaces ordered from near to far: the first surface may be closest to the screen and the Nth farthest from it. The depth values of the points on each surface can be obtained during depth peeling.
Each surface corresponds to a rendering pass, which has two depth buffers and one color buffer: one depth buffer stores the depth values of the points on the previous layer's surface, the other stores the depth values of the points on the current surface, and the color buffer stores the color values of the points on the current surface.
340: depth information for a plurality of points is determined based on intersections of a plurality of surfaces having depth values and the triangular patch.
Combining the N curved surfaces with the triangular patches, the intersections of the curved surfaces and the patches are points on the surface of the target region, and the depth information of these intersections can be determined from the depth buffers obtained during depth peeling.
350: sets of start and end points of the target area are determined based on the depth information of the plurality of points.
Specifically, the intersections of the triangular patches with the first M curved surfaces close to the screen (the intersections with smaller depth values) may be used as start points, and the intersections with the remaining N − M curved surfaces far from the screen (the intersections with larger depth values) as end points, where M is greater than or equal to 1 and less than N. Each start point corresponds to an end point.
360: the target region is volume rendered based on the sets of start and end points.
Specifically, rays are cast from the viewpoint position, and sampling is performed between the start point and the end point. Sampling points on the segment between the start and end points can be determined according to a preset step length, and the color values of the sampling points (which may include the start point and the end point) can be obtained from the three-dimensional data field. Compositing (e.g., integrating) these color values yields a two-dimensional image with a three-dimensional effect, which visually presents the detailed features of the target region.
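A short sketch of generating the sampling points on the start-end segment from a preset step length; the resulting positions can then be fed to a compositing routine such as the one sketched earlier in the description of step 130.

```python
import numpy as np

def sample_positions(start, end, step):
    """Evenly spaced sample points from the start point to the end point."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    n = max(int(np.linalg.norm(end - start) / step), 1)
    ts = np.linspace(0.0, 1.0, n + 1)
    return start + ts[:, None] * (end - start)   # includes start and end

pts = sample_positions([10, 10, 4], [10, 10, 20], step=0.5)
print(len(pts), pts[0], pts[-1])
```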
Fig. 4 is a schematic structural diagram illustrating an apparatus 400 for volume rendering a three-dimensional image according to an exemplary embodiment of the present invention. As shown in fig. 4, the apparatus 400 includes: a first determination module 410, a second determination module 420, and a volume rendering module 430.
The first determining module 410 is used for determining depth information of a plurality of points located on the boundary of the target area of the three-dimensional image; the second determining module 420 is configured to determine a plurality of sets of start and end points of the target area based on the depth information of the plurality of points; the volume rendering module 430 is configured to perform volume rendering on the target region based on the sets of start and end points.
The embodiment of the invention provides a device for volume rendering of a three-dimensional image. By determining multiple sets of start and end points of the target region, letting cast rays pass through the start and end points, and taking sampling points between them, the device can improve volume rendering efficiency, ensure smooth image display, and improve interaction responsiveness.
According to an embodiment of the present invention, the first determining module 410 is configured to determine a triangular patch among a plurality of voxel units on a boundary of the target region using a surface rendering algorithm, and determine depth information of a plurality of points based on the triangular patch.
According to an embodiment of the present invention, the first determining module 410 is configured to slice the target region by depth peeling to obtain a plurality of surfaces having depth values, and to determine the depth information of the plurality of points based on the intersections of these surfaces and the triangular patches.
According to an embodiment of the invention, the surface rendering algorithm comprises a marching cubes algorithm or a marching tetrahedra algorithm.
According to an embodiment of the present invention, the apparatus 400 further includes a segmentation module 440 for segmenting the three-dimensional image by using the deep learning network model to determine the target region.
According to an embodiment of the present invention, the algorithm used for volume rendering includes a ray casting algorithm, wherein the second determining module 420 is configured to, when the ray determined based on the ray casting algorithm passes through a first point and a second point of the plurality of points, take a point with a smaller depth value as a starting point and a point with a larger depth value as an ending point.
According to an embodiment of the invention, the three-dimensional image is a medical image of the lung and the target region is an organ of the lung.
It should be understood that, for the operations and functions of the first determining module 410, the second determining module 420, the volume rendering module 430, and the segmentation module 440 in the above embodiments, reference may be made to the description of the method for volume rendering a three-dimensional image provided in the above embodiments of fig. 1 and 3, and in order to avoid repetition, the description is not repeated here.
Fig. 5 is a block diagram illustrating an electronic device 500 for volume rendering a three-dimensional image according to an exemplary embodiment of the invention.
Referring to fig. 5, electronic device 500 includes a processing component 510 that further includes one or more processors and memory resources, represented by memory 520, for storing instructions, such as application programs, that are executable by processing component 510. The application programs stored in memory 520 may include one or more modules that each correspond to a set of instructions. Further, the processing component 510 is configured to execute instructions to perform the above-described method of volume rendering a three-dimensional image.
The electronic device 500 may also include a power supply component configured to perform power management of the electronic device 500, a wired or wireless network interface configured to connect the electronic device 500 to a network, and an input/output (I/O) interface. The electronic device 500 may operate based on an operating system stored in the memory 520, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer readable storage medium having instructions stored thereon, which when executed by a processor of the electronic device 500, enable the electronic device 500 to perform a method of volume rendering a three-dimensional image, comprising: determining depth information of a plurality of points located on a boundary of a target area of a three-dimensional image; determining a plurality of sets of start points and end points of the target area based on the depth information of the plurality of points; the target region is volume rendered based on the sets of start and end points.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that the terms "first," "second," "third," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and the like that are within the spirit and principle of the present invention are included in the present invention.
Claims (10)
1. A method of volume rendering a three-dimensional image, comprising:
determining depth information of a plurality of points located on a boundary of a target area of a three-dimensional image;
determining a plurality of sets of start and end points of the target area based on the depth information of the plurality of points;
volume rendering is performed on the target region based on the plurality of sets of start and end points.
2. The method of claim 1, wherein determining depth information for a plurality of points located on a boundary of a target area of a three-dimensional image comprises:
determining a triangular patch in a plurality of voxel units on the boundary of the target region by using a surface rendering algorithm;
determining depth information for the plurality of points based on the triangular patch.
3. The method of claim 2, wherein the determining depth information for the plurality of points based on the triangular patch comprises:
slicing the target area by depth peeling to obtain a plurality of surfaces having depth values;
determining depth information for the plurality of points based on intersections of the plurality of surfaces with depth values and the triangular patch.
4. The method of claim 2, wherein the surface rendering algorithm comprises a marching cubes algorithm or a marching tetrahedra algorithm.
5. The method of claim 1, further comprising:
and segmenting the three-dimensional image by utilizing a deep learning network model to determine the target area.
6. The method according to any one of claims 1 to 5, wherein the volume rendering employs an algorithm comprising a ray casting algorithm,
wherein the determining sets of start and end points of the target area based on the depth information of the plurality of points comprises:
when the ray determined based on the ray casting algorithm passes through a first point and a second point of the plurality of points, the point with the smaller depth value in the first point and the second point is the starting point, and the point with the larger depth value in the first point and the second point is the end point.
7. The method of any one of claims 1 to 5, wherein the three-dimensional image is a pulmonary medical image and the target region is a pulmonary organ.
8. An apparatus for volume rendering a three-dimensional image, comprising:
a first determination module for determining depth information of a plurality of points located on a boundary of a target area of a three-dimensional image;
a second determining module for determining a plurality of sets of start and end points of the target area based on the depth information of the plurality of points;
and the volume rendering module is used for performing volume rendering on the target area based on the plurality of groups of starting points and end points.
9. A computer-readable storage medium, which stores a computer program for executing the method of volume rendering a three-dimensional image according to any one of claims 1 to 7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions,
wherein the processor is configured to perform the method of volume rendering a three-dimensional image according to any of the claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010707018.0A CN111833427B (en) | 2020-07-21 | 2020-07-21 | Method and device for volume rendering of three-dimensional image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010707018.0A CN111833427B (en) | 2020-07-21 | 2020-07-21 | Method and device for volume rendering of three-dimensional image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111833427A true CN111833427A (en) | 2020-10-27 |
CN111833427B CN111833427B (en) | 2021-01-05 |
Family
ID=72924155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010707018.0A Active CN111833427B (en) | 2020-07-21 | 2020-07-21 | Method and device for volume rendering of three-dimensional image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111833427B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110109631A1 (en) * | 2009-11-09 | 2011-05-12 | Kunert Thomas | System and method for performing volume rendering using shadow calculation |
CN103154665A (en) * | 2010-06-09 | 2013-06-12 | 感知器股份有限公司 | System and method for determining the position and orientation of a 3d feature |
CN103366394A (en) * | 2013-06-27 | 2013-10-23 | 浙江工业大学 | Direct volume rendering method for abstracting features of medical volume data |
CN103559733A (en) * | 2013-10-09 | 2014-02-05 | 浙江大学 | Spherical body drawing method supporting three-dimension data inner viewpoint roaming |
CN104167010A (en) * | 2014-06-03 | 2014-11-26 | 上海联影医疗科技有限公司 | Iteration rendering method |
US20150379758A1 (en) * | 2012-12-28 | 2015-12-31 | Hitachi, Ltd. | Medical image processing device and image processing method |
CN106355586A (en) * | 2016-08-31 | 2017-01-25 | 黑龙江拓盟科技有限公司 | Automatic extraction method of human chest organ tissue |
CN106408645A (en) * | 2016-09-05 | 2017-02-15 | 上海联影医疗科技有限公司 | Medical image drawing method and apparatus |
CN107004301A (en) * | 2014-11-19 | 2017-08-01 | 微软技术许可有限责任公司 | Using depth information to draw augmented reality scene |
CN107330976A (en) * | 2017-06-01 | 2017-11-07 | 北京大学第三医院 | A kind of human body head three-dimensional modeling apparatus and application method |
CN108537803A (en) * | 2018-03-30 | 2018-09-14 | 北京灵医灵科技有限公司 | A kind of CT images interactive segmentation method and device |
CN110807134A (en) * | 2019-10-10 | 2020-02-18 | 哈尔滨工程大学 | Ocean three-dimensional scalar field visualization method |
- 2020-07-21: CN application CN202010707018.0A granted as patent CN111833427B (status: Active)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110109631A1 (en) * | 2009-11-09 | 2011-05-12 | Kunert Thomas | System and method for performing volume rendering using shadow calculation |
CN103154665A (en) * | 2010-06-09 | 2013-06-12 | 感知器股份有限公司 | System and method for determining the position and orientation of a 3d feature |
US20150379758A1 (en) * | 2012-12-28 | 2015-12-31 | Hitachi, Ltd. | Medical image processing device and image processing method |
CN103366394A (en) * | 2013-06-27 | 2013-10-23 | 浙江工业大学 | Direct volume rendering method for abstracting features of medical volume data |
CN103559733A (en) * | 2013-10-09 | 2014-02-05 | 浙江大学 | Spherical body drawing method supporting three-dimension data inner viewpoint roaming |
CN104167010A (en) * | 2014-06-03 | 2014-11-26 | 上海联影医疗科技有限公司 | Iteration rendering method |
CN107004301A (en) * | 2014-11-19 | 2017-08-01 | 微软技术许可有限责任公司 | Using depth information to draw augmented reality scene |
CN106355586A (en) * | 2016-08-31 | 2017-01-25 | 黑龙江拓盟科技有限公司 | Automatic extraction method of human chest organ tissue |
CN106408645A (en) * | 2016-09-05 | 2017-02-15 | 上海联影医疗科技有限公司 | Medical image drawing method and apparatus |
CN107330976A (en) * | 2017-06-01 | 2017-11-07 | 北京大学第三医院 | A kind of human body head three-dimensional modeling apparatus and application method |
CN108537803A (en) * | 2018-03-30 | 2018-09-14 | 北京灵医灵科技有限公司 | A kind of CT images interactive segmentation method and device |
CN110807134A (en) * | 2019-10-10 | 2020-02-18 | 哈尔滨工程大学 | Ocean three-dimensional scalar field visualization method |
Non-Patent Citations (3)
Title |
---|
S. Stegmaier et al., "A simple and flexible volume rendering framework for graphics-hardware-based raycasting", Volume Graphics
Wang Xin, "Research on Three-Dimensional Reconstruction of Medical Images Based on Volume Rendering", China Master's Theses Full-text Database, Information Science and Technology
Luo Hui et al., "A Depth-Based Ray Casting Volume Rendering Algorithm", Computer Applications and Software
Also Published As
Publication number | Publication date |
---|---|
CN111833427B (en) | 2021-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3358528B1 (en) | Point cloud proxy for physically-based volume rendering | |
KR101981202B1 (en) | Method and apparatus for reconstructing medical image | |
Stytz et al. | Three-dimensional medical imaging: algorithms and computer systems | |
JP4421016B2 (en) | Medical image processing device | |
JP6837551B2 (en) | HMDS-based medical imaging device | |
US7397475B2 (en) | Interactive atlas extracted from volume data | |
US9865079B2 (en) | Virtual endoscopic image generated using an opacity curve | |
US20140232719A1 (en) | Volume rendering of medical images | |
US10580181B2 (en) | Method and system for generating color medical image based on combined color table | |
US20040202990A1 (en) | System and method for performing a virtual endoscopy | |
JP2007537770A (en) | A dynamic crop box determination method for display optimization of luminal structures in endoscopic images | |
JPH10507954A (en) | Method and system for creating an interactive three-dimensional rendering that allows simulated movement within a lumen of a selected body organ having a hollow lumen | |
US10192352B2 (en) | Method, device and system for simulating shadow images | |
US20060262969A1 (en) | Image processing method and computer readable medium | |
AU2019430369B2 (en) | VRDS 4D medical image-based vein Ai endoscopic analysis method and product | |
CN108876783B (en) | Image fusion method and system, medical equipment and image fusion terminal | |
Kutter et al. | Real-time volume rendering for high quality visualization in augmented reality | |
KR102387928B1 (en) | Method for analyzing human tissue based on medical image and apparatus therefor | |
Turlington et al. | New techniques for efficient sliding thin-slab volume visualization | |
CN111833427B (en) | Method and device for volume rendering of three-dimensional image | |
CN100583161C (en) | Method for depicting an object displayed in a volume data set | |
US9536314B2 (en) | Image reconstruction | |
CN114708283A (en) | Image object segmentation method and device, electronic equipment and storage medium | |
CN110648752B (en) | Three-dimensional visualization method and equipment for medical data | |
US20240331269A1 (en) | Technique for rendering hidden structures with clip planes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085
Applicant after: Tuxiang Medical Technology Co., Ltd.
Address before: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085
Applicant before: Beijing Tuoxiang Technology Co., Ltd.
GR01 | Patent grant | ||