CN107705350B - Medical image generation method, device and equipment

Medical image generation method, device and equipment

Info

Publication number
CN107705350B
CN107705350B
Authority
CN
China
Prior art keywords
data field
ray
point
sampling
volume data
Prior art date
Legal status
Active
Application number
CN201710792855.6A
Other languages
Chinese (zh)
Other versions
CN107705350A (en)
Inventor
孙万明 (Sun Wanming)
Current Assignee
Neusoft Medical Systems Co Ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd filed Critical Neusoft Medical Systems Co Ltd
Priority to CN201710792855.6A priority Critical patent/CN107705350B/en
Publication of CN107705350A publication Critical patent/CN107705350A/en
Application granted granted Critical
Publication of CN107705350B publication Critical patent/CN107705350B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/08 Volume rendering
    • G06T 15/06 Ray-tracing
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/41 Medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a medical image generation method, device and equipment. The method comprises the following steps: acquiring a volume data field of a subject, the volume data field including a volume data field obtained by scanning the subject with a CT apparatus; generating one or more multi-planar reconstruction (MPR) planes based on selected angles and depths using the volume data field; and performing volume rendering (VR) with the volume data field and the MPR planes based on a ray casting algorithm to generate a fused image. The invention makes it possible not only to determine the shape of a lesion based on the MPR planes, but also to determine the spatial position and depth information of the lesion based on the VR, so the fused image can serve as a gold standard for a doctor's diagnosis.

Description

Medical image generation method, device and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for generating a medical image.
Background
A volume data field formed by stacking a plurality of two-dimensional tomographic images can be obtained by scanning with a CT apparatus or the like. Such a volume data field can only be displayed as two-dimensional cross-sectional images, so a doctor has to inspect different tomographic images one by one to diagnose a condition, which is very inconvenient.
Volume Rendering (VR) is a technique for displaying a volume data field, which is a stack of a plurality of two-dimensional images, in the form of a three-dimensional model. The displayed image carries spatial structure and depth information, so a doctor can determine the spatial position and shape of different tissues in the human body and their relation to surrounding tissue. However, the content displayed by VR is a rendering and cannot fully reflect the state of a lesion, so a doctor cannot diagnose a condition from it alone.
Disclosure of Invention
In view of the above, the present invention provides a medical image generation method, apparatus and device with a better display effect for scanned images, so as to solve the above technical problems.
To achieve this purpose, the technical solution adopted by the present invention is as follows:
according to a first aspect of an embodiment of the present invention, a medical image generation method is provided, which includes:
acquiring a volume data field of a subject;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
and performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fused image.
According to a second aspect of an embodiment of the present invention, there is provided a medical image generation apparatus, including:
a data acquisition module for acquiring a volume data field of a subject;
a planar reconstruction module to generate one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
and the image generation module is used for performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm so as to generate a fusion image.
According to a third aspect of the embodiments of the present invention, there is provided an electronic apparatus, characterized in that the electronic apparatus includes:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring a volume data field of a subject;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
and performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fused image.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having a computer program stored thereon, the program, when executed by a processor, implementing:
acquiring a volume data field of a subject;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
and performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fused image.
Compared with the prior art, the medical image generation method, apparatus and device of the present invention acquire a volume data field of a subject, generate one or more multi-planar reconstruction MPR planes based on a selected angle and depth using the volume data field, and then perform volume rendering VR with the volume data field and the MPR planes based on a ray casting algorithm to generate a fused image. In this way, the shape of a lesion can be determined from the MPR planes, while its spatial position and depth information can be determined from the VR, and the fused image can serve as a gold standard for a doctor's diagnosis.
Drawings
Fig. 1A shows a flow chart of a medical image generation method according to an exemplary embodiment of the present invention;
fig. 1B to 1D show MPR plane diagrams of coronal, sagittal and transverse positions, respectively, according to an exemplary embodiment of the present invention;
fig. 1E shows a fused image generated by volume rendering based on three MPR planes in coronal, sagittal, and transverse positions and a volume data field according to an exemplary embodiment of the invention;
fig. 1F illustrates a fused image generated by volume rendering based on a coronal MPR plane and a volume data field according to an exemplary embodiment of the present invention;
FIG. 2 shows a flow chart for volume rendering according to an exemplary embodiment of the invention;
FIG. 3 shows a flow diagram for sampling a volumetric data field according to an exemplary embodiment of the present invention;
FIG. 4 shows a flow chart for sampling a volumetric data field according to yet another exemplary embodiment of the present invention;
FIG. 5 illustrates a flowchart for determining projected color information for an exit point corresponding to each ray according to an exemplary embodiment of the present invention;
fig. 6A illustrates a flowchart of determining color information of an intersection of an MPR plane and each ray according to an exemplary embodiment of the present invention;
FIG. 6B is a schematic diagram illustrating an intersection of a projection ray and an MPR plane according to an exemplary embodiment of the present invention;
fig. 7 shows a flowchart for generating one or more multi-planar reconstructed MPR planes according to an exemplary embodiment of the invention;
fig. 8 shows a flow chart of a medical image generation method according to a further exemplary embodiment of the present invention;
fig. 9 shows a block diagram of a medical image generation apparatus according to an exemplary embodiment of the present invention;
fig. 10 shows a block diagram of a medical image generation apparatus according to a further exemplary embodiment of the present invention;
fig. 11 shows a block diagram of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein to describe various structures, these structures should not be limited by these terms. These terms are only used to distinguish one type of structure from another.
Fig. 1A shows a flow chart of a medical image generation method according to an exemplary embodiment of the present invention; as shown in fig. 1A, the method includes steps S101-S103:
S101: a volume data field of a subject is acquired.
In an alternative embodiment, a two-dimensional digital tomographic image sequence of the subject, which constitutes the volume data field, can be obtained by medical imaging techniques such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging), Ultrasound (US), and Digital Subtraction Angiography (DSA).
S102: one or more multi-planar reconstructed MPR planes are generated based on the selected angles and depths using the volumetric data field.
In an alternative embodiment, the angle and the depth may be specified according to the actual needs of the doctor for diagnosing the lesion, which is not limited in this embodiment.
In an alternative embodiment, one or more MPR planes in the frontal (coronal), lateral (sagittal) and transverse orientations of the human body (see figs. 1B to 1D, respectively) may be generated from the volume data field and the selected angles and image depths, and MPR planes in an arbitrary (oblique) orientation may also be generated.
It should be noted that the MPR plane may be generated by any multi-planar reconstruction technique of the related art, and the present invention is not limited thereto.
S103: volume rendering VR is performed using the volume data field and the MPR plane based on a ray casting algorithm (Ray Casting) to generate a fused image.
In particular, fig. 2 shows a flow chart for volume rendering according to an exemplary embodiment of the present invention; as shown in fig. 2, the step S103 of performing volume rendering VR by using the volume data field and the MPR plane based on the ray casting algorithm may include steps S201 to S205:
S201: projecting a plurality of rays into the volume data field through a plurality of exit points on a preset projection plane;
S202: sampling the volume data field according to the rays and the MPR plane to obtain color information of a plurality of sampling points corresponding to each ray;
S203: determining the projection color information of the exit point corresponding to each ray according to the color information of the plurality of sampling points;
S204: determining the color information of the intersection of the MPR plane with each ray;
S205: superimposing the color information of the intersection with the projection color information of the exit point of the corresponding ray.
In an alternative embodiment, a ray is emitted from each pixel point (referred to as an exit point) on a preset projection plane (for example, the screen) along a specific viewpoint direction. The ray penetrates the volume data field, and a plurality of sampling points are selected along it; when the ray intersects an MPR plane, it stops there and does not continue sampling (that is, the volume data field behind the MPR plane is not displayed). The sampling value of each sampling point is then converted into color information (such as a color value and an opacity) through a transfer function, and the color information of the sampling points is superimposed; the resulting color is the projected color of that point on the projection plane. On this basis, the color information of the intersection of the MPR plane with each ray can be determined and superimposed with the projected color information of the exit point of the corresponding ray, thereby yielding the color value of the pixel on the screen and generating the fused image.
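To make the flow concrete, the following is a minimal single-ray sketch of steps S201 to S205, assuming a numpy volume indexed in voxel units, a single MPR plane in point-normal form, nearest-neighbour sampling for brevity, and blending of the MPR pixel with the remaining opacity (one plausible reading of the superposition step); the helper names transfer_function and mpr_lut are illustrative, not from the patent:

```python
import numpy as np

def cast_ray(volume, start, direction, plane_center, plane_normal,
             transfer_function, mpr_lut, step=1.0, alpha0=1.0):
    """March one ray front-to-back (steps S201-S205): sample up to the MPR
    plane, the volume exit, or opacity exhaustion, then blend the MPR pixel."""
    start = np.asarray(start, dtype=float)
    direction = np.asarray(direction, dtype=float)
    plane_center = np.asarray(plane_center, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)

    # distance at which the ray leaves the volume's bounding box
    t_exit = np.inf
    for axis in range(3):
        if abs(direction[axis]) > 1e-9:
            t0 = (0.0 - start[axis]) / direction[axis]
            t1 = (volume.shape[axis] - 1.0 - start[axis]) / direction[axis]
            t_exit = min(t_exit, max(t0, t1))

    # distance to the MPR plane; a parallel ray never intersects it
    denom = float(direction @ plane_normal)
    t_plane = np.inf
    if abs(denom) > 1e-9:
        t = float((plane_center - start) @ plane_normal) / denom
        if t >= 0.0:
            t_plane = t

    t_end = min(t_plane, t_exit)          # sampling end point: nearest condition
    color, alpha, t = 0.0, alpha0, 0.0
    while t < t_end and alpha > 0.0:      # opacity <= 0 also ends sampling
        idx = np.clip(np.round(start + t * direction).astype(int),
                      0, np.array(volume.shape) - 1)
        c_val, a_val = transfer_function(volume[tuple(idx)])
        color += c_val * a_val * alpha    # formula (1.2)
        alpha -= a_val                    # formula (1.3)
        t += step
    if np.isfinite(t_plane) and t_plane <= t_exit:   # ray actually hits the plane
        idx = np.clip(np.round(start + t_plane * direction).astype(int),
                      0, np.array(volume.shape) - 1)
        color += mpr_lut(volume[tuple(idx)]) * max(alpha, 0.0)
    return color
```

Running this once per exit point on the projection plane yields the fused image; a real renderer would interpolate samples rather than use nearest-neighbour lookup.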
In particular, fig. 1E shows a fused image generated by volume rendering based on three MPR planes in coronal, sagittal, and transverse positions and a volume data field according to an exemplary embodiment of the present invention; fig. 1F illustrates a fused image generated by volume rendering based on the coronal MPR plane and the volume data field according to an exemplary embodiment of the present invention.
In the medical image generation method of this embodiment, a volume data field of a subject is acquired, one or more multi-planar reconstruction MPR planes are generated from it based on a selected angle and depth, and the volume data field and the MPR planes are then used to perform volume rendering VR based on a ray casting algorithm to generate a fused image.
FIG. 3 shows a flow diagram for sampling a volume data field according to an exemplary embodiment of the present invention; on the basis of the above embodiments, this embodiment illustrates how the volume data field is sampled. As shown in fig. 3, sampling the volume data field according to the plurality of rays and the MPR plane in step S202 may include steps S301 to S303:
S301: determining at least one point on each ray, wherein each such point meets at least one preset condition.
Wherein the preset conditions may include:
(i) the intersection of the ray with the MPR plane (if present);
(ii) the point where the opacity of the ray becomes less than or equal to 0;
(iii) the point where the ray exits the volume data field.
S302: determining the point among the at least one point that is closest to the exit point of the corresponding ray as the sampling end point of that ray.
In an optional embodiment, after the one or more points satisfying any of the above preset conditions have been determined, the distance between each such point and the exit point of the corresponding ray is calculated, and the point closest to the exit point is determined as the sampling end point of that ray.
S303: sampling the volume data field according to the rays and their corresponding sampling end points.
In an alternative embodiment, FIG. 4 shows a flow chart for sampling a volume data field according to yet another exemplary embodiment of the present invention. On the basis of the above embodiment, the color information may include a color value and an opacity. On this basis, step S303 may include steps S401-S402:
S401: collecting sampling values corresponding to a plurality of sampling points at a set step length, from the exit point of each ray to its sampling end point, as the ray passes through the volume data field;
In an alternative embodiment, the ray advances along a specific viewpoint direction and the volume data it passes through is sampled; when the ray reaches its sampling end point, it stops and sampling is not continued (i.e., only the data between the exit point and the sampling end point is sampled).
S402: converting the sampling values into the color values and opacities of the corresponding sampling points through a preset transfer function.
In one embodiment, the sampling value val may be converted into a color value C_val and an opacity A_val through a transfer function f(x) according to formula (1.1):

f(val) = <C_val, A_val>    (1.1)

where val is the sampling value, C_val is the color value, A_val is the opacity, and f(val) denotes that the sampling value val is converted into the color value C_val and the opacity A_val through the transfer function f(x).
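The patent does not specify f(x); the following is a minimal sketch of one plausible transfer function, a linear ramp over an illustrative value window (the window bounds and the small per-sample opacity are assumptions for illustration, not values from the patent):

```python
import numpy as np

def make_transfer_function(v_min=300.0, v_max=1500.0, max_opacity=0.05):
    """Return a transfer function f(val) -> <C_val, A_val> as in formula (1.1).

    A simple linear ramp: values at or below v_min are transparent, values at
    or above v_max are fully bright, with a small per-sample opacity so that
    many samples accumulate along the ray."""
    def f(val):
        x = float(np.clip((val - v_min) / (v_max - v_min), 0.0, 1.0))
        return x, x * max_opacity    # (C_val, A_val)
    return f

# usage: f = make_transfer_function(); c_val, a_val = f(800.0)
```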
According to the above technical solution, the points satisfying the preset conditions on each ray are determined, the point among them closest to the exit point of the corresponding ray is taken as the sampling end point, and the volume data field is then sampled along each ray up to its sampling end point. When the fused image is subsequently generated, only the volume data in front of the MPR plane (relative to the viewpoint) is volume-rendered and the volume data behind the MPR plane is not, so the MPR plane is displayed at the corresponding position of the fused image and its depth information is retained in the finally generated fused image.
FIG. 5 illustrates a flowchart for determining the projection color information of the exit point corresponding to each ray according to an exemplary embodiment of the present invention; on the basis of the above embodiments, this embodiment illustrates how that projection color information is determined. As shown in fig. 5, the step S203 of determining the projection color information of the exit point corresponding to each ray according to the color information of the plurality of sampling points may include steps S501 to S502:
S501: acquiring an initial color value and an initial opacity of each ray;
In an alternative embodiment, the initial color value of the ray is denoted Color' and the initial opacity Alpha'.
S502: superimposing the initial color value and initial opacity of each ray with the color value and opacity of each sampling point corresponding to that ray, to obtain the color value and opacity of the exit point corresponding to the ray.
In an alternative embodiment, the initial color value and initial opacity of the ray may be superimposed with the color value and opacity of each sampling point corresponding to the ray according to formulas (1.2) and (1.3):

Color = Color' + ∑ C_val * A_val * Alpha    (1.2)
Alpha = Alpha' - ∑ A_val    (1.3)

where Color is the color value of the exit point corresponding to the ray, Alpha is the opacity at that exit point, Color' is the initial color value of the ray, Alpha' is the initial opacity, C_val is the color value of a sampling point, and A_val is its opacity.
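A direct transcription of formulas (1.2) and (1.3) as a running accumulation, reading Alpha as the running opacity, which matches the early-stop rule (sampling ends once Alpha drops to 0 or below) described later in the text:

```python
def composite(samples, color0=0.0, alpha0=1.0):
    """Superimpose per-sample colors per formulas (1.2) and (1.3).

    `samples` is an iterable of (C_val, A_val) pairs along one ray, ordered
    from the exit point toward the sampling end point."""
    color, alpha = color0, alpha0
    for c_val, a_val in samples:
        color += c_val * a_val * alpha   # Color = Color' + sum(C_val*A_val*Alpha)
        alpha -= a_val                   # Alpha = Alpha' - sum(A_val)
        if alpha <= 0.0:                 # further samples cannot change the color
            break
    return color, alpha
```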
According to the above technical solution, the initial color value and initial opacity of each ray are acquired and superimposed with the color value and opacity of each sampling point corresponding to that ray, giving the color value and opacity of the exit point corresponding to the ray. This enables the subsequent generation of the fused image and safeguards its quality and efficiency.
Fig. 6A illustrates a flowchart for determining the color information of the intersection of the MPR plane with each ray according to an exemplary embodiment of the present invention; on the basis of the above embodiments, this embodiment illustrates how that color information is determined. As shown in fig. 6A, the step S204 of determining the color information of the intersection of the MPR plane with each ray may include steps S601-S602:
S601: determining the sampling value of the intersection of the MPR plane with each ray;
In an alternative embodiment, it may be determined whether there is an intersection between the MPR plane and each ray, and if so, the sampling value of the intersection is determined.
In solid geometry, a plane can be uniquely determined by a point on the plane and the normal vector of the plane. FIG. 6B is a schematic diagram illustrating the intersection of a projection ray with an MPR plane according to an exemplary embodiment of the present invention. As shown in FIG. 6B, assume the center point of the MPR plane is (x_center, y_center, z_center) and its normal vector is n = (x_normal, y_normal, z_normal). The ray starts from a point (x_start, y_start, z_start) on the projection plane and advances along a specific viewpoint direction d = (x_d, y_d, z_d). When d and n are not orthogonal, i.e. d · n ≠ 0, the ray and the MPR plane must have an intersection.

The parametric equation of the projection ray is:

(x, y, z) = (x_start, y_start, z_start) + t * (x_d, y_d, z_d)    (1.4)

The point-normal equation of the MPR plane is:

x_normal * (x - x_center) + y_normal * (y - y_center) + z_normal * (z - z_center) = 0    (1.5)

Combining formulas (1.4) and (1.5) gives:

t = [x_normal * (x_center - x_start) + y_normal * (y_center - y_start) + z_normal * (z_center - z_start)] / (x_normal * x_d + y_normal * y_d + z_normal * z_d)    (1.6)

Substituting t back into the parametric equation (1.4) of the ray yields the intersection of the ray with the MPR plane; t is the distance between the intersection and the exit point. In an alternative embodiment, if the ray intersects multiple MPR planes, the closest intersection may be selected as the final intersection.
S602: querying the color lookup table LUT corresponding to the MPR plane according to the sampling value of the intersection to obtain the color information of the intersection of the ray.
In an alternative embodiment, the color lookup table LUT is a linear lookup table generated based on a windowing technique when the MPR plane is generated, and is used for mapping volume data values to pixel gray values.
In an alternative embodiment, the color LUT corresponding to the MPR plane is queried according to the sampling value of the intersection, so that the sampling value can be converted into color information (e.g., grayscale color).
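A minimal sketch of such a linear windowing LUT; the window level and width defaults are illustrative assumptions, not values from the patent:

```python
import numpy as np

def window_lut(val, window_level=40.0, window_width=400.0):
    """Map a volume data value (e.g. a CT number in HU) to a gray value in
    [0, 1] with a linear window, as a stand-in for the MPR color lookup
    table LUT described in the text."""
    low = window_level - window_width / 2.0
    return float(np.clip((val - low) / window_width, 0.0, 1.0))
```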
According to the above technical solution, the color information of the intersection of each ray can be obtained by determining the sampling value at the intersection of the MPR plane with the ray and querying the color lookup table LUT corresponding to that MPR plane. The intersection color information can then be superimposed with the projected color information of the exit point of the corresponding ray to generate the fused image, which safeguards the quality and efficiency of the subsequent fusion.
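For concreteness, a sketch of the intersection test of formulas (1.4) to (1.6), including the selection of the nearest intersection among several MPR planes; the function names are illustrative, and t equals the exact distance when the direction is a unit vector:

```python
import numpy as np

def ray_plane_t(start, direction, center, normal, eps=1e-9):
    """Solve formulas (1.4)-(1.6) for t, the distance from the exit point to
    the ray's intersection with an MPR plane in point-normal form. Returns
    None when direction . normal == 0 (ray parallel to the plane, so no
    intersection) or, as an added guard, when the plane lies behind the ray."""
    start, direction = np.asarray(start, float), np.asarray(direction, float)
    center, normal = np.asarray(center, float), np.asarray(normal, float)
    denom = float(direction @ normal)
    if abs(denom) < eps:
        return None
    t = float((center - start) @ normal) / denom    # formula (1.6)
    return t if t >= 0.0 else None

def nearest_mpr_t(start, direction, planes):
    """With several MPR planes, the closest intersection is the final one."""
    ts = [ray_plane_t(start, direction, c, n) for c, n in planes]
    ts = [t for t in ts if t is not None]
    return min(ts) if ts else None
```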
In the process of implementing the embodiment of the present invention, the inventor found that the generated MPR plane also contains a ring of air around the subject (such as the black border around the human body in fig. 1B). These black borders are part of the MPR plane and block the VR display behind it, yet they contribute nothing to the display effect or to the doctor's diagnosis. They can therefore be removed, so that the edge of the MPR plane shrinks to the edge of the human body and the content behind the image is not occluded.
Fig. 7 shows a flowchart for generating one or more multi-planar reconstructed MPR planes according to an exemplary embodiment of the invention; on the basis of the above embodiments, this embodiment illustrates how one or more multi-planar reconstructed MPR planes are generated. As shown in fig. 7, generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volume data field in step S102 may include steps S701-S702:
S701: removing the air data in the volume data field according to a threshold segmentation method to obtain a volume data field with the air data removed.
In an alternative embodiment, the air around the subject in the volume data field may be removed according to a thresholding method, and at least one MPR plane may be generated according to the selected angle, the depth, and the air-removed volume data field.
In particular, the measured values of air and body tissue differ greatly and are easy to distinguish. Taking CT as an example, the CT value is determined by the linear X-ray absorption coefficient of the body tissue; the CT value of air is usually -1000 HU, so the air around the body can be accurately segmented using -850 HU as a threshold.
Therefore, when the MPR plane is generated, the air around the subject in the volume data field is removed, so that the edge of the MPR plane shrinks to the edge of the human body and the content behind the image is not occluded.
S702: generating one or more MPR planes from the angle, the depth, and the volume data field with the air data removed.
In an alternative embodiment, after the volume data field with the air data removed has been obtained, one or more MPR planes may be generated from it according to the selected angle and depth.
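A minimal sketch of step S701 under the -850 HU threshold mentioned above; replacing air voxels with a constant is an assumed realization, since the patent does not fix the removal mechanism:

```python
import numpy as np

def remove_air(volume, threshold=-850.0, air_value=-1000.0):
    """Threshold segmentation of the air around the subject (step S701).

    Voxels at or below the threshold are treated as air. Returns the cleaned
    volume and the body mask; generating MPR planes from the cleaned data
    lets their edges shrink to the edge of the body."""
    body_mask = volume > threshold                   # True for body tissue
    cleaned = np.where(body_mask, volume, air_value)
    return cleaned, body_mask

# usage: cleaned, mask = remove_air(ct_volume)
```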
The invention is illustrated below by means of a specific example, without restricting its scope.
Fig. 8 shows a flow chart of a medical image generation method according to a further exemplary embodiment of the present invention; as shown in fig. 8, the method includes steps S801 to S807:
S801: a volume data field obtained by scanning a subject with a CT apparatus or the like is acquired, an MPR plane is generated from the volume data field, air data in the volume data field is removed, and rays are projected into the volume data field.
In an alternative embodiment, the air around the body can be removed using a threshold segmentation method, since the measured values of air and body tissue differ greatly and are easy to distinguish. Taking the CT value as an example: the CT value is determined by the linear X-ray absorption coefficient of the body tissue, and the CT value of air is usually -1000 HU, so -850 HU can be used as the threshold to accurately segment the air around the body.
S802: it is judged whether an intersection exists between the ray and the MPR plane.
In an alternative embodiment, before the ray casting, it is determined whether there is an intersection between the ray and the reconstructed MPR plane: if an intersection exists, the intersection is calculated.
In solid geometry, a plane can be uniquely determined by a point on the plane and a normal vector of the plane. In the present invention, the MPR reconstruction plane is given by a center point (x_center, y_center, z_center) and a normal vector n = (x_normal, y_normal, z_normal); the ray starts from a point (x_start, y_start, z_start) on the projection plane and advances along a direction d = (x_d, y_d, z_d).
S803: if an intersection exists, the intersection closest to the ray's exit point is determined as the sampling end point of the ray.
In particular, when d and n are not orthogonal, i.e. d · n ≠ 0, the ray and the MPR plane must have an intersection.
The parametric equation of the projection ray is:

(x, y, z) = (x_start, y_start, z_start) + t * (x_d, y_d, z_d)    (2.1)

The point-normal equation of the MPR plane is:

x_normal * (x - x_center) + y_normal * (y - y_center) + z_normal * (z - z_center) = 0    (2.2)

Combining (2.1) and (2.2) gives:

t = [x_normal * (x_center - x_start) + y_normal * (y_center - y_start) + z_normal * (z_center - z_start)] / (x_normal * x_d + y_normal * y_d + z_normal * z_d)

Substituting t into the parametric equation (2.1) of the ray yields the intersection of the ray with the MPR plane; t is the distance between the intersection and the exit point. When the ray intersects several MPR planes, the intersection at the shortest distance is selected as the final intersection.
S804: if no intersection exists, the volume data field is sampled as the ray advances until the ray reaches a point where its opacity is less than or equal to 0 or until it exits the volume data, at which point sampling stops; the projected color of the ray is determined from the sampling results.
S805: the volume data field is sampled as the ray advances until the ray reaches the sampling end point, and the projected color of the ray is determined from the sampling results.
In an alternative embodiment, the ray starts from the exit point with its initial color Color' set to black and its initial opacity set to Alpha'. The ray samples the volume data at the set step length, and each sampling value val is converted through the transfer function f(x) into a color value C_val and an opacity A_val, which are superimposed onto the ray color:

f(val) = <C_val, A_val>    (2.3)

The color superposition method is:

Color = Color' + ∑ C_val * A_val * Alpha    (2.4)
Alpha = Alpha' - ∑ A_val    (2.5)

If the opacity Alpha of the ray drops to 0 or below, further sampling can no longer change the color, so sampling of the ray stops.
On this basis, the end point of the ray is whichever of the following three points is closest to the exit point:
(i) the intersection of the ray with the MPR reconstruction plane (if present);
(ii) the point where the opacity Alpha of the ray becomes less than or equal to 0;
(iii) the point where the ray exits the volume data.
While the ray samples the volume data, the colors of the sampling points are superimposed through the opacity Alpha, and the overall transparency of the VR can be controlled by setting the ray's initial opacity Alpha': the more transparent the VR is to be overall, the smaller the initial opacity can be set.
S806: the MPR plane pixel value at the sampling end point of the ray is calculated and fused with the projected color of the ray.
S807: a fused image is generated based on the color fusion result.
In one embodiment, the sampling value at the MPR plane intersection may be converted into a gray color through the corresponding color lookup table LUT and fused with the ray color by superposition (see formula (2.4) for the superposition method).
As can be seen from the above description, the present embodiment has the following advantages:
(1) depth information of the MPR plane can be retained, making it convenient for doctors to check the lesion position;
(2) volume rendering VR can be performed based on multiple MPR planes and volume data fields to facilitate diagnosis of disease conditions from different planes;
(3) the generated fusion image can ensure the image quality of VR and MPR;
(4) the air around the body in the MPR can be removed, and the body area can be highlighted;
(5) adjustment of the VR transparency is supported, and the VR can be completely hidden when necessary, facilitating the doctor's diagnosis; a rough illustration follows below.
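As a rough illustration of advantage (5), reusing the composite sketch from the volume rendering section with made-up (C_val, A_val) samples: the initial opacity Alpha' scales every VR contribution, so lowering it makes the VR more transparent and Alpha' = 0 hides it completely:

```python
samples = [(0.8, 0.04), (0.6, 0.03), (0.9, 0.05)]      # illustrative (C_val, A_val)
opaque_color, _ = composite(samples, alpha0=1.0)       # fully weighted VR
translucent_color, _ = composite(samples, alpha0=0.3)  # more transparent VR
hidden_color, _ = composite(samples, alpha0=0.0)       # VR completely hidden
```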
Fig. 9 shows a block diagram of a medical image generation apparatus according to an exemplary embodiment of the present invention. As shown in fig. 9, the apparatus includes a data acquisition module 110, a planar reconstruction module 120, and an image generation module 130, wherein:
a data acquisition module 110 for acquiring a volume data field of a subject; the volume data field includes a volume data field obtained by scanning the subject with a Computed Tomography (CT) apparatus;
a planar reconstruction module 120 for generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
an image generating module 130, configured to perform volume rendering VR by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fused image.
The medical image generation apparatus of this embodiment acquires a volume data field of a subject, generates one or more multi-planar reconstruction MPR planes based on a selected angle and depth using the volume data field, and performs volume rendering VR with the volume data field and the MPR planes based on a ray casting algorithm to generate a fused image.
Fig. 10 shows a block diagram of a medical image generation apparatus according to a further exemplary embodiment of the present invention; the data obtaining module 210, the plane reconstructing module 220, and the image generating module 230 have the same functions as the data obtaining module 110, the plane reconstructing module 120, and the image generating module 130 in the embodiment shown in fig. 9, and are not described again. As shown in fig. 10, on the basis of the above embodiment, the image generation module 230 may include:
a light projection unit 231, configured to project a plurality of light rays to the volume data field through a plurality of exit points on a preset projection plane;
a data sampling unit 232, configured to sample the volume data field according to the multiple light rays and the MPR plane, so as to obtain color information of multiple sampling points corresponding to each light ray;
an exit point color determining unit 233, configured to determine projection color information of an exit point corresponding to each light ray according to the color information of the plurality of sampling points;
an intersection color determining unit 234 for determining color information of an intersection of the MPR plane and each of the rays;
a color superimposing unit 235 for superimposing the color information of the intersection point and the projection color information of the exit point of the corresponding light ray.
In an alternative embodiment, the data sampling unit may be further configured to determine at least one point on each of the rays; wherein each point satisfies at least one of the following preset conditions: an intersection of the ray with the MPR plane, a point on the ray where the opacity is less than or equal to 0, and a point where the ray exits through the volumetric data field;
determining a point, closest to the emergent point of the corresponding light ray, in the at least one point as a sampling end point of the corresponding light ray;
and sampling the volume data field according to the rays and their corresponding sampling end points.
In an alternative embodiment, the color information may include a color value and an opacity value;
the data sampling unit 232 may further be configured to collect sampling values corresponding to a plurality of sampling points according to a set step length from an exit point of each light ray to a sampling end point in a process that each light ray passes through the volume data field; and converting the sampling value into the color value and the opacity of the corresponding sampling point through a preset transfer function.
In an alternative embodiment, the exit point color determination unit 233 may be further configured to acquire an initial color value and an initial opacity value of each of the rays;
and superposing the initial color value and the initial opacity of each ray with the color value and the opacity of each sampling point corresponding to the ray respectively to obtain the color value and the opacity of the exit point corresponding to the ray.
In an alternative embodiment, the intersection color determination unit 234 may be further configured to determine a sampling value of the intersection of the MPR plane with each of the rays;
and inquiring a color lookup table LUT corresponding to the MPR plane according to the sampling value of the intersection point to obtain the color information of the intersection point of the light rays.
In an optional embodiment, the plane reconstruction module 220 may further include:
the data removing unit 221 is configured to remove the air data in the volume data field according to a threshold segmentation method to obtain a volume data field from which the air data is removed;
a plane reconstruction unit 222 for generating one or more MPR planes from the angle, the depth, and the volume data field with the air data removed.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the medical image generation apparatus can be applied to a network device. The apparatus embodiment may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the apparatus is formed as a logical device by the processor of the device where it is located reading corresponding computer program instructions from nonvolatile memory into memory and running them. At the hardware level, fig. 11 shows a hardware structure diagram of an electronic device in which the medical image generation apparatus of the present invention is located. Besides the processor, network interface, memory, and nonvolatile memory shown in fig. 11, the device in this embodiment may also include other hardware, such as a forwarding chip responsible for processing messages; in terms of hardware structure the device may also be a distributed device comprising multiple interface cards, so that message processing can be extended at the hardware level.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the following method:
acquiring a volume data field of a subject; the volume data field includes a volume data field obtained by scanning the subject with a Computed Tomography (CT) apparatus;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
and performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fused image.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (14)

1. A medical image generation method, characterized by comprising:
acquiring a volume data field of a subject;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fusion image;
the ray casting algorithm-based volume rendering VR using the volume data field and the MPR plane includes:
projecting a plurality of rays to the volume data field through a plurality of emergent points on a preset projection plane;
sampling the volume data field according to the rays and the MPR plane to obtain color information of a plurality of sampling points corresponding to each ray;
determining projection color information of an emergent point corresponding to each light ray according to the color information of the plurality of sampling points;
determining color information of an intersection of the MPR plane and each of the rays;
and superposing the color information of the intersection point and the projection color information of the emergent point of the corresponding light.
2. The method of claim 1, wherein sampling the volumetric data field according to the plurality of rays and the MPR plane comprises:
determining at least one point on each of the rays; wherein each point satisfies at least one of the following preset conditions: an intersection of the ray with the MPR plane, a point on the ray where the opacity is less than or equal to 0, and a point where the ray exits through the volumetric data field;
determining a point, closest to the emergent point of the corresponding light ray, in the at least one point as a sampling end point of the corresponding light ray;
and sampling the volume data field according to the light rays and the corresponding sampling end points.
3. The method of claim 2, wherein the color information comprises a color value and an opacity;
the sampling the volume data field according to the plurality of rays and the corresponding sampling end points includes:
collecting sampling values corresponding to a plurality of sampling points according to a set step length from an emergent point of each ray to a sampling end point in the process that each ray passes through the volume data field;
and converting the sampling value into the color value and the opacity of the corresponding sampling point through a preset transfer function.
4. The method according to claim 3, wherein the determining the projection color information of the exit point corresponding to each light ray according to the color information of the plurality of sampling points comprises:
acquiring an initial color value and an initial opacity of each ray;
and superposing the initial color value and the initial opacity of each ray with the color value and the opacity of each sampling point corresponding to the ray respectively to obtain the color value and the opacity of the exit point corresponding to the ray.
5. The method of claim 1, wherein the determining the color information of the intersection of the MPR plane and each of the rays comprises:
determining sampling values of intersection points of the MPR plane and each ray;
and inquiring a color lookup table LUT corresponding to the MPR plane according to the sampling value of the intersection point to obtain the color information of the intersection point of the light rays.
6. The method of claim 1, wherein generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field comprises:
removing air data in the volume data field according to a threshold segmentation method to obtain a volume data field with air data removed;
generating one or more MPR planes from the angle, the depth, and the volume data field with the air data removed.
7. A medical image generation apparatus, characterized by comprising:
a data acquisition module for acquiring a volume data field of a subject;
a planar reconstruction module to generate one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
an image generation module, configured to perform volume rendering VR using the volume data field and the MPR plane based on a ray casting algorithm to generate a fused image;
the image generation module includes:
the light ray projection unit is used for projecting a plurality of light rays to the volume data field through a plurality of emergent points on a preset projection plane;
the data sampling unit is used for sampling the volume data field according to the light rays and the MPR plane so as to obtain color information of a plurality of sampling points corresponding to each light ray;
the emergent point color determining unit is used for determining projection color information of an emergent point corresponding to each ray according to the color information of the plurality of sampling points;
an intersection color determining unit for determining color information of an intersection of the MPR plane and each of the rays;
and the color superposition unit is used for superposing the color information of the intersection point and the projection color information of the emergent point of the corresponding light.
8. The apparatus of claim 7, wherein the data sampling unit is further configured to:
determining at least one point on each of the rays; wherein each point satisfies at least one of the following preset conditions: an intersection of the ray with the MPR plane, a point on the ray where the opacity is less than or equal to 0, and a point where the ray exits through the volumetric data field;
determining a point, closest to the emergent point of the corresponding light ray, in the at least one point as a sampling end point of the corresponding light ray;
and sampling the volume data field according to the light rays and the corresponding sampling end points.
9. The apparatus of claim 8, wherein the color information comprises a color value and an opacity;
the data sampling unit is further configured to:
collecting sampling values corresponding to a plurality of sampling points according to a set step length from an emergent point of each ray to a sampling end point in the process that each ray passes through the volume data field;
and converting the sampling value into the color value and the opacity of the corresponding sampling point through a preset transfer function.
10. The apparatus of claim 9, wherein the exit point color determination unit is further configured to:
acquiring an initial color value and an initial opacity of each ray;
and superposing the initial color value and the initial opacity of each ray with the color value and the opacity of each sampling point corresponding to the ray respectively to obtain the color value and the opacity of the exit point corresponding to the ray.
11. The apparatus of claim 7, wherein the intersection color determination unit is further configured to:
determining sampling values of intersection points of the MPR plane and each ray;
and inquiring a color lookup table LUT corresponding to the MPR plane according to the sampling value of the intersection point to obtain the color information of the intersection point of the light rays.
12. The apparatus of claim 7, wherein the planar reconstruction module comprises:
the data removing unit is used for removing the air data in the volume data field according to a threshold segmentation method to obtain the volume data field with the air data removed;
a planar reconstruction unit for generating one or more MPR planes from the angle, the depth, and the volume data field with the air data removed.
13. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring a volume data field of a subject;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fusion image;
the ray casting algorithm-based volume rendering VR using the volume data field and the MPR plane includes:
projecting a plurality of rays to the volume data field through a plurality of emergent points on a preset projection plane;
sampling the volume data field according to the rays and the MPR plane to obtain color information of a plurality of sampling points corresponding to each ray;
determining projection color information of an emergent point corresponding to each light ray according to the color information of the plurality of sampling points;
determining color information of an intersection of the MPR plane and each of the rays;
and superposing the color information of the intersection point and the projection color information of the emergent point of the corresponding light.
14. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, is adapted to carry out:
acquiring a volume data field of a subject;
generating one or more multi-planar reconstructed MPR planes based on the selected angles and depths using the volumetric data field;
performing Volume Rendering (VR) by using the volume data field and the MPR plane based on a ray casting algorithm to generate a fusion image;
the ray casting algorithm-based volume rendering VR using the volume data field and the MPR plane includes:
projecting a plurality of rays to the volume data field through a plurality of emergent points on a preset projection plane;
sampling the volume data field according to the rays and the MPR plane to obtain color information of a plurality of sampling points corresponding to each ray;
determining projection color information of an emergent point corresponding to each light ray according to the color information of the plurality of sampling points;
determining color information of an intersection of the MPR plane and each of the rays;
and superposing the color information of the intersection point and the projection color information of the emergent point of the corresponding light.
CN201710792855.6A 2017-09-05 2017-09-05 Medical image generation method, device and equipment Active CN107705350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710792855.6A CN107705350B (en) 2017-09-05 2017-09-05 Medical image generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710792855.6A CN107705350B (en) 2017-09-05 2017-09-05 Medical image generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN107705350A CN107705350A (en) 2018-02-16
CN107705350B true CN107705350B (en) 2021-03-30

Family

ID=61172073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710792855.6A Active CN107705350B (en) 2017-09-05 2017-09-05 Medical image generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN107705350B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648752B (en) * 2018-06-26 2020-07-24 北京埃德维亚医疗科技有限公司 Three-dimensional visualization method and equipment for medical data
CN109360233A (en) * 2018-09-12 2019-02-19 沈阳东软医疗系统有限公司 Image interfusion method, device, equipment and storage medium
CN111127536A (en) * 2019-12-11 2020-05-08 清华大学 Light field multi-plane representation reconstruction method and device based on neural network
CN114998291A (en) * 2022-06-21 2022-09-02 北京银河方圆科技有限公司 Medical image processing method and device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1639737A (en) * 2002-03-06 2005-07-13 西门子共同研究公司 Visualization of volume-volume fusion
CN101711681A (en) * 2008-10-07 2010-05-26 株式会社东芝 Three-dimensional image processing apparatus
CN103150749A (en) * 2011-08-11 2013-06-12 西门子公司 Floating volume-of-interest in multilayer volume ray casting
CN105787922A (en) * 2015-12-16 2016-07-20 沈阳东软医疗系统有限公司 Method and apparatus for implementing automatic MPR batch processing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8184890B2 (en) * 2008-12-26 2012-05-22 Three Palm Software Computer-aided diagnosis and visualization of tomosynthesis mammography data
CN103239253B (en) * 2012-02-14 2015-07-15 株式会社东芝 Medical image diagnostic apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1639737A (en) * 2002-03-06 2005-07-13 西门子共同研究公司 Visualization of volume-volume fusion
CN101711681A (en) * 2008-10-07 2010-05-26 株式会社东芝 Three-dimensional image processing apparatus
CN103150749A (en) * 2011-08-11 2013-06-12 西门子公司 Floating volume-of-interest in multilayer volume ray casting
CN105787922A (en) * 2015-12-16 2016-07-20 沈阳东软医疗系统有限公司 Method and apparatus for implementing automatic MPR batch processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A fast ray casting algorithm for medical volume rendering; Niu Cuixia; Journal of System Simulation (《系统仿真学报》); 2006-08-31; pp. 343-346 *
The application value of multi-slice spiral CT multi-planar reconstruction and volume rendering in hypertrophy syndrome of the transverse process of the fifth lumbar vertebra; Wang Lei; Journal of Practical Medical Imaging (《实用医学影像杂志》); 2016-12-31; pp. 471-474 *

Also Published As

Publication number Publication date
CN107705350A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
US8423124B2 (en) Method and system for spine visualization in 3D medical images
JP5639739B2 (en) Method and system for volume rendering of multiple views
CN107705350B (en) Medical image generation method, device and equipment
US20170135655A1 (en) Facial texture mapping to volume image
US7860331B2 (en) Purpose-driven enhancement filtering of anatomical data
US7492968B2 (en) System and method for segmenting a structure of interest using an interpolation of a separating surface in an area of attachment to a structure having similar properties
US20140147025A1 (en) System and method for improving workflow efficiences in reading tomosynthesis medical image data
US20080080770A1 (en) Method and system for identifying regions in an image
KR101775556B1 (en) Tomography apparatus and method for processing a tomography image thereof
US9424680B2 (en) Image data reformatting
JP2008532612A (en) Image processing system and method for alignment of two-dimensional information with three-dimensional volume data during medical treatment
AU2014231354B2 (en) Data display and processing algorithms for 3D imaging systems
US20080297509A1 (en) Image processing method and image processing program
JP2017164496A (en) Medical image processing apparatus and medical image processing program
JP2008509773A (en) Flexible 3D rotational angiography-computed tomography fusion method
US10013778B2 (en) Tomography apparatus and method of reconstructing tomography image by using the tomography apparatus
KR102171396B1 (en) Method for diagnosing dental lesion and apparatus thereof
US9585569B2 (en) Virtual endoscopic projection image generating device, method and program
CN108876783B (en) Image fusion method and system, medical equipment and image fusion terminal
CN112884879B (en) Method for providing a two-dimensional unfolded image of at least one tubular structure
CN110546684B (en) Quantitative evaluation of time-varying data
JP7387280B2 (en) Image processing device, image processing method and program
US10535167B2 (en) Method and system for tomosynthesis projection image enhancement and review
KR20180054020A (en) Apparatus and method for processing medical image, and computer readable recording medium related to the method
KR20160140189A (en) Apparatus and method for tomography imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province

Applicant after: Neusoft Medical Systems Co., Ltd.

Address before: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province

Applicant before: Shenyang Neusoft Medical Systems Co., Ltd.

GR01 Patent grant