CN112669941A - Medical image processing method and device, computer equipment and storage medium - Google Patents

Medical image processing method and device, computer equipment and storage medium

Info

Publication number
CN112669941A
CN112669941A (application CN202011588285.7A)
Authority
CN
China
Prior art keywords
volume data
global
sample volume
illumination image
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011588285.7A
Other languages
Chinese (zh)
Other versions
CN112669941B (en)
Inventor
万俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202011588285.7A
Publication of CN112669941A
Application granted
Publication of CN112669941B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to an image processing method and apparatus, a computer device, and a storage medium. The method comprises the following steps: obtaining, according to volume data to be processed and a first rendering parameter, a local illumination image and attribute information corresponding to the volume data to be processed, thereby rendering the volume data with local illumination; and then inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed, thereby realizing a method of obtaining a global illumination image from a local illumination image. With this method, a global illumination rendering result can be achieved without using a complex physical volume rendering algorithm, so that a user can intuitively perceive the shape and spatial position relationships of the imaged object corresponding to the volume data; real-time rendering becomes achievable, which in turn improves the user's ability to interact with the rendered image.

Description

Medical image processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of medical imaging technologies, and in particular, to a method and an apparatus for processing a medical image, a computer device, and a storage medium.
Background
In the field of medical imaging, imaging devices such as CT, MR, PET, and ultrasound are often used to acquire tissue structure or function information of a patient for disease diagnosis. The data acquired by these devices are usually represented as volume data, and volume rendering technologies have emerged to visualize such volume data.
For people to intuitively perceive the shape and spatial position relationships of an object, the lighting effect of a scene rendered by a volume rendering technique should be as realistic as possible. Currently, a path-tracing volume rendering method is generally adopted to progressively render a global illumination effect. The method proceeds as follows: a ray is cast from the camera; distance sampling is performed according to the importance sampling principle to obtain a scattering point; direction sampling is then performed so that the ray scatters; distance sampling is performed along the new direction to compute the next sampling point; and these steps repeat until the ray leaves the volume data or a specified stop condition is reached. This process constitutes one sample, and according to the Monte Carlo integration principle, billions of rays must be traced to obtain a noise-free global illumination effect.
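For illustration only, the following Python sketch shows the shape of one such path sample; sample_distance, sample_direction, albedo, and environment_light are hypothetical stand-ins for importance-sampled distance sampling, scattering-direction sampling, and volume lookups, not an actual rendering API.

```python
import numpy as np

def trace_one_sample(volume, ray_origin, ray_dir, rng, max_bounces=32):
    """One Monte Carlo path sample through the volume, per the steps above."""
    pos, direction = ray_origin, ray_dir
    throughput = np.ones(3)                    # accumulated RGB attenuation
    for _ in range(max_bounces):               # specified stop condition
        # Distance sampling according to the importance sampling principle
        t, inside = volume.sample_distance(pos, direction, rng)
        if not inside:                         # the ray left the volume data
            return throughput * volume.environment_light(direction)
        pos = pos + t * direction              # next scattering point
        throughput = throughput * volume.albedo(pos)
        # Direction sampling: scatter the ray into a new direction
        direction = volume.sample_direction(pos, direction, rng)
    return np.zeros(3)                         # path terminated without escaping

# Per the Monte Carlo integration principle, each pixel averages many samples:
# radiance = np.mean([trace_one_sample(...) for _ in range(n)], axis=0)
```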
However, this rendering method occupies a large amount of computing resources and takes a long time, resulting in poor rendering real-time performance.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a medical image processing method, apparatus, computer device, and storage medium that can achieve a global illumination effect while effectively improving rendering real-time performance.
In a first aspect, a method of processing a medical image, the method comprising:
obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and a first rendering parameter;
and inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
In one embodiment, the obtaining a local illumination image corresponding to the volume data to be processed according to the volume data to be processed and the first rendering parameter includes:
and rendering the volume data to be processed according to the first rendering parameter to obtain a local illumination image corresponding to the volume data to be processed.
In one embodiment, the attribute information includes at least one of an opacity, depth information, a direction of a gradient, and a magnitude of the gradient of the volume data to be processed during the rendering process.
In one embodiment, the global image generation network is a network generated based on local illumination images and global illumination images corresponding to the same sample volume data.
In one embodiment, a method for training the global image generation network includes:
acquiring the sample volume data and a second rendering parameter;
obtaining a local illumination image corresponding to the sample volume data, attribute information corresponding to the sample volume data and a global illumination image corresponding to the sample volume data according to the sample volume data and the second rendering parameter;
and training a global image generation network to be trained according to the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data and the global illumination image corresponding to the sample volume data to obtain the global image generation network.
In one embodiment, the obtaining, according to the sample volume data and the second rendering parameter, a local illumination image corresponding to the sample volume data, attribute information corresponding to the sample volume data, and a global illumination image corresponding to the sample volume data includes:
rendering the sample volume data according to the second rendering parameter to obtain a local illumination image corresponding to the sample volume data;
and performing physical rendering on the sample volume data according to the second rendering parameter to obtain a global illumination image corresponding to the sample volume data.
In one embodiment, the training a global image generation network to be trained according to the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data, and the global illumination image corresponding to the sample volume data to obtain the global image generation network includes:
inputting the local illumination image corresponding to the sample volume data and the attribute information corresponding to the sample volume data into the global image generation network to be trained to obtain an output result;
determining a value of training loss according to the output result and a global illumination image corresponding to the sample volume data;
and adjusting each parameter in the global image generation network to be trained according to the value of the training loss until the value of the loss meets a preset condition, so as to obtain the global image generation network.
In a second aspect, an apparatus for processing medical images, the apparatus comprising:
the first rendering module is used for obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and a first rendering parameter;
And the second rendering module is used for inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
In a third aspect, a computer device comprises a memory and a processor, the memory storing a computer program, and the processor implementing the method of the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method of the first aspect described above.
According to the medical image processing method and apparatus, the computer device, and the storage medium, the local illumination image and the attribute information corresponding to the volume data to be processed are obtained according to the volume data to be processed and the first rendering parameter, realizing local illumination rendering of the volume data; the local illumination image and the attribute information are then input into the global image generation network to obtain the global illumination image corresponding to the volume data to be processed, realizing a method of obtaining a global illumination image from a local illumination image. With this method, a global illumination rendering result can be achieved without using a complex physical volume rendering algorithm, so that a user can intuitively perceive the shape and spatial position relationships of the imaged object corresponding to the volume data; the goal of real-time rendering can be achieved, which in turn improves the user's ability to interact with the rendered image.
Drawings
FIG. 1 is a schematic diagram showing an internal configuration of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for processing medical images according to one embodiment;
FIG. 3 is a diagram illustrating a structure of a global image generation network in one embodiment;
FIG. 4 is a schematic flow diagram illustrating a method for training a global image generation network in one embodiment;
FIG. 5 is a diagram illustrating the architecture of an image processing network in one embodiment;
FIG. 6 is a flowchart illustrating an implementation manner of S202 in the embodiment of FIG. 4;
FIG. 7 is a flowchart illustrating an implementation manner of S203 in the embodiment of FIG. 4;
FIG. 8 is a diagram illustrating the structure of a training network in one embodiment;
FIG. 9 is a schematic diagram showing the configuration of a medical image processing apparatus according to an embodiment;
FIG. 10 is a schematic diagram showing the configuration of a medical image processing apparatus according to an embodiment;
FIG. 11 is a schematic diagram showing the configuration of a medical image processing apparatus according to an embodiment;
FIG. 12 is a schematic diagram showing the configuration of a medical image processing apparatus according to an embodiment;
FIG. 13 is a schematic diagram showing the configuration of a medical image processing apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The medical image processing method provided by the present application can be applied to a computer device as shown in fig. 1; the computer device may be a server or a terminal, and its internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of processing a medical image. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of a portion of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 2, a method for processing a medical image is provided, which is illustrated by applying the method to the computer device in fig. 1, and comprises the following steps:
s101, obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and the first rendering parameter.
The volume data to be processed is medical data obtained by acquiring tissue structure or function information of a patient, for disease diagnosis, using a Computed Tomography (CT) device, a Magnetic Resonance (MR) imaging device, a Positron Emission Tomography (PET) device, an ultrasound device, or another imaging device in the medical imaging field. The first rendering parameter represents the setting parameters of the light source attributes and/or of the camera during rendering, for example, the light source position, the intensity of the light emitted by the light source, the direction of that light, the refractive index of the surface of the imaged object, the reflection intensity of the imaged object, the current angle of view, the camera position, the camera focusing parameters, the segmentation mask of the volume data to be rendered, and the like. The local illumination image, that is, a partial global illumination image, is an illumination effect image that considers only the influence of the main light source on the image during rendering, ignoring the influence of other peripheral light, such as light reflected or refracted by other objects, or of other light sources. The attribute information represents attributes of the volume data to be processed during rendering. Optionally, the attribute information may include at least one of the opacity, depth information, gradient direction, and gradient magnitude of the volume data to be processed during rendering. For example, the attribute information may include the depth, gradient direction, and gradient magnitude at the point along the ray where the accumulated opacity of the volume data to be processed reaches a certain threshold.
In this embodiment, the computer device may be connected to a CT, MR, PET, or other device that scans the patient, so as to obtain the patient's volume data as the volume data to be processed. Meanwhile, the computer device may determine the first rendering parameter according to preset lighting conditions, the lighting environment, light source attributes, camera attributes, and the like, so as to set the illumination effect used during rendering. After the computer device determines the volume data to be processed and the corresponding first rendering parameter, it can render the volume data to be processed according to the first rendering parameter using a local illumination volume rendering algorithm, obtaining the local illumination image corresponding to the volume data to be processed. A local illumination volume rendering algorithm is a method of rendering volume data such that the rendered image carries a local illumination effect. In practical applications, the computer device may employ any existing local illumination volume rendering algorithm, for example, a real-time volume rendering algorithm. Optionally, the computer device may instead use a pre-trained local illumination volume rendering network to obtain the local illumination image from the volume data to be processed and the corresponding first rendering parameter.
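As a rough, non-authoritative sketch of this step, the following Python code casts a single ray with front-to-back compositing under a simple diffuse local shading term and records the attribute channels described above; the volume and transfer-function interfaces and the 0.5 capture threshold are assumptions made for illustration, not the method prescribed by the application.

```python
import numpy as np

def cast_ray_local(volume, origin, direction, transfer_fn, light_dir,
                   step=1.0, stop_alpha=0.95, capture_alpha=0.5):
    color, alpha = np.zeros(3), 0.0
    depth = grad_dir = grad_mag = None
    t = 0.0
    while alpha < stop_alpha and volume.inside(origin + t * direction):
        p = origin + t * direction
        rgb, a = transfer_fn(volume.value(p))         # color/opacity lookup
        g = volume.gradient(p)                        # finite-difference gradient
        n = g / (np.linalg.norm(g) + 1e-6)
        shaded = rgb * max(float(np.dot(-light_dir, n)), 0.0)  # main light source only
        color += (1.0 - alpha) * a * shaded           # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if depth is None and alpha >= capture_alpha:  # opacity reaches the threshold
            depth = t
            grad_mag = float(np.linalg.norm(g))
            grad_dir = g / (grad_mag + 1e-6)
        t += step
    attrs = {"opacity": alpha, "depth": depth,
             "grad_dir": grad_dir, "grad_mag": grad_mag}
    return color, attrs
```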
S102, inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
The global image generation network may be a pre-trained network, or a network continuously optimized through self-learning. The global image generation network may specifically include at least one of a deep convolutional network, a recurrent convolutional network, and a deconvolution network with an encoder-decoder structure, or a combination of several such networks. The global illumination image is an illumination effect image that comprehensively considers, during rendering, both the influence of the main light source on the illumination effect and the influence of other peripheral light, such as light reflected or refracted by other objects, or of other light sources. The global illumination image can provide the user with stronger spatial perception and depth perception.
In this embodiment, when the computer device has obtained the local illumination image and the attribute information corresponding to the volume data to be processed in the foregoing steps, it may input the local illumination image and the attribute information into the global image generation network, so as to further render the local illumination image and obtain the global illumination image corresponding to the volume data to be processed. Optionally, the computer device may instead input the local illumination image and the attribute information into the global image generation network to convert the local illumination image into the global illumination image, likewise obtaining the global illumination image corresponding to the volume data to be processed.
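In code, this step reduces to stacking the RGB local illumination image with the per-pixel attribute maps and running the generator once; the following minimal PyTorch sketch uses illustrative tensor layouts that are assumptions, not prescribed conventions:

```python
import torch

def generate_global(net, local_img, attr_maps):
    """local_img: H x W x 3 tensor; attr_maps: H x W x C tensor (one channel
    per attribute); net: the trained global image generation network."""
    x = torch.cat([local_img, attr_maps], dim=-1)  # H x W x (3 + C)
    x = x.permute(2, 0, 1).unsqueeze(0).float()    # 1 x (3 + C) x H x W
    with torch.no_grad():
        y = net(x)                                 # 1 x 3 x H x W
    return y.squeeze(0).permute(1, 2, 0)           # global illumination image
```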
According to the above medical image processing method, the local illumination image and the attribute information corresponding to the volume data to be processed are obtained according to the volume data to be processed and the first rendering parameter, realizing local illumination rendering of the volume data; the local illumination image and the attribute information are then input into the global image generation network to obtain the global illumination image corresponding to the volume data to be processed, realizing a method of obtaining a global illumination image from a local illumination image. With this method, a global illumination rendering result can be achieved without using a complex physical volume rendering algorithm, so that the user can intuitively perceive the shape and spatial position relationships of the imaged object corresponding to the volume data; the goal of real-time rendering can be achieved, which in turn improves the user's ability to interact with the rendered image.
In an embodiment, an implementation manner of the foregoing S101 is provided, where in the foregoing S101, "obtaining, according to the volume data to be processed and the first rendering parameter, a local illumination image and attribute information corresponding to the volume data to be processed", specifically includes: and rendering the volume data to be processed according to the first rendering parameter to obtain a local illumination image corresponding to the volume data to be processed.
After the computer device determines the volume data to be processed and the corresponding first rendering parameter, it renders the volume data to be processed in real time according to the first rendering parameter using a real-time volume rendering algorithm, obtaining the local illumination image corresponding to the volume data to be processed. Optionally, the computer device may instead use a pre-trained volume rendering network to obtain the local illumination image from the volume data to be processed and the corresponding first rendering parameter.
In actual medical practice, volume rendering is a classic volume data visualization technique: through interactive operation, people can conveniently perceive the anatomical structure and functional metabolism of the human body, so volume rendering needs to achieve real-time interactive performance for convenient observation. In this embodiment, the local illumination image is obtained through a real-time volume rendering algorithm, achieving a real-time rendering effect and allowing people to quickly perceive the shape and spatial position of objects in the image.
Optionally, based on the medical image processing method described in the foregoing embodiments, this embodiment further provides a global image generation network, as shown in fig. 3. The global image generation network adopts an encoder-decoder design with a U-shaped structure and, to ensure that it adapts to images of any size, is a fully convolutional network. When the global image generation network is used to obtain a global illumination image from a local illumination image, the input local illumination image is an RGB three-channel two-dimensional image, and the input attribute information is per-pixel, i.e., its dimensions match the width and height of the local illumination image, with each attribute occupying one channel (multiple attributes yield multiple channels). The local illumination image and the attribute information images are stacked together to form a multi-channel matrix, which serves as the input to the whole network. In the encoding stage, the matrix passes through a convolution downsampling module multiple times, extracting features of the original information step by step and building high-level features from low-level features; the high-level features are represented by a matrix with reduced width and height and a multiplied number of channels. The decoding stage is the reverse of the encoding stage: each stage fuses the matrix of the corresponding level of the encoding stage, and after passing through a convolution upsampling module multiple times, a global illumination image of the same size as the local illumination image is obtained. Optionally, the convolution downsampling module in the encoding stage may use three 3 × 3 convolutions and one stride-2 convolution for downsampling, and the convolution upsampling module in the decoding stage may use three 3 × 3 convolutions and one stride-2 deconvolution.
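A minimal PyTorch sketch of such a U-shaped fully convolutional generator follows; the channel widths, network depth, and activations are illustrative assumptions rather than values specified by the application, and the input width and height are assumed divisible by 2^depth:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Three 3x3 convolutions, as described for each encoder/decoder stage
    layers = []
    for i in range(3):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class GlobalImageGenerator(nn.Module):
    def __init__(self, in_ch, widths=(32, 64, 128)):
        super().__init__()
        self.enc, self.down = nn.ModuleList(), nn.ModuleList()
        c = in_ch
        for w in widths:
            self.enc.append(conv_block(c, w))
            self.down.append(nn.Conv2d(w, w, 3, stride=2, padding=1))  # stride-2 conv downsampling
            c = w
        self.bottleneck = conv_block(c, c)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for w in reversed(widths):
            self.up.append(nn.ConvTranspose2d(c, w, 2, stride=2))      # stride-2 deconvolution
            self.dec.append(conv_block(2 * w, w))                      # fuse encoder skip
            c = w
        self.head = nn.Conv2d(c, 3, 1)                                 # RGB global illumination image

    def forward(self, x):
        skips = []
        for enc, down in zip(self.enc, self.down):
            x = enc(x)
            skips.append(x)      # keep the matrix of this level for fusion
            x = down(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))  # fuse corresponding encoder level
        return self.head(x)
```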
Optionally, the global image generation network in the above embodiment may be a pre-trained network, specifically a network generated by training based on the local illumination image and the global illumination image corresponding to the same sample volume data.
It can be understood that, when training the global image generation network, the local illumination image and the global illumination image generated by the same sample volume data may be selected, that is, the local illumination image and the global illumination image are selected or generated in a paired manner.
Based on the foregoing embodiments, the present application provides a method for training the global image generation network, as shown in fig. 4, the training method includes:
s201, obtaining sample volume data and a second rendering parameter.
The sample volume data is of the same type and is obtained in the same manner as the volume data to be processed in the foregoing S101, and the second rendering parameter is of the same type and is obtained in the same manner as the first rendering parameter; reference may be made to the foregoing description, and details are not repeated here.
S202, according to the sample volume data and the second rendering parameters, a local illumination image corresponding to the sample volume data, attribute information corresponding to the sample volume data and a global illumination image corresponding to the sample volume data are obtained.
In this embodiment, after the computer device obtains the sample volume data and the second rendering parameter, it may perform local illumination rendering on the sample volume data using the second rendering parameter to obtain a local illumination image corresponding to the sample volume data, while also obtaining attribute information corresponding to the sample volume data during rendering, for example, at least one of the opacity, depth information, gradient direction, and gradient magnitude of the sample volume data. Correspondingly, the computer device can perform global illumination rendering on the same sample volume data using the second rendering parameter to obtain a global illumination image corresponding to the sample volume data.
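Schematically, producing one paired training sample from the same sample volume data and the same second rendering parameter might look like the sketch below, where local_render and physical_render are hypothetical stand-ins for the two renderers:

```python
def make_training_pair(sample_volume, second_rendering_param):
    """One (input, gold standard) pair generated from the same sample volume
    data, so the local and global images are paired by construction."""
    local_img, attrs = local_render(sample_volume, second_rendering_param)
    global_img = physical_render(sample_volume, second_rendering_param)
    return (local_img, attrs), global_img
```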
S203, training the global image generation network to be trained according to the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data and the global illumination image corresponding to the sample volume data to obtain a global image generation network.
The global image generation network to be trained may be predefined by the computer device, and may specifically be any one, or a combination, of a deep convolutional network, a recurrent convolutional network, a deconvolution network with an encoder-decoder structure, and the like.
In this embodiment, the computer device may use the local illumination image corresponding to the sample volume data and the attribute information corresponding to the sample volume data as training data, use the global illumination image corresponding to the sample volume data as the training target, i.e., the gold standard, and train the global image generation network to be trained; the trained network is the global image generation network used in the foregoing embodiments.
Based on the above embodiments, the present application provides an image processing network, as shown in fig. 5, the image processing network includes: the local illumination rendering module and the global image generation network. The local illumination rendering module is used for rendering the volume data to be processed according to the input volume data to be processed and the first rendering parameter to obtain a local illumination image and attribute information corresponding to the volume data to be processed; the global image generation network is used for performing secondary rendering or conversion on the local illumination image according to the input local illumination image and the attribute information to obtain a global illumination image corresponding to the volume data to be processed.
Optionally, as shown in fig. 6, the step S202 specifically includes:
and S301, rendering the sample volume data according to the second rendering parameter to obtain a local illumination image corresponding to the sample volume data.
After the computer device obtains the sample volume data and the corresponding second rendering parameter, the sample volume data can be rendered in real time according to the second rendering parameter by adopting a real-time volume rendering algorithm, and the local illumination image corresponding to the sample volume data is obtained. Optionally, the computer device may further use a pre-trained volume rendering network to obtain a local illumination image corresponding to the sample volume data based on the sample volume data and the corresponding second rendering parameter.
And S302, performing physical rendering on the sample volume data according to the second rendering parameter to obtain a global illumination image corresponding to the sample volume data.
After the computer device obtains the sample volume data and the corresponding second rendering parameter, a physical volume rendering algorithm may be adopted to physically render the sample volume data according to the second rendering parameter, so as to obtain a global illumination image corresponding to the sample volume data. Optionally, the computer device may further use a pre-trained physical volume rendering network to obtain a global illumination image corresponding to the sample volume data based on the sample volume data and the corresponding second rendering parameter.
Optionally, as shown in fig. 7, the step S203 specifically includes:
s401, inputting the local illumination image corresponding to the sample volume data and the attribute information corresponding to the sample volume data into a global image generation network to be trained, and obtaining an output result.
When the computer device has obtained the local illumination image corresponding to the sample volume data and the attribute information corresponding to the sample volume data in the foregoing steps, it may input them into the global image generation network to be trained, so as to perform secondary rendering, or conversion, of the local illumination image corresponding to the sample volume data, obtaining a secondarily rendered or converted global illumination image, namely the output result.
S402, determining a value of training loss according to the output result and the global illumination image corresponding to the sample volume data.
The computer device may further compute a cross-entropy or other loss between the secondarily rendered or converted global illumination image and the global illumination image corresponding to the sample volume data, obtaining the value of the training loss for the current global image generation network to be trained.
Optionally, in the process of training the global image generation network, in order to enable the network to learn the characteristics of global illumination, two losses may be designed for training the network.
For example, the spatial loss Ls can be obtained using the following relation (1) or a variant thereof:

Ls = (1/N) Σi ||Pi − Ti||²  (1)

The frequency loss Lf can be obtained using the following relation (2) or a variant thereof:

Lf = (1/N) Σi ||∇Pi − ∇Ti||²  (2)

The final loss can be obtained as a weighted sum of the two losses above, for example by the following relation (3) or a variant thereof:

L = ws·Ls + wf·Lf  (3)

In the above formulas, P is the predicted image, i.e., the image included in the output result; T is the target image, i.e., the global illumination image corresponding to the sample volume data; N is the number of image pixels; Ls expresses the per-pixel loss of the image in the spatial domain; Lf expresses the loss over the spatial gradient, i.e., the frequency variation of the image; ws is the weight of Ls and wf is the weight of Lf; L denotes the final loss; and ∇ denotes the gradient operator.
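A PyTorch sketch of this weighted loss, with the gradient operator realized as finite differences, might read as follows; the exact norms are assumptions consistent with the definitions above:

```python
import torch
import torch.nn.functional as F

def image_gradient(img):
    # Finite-difference spatial gradients along width and height
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def training_loss(pred, target, w_s=1.0, w_f=1.0):
    """L = ws*Ls + wf*Lf for (B, 3, H, W) tensors, per relations (1)-(3)."""
    loss_s = F.mse_loss(pred, target)                     # relation (1): spatial pixel loss
    pdx, pdy = image_gradient(pred)
    tdx, tdy = image_gradient(target)
    loss_f = F.mse_loss(pdx, tdx) + F.mse_loss(pdy, tdy)  # relation (2): gradient/frequency loss
    return w_s * loss_s + w_f * loss_f                    # relation (3): weighted sum
```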
And S403, adjusting each parameter in the global image generation network to be trained according to the value of the training loss until the value of the training loss meets a preset condition, and obtaining the global image generation network.
After the computer device obtains the value of the training loss, it may adjust each parameter in the global image generation network to be trained according to that value, form a new global image generation network to be trained from the adjusted parameters, and then return to step S401, iterating until the value of the training loss meets the preset condition, i.e., the training goal is reached; the global image generation network applied in the foregoing embodiments is thereby obtained.
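A minimal training loop covering S401 to S403 might look like the following sketch; the optimizer, learning rate, and stopping condition are illustrative assumptions, and training_loss refers to the loss sketched after relations (1)-(3):

```python
import torch

def train_generator(net, loader, epochs=100, lr=1e-4, stop_loss=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        running = 0.0
        for x, target in loader:   # x: local image + attributes; target: global image
            opt.zero_grad()
            out = net(x)                        # S401: obtain the output result
            loss = training_loss(out, target)   # S402: value of the training loss
            loss.backward()
            opt.step()                          # S403: adjust each parameter
            running += loss.item()
        if running / len(loader) < stop_loss:   # preset condition (assumed form)
            break
    return net
```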
According to the above method for training the global image generation network, the trained network learns the ability to generate a global illumination image from a local illumination image (or partial global illumination image) and attribute information, so that when the trained global image generation network is used later, a global illumination image can be obtained quickly from only a local illumination image or partial global illumination image.
Based on the foregoing embodiments, the present application provides a training network, as shown in fig. 8, where the training network includes: a local illumination rendering module, a global illumination rendering module, a global image generation network to be trained, and a training loss module.
The local illumination rendering module is used for rendering the sample volume data according to the input sample volume data and a second rendering parameter to obtain a local illumination image corresponding to the sample volume data and attribute information corresponding to the sample volume data; the global illumination rendering module is used for physically rendering the sample volume data according to the input sample volume data and the second rendering parameter to obtain a global illumination image corresponding to the sample volume data. It should be noted that the local illumination rendering module may specifically adopt a real-time rendering algorithm to render the local illumination effect, while the global illumination rendering module may specifically adopt a physical rendering algorithm to render the full global illumination effect.
In this embodiment, when the computer device has obtained the local illumination image, the attribute information, and the global illumination image corresponding to the same sample volume data through the local illumination rendering module and the global illumination rendering module, it may use the local illumination image and the attribute information as training data and the global illumination image as the gold standard, and start training the global image generation network to be trained. Specifically, the value of the training loss may be determined based on the output result of the global image generation network to be trained and the global illumination image output by the global illumination rendering module; each parameter of the global image generation network to be trained is then adjusted according to the value of the training loss until training is complete, whereupon the global image generation network is obtained.
It should be understood that although the steps in the flowcharts of figs. 2-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-7 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily executed sequentially but may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a medical image processing apparatus including:
the first rendering module 11 is configured to obtain a local illumination image and attribute information corresponding to volume data to be processed according to the volume data to be processed and a first rendering parameter;
and the second rendering module 12 is configured to input the local illumination image and the attribute information to a global image generation network, so as to obtain a global illumination image corresponding to the volume data to be processed.
In an embodiment, the first rendering module 11 is specifically configured to render the to-be-processed volume data according to the first rendering parameter, so as to obtain a local illumination image corresponding to the to-be-processed volume data.
In one embodiment, the attribute information includes at least one of opacity, depth information, direction of gradient, and size of gradient of the volume data to be processed during rendering.
In one embodiment, the global image generation network is a network generated based on local illumination images and global illumination images corresponding to the same sample volume data.
In one embodiment, as shown in fig. 10, the medical image processing apparatus includes:
and the training module 13 is configured to train the global image generation network.
In one embodiment, as shown in fig. 11, the training module 13 includes:
a first obtaining unit 131, configured to obtain the sample volume data and a second rendering parameter;
a second obtaining unit 132, configured to obtain, according to the sample volume data and the second rendering parameter, a local illumination image corresponding to the sample volume data, attribute information corresponding to the sample volume data, and a global illumination image corresponding to the sample volume data;
the training unit 133 is configured to train a global image generation network to be trained according to the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data, and the global illumination image corresponding to the sample volume data, so as to obtain the global image generation network.
In one embodiment, as shown in fig. 12, the second obtaining unit 132 includes:
the first obtaining subunit 1321, configured to render the sample volume data according to the second rendering parameter, so as to obtain a local illumination image corresponding to the sample volume data;
a second obtaining subunit 1322 is configured to perform physical rendering on the sample volume data according to the second rendering parameter, so as to obtain a global illumination image corresponding to the sample volume data.
In one embodiment, as shown in fig. 13, the training unit 133 includes:
an input subunit 1331, configured to input the local illumination image corresponding to the sample volume data and the attribute information corresponding to the sample volume data into the global image generation network to be trained, so as to obtain an output result;
a determining subunit 1332, configured to determine a value of training loss according to the output result and the global illumination image corresponding to the sample volume data;
a training subunit 1333, configured to adjust each parameter in the global image generation network to be trained according to the value of the training loss until the value of the training loss meets a preset condition, so as to obtain the global image generation network.
For specific limitations of the medical image processing apparatus, reference may be made to the above limitations of the medical image processing method, which are not repeated here. The modules in the medical image processing apparatus can be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or independent of, a processor in the computer device, or can be stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and a first rendering parameter;
and inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and a first rendering parameter;
and inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction among them, such combinations should all be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of processing a medical image, the method comprising:
obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and a first rendering parameter;
and inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
2. The method according to claim 1, wherein obtaining the local illumination image corresponding to the volume data to be processed according to the volume data to be processed and the first rendering parameter comprises:
and rendering the volume data to be processed according to the first rendering parameter to obtain a local illumination image corresponding to the volume data to be processed.
3. The method according to claim 1 or 2, wherein the attribute information includes at least one of opacity, depth information, direction of gradient, and size of gradient of the volume data to be processed during rendering.
4. The method of claim 1, wherein the global image generation network is a network generated by training based on local illumination images and global illumination images corresponding to the same sample volume data.
5. The method of claim 4, wherein training the global image generation network comprises:
acquiring the sample volume data and a second rendering parameter;
obtaining a local illumination image corresponding to the sample volume data, attribute information corresponding to the sample volume data and a global illumination image corresponding to the sample volume data according to the sample volume data and the second rendering parameter;
and training a global image generation network to be trained according to the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data and the global illumination image corresponding to the sample volume data to obtain the global image generation network.
6. The method according to claim 5, wherein obtaining the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data, and the global illumination image corresponding to the sample volume data according to the sample volume data and the second rendering parameter comprises:
rendering the sample volume data according to the second rendering parameter to obtain a local illumination image corresponding to the sample volume data;
and performing physical rendering on the sample volume data according to the second rendering parameter to obtain a global illumination image corresponding to the sample volume data.
7. The method according to claim 5, wherein the training a global image generation network to be trained according to the local illumination image corresponding to the sample volume data, the attribute information corresponding to the sample volume data, and the global illumination image corresponding to the sample volume data to obtain the global image generation network comprises:
inputting the local illumination image corresponding to the sample volume data and the attribute information corresponding to the sample volume data into the global image generation network to be trained to obtain an output result;
determining a value of training loss according to the output result and a global illumination image corresponding to the sample volume data;
and adjusting each parameter in the global image generation network to be trained according to the value of the training loss until the value of the training loss meets a preset condition, so as to obtain the global image generation network.
8. An apparatus for processing medical images, the apparatus comprising:
the first rendering module is used for obtaining a local illumination image and attribute information corresponding to the volume data to be processed according to the volume data to be processed and a first rendering parameter;
and the second rendering module is used for inputting the local illumination image and the attribute information into a global image generation network to obtain a global illumination image corresponding to the volume data to be processed.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011588285.7A 2020-12-28 2020-12-28 Medical image processing method, medical image processing device, computer equipment and storage medium Active CN112669941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011588285.7A CN112669941B (en) 2020-12-28 2020-12-28 Medical image processing method, medical image processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112669941A (en) 2021-04-16
CN112669941B (en) 2023-05-26

Family

ID=75411647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011588285.7A Active CN112669941B (en) 2020-12-28 2020-12-28 Medical image processing method, medical image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112669941B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933751A (en) * 2015-07-20 2015-09-23 上海交通大学医学院附属瑞金医院 Angiocarpy coronary artery enhanced volume rendering method and system based on local histograms
CN107292875A (en) * 2017-06-29 2017-10-24 西安建筑科技大学 A kind of conspicuousness detection method based on global Local Feature Fusion
US20190073569A1 (en) * 2017-09-07 2019-03-07 International Business Machines Corporation Classifying medical images using deep convolution neural network (cnn) architecture
CN108538370A (en) * 2018-03-30 2018-09-14 北京灵医灵科技有限公司 A kind of illumination volume drawing output method and device
CN110391014A (en) * 2018-04-18 2019-10-29 西门子医疗有限公司 Utilize the medical image acquisition for the sequence prediction for using deep learning
CN108876764A (en) * 2018-05-21 2018-11-23 北京旷视科技有限公司 Render image acquiring method, device, system and storage medium
CN109360233A (en) * 2018-09-12 2019-02-19 沈阳东软医疗系统有限公司 Image interfusion method, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706583A (en) * 2021-09-01 2021-11-26 上海联影医疗科技股份有限公司 Image processing method, image processing device, computer equipment and storage medium
WO2023029321A1 (en) * 2021-09-01 2023-03-09 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for volume data rendering
CN113706583B (en) * 2021-09-01 2024-03-22 上海联影医疗科技股份有限公司 Image processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112669941B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
Zhu et al. How can we make GAN perform better in single medical image super-resolution? A lesion focused multi-scale approach
US20210264599A1 (en) Deep learning based medical image detection method and related device
US20200184708A1 (en) Consistent 3d rendering in medical imaging
US10339695B2 (en) Content-based medical image rendering based on machine learning
US10893262B2 (en) Lightfield rendering based on depths from physically-based volume rendering
CN110998602A (en) Classification and 3D modeling of 3D dento-maxillofacial structures using deep learning methods
JP2020500579A (en) Choosing acquisition parameters for an imaging system
CN111598989B (en) Image rendering parameter setting method and device, electronic equipment and storage medium
US20200242744A1 (en) Forecasting Images for Image Processing
US20210327105A1 (en) Systems and methods to semi-automatically segment a 3d medical image using a real-time edge-aware brush
KR20200137768A (en) A Method and Apparatus for Segmentation of Orbital Bone in Head and Neck CT image by Using Deep Learning and Multi-Graylevel Network
CN113496494A (en) Two-dimensional skeleton segmentation method and device based on DRR simulation data generation
WO2020234349A1 (en) Sampling latent variables to generate multiple segmentations of an image
CN111739614A (en) Medical image enhancement
KR101885562B1 (en) Method for mapping region of interest in first medical image onto second medical image and apparatus using the same
CN114897756A (en) Model training method, medical image fusion method, device, equipment and medium
KR102037117B1 (en) Method and program for reducing artifacts by structural simirality, and medial imaging device
CN112669941B (en) Medical image processing method, medical image processing device, computer equipment and storage medium
CN114972211A (en) Training method, segmentation method, device, equipment and medium of image segmentation model
Zhou et al. A superior image inpainting scheme using Transformer-based self-supervised attention GAN model
CN111489318B (en) Medical image enhancement method and computer-readable storage medium
CN115397332A (en) System and method for real-time video enhancement
CN115965785A (en) Image segmentation method, device, equipment, program product and medium
CN113706583B (en) Image processing method, device, computer equipment and storage medium
JP2019181168A (en) Medical image diagnostic apparatus, medical image processing device, and medical image processing program

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant