CN114820861A - MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN - Google Patents


Info

Publication number
CN114820861A
CN114820861A (application CN202210537273.4A)
Authority
CN
China
Prior art keywords
image
generator
channel
contour
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210537273.4A
Other languages
Chinese (zh)
Inventor
王少彬
陈颀
白璐
陈宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yizhiying Technology Co ltd
Original Assignee
Beijing Yizhiying Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhiying Technology Co ltd filed Critical Beijing Yizhiying Technology Co ltd
Priority to CN202210537273.4A priority Critical patent/CN114820861A/en
Publication of CN114820861A publication Critical patent/CN114820861A/en
Pending legal-status Critical Current

Classifications

    • G06T 11/00: 2D [Two Dimensional] image generation; G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06F 18/20: Pattern recognition, analysing; G06F 18/22: Matching criteria, e.g. proximity measures
    • G06N 3/04: Neural network architecture; G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Neural network learning methods
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/136: Segmentation or edge detection involving thresholding
    • G06T 2207/10081: Image acquisition modality, computed X-ray tomography [CT]
    • G06T 2207/20081: Special algorithmic details, training/learning
    • G06T 2207/20084: Special algorithmic details, artificial neural networks [ANN]
    • G06T 2207/30004: Subject of image, biomedical image processing
    • G06T 2210/41: Indexing scheme for image generation, medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the application provide a CycleGAN-based MR synthetic CT method, apparatus, and computer-readable storage medium. The method includes acquiring an MR image; inputting the MR image into a synthetic model trained on a CycleGAN network, and outputting a CT image corresponding to the MR image; wherein parameters of a generator in the synthetic model are adjusted based on a structural similarity constraint. In this way, high-accuracy CT can be synthesized, the structure preservation and detail recovery of the synthesized CT result are effectively improved, and background interference is effectively suppressed.

Description

MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN
Technical Field
Embodiments of the present application relate to the field of data processing, and in particular, to a method, an apparatus, and a computer-readable storage medium for MR synthesis CT based on CycleGAN.
Background
Medical images are of great significance for medical diagnosis and treatment. In general, because data from a single modality is limited, doctors need data from different modalities as a basis for diagnosis and an aid to treatment. CT (computed tomography) and MR (magnetic resonance) images are the reference images most commonly used by physicians. CT is a common reference image in current image-guided radiotherapy: it provides the density information required for formulating a radiotherapy dose plan, and offers high spatial resolution and simple operation.
Currently, most mainstream methods for generating CT with generative adversarial networks use an L1 loss function to constrain the similarity between the reconstructed MR and the original MR. However, the L1 loss is computed only as the pixel-by-pixel absolute error of pixel values, and the structural similarity between the two images is not considered. Since medical images demand extremely high accuracy, how to generate high-accuracy CT is an urgent problem to be solved.
Disclosure of Invention
According to an embodiment of the present application, a CycleGAN based MR synthetic CT protocol is provided.
In a first aspect of the application, a CycleGAN-based MR synthetic CT method is provided. The method comprises the following steps:
acquiring an MR image;
inputting the MR image into a synthetic model trained on a CycleGAN network, and outputting a CT image corresponding to the MR image;
wherein parameters of a generator in the synthetic model are adjusted based on a structural similarity constraint.
Further, training the synthetic model by:
constructing a training sample set, wherein the training sample set comprises a preset number of MR images, and CT images and conventional CT images corresponding to the MR images;
taking the MR image in the training sample set as input, taking a CT image corresponding to the MR image as output, and training a generator based on a cycleGAN network;
training a discriminator by taking the conventional CT image in the training sample set as input and the CT image corresponding to the MR image as output;
and adjusting the parameters of the generator according to the difference between the loss functions of the generator and the discriminator until that difference is smaller than a preset threshold value, at which point the generator and the discriminator are taken as the final synthetic model.
Further, the adjusting the parameters of the generator in the synthetic model based on the structural similarity constraint includes: extracting the contours of the MR image and the output CT image;
calculating the similarity of the MR image contour and the CT image contour based on the structural similarity constraint; wherein the structural similarity constraint comprises a mutual information loss function and an active contour loss function;
based on the similarity, parameters of a generator in the synthetic model are adjusted.
Further, the calculating the similarity of the MR image contour and the CT image contour based on the structural similarity constraint comprises:
wherein calculating the similarity of the MR image contour and the CT image contour based on the mutual information loss function comprises: calculating the similarity of the MR image contour and the CT image contour by the following formula:
MI(X; Y) = Σ_x Σ_y p(x, y) · log( p(x, y) / (p(x) · p(y)) )

wherein MI denotes mutual information;

p(x) and p(y) denote the probability distributions of the contours extracted from the MR image I_MR and the CT image G(I_MR), respectively;

p(x, y) denotes the joint probability distribution of p(x) and p(y);
calculating the similarity of the MR image contour and the CT image contour based on the active contour loss function comprises:
defining the active contour loss function as:

Loss_AC = Length + λ · Region

wherein, in continuous form,

Length = ∮_C |∇u| ds,  Region = | ∫_Ω u · (v − c1)² dx | + | ∫_Ω (1 − u) · (v − c2)² dx |

Converting Length and Region into single-pixel form:

Length = Σ_{i=1}^{m} Σ_{j=1}^{n} √( (u_{x,i,j})² + (u_{y,i,j})² + ε )

Region = | Σ_{i=1}^{m} Σ_{j=1}^{n} u_{i,j} · (v_{i,j} − c1)² | + | Σ_{i=1}^{m} Σ_{j=1}^{n} (1 − u_{i,j}) · (v_{i,j} − c2)² |

c1 and c2 are defined as constant m × n matrices, e.g. c1 = 1 (all ones) and c2 = 0 (all zeros);

wherein v represents the segmentation reference value;

u represents the predicted value;

x and y in u_{x,i,j} and u_{y,i,j} represent the horizontal and vertical directions, respectively;

ε represents a preset parameter;

c1 and c2 represent the internal and external energy, respectively.
Further, the method further comprises:

optimizing the multichannel features of the generator trained on the CycleGAN network through a channel attention mechanism.
Further, optimizing the multichannel features of the generator trained on the CycleGAN network through a channel attention mechanism comprises:

performing a reshape transformation, and a reshape transformation followed by a transposition, on the MR image in the training sample to obtain two corresponding feature response maps;

obtaining an attention response map of the channel dimension through softmax, based on the two corresponding feature response maps;

and optimizing the multichannel features of the generator based on the attention response map and the reshape-transformed feature response map of the MR image.
Further, the obtaining of the attention response map of the channel dimension by softmax based on the two corresponding feature response maps comprises:
the attention response graph of the channel dimension is calculated by the following formula:
x_ji = exp(A_i · A_j) / Σ_{i=1}^{C} exp(A_i · A_j)

wherein x_ji indicates the influence of the i-th channel on the j-th channel;

A_i denotes the response map obtained by reshape transformation of the MR image;

A_j denotes the response map obtained by reshape transformation followed by transposition of the MR image.
Further, the optimizing of the multichannel features of the generator based on the attention response map and the reshape-transformed feature response map of the MR image comprises:

calculating the channel features of the generator by the following formula:

E_j = β · Σ_{i=1}^{C} (x_ji · A_i) + A_j

wherein β is a scale coefficient;

and optimizing the multi-channel features of the generator based on the channel features.
In a second aspect of the present application, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
In a third aspect of the present application, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements a method as in the first aspect of the present application.
According to the CycleGAN-based MR synthetic CT method provided by the embodiments of the application, an MR image is acquired; the MR image is input into a synthetic model trained on a CycleGAN network, and a CT image corresponding to the MR image is output; and the parameters of the generator in the synthetic model are adjusted based on a structural similarity constraint. In this way, high-accuracy CT can be synthesized, the structure preservation and detail recovery of the synthesized CT result are effectively improved, and background interference is effectively suppressed.
It should be understood that what is described in this summary section is not intended to limit key or critical features of the embodiments of the application, nor is it intended to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present application will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein: fig. 1 shows a system architecture diagram in accordance with a method provided by an embodiment of the present application.
FIG. 2 shows a flow chart of a CycleGAN-based MR synthetic CT method according to an embodiment of the present application;
FIG. 3 shows a schematic diagram of training a synthetic model according to an embodiment of the application;
FIG. 4 shows a schematic diagram of a configuration of a channel attention mechanism according to an embodiment of the present application;
FIG. 5 shows a diagram of reshape transform and reshape transform with transpose according to an embodiment of the present application;
FIG. 6 shows a CT schematic according to an embodiment of the present application;
fig. 7 shows a schematic structural diagram of a terminal device or a server suitable for implementing the embodiments of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the CycleGAN-based MR synthetic CT method of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a model training application, a video recognition application, a web browser application, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices with a display screen, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
When the terminals 101, 102, 103 are hardware, a video capture device may also be installed thereon. The video acquisition equipment can be various equipment capable of realizing the function of acquiring video, such as a camera, a sensor and the like. The user may capture video using a video capture device on the terminal 101, 102, 103.
The server 105 may be a server that provides various services, such as a background server that processes data displayed on the terminal devices 101, 102, 103. The background server may perform processing such as analysis on the received data, and may feed back a processing result (e.g., an identification result) to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where the target data does not need to be acquired from a remote place, the above system architecture may not include a network but only a terminal device or a server.
Fig. 2 is a flowchart of the method for MR synthesis CT based on CycleGAN according to the embodiment of the present application. As can be seen from fig. 2, the method for MR synthesis CT based on CycleGAN of the present embodiment includes the following steps: s210, acquiring an MR image.
In the present embodiment, an executing subject (e.g., a server shown in fig. 1) for the CycleGAN-based MR synthetic CT method may acquire an MR image by a wired manner or a wireless connection manner.
Further, the execution subject may acquire an MR image transmitted by an electronic device (for example, the terminal device shown in fig. 1) communicatively connected to the execution subject, or may acquire an MR image that is pre-stored locally.
MR imaging is the nuclear magnetic resonance modality among imaging examinations; it is commonly used for examining the brain, bones, internal organs, soft tissue, and the like, has distinctive imaging characteristics, and can provide localization and qualitative reference.
And S220, inputting the MR image into a synthetic model based on the cycleGAN network training, and outputting CT corresponding to the MR image.
As shown in fig. 3, the implementation principle of the present disclosure is: the MR image is input to the synthetic model to synthesize a CT image, and the synthesized CT image is used by the synthetic model (generator) to reconstruct the original MR image, thereby forming a cycle. That is, with the method of the present disclosure, not only can an MR image be converted into a CT image, but the synthesized CT image can also be restored to an MR image; at the same time, same-modality conversion is possible, such as MR to MR and CT to CT (improving image accuracy).
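The cycle MR → CT → MR described above is typically enforced with a cycle-consistency penalty. A minimal NumPy sketch follows; the reconstructed image here is a stand-in for what the generators would produce, since the patent does not prescribe exact code:

```python
import numpy as np

def cycle_consistency_l1(real_mr: np.ndarray, reconstructed_mr: np.ndarray) -> float:
    """Mean absolute error between the original MR and the MR
    reconstructed from the synthesized CT (MR -> CT -> MR)."""
    return float(np.mean(np.abs(real_mr - reconstructed_mr)))

# Toy example with stand-in generator outputs:
mr = np.random.rand(4, 4).astype(np.float32)
loss_identical = cycle_consistency_l1(mr, mr)      # perfect reconstruction
loss_shifted = cycle_consistency_l1(mr, mr + 0.1)  # constant offset of 0.1
```

A perfect cycle gives zero loss; any deviation of the reconstruction from the original MR is penalized linearly.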
In some embodiments, the synthetic model is trained by:
constructing a training sample set, wherein the training sample set comprises a preset number of MR images, and CT images and conventional CT images corresponding to the MR images;
taking the MR image in the training sample set as input, taking a CT image corresponding to the MR image as output, and training a generator based on a cycleGAN network;
training a discriminator by taking the conventional CT image in the training sample set as input and the CT image corresponding to the MR image as output;
adjusting the parameters of the generator according to the difference between the loss functions of the generator and the discriminator until that difference is smaller than a preset threshold, at which point the generator and the discriminator are taken as the final synthetic model. The training sample set may be an unpaired MR and CT image dataset, and the preset number of MR images includes reconstructed MR images corresponding to the CT images, constructed by the generator in the synthetic model from the output CT images.
In some embodiments, for more accurate training of the synthetic model, the data in the constructed training sample set may be normalized to a spatial resolution of 1 mm and an image size of 512 × 512 pixels. Meanwhile, a training set and a test set can be constructed according to a preset ratio. For example, the training set contains 6688/5734 sets of MR and CT image pairs and the test set contains 935/954 sets. The training mode may be unpaired, i.e., using the unpaired MR and CT image data sets in the training samples (matching a reconstructed MR image to a CT image by adjusting generator parameters as described below).
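A sketch of the preprocessing described above: intensity normalization and fitting slices onto a 512 × 512 grid. The min-max scaling and the center pad/crop strategy are illustrative assumptions; the patent states only the target resolution and image size:

```python
import numpy as np

def normalize_intensity(img: np.ndarray) -> np.ndarray:
    """Scale image intensities to [0, 1] (a common choice; the
    patent does not specify the exact normalization scheme)."""
    lo, hi = float(img.min()), float(img.max())
    if hi <= lo:
        return np.zeros_like(img, dtype=np.float64)
    return (img - lo) / (hi - lo)

def pad_or_crop(img: np.ndarray, size: int = 512) -> np.ndarray:
    """Center-pad with zeros or center-crop a 2D slice to size x size."""
    out = np.zeros((size, size), dtype=img.dtype)
    h, w = img.shape
    ch, cw = min(h, size), min(w, size)          # region to copy
    top, left = (h - ch) // 2, (w - cw) // 2     # source offsets
    otop, oleft = (size - ch) // 2, (size - cw) // 2  # target offsets
    out[otop:otop + ch, oleft:oleft + cw] = img[top:top + ch, left:left + cw]
    return out

slice_ = np.random.rand(600, 480) * 2000 - 1000  # toy CT-like slice in HU
prepped = pad_or_crop(normalize_intensity(slice_), 512)
```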
In some embodiments, the parameters of the generator in the composite model are adjusted based on the structural similarity constraint, i.e., the structural similarity of contour shape, region area, etc. between the original MR and the composite MR (reconstructed MR) is constrained, so as to constrain the composite CT to have better structure-preserving property. Wherein the structural similarity constraint comprises a mutual information loss function and an active contour loss function.
Specifically, the contours of the original MR and the synthetic CT are extracted.
In order to extract the contours of the original MR and the synthesized CT more effectively, in the present disclosure, the pixel values of the original MR and the synthesized CT which are greater than a certain threshold are set to 1, and the pixel values of the original MR and the synthesized CT which are less than the threshold are set to 0, so as to obtain the body region segmentation results of the two different modality images, and more effectively retain the body contour results. The certain threshold value can be set according to the actual application scene.
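The thresholding step can be sketched as follows; the helper name and the sample threshold are illustrative, since the patent leaves the threshold scene-dependent:

```python
import numpy as np

def body_mask(img: np.ndarray, threshold: float) -> np.ndarray:
    """Binarize an image: pixels above `threshold` become 1, others 0,
    yielding a body-region segmentation as described in the text."""
    return (img > threshold).astype(np.uint8)

img = np.array([[0.1, 0.8],
                [0.9, 0.2]])
mask = body_mask(img, 0.5)
```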
Further, based on the mutual information contour loss calculation method, the contour similarity is calculated for the region contour extracted from the original MR image and the synthesized CT image by the following method:
MI(X; Y) = Σ_x Σ_y p(x, y) · log( p(x, y) / (p(x) · p(y)) )

wherein MI denotes mutual information;

p(x) and p(y) denote the probability distributions of the contours extracted from the MR image I_MR and the CT image G(I_MR), respectively;

p(x, y) denotes the joint probability distribution of p(x) and p(y);
According to this formula, when the loss is calculated for contours of different modalities, the concern is not the low-level, pixel-by-pixel difference in pixel values but the data distribution characteristics of the contours, i.e., the higher-level similarity of the overall contour. Existing methods measure the similarity of MR and CT contour regions with pixel-wise L1 or L2 losses, comparing pixel values one by one and thus imposing no global contour constraint; by contrast, mutual information can effectively measure the similarity of image contours from two different modalities.
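A histogram-based estimate of the mutual information formula above, for two equally sized contour images; the 32-bin discretization is an assumption, not specified by the patent:

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Estimate MI(X; Y) = sum p(x,y) * log(p(x,y) / (p(x) p(y)))
    from the joint histogram of two equally shaped images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over y
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over x
    nz = p_xy > 0                           # avoid log(0)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
mi_self = mutual_information(a, a)                      # identical images: high MI
mi_rand = mutual_information(a, rng.random((64, 64)))   # independent images: near zero
```

Identical images share their full distribution, so MI is high; independent images give an estimate near zero, which matches the intent of constraining overall contour similarity rather than pixel values.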
Further, calculating the similarity of the MR image contour and the CT image contour based on the active contour loss function comprises:
Loss_AC = Length + λ · Region

wherein, in continuous form,

Length = ∮_C |∇u| ds,  Region = | ∫_Ω u · (v − c1)² dx | + | ∫_Ω (1 − u) · (v − c2)² dx |

Converting Length and Region into single-pixel form:

Length = Σ_{i=1}^{m} Σ_{j=1}^{n} √( (u_{x,i,j})² + (u_{y,i,j})² + ε )

Region = | Σ_{i=1}^{m} Σ_{j=1}^{n} u_{i,j} · (v_{i,j} − c1)² | + | Σ_{i=1}^{m} Σ_{j=1}^{n} (1 − u_{i,j}) · (v_{i,j} − c2)² |

c1 and c2 are defined as constant m × n matrices, e.g. c1 = 1 (all ones) and c2 = 0 (all zeros);

wherein v represents the segmentation reference value;

u represents the predicted value, u ∈ [0, 1]^{m×n};

x and y in u_{x,i,j} and u_{y,i,j} represent the horizontal and vertical directions, respectively;

ε represents a preset small parameter, set to avoid the term under the square root evaluating to zero;

c1 and c2 represent the internal and external energy, respectively.
Further, c1 and c2 may be defined in advance as constants, e.g. c1 = 1 and c2 = 0; u is the predicted value and v is the given image representation; at initialization, ε is set to a very small positive number, e.g. 10⁻⁶. The active contour loss function defined in this way fully accounts for both the length of the segmentation boundary contour and the degree of region fit, improving the accuracy of the result. In some embodiments, the overall structural similarity constraint L is:

L = α · L_AC + β · L_MI

wherein L_AC is the active contour loss function;

L_MI is the mutual information loss function;

α and β are weights predefined according to the application scenario.
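The pixel-wise active contour loss above can be sketched as follows. The finite-difference gradient approximation and the default λ are illustrative choices under the stated definitions (c1 = 1, c2 = 0, small ε):

```python
import numpy as np

def active_contour_loss(u: np.ndarray, v: np.ndarray,
                        lam: float = 1.0, eps: float = 1e-6,
                        c1: float = 1.0, c2: float = 0.0) -> float:
    """Length + lambda * Region, in the pixel-wise form above.
    u: predicted soft mask in [0, 1]; v: segmentation reference."""
    # Forward differences approximating horizontal/vertical gradients.
    ux = u[1:, :-1] - u[:-1, :-1]
    uy = u[:-1, 1:] - u[:-1, :-1]
    length = np.sum(np.sqrt(ux**2 + uy**2 + eps))
    region = (abs(np.sum(u * (v - c1)**2)) +
              abs(np.sum((1.0 - u) * (v - c2)**2)))
    return float(length + lam * region)

v = np.zeros((8, 8)); v[2:6, 2:6] = 1.0        # reference: a filled square
loss_good = active_contour_loss(v.copy(), v)   # prediction equals reference
loss_bad = active_contour_loss(1.0 - v, v)     # inverted prediction
```

A prediction that matches the reference has zero Region energy and pays only for its boundary length, while an inverted prediction incurs a large Region penalty.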
Based on the similarity, the parameters of the generator in the synthetic model are adjusted, constraining the reconstructed MR to be as similar as possible to the original MR.
In some embodiments, as shown in fig. 4, to enhance the feature representation capability of the deep network and the effective features characterizing the image, the present disclosure selectively enhances, based on a channel attention mechanism, the multi-channel features extracted at the deepest layer of the encoder of the U-Net generator in the CycleGAN network architecture (channel dimension 256). In other words, the network learns end-to-end to strengthen informative feature channels and suppress redundant ones, realizing an attention mechanism (selective enhancement) over the channel dimension.
Specifically, referring to fig. 5, a reshape transformation, and a reshape transformation followed by a transposition, are applied to the MR image in the training sample to obtain two corresponding feature response maps A_i and A_j, where A_i has dimension C × N, A_j has dimension N × C, and N = H × W.
Further, the feature response maps A_i and A_j are multiplied, and the channel-dimension attention response map X (dimension C × C) is obtained through softmax, i.e. calculated by the following formula:

x_ji = exp(A_i · A_j) / Σ_{i=1}^{C} exp(A_i · A_j)

wherein x_ji indicates the influence of the i-th channel on the j-th channel;

A_i denotes the response map obtained by reshape transformation of the MR image;

A_j denotes the response map obtained by reshape transformation followed by transposition of the MR image.
Further, the attention response map X is transposed (dimension C × C) and multiplied with the reshape-transformed matrix (dimension C × N); the product is multiplied by a scale coefficient β, reshaped back to the original shape, and finally added to A_j to obtain the final output E. That is, the channel features of the generator are calculated by the following formula:

E_j = β · Σ_{i=1}^{C} (x_ji · A_i) + A_j

wherein β is a scale coefficient, initialized to 0, which gradually learns a larger and more appropriate weight as the model trains.
According to this formula, the final output feature of each channel is the weighted sum of the features of all channels plus the original feature map; this models the global semantic dependencies between channel feature maps and ultimately enhances the discriminative power of the features. That is, the multi-channel features of the generator are optimized based on the channel features.
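A NumPy sketch of the channel attention computation above on a C × H × W feature map. In training, β would be a learned scalar; here it is fixed for illustration, and the feature map is random stand-in data:

```python
import numpy as np

def channel_attention(feat: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """Channel attention per the formulas above:
    x_ji = softmax over channel affinities, E_j = beta * sum_i x_ji A_i + A_j."""
    C, H, W = feat.shape
    A = feat.reshape(C, H * W)                    # reshape: C x N, N = H*W
    energy = A @ A.T                              # C x C channel affinities A_i . A_j
    energy -= energy.max(axis=-1, keepdims=True)  # numerical stability for softmax
    X = np.exp(energy) / np.exp(energy).sum(axis=-1, keepdims=True)
    E = beta * (X @ A) + A                        # weighted sum over channels + residual
    return E.reshape(C, H, W)

feat = np.random.default_rng(1).random((4, 8, 8)).astype(np.float32)
out = channel_attention(feat)
```

With β initialized to 0, the module is an identity mapping, matching the text's statement that the attention contribution is learned gradually during training.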
According to the embodiment of the disclosure, the following technical effects are achieved:
as shown in fig. 6, fig. 6 is a schematic diagram of a plurality of sets of CTs constructed by the method of the present disclosure. Therefore, the method can synthesize high-precision CT, effectively improve the structure retention characteristic, the detail recovery degree and the like of the synthesized CT result, and effectively inhibit background interference.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
Fig. 7 shows a schematic structural diagram of a terminal device or a server suitable for implementing the embodiments of the present application.
As shown in fig. 7, the terminal device or server 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, the above method flow steps may be implemented as a computer software program according to embodiments of the present application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program executes the above-described functions defined in the system of the present application when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor. Wherein the designation of a unit or module does not in some way constitute a limitation of the unit or module itself.
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may be separate and not incorporated into the electronic device. The computer readable storage medium stores one or more programs that, when executed by one or more processors, perform the methods described herein.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the present application is not limited to embodiments formed by the particular combination of the above-mentioned features, but also encompasses other embodiments formed by any combination of the above-mentioned features or their equivalents without departing from the spirit of the application. For example, the above features may be replaced with (but not limited to) features having similar functions as those described in this application.

Claims (10)

1. A cycleGAN-based MR synthetic CT method, characterized by comprising the following steps:
acquiring an MR image;
inputting the MR image into a synthetic model trained based on a cycleGAN network, and outputting a CT image corresponding to the MR image;
wherein parameters of a generator in the synthetic model are adjusted based on a structural similarity constraint.
2. The method of claim 1, wherein the synthetic model is trained by:
constructing a training sample set, wherein the training sample set comprises a preset number of MR images, and CT images and conventional CT images corresponding to the MR images;
taking the MR image in the training sample set as input, taking a CT image corresponding to the MR image as output, and training a generator based on a cycleGAN network;
taking the conventional CT image in the training sample set as input and the CT image corresponding to the MR image as output, and training the discriminator;
and adjusting the parameters of the generator according to the degree of difference between the loss functions of the generator and the discriminator, until the difference between the two loss functions is smaller than a preset threshold value, and taking the generator and the discriminator at that point as the final synthetic model.
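As a rough illustration only (not part of the claims), the stopping rule of claim 2 — alternating updates until the difference between the generator and discriminator loss values falls below a preset threshold — can be sketched with toy scalar losses. The function names, the toy update rule, and the threshold value are all hypothetical:

```python
# Minimal sketch of claim 2's stopping rule with toy scalar "losses";
# train_until_converged, toy_step, and the threshold are illustrative only.

def train_until_converged(gen_loss, disc_loss, step, threshold=0.01, max_iters=1000):
    """Alternate updates until |L_G - L_D| < threshold, then stop."""
    for _ in range(max_iters):
        if abs(gen_loss - disc_loss) < threshold:
            break
        gen_loss, disc_loss = step(gen_loss, disc_loss)
    return gen_loss, disc_loss

def toy_step(g, d):
    # Toy "training step": each update halves the gap between the two losses.
    mid = (g + d) / 2
    return (g + mid) / 2, (d + mid) / 2

g, d = train_until_converged(1.0, 0.0, toy_step)
assert abs(g - d) < 0.01  # converged under the preset threshold
```

In a real cycleGAN implementation the two values would come from the adversarial and cycle-consistency loss terms of the networks; only the convergence test of the claim is illustrated here.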
3. The method of claim 2, wherein adjusting the parameters of the generator in the synthetic model based on the structural similarity constraint comprises:
extracting contours of the MR image and the output CT image;
calculating the similarity of the MR image contour and the CT image contour based on the structural similarity constraint; wherein the structural similarity constraint comprises a mutual information loss function and an active contour loss function;
and adjusting, based on the similarity, the parameters of the generator in the synthetic model.
4. The method of claim 3, wherein calculating the similarity of the MR image contour and the CT image contour based on the structural similarity constraint comprises:
calculating the similarity of the MR image contour and the CT image contour based on the mutual information loss function, which comprises:
calculating the similarity of the MR image contour and the CT image contour by the following formula:
MI = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}
wherein MI is mutual information;
p(x) and p(y) respectively represent the probability distributions of the contours extracted from the MR image I_{MR} and the CT image G(I_{MR});
p (x, y) represents the joint probability distribution of p (x) and p (y);
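For illustration, the mutual-information similarity above can be estimated from a joint histogram of the two contour images. The histogram-based estimator, the bin count, and the function name below are assumptions and not part of the claim:

```python
import numpy as np

# Plug-in estimate of MI = sum_{x,y} p(x,y) log(p(x,y) / (p(x) p(y)))
# over a joint histogram of two images; bins=32 is an illustrative choice.
def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, bins)
    nz = pxy > 0                          # restrict to non-zero cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # identical "contours"
mi_rand = mutual_information(img, rng.random((64, 64)))   # unrelated "contours"
assert mi_self > mi_rand >= 0.0  # MI is highest for matching structures
```

The estimator is a KL divergence between the empirical joint and the product of its own marginals, so it is always non-negative and peaks when the two contours carry the same structure.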
calculating the similarity of the MR image contour and the CT image contour based on the active contour loss function comprises:
defining the active contour loss function to be:
LossAC = Length + λ·Region
wherein,
Length = \int_\Omega \left| \nabla u \right| dx
Region = \int_\Omega \left( (c_1 - v)^2 - (c_2 - v)^2 \right) u \, dx
converting the Length, Region into the form of a single pixel:
Length = \sum_{i,j} \sqrt{\left| (u_{x_{i,j}})^2 + (u_{y_{i,j}})^2 \right| + \epsilon}
Region = \sum_{i,j} \left( (c_1 - v_{i,j})^2 - (c_2 - v_{i,j})^2 \right) u_{i,j}
c1 and c2 are defined as follows:
c_1 = \frac{\int_\Omega v\,u\,dx}{\int_\Omega u\,dx}, \qquad c_2 = \frac{\int_\Omega v\,(1-u)\,dx}{\int_\Omega (1-u)\,dx}
wherein v represents a division reference value;
u represents a predicted value;
the subscripts x and y in u_{x_{i,j}} and u_{y_{i,j}} represent the horizontal direction and the vertical direction, respectively;
ε represents a preset parameter;
c_1 and c_2 represent the internal and external energies, respectively.
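A minimal per-pixel sketch of the active contour loss LossAC = Length + λ·Region defined above, using finite differences to approximate the gradient of a predicted map u against a reference v. The constants c1, c2, λ, and ε below are illustrative defaults, not values from the patent:

```python
import numpy as np

# Per-pixel active-contour loss: LossAC = Length + lam * Region.
# c1/c2/lam/eps are illustrative defaults only.
def active_contour_loss(u, v, c1=1.0, c2=0.0, lam=1.0, eps=1e-8):
    ux = np.diff(u, axis=0)[:, :-1]       # horizontal finite difference of u
    uy = np.diff(u, axis=1)[:-1, :]       # vertical finite difference of u
    length = np.sqrt(np.abs(ux**2 + uy**2) + eps).sum()
    region = (((c1 - v)**2 - (c2 - v)**2) * u).sum()
    return length + lam * region

v = np.zeros((16, 16))
v[4:12, 4:12] = 1.0                        # reference contour: a filled square
loss_match = active_contour_loss(v, v)     # prediction agrees with the reference
loss_flip = active_contour_loss(1.0 - v, v)  # inverted prediction
assert loss_match < loss_flip              # agreement yields the lower loss
```

The Region term rewards placing the predicted foreground u where the reference v is close to c1 and penalizes placing it where v is close to c2, while the Length term discourages ragged boundaries.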
5. The method of claim 4, further comprising:
and optimizing the multichannel characteristics of the generator trained on the basis of the cycleGAN network through a channel attention mechanism.
6. The method of claim 5, wherein optimizing the multi-channel features of the generator trained based on the cycleGAN network by a channel attention mechanism comprises:
carrying out a reshape transformation, and a reshape transformation followed by transposition, on the MR image in the training sample, so as to obtain two corresponding feature response maps;
obtaining an attention response map of the channel dimension through softmax, based on the two corresponding feature response maps;
and optimizing the multichannel features of the generator based on the attention response map and the feature response map obtained by the reshape transformation of the MR image.
7. The method of claim 6, wherein deriving an attention response map for a channel dimension by softmax based on the two corresponding feature response maps comprises:
the attention response graph of the channel dimension is calculated by the following formula:
x_{ji} = \frac{\exp(A_i \cdot A_j)}{\sum_{i=1}^{C} \exp(A_i \cdot A_j)}
wherein x is ji Showing the effect of the ith channel on the jth channel;
A i representing a response graph obtained by carrying out reshape transformation on the MR image;
A j the response diagram obtained by reshape transformation and transposition of the MR image is shown.
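The channel-attention computation of claim 7 can be sketched as a softmax over pairwise channel similarities of reshaped feature maps. The (C, H, W) layout and the function name are assumptions for illustration:

```python
import numpy as np

# Channel attention of claim 7: X[j, i] = exp(A_i . A_j) / sum_i exp(A_i . A_j).
# The (C, H, W) input layout is an illustrative assumption.
def channel_attention(feat):
    C = feat.shape[0]
    A = feat.reshape(C, -1)                        # reshape: (C, N)
    energy = A @ A.T                               # A_i . A_j for all channel pairs
    energy -= energy.max(axis=1, keepdims=True)    # subtract row max for stability
    ex = np.exp(energy)
    return ex / ex.sum(axis=1, keepdims=True)      # softmax over the channel axis

x = np.random.default_rng(1).random((4, 8, 8))
att = channel_attention(x)
assert att.shape == (4, 4)
assert np.allclose(att.sum(axis=1), 1.0)  # each row is a probability distribution
```

Each row of the resulting (C, C) map weights how strongly every other channel contributes to a given channel, which is what the optimization step of claim 6 consumes.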
8. The method of claim 7, wherein optimizing the multichannel features of the generator based on the attention response map and the feature response map obtained by the reshape transformation of the MR image comprises:
the channel characteristics of the generator are calculated by the following formula:
E_j = \beta \sum_{i=1}^{C} (x_{ji} A_i) + A_j
wherein beta is a scale coefficient;
optimizing a multi-channel feature of the generator based on the channel feature.
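The channel-feature update of claim 8, E_j = β Σ_i (x_ji · A_i) + A_j, can likewise be sketched as a residual reweighting of the channels. In practice β is a learned scalar; the uniform attention map and the β value below are purely illustrative:

```python
import numpy as np

# Claim 8's update E_j = beta * sum_i(x_ji * A_i) + A_j as a residual
# channel reweighting; layouts and the beta value are illustrative.
def apply_channel_attention(feat, att, beta=0.5):
    C, H, W = feat.shape
    A = feat.reshape(C, -1)        # (C, N) feature response map
    out = beta * (att @ A) + A     # attention-weighted sum over channels + residual
    return out.reshape(C, H, W)

rng = np.random.default_rng(2)
f = rng.random((4, 8, 8))
att = np.full((4, 4), 0.25)        # uniform row-stochastic attention, for illustration
e = apply_channel_attention(f, att, beta=0.0)
assert np.allclose(e, f)           # beta = 0 reduces the update to the identity
```

The residual term A_j keeps the original features intact when β is small, so the attention branch can be blended in gradually during training.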
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the computer program, implements the method of any of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202210537273.4A 2022-05-18 2022-05-18 MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN Pending CN114820861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210537273.4A CN114820861A (en) 2022-05-18 2022-05-18 MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210537273.4A CN114820861A (en) 2022-05-18 2022-05-18 MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN

Publications (1)

Publication Number Publication Date
CN114820861A true CN114820861A (en) 2022-07-29

Family

ID=82515555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210537273.4A Pending CN114820861A (en) 2022-05-18 2022-05-18 MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN

Country Status (1)

Country Link
CN (1) CN114820861A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393402A (en) * 2022-08-24 2022-11-25 北京医智影科技有限公司 Training method of image registration network model, image registration method and equipment
WO2024113170A1 (en) * 2022-11-29 2024-06-06 中国科学院深圳先进技术研究院 Cycle generative adversarial network-based medical image cross-modal synthesis method and apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1760659A1 (en) * 2005-08-30 2007-03-07 Agfa-Gevaert Method of segmenting anatomic entities in digital medical images
US20190318474A1 (en) * 2018-04-13 2019-10-17 Elekta, Inc. Image synthesis using adversarial networks such as for radiation therapy
CN114049334A (en) * 2021-11-17 2022-02-15 重庆邮电大学 Super-resolution MR imaging method taking CT image as input


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN, XU et al.: "Learning active contour models for medical image segmentation", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 31 December 2019, pages 11632-11639 *
FU, JUN et al.: "Dual attention network for scene segmentation", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 31 December 2019, pages 3141-3145 *
GE, YUNHAO et al.: "Unpaired MR to CT synthesis with explicit structural constrained adversarial learning", 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 31 December 2019, pages 0001-0004 *


Similar Documents

Publication Publication Date Title
US11633146B2 (en) Automated co-registration of prostate MRI data
US10867375B2 (en) Forecasting images for image processing
CN114820861A (en) MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN
CN110992243B (en) Intervertebral disc cross-section image construction method, device, computer equipment and storage medium
CN109215014B (en) Training method, device and equipment of CT image prediction model and storage medium
GB2577656A (en) Method and apparatus for generating a derived image using images of different types
WO2021102644A1 (en) Image enhancement method and apparatus, and terminal device
WO2020168648A1 (en) Image segmentation method and device, and computer-readable storage medium
US12087433B2 (en) System and methods for reconstructing medical images using deep neural networks and recursive decimation of measurement data
CN114897756A (en) Model training method, medical image fusion method, device, equipment and medium
CN110827335A (en) Mammary gland image registration method and device
CN111325695A (en) Low-dose image enhancement method and system based on multi-dose grade and storage medium
CN113569855A (en) Tongue picture segmentation method, equipment and storage medium
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN114708345A (en) CT image reconstruction method, device, equipment and storage medium
CN114972118B (en) Noise reduction method and device for inspection image, readable medium and electronic equipment
Yang et al. X‐Ray Breast Images Denoising Method Based on the Convolutional Autoencoder
CN114742916A (en) Image reconstruction method and device, storage medium and electronic equipment
CN116563539A (en) Tumor image segmentation method, device, equipment and computer readable storage medium
CN112037886B (en) Radiotherapy plan making device, method and storage medium
CN111598904B (en) Image segmentation method, device, equipment and storage medium
Tang et al. Learning from dispersed manual annotations with an optimized data weighting policy
CN114586065A (en) Method and system for segmenting images
CN117556077B (en) Training method of text image model, related method and related product
Usta et al. Comparison of myocardial scar geometries from 2D and 3D LGE-MRI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination