CN110047138A - A magnetic resonance thin-slice image reconstruction method - Google Patents

A magnetic resonance thin-slice image reconstruction method

Info

Publication number
CN110047138A
CN110047138A
Authority
CN
China
Prior art keywords
magnetic resonance
thin layer
layer image
thick
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910336275.5A
Other languages
Chinese (zh)
Inventor
余锦华 (Yu Jinhua)
谷家琪 (Gu Jiaqi)
汪源源 (Wang Yuanyuan)
邓寅晖 (Deng Yinhui)
童宇宸 (Tong Yuchen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201910336275.5A priority Critical patent/CN110047138A/en
Publication of CN110047138A publication Critical patent/CN110047138A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Biophysics (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A magnetic resonance thin-slice image reconstruction method: axial and sagittal thick-slice MR images are fused with a generative adversarial network to produce a preliminary thin-slice MR volume, and a convolutional neural network then performs detail correction on the preliminary result to reconstruct the final thin-slice MR image data. The invention yields more realistic thin-slice MR images, achieves substantial gains in peak signal-to-noise ratio, structural similarity and normalized mutual information, and can effectively enlarge the pool of pediatric thin-slice brain MR data, laying a foundation for later research.

Description

A magnetic resonance thin-slice image reconstruction method
Technical field
The present invention relates to a method for reconstructing thin-slice magnetic resonance images.
Background technique
Magnetic resonance imaging data can be roughly divided, according to the spacing between adjacent scan slices, into thin-slice and thick-slice MR images. Because of their higher spatial resolution, thin-slice MR images are nearly ideal medical images for studying brain structure and for intraoperative navigation. However, owing to the low efficiency of thin-slice scanning, machine wear and similar issues, thick-slice MR images dominate clinical practice, and thin-slice data are comparatively scarce. Pediatric thin-slice brain MR images are scarcer still, yet they are important for the study of human brain development.
Compared with adult brain image data, pediatric MR brain images carry greater clinical research value: analysis of children's brain images typically provides a basis for research on human brain development. However, children without obvious illness rarely undergo brain MRI, so pediatric brain MR data are harder to obtain than adult data, let alone high-quality thin-slice images.
Commonly used reconstruction algorithms (e.g., bilinear interpolation, sparse representation, 3D-SRU-Net) interpolate directly over the spatially missing regions of the data. Thin-slice MR images reconstructed by such direct interpolation, especially pediatric ones, perform poorly on imaging metrics such as peak signal-to-noise ratio, structural similarity and normalized mutual information, and fall short of what is needed to support clinical diagnosis.
Summary of the invention
The present invention provides a magnetic resonance thin-slice image reconstruction method that yields more realistic thin-slice MR images, achieves substantial gains in peak signal-to-noise ratio, structural similarity and normalized mutual information, can effectively enlarge the pool of pediatric thin-slice brain MR data, and lays a foundation for later research.
To achieve the above aim, the present invention provides a magnetic resonance thin-slice image reconstruction method comprising the steps of:
fusing the axial and sagittal thick-slice MR images with a generative adversarial network to produce preliminary thin-slice MR image data;
applying a convolutional neural network to perform detail correction on the preliminary thin-slice data, reconstructing the final thin-slice MR image data.
The generative adversarial network comprises a generator and a conditional discriminator;
The convolutional neural network comprises a three-dimensional densely connected U-shaped structure and an enhanced residual block.
The step of producing preliminary thin-slice MR image data with the generative adversarial network comprises:
training the generator against the conditional discriminator;
feeding the axial and sagittal thick-slice MR images into the trained generator to generate thin-slice MR image data.
The step of reconstructing thin-slice MR image data with the convolutional neural network comprises:
feeding the thin-slice data output by the generator into the three-dimensional densely connected U-shaped structure, which concatenates the feature maps of each layer and outputs the result to the enhanced residual block;
the enhanced residual block applies numerical attenuation to the concatenated feature maps output by the U-shaped structure, yielding the reconstructed thin-slice MR image data.
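The two-stage flow above can be sketched end to end. In the sketch below the network stubs are placeholders only (a real implementation would load the trained generator and the trained refinement network); the block shapes and the re-insertion of true axial slices follow the embodiment described later in this document, with upsampling rate r = 8 assumed.

```python
import numpy as np

# Stage stubs: placeholders that only mimic the input/output shapes described
# in the text (axial block 32 x 32 x 15, sagittal/output block 32 x 32 x 120).
def generator(i_a, i_s):
    r = i_s.shape[2] // i_a.shape[2]
    return np.repeat(i_a, r, axis=2)      # placeholder: repeat each thick slice

def refiner(i_y, i_s, i_ya):
    return i_y                            # placeholder: identity refinement

def reconstruct_thin_slices(i_a, i_s):
    """i_a: axial thick-slice block (L, W, H); i_s: sagittal block (L, W, rH)."""
    i_y = generator(i_a, i_s)             # stage 1: preliminary thin slices
    r = i_s.shape[2] // i_a.shape[2]
    i_ya = i_y.copy()
    i_ya[:, :, ::r] = i_a                 # re-insert the true axial slices
    return refiner(i_y, i_s, i_ya)        # stage 2: detail correction

i_a = np.random.rand(32, 32, 15)
i_s = np.random.rand(32, 32, 120)
thin = reconstruct_thin_slices(i_a, i_s)
```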
The generator comprises cascaded feature-extraction, feature-fusion and reconstruction branches.
Denote the axial thick-slice image I_A, of size L × W × H, and the sagittal thick-slice image I_S, of size L × W × rH, where r is the upsampling rate along the z-axis. The generator takes I_A and I_S as input and reconstructs the thin-slice image I_Y at upsampling rate r.
The feature-extraction branch takes I_A and I_S as input, extracts features from them with 3D convolutional layers, and produces feature maps of different sizes with max-pooling layers; its outputs are the axial and sagittal feature maps.
The feature-fusion branch upsamples the feature maps output by the feature-extraction branch with sub-pixel convolution and applies dropout; it outputs the fused feature maps.
The reconstruction branch upsamples the fused feature maps, performs channel concatenation and convolution, and outputs the thin-slice image I_Y.
The step of training the generator with the conditional discriminator comprises:
the generator outputs a thin-slice image I_Y;
the conditional discriminator takes the true thin-slice image I_GT as the real mapping and the generator output I_Y as the fake mapping, applies convolution and dropout with the Leaky ReLU activation, and outputs a score tensor I_R for computing the loss;
the true image I_GT, the generated image I_Y and the score tensor I_R output by the conditional discriminator form the composite loss function L_G; the generator keeps adjusting its parameters so that L_G decreases.
The composite loss function L_G is:
L_G = L_Char + λ1·L_GC + λ2·L_adv + λ3·L_reg
where L_Char is the adaptive Charbonnier loss, L_GC is the 3D gradient-correction loss, L_adv is the generator's adversarial loss, L_reg is the l2 weight-regularization loss, and λ1, λ2 and λ3 are the weights of the respective terms;
the adaptive Charbonnier loss is L_Char = E[w·sqrt((I_Y − I_GT)² + ε²)], where ε is a small constant and the error-derived weighting coefficient w lies in [0.5, 1];
the gradient-correction loss is L_GC = E[|∇I_Y − ∇I_GT|], where E denotes expectation, I denotes data, subscript GT indicates the true image, subscript Y the reconstructed data, and ∇ is the gradient operator;
the conditional discriminator's loss is L_D = E[(D(I_GT, I_A) − 1)²] + E[D(I_Y, I_A)²], where D is the conditional discriminator and E the mathematical expectation, here the mean over the elements of the output tensor.
The present invention yields more realistic thin-slice MR images, achieves substantial gains in peak signal-to-noise ratio, structural similarity and normalized mutual information, can effectively enlarge the pool of pediatric thin-slice brain MR data, and lays a foundation for later research.
Detailed description of the invention
Fig. 1 is a flowchart of the magnetic resonance thin-slice image reconstruction method provided by the invention.
Fig. 2 is a detailed flowchart of the method.
Fig. 3 is a schematic diagram of the generator.
Fig. 4 is a schematic diagram of the conditional discriminator.
Fig. 5 is a schematic diagram of the convolutional neural network.
Fig. 6 is a visual comparison of the results of different thin-slice reconstruction methods.
Specific embodiment
Preferred embodiments of the present invention are described below with reference to Figs. 1 to 6.
As shown in Fig. 1, the present invention provides a magnetic resonance thin-slice image reconstruction method comprising the steps of:
Step S1: fuse the axial and sagittal thick-slice MR images with a generative adversarial network to produce preliminary corresponding thin-slice MR image data;
Step S2: apply a convolutional neural network to perform detail correction on the preliminary thin-slice data, reconstructing the final thin-slice MR image data.
The generative adversarial network (3D-Y-Net-GAN) comprises a generator and a conditional discriminator. The convolutional neural network comprises a three-dimensional densely connected U-shaped structure (3D-DenseU-Net) and an enhanced residual block.
As shown in Fig. 2, the method provided by the invention comprises the steps of:
Step S1.1: train the generator against the conditional discriminator;
Step S1.2: feed the axial and sagittal thick-slice MR images into the trained generator to generate thin-slice MR image data;
Step S2.2: feed the thin-slice data output by the generator into the three-dimensional densely connected U-shaped structure (3D-DenseU-Net), which concatenates the feature maps of each layer and outputs the result to the enhanced residual block;
Step S2.3: the enhanced residual block applies numerical attenuation to the concatenated feature maps output by 3D-DenseU-Net, yielding the reconstructed thin-slice MR image data.
As shown in Fig. 3, the generator follows the 3D-Y-Net architecture; in steps S1.1 and S1.2 it comprises cascaded feature extraction (FE), feature fusion (FF) and reconstruction branches.
Denote the axial thick-slice image I_A, of size L × W × H, and the sagittal thick-slice image I_S, of size L × W × rH, where r is the upsampling rate along the z-axis; the generator takes I_A and I_S as input and reconstructs the thin-slice image I_Y at upsampling rate r.
The feature-extraction branch takes I_A and I_S as input, extracts features from them with 3D convolutional layers, and produces feature maps of different sizes with max-pooling layers; its outputs are the axial and sagittal feature maps at multiple scales.
Topologically, the feature-fusion branch is the inverse of the feature-extraction branch. It upsamples the feature maps output by the feature-extraction branch with sub-pixel convolution and applies dropout, outputting the fused feature maps. Specifically, sub-pixel convolution is the cascade of a convolution and a pixel-reordering operation; compared with conventional transposed convolution it greatly reduces computation, enlarging the feature map's spatial size purely by pixel reordering, so it can efficiently replace transposed convolution. The interconnection of the feature-extraction and fusion branches is inspired by U-Net; this structure fully fuses multi-scale features, keeps the image structure consistent, and to some extent alleviates gradient vanishing. The reconstruction branch upsamples the fused feature maps, performs channel concatenation and convolution, and outputs the thin-slice image I_Y.
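As a concrete illustration of the pixel-reordering half of sub-pixel convolution, the NumPy sketch below rearranges channel groups into the slice (z) axis. The convolution that precedes the reordering in a full sub-pixel layer is omitted, and the depth-only (single-axis) variant is an assumption based on the text's z-axis upsampling.

```python
import numpy as np

def subpixel_upsample_z(x, r):
    """Rearrange r channel groups into the depth axis (3D sub-pixel shuffle).

    x: feature map of shape (C*r, L, W, H); returns (C, L, W, H*r).
    This is the channel-to-space reordering that, paired with an ordinary
    convolution, replaces transposed convolution in the text.
    """
    cr, L, W, H = x.shape
    assert cr % r == 0
    c = cr // r
    x = x.reshape(c, r, L, W, H)      # split channels into (C, r) groups
    x = x.transpose(0, 2, 3, 4, 1)    # move the r factor next to depth
    return x.reshape(c, L, W, H * r)  # interleave r along the depth axis

x = np.arange(12, dtype=float).reshape(4, 1, 1, 3)
y = subpixel_upsample_z(x, 2)
```

Only the feature map's spatial size changes; no arithmetic is performed, which is why this reordering is so much cheaper than a transposed convolution.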
In this embodiment, as shown in Fig. 3, panel (a) shows the generator's network structure: (32, 32, 15, 64) denotes a feature map of spatial size 32 × 32 × 15 with 64 channels; K3s[1,2,1] denotes a 3D convolution with a 3 × 3 × 3 kernel and stride [1,2,1]; Dropout 0.3 denotes a dropout operation with drop rate 0.3. Dropout is a method for optimizing deep artificial neural networks: by randomly zeroing a fraction (the dropout rate) of hidden-layer weights or outputs during learning, it reduces co-dependence between nodes, thereby regularizing the network and lowering its structural risk. The upsampling rate r is 8 in this example, and block-based training is used to reduce computational cost. Specifically, the axial thick-slice image I_A is cut into blocks of size 32 × 32 × 15, and the sagittal thick-slice image I_S and the output thin-slice image I_Y into blocks of size 32 × 32 × 120, all spatially aligned with one another. The axial feature-extraction branch extracts features from the input with 3D convolutional layers and produces feature maps of different sizes with max-pooling layers of stride [1,2,1] or [2,1,1]; these multi-scale feature maps help the network learn image features at different spatial scales. Notably, max pooling can to some extent ignore small structural mismatches, reducing the negative effect of spatial-registration error on training. Each 3D convolutional layer is structured as 3D convolution + batch normalization (BN) + SWISH.
SWISH is a recently proposed activation function that avoids the dead-neuron problem of the ReLU activation; its hyperparameter is set to 1. The axial feature-extraction branch outputs feature maps at several spatial scales, and the sagittal branch has an essentially identical network structure. However, since the axial thick-slice data I_A and sagittal thick-slice data I_S differ in spatial size, three convolutional layers of stride [1,1,2] are added at the entrance of the sagittal branch to unify the main structural sizes of the two branches. Panel (b) shows the network structure of the reconstruction branch, designed specifically for an upsampling rate of 8. It does not use three consecutive ×2 upsampling operations, because that upsampling scheme stretches the image to some extent, distorting inter-slice details and introducing artifacts; instead, multiple densely connected upsampling paths are used to alleviate such artifacts. Specifically, the outputs of Path 1-2-4 and Path 1-4 are channel-concatenated as the input of Path 4-8, and the outputs of Path 2-4-8 and Path 2-8 are channel-concatenated as the input of the tail convolutional layer, where Path denotes a feature-map upsampling path; for example, Path 1-4 upsamples a feature map from size L × W × H to L × W × 4H, and the crossing of two arrows denotes channel concatenation before convolution. I_Y denotes the output of the reconstruction branch, which is also the output of the whole generator.
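The SWISH activation used in those convolutional layers can be sketched minimally in NumPy, with the hyperparameter β fixed to 1 as the text states:

```python
import numpy as np

def swish(x, beta=1.0):
    """SWISH activation x * sigmoid(beta * x); beta = 1 as in the text.

    Unlike ReLU it is smooth and non-zero for x < 0, which avoids the
    'dead neuron' problem mentioned above.
    """
    return x / (1.0 + np.exp(-beta * x))
```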
The original generative adversarial network is an unsupervised training model, but here it is used to solve a supervised regression problem. A vanilla discriminator that simply scores real samples high and fake samples low does not entirely fit our problem, because our generator is conditioned on input thick-slice MR data rather than sampling a prior vector from Gaussian noise. For this reason we introduce a conditional discriminator: during classification it sees both the sample to be classified and the generator's input, so it can classify the thick-slice-to-thin-slice mapping itself as "real" or "fake".
As shown in Fig. 2, in step S1.1 the method of training the generator with the conditional discriminator comprises the steps of:
Step S1.1.1: the generator outputs a thin-slice image I_Y;
Step S1.1.2: the conditional discriminator takes the true thin-slice image I_GT as the real mapping and the generator output I_Y as the fake mapping, applies convolution and dropout with the Leaky ReLU activation, and outputs a score tensor I_R for computing the loss;
In this embodiment, as shown in Fig. 4, in the conditional discriminator's network the Leaky ReLU activation has a negative-axis slope of 0.2, I_GT denotes the real-sample input, I_Y the fake-sample input, k the convolution kernel size, f the number of kernels, and all dropout rates are 0.3;
Step S1.1.3: the true thin-slice image I_GT, the generated image I_Y and the score tensor I_R output by the conditional discriminator form the composite loss function L_G, which measures the difference between generated and true images; the generator keeps adjusting its parameters so that L_G decreases:
L_G = L_Char + λ1·L_GC + λ2·L_adv + λ3·L_reg
where L_Char is the adaptive Charbonnier loss, L_GC is the 3D gradient-correction loss, L_adv is the generator's adversarial loss, L_reg is the l2 weight-regularization loss, and λ1, λ2 and λ3 weight the respective terms.
1. Adaptive Charbonnier loss
In supervised regression tasks, the l1 and l2 norms are widely used as pixel-level error constraints, and such constraints give some guarantee that the basic image structure is preserved. However, the l2 norm often leads to over-smoothed reconstructions, while the l1 norm punishes all errors indiscriminately. A variant of the l1 norm, the Charbonnier loss, has shown robustness and effectiveness beyond both. Another option is a least-squares error weighted by bilinear interpolation, which focuses optimization on hard regions, i.e., pixels where the bilinearly interpolated low-resolution image still differs strongly from the true high-resolution image. But assessing hard regions with a rough bilinear result is not always sound, especially at large upsampling rates. We therefore convert the pixel-level error between the generator's reconstruction and the true image into a weighting coefficient for the Charbonnier loss:
L_Char = E[w · sqrt((I_Y − I_GT)² + ε²)]
where ε is a small constant, set to 10⁻⁶ in this example, and the error-derived weighting coefficient w lies in [0.5, 1]. Pixels with smaller error are considered easier to reconstruct and produce smaller gradients, helping the generator focus optimization on the harder pixels.
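A minimal NumPy sketch of this loss follows. The patent text states only that the error-derived weight lies in [0.5, 1]; the linear mapping used here (larger error → weight nearer 1) is an assumption for illustration.

```python
import numpy as np

def adaptive_charbonnier(pred, target, eps=1e-6):
    """Per-pixel Charbonnier penalty sqrt(err^2 + eps^2), adaptively weighted.

    The weight mapping below (linear in normalized |err|, clipped to [0.5, 1])
    is an assumed concrete choice; the text gives only the range.
    """
    err = pred - target
    charb = np.sqrt(err ** 2 + eps ** 2)
    mag = np.abs(err)
    denom = mag.max() if mag.max() > 0 else 1.0
    w = 0.5 + 0.5 * (mag / denom)          # per-pixel weight in [0.5, 1]
    return float(np.mean(w * charb))

a = np.zeros((4, 4, 4))
b = np.ones((4, 4, 4))
loss = adaptive_charbonnier(a, b)
```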
2. 3D gradient-correction loss
The Charbonnier loss adaptively constrains pixel-level error but is weak at constraining higher-frequency error. We therefore use a 3D gradient-correction loss that explicitly constrains the difference between the reconstruction and the ground truth along the x, y and z axes; this second-order constraint helps our reconstruction model recover sharper edge information:
L_GC = E[|∇I_Y − ∇I_GT|]
where E denotes expectation, I denotes data, subscript GT indicates the true image, subscript Y the reconstructed data, and ∇ is the gradient operator.
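A sketch of this term with finite differences in NumPy; the mean-absolute form of the penalty is an assumption, since the text describes only which quantities are constrained.

```python
import numpy as np

def gradient_loss_3d(pred, target):
    """Mean absolute difference of finite-difference gradients along x, y, z.

    Penalizes disagreement of edge information between the reconstruction
    and the ground truth along all three axes.
    """
    loss = 0.0
    for axis in range(3):
        gp = np.diff(pred, axis=axis)      # finite-difference gradient
        gt = np.diff(target, axis=axis)
        loss += np.mean(np.abs(gp - gt))
    return float(loss)

x = np.linspace(0, 1, 64).reshape(4, 4, 4)
```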
3. Adversarial loss
To make the generated images more lifelike, we design a discriminator to supervise the generator's learning. Considering the robustness and implementation efficiency of adversarial networks, the conditional discriminator's loss is:
L_D = E[(D(I_GT, I_A) − 1)²] + E[D(I_Y, I_A)²]
where D is the conditional discriminator and E the mathematical expectation, here the mean over the elements of the output tensor. As the formula shows, the conditional discriminator treats the pair of true sample and thick-slice image as the real mapping, scoring it as close to 1 as possible, and the pair of generated sample and thick-slice image as the fake mapping, scoring it as close to 0 as possible.
The generator strives to produce fake samples that fool the conditional discriminator and raise its score. Its adversarial loss is therefore:
L_adv = E[(D(I_Y, I_A) − 1)²]
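Given only the target scores stated in the text (1 for the real mapping, 0 for the fake one), a least-squares form of the two adversarial objectives is assumed in the NumPy sketch below; the discriminator scores are passed in as plain arrays.

```python
import numpy as np

def d_loss(score_real, score_fake):
    """Conditional discriminator loss: real scores pulled to 1, fake to 0.

    The least-squares penalty is an assumed form; the text states only
    the target scores.
    """
    return float(np.mean((score_real - 1.0) ** 2) + np.mean(score_fake ** 2))

def g_adv_loss(score_fake):
    """Generator adversarial term: push the discriminator's fake score to 1."""
    return float(np.mean((score_fake - 1.0) ** 2))

real = np.full((2, 4, 4), 0.9)
fake = np.full((2, 4, 4), 0.1)
```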
Notably, the balance between generator and conditional discriminator is crucial for training a generative adversarial network. Too strong a discriminator converges quickly and can no longer provide useful gradients to the generator; too good a generator leaves the conditional discriminator unable to distinguish real from fake samples, so that its gradients no longer help the generator improve. This also means the adversarial loss must be balanced against the adaptive Charbonnier loss. Since the gradients of all loss terms should be of similar magnitude at convergence, we set the adversarial weight λ2 to a relatively small value, so that when the Charbonnier and gradient-correction losses approach convergence, their gradients balance that of the adversarial loss.
4. l2 weight-regularization loss
In theory, the smaller the norm of the model parameters, the smaller the model's capacity; the larger the norm, the more easily the model relies on extreme parameters to overfit the training data. Constraining the norm of the network parameters therefore mitigates overfitting to some degree. We use an l2 regularization loss:
L_reg = Σ_θ ||θ||²
where θ ranges over all parameters; the smaller the sum of their norms, the more overfitting is alleviated.
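The regularization term can be sketched directly, summing squared norms over a list of parameter arrays; the scale factor `lam` is a hypothetical knob standing in for λ3 above.

```python
import numpy as np

def l2_regularization(params, lam=1e-4):
    """Sum of squared parameter norms, scaled by lam.

    Keeping the total parameter norm small limits model capacity and
    mitigates overfitting, as the text argues.
    """
    return lam * sum(float(np.sum(p ** 2)) for p in params)

w1 = np.ones((3, 3))
w2 = np.zeros((2, 2))
```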
As shown in Fig. 5, the convolutional neural network is the cascade of 3D-DenseU-Net and the enhanced residual block, responsible for the final detail repair; panel (a) shows 3D-DenseU-Net and panel (b) the enhanced residual block. "× 0.5" denotes an attenuation coefficient of 0.5 applied to the weights. To reuse the axial thick-slice image during detail reconstruction, the axial thick-slice image I_A is inserted, at its corresponding slice positions, into the thin-slice image I_Y output by the generator, and the result is relabeled I_YA. The inputs of 3D-DenseU-Net are I_Y, I_S and I_YA, and its output is denoted I_R. We use a three-dimensional densely connected U-shaped structure (3D-DenseU-Net), which concatenates the feature maps of multiple layers (the outputs of each model layer) as the input of a given convolutional layer, making full use of both the low-level and high-level features extracted by the convolutional network. Moreover, to prevent structural deformation and distortion caused by the skip connections between the top and bottom of 3D-DenseU-Net, a degree of numerical attenuation (the enhanced residual block) is applied to the skip-connected feature maps. Balancing network receptive field against convergence speed, we randomly extract a number of 48 × 48 × 48 data blocks from the data for training.
Table 1
As shown in Table 1, the reconstruction results of the proposed method are compared with bicubic interpolation, sparse-representation super-resolution and the three-dimensional super-resolution U-shaped network; Std. denotes the standard deviation of the quantized data and Med. its median. Table 1 shows that the proposed thin-slice MR reconstruction method achieves substantial gains in peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and normalized mutual information (NMI). The quantitative and visual evaluations show that the proposed method surpasses the other existing methods with more realistic reconstructions.
As shown in Fig. 6, the reconstruction results of the proposed method are compared with bicubic interpolation, sparse-representation super-resolution and the three-dimensional super-resolution U-shaped network. One representative slice from each of the three image planes is taken and visualized in Fig. 6. Compared with these three methods, the proposed reconstruction framework outputs more realistic thin-slice MR images, closer to the real images shown in the right column of Fig. 6. The results of the traditional bilinear interpolation method are blurrier, with visibly severe detail distortion such as artifacts; the reason is that the interpolation's receptive field is too limited, its parameters cannot be learned, and it makes no use of the sagittal images. The sparse-representation method produces smoother results with better tissue consistency than bilinear interpolation, but its reconstructions in the sagittal and coronal planes are unsatisfactory, owing to its limited two-dimensional receptive field and limited modeling capacity. 3D-SRU-Net reconstructs thin-slice MR images with fewer artifacts, but its results are still worse than those of the proposed framework. Two main factors explain this. First, 3D-SRU-Net is a single-stage network architecture, so its model capacity is insufficient to balance feature fusion, upsampling, detail preservation and the other tasks, and its sagittal reconstruction is therefore poor. Second, its architecture includes an eightfold upsampling channel but uses transposed convolutions with 3 × 3 × 3 kernels to upsample the feature maps. In the enlarged views, the first-stage network of the proposed framework already generates more realistic images, and the second stage recovers more texture detail in the sagittal and coronal planes. The above results further show that the proposed two-stage reconstruction method achieves better thin-slice reconstruction.
Thin-layer brain magnetic resonance images are typically 1 mm thick and offer higher spatial resolution, so well-reconstructed thin-layer images aid brain structure analysis, brain volume measurement, and surgical navigation in clinical practice. Compared with adult brain image data, pediatric brain images carry even greater value for clinical research. The present invention can effectively enlarge the available volume of pediatric thin-layer brain magnetic resonance data, laying a foundation for subsequent studies.
Although the contents of the present invention have been described in detail through the preferred embodiments above, it should be understood that the above description is not to be considered a limitation of the invention. Various modifications and substitutions will be apparent to those skilled in the art after reading the above contents. Therefore, the scope of protection of the present invention should be defined by the appended claims.

Claims (10)

1. A magnetic resonance thin-layer image reconstruction method, characterized by comprising the following steps:
fusing thick-layer magnetic resonance images of the transverse plane and the sagittal plane using a generative adversarial network, to preliminarily generate magnetic resonance thin-layer image data;
performing detail correction on the preliminarily generated magnetic resonance thin-layer image data using a convolutional neural network, to reconstruct the magnetic resonance thin-layer image data;
wherein the generative adversarial network comprises a generator and a conditional discriminator;
and the convolutional neural network comprises a three-dimensional densely connected U-shaped structure and an enhanced residual block.
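The two-stage flow of claim 1 can be sketched schematically as below. Both stages are stand-ins (slice repetition in place of the GAN generator, identity in place of the CNN refiner), intended only to show the data shapes moving through the pipeline, not the patented networks:

```python
import numpy as np

def fuse_stage(axial_thick, sagittal_thick, r):
    # Stage 1 stand-in: the patent uses a trained GAN generator that fuses
    # the two views; here we simply repeat each axial slice r times along z.
    return np.repeat(axial_thick, r, axis=2)

def refine_stage(coarse):
    # Stage 2 stand-in for the 3-D densely connected U-shaped network with
    # an enhanced residual block (identity here).
    return coarse

def reconstruct_thin(axial_thick, sagittal_thick, r):
    coarse = fuse_stage(axial_thick, sagittal_thick, r)
    # The preliminary volume must match the sagittal stack's z-resolution:
    assert coarse.shape == sagittal_thick.shape  # L x W x rH
    return refine_stage(coarse)
```

With a thick axial stack of shape L × W × H and upsampling rate r, the output has shape L × W × rH, matching the thin-layer target defined in claim 4.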
2. The magnetic resonance thin-layer image reconstruction method according to claim 1, characterized in that preliminarily generating the magnetic resonance thin-layer image data using the generative adversarial network comprises the following steps:
training the generator with the conditional discriminator;
inputting the thick-layer transverse-plane and thick-layer sagittal-plane magnetic resonance images into the trained generator, which generates the magnetic resonance thin-layer image data.
3. The magnetic resonance thin-layer image reconstruction method according to claim 2, characterized in that reconstructing the magnetic resonance thin-layer image data using the convolutional neural network comprises the following steps:
inputting the magnetic resonance thin-layer image data output by the generator into the three-dimensional densely connected U-shaped structure, which concatenates the feature maps of every layer of the thin-layer image data and outputs them to the enhanced residual block;
the enhanced residual block performs numerical restoration on the concatenated feature maps output by the three-dimensional densely connected U-shaped structure, yielding the reconstructed magnetic resonance thin-layer image data.
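The dense feature splicing of claim 3 can be illustrated as follows: each layer receives the channel-wise concatenation of all earlier feature maps, and the final concatenated stack is what the enhanced residual block would then reduce. The layer functions and the averaging "restoration" below are arbitrary stand-ins for the learned 3-D convolutions, used only to show the connectivity pattern:

```python
import numpy as np

def dense_forward(x, layer_fns):
    # Densely connected pass: layer i sees the concatenation (axis 0 =
    # channels) of the input and all previous layers' outputs.
    feats = [x]
    for fn in layer_fns:
        feats.append(fn(np.concatenate(feats, axis=0)))
    # The full concatenation is handed on to the enhanced residual block.
    return np.concatenate(feats, axis=0)

def enhanced_residual_block(stack, scale=0.1):
    # Stand-in "numerical restoration": collapse channels to one volume and
    # add a scaled residual (the real block uses learned convolutions).
    base = stack.mean(axis=0, keepdims=True)
    return base + scale * (stack[:1] - base)
```

Two dense layers over a single-channel input yield a three-channel stack, which the block reduces back to one output volume.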
4. The magnetic resonance thin-layer image reconstruction method according to claim 3, characterized in that the generator comprises cascaded feature-extraction, feature-fusion and reconstruction branches;
the thick-layer transverse-plane magnetic resonance image is denoted I_A, of size L × W × H, and the thick-layer sagittal-plane magnetic resonance image is denoted I_S, of size L × W × rH, where r is the upsampling rate along the z-axis; the generator takes I_A and I_S as input and reconstructs the thin-layer image I_Y at the upsampling rate r.
5. The magnetic resonance thin-layer image reconstruction method according to claim 4, characterized in that the input of the feature-extraction branch is the thick-layer transverse-plane image I_A and the thick-layer sagittal-plane image I_S; three-dimensional convolution layers extract features from I_A and I_S, and max-pooling layers generate feature maps of different sizes; the outputs of the feature-extraction branch are the transverse-plane feature maps and the sagittal-plane feature maps.
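The multi-scale extraction of claim 5 (convolutions followed by max pooling) can be sketched as a pooling pyramid. The learned 3-D convolutions are omitted, and the pooling kernel size of 2 is an assumption for illustration:

```python
import numpy as np

def max_pool3d(x, k=2):
    # Non-overlapping k x k x k max pooling; assumes each dim divisible by k.
    L, W, H = x.shape
    return x.reshape(L // k, k, W // k, k, H // k, k).max(axis=(1, 3, 5))

def feature_pyramid(vol, levels=3):
    # Each level would be preceded by learned 3-D convolutions in the
    # actual extraction branch; here only the resolution halving is shown.
    feats = [vol]
    for _ in range(levels - 1):
        feats.append(max_pool3d(feats[-1]))
    return feats
```

Each level halves every spatial dimension while max pooling preserves the strongest responses, which is what lets the later fusion branch combine coarse context with fine detail.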
6. The magnetic resonance thin-layer image reconstruction method according to claim 5, characterized in that the feature-fusion branch upsamples the feature maps output by the feature-extraction branch using sub-pixel convolution and applies a random dropout operation; the feature-fusion branch outputs the fused feature maps.
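Sub-pixel convolution upsamples by rearranging channels into spatial positions. A minimal sketch of that rearrangement along the slice axis follows; the preceding convolution and the random dropout are omitted, and the channel-first layout is an assumption:

```python
import numpy as np

def subpixel_shuffle_z(feats, r):
    # feats: (C*r, L, W, H) -> (C, L, W, H*r). Channel j of each group of r
    # becomes the j-th interleaved slice, trading channels for z-resolution.
    Cr, L, W, H = feats.shape
    C = Cr // r
    return (feats.reshape(C, r, L, W, H)
                 .transpose(0, 2, 3, 4, 1)
                 .reshape(C, L, W, H * r))
```

Because the upsampling is a pure rearrangement of learned channels, it avoids the checkerboard artifacts that transposed convolutions (as used in 3D-SRU-Net's upsampling path) can introduce.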
7. The magnetic resonance thin-layer image reconstruction method according to claim 6, characterized in that the reconstruction branch applies upsampling, channel concatenation and convolution operations to the feature maps output by the feature-fusion branch; the reconstruction branch outputs the thin-layer image I_Y.
8. The magnetic resonance thin-layer image reconstruction method according to claim 7, characterized in that training the generator with the conditional discriminator comprises the following steps:
the generator outputs the thin-layer image I_Y;
the conditional discriminator takes the real thin-layer image I_GT as the real mapping and the thin-layer image I_Y output by the generator as the fake mapping, applies convolution and random dropout operations with the Leaky ReLU activation function, and finally outputs a score tensor I_R for the computation of the loss function;
a composite loss function L_G is constructed from the real thin-layer image I_GT, the fake thin-layer image I_Y output by the generator, and the score tensor I_R output by the conditional discriminator; the generator continually adjusts its model parameters so that the value of the composite loss function L_G decreases.
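The discriminator's Leaky ReLU activation and the reduction of the score tensor I_R to a scalar can be sketched as follows. The 0.2 slope and the mean reduction follow common GAN practice and claim 10's reading of the expectation as a mean over the output tensor; both are illustrative assumptions:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # Leaky ReLU: identity for non-negative inputs, small slope for negatives.
    return np.where(x >= 0, x, slope * x)

def score_from_tensor(i_r):
    # Collapse the discriminator's output score tensor I_R to one scalar by
    # averaging over all of its elements (a PatchGAN-style score map).
    return float(np.mean(i_r))
```

A score map rather than a single logit lets each spatial patch of the thin-layer volume be judged separately before the mean is taken.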
9. The magnetic resonance thin-layer image reconstruction method according to claim 8, characterized in that the composite loss function L_G is:
L_G = L_Char + λ_1 L_grad + λ_2 L_adv + λ_3 L_reg
wherein L_Char is the adaptive Charbonnier loss function, L_grad is the 3D gradient correction loss function, L_adv is the adversarial loss function of the generator, L_reg is the l_2 weight-regularization loss function, and λ_1, λ_2 and λ_3 represent the weights of the respective terms in the loss function;
wherein ε denotes a small constant, and the weighting coefficient generated from the pixel error lies in the range [0.5, 1];
wherein E denotes the expectation, I denotes the data, the subscript GT indicates the real image, the subscript Y indicates the reconstructed data, and ∇ is the vector differential operator;
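The two content terms of claim 9 can be written out as in the following schematic numpy version. The per-pixel weight map standing in for the adaptive coefficient in [0.5, 1], and the absolute-difference form of the gradient term, are assumptions where the patent's exact formulas (rendered as images in the original record) are not recoverable:

```python
import numpy as np

def charbonnier(x, y, eps=1e-3, weight=None):
    # Charbonnier (smooth-L1) penalty; `weight` stands in for the adaptive
    # per-pixel coefficient in [0.5, 1] described by the claim.
    err = np.sqrt((x - y) ** 2 + eps ** 2)
    if weight is not None:
        err = weight * err
    return float(np.mean(err))

def gradient_loss_3d(x, y):
    # Match finite-difference gradients along each of the three axes so the
    # reconstruction preserves the ground truth's edges.
    loss = 0.0
    for axis in range(3):
        gx, gy = np.diff(x, axis=axis), np.diff(y, axis=axis)
        loss += float(np.mean(np.abs(gx - gy)))
    return loss
```

For identical volumes the Charbonnier term reduces to ε and the gradient term to zero, which makes the floor of each term easy to verify.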
10. The magnetic resonance thin-layer image reconstruction method according to claim 9, characterized in that the loss function of the conditional discriminator is:
wherein D denotes the conditional discriminator and E denotes the mathematical expectation, here computed as the mean over all elements of the output tensor.
CN201910336275.5A 2019-04-24 2019-04-24 A kind of magnetic resonance thin layer image rebuilding method Pending CN110047138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910336275.5A CN110047138A (en) 2019-04-24 2019-04-24 A kind of magnetic resonance thin layer image rebuilding method


Publications (1)

Publication Number Publication Date
CN110047138A true CN110047138A (en) 2019-07-23

Family

ID=67279189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910336275.5A Pending CN110047138A (en) 2019-04-24 2019-04-24 A kind of magnetic resonance thin layer image rebuilding method

Country Status (1)

Country Link
CN (1) CN110047138A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629816A (en) * 2018-05-09 2018-10-09 复旦大学 The method for carrying out thin layer MR image reconstruction based on deep learning
US20180293429A1 (en) * 2017-03-30 2018-10-11 George Mason University Age invariant face recognition using convolutional neural networks and set distances
CN108765294A (en) * 2018-06-11 2018-11-06 深圳市唯特视科技有限公司 A kind of image combining method generating confrontation network based on full convolutional network and condition
CN109461120A (en) * 2018-09-19 2019-03-12 华中科技大学 A kind of microwave remote sensing bright temperature image reconstructing method based on SRGAN


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI XUE'AO: "Research and Implementation of a Multi-Exposure Image Fusion Method Based on Convolutional Neural Networks", China Master's Theses Full-Text Database, Information Science and Technology *
WANG YIDA et al.: "Reconstruction of Undersampled Magnetic Resonance Images with Convolutional Neural Networks", Chinese Journal of Magnetic Resonance Imaging *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473285A (en) * 2019-07-30 2019-11-19 上海联影智能医疗科技有限公司 Image reconstructing method, device, computer equipment and storage medium
CN110473285B (en) * 2019-07-30 2024-03-01 上海联影智能医疗科技有限公司 Image reconstruction method, device, computer equipment and storage medium
CN110443867B (en) * 2019-08-01 2022-06-10 太原科技大学 CT image super-resolution reconstruction method based on generation countermeasure network
CN110443867A (en) * 2019-08-01 2019-11-12 太原科技大学 Based on the CT image super-resolution reconstructing method for generating confrontation network
CN111000578A (en) * 2019-12-25 2020-04-14 东软医疗系统股份有限公司 Image reconstruction method and device, CT (computed tomography) equipment and CT system
CN111000578B (en) * 2019-12-25 2023-05-02 东软医疗系统股份有限公司 Image reconstruction method, device, CT equipment and CT system
CN110916708A (en) * 2019-12-26 2020-03-27 南京安科医疗科技有限公司 CT scanning projection data artifact correction method and CT image reconstruction method
CN111210444A (en) * 2020-01-03 2020-05-29 中国科学技术大学 Method, apparatus and medium for segmenting multi-modal magnetic resonance image
CN111339890A (en) * 2020-02-20 2020-06-26 中国测绘科学研究院 Method for extracting newly-added construction land information based on high-resolution remote sensing image
CN111681296A (en) * 2020-05-09 2020-09-18 上海联影智能医疗科技有限公司 Image reconstruction method and device, computer equipment and storage medium
CN111681296B (en) * 2020-05-09 2024-03-22 上海联影智能医疗科技有限公司 Image reconstruction method, image reconstruction device, computer equipment and storage medium
CN111696168B (en) * 2020-06-13 2022-08-23 中北大学 High-speed MRI reconstruction method based on residual self-attention image enhancement
CN111696168A (en) * 2020-06-13 2020-09-22 中北大学 High-speed MRI reconstruction method based on residual self-attention image enhancement
CN112598578A (en) * 2020-12-28 2021-04-02 北京航空航天大学 Super-resolution reconstruction system and method for nuclear magnetic resonance image
CN112598578B (en) * 2020-12-28 2022-12-30 北京航空航天大学 Super-resolution reconstruction system and method for nuclear magnetic resonance image
CN113034642A (en) * 2021-03-30 2021-06-25 推想医疗科技股份有限公司 Image reconstruction method and device and training method and device of image reconstruction model
CN113538616A (en) * 2021-07-09 2021-10-22 浙江理工大学 Magnetic resonance image reconstruction method combining PUGAN and improved U-net
CN113538616B (en) * 2021-07-09 2023-08-18 浙江理工大学 Magnetic resonance image reconstruction method combining PUGAN with improved U-net
CN114283235A (en) * 2021-12-07 2022-04-05 中国科学院国家空间科学中心 Three-dimensional magnetic layer reconstruction method and system based on limited angle projection data
CN116579414A (en) * 2023-03-24 2023-08-11 北京医准智能科技有限公司 Model training method, MRI thin layer data reconstruction method, device and equipment
CN116579414B (en) * 2023-03-24 2024-04-02 浙江医准智能科技有限公司 Model training method, MRI thin layer data reconstruction method, device and equipment

Similar Documents

Publication Publication Date Title
CN110047138A (en) A kind of magnetic resonance thin layer image rebuilding method
CN109745062A (en) Generation method, device, equipment and the storage medium of CT image
CN107909621A (en) It is a kind of based on it is twin into confrontation network medical image synthetic method
CN109919838A (en) The ultrasound image super resolution ratio reconstruction method of contour sharpness is promoted based on attention mechanism
CN109447976B (en) Medical image segmentation method and system based on artificial intelligence
CN110443867A (en) Based on the CT image super-resolution reconstructing method for generating confrontation network
CN106373168A (en) Medical image based segmentation and 3D reconstruction method and 3D printing system
CN110276736A (en) A kind of magnetic resonance image fusion method based on weight prediction network
CN111798369A (en) Face aging image synthesis method for generating confrontation network based on circulation condition
CN113095987B (en) Robust watermarking method of diffusion weighted image based on multi-scale feature learning
Zhou et al. Volume upscaling with convolutional neural networks
CN106157244A (en) A kind of QR Code Image Super-resolution Reconstruction method based on rarefaction representation
CN111080657A (en) CT image organ segmentation method based on convolutional neural network multi-dimensional fusion
Dai et al. Data driven intelligent diagnostics for Parkinson’s disease
Boutillon et al. Combining shape priors with conditional adversarial networks for improved scapula segmentation in MR images
CN116739899A (en) Image super-resolution reconstruction method based on SAUGAN network
CN109559278B (en) Super resolution image reconstruction method and system based on multiple features study
CN109949321A (en) Cerebral magnetic resonance image organizational dividing method based on three-dimensional Unet network
CN107680070A (en) A kind of layering weight image interfusion method based on original image content
CN109886869A (en) A kind of unreal structure method of face of the non-linear expansion based on contextual information
CN107945114A (en) Magnetic resonance image super-resolution method based on cluster dictionary and iterative backprojection
CN111814891A (en) Medical image synthesis method, device and storage medium
CN116152235A (en) Cross-modal synthesis method for medical image from CT (computed tomography) to PET (positron emission tomography) of lung cancer
Mattingly et al. 3D modeling of branching structures for anatomical instruction
Hacker et al. Representation and visualization of variability in a 3D anatomical atlas using the kidney as an example

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190723)